Needle Tip Force Estimation using an OCT Fiber and a Fused convGRU-CNN Architecture
Needle insertion is common during minimally invasive interventions such as biopsy or brachytherapy. During soft tissue needle insertion, forces acting at the needle tip cause tissue deformation and needle deflection. Accurate needle tip force measurement provides information on needle-tissue interaction and helps to detect and compensate for potential misplacement. For this purpose, we introduce an image-based needle tip force estimation method using an optical fiber imaging the deformation of an epoxy layer below the needle tip over time. For calibration and force estimation, we introduce a novel deep learning-based fused convolutional GRU-CNN model which effectively exploits the spatio-temporal data structure. The needle is easy to manufacture, and our model achieves a mean absolute error of with a cross-correlation coefficient of , clearly outperforming other methods. We test needles with different materials to demonstrate that the approach can be adapted for different sensitivities and force ranges. Furthermore, we validate our approach in an ex-vivo prostate needle insertion scenario.
Keywords: Force Estimation, Optical Coherence Tomography, Convolutional GRU, Convolutional Neural Network, Needle Placement
1 Introduction
Needle insertion is widely used in minimally invasive procedures, e.g., for biopsies or brachytherapy. Automated needle insertion is challenging and includes aspects like image guidance, needle steering, and force measurement. Precise estimation of the forces acting on the needle tip is particularly interesting, e.g., for monitoring the needle-tissue interaction and detecting tissue ruptures, or to generate feedback during an intervention. One approach is to measure the forces at the needle shaft. While this allows for simple integration of conventional force-torque sensors, the measurements do not reflect the actual forces acting on the needle tip. As large frictional forces act on the needle shaft during insertion, force sensors need to be decoupled or located close to the tip in order to obtain accurate needle tip force estimates. The small diameter of the needles complicates the integration of sensors, which is particularly challenging for conventional mechatronic force sensors [6, 4]. In contrast, fiber optical force sensors are small, largely biocompatible, and not affected by electromagnetic interference, i.e., they are MRI-compatible. Therefore, sensors based on Fabry-Pérot interferometry or Fiber Bragg Gratings have been proposed. Although these approaches have shown promising calibration results, they are rarely validated in tissue experiments, and manufacturing and signal processing can be difficult, e.g., when the fibers are subjected to varying temperatures or lateral forces. Yet another approach is based on optical coherence tomography (OCT), where A-scan images of a cylindric instrument tip have been used to estimate the deformation and hence the strain acting on a translucent silicone layer.
We consider force sensing using OCT and a sharp needle with a cone tip mounted on the needle shaft using epoxy resin. The axial force acting on the tip is inferred from the epoxy layer’s deformation observed in a series of A-scans. We can tailor our method to specific application scenarios by using softer epoxy resin for higher sensitivity, as required for microsurgery, or stiffer epoxy resin for larger forces, e.g., as occurring during biopsies. Generally, our approach is flexible and easy to manufacture, as the epoxy material is interchangeable, the cone shape can be varied, and no accurate fiber placement is required. However, this imposes some challenges for calibration and force estimation, namely the robust identification of the deformation of the epoxy layer and a non-linear mapping of the measured deformations to forces. To this end, we propose a force estimation method based on a novel convolutional gated recurrent unit-convolutional neural network (convGRU-CNN) architecture. Considering the high temporal sampling rate of OCT, we use a sequence of subsequent A-scans as an input to our model. In this way, we can take advantage of a rich spatio-temporal signal space for precise force estimates.
First, we present a detailed description of the force sensing needle and our convGRU-CNN architecture. Second, we describe our setups for calibration and evaluation of the needles. Third, we study the repeatability of force estimation for three needles with different epoxy resin types. Finally, we present results for needle insertion into actual ex-vivo prostate tissue, illustrating the feasibility of the approach and the importance of measurements at the needle tip.
2 Materials and Methods
2.1 Needle Design and Experimental Setup
A schematic drawing of our needle and the needle driver is shown in Figure 1. The needle has a diameter of . An epoxy resin layer of approximately connects the cone shaped tip to the needle shaft. An optical fiber is embedded into the shaft and glued to the epoxy. The fiber is connected to an OCT device (Thorlabs Telesto I). A linear motion stage is used to drive the needle along its axial direction. For calibration and evaluation, a force sensor (ATI Nano43) is mounted between the needle shaft and the motion stage. To study how the sensitivity of the sensor can be varied by using different epoxy resins, the resin was mixed with Norland Optical Adhesive (NOA) 1625 in different concentrations. The layer and the needle tip on top are attached using NOA 63. For calibration, the needle was driven against a metal plate. A large set of data was acquired by deforming the tip with random magnitudes and velocities. For evaluation, the needle was inserted into a freshly resected human prostate at constant velocity. As the force sensor at the base measures the total force including friction, we consider a second setup using a shielding tube decoupled from the needle and the force sensor. In this way, we can measure the actual axial tip forces. Note that this would be impractical for actual application, as the tube is not flexible and would increase trauma. A photograph of the experimental setup is shown in Figure 2. We provide a video and ultrasound recording of two insertion procedures with pork liver and a human ex-vivo prostate in our supplementary material.
2.2 Model Architecture
For our model input, we consider a series of A-scans prior to the current observation, as the current force estimate likely depends on prior deformation. Furthermore, we do not extract the epoxy surface as an explicit deformation feature but instead let our model learn relevant features. In this way, we avoid inconsistencies when extracting features for different materials and tips, and we exploit information captured in the deformed epoxy layer itself. Prior approaches for spatio-temporal data used CNNs to extract features from image data which are fed into a recurrent model. Alternatively, the temporal dimension can also be handled by a convolution operation. Also, convolutional long short-term memory (convLSTM) cells have been introduced which allow for temporal processing of high-dimensional structured data. Based on these approaches, we propose a novel convGRU-CNN architecture, as shown in Figure 3. First, convGRU units perform the temporal processing, which results in a set of 1D feature maps. Then, a ResNet-inspired 1D CNN performs the spatial processing. Compared to LSTMs, GRUs merge the input and forget gate as well as the cell and hidden state for higher efficiency. We combine the ideas of convLSTMs and GRUs by replacing the GRU's matrix multiplications with convolutions. The proposed architecture is compared to other approaches that have been introduced for spatio-temporal data processing. We consider a 2D CNN that processes both the temporal and the spatial dimension with convolutions; its structure is the same as the CNN part of the convGRU-CNN model. Moreover, we consider a CNN-GRU model where a 1D CNN first processes the A-scans at each time step separately; the resulting CNN feature vector is then fed into two standard GRUs. Next, we consider a pure 1D CNN that does not consider prior A-scans for the current force prediction. Last, we consider a pure 3-layer GRU model. All networks are trained end-to-end in a single optimization run.
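To illustrate the core idea of replacing the GRU's matrix multiplications with convolutions, the following is a minimal, single-channel sketch of one convGRU step on a 1D A-scan. It is not the paper's implementation: the actual model uses multiple learned filter channels, and the kernel dictionary `W` and function names here are illustrative.

```python
import math

def conv1d(x, kernel):
    """'Same' 1D convolution with zero padding; odd kernel length assumed."""
    k = len(kernel) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(x):
                s += w * x[idx]
        out.append(s)
    return out

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def conv_gru_step(x, h, W):
    """One convGRU step on a 1D scan x with hidden state h.

    As in a standard GRU, z is the update gate, r the reset gate, and
    n the candidate state -- but every matrix multiplication is replaced
    by a convolution, so the 1D spatial structure of h is preserved.
    """
    z = [sigmoid(a + b) for a, b in zip(conv1d(x, W["zx"]), conv1d(h, W["zh"]))]
    r = [sigmoid(a + b) for a, b in zip(conv1d(x, W["rx"]), conv1d(h, W["rh"]))]
    rh = [ri * hi for ri, hi in zip(r, h)]                 # reset-gated state
    n = [math.tanh(a + b) for a, b in zip(conv1d(x, W["nx"]), conv1d(rh, W["nh"]))]
    # blend old state and candidate per spatial position
    return [(1 - zi) * hi + zi * ni for zi, hi, ni in zip(z, h, n)]
```

Running this step over a window of subsequent A-scans yields a hidden state that is itself a 1D feature map, which is what allows the subsequent 1D CNN to operate on spatially structured features.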
We use the Adam algorithm for mini-batch gradient descent with a batch size of . We implement our models using the TensorFlow environment.
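For reference, one Adam parameter update can be sketched as follows; this is the standard algorithm with its usual default constants, not code from the paper, and the `state` layout is illustrative.

```python
import math

def adam_step(params, grads, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update over a flat list of scalar parameters.

    state keeps the step count t and per-parameter first (m) and
    second (v) moment estimates.
    """
    state["t"] += 1
    t = state["t"]
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        state["m"][i] = b1 * state["m"][i] + (1 - b1) * g          # momentum
        state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g      # RMS term
        m_hat = state["m"][i] / (1 - b1 ** t)                      # bias correction
        v_hat = state["v"][i] / (1 - b2 ** t)
        new_params.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new_params
```

Note that after bias correction the very first step has magnitude close to the learning rate, regardless of the gradient scale.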
2.3 Data Acquisition and Datasets
OCT is an interferometry-based imaging modality using near-infrared light to create 1D depth scans (A-scans) of up to reflecting the inner structure of materials. We acquired A-scans at a rate of and force measurements at . We match the two data streams with nearest-neighbor interpolation, using the streams' timestamps. Our dataset consists of sequences of subsequent A-scans, each labeled with a force measurement. Given that we do not need the full imaging depth of the OCT, image data beyond the maximum depth of the cone tip surface is cropped.
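The nearest-neighbor matching of the two data streams can be sketched as below; the function name and list-based interface are illustrative, and sorted force timestamps are assumed.

```python
from bisect import bisect_left

def match_nearest(force_ts, force_vals, scan_ts):
    """Label each A-scan timestamp with the force sample whose
    timestamp is closest (nearest-neighbor interpolation).

    force_ts must be sorted ascending.
    """
    labels = []
    for t in scan_ts:
        i = bisect_left(force_ts, t)
        # candidate indices: the sample just before and just after t
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(force_ts)),
            key=lambda j: abs(force_ts[j] - t),
        )
        labels.append(force_vals[best])
    return labels
```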
We consider calibration datasets for three needles with different epoxy resin types for evaluation with our convGRU-CNN model. Each dataset contains approximately sequences of A-scans, each labeled with an axial force. By default, we use a window of with a crop size of pixels. We use of the data for the training and validation sets, which we use to optimize hyperparameters, e.g., the window size, crop size, and network depth. The remaining of the data is used for testing. Sequences from the three sets are non-overlapping. Furthermore, one of the needles was evaluated in an ex-vivo experiment in a human prostate. We evaluate our proposed architecture by comparing it to the models described in Section 2.2, reporting the mean absolute error (MAE), relative MAE (rMAE), and correlation coefficient (CC) between predictions and targets. All errors are given for the test set.
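The evaluation metrics can be computed as sketched below. MAE and the Pearson correlation coefficient are standard; the normalization of rMAE by the target force span is an assumption here, as the paper does not restate its definition.

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error between targets and predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmae(y_true, y_pred):
    """MAE relative to the span of the target forces (assumed definition)."""
    span = max(y_true) - min(y_true)
    return mae(y_true, y_pred) / span

def cc(y_true, y_pred):
    """Pearson correlation coefficient between targets and predictions."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)
```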
3 Results
First, we report the results for needles with different stiffness of the epoxy layer. The results with the corresponding force magnitudes are shown in Table 1. The absolute error values vary, as the corresponding force ranges differ; however, the relative measures rMAE and CC show that performance is similar overall.
Next, we present results for alternative model architectures, shown in Table 2. The results show that models which take prior A-scans into account perform better. Moreover, our proposed model outperforms previously introduced approaches for our application.
Last, we show results for a needle insertion experiment in two different scenarios in Figure 4. One experiment was conducted with the shielding tube and one without. With the tube, the predicted values closely match the measured values. Without it, there are large deviations, as the sensor also measures friction forces.
4 Discussion
We introduce a novel technique for needle tip force estimation using an OCT fiber that images the deformation of an epoxy layer. As OCT has been used for needle-based tissue analysis, it may become more widely available in clinical settings. Our method comes with the typical advantages of optical methods, such as MRI-compatibility and biocompatibility, while also being flexible and easy to manufacture. This is highlighted by the results for three needles with epoxy layers of different stiffness. All needles show similar relative calibration errors with a CC in the range of to , indicating that our method generalizes well across epoxy resins. This allows for easy adaptation of our method to scenarios with different requirements for force sensitivity and range.
Moreover, we propose a novel method for processing the spatio-temporal OCT data. Previously, time series of A-scans have been processed using recurrent architectures. This approach shares parameters over time; however, it lacks effective spatial exploitation through parameter sharing over space with convolutions. As our convGRU-CNN model efficiently processes both temporal and spatial information, it outperforms the purely temporal GRU and the purely spatial 1D CNN with an MAE of compared to an MAE of and , respectively. Furthermore, we adopted a CNN-GRU and a 2D CNN model from non-medical domains for comparison of spatio-temporal processing architectures [3, 15]. Compared to the other models, the convGRU units in our model enable temporal processing first while keeping the data structure intact for subsequent CNN processing. This leads to superior performance for the problem at hand.
Lastly, we tested our needle in an ex-vivo experiment with a human prostate. Several other needle tip force estimation methods have been proposed; however, they often lack validation in tissue experiments. One reason for this is the difficulty of measuring pure tip forces inside tissue, as large friction forces will also be captured by external force sensors. Therefore, we use a shielding tube that decouples friction forces from the needle. Although the decoupling is not perfect due to deformations, we can show that our method accurately captures events such as ruptures. Without this mechanism, in-tissue evaluation is not possible, as frictional forces overlap with tip forces. The results indicate that our method is usable for actual force estimation in soft tissue.
5 Conclusion
We propose a novel method for needle tip force estimation. Our approach uses an OCT fiber imaging the deformation of an epoxy layer to infer the force acting on the needle tip. The concept is easy to realize and allows for flexibility by using different materials for different force sensitivity and maximum range requirements. To process the spatio-temporal OCT data, we propose a novel convGRU-CNN architecture. For our problem, the method outperforms prior approaches for similar problems as well as methods from other domains. Experimental results for force estimation in human prostate tissue underline the method’s potential for practical application.
Acknowledgments. This work was partially supported by DFG grants SCHL 1844/2-1 and SCHL 1844/2-2.
-  Aviles, A.I., Alsaleh, S.M., Hahn, J.K., Casals, A. (2017) Towards retrieving force feedback in robotic-assisted surgery: A supervised neuro-recurrent-vision approach. IEEE Transactions on Haptics 10(3), 431–443
-  Beekmans, S., Lembrechts, T., van den Dobbelsteen, J., van Gerwen, D. (2016) Fiber-Optic Fabry-Pérot Interferometers for Axial Force Sensing on the Tip of a Needle. Sensors 17(1), 38
-  Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., Darrell, T. (2015) Long-term recurrent convolutional networks for visual recognition and description. In: CVPR, pp. 2625–2634
-  Hatzfeld, C., Wismath, S., Hessinger, M., Werthschützky, R., Schlaefer, A., Kupnik, M. (2017) A miniaturized sensor for needle tip force measurements. Biomedical Engineering 62(1), 109–115
-  He, K., Zhang, X., Ren, S., Sun, J. (2016) Deep residual learning for image recognition. In: CVPR, pp. 770–778
-  Kataoka, H., Washio, T., Chinzei, K., Mizuhara, K., Simone, C., Okamura, A.M. (2002) Measurement of the tip and friction force acting on a needle during penetration. In: MICCAI, pp. 216–223. Springer
-  Kennedy, K.M., Chin, L., McLaughlin, R.A., Latham, B., Saunders, C.M., Sampson, D.D., Kennedy, B.F. (2015) Quantitative micro-elastography: imaging of tissue elasticity using compression optical coherence elastography. Scientific Reports 5, 15538
-  Kumar, S., Shrikanth, V., Amrutur, B., Asokan, S., Bobji, M.S. (2016) Detecting stages of needle penetration into tissues through force estimation at needle tip using fiber Bragg grating sensors. Journal of Biomedical Optics 21(12), 127009
-  Mo, Z., Xu, W., Broderick, N.G. (2017) Capability characterization via ex-vivo experiments of a fiber optical tip force sensing needle for tissue identification. IEEE Sensors Journal
-  Okamura, A.M., Simone, C., O’leary, M.D. (2004) Force modeling for needle insertion into soft tissue. IEEE Trans. Biomed. Eng. 51(10), 1707–1716
-  Otte, C., Otte, S., Wittig, L., Hüttmann, G., Kugler, C., Drömann, D., Zell, A., Schlaefer, A. (2014) Investigating Recurrent Neural Networks for OCT A-scan Based Tissue Analysis. Methods of Information in Medicine 53(4), 245–249
-  Rodrigues, S., Horeman, T., Sam, P., Dankelman, J., van den Dobbelsteen, J., Jansen, F.W. (2014) Influence of visual force feedback on tissue handling in minimally invasive surgery. British Journal of Surgery 101(13), 1766–1773
-  Sun, L., Jia, K., Yeung, D.Y., Shi, B.E. (2015) Human action recognition using factorized spatio-temporal convolutional networks. In: CVPR, pp. 4597–4605
-  Taylor, R.H., Menciassi, A., Fichtinger, G., Fiorini, P., Dario, P. (2016) Medical robotics and computer-integrated surgery. In: Springer Handbook of Robotics, pp. 1657–1684. Springer
-  Xingjian, S., Chen, Z., Wang, H., Yeung, D.Y., Wong, W.K., Woo, W.c. (2015) Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In: Advances in Neural Information Processing Systems, pp. 802–810