SLAM Endoscopy enhanced by adversarial depth prediction

Richard J. Chen, Johns Hopkins University (rchen40@jhu.edu); Taylor L. Bobrow, Johns Hopkins University (tbobrow1@jhu.edu); Thomas Athey, Johns Hopkins University (tathey1@jhmi.edu); Faisal Mahmood, Johns Hopkins University (faisalm@jhu.edu); and Nicholas J. Durr, Johns Hopkins University (ndurr@jhu.edu)
Abstract.

Medical endoscopy remains a challenging application for simultaneous localization and mapping (SLAM) due to the sparsity of image features and size constraints that prevent direct depth-sensing. We present a SLAM approach that incorporates depth predictions made by an adversarially-trained convolutional neural network (CNN) applied to monocular endoscopy images. The depth network is trained with synthetic images of a simple colon model, and then fine-tuned with domain-randomized, photorealistic images rendered from computed tomography measurements of human colons. Each image is paired with an error-free depth map for supervised adversarial learning. Monocular RGB images are then fused with corresponding depth predictions, enabling dense reconstruction and mosaicing as an endoscope is advanced through the gastrointestinal tract. Our preliminary results demonstrate that incorporating monocular depth estimation into a SLAM architecture can enable dense reconstruction of endoscopic scenes.

computational endoscopy; machine learning; computer vision; biomedical imaging
conference: KDD Workshop on Applied Data Science for Healthcare, August 2019, Anchorage, Alaska, USA; journal year: 2019; article: 4; doi: 10.475/123_4; isbn: 123-4567-24-567/08/06; price: $15.00; copyright: ACM licensed; ccs: Applied computing, Health Care Information Systems

1. Introduction

Colonoscopy screening is the standard of care for detecting and diagnosing gastrointestinal conditions, with 15 million colonoscopies performed annually in the United States [1]. Still, 50,000 deaths from colorectal cancer (CRC) are expected in the United States in 2019, making it the second deadliest cancer in the country [2]. The disparity in caregivers’ ability to detect colorectal lesions is a cause for concern: an estimated 20% of lesions go undetected during routine screenings [3], and a strong correlation exists between the examining physician’s adenoma detection rate (ADR) and the patient’s likelihood of experiencing advanced-stage or fatal interval cancer, i.e., cancer diagnosed between routine screenings [4]. As a minimally invasive surgical (MIS) procedure, colonoscopy makes scene understanding inherently challenging: the examining physician must evaluate a patient’s gastrointestinal system using only localized, monocular images, with little ability to aggregate features as the procedure progresses. For example, cecal intubation rate (CIR), the percentage of procedures with confirmed imaging from rectum to cecum, is considered a key performance indicator for a successful exam, yet positively identifying the cecum is often difficult with only monocular images to rely on [5]. Routine colonoscopy may therefore benefit from the aggregation, or mosaicing, of image information to form a more complete picture when making colonoscopic assessments.

Figure 1. Our framework for SLAM endoscopy. We first use Siemens VRT technology to create photorealistic training data (cinematic renderings) for monocular depth estimation. We then use adversarial training to incorporate context-aware information into our network to accurately predict depth, which we finally fuse with the RGB input in ElasticFusion to create a dense surfel point cloud of the gastrointestinal tract.

Despite advances in computer vision and visual scene understanding, optical colonoscopy remains a challenging environment: conventional endoscopes feature wide-field-of-view (FOV) monocular detectors and a wide range of working distances. With the emergence of simultaneous localization and mapping (SLAM) [6] for scene reconstruction came its application to the endoscopic setting, beginning with the work of Montiel et al., who developed a monocular SLAM approach to sparsely recover the 3D geometry of the abdominal cavity [7]. More recently, the landscape of SLAM endoscopy has focused on extending feature-based SLAM systems such as ORB-SLAM with depth estimated from stereo cameras to track and map more features [8,9,10,11]. These feature-based SLAM systems tend to work well in rigid, well-lit scenes with large working distances, but can fail to track or generate sufficient features for dense reconstructions in settings such as colonoscopy. The paucity of distinguishing features, tissue homogeneity, deforming surfaces, and highly variable specular appearance of the lumen can cause inconsistencies in camera pose estimation for systems such as ORB-SLAM, as not enough ORB features can be reliably tracked. Moreover, many of the strategies proposed above do not satisfy the hardware constraints of endoscopes, which retain a monocular camera source and a wide field of view. Because of these issues, monocular depth estimation is a challenging yet critical problem that must be solved to enable RGB-D SLAM systems to reconstruct scenes in the GI tract.

In this work, we present a strategy for monocular depth estimation in endoscopy that is robust to specular reflection, and a framework for performing SLAM endoscopy by fusing monocular RGB images with corresponding depth predictions (Figure 1). We offer one of the first dense reconstructions of the gastrointestinal tract using only monocular images, with phantom and ex-vivo tissue models paired with ground truth for qualitative assessment; quantitative assessment is ongoing work.

2. Method

2.1. Monocular Depth Estimation using Conditional GANs

Monocular depth estimation is a challenging problem in endoscopy, with current approaches suffering from poor generalization due to a lack of diverse training data, overfitting to patient-specific textures and colors, and a failure to incorporate non-local information for learning depth cues. To overcome these issues, we develop an adversarial approach for context-aware monocular depth estimation that is able to generalize to unknown modes of patient data, shown in Figure 2 [12]. We denote $X$ and $Y$ as the RGB and depth image domains, respectively, $G: X \to Y$ as the mapping function from RGB to depth, and $D$ as the discriminator network for $G$.

2.1.1. Conditional GAN Objective:

The conditional Generative Adversarial Network (GAN) framework consists of two networks that compete against each other in a min-max game to respectively minimize and maximize the objective $\mathcal{L}_{cGAN}(G, D)$. The generator $G$ is our depth estimation network that learns a mapping from $X$ to $Y$, and the discriminator $D$ distinguishes between real and synthesized pairs of depth and RGB. We can use this framework for pixel-wise depth estimation by mixing the adversarial loss term with a per-pixel loss term to penalize both the joint configuration of pixels and the accuracy of the estimated depth maps. In the mapping $G: X \to Y$, the $L1$ loss term is used to score the accuracy of the depth estimation by $G$, with its strength controlled by $\lambda$. We express the adversarial objective as the binary cross-entropy loss of $D$ in classifying real/synthesized pairs. The motivation for using an adversarial loss term is to incorporate non-local information for important depth cues, such as the inverse-square fall-off in light intensity with propagation distance from the light source. This non-local loss information is computed by the discriminator, which classifies overlapping pairs of image and depth patches as real or synthetic. By controlling the size of the patch, we can control how much global/non-local information to include in the loss. In total, we can express these loss terms as:

$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G(x)))]$ (1)
$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}[\lVert y - G(x) \rVert_{1}]$ (2)
$G^{*} = \arg\min_{G} \max_{D} \mathcal{L}_{cGAN}(G, D) + \lambda \mathcal{L}_{L1}(G)$ (3)
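The combined objective above can be sketched as follows. This is a minimal NumPy sketch, not our training code: the discriminator's patch predictions and the depth arrays are placeholder inputs, and the weight lam=100.0 is an assumed default rather than a value reported here.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy averaged over all patch predictions.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(d_fake_patches, fake_depth, true_depth, lam=100.0):
    """Adversarial term (fool D on synthesized pairs) plus lambda-weighted L1 term."""
    adv = bce(d_fake_patches, np.ones_like(d_fake_patches))  # G wants D to output "real"
    l1 = float(np.mean(np.abs(true_depth - fake_depth)))
    return adv + lam * l1

def discriminator_loss(d_real_patches, d_fake_patches):
    """Classify real (RGB, depth) pairs as 1 and synthesized pairs as 0, patch-wise."""
    return 0.5 * (bce(d_real_patches, np.ones_like(d_real_patches))
                  + bce(d_fake_patches, np.zeros_like(d_fake_patches)))
```

The patch-wise predictions are the mechanism described above: averaging the cross-entropy over overlapping patches injects non-local structure into the loss, while the L1 term anchors per-pixel accuracy.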
Figure 2. Adversarial framework for monocular depth estimation.

2.1.2. Model & Training Details:

Encoder-decoder networks, such as the U-Net, are commonly used in many deep network approaches for monocular depth estimation. The U-Net architecture draws skip connections between convolution layers on the encoder path and up-sampling layers on the decoder path that have the same spatial size. To stabilize GAN training with the U-Net as the generator, we applied spectral normalization to the convolution layers, which bounds the spectral norm of the convolution weights so that the network satisfies a Lipschitz constraint. We further stabilize the discriminator by using a buffered data input from the generator, which consists of previously generated and classified pairs along with ground truth data. In our observations, these techniques reduced visual artifacts made by the generator, resulting in more smoothly-varying depth estimates.
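The two stabilization techniques can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the power-iteration routine stands in for a deep learning framework's built-in spectral normalization, and the buffer capacity and 50/50 replay probability are illustrative choices, not our exact implementation.

```python
import numpy as np

def spectral_normalize(w, n_iters=20):
    """Divide a weight tensor by its largest singular value (power iteration),
    bounding the layer's spectral norm, and hence its Lipschitz constant, by ~1."""
    w2d = w.reshape(w.shape[0], -1)
    u = np.random.default_rng(0).standard_normal(w2d.shape[0])
    for _ in range(n_iters):
        v = w2d.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = w2d @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = float(u @ w2d @ v)  # estimate of the top singular value
    return w / sigma

class ReplayBuffer:
    """Feed the discriminator a mix of fresh and previously generated pairs."""
    def __init__(self, capacity=50, seed=0):
        self.capacity, self.buffer = capacity, []
        self.rng = np.random.default_rng(seed)

    def push_and_sample(self, pair):
        if len(self.buffer) < self.capacity:
            self.buffer.append(pair)
            return pair
        if self.rng.random() < 0.5:  # half the time, swap in a stored pair
            idx = int(self.rng.integers(len(self.buffer)))
            old, self.buffer[idx] = self.buffer[idx], pair
            return old
        return pair
```

Replaying older generator outputs prevents the discriminator from overfitting to the generator's most recent mode, which is one source of the visual artifacts noted above.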

2.1.3. Training Data:

The Cinematic VRT technology developed by Siemens Healthcare simulates light scattering and extinction through turbid media, creating natural and photorealistic 3D representations of medical scans that mimic the physical lighting experienced in real tissue [13]. We used this software to generate a diverse set of four renderings for 1190 endoscopic scenes, with 3570 depth-annotated scenes rendered in total. The CT colonoscopy data was acquired from 13 patients in the NIH Cancer Imaging Archive (TCIA). By including renderings of the same colon image with different colors and textures in the training set, the network can learn more domain-invariant features, allowing it to generalize to other tissue models. Visualizations and performance metrics of depths predicted with conditional GANs for endoscopic scenes are shown in Figure 3 and Table 1. A held-out set was created by holding out data from two patients, which yielded approximately an 80-20 split between training and validation samples.
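A patient-level held-out split of this kind can be sketched as follows. This is a minimal Python sketch; the scene and patient identifiers are hypothetical, and the point is only that the split is made by patient, so no patient's anatomy appears in both sets.

```python
import random

def patient_level_split(scene_ids, patient_of, n_holdout=2, seed=0):
    """Split rendered scenes into train/held-out sets by patient, so renderings
    of one patient's anatomy never leak across the split."""
    patients = sorted({patient_of[s] for s in scene_ids})
    rng = random.Random(seed)
    holdout_patients = set(rng.sample(patients, n_holdout))
    train = [s for s in scene_ids if patient_of[s] not in holdout_patients]
    held = [s for s in scene_ids if patient_of[s] in holdout_patients]
    return train, held
```

Splitting by patient rather than by scene is what makes the validation score a test of generalization to unseen anatomy, not just unseen viewpoints.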

2.2. Dense Surface Reconstruction

We fuse endoscopic monocular RGB video frames with estimated depth frames as input to ElasticFusion for surface reconstruction [14]. At a high level, ElasticFusion seeks to: i) use pairs of RGB-D frames to compute a surface element (surfel) model of the imaged scene, ii) test a registration between the currently observable scene and the neighboring unobserved scene, and iii) compare each acquired frame against a dictionary of predicted model views. If a match is found, the algorithm computes an alignment between the current local model and the entire global model. Specular reflection is detected through a light-source pose estimation process in ElasticFusion, in which specular highlights are used to estimate a set of discrete light sources, which are then used to add rendered lighting to a scene.
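As an illustration, fusing an RGB frame with its predicted pseudo-depth into the 16-bit depth format that RGB-D SLAM front ends typically consume can be sketched as follows. The scale max_depth_mm is an assumed value for illustration only: as discussed later, the network predicts normalized pseudo-depth with no absolute units.

```python
import numpy as np

def to_rgbd_frame(rgb, depth_pred, max_depth_mm=100.0):
    """Pair an RGB frame with its normalized depth prediction, converted to a
    16-bit depth map in millimeters (the conventional RGB-D frame layout).
    max_depth_mm is an assumed scale for the network's pseudo-depth in [0, 1]."""
    depth_mm = np.clip(depth_pred, 0.0, 1.0) * max_depth_mm
    depth_u16 = np.round(depth_mm).astype(np.uint16)
    return rgb.astype(np.uint8), depth_u16
```

Each (RGB, 16-bit depth) pair then plays the role that a hardware depth camera's output would play in a standard RGB-D SLAM pipeline.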

Figure 3. Cinematic rendering endoscopy images from a held-out set with corresponding ground truth and estimated depths. For visualization, distance is normalized between 0 and 1, corresponding to closer and farther distances, respectively.
Method        rel      rms
U-Net-Adv.    0.312    0.012    0.054
Table 1. Performance evaluation for cinematically rendered endoscopy images on the held-out set.
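The rel and rms columns in Table 1 are the standard monocular-depth benchmarks: mean absolute relative error and root-mean-square error. A minimal sketch of their computation (the valid-pixel mask is an assumed convention for ignoring zero-depth pixels):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Mean absolute relative error (rel) and root-mean-square error (rms)
    over pixels with valid (positive) ground-truth depth."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    valid = gt > 0
    rel = float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))
    rms = float(np.sqrt(np.mean((pred[valid] - gt[valid]) ** 2)))
    return {"rel": rel, "rms": rms}
```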

2.3. Experimental Setup

Figure 4. Qualitative Assessment on phantom tissue (left) and ex-vivo porcine colon tissue (right), with corresponding RGB frame and predicted depth measurement. 3D reconstruction made available here: https://youtu.be/7I-d5LwIAQI

To validate the application of ElasticFusion to colonoscopy, we recorded phantom and tissue models featuring realistic lumen topography for qualitative evaluation. A silicone circular colon phantom model (Colonoscopy Trainer 2003, The Chamberlain Group) was navigated with a conventional Olympus colonoscope (Olympus Medical, CFHQ-190L). For demonstration with real gastrointestinal tissue, we used freshly harvested porcine colon as a realistic representation of human tissue. A 3-inch-diameter half-pipe was cut through the surface of a dense foam block to serve as a scaffold for the colon tissue, and ridges were cut into the walls of the foam to mimic the semilunar folds of the large intestine. The tissue was blanketed over the foam scaffold and pinned into place. A conventional Pentax colonoscope (Pentax Medical, EC34-i10L) was mounted to a linear stage and passed through the tissue scaffold, capturing a video sequence similar to that captured in colonoscopy.

3. Results & Discussion

In this work, we present the first dense reconstruction of large phantom and real porcine colon tissue models of the GI tract, shown in Figure 4. In the phantom model, we were able to accurately reconstruct much of the shape and detail of the circular phantom colon, including fiducial markers and haustral folds. The elimination of specular reflection was observed in the animal model, as ElasticFusion was able to track and recolor the surfels whose color intensities changed over the video sequence. Quantitative validation is still in progress, as the accuracy of the 3D reconstruction alignment needs to be compared with other dense SLAM systems using our datasets and models.

Unlike other SLAM systems used in endoscopy, our framework uses a deep learning approach to estimate depth from a monocular video sequence; it is a direct SLAM approach that is more robust to specular reflection and to the sparsity of distinguishing features in endoscopic images, and it requires no hardware modification. We also present an initial baseline for creating dense reconstructions of the GI tract, and we plan to make our code and data available.

The implications of reconstructing dense colon maps are numerous. Procedure statistics such as cecal intubation rate (CIR), adenoma detection rate (ADR), and withdrawal time are measured as key performance indicators [5], but the mosaicing approach outlined in this paper may provide much more relevant information about examination quality. For example, a withdrawal time (the time spent retracting the scope from the proximal colon to search for polyps) longer than 6 minutes is often used as a key indicator of procedure quality [15], but what may be more informative is a report of the areas canvassed by the endoscopist, perhaps in units of fractional area captured. Experts in gastroenterology believe that for colonoscopy to be effective, approximately 90-95% of the colon’s surface should be inspected during the final withdrawal phase of the procedure [16]. On the contrary, a review of procedures conducted by 65 endoscopists revealed that, on average, only about 81% of the mucosa is examined during screenings [17]. Suspected lesion management may also benefit from an aggregated reconstruction, allowing easy relocalization of tissue sites requiring follow-up examination in future procedures. Current practice calls for tattooing the tissue site [18], but patients run the risk of an inflammatory reaction or scarring of the colon tissue [5], making later resection more challenging.

Future work should focus on continuing to improve the accuracy of the reconstruction, with additional work on the depth estimation component of the input frames. The network presented in this work was trained to estimate a pseudo-depth with no absolute units, which leads to inconsistencies in the estimated geometric pose. A similar network may be trained with cinematic renderings and ground truth depth from CT to obtain metrically accurate depth. Additionally, further research may focus on computing missed areas in the colon, such as those behind the semilunar folds of the large intestine, and bringing them to the attention of the examining endoscopist. Missed regions may be determined by framing the colon as a closed cylindrical model and solving for the non-closed areas to determine unobserved regions.
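The closed-cylinder idea for estimating unobserved area can be sketched as follows. This is a minimal NumPy sketch under strong simplifying assumptions: the colon centerline is taken to be the straight z-axis, and the angular and axial bin counts are arbitrary illustrative resolutions.

```python
import numpy as np

def coverage_fraction(points, axis_len, n_theta=64, n_z=128):
    """Approximate the fraction of a cylindrical colon surface observed during
    withdrawal. `points` is an (N, 3) array of reconstructed surfel positions,
    with the colon centerline assumed to lie along the z-axis."""
    theta = np.arctan2(points[:, 1], points[:, 0])              # angle around centerline
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    z_bin = np.clip((points[:, 2] / axis_len * n_z).astype(int), 0, n_z - 1)
    seen = np.zeros((n_theta, n_z), dtype=bool)
    seen[t_bin, z_bin] = True                                   # mark observed surface bins
    return float(seen.mean())                                   # 1.0 = fully canvassed
```

Unmarked bins on the unrolled (theta, z) grid are candidate missed regions, e.g., surface hidden behind semilunar folds, that could be flagged to the endoscopist.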

4. Conclusion

In summary, monocular images paired with depth estimation from a convolutional neural network show promise for improving the 3D reconstruction of endoscopic scenes. We present a depth estimation architecture that is robust to patient-specific colonic features, trained with cinematically rendered colon images and domain randomization. These experiments demonstrate the capability to overcome the unique challenges of wide-field endoscopic imaging, producing dense surface models of colonic features.

5. Acknowledgements

This work was supported in part with funding from the NIH NIBIB Trailblazer Award (R21 EB024700) and a sponsored research agreement with Olympus Medical.

References

  • (1) Colorectal Cancer Screening Capacity in the United States, https://www.cdc.gov/cancer/dcpc/research/articles/crc_screening_model.htm.
  • (2) Siegel, R. L., Miller, K. D., & Jemal, A. (2019). Cancer statistics, 2019. CA: A Cancer Journal for Clinicians.
  • (3) Van Rijn, J. C., Reitsma, J. B., Stoker, J., Bossuyt, P. M., Van Deventer, S. J., & Dekker, E. (2006). Polyp miss rate determined by tandem colonoscopy: a systematic review. The American Journal of Gastroenterology, 101(2), 343.
  • (4) Corley, D. A. et al. (2014). Adenoma detection rate and risk of colorectal cancer and death. New England Journal of Medicine, 370(14), 1298-1306.
  • (5) Rees, C. J., Bevan, R., Zimmermann-Fraedrich, K., Rutter, M. D., Rex, D., Dekker, E., … & Hassan, C. (2016). Expert opinions and scientific evidence for colonoscopy key performance indicators. Gut, 65(12), 2045-2060.
  • (6) Durrant-Whyte, H., & Bailey, T. (2006). Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine, 13(2), 99-110.
  • (7) Grasa, O. G., Bernal, E., Casado, S., Gil, I., & Montiel, J. M. M. (2014). Visual SLAM for handheld monocular endoscope. IEEE Transactions on Medical Imaging.
  • (8) Marmol, A., Banach, A., & Peynot, T. (2019). Dense-ArthroSLAM: dense intra-articular 3D reconstruction with robust localization prior for arthroscopy. IEEE Robotics & Automation Letters.
  • (9) Song, J., Wang, J., Zhao, L., Huang, S., & Dissanayake, G. (2018). MIS-SLAM: Real-Time Large-Scale Dense Deformable SLAM System in Minimal Invasive Surgery Based on Heterogeneous Computing. IEEE Robotics & Automation Letters.
  • (10) Dimas, G., Iakovidis, D. K., Karargyris, A., Ciuti, G., & Koulaouzidis, A. (2017). An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy. Measurement Science and Technology, 28(9), 094005.
  • (11) Mahmoud, N., Collins, T., Hostettler, A., Soler, L., Doignon, C., & Montiel, J. M. M. (2019). Live Tracking and Dense Reconstruction for Handheld Monocular Endoscopy. IEEE Transactions on Medical Imaging, 38(1), 79-89.
  • (12) R. Chen, F. Mahmood, A. Yuille, and N. J. Durr, Rethinking monocular depth estimation with adversarial training, arXiv preprint arXiv:1808.07528, 2018.
  • (13) F. Mahmood, R. Chen, S. Sudarsky, D. Yu, and N. J. Durr, “Deep learning with cinematic rendering: Fine-tuning deep neural networks using photorealistic medical images,” Physics in Medicine and Biology, 2018.
  • (14) Whelan, T., Salas-Moreno, R. F., Glocker, B., Davison, A. J., & Leutenegger, S. (2016). ElasticFusion: Real-time dense SLAM and light source estimation. The International Journal of Robotics Research, 35(14), 1697-1716.
  • (15) Barclay, R. L. et al. (2006). Colonoscopic withdrawal times and adenoma detection during screening colonoscopy. New England Journal of Medicine, 355, 2533-2541.
  • (16) DK Rex. Who is the best colonoscopist? Mosby, 2007.
  • (17) Cotton PB, Williams CB (eds) (2008) Colonoscopy and flexible sigmoidoscopy. In: Practical gastrointestinal endoscopy: the fundamentals, 5th edn. Blackwell Publishing Ltd, Oxford
  • (18) Miyuki, K., Hiromichi, I., Hironori F., Shinya, O., Hiroaki, M., Kunio, K. Pediatric Surgery International 21(11), 873-877, 2005-11-01