Less is More: Surgical Phase Recognition with Less Annotations through Self-Supervised Pre-training of CNN-LSTM Networks

Gaurav Yengera g.yengera@gmail.com Didier Mutter Jacques Marescaux Nicolas Padoy npadoy@unistra.fr ICube, University of Strasbourg, CNRS, IHU Strasbourg, France. University Hospital of Strasbourg, IRCAD, IHU Strasbourg, France.
Abstract

Real-time algorithms for automatically recognizing surgical phases are needed to develop systems that can provide assistance to surgeons, enable better management of operating room (OR) resources and consequently improve safety within the OR. State-of-the-art surgical phase recognition algorithms using laparoscopic videos are based on fully supervised training and are therefore completely dependent on manually annotated data. Creating manual annotations is very expensive, as it requires expert knowledge and is highly time-consuming, especially considering the numerous types of existing surgeries and the vast amount of laparoscopic videos available. As a result, scaling up fully supervised surgical phase recognition algorithms to different surgery types is a difficult endeavor. In this work, we present a semi-supervised approach based on self-supervised pre-training, i.e., supervised pre-training where the labels are inherently present in the data, which is less reliant on annotated data. Hence, our proposed approach is more easily scalable to different kinds of surgeries. An additional benefit of self-supervised pre-training is that all available laparoscopic videos can be utilized, ensuring no data remains unexploited. Specifically, we propose a new self-supervised pre-training approach designed to predict the remaining surgery duration (RSD) from laparoscopic videos, where the labels are automatically extracted from the time-stamps of the video. The RSD prediction task is used to pre-train a convolutional neural network (CNN) and long short-term memory (LSTM) network in an end-to-end manner. Additionally, we present EndoN2N, an end-to-end trained CNN-LSTM model for surgical phase recognition, which is optimized on complete video sequences using an approximation of backpropagation through time. We provide an apples-to-apples comparison with the two-step training approach where the CNN and LSTM are trained separately (EndoLSTM).
We evaluate surgical phase recognition performance on a dataset of 120 cholecystectomy laparoscopic videos (Cholec120) and present the first systematic study of self-supervised pre-training approaches to understand the amount of annotations required for surgical phase recognition. The results show that with our self-supervised pre-training approach, similar or even slightly better surgical phase recognition performance can be obtained with 20 percent fewer manually annotated videos, and that with 50 percent fewer annotated videos the difference in performance remains within 5 percent. Interestingly, the RSD pre-training approach leads to a performance improvement even when all the training data is manually annotated, and it outperforms the only pre-training approach for surgical phase recognition previously published in the literature. We also observe that end-to-end training of CNN-LSTM networks boosts surgical phase recognition performance.

keywords:
laparoscopic surgery, surgical phase recognition, self-supervised pre-training, deep learning, end-to-end CNN-LSTM training, cholecystectomy.
journal: arXiv

1 Introduction

Surgical phase recognition is an important step for analyzing and optimizing surgical workflow and has been an active area of research within the computer-assisted interventions (CAI) community. Real-time surgical phase recognition technology is essential for developing context-aware systems, which can provide automatic notifications regarding the progress of surgeries and can also alert the surgeon in the case of an inconsistency in the surgical workflow. Additionally, context-aware systems are important for human-machine interaction within the OR and find applications in surgical education.

The rise of laparoscopic surgery, in addition to improving the quality of surgery for the patient in terms of recovery, safety and cost, provides a rich source of information in the form of videos. Our approach relies purely on these videos for automatically extracting surgical phase information in real-time and does not utilize other sources of information such as tool usage signals, RFID data or data from other specialized instruments since these are not ubiquitous in laparoscopic procedures.

State-of-the-art surgical phase recognition algorithms using laparoscopic videos have achieved good levels of performance with accuracies greater than 80 percent (Twinanda, 2017). However, these algorithms are based on fully supervised learning, which limits their potential impact on the development of context-aware systems, since there is a dearth of manually annotated data. Creating manual annotations is an expensive process in terms of time and personnel. Even though there are a few large datasets of manually annotated data, they are available for only a few types of surgeries and cover just a small fraction of the total laparoscopic videos available. For actual clinical deployment there is a clear need for algorithms which are less reliant on manually annotated data in order to scale up surgical phase recognition to different types of surgeries and to be able to use all available data to obtain optimal performance. An unsupervised algorithm would be the ideal solution, however, no purely unsupervised method for training neural networks to effectively recognize surgical phases solely from the frames of laparoscopic videos is known at present. As a result, some degree of supervised learning is necessary. In this regard, we propose an effective semi-supervised algorithm for tackling the problem.

Previous work on self-supervised pre-training (Doersch and Zisserman, 2017) has demonstrated that neural networks can learn a representation of certain inherent characteristics of data by first being trained to perform an auxiliary task for which labels are generated automatically. We propose to pre-train convolutional neural networks (CNN) and long short-term memory (LSTM) networks on the self-supervised task of predicting remaining surgery duration (RSD) from laparoscopic videos (Twinanda et al., 2018). We hypothesize that the progression of time in a surgical procedure is closely related to the phases of the surgery and that the variations in surgical phases often correspond to variations in the duration of surgeries. Hence, a model pre-trained to predict RSD could more easily adapt to surgical phase recognition and generalize better to variations in surgical phases. Additionally, the use of self-supervised learning makes it feasible to pre-train the network on a large number of laparoscopic videos. This could enable the network to generalize better to surgeries involving differences in patient characteristics, surgeon skill levels and surgeon styles. In this work, we modify the architecture and training approach used in Twinanda et al. (2018) for RSD prediction in order to make it more suitable for pre-training CNN-LSTM networks for surgical phase recognition. Our results show that the pre-training improves performance on the subsequent supervised surgical phase recognition task. Consequently, similar levels of performance could be obtained with less annotated data.

Despite its importance, very few publications have addressed the topic of semi-supervised surgical phase recognition. Bodenstedt et al. (2017), the only prior work that we know of to address this problem, presented a method to pre-train CNNs by predicting the correct temporal order of randomly sampled pairs of frames from laparoscopic videos. The idea is to enable the model to understand the temporal progression of laparoscopic workflow, quite similar to the goal of RSD pre-training. While this method does improve performance, its limitations are that it does not utilize complete video sequences to learn about surgical workflow and only the CNN is pre-trained. With the proposed RSD prediction task, the network is pre-trained on complete laparoscopic video sequences. Furthermore, LSTMs, which are responsible for learning temporal features, are pre-trained alongside CNNs. We believe this to be a more effective approach for learning about the temporal workflow of surgical procedures. The experimental results validate the advantages of our proposed RSD pre-training approach. In this work, we also present the first detailed analysis of the effect of self-supervised pre-training on surgical phase recognition performance when different amounts of annotated laparoscopic videos are available.

It has become a popular choice to combine recurrent neural networks (RNNs) with CNNs for surgical phase recognition (Jin et al., 2016; Bodenstedt et al., 2017; Twinanda, 2017). Jin et al. (2016) and Bodenstedt et al. (2017) trained CNN-RNN models in an end-to-end manner, which enables better correlation between spatial features extracted by the CNN and the temporal knowledge acquired by the RNN. However, due to the high space complexity of such an approach, the RNN is not unrolled over complete video sequences and is optimized on video segments. Twinanda (2017) optimizes their model, which we refer to as EndoLSTM, over complete video sequences, which is ideal for capturing long range relationships within the surgical procedure, but achieves this by training the CNN and RNN separately in a two-step process. The aforementioned publications have not provided an apples-to-apples comparison between the two methods of training CNN-RNN networks, which we look to address. We propose a model (EndoN2N), which optimizes a CNN-RNN network in an end-to-end manner on complete video sequences using an approximation of backpropagation through time (BPTT), and compare it to the EndoLSTM model based on the same architecture. Understanding the best method of training surgical phase recognition models is important for obtaining optimal performance. This helps when scaling up surgical phase recognition to different types of surgeries, since a better optimized model will require less annotated data to obtain the required levels of performance. We observe that end-to-end training leads to superior performance and better generalization within the different surgical phases.

The innovations presented in this paper can be summarized as follows: (1) the introduction of RSD prediction as a self-supervised pre-training task, which outperforms the previous self-supervised pre-training approach proposed for surgical phase recognition, (2) self-supervised pre-training of CNN-LSTM networks in an end-to-end manner on long-duration surgical videos, (3) the first systematic study of semi-supervised surgical phase recognition performance with variation in the amount of annotated data and (4) an apples-to-apples comparison between an end-to-end CNN-LSTM training approach (EndoN2N) and the two-step optimization used in the EndoLSTM model. We also present additional experiments to better understand the characteristics of the proposed RSD pre-training model and examine the potential of our models for actual clinical application.

2 Related Work

2.1 Self-Supervised Learning

Unsupervised representation learning has been an active area of research within the context of deep learning. Initial work on the topic focused on methods for initializing deep neural networks with weights close to a good local optimum, since no method was known at the time to effectively train randomly initialized deep networks. One of the most popular approaches was to learn compact representations which could be used for reconstruction of the input data (Hinton and Salakhutdinov, 2006; Bengio et al., 2006). Hinton and Salakhutdinov (2006) demonstrated a method for initializing the weights of a deep autoencoder through unsupervised training of stacked single-layer restricted Boltzmann machines (RBMs). Bengio et al. (2006) showed that deep neural networks can be initialized with meaningful weights by training each layer as an individual autoencoder. Both these works highlighted the performance improvement obtained from unsupervised pre-training for subsequent supervised learning tasks.

The availability of large datasets containing millions of labeled high-resolution images made it possible to effectively train deep CNNs for vision tasks without relying on any form of pre-training. Krizhevsky et al. (2012) trained a randomly initialized deep CNN on the ImageNet dataset for image recognition, which considerably outperformed previous state-of-the-art machine learning approaches. Since then, for several computer vision applications (Girshick et al., 2014; Donahue et al., 2017), pre-training CNNs using supervised learning on large datasets, such as ImageNet, has become the norm as it tends to outperform unsupervised pre-training approaches. However, large datasets of annotated data are not available in all domains and are difficult to generate. Hence, unsupervised representation learning is still a very attractive option, especially if it can match the performance of purely supervised pre-training approaches.

Self-supervised learning has been recently introduced as an alternate method for unsupervised pre-training. The goal is to learn underlying relationships in real-world data by utilizing inherent labels. Several approaches were presented to capture visual information from static images which would be beneficial for subsequent supervised learning tasks such as image classification and object detection. Doersch et al. (2015) built a siamese network to predict the relative position between randomly sampled pairs of image patches in order to learn spatial context within images. Noroozi and Favaro (2016) extended the method to arrange multiple randomly shuffled image patches in the correct order, essentially making the network solve a jigsaw puzzle. Zhang et al. (2016) and Larsson et al. (2016) proposed to pre-train CNNs by making them predict the original color of images which have been converted to grayscale. Dosovitskiy et al. (2014) created surrogate classes corresponding to single images and extended the classes by applying several transformations to the images. They then pre-trained CNNs to differentiate between the surrogate classes while being invariant to the applied transformations. However, the aforementioned self-supervised pre-training approaches are not ideal for a task such as surgical phase recognition, where it is beneficial to utilize video data rather than static images, as it possesses temporal information in addition to visual information.

Several works have explored self-supervised representation learning using video data. Mobahi et al. (2009) presented a method to learn temporal coherence in videos by enforcing that the features extracted using a CNN from consecutive images be similar. Agrawal et al. (2015) utilize egomotion as a supervisory signal for self-supervised pre-training. Wang and Gupta (2015) proposed to learn video representations by tracking image patches through a video. Misra et al. (2016), Fernando et al. (2017), Lee et al. (2017), and Bodenstedt et al. (2017) all aimed to learn representations that capture the temporal structure of video data. Misra et al. (2016) pre-trained CNNs by predicting whether a set of frames is in the correct temporal order, formulating the task as a binary classification problem. Fernando et al. (2017) sampled subsequences, containing both correct and incorrect temporal sequences, from videos and trained a network to distinguish the subsequences that have an incorrect temporal order. Lee et al. (2017) trained a network to sort a sequence of randomly shuffled image frames into the correct temporal sequence. The method proposed by Bodenstedt et al. (2017) involved predicting the correct order of a pair of frames, which have been randomly sampled from a laparoscopic video, and is very similar to the approach of Misra et al. (2016). All of these approaches focus on pre-training CNNs. CNN-LSTM networks are often utilized for applications related to action recognition, where the LSTM is the critical component for learning temporal structure within video data. Hence, we claim that it is not always optimal to merely pre-train the CNN. We believe that pre-training both the CNN and LSTM networks would be ideal for learning representations that capture correlations between the spatial and temporal structure of video data.

For pre-training LSTM networks, future prediction in videos has been proposed as a self-supervised learning task (Srivastava et al., 2015; Lotter et al., 2017). It was argued that a learned representation which could be used to predict future frames of a video would gather knowledge about temporal and spatial variations. Srivastava et al. (2015) train an LSTM encoder-decoder network to simultaneously predict future frames and reconstruct a video sequence. This pre-training approach was shown to improve performance on activity recognition tasks. Lotter et al. (2017) present a network for video prediction, comprising CNNs and Convolutional LSTM networks (Shi et al., 2015), inspired by neuroscience research on 'predictive coding'. The network was trained in an end-to-end manner and was shown to be more effective in predicting the future frames of a video than an LSTM encoder-decoder network, but the potential for utilizing this approach as a pre-training step for action recognition was not explored. Despite future prediction approaches being viable for pre-training CNN-LSTM networks, they have only been validated on short video sequences. Our proposed method aims to obtain long-range spatio-temporal knowledge by utilizing complete laparoscopic videos, which are of long duration.

Figure 1: Cholecystectomy surgical phases with mean (± std) duration in seconds within the Cholec120 dataset.

2.2 Surgical Phase Recognition

Previous surgical phase recognition approaches have usually relied on either visual data (Twinanda et al., 2017; Blum et al., 2010), tool usage signals (Padoy et al., 2012; Forestier et al., 2013) or surgical action triplets (Forestier et al., 2015; Katić et al., 2014). Among these, visual information is the only source of data that is ubiquitous in all laparoscopic surgical procedures, whereas tool usage and triplet information are obtained either through specialized equipment or manual provision. Since the focus of this work is on the development of real-time surgical phase recognition approaches suitable for widespread deployment in ORs, we propose models which rely purely on visual data, though they could be extended to utilize other data too. In this section, only the previous works which also utilize visual data are discussed.

Various statistical models have been utilized for modeling the temporal structure of surgical videos. Hidden Markov Models (HMMs) were used in Padoy et al. (2008), Lalys et al. (2012) and Cadène et al. (2016). Dergachyova et al. (2016) implemented a Hidden semi-Markov Model. Twinanda et al. (2017) utilized hierarchical HMMs, and Conditional Random Fields have also been a popular choice (Quellec et al., 2014; Charrière et al., 2017; Lea et al., 2015). Some works have also utilized Dynamic Time Warping (Blum et al., 2010; Lalys et al., 2013), which, however, is not applicable for real-time surgical phase prediction, as the algorithm requires information from the entire video and is also not well suited for complex non-sequential workflows.

Padoy et al. (2008) and Dergachyova et al. (2016) combined tool usage signals and visual cues from laparoscopic images for real-time surgical phase recognition; however, widespread application of surgical phase recognition algorithms relying on tool usage signals seems to be a difficult task. Several works proposed effective approaches for surgical phase recognition in cataract surgeries (Lalys et al., 2012; Quellec et al., 2014, 2015; Charrière et al., 2017). These approaches relied on handcrafted features, though, and it was shown in Twinanda et al. (2017) that automatically extracting features using a CNN significantly outperformed commonly utilized handcrafted features. The approach proposed by Cadène et al. (2016) is to provide an HMM with features extracted using a deep CNN. However, the use of an RNN, such as the LSTM, for temporal sequence learning is shown to perform better than HMMs (Twinanda, 2017).

Approaches that combine CNNs with RNNs have been presented in Twinanda (2017), Jin et al. (2016) and Bodenstedt et al. (2017). The EndoLSTM model presented in Twinanda (2017) utilized a two-step approach of training a CNN and a LSTM independently for surgical phase recognition. The CNN was used to extract features specific to the surgical phases from the frames of laparoscopic videos, which were then provided to the LSTM during training. The LSTM was optimized on complete video sequences. However, theoretically, end-to-end training is ideal for combining the complementary spatial and temporal knowledge captured by the CNN and RNN networks respectively (Hajj et al., 2017).

Practically, end-to-end training of a CNN-RNN network on complete laparoscopic video sequences is not feasible due to the high space complexity of the approach. Previous works have presented end-to-end training approaches optimized on video subsequences. Jin et al. (2016) performed end-to-end optimization of a CNN-LSTM network over a set of 3 frames sampled at regular intervals from a laparoscopic video. This was the best performing model at the M2CAI 2016 surgical workflow challenge (http://camma.u-strasbg.fr/m2cai2016/index.php/workflow-challenge-results/), outperforming the EndoLSTM model. However, it is not evident if the performance improvement is due to the alternate training approach or the utilization of a deeper CNN. Bodenstedt et al. (2017) incorporated a gated recurrent unit (GRU) (Cho et al., 2014) and trained a CNN-GRU network in an end-to-end manner on laparoscopic video subsequences. They copy the GRU’s hidden state between consecutive subsequences belonging to the same video sequence. However, this model was unable to match the performance of EndoLSTM on the EndoVis15Workflow dataset (http://endovissub-workflow.grand-challenge.org/) and the authors attributed this to the large cholecystectomy-specific surgical dataset used to train EndoLSTM. Essentially, no previous publication has provided an apples-to-apples comparison between an end-to-end optimization approach and the two-step optimization approach of Twinanda (2017).

(a) CNN Fine-Tuning Architecture
(b) CNN-LSTM Architecture
Figure 2: EndoN2N model for surgical phase recognition. The initial CNN fine-tuning network is depicted in (a) and (b) depicts the CNN-LSTM network which is trained in an end-to-end manner, where the layers within the dotted line are initialized by the CNN fine-tuning step.

3 Methodology

In this paper, surgical phase recognition approaches for cholecystectomy surgeries are discussed. It is to be noted though that the proposed approaches are generalizable to other surgery types as well. We divide cholecystectomy surgical procedures into 7 distinct surgical phases, depicted in Figure 1, similar to Twinanda et al. (2017). We classify each time step of a laparoscopic video as one of the 7 surgical phases, hence formulating the surgical phase recognition task as a multi-class classification problem.

This section will first discuss the CNN-LSTM architecture of the proposed EndoN2N model. This will be followed by a detailed presentation of our end-to-end training approach to contrast it with existing CNN-LSTM training approaches. Then, the proposed RSD pre-training will be presented. The motivation for using this pre-training task along with the proposed RSD prediction model and the corresponding surgical phase recognition model will be discussed. Finally, we will briefly describe the temporal context pre-training approach of Bodenstedt et al. (2017), which we use as a comparison baseline for our pre-training approach, since it is the only self-supervised pre-training approach previously validated for surgical phase recognition.

(a) Rolled graph
(b) Unrolled graph
Figure 3: Computation graph for end-to-end training of a CNN-LSTM network. $x_t$ and $\hat{y}_t$ are the input frame and the network prediction at the $t^{th}$ time-step of a sequence, respectively. (a) shows the rolled CNN-LSTM network. (b) depicts the unrolled computational graph, with the blue lines illustrating the backpropagation of the loss through the CNN-LSTM network for a sequence of length $N$.
(a) Progress Regression Architecture
(b) RSD and Progress Multi-Task Network
(c) Updated EndoN2N Model
Figure 4: Proposed RSD pre-training model. CNN architecture for progress regression is shown in (a). CNN-LSTM network designed for multi-task RSD and progress regression, incorporating elapsed time and predicted progress as additional LSTM features, is depicted in (b). (c) is the updated EndoN2N CNN-LSTM network for compatibility with the RSD pre-training model. The layers within the dotted lines are fine-tuned after having been initialized on the training task mentioned within the brackets. The layers outside the dotted lines are randomly initialized.

3.1 EndoN2N

The EndoN2N model, depicted in Figure 2(b), combines a CNN with an LSTM. The model is adaptable to any CNN architecture and RNN variant. We utilize the LSTM as our recurrent network due to its robustness to the vanishing gradient problem (Hochreiter and Schmidhuber, 1997; Bengio et al., 1994). Our experiments are performed using the CaffeNet (Jia et al., 2014) CNN architecture, which is a slight modification of the AlexNet (Krizhevsky et al., 2012) architecture. Although we observed in our preliminary experiments that a deeper CNN architecture improves performance, we utilize CaffeNet, a relatively shallow architecture, for two reasons: (1) our aim is to provide an apples-to-apples comparison between the end-to-end training approach of the proposed EndoN2N model and the two-step training approach of EndoLSTM. This only requires a common CNN architecture to be used in both models and is not dependent on any specific architecture. Additionally, the advantages of the proposed RSD pre-training approach can also be demonstrated using any CNN architecture. (2) end-to-end CNN-LSTM training is computationally intensive. The use of a relatively shallow CNN makes it possible to perform an extensive experimental evaluation in order to clearly demonstrate the advantages of our proposed semi-supervised approach.

As the CNN-LSTM network is trained in an end-to-end manner, we have named the model EndoN2N. Before the end-to-end training step, the CNN is first separately fine-tuned for surgical phase recognition, as depicted in Figure 2(a), in order to initialize the CNN-LSTM network with informative features corresponding to the surgical phases. In our experiments, we observe that the CNN-LSTM training converges to a poor local optimum unless the CNN is first independently fine-tuned for surgical phase recognition. The CNN fine-tuning is accomplished by replacing the final fully connected layer of the CaffeNet architecture with a new fully connected layer, as shown in Figure 2(a), of size equal to the number of surgical phases; in our case, there are 7 output neurons. All the fine-tuned layers of the CNN, except this final layer, are then appended to an LSTM, which in turn is followed by another new fully connected layer, also containing as many output neurons as the number of surgical phases (Figure 2(b)). The softmax function is applied at the end of both of these fully connected layers to obtain a probability distribution over the different surgical phases.
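To make the architecture concrete, the sketch below builds a toy CNN-LSTM of this shape in PyTorch. The convolutional trunk, layer sizes and the names `EndoN2N` and `fc_phase` are illustrative stand-ins, not the CaffeNet configuration used in the experiments:

```python
import torch
import torch.nn as nn

class EndoN2N(nn.Module):
    """Sketch of a CNN-LSTM phase recognition network (hypothetical sizes)."""
    def __init__(self, num_phases=7, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Stand-in for the fine-tuned CNN trunk (its final fc layer removed).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # New fully connected layer: one output neuron per surgical phase.
        self.fc_phase = nn.Linear(hidden_dim, num_phases)

    def forward(self, frames, state=None):
        # frames: (batch, time, 3, H, W) -> per-frame CNN features
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, state = self.lstm(feats, state)
        return self.fc_phase(out), state   # per-time-step phase logits

model = EndoN2N()
logits, _ = model(torch.randn(1, 5, 3, 64, 64))   # 5-frame clip
```

A softmax over the last dimension of `logits` then yields the per-frame probability distribution over the 7 phases.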

Since surgical phase recognition is formulated as a multi-class classification problem, we compute the classification loss using the multinomial logistic function defined as:

L = -\frac{1}{N} \sum_{t=1}^{N} \sum_{c=1}^{C} y_t^c \log\big(\sigma_c(z_t)\big) \qquad (1)

where $N$ is the total number of frames in a laparoscopic video, $C$ refers to the number of distinct surgical phases, $y_t^c$ is the ground truth for phase $c$, $z_t$ is the vector of activations of the final fully connected layer at the $t^{th}$ time step of the surgery and $\sigma_c$ is the softmax function computing the predicted probability of phase $c$.
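For concreteness, a minimal NumPy sketch of this loss, using integer phase labels per frame rather than one-hot vectors, could look as follows (function name and signature are illustrative):

```python
import numpy as np

def phase_loss(logits, labels):
    """Multinomial logistic (cross-entropy) loss averaged over all N frames.
    logits: (N, C) activations of the final fully connected layer.
    labels: (N,) ground-truth phase index for each frame."""
    z = logits - logits.max(axis=1, keepdims=True)            # stability shift
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax
    n = len(labels)
    return -np.log(probs[np.arange(n), labels]).mean()
```

With uniform logits the loss equals log C, and it approaches zero as the predicted probability of the correct phase approaches one.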

3.1.1 Training Approach

Here we present a detailed explanation of our end-to-end training approach for optimizing CNN-LSTM networks on long duration video sequences. The aim is to contrast our approach with the training approach employed in EndoLSTM and existing approaches for end-to-end training of CNN-LSTM networks on laparoscopic video subsequences. Since the following discussion focuses on the approximation in the BPTT algorithm, a generic description of the layer-wise gradient computation is presented. While the discussion is based on the basic stochastic gradient descent algorithm for simplicity, any other optimization algorithm can also be used.

As illustrated in Figure 3, end-to-end training of a CNN-LSTM network requires the loss to be backpropagated through both the LSTM and CNN. Additionally, the BPTT algorithm requires the loss to be backpropagated from the last time step of a sequence, $t = N$, to the very first time step, $t = 1$. If we denote the CNN weights as $w_{cnn}$ and the weights belonging to the LSTM as $w_{lstm}$, the gradients of the loss function $L$ with respect to the network weights at a time instant $t$ can be expressed as:

\frac{\partial L}{\partial w_{lstm}}\bigg|_t = g\left(\frac{\partial L}{\partial \hat{y}_t},\; \frac{\partial L}{\partial w_{lstm}}\bigg|_{t+1}\right) \qquad (2)
\frac{\partial L}{\partial w_{cnn}}\bigg|_t = h\left(\frac{\partial L}{\partial w_{lstm}}\bigg|_t\right) \qquad (3)

where $g$ and $h$ are generic functions used to express the computation of gradients for the different layers of the LSTM and CNN, respectively, and $\hat{y}_t$ is the network prediction at the $t^{th}$ time step, as illustrated in Figure 3. The boundary condition of the BPTT algorithm at the end of the sequence is:

\frac{\partial L}{\partial w_{lstm}}\bigg|_{N+1} = 0 \qquad (4)
(a)
(b)
Figure 5: Illustration of the BPTT algorithm. Red arrow denotes a forward pass and a green arrow denotes a backward pass. (a) depicts the standard algorithm and (b) illustrates our approximation.

The stochastic gradient descent weight update when utilizing a mini-batch of one video sequence of length $N$ is given by:

w^{(j+1)} = w^{(j)} - \eta \sum_{t=1}^{N} \frac{\partial L}{\partial w}\bigg|_t, \quad w = \{w_{cnn}, w_{lstm}\} \qquad (5)

where $w^{(j)}$ are the learned CNN-LSTM weights at the end of $j$ training iterations and $\eta$ is the learning rate. The weights are updated using the gradients computed for the entire video sequence, as shown in Figure 5(a). The recursive structure of Equation (2) implies that to calculate the gradient of the loss function with respect to the network weights at the first time step of the sequence, we require the gradients from the final time step. For this to be possible, the entire unrolled computational graph of the CNN-LSTM network, Figure 3(b), needs to be stored in memory. Due to the long duration of cholecystectomy surgeries and the large number of CNN parameters, end-to-end training of a CNN-LSTM network on complete laparoscopic video sequences has a high space complexity. Since no efficient method for storing the complete unrolled graph of the CNN-LSTM network during training is known, we utilize an approximation of the BPTT algorithm.

In our approach, we restructure the loss shown in Equation (1) as:

L = -\frac{1}{N} \sum_{s=1}^{K} \sum_{t=(s-1)T'+1}^{\min(sT',\,N)} \sum_{c=1}^{C} y_t^c \log\big(\sigma_c(z_t)\big) \qquad (6)

In Equation (6), we divide the complete laparoscopic video into $K$ consecutive subsequences of length $T'$. $T'$ is appropriately selected such that the available computational resources are sufficient for storing the unrolled CNN-LSTM graph for $T'$ time-steps. The loss is backpropagated for every subsequence and the gradients are accumulated independently over the different subsequences before updating the weights.

In this method, at the boundaries between consecutive subsequences, the LSTM cell states and hidden states are forward propagated, while the BPTT algorithm is truncated, as illustrated in Figure 5(b). This implies an approximation of Equation (2) at the subsequence boundaries:

\frac{\partial L}{\partial w_{lstm}}\bigg|_{sT'} \approx g\left(\frac{\partial L}{\partial \hat{y}_{sT'}},\; 0\right), \quad s = 1, \dots, K-1 \qquad (7)

Since the gradient of the loss with respect to the weights of the CNN, Equation (3), depends on the loss gradient with respect to the LSTM weights, it is approximated as well. The stochastic gradient descent weight update step is now computed as:

$w^{(s+1)} = w^{(s)} - \lambda \sum_{k=1}^{K} \sum_{t=(k-1)T'+1}^{kT'} \widetilde{\nabla}_{w} \mathcal{L}_t\big(w^{(s)}\big)$   (8)

where $\widetilde{\nabla}_{w}$ denotes the gradient computed under the truncation of Equation (7).

3.2 EndoLSTM

The architecture adopted for the EndoLSTM model in our experiments is exactly the same as that of the EndoN2N model, Figure 2. This is essential to provide an accurate comparison between the two models. As for the EndoN2N model, the CNN is first fine-tuned for surgical phase recognition to provide the LSTM with informative features. The difference between the two models lies in the backpropagation of the gradients through the CNN-LSTM network. In the EndoLSTM model, only the weights of the LSTM are updated using the BPTT algorithm, while the weights of the CNN remain fixed, as depicted in Figure 6. Since the LSTM contains far fewer parameters than the CNN, it is feasible to store in memory the computational graph of the unrolled LSTM network over complete cholecystectomy video sequences. Hence, we do not need to approximate the BPTT algorithm.

Figure 6: Computation graph for two-step optimization of a CNN-LSTM network. $x_t$ and $\hat{y}_t$ are the input frame and network prediction at the $t$-th time-step of a sequence, respectively. $f_t$ denotes the CNN features extracted from $x_t$. The rolled graph is shown on the left. The unrolled LSTM network is depicted on the right.

3.3 Remaining Surgery Duration Pre-training

A key contribution of this work is the self-supervised pre-training of CNN-LSTM networks on the RSD prediction task. We hypothesize that accurate prediction of the time remaining in a surgery requires a good understanding of the surgical workflow. A network trained to accurately predict RSD is likely to have indirectly gained knowledge of the different surgical phases that occur, the duration of each phase and the variations in these phases, since such variations correspond to variations in the remaining surgery duration. This could make it easier for the network to later be adapted for surgical phase recognition, thereby requiring less manually annotated data and making it easier to scale up surgical phase recognition to many types of surgeries.

The RSD prediction task is formulated as a supervised regression task, where the network is provided with labels for the remaining surgery duration. Since for a given laparoscopic video the remaining surgery duration at a time instant is simply the remaining time in that video, the labels are available without the need for any manual annotation. As the labels are obtained for free, RSD prediction is a self-supervised learning task. This makes it feasible to utilize a large number of laparoscopic videos to train a network for RSD prediction, ensuring that potentially valuable information from even unlabeled videos is exploited. For example, the network could acquire knowledge related to variable patient conditions and surgeon styles, thereby making it generalize better.
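The label-extraction step described above can be sketched as follows. For frame $t$ of a video with $n$ frames, the RSD is simply the time left in the video and the progress is the fraction already elapsed; the function and field names below are illustrative, not taken from the paper's code.

```python
def rsd_labels(num_frames, fps=1):
    """Self-supervised RSD and progress targets, extracted for free
    from the video time-stamps -- no manual annotation needed."""
    duration = num_frames / fps                 # total surgery duration (s)
    labels = []
    for t in range(num_frames):
        elapsed = t / fps                       # elapsed time at frame t
        labels.append({
            "elapsed": elapsed,
            "rsd": duration - elapsed,          # remaining surgery duration
            "progress": elapsed / duration,     # fraction already completed
        })
    return labels
```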

3.3.1 RSD Prediction Model

The CNN-LSTM network for RSD prediction is shown in Figure 3(b). The model is similar to the RSDNet model presented in Twinanda et al. (2018), except for two key changes: (1) elapsed time and predicted progress are taken as additional input features into the LSTM and (2) the CNN-LSTM network is trained in an end-to-end manner. Although the original RSDNet model uses a two-step optimization, similar to the EndoLSTM model, end-to-end training is the most natural choice for pre-training a CNN-LSTM network, as it allows the features learned by the CNN and the LSTM to be jointly optimized.

We adopt the approach proposed by Twinanda et al. (2018) to learn an RSD prediction model without any manual annotation, contrary to previous approaches (Aksamentov et al., 2017). This first involves fine-tuning the CNN for progress estimation, as depicted in Figure 3(a), i.e., the task of predicting the percentage of the surgery that has been completed at a given time instant. Progress estimation is also formulated as a self-supervised regression task. The CNN-LSTM model is then trained for the multi-task objective of RSD prediction and progress prediction. Twinanda et al. (2018) showed that training for this multi-task objective was better than training for RSD alone.

End-to-end training of a CNN-LSTM network on laparoscopic video sequences for RSD prediction is performed with the same approach used in EndoN2N (Section 3.1.1). We restructure the loss function as:

$\mathcal{L} = \sum_{k=1}^{K} \sum_{t=(k-1)T'+1}^{kT'} \Big[ \ell\big(y^{rsd}_t - \hat{y}^{rsd}_t\big) + \ell\big(y^{prog}_t - \sigma(\hat{y}^{prog}_t)\big) \Big]$   (9)

where $y^{rsd}_t$ and $y^{prog}_t$ are the ground truths for RSD and progress, and $\hat{y}^{rsd}_t$ and $\hat{y}^{prog}_t$ are the activations of the corresponding fully connected layers for the $t$-th frame of the laparoscopic video. $\sigma$ is the sigmoid function, and $\ell$ is the smooth L1 loss (Girshick, 2015) defined as:

$\ell(x) = \begin{cases} 0.5\,x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$   (10)
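The smooth L1 loss of Girshick (2015) is quadratic near zero and linear in the tails, which limits the influence of outlier RSD errors; written directly:

```python
def smooth_l1(x):
    """Smooth L1 loss: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1 else ax - 0.5
```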

3.3.2 Updated Surgical Phase Recognition Model

In Twinanda et al. (2018), it was argued that knowledge of the elapsed time ($t$) and progress ($p$) was beneficial for RSD prediction, since they possess a fundamental relation with RSD ($t_{rsd}$), as shown below:

$t_{rsd} = T - t = \left(\dfrac{1}{p} - 1\right) t$   (11)

where $T$ is the total duration of the surgery. The RSDNet model simply concatenated the elapsed time with the output of the LSTM and incorporated progress only as an additional output to be predicted. We instead incorporate the elapsed time as well as the estimated progress, $\hat{p}$, as input features to the LSTM itself, as shown in Figure 3(b). We believe the LSTM is then capable of learning more complex relationships between the elapsed time and the model's perception of surgery progress and RSD.

The goal of utilizing these additional features in our CNN-LSTM model is to make RSD prediction a more effective pre-training approach for surgical phase recognition. The EndoN2N model, however, needs to be modified in order to be compatible with the proposed RSD pre-training model. The updated EndoN2N model architecture, which includes these additional features as inputs to the LSTM, is shown in Figure 3(c). We later present an ablation study to demonstrate the advantages of the proposed RSD pre-training model and the updated EndoN2N architecture.

3.4 Temporal Context Pre-training

Figure 7: TempCon pre-training model.

The TempCon pre-training approach of Bodenstedt et al. (2017), which we use as a baseline for comparison, aims to learn the temporal order of laparoscopic workflow by training a Siamese network to predict the relative order of two randomly sampled frames of a laparoscopic video. The specific model used in our experiments is derived from the architecture of Misra et al. (2016), which is designed for the similar task of predicting the correct temporal order of randomly sampled frames, since we observed it to perform better than the architecture of Bodenstedt et al. (2017) in terms of accuracy on the temporal context prediction task in our preliminary experiments. Additionally, such an architecture ensures that all the layers of the CaffeNet CNN used in the subsequent surgical phase recognition task will be pre-trained.

This approach is designed for pre-training CNNs only. A two-stream Siamese network is created by replicating layers conv1 to fc7 of the CaffeNet architecture, as shown in Figure 7. Two randomly sampled frames from a laparoscopic video are provided as inputs to the network. Weights are shared between the two streams. The final layers of the Siamese network are concatenated and are followed by a fully connected layer comprising two neurons, which provides the classification output. Each neuron respectively corresponds to one of the input frames. The output of the network is either 0 or 1 depending on whether frame 1 or frame 2 is predicted as occurring first in the surgical sequence.
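The construction of training examples for this task can be sketched as follows: the order label comes for free from the frame indices. The function name and sampling scheme below are illustrative assumptions, not the authors' exact code.

```python
import random

def sample_order_pairs(num_frames, num_pairs, seed=0):
    """Draw (frame_i, frame_j, label) triples for temporal-order training.

    label is 0 when frame 1 occurs first in the video, 1 otherwise,
    so no manual annotation is required.
    """
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_pairs):
        i, j = rng.sample(range(num_frames), 2)  # two distinct frame indices
        label = 0 if i < j else 1                # 0: frame 1 occurs first
        pairs.append((i, j, label))
    return pairs
```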

Model Task Hyperparameters
Optimizer Iterations Step-Size Batch-Size
EndoN2N CNN Fine-tuning Phase SGD 50k 20k 0.1 50 5
CNN-LSTM training Phase Adam 8k 2k 0.25 500 12 5
EndoLSTM CNN fine-tuning Phase SGD 50k 20k 0.1 50 5
LSTM training Phase SGD 30k 10k 0.1 6000 5
RSD Prediction CNN fine-tuning Progress SGD 50k 15k 0.1 64 5
CNN-LSTM training RSD - Progress SGD 8k 2k 0.5 500 12
Temporal Context prediction Relative Order SGD 50k 5 n/a n/a 160 5
Table 1: Training hyperparameters for each individual model, including their different training steps and their respective training task.

4 Experimental Setup

The experiments are carried out on Cholec120, a dataset of 120 cholecystectomy laparoscopic videos. The surgical procedures contained in the dataset were performed by 33 surgeons at the University Hospital of Strasbourg. The videos are recorded at 25 fps and have an average duration of 38.1 mins (± 16.0 mins). In total, the dataset amounts to over 75 hours of recordings. All 120 videos have been annotated at a frame rate of 1 fps with surgical phase labels corresponding to the 7 phases shown in Figure 1.

We have designed experiments for two specific goals: (1) to evaluate the improvement in performance obtained by the EndoN2N model over EndoLSTM and (2) to demonstrate the benefits of the proposed self-supervised RSD pre-training approach in reducing the reliance of supervised surgical phase recognition algorithms on annotated data. The division of data for the experiments and the evaluation metrics are described below.

4.1 EndoN2N Evaluation

Surgical phase recognition performance is evaluated on the Cholec120 dataset using a 4-fold cross-validation setup. Each fold is divided into 80 training, 10 validation and 30 test videos. 60 randomly sampled training videos (75 percent) are used to first fine-tune the CNN individually for surgical phase recognition. All 80 videos are then used to train the combined CNN-LSTM network of the EndoN2N model. In the case of the EndoLSTM model, the LSTM is independently trained on the 80 training videos utilizing features extracted from the fine-tuned CNN. All 10 validation videos are used both during CNN fine-tuning and during CNN-LSTM training (EndoN2N) or LSTM training (EndoLSTM). The final EndoN2N model weights selected for testing correspond to the best performing model on the validation set. The validation videos are also used to perform the hyperparameter search discussed in Section 5. Both models are evaluated on all 30 of the test videos. The final results presented are the averages over the four folds.

4.2 RSD Pre-training Evaluation

The EndoN2N model is utilized to evaluate the advantages of the proposed RSD pre-training approach in reducing the amount of annotated data required for successful surgical phase recognition. We perform a comparison between the following three pre-training approaches: (1) RSD pre-training, (2) TempCon pre-training and (3) no self-supervised pre-training. The experiments are conducted using the same 4-fold cross-validation setup. For each fold, all 80 training videos are used for pre-training the network, without relying on the available annotations. The EndoN2N model is then fine-tuned for surgical phase recognition using 10, 20, 25, 40, 50, 80 and 100 percent of the labeled training videos from each fold, i.e., 8, 16, 20, 32, 40, 64 and 80 videos respectively. The 80 training videos of each fold are divided into four quartiles based on the surgery durations and the supervised fine-tuning subsets are created by sampling an equal number of videos from each quartile. 4 different subsets of 8, 16, 20, 32 and 40 videos along with 2 different subsets of 64 videos are sampled from each fold. The average performance over these different subsets is evaluated in order to ensure that the model is not biased by the particular videos selected. 75 percent of the total fine-tuning videos are randomly sampled in each case for the initial CNN fine-tuning step. The evaluation is again performed on the 30 test videos of each fold, and the final results are the averages over all four folds.
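The duration-quartile-balanced sampling of fine-tuning subsets described above can be sketched as below. The function name and tie-breaking are illustrative assumptions; the paper only specifies that an equal number of videos is drawn from each duration quartile.

```python
import random

def stratified_subset(durations, k, seed=0):
    """Pick k video indices, balanced across surgery-duration quartiles.

    durations: per-video surgery durations (one per training video);
    k is assumed divisible by 4.
    """
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    q = len(order) // 4                            # videos per quartile
    quartiles = [order[i * q:(i + 1) * q] for i in range(4)]
    rng = random.Random(seed)
    subset = []
    for videos in quartiles:
        subset.extend(rng.sample(videos, k // 4))  # equal draw per quartile
    return subset
```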

4.3 Evaluation Metrics

To provide a quantitative measure of the performance of the proposed surgical phase recognition models, we utilize the metrics of accuracy, precision and recall as defined in Padoy et al. (2012). Accuracy is defined as the percentage of correct surgical phase predictions within a laparoscopic video. Precision is defined as the ratio between correct predictions and the total number of predictions, while recall is the ratio between correct predictions and the total number of instances in the ground truth. In every laparoscopic video, precision and recall are computed for each individual phase and the average values are reported as well.

To compare the performance of the various models utilizing self-supervised pre-training, we use the F1-score metric, which is the harmonic mean of the average precision and recall values, since it provides a balanced measure of the combined precision and recall metrics. The use of a single score allows us to concisely quantify the performance of different models and eases comparisons.
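The metrics described above can be sketched in a few lines; the function name is illustrative, and per-phase averaging follows the definitions given in the text (precision over predicted counts, recall over ground-truth counts, F1 as the harmonic mean of the two averages).

```python
def phase_metrics(gt, pred, num_phases):
    """Accuracy, average per-phase precision/recall (in percent),
    and the F1-score as the harmonic mean of the two averages."""
    accuracy = 100.0 * sum(g == p for g, p in zip(gt, pred)) / len(gt)
    precision, recall = [], []
    for c in range(num_phases):
        tp = sum(g == c and p == c for g, p in zip(gt, pred))
        n_pred = sum(p == c for p in pred)   # total predictions of phase c
        n_gt = sum(g == c for g in gt)       # ground-truth frames of phase c
        if n_pred:
            precision.append(100.0 * tp / n_pred)
        if n_gt:
            recall.append(100.0 * tp / n_gt)
    avg_p = sum(precision) / len(precision)
    avg_r = sum(recall) / len(recall)
    f1 = 2 * avg_p * avg_r / (avg_p + avg_r)  # harmonic mean
    return accuracy, avg_p, avg_r, f1
```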

Model Accuracy Average Precision Average Recall F1-Score
EndoN2N 86.7±9.3 81.4±23.0 80.9±22.1 81.1±7.5
EndoLSTM 83.0±10.8 77.5±24.0 77.2±24.2 77.3±8.0
Table 2: Surgical phase recognition performance in terms of accuracy, average precision, average recall and F1-score (percentages) evaluated on the complete Cholec120 dataset. Results have been calculated using 4-fold cross validation.
Precision P1 P2 P3 P4 P5 P6 P7
EndoN2N 84.5±25.0 91.5±11.2 76.3±22.9 90.2±13.9 79.4±18.6 72.3±32.9 75.4±28.1
EndoLSTM 77.5±28.3 90.8±11.5 64.4±28.1 83.8±18.7 76.6±20.2 74.9±29.1 74.5±26.6
Recall P1 P2 P3 P4 P5 P6 P7
EndoN2N 84.5±24.0 93.7±11.4 70.6±26.0 90.2±14.3 77.8±19.1 71.1±29.3 78.8±24.8
EndoLSTM 71.4±28.5 87.8±18.6 63.2±29.6 90.6±18.1 78.2±18.4 73.6±28.5 75.6±24.0
Table 3: Surgical phase recognition performance for each individual phase in terms of precision and recall metrics, evaluated on the complete Cholec120 dataset using 4-fold cross validation.
(a) Accuracy
(b) F1-Score
Figure 8: Comparison of surgical phase recognition performance of the EndoN2N model initialized using (1) only ImageNet pre-training without any self-supervised pre-training (vanilla EndoN2N), (2) the proposed RSD pre-training or (3) temporal context pre-training. The effect of variation in the number of annotated training videos on surgical phase recognition performance in terms of (a) accuracy and (b) F1-score is illustrated.

5 Model Training

All the experiments are performed using the Caffe library (Jia et al., 2014). In order to obtain an effective training setup for the EndoN2N and RSD pre-training models, a hyperparameter search was performed over the optimizer as well as several parameter values, namely the learning rate, the size of the LSTM hidden state vectors, the learning rate decay factor, the learning rate decay step-size and the regularization factor. Table 1 details the hyperparameters for the different models discussed in this paper. It is to be noted that all layers with random initializations are assigned a learning rate 10 times higher than pre-trained layers, and that the weights of one layer of the RSD-Progress multi-task network (Figure 3(b)) are not updated.

The stochastic gradient descent (SGD) optimizer is utilized with a momentum of 0.9 and the Adam optimizer is implemented with the parameters proposed in Kingma and Ba (2015). While utilizing the Adam optimizer was beneficial for the EndoN2N model, we did not find it effective for EndoLSTM.

5.1 EndoN2N Weight Initialization

In the experiment comparing the EndoN2N and the EndoLSTM models, the network weights are initialized from the open-source CaffeNet model, which has been pre-trained on the ImageNet dataset. No self-supervised pre-training is utilized. This initializes the CNN layers with pre-trained weights, while the LSTM is randomly initialized. We refer to this as the vanilla EndoN2N model.

For the self-supervised pre-training experiments, the EndoN2N model is pre-trained on either the RSD prediction or TempCon prediction task. Transferring weights from a model trained for RSD prediction enables both the CNN and LSTM to be initialized with pre-trained weights. However, in the case of TempCon pre-training, only the CNN weights are pre-trained while the LSTM weights are once again randomly initialized.

5.2 End-to-End CNN-LSTM Training

The batch size used in a single forward pass corresponds to subsequences of 500 consecutive frames. We trained our models on NVidia GeForce GTX TitanX and NVidia GeForce GTX 1080 GPUs, with 12 GB and 11 GB of memory respectively, which is sufficient for storing the complete unrolled CNN-LSTM graph for 500 time-steps. Since the longest video comprises 5987 frames when sampled at 1 fps, all videos are padded with blank images to a length of 6000 frames. During training, the loss is accumulated over 12 forward passes before performing a weight update, and the padded images are excluded from the loss computation. Hence, one complete iteration corresponds to an effective batch size of one video. Additionally, the total number of iterations during end-to-end CNN-LSTM training always corresponds to 100 epochs. The iterations and step-size are scaled proportionally when different numbers of videos are used for training, as discussed in Section 4.2.
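The padding and loss-accumulation scheme above can be sketched as follows. The `PAD` sentinel and function names are illustrative assumptions; the point is that padded frames contribute nothing to the accumulated loss, so one weight update still corresponds to exactly one video.

```python
PAD = -1  # sentinel label assigned to padded (blank) frames

def pad_labels(labels, target_len=6000):
    """Pad a video's per-frame labels to a fixed length of 6000 frames."""
    return labels + [PAD] * (target_len - len(labels))

def accumulated_loss(frame_losses, labels, chunk=500):
    """Sum per-frame losses over consecutive chunks (one forward pass per
    chunk of 500 frames), excluding padded frames from the computation."""
    total = 0.0
    for start in range(0, len(labels), chunk):
        for l, y in zip(frame_losses[start:start + chunk],
                        labels[start:start + chunk]):
            if y != PAD:            # padded frames are excluded from the loss
                total += l
    return total
```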

5.3 RSD Pre-training

The CNN is first trained for progress estimation after initializing the weights from the CaffeNet model pre-trained on the ImageNet dataset. Unlike in the phase recognition pipeline, data from all training videos are used for fine-tuning the CNN with self-supervision since this leads to optimal semi-supervised surgical phase recognition performance.

As discussed in Twinanda et al. (2018), the naturally high range of RSD target values for cholecystectomy surgeries (the longest surgery in Cholec120 being 100 minutes) needs to be normalized in order to regress the target values while using a sufficiently large regularization parameter to prevent overfitting. We use the same normalization factor that was used by Twinanda et al. (2018) on the Cholec120 dataset.

5.4 TempCon Pre-training

The CNN weights are first initialized from weights pre-trained on the ImageNet dataset, as for the RSD pre-training network. Unlike our proposed RSD pre-training, which utilizes complete video sequences, TempCon pre-training requires pairs of frames to be sampled from the videos. From each of the 80 training videos, 50k pairs of frames are sampled. The model is trained for two full epochs over all the sampled pairs of frames.

6 Results

6.1 EndoN2N Evaluation

Table 2 shows a comparison between the accuracy as well as average precision and recall across all surgical phases obtained by the EndoN2N and EndoLSTM models on the complete Cholec120 dataset with the 4-fold cross-validation setup. EndoN2N outperforms EndoLSTM in each of the metrics.

A comparison between the recognition performance of EndoN2N and EndoLSTM for each of the individual surgical phases is shown in Table 3. As expected from the results of Table 2, EndoN2N performs better in most of the phases in terms of both precision and recall. The third phase, clipping and cutting, is the most crucial phase of cholecystectomy surgeries. It is also of short duration and occurs between two of the longest phases, making it difficult for a surgical phase recognition algorithm to recognize. EndoN2N is considerably better at recognizing this phase. It can also be seen that even in the few cases where EndoLSTM outperforms EndoN2N, the difference is not significant in any of the metrics.

6.2 RSD Pre-training Evaluation

The graphs depicted in Figure 8 illustrate the variation in surgical phase recognition performance with different amounts of annotated training data. The proposed RSD pre-training approach (shown in red) leads to superior performance for all sets of training data in terms of both accuracy and F1-score. TempCon pre-training is only effective when the ratio of the quantity of annotated training videos to pre-training videos is small. When the number of annotated videos increases, the pre-training approach starts to become detrimental, which is a common trend in semi-supervised learning (Paine et al., 2014). On the other hand, the proposed RSD pre-training improves performance even when the training data is fully annotated, further highlighting the superiority of the approach.

(a)
(b)
Figure 9: Relative performance of the RSD pre-trained EndoN2N model with respect to the vanilla EndoN2N model, derived from Figure 8. The RSD pre-trained model is supervised using either (a) 20% or (b) 50% fewer annotated videos than the vanilla EndoN2N model. The pair of numbers on the horizontal axis represents the number of annotated training videos used by the RSD pre-trained model/vanilla EndoN2N model, respectively.

To highlight the effectiveness of the proposed RSD pre-training approach in reducing the reliance of surgical phase recognition models on annotated laparoscopic videos, we show in Figure 9 the relative performance of the RSD pre-trained EndoN2N model, trained using fewer annotated videos, as compared to the same model without any self-supervised pre-training but trained on more annotated videos. We notice that similar levels of performance can be achieved with less annotated data by adopting our proposed pre-training approach. Figures 9(a) and 9(b), which are derived from Figure 8, show the difference in performance when (a) 20% and (b) 50% fewer annotated training videos are utilized, respectively. The pre-training is still performed using 80 videos. In Figure 9(a), the RSD pre-trained model using fewer labeled videos generally performs better when the number of pre-training videos is higher than the number of annotated videos. This is of particular significance for actual clinical application, where there is a vast amount of data, but only a small fraction of it can be annotated. In Figure 9(b) we see that the accuracy drops further as the number of labeled videos increases. This is expected, since pre-training is more effective when the ratio of pre-training data to annotated data is high. Yet, the difference in accuracy always remains under 5%, even though only half the number of annotated videos are used after pre-training. The difference in F1-score also remains within a similar range, the largest observed difference being 5.1%.

7 Discussion

7.1 Ablation Study

Figure 10: Comparison between the surgical phase recognition performance of EndoN2N when pre-trained using either our proposed RSD prediction architecture or the RSDNet architecture of Twinanda et al. (2018).

An ablation study is presented to understand the benefits of utilizing elapsed time and estimated progress as additional features in the RSD pre-training model. Figure 10 illustrates the improvement in surgical phase recognition performance of the EndoN2N model when it is pre-trained with our proposed RSD prediction model as compared to the RSDNet model of Twinanda et al. (2018). The study is performed on the first fold of the Cholec120 dataset. All 80 training videos are used for self-supervised pre-training, while either 20, 40 or 80 annotated training videos are used to fine-tune EndoN2N for surgical phase recognition. The smaller subsets of 20 and 40 videos have been sampled from the 80 training videos using the method described in Section 4.2. It is to be noted that the vanilla EndoN2N architecture, Figure 1(b), is pre-trained when using RSDNet, while the updated model architecture, Figure 3(c), is required when using our proposed RSD pre-training approach.

It can clearly be seen that our proposed RSD pre-training approach leads to superior surgical phase recognition performance in terms of both accuracy and F1-Score. It is also noteworthy that when the EndoN2N model is trained on all 80 annotated videos using the RSDNet model for pre-training (86.9% accuracy and 80.4% F1-score), it performs worse than the EndoN2N model without any self-supervised pre-training (88.2% accuracy and 81.9% F1-score). However, this is not the case with our proposed RSD pre-training model, which leads to an improvement in performance (89.6% accuracy and 83.4% F1-score) even when all annotated training data is used.

7.2 Amount of Pre-Training Data

Figure 11: Graph illustrating the effect of the amount of pre-training videos utilized on surgical phase recognition performance.

Here, we design an experiment to study the effect of the amount of pre-training data available on our RSD pre-training approach. We first divide the 80 training videos of each fold into four quarters. 20 videos are used to fine-tune the network for surgical phase recognition. Increasing amounts of the remaining training videos, i.e., 20, 40 and 60 training videos, are used for RSD pre-training. Figure 11 shows the results of the RSD pre-trained EndoN2N model with different amounts of pre-training videos. The results shown are the averages over the four folds. As we would intuitively expect, the accuracy and F1-score increase with greater amounts of pre-training data.

7.3 Phase Boundary Detection

(a) Temporal distance
(b) Noise
Figure 12: Graphs depicting the variation in quality of phase boundary predictions when different amounts of annotated training videos are used, with or without the proposed RSD pre-training. (a) shows the temporal distance between the actual phase boundaries in the ground truth and the phase boundaries predicted by the RSD pre-trained and vanilla EndoN2N models. The temporal distance is calculated with respect to both the first predicted and closest predicted phase boundaries. (b) shows the percentage of noise in the predictions.

We perform an additional experiment to study how well the surgical phase recognition model is able to locate phase boundaries within a laparoscopic procedure. We measure this using the temporal distance, which is the absolute time difference in seconds between the actual phase boundary in the ground truth and the corresponding predicted phase boundary. For comparison, the temporal distance is measured with respect to both the first prediction and to the closest prediction. On the one hand, the first prediction of a phase boundary is important during actual clinical application. For example, an automatic notification system will alert the required hospital staff at the first instance a new phase is detected. On the other hand, high accuracy of the closest predicted phase boundary enables good initial annotations to be generated that can be of assistance to manual annotators. This can facilitate the creation of annotated data and be further beneficial for scaling up surgical phase recognition algorithms.
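Both variants of the temporal distance described above can be computed with a small helper; the function name is illustrative, and boundary times are assumed to be given in seconds.

```python
def temporal_distances(gt_boundary, pred_boundaries):
    """Temporal distance (s) between a ground-truth phase boundary and
    (a) the first and (b) the closest predicted boundary for that phase."""
    first = abs(pred_boundaries[0] - gt_boundary)                 # first prediction
    closest = min(abs(b - gt_boundary) for b in pred_boundaries)  # best prediction
    return first, closest
```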

Figure 13: Visualization of ground truth (above) and RSD pre-trained EndoN2N predictions (below) for one video of Cholec120. The 7 color coded phase labels are displayed at the bottom. An instance of noise and temporal distance (TD) has been highlighted. (Best seen in color.)

Another metric we compute is the noise. The model predicts certain incorrect phase intervals that do not appear in the ground truth, as shown in Figure 13. Noise is computed as the percentage of time steps of a laparoscopic video which belong to such incorrect phase intervals. Higher noise is detrimental both in clinical applications and when aiming to create a good set of initial annotations.

The calculations presented in Figure 12 are obtained after the predictions of EndoN2N are filtered using a 5 second window to remove any short-term noise or spikes. A 5 second delay in prediction is deemed acceptable for practical real-time applications. The results presented are the averages over all four folds of Cholec120.
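One way to implement such a filtering step is sketched below: predicted phase segments shorter than the window are merged into the preceding segment. This is an assumed, causal variant of the filtering; the exact scheme used in the paper may differ.

```python
def filter_short_segments(pred, min_len=5):
    """Remove predicted phase segments shorter than min_len time-steps
    (at 1 fps, a 5-step window equals the 5 second window) by absorbing
    them into the preceding segment."""
    runs = []                                # consecutive (phase, length) runs
    for p in pred:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    out = []
    for phase, length in runs:
        if length < min_len and out:
            out.extend([out[-1]] * length)   # absorb short spike
        else:
            out.extend([phase] * length)
    return out
```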

It can be seen from the graphs in Figure 12 that using as few as 40 annotated training videos along with RSD pre-training leads to a performance similar to the vanilla EndoN2N model trained on all 80 annotated videos. The proposed RSD pre-training is particularly effective in improving the accuracy of the first predicted phase boundaries and in reducing the false predictions that contribute to noise, making it beneficial for clinical applications. The reduction in prediction noise is also beneficial for creating good initial annotations. Though the closest phase boundaries are generally more accurately predicted by the vanilla EndoN2N model, the difference is very small (less than 5 seconds on average). It should be noted that the temporal distance is computed with respect to the phase boundaries in the ground truth, which are very strict. For practical applications, a slight error is acceptable since the phase transitions are actually more gradual. The use of a bi-directional LSTM based model could further improve predictions for creating initial annotations, although such a model cannot be used in real-time applications.

8 Conclusion

A new self-supervised pre-training approach based on RSD prediction has been presented and shown to be particularly effective in reducing the amount of annotated data required for successful surgical phase recognition. This makes our approach beneficial for scaling up surgical phase recognition algorithms to different types of surgeries. When only half the amount of annotated data is used, surgical phase recognition performance generally remains within 5% of the fully supervised performance if the proposed RSD pre-training is utilized. Additionally, when a sufficiently large amount of pre-training data is utilized, surgical phase recognition performance can even be slightly improved despite relying on 20% less annotated data. This is especially significant for real-world clinical applications, where despite a scarcity of annotated data, which is time-consuming and difficult to generate, there exists an abundance of unlabeled data. The use of self-supervised pre-training ensures that no data remains unexploited. The proposed RSD pre-training approach also outperforms the temporal context pre-training approach, the only self-supervised pre-training approach previously applied to surgical phase recognition. Further, it is interesting to note that the proposed RSD pre-training approach leads to an improvement in performance even when all the training data is annotated.

This work also presents an apples-to-apples comparison between the end-to-end optimization and the two-step optimization of surgical phase recognition models based on CNN-LSTM networks. The results show that the proposed end-to-end optimization approach leads to better performance. Additional experiments were presented, which provide a greater insight into the proposed RSD pre-training model as well as the effectiveness of our models for both deployment in ORs and generation of initial surgical phase annotations.

We hope this paper serves as a motivation for other works to address the important problem of developing surgical phase recognition approaches that are less reliant on annotated data. In future work, the effectiveness of other semi-supervised approaches, such as the application of generative adversarial networks or the use of synthetic data, can be explored. Additional CNN-LSTM pre-training approaches based on self-supervised learning can also prove effective. We would also like to carry out the proposed RSD pre-training using a much larger number of laparoscopic videos than the 80 used in this work, to study the benefit in performance that can be obtained.

Acknowledgements

This work was supported by French state funds managed within the Investissements d’Avenir program by BPI France (project CONDOR) and by the ANR (references ANR-11-LABX-0004 and ANR-10-IAHU-02). The authors would also like to acknowledge the support of NVIDIA with the donation of a GPU used in this research.

References

  • Agrawal et al. (2015) Agrawal, P., Carreira, J., Malik, J., 2015. Learning to see by moving, in: The IEEE International Conference on Computer Vision (ICCV).
  • Aksamentov et al. (2017) Aksamentov, I., Twinanda, A.P., Mutter, D., Marescaux, J., De Mathelin, M., Padoy, N., 2017. Deep neural networks predict remaining surgery duration from cholecystectomy videos, in: MICCAI, pp. 586–593.
  • Bengio et al. (2006) Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H., 2006. Greedy layer-wise training of deep networks, in: Proceedings of the 19th International Conference on Neural Information Processing Systems, MIT Press, Cambridge, MA, USA. pp. 153–160. URL: http://dl.acm.org/citation.cfm?id=2976456.2976476.
  • Bengio et al. (1994) Bengio, Y., Simard, P., Frasconi, P., 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5, 157–166. doi:10.1109/72.279181.
  • Blum et al. (2010) Blum, T., Feussner, H., Navab, N., 2010. Modeling and segmentation of surgical workflow from laparoscopic video, in: Proceedings of the 13th International Conference on Medical Image Computing and Computer-assisted Intervention: Part III, Springer-Verlag, Berlin, Heidelberg. pp. 400–407. URL: http://dl.acm.org/citation.cfm?id=1926877.1926929.
  • Bodenstedt et al. (2017) Bodenstedt, S., Wagner, M., Katic, D., Mietkowski, P., Mayer, B.F.B., Kenngott, H., Müller-Stich, B.P., Dillmann, R., Speidel, S., 2017. Unsupervised temporal context learning using convolutional neural networks for laparoscopic workflow analysis. CoRR abs/1702.03684. URL: http://arxiv.org/abs/1702.03684.
  • Cadène et al. (2016) Cadène, R., Robert, T., Thome, N., Cord, M., 2016. M2CAI workflow challenge: Convolutional neural networks with time smoothing and hidden Markov model for video frames classification. CoRR abs/1610.05541. URL: http://arxiv.org/abs/1610.05541, arXiv:1610.05541.
  • Charrière et al. (2017) Charrière, K., Quellec, G., Lamard, M., Martiano, D., Cazuguel, G., Coatrieux, G., Cochener, B., 2017. Real-time analysis of cataract surgery videos using statistical models. Multimedia Tools and Applications 76, 22473--22491.
  • Cho et al. (2014) Cho, K., van Merriënboer, B., Gülçehre, Ç., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y., 2014. Learning phrase representations using RNN encoder--decoder for statistical machine translation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Doha, Qatar. pp. 1724--1734. URL: http://www.aclweb.org/anthology/D14-1179.
  • Dergachyova et al. (2016) Dergachyova, O., Bouget, D., Huaulmé, A., Morandi, X., Jannin, P., 2016. Automatic data-driven real-time segmentation and recognition of surgical workflow. International Journal of Computer Assisted Radiology and Surgery 11, 1081--1089. URL: https://doi.org/10.1007/s11548-016-1371-x, doi:10.1007/s11548-016-1371-x.
  • Doersch et al. (2015) Doersch, C., Gupta, A., Efros, A.A., 2015. Unsupervised visual representation learning by context prediction, in: International Conference on Computer Vision (ICCV).
  • Doersch and Zisserman (2017) Doersch, C., Zisserman, A., 2017. Multi-task self-supervised visual learning, in: International Conference on Computer Vision.
  • Donahue et al. (2017) Donahue, J., Hendricks, L.A., Rohrbach, M., Venugopalan, S., Guadarrama, S., Saenko, K., Darrell, T., 2017. Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 677--691. doi:10.1109/TPAMI.2016.2599174.
  • Dosovitskiy et al. (2014) Dosovitskiy, A., Springenberg, J.T., Riedmiller, M., Brox, T., 2014. Discriminative unsupervised feature learning with convolutional neural networks, in: NIPS.
  • Fernando et al. (2017) Fernando, B., Bilen, H., Gavves, E., Gould, S., 2017. Self-supervised video representation learning with odd-one-out networks, in: IEEE International Conference on Computer Vision.
  • Forestier et al. (2013) Forestier, G., Lalys, F., Riffaud, L., Collins, D.L., Meixensberger, J., Wassef, S.N., Neumuth, T., Goulet, B., Jannin, P., 2013. Multi-site study of surgical practice in neurosurgery based on surgical process models. Journal of Biomedical Informatics 46, 822 -- 829. URL: http://www.sciencedirect.com/science/article/pii/S1532046413000816, doi:https://doi.org/10.1016/j.jbi.2013.06.006.
  • Forestier et al. (2015) Forestier, G., Riffaud, L., Jannin, P., 2015. Automatic phase prediction from low-level surgical activities. International Journal of Computer Assisted Radiology and Surgery 10, 833--841. URL: https://doi.org/10.1007/s11548-015-1195-0, doi:10.1007/s11548-015-1195-0.
  • Girshick (2015) Girshick, R., 2015. Fast R-CNN, in: International Conference on Computer Vision (ICCV).
  • Girshick et al. (2014) Girshick, R., Donahue, J., Darrell, T., Malik, J., 2014. Rich feature hierarchies for accurate object detection and semantic segmentation, in: Computer Vision and Pattern Recognition.
  • Hajj et al. (2017) Hajj, H.A., Lamard, M., Conze, P., Cochener, B., Quellec, G., 2017. Monitoring tool usage in cataract surgery videos using boosted convolutional and recurrent neural networks. CoRR abs/1710.01559. URL: http://arxiv.org/abs/1710.01559, arXiv:1710.01559.
  • Hinton and Salakhutdinov (2006) Hinton, G.E., Salakhutdinov, R.R., 2006. Reducing the dimensionality of data with neural networks. Science 313, 504--507. URL: http://science.sciencemag.org/content/313/5786/504, doi:10.1126/science.1127647.
  • Hochreiter and Schmidhuber (1997) Hochreiter, S., Schmidhuber, J., 1997. Long short-term memory. Neural Comput. 9, 1735--1780. URL: http://dx.doi.org/10.1162/neco.1997.9.8.1735, doi:10.1162/neco.1997.9.8.1735.
  • Jia et al. (2014) Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T., 2014. Caffe: Convolutional architecture for fast feature embedding, in: Proceedings of the 22nd ACM International Conference on Multimedia, ACM, New York, NY, USA. pp. 675--678. URL: http://doi.acm.org/10.1145/2647868.2654889, doi:10.1145/2647868.2654889.
  • Jin et al. (2016) Jin, Y., Dou, Q., Chen, H., Yu, L., Heng, P.A., 2016. EndoRCN: recurrent convolutional networks for recognition of surgical workflow in cholecystectomy procedure video. Technical Report. The Chinese University of Hong Kong.
  • Katić et al. (2014) Katić, D., Wekerle, A.L., Gärtner, F., Kenngott, H., Müller-Stich, B.P., Dillmann, R., Speidel, S., 2014. Knowledge-driven formalization of laparoscopic surgeries for rule-based intraoperative context-aware assistance, in: Stoyanov, D., Collins, D.L., Sakuma, I., Abolmaesumi, P., Jannin, P. (Eds.), Information Processing in Computer-Assisted Interventions, Springer International Publishing, Cham. pp. 158--167.
  • Kingma and Ba (2015) Kingma, D.P., Ba, J., 2015. Adam: A method for stochastic optimization, in: ICLR.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks, in: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, Curran Associates Inc., USA. pp. 1097--1105. URL: http://dl.acm.org/citation.cfm?id=2999134.2999257.
  • Lalys et al. (2013) Lalys, F., Bouget, D., Riffaud, L., Jannin, P., 2013. Automatic knowledge-based recognition of low-level tasks in ophthalmological procedures. International Journal of Computer Assisted Radiology and Surgery 8, 39--49. URL: https://doi.org/10.1007/s11548-012-0685-6, doi:10.1007/s11548-012-0685-6.
  • Lalys et al. (2012) Lalys, F., Riffaud, L., Bouget, D., Jannin, P., 2012. A framework for the recognition of high-level surgical tasks from video images for cataract surgeries. IEEE Transactions on Biomedical Engineering 59, 966--976. doi:10.1109/TBME.2011.2181168.
  • Larsson et al. (2016) Larsson, G., Maire, M., Shakhnarovich, G., 2016. Learning representations for automatic colorization, in: European Conference on Computer Vision (ECCV).
  • Lea et al. (2015) Lea, C., Hager, G.D., Vidal, R., 2015. An improved model for segmentation and recognition of fine-grained activities with application to surgical training tasks, in: 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 1123--1129. doi:10.1109/WACV.2015.154.
  • Lee et al. (2017) Lee, H.Y., Huang, J.B., Singh, M.K., Yang, M.H., 2017. Unsupervised representation learning by sorting sequence, in: IEEE International Conference on Computer Vision.
  • Lotter et al. (2017) Lotter, W., Kreiman, G., Cox, D., 2017. Deep predictive coding networks for video prediction and unsupervised learning, in: ICLR.
  • Misra et al. (2016) Misra, I., Zitnick, C.L., Hebert, M., 2016. Shuffle and learn: Unsupervised learning using temporal order verification, in: ECCV.
  • Mobahi et al. (2009) Mobahi, H., Collobert, R., Weston, J., 2009. Deep learning from temporal coherence in video, in: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, New York, NY, USA. pp. 737--744. URL: http://doi.acm.org/10.1145/1553374.1553469, doi:10.1145/1553374.1553469.
  • Noroozi and Favaro (2016) Noroozi, M., Favaro, P., 2016. Unsupervised learning of visual representations by solving jigsaw puzzles, in: ECCV.
  • Padoy et al. (2012) Padoy, N., Blum, T., Ahmadi, S.A., Feussner, H., Berger, M.O., Navab, N., 2012. Statistical modeling and recognition of surgical workflow. Medical Image Analysis 16, 632--641. URL: http://www.sciencedirect.com/science/article/pii/S1361841510001131, doi:https://doi.org/10.1016/j.media.2010.10.001. Computer Assisted Interventions.
  • Padoy et al. (2008) Padoy, N., Blum, T., Feussner, H., Berger, M.O., Navab, N., 2008. On-line recognition of surgical activity for monitoring in the operating room, in: Proceedings of the 20th National Conference on Innovative Applications of Artificial Intelligence - Volume 3, AAAI Press. pp. 1718--1724. URL: http://dl.acm.org/citation.cfm?id=1620138.1620155.
  • Paine et al. (2014) Paine, T.L., Khorrami, P., Han, W., Huang, T.S., 2014. An analysis of unsupervised pre-training in light of recent advances. CoRR abs/1412.6597. URL: http://arxiv.org/abs/1412.6597, arXiv:1412.6597.
  • Quellec et al. (2014) Quellec, G., Lamard, M., Cochener, B., Cazuguel, G., 2014. Real-time segmentation and recognition of surgical tasks in cataract surgery videos. IEEE Transactions on Medical Imaging 33, 2352--2360. doi:10.1109/TMI.2014.2340473.
  • Quellec et al. (2015) Quellec, G., Lamard, M., Cochener, B., Cazuguel, G., 2015. Real-time task recognition in cataract surgery videos using adaptive spatiotemporal polynomials. IEEE Transactions on Medical Imaging 34, 877--887. doi:10.1109/TMI.2014.2366726.
  • Shi et al. (2015) Shi, X., Chen, Z., Wang, H., Yeung, D., Wong, W., Woo, W., 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. CoRR abs/1506.04214. URL: http://arxiv.org/abs/1506.04214, arXiv:1506.04214.
  • Srivastava et al. (2015) Srivastava, N., Mansimov, E., Salakhudinov, R., 2015. Unsupervised learning of video representations using LSTMs, in: Bach, F., Blei, D. (Eds.), Proceedings of the 32nd International Conference on Machine Learning, PMLR, Lille, France. pp. 843--852. URL: http://proceedings.mlr.press/v37/srivastava15.html.
  • Twinanda (2017) Twinanda, A.P., 2017. Vision-based Approaches for Surgical Activity Recognition Using Laparoscopic and RGBD Videos. Ph.D. thesis. Université De Strasbourg. URL: https://tel.archives-ouvertes.fr/tel-01557522/document.
  • Twinanda et al. (2017) Twinanda, A.P., Shehata, S., Mutter, D., Marescaux, J., de Mathelin, M., Padoy, N., 2017. EndoNet: A deep architecture for recognition tasks on laparoscopic videos. IEEE Transactions on Medical Imaging 36, 86--97.
  • Twinanda et al. (2018) Twinanda, A.P., Yengera, G., Mutter, D., Marescaux, J., Padoy, N., 2018. RSDNet: Learning to predict remaining surgery duration from laparoscopic videos without manual annotations. CoRR abs/1802.03243. URL: https://arxiv.org/abs/1802.03243.
  • Wang and Gupta (2015) Wang, X., Gupta, A., 2015. Unsupervised learning of visual representations using videos, in: IEEE International Conference on Computer Vision (ICCV), pp. 2794--2802. doi:10.1109/ICCV.2015.320.
  • Zhang et al. (2016) Zhang, R., Isola, P., Efros, A.A., 2016. Colorful image colorization, in: ECCV.