Cost-efficient segmentation of electron microscopy images using active learning

Joris Roels (ORCID 0000-0002-2058-8134)1,2 and Yvan Saeys (ORCID 0000-0002-0415-1506)1,2

1 Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium
2 Inflammation Research Center, Flanders Institute for Biotechnology, Ghent, Belgium
{jorisb.roels, yvan.saeys}@ugent.be

Acknowledgments: We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. Y.S. is a Marylou Ingram Scholar.
Abstract

Over the last decade, electron microscopy has improved to the point where generating high-quality, gigavoxel-sized datasets requires only a few hours. Automated image analysis, particularly image segmentation, however, has not evolved at the same pace. Even though state-of-the-art methods such as U-Net and DeepLab have improved segmentation performance substantially, the required amount of labels remains prohibitively expensive. Active learning is the subfield of machine learning that aims to mitigate this burden by selecting the samples that require labeling in a smart way. Many techniques have been proposed, particularly for image classification, to increase the steepness of learning curves. In this work, we extend these techniques to deep CNN-based image segmentation. Our experiments on three different electron microscopy datasets show that active learning can improve segmentation quality by 10 to 15% in terms of Jaccard score compared to standard randomized sampling.

Keywords:
Electron microscopy · Image segmentation · Active learning.

1 Introduction

Semantic image segmentation, the task of assigning pixel-level object labels to an image, is fundamental in many applications and one of the most challenging problems in generic computer vision. This is particularly true in biomedical imaging such as electron microscopy (EM), where annotated data is scarce and the images contain high-resolution (approximately 5 nm) ultrastructural content. Nevertheless, deep learning has driven significant improvements in this particular research domain over the last years [6, 11, 8].

Despite the impressive advances made so far, state-of-the-art techniques mostly rely on large annotated datasets. This is an impractical assumption that is only satisfied for particular use cases such as neuron segmentation [2]. For segmentation of other classes, research often falls back on manual segmentation or interactive approaches that rely on shallow segmentation algorithms [14, 3, 1], which is either costly or sacrifices performance.

This work focuses on active learning, a subdomain of machine learning that aims to minimize supervision without sacrificing predictive accuracy. This is achieved by iteratively querying a batch of samples to a label-providing oracle, adding them to the training set and retraining the predictor. The challenge is to come up with a smart selection criterion that queries the most useful samples and maximizes the steepness of the learning curve [13].

In this work, we apply state-of-the-art active learning approaches, commonly used for classification, to image segmentation. In particular, we illustrate on three EM datasets that the number of annotated samples can be reduced to a few hundred while obtaining close to fully supervised performance. We start by formally defining the active learning problem in the context of image segmentation in Section 2. In Section 3, we give an overview of commonly used, recent active learning approaches in classification [13] and show how these techniques can be used for segmentation. This is followed by experimental results and a discussion in Section 4. Lastly, the paper is concluded in Section 5.

2 Notations

We consider the task of image segmentation: given an image $x$ with $N$ pixels, we aim to compute a pixel-level labeling $y \in \mathcal{Y}^N$, where $\mathcal{Y} = \{0, \dots, C-1\}$ is the label space and $C$ is the number of classes. We particularly focus on the case of binary segmentation, i.e. $C = 2$. Let $p_\theta(y_i \mid x)$ be the class probability distribution of pixel $i$ produced by a segmentation algorithm with parameters $\theta$ (for example, an encoder-decoder network such as U-Net [11]).

Consider a large pool of $n$ i.i.d. sampled data points $\{x_j\}_{j \in [n]}$, where $[n] = \{1, \dots, n\}$, and an initial pool of $m$ randomly chosen, distinct data points indexed by $s^0 \subset [n]$ with $|s^0| = m$. An active learning algorithm initially only has access to the samples $\{x_j\}_{j \in [n]}$ and the initial labels $\{y_j\}_{j \in s^0}$, and iteratively extends the currently labeled pool $s^t$ by querying $k$ samples from the unlabeled set $[n] \setminus s^t$ to an oracle. After iteration $t$, the predictor is retrained with the available samples and labels $\{(x_j, y_j)\}_{j \in s^t}$, thereby improving the segmentation quality. Note that, without loss of generality, the active learning approaches below are described for $k = 1$, as we can also query $k$ samples over $k$ iterations without retraining. The complete active learning workflow is shown in Figure 1.

Figure 1: Iterative active learning workflow for segmentation. A predictor network predicts the class probability distributions of the unlabeled samples. These outputs are fed into a selection criterion that picks the ‘most informative’ samples. The selected samples are labeled by an oracle, and the extended labeled pool is used to retrain the predictor.
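A minimal sketch of this loop is given below. The helpers `train`, `predict_probs` and the pluggable `criterion` are hypothetical stand-ins (not the original implementation), and the criterion is assumed to return "higher is more informative" scores.

```python
import numpy as np

def active_learning_loop(x_pool, y_oracle, s0, criterion, train, predict_probs,
                         n_iterations, k):
    """x_pool: array of samples; y_oracle: label array that is only read for queried
    indices (it plays the role of the oracle); s0: initial labeled indices;
    criterion: maps predicted probability maps to one informativeness score per sample."""
    labeled = list(s0)
    model = train(None, x_pool[labeled], y_oracle[labeled])       # initial training
    for t in range(n_iterations):
        unlabeled = np.setdiff1d(np.arange(len(x_pool)), labeled)
        probs = predict_probs(model, x_pool[unlabeled])           # per-pixel class probabilities
        scores = criterion(probs)                                 # higher = more informative
        query = unlabeled[np.argsort(-scores)[:k]]                # k most informative samples
        labeled.extend(query.tolist())                            # oracle provides their labels
        model = train(model, x_pool[labeled], y_oracle[labeled])  # retrain / fine-tune
    return model, labeled
```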

3 Active learning

In the following sections, we discuss five well-known, recent active learning approaches for classification: maximum entropy selection [9, 10], least confidence selection [4], Bayesian active learning disagreement [7], k-means sampling [5] and core set active learning [12]. Furthermore, we show how these techniques can be applied to image segmentation.

3.1 Maximum entropy sampling

Maximum entropy is a straightforward selection criterion that aims to select samples for which the predictions are uncertain [9, 10]. Formally speaking, we adjust the selection criterion to a pixel-wise entropy calculation as follows:

$$x^{*} = \underset{j \in [n] \setminus s^{t}}{\arg\max} \; \sum_{i=1}^{N} \sum_{c \in \mathcal{Y}} - p_{\theta}(y_i = c \mid x_j) \log p_{\theta}(y_i = c \mid x_j) \qquad (1)$$

In other words, the entropy is calculated for each pixel and accumulated over the image. Note that a high entropy is obtained when $p_\theta(y_i = c \mid x_j) \approx \frac{1}{C}$ for all classes $c$, i.e. exactly when there is no real consensus on the predicted class (high uncertainty).
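As an illustration, here is a minimal NumPy sketch of this criterion, assuming the network outputs softmax probabilities of shape (batch, classes, height, width); the function name and array layout are our own assumptions.

```python
import numpy as np

def max_entropy_scores(probs, eps=1e-12):
    """probs: (B, C, H, W) softmax output of the segmentation network.
    Returns one score per sample: the pixel-wise entropy summed over all pixels."""
    pixel_entropy = -np.sum(probs * np.log(probs + eps), axis=1)  # (B, H, W)
    return pixel_entropy.sum(axis=(1, 2))                         # accumulate over pixels

# Querying then amounts to picking the highest-scoring unlabeled samples, e.g.:
# query = np.argsort(-max_entropy_scores(probs))[:k]
```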

3.2 Least confidence sampling

Similar to maximum entropy sampling, the least confidence criterion selects samples for which the predictions are uncertain:

$$x^{*} = \underset{j \in [n] \setminus s^{t}}{\arg\min} \; \sum_{i=1}^{N} \max_{c \in \mathcal{Y}} p_{\theta}(y_i = c \mid x_j) \qquad (2)$$

As the name suggests, the least confidence criterion considers the probability of the predicted class, i.e. the maximum class probability of each pixel. Whenever this probability is small, the predictor is not confident about its decision. For image segmentation, we accumulate these maximum probabilities over all pixels and select the samples with the lowest cumulative confidence.
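A matching sketch for this criterion, under the same assumed (batch, classes, height, width) layout:

```python
import numpy as np

def least_confidence_scores(probs):
    """probs: (B, C, H, W) softmax output. For each pixel we take the probability of the
    predicted (most likely) class and accumulate it; a low total means an uncertain sample."""
    pixel_confidence = probs.max(axis=1)        # (B, H, W) probability of the predicted class
    return pixel_confidence.sum(axis=(1, 2))    # accumulate over pixels

# query = np.argsort(least_confidence_scores(probs))[:k]   # ascending: least confident first
```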

3.3 Bayesian active learning disagreement

The Bayesian active learning disagreement (BALD) approach [7] is specifically designed for convolutional neural networks (CNNs). It makes use of Bayesian CNNs to cope with the small amounts of training data that are usually available in active learning workflows. A Bayesian CNN assumes a prior probability distribution $p(\theta)$ over the model parameters $\theta$. The uncertainty in the weights then induces prediction uncertainty by marginalising over the (approximate) posterior [7]:

$$p(y_i = c \mid x, \mathcal{D}_{\text{train}}) = \int p(y_i = c \mid x, \theta)\, p(\theta \mid \mathcal{D}_{\text{train}})\, d\theta \approx \frac{1}{T} \sum_{\tau=1}^{T} p(y_i = c \mid x, \hat{\theta}_{\tau}), \quad \hat{\theta}_{\tau} \sim q(\theta) \qquad (3)$$

where $q(\theta)$ is the dropout distribution, which approximates the posterior distribution $p(\theta \mid \mathcal{D}_{\text{train}})$ over the model parameters. In other words, a CNN is trained with dropout, and inference is performed with dropout kept active. This induces stochasticity in the predictions that can be plugged into existing criteria such as maximum entropy (Equation (1)).
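A minimal PyTorch sketch of this Monte Carlo dropout inference, assuming a segmentation network that contains dropout layers; the averaged probability maps can then be fed into a criterion such as Equation (1). The helper name and the number of forward passes are our own choices.

```python
import torch

def mc_dropout_probs(model, x, n_passes=20):
    """Keep dropout active at test time and average the softmax outputs of several
    stochastic forward passes. x: (B, 1, H, W) input patches; returns (B, C, H, W)."""
    model.eval()
    for m in model.modules():                     # re-enable only the dropout layers
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_passes)])
    return probs.mean(dim=0)                      # approximate posterior predictive (Eq. (3))
```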

3.4 K-means sampling

Uncertainty-based approaches typically sample close to the decision boundary of the classifier. This introduces an implicit bias that does not allow for data exploration. Most explorative approaches that aim to solve this problem transform the input to a more compact and efficient representation (e.g. the feature representation before the fully connected stage in a classification CNN). In our segmentation setting, we use the bottleneck representation of the U-Net for this purpose. The k-means sampling approach [5] then finds k clusters in this embedding using k-means clustering, and selects, within each cluster, the sample that is closest to the centroid.
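A possible implementation of this selection step, sketched with scikit-learn's KMeans on precomputed bottleneck embeddings; the (N, D) embedding layout and the function name are our own assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_query(embeddings, k):
    """embeddings: (N, D) array, e.g. pooled U-Net bottleneck features of the unlabeled
    samples. Returns the indices of the k samples closest to the k cluster centroids."""
    km = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    query = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        query.append(members[np.argmin(dists)])   # sample closest to this centroid
    return np.asarray(query)
```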

3.5 Core set active learning

The core set approach [12] is a recently proposed active learning approach for CNNs that is based on neither uncertainty nor exploratory sampling. Similar to k-means sampling, samples are selected from an embedding (again, the U-Net bottleneck representation in our segmentation setting) in such a way that a model trained on the selected samples remains competitive on the remaining samples. To obtain such competitive samples, this approach aims to minimize the so-called core set loss: the difference between the average empirical loss over the set of labeled samples (i.e. $s^t$) and the average empirical loss over the entire dataset, including unlabeled points (i.e. $[n]$).
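In practice this objective is commonly approximated with a greedy k-center selection over the embedding space; the sketch below shows that greedy approximation (not necessarily the exact solver used in [12]), with hypothetical names and a non-empty initial labeled set assumed.

```python
import numpy as np

def k_center_greedy(embeddings, labeled_idx, k):
    """Greedy k-center selection: repeatedly pick the unlabeled sample furthest from the
    current selection. embeddings: (N, D) bottleneck features; labeled_idx: non-empty list
    of already labeled indices. Returns the indices of the k queried samples."""
    selected = list(labeled_idx)
    # distance of every sample to its nearest already-selected sample
    min_dist = np.min(
        np.linalg.norm(embeddings[:, None, :] - embeddings[selected][None, :, :], axis=2),
        axis=1)
    query = []
    for _ in range(k):
        idx = int(np.argmax(min_dist))            # furthest remaining point
        query.append(idx)
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(embeddings - embeddings[idx], axis=1))
    return np.asarray(query)
```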

4 Experiments & discussion

Figure 2: Learning curves of the discussed active learning approaches on the three datasets: (a) EPFL, (b) VNC, (c) MiRA.

Three public EM datasets were used to validate our approach:

  • The EPFL dataset (data available at https://cvlab.epfl.ch/data/data-em/) represents a section taken from the CA1 hippocampus region of the brain. Two subvolumes were manually labeled by experts for mitochondria. The data was acquired with a focused ion beam scanning EM and the resolution of each voxel is approximately 5 nm.

  • The VNC dataset (data available at https://github.com/unidesigner/groundtruth-drosophila-vnc/) represents two sections taken from the Drosophila melanogaster third instar larva ventral nerve cord. One stack was manually labeled by experts for mitochondria. The data was acquired with a transmission EM at nanometer-scale resolution.

  • The MiRA dataset [15] (data available at http://95.163.198.142/MiRA/mitochondria31/) represents a section taken from the mouse cortex. The complete volume was manually labeled by experts for mitochondria. The data was acquired with an automated tape-collecting ultramicrotome scanning EM at nanometer-scale resolution.

To properly validate the discussed approaches, we split the available labeled data into a training and a testing set. In the case of a single labeled volume (VNC and MiRA), we split the volume in half along one axis. A smaller U-Net (with 4 times fewer feature maps) was initially trained for 500 epochs on a small set of randomly selected samples from the training volume. Next, we consider a pool of unlabeled samples in the training data that can be queried. In each iteration, a batch of samples is selected from this pool based on one of the discussed selection criteria and added to the labeled set, after which the segmentation network is fine-tuned for another 200 epochs. This procedure is repeated until the training set reaches a maximum size of 500 samples. We validate the segmentation performance with the well-known Jaccard score:

$$J(y, \hat{y}) = \frac{|y \cap \hat{y}|}{|y \cup \hat{y}|} \qquad (4)$$

where $y$ and $\hat{y}$ denote the sets of foreground pixels in the ground truth and the predicted segmentation, respectively.

This segmentation metric is also known as the intersection-over-union (IoU).
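For reference, a minimal sketch of how this metric can be computed for binary masks (our own helper, not the paper's evaluation code):

```python
import numpy as np

def jaccard(y_true, y_pred):
    """Jaccard score (IoU) between two binary segmentation masks of equal shape."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return intersection / union if union > 0 else 1.0
```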

Figure 3: Segmentation results obtained from an actively learned U-Net with 120 samples of the EPFL dataset based on random, k-means and maximum entropy sampling, compared to the fully supervised approach. Panels: (a) input, (b) ground truth, (c) full supervision, (d) random sampling, (e) k-means sampling, (f) maximum entropy sampling. Jaccard scores are indicated between brackets in the figure.

The resulting performance curves of the discussed approaches on the three datasets are shown in Figure 2. We additionally show the performance obtained with full supervision (i.e. all labels are available during training), which is the maximum achievable segmentation performance. Compared to the random sampling baseline, the maximum entropy, least confidence and BALD approaches perform significantly better: they obtain a 10 to 15% performance increase for the same amount of available labels on all datasets. The recently proposed core set approach performs similarly to, or slightly better than, the baseline. We expect that this method can be improved by considering alternative embeddings. Lastly, we see that k-means sampling performs significantly worse than random sampling. Even though this could also be an embedding problem, as with the core set approach, we believe that exploratory sampling alone does not allow the predictor to learn from challenging samples, which are usually outliers. We expect that a hybrid approach based on both exploration and uncertainty might lead to better results, and consider this future work.

Figure 4: Illustration of the samples selected from the VNC dataset over time in the active learning process. The top row shows the pixel-wise predictions for the samples selected at iterations 1 through 4. The bottom row shows the pixel-wise least confidence score on the corresponding images.

Figure 3 shows qualitative segmentation results on the EPFL dataset. In particular, we show results of the random, k-means and maximum entropy sampling methods using 120 samples, and compare these to the fully supervised approach. The maximum entropy sampling technique outperforms the others by a large margin and significantly closes the gap towards fully supervised learning.

Lastly, we are interested in what type of samples the active learning approaches select for training. Figure 4 shows the four highest prioritized samples of the VNC dataset, according to the least confidence criterion, selected in the first four iterations. The top row illustrates the probability predictions of the network at that point in time, whereas the bottom row shows the pixel-wise uncertainty of each sample (i.e. the maximum term in Equation (2)). Note that the initial predictions are of poor quality, as the network was only trained on 20 samples at that point. Moreover, the uncertainty is high in ambiguous regions, but it can be low in regions where the network is confidently wrong. The latter is a common issue in active learning, related to the exploration versus uncertainty trade-off. However, over time we see that the network performance improves and more challenging samples are queried to the oracle.

5 Conclusion

Image segmentation is one of the most challenging computer vision tasks, particularly for biomedical data such as electron microscopy, where annotations are scarce. To be practically usable and scalable, image segmentation algorithms such as U-Net need to be able to cope with smaller amounts of annotated data. In this work, we propose to employ recent active learning approaches to minimize the annotation effort required for training segmentation networks. Specifically, several of these approaches (e.g. maximum entropy and least confidence sampling) obtain the same performance as the random sampling baseline while requiring 4 times fewer annotations. In future work, we will further minimize labeling efforts by combining this active learning paradigm with weakly supervised approaches (i.e. using partially annotated data).

References

  • [1] I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. S. Seung (2017) Trainable Weka Segmentation: A machine learning tool for microscopy pixel classification. Bioinformatics.
  • [2] I. Arganda-Carreras, S. C. Turaga, D. R. Berger, D. C. Ciresan, A. Giusti, L. M. Gambardella, J. Schmidhuber, D. Laptev, S. Dwivedi, J. M. Buhmann, T. Liu, M. Seyedhosseini, T. Tasdizen, L. Kamentsky, R. Burget, V. Uher, X. Tan, C. Sun, T. D. Pham, E. Bas, M. G. Uzunbas, A. Cardona, J. Schindelin, and H. S. Seung (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Frontiers in Neuroanatomy 9.
  • [3] I. Belevich, M. Joensuu, D. Kumar, H. Vihinen, and E. Jokitalo (2016) Microscopy Image Browser: A platform for segmentation and analysis of multidimensional datasets. PLoS Biology 14 (1).
  • [4] K. Brinker (2003) Incorporating diversity in active learning with support vector machines. In Proceedings of the Twentieth International Conference on Machine Learning.
  • [5] Z. Bodo, Z. Minier, and L. Csato (2011) Active learning with clustering. In Active Learning and Experimental Design @ AISTATS.
  • [6] D. C. Ciresan, A. Giusti, L. Gambardella, and J. Schmidhuber (2012) Deep neural networks segment neuronal membranes in electron microscopy images. In NIPS, pp. 1–9.
  • [7] Y. Gal, R. Islam, and Z. Ghahramani (2017) Deep Bayesian active learning with image data. In International Conference on Machine Learning.
  • [8] M. Januszewski, J. Kornfeld, P. H. Li, A. Pope, T. Blakely, L. Lindsey, J. Maitin-Shepard, M. Tyka, W. Denk, and V. Jain (2018) High-precision automated reconstruction of neurons with flood-filling networks. Nature Methods.
  • [9] A. J. Joshi, F. Porikli, and N. Papanikolopoulos (2009) Multi-class active learning for image classification. In IEEE Conference on Computer Vision and Pattern Recognition.
  • [10] X. Li and Y. Guo (2013) Adaptive active learning for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 859–866.
  • [11] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234–241.
  • [12] O. Sener and S. Savarese (2018) Active learning for convolutional neural networks: A core-set approach. In International Conference on Learning Representations.
  • [13] B. Settles (2010) Active learning literature survey. Technical report, University of Wisconsin.
  • [14] C. Sommer, C. Straehle, U. Kothe, and F. A. Hamprecht (2011) Ilastik: Interactive learning and segmentation toolkit. In IEEE International Symposium on Biomedical Imaging, pp. 230–233.
  • [15] C. Xiao, X. Chen, W. Li, L. Li, L. Wang, Q. Xie, and H. Han (2018) Automatic mitochondria segmentation for EM data using a 3D supervised convolutional network. Frontiers in Neuroanatomy.