Domain-adversarial neural networks to address the appearance variability of histopathology images

Abstract

Preparing and scanning histopathology slides consists of several steps, each with a multitude of parameters. The parameters can vary between pathology labs and within the same lab over time, resulting in significant variability of the tissue appearance that hampers the generalization of automatic image analysis methods. Typically, this is addressed with ad-hoc approaches such as staining normalization that aim to reduce the appearance variability. In this paper, we propose a systematic solution based on domain-adversarial neural networks. We hypothesize that removing the domain information from the model representation leads to better generalization. We tested our hypothesis for the problem of mitosis detection in breast cancer histopathology images and made a comparative analysis with two other approaches. We show that combining color augmentation with domain-adversarial training is a better alternative than standard approaches to improve the generalization of deep learning methods.

1 Introduction

Histopathology image analysis aims at automating tasks that are difficult, expensive and time-consuming for pathologists to perform. It is well known that the appearance of histopathology images varies considerably as a result of inconsistencies in the tissue preparation process. This hampers the generalization of image analysis methods, particularly to datasets from external pathology labs.

The appearance variability of histopathology images is commonly addressed by standardizing the images before analysis, for example by performing staining normalization [7]. These methods are efficient at standardizing colors while keeping structures intact, but are not equipped to handle other sources of variability, for instance due to differences in tissue fixation.

We hypothesize that a more general and efficient approach in the context of deep convolutional neural networks (CNNs) is to impose constraints that disregard non-relevant appearance variability with domain-adversarial training [3]. We trained CNN models for mitosis detection in breast cancer histopathology images on a limited amount of data from one pathology lab and evaluated them on a test dataset from different, external pathology labs. In addition to domain-adversarial training, we investigated two additional approaches (color augmentation and staining normalization) and made a comparative analysis. As a main contribution, we show that domain-adversarial neural networks are a new alternative for improving the generalization of deep learning methods for histopathology image analysis.

2 Materials and Methods

2.1 Datasets

This study was performed with the TUPAC16 dataset [1] that includes 73 breast cancer cases annotated for mitotic figures. The density of mitotic figures can be directly related to the tumor proliferation activity, and is an important biomarker for breast cancer prognostication.

The cases come from three different pathology labs (23, 25 and 25 cases per lab) and were scanned with two different whole-slide image scanners (the images from the second two pathology labs were scanned with the same scanner). All CNN models were trained with eight cases (458 mitoses) from the first pathology lab. Four cases were used as a validation set (92 mitoses). The remaining 12 cases (533 mitoses) from the first pathology lab were used as an internal test set¹ and the 50 cases from the two other pathology labs (469 mitoses) were used to evaluate inter-lab generalization performance.

2.2 The Underlying CNN Architecture

The most successful methods for mitosis detection in breast cancer histopathology images are based on convolutional neural networks (CNNs). These methods train models to classify image patches based on mitosis annotations resulting from the agreement of several expert pathologists [2].

The baseline architecture used in all experiments of this study is a 6-layer neural network with four convolutional and two fully connected layers; it takes an image patch as input and outputs the probability that a mitotic figure is present in the center of the patch. The first convolutional layer has kernels and the remaining three convolutional layers have kernels. All convolutional layers have 16 feature maps. The first fully connected layer has 64 neurons and the second serves as the output layer with softmax activation. Batch normalization, max-pooling and ReLU nonlinearities are used throughout. This architecture is similar to the one proposed in [2]. The neural network can be applied densely to images in order to produce a mitosis probability map for detection.
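
Dense application can be sketched as a stride-1 sliding window over the image; a real implementation would evaluate the CNN convolutionally, and the classifier below is a hypothetical stand-in:

```python
import numpy as np

def dense_probability_map(image, classify_patch, patch_size):
    """Apply a patch classifier at every valid location (stride 1) to
    produce a probability map. A real pipeline would do this
    convolutionally; this sliding-window loop is the illustrative
    equivalent. `classify_patch` is a hypothetical stand-in callable.
    """
    h, w = image.shape[:2]
    out_h, out_w = h - patch_size + 1, w - patch_size + 1
    prob_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            prob_map[i, j] = classify_patch(image[i:i + patch_size, j:j + patch_size])
    return prob_map
```

Local maxima of the resulting map can then be thresholded to obtain mitosis detections.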

2.3 Three Approaches to Handling Appearance Variability

Poor generalization occurs when there is a discrepancy between the distributions of the training and testing data. Increasing the amount of training data can help; however, annotating histology images is a time-consuming process that requires scarce expertise. More feasible solutions are needed, and we therefore chose to investigate three approaches.

One straightforward alternative is to artificially produce new training samples. Standard data augmentation methods include random spatial and intensity/color transformations (e.g. rotation, mirroring, scaling and color shifts). In this study, we use spatial data augmentation (arbitrary rotation, mirroring and scaling) during the training of all models. Since the most prominent source of variability in histopathology images is the staining color appearance, the contribution of color augmentation (CA) during training is evaluated separately.

The opposite strategy is to reduce the appearance variability of all the images as a pre-processing step before training and evaluating a CNN model. For hematoxylin and eosin (H&E) stained slides, staining normalization (SN) methods can be used [7].

A more direct strategy is to constrain the weights of the model to encourage learning of mitosis-related features that are consistent for any input image appearance. We observed that the features extracted by a baseline CNN mitosis classifier carry information about the origin of the input patch (see Section 3). We expect that better generalization can be achieved by eliminating this information from the learned representation with domain-adversarial training [3].

Finally, in addition to the three individual approaches, we also investigate all possible combinations.

Color Augmentation.

Color variability can be increased by applying random color transformations to the original training samples. We perform color augmentation by transforming every color channel I_c ← a_c · I_c + b_c, where the scaling factors a_c and offsets b_c are drawn from uniform distributions.
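
This per-channel affine jitter can be sketched as follows; the jitter ranges below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def color_augment(patch, rng, scale_range=(0.9, 1.1), shift_range=(-0.1, 0.1)):
    """Per-channel affine color jitter: I_c <- a_c * I_c + b_c.

    `patch` is an (H, W, 3) float array with values in [0, 1]. The jitter
    ranges are illustrative assumptions, not the values used in the paper.
    """
    a = rng.uniform(*scale_range, size=3)  # multiplicative factor per channel
    b = rng.uniform(*shift_range, size=3)  # additive shift per channel
    return np.clip(patch * a + b, 0.0, 1.0)

rng = np.random.default_rng(0)
patch = rng.random((32, 32, 3))
augmented = color_augment(patch, rng)
```

Because the factors are drawn independently per channel, the augmentation changes the hue balance of the patch rather than only its brightness.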

Illustration of the variability of histological images with 8 patches from different slides (first row), and their transformed version after staining normalization (second row). The third row illustrates the range of color variation induced by color augmentation.

Staining Normalization.

The RGB pixel intensities of H&E-stained histopathology images can be modeled with the Beer-Lambert law of light absorption: I_c = I_0 exp(−A_{c,*} · C). In this expression, c is the color-channel index, A is the matrix of absorbance coefficients and C are the stain concentrations [7]. We perform staining normalization with the method described in [6]. This is an unsupervised method that decomposes any image with estimates of its underlying A and C. The appearance variability over the dataset can then be reduced by recomposing all the images using a fixed matrix of absorbance coefficients.
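
In practice, normalization moves to optical-density space, estimates the concentrations by least squares, and recomposes the image with reference coefficients. A minimal sketch (the absorbance matrices and the `stain_normalize` helper are illustrative stand-ins, not the estimation method of [6]):

```python
import numpy as np

# Illustrative H&E absorbance matrices (rows: stains, columns: RGB).
# In practice A is estimated per image, e.g. with the method of [6].
A_SOURCE = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                     [0.07, 0.99, 0.11]])  # eosin
A_REF = np.array([[0.55, 0.76, 0.34],
                  [0.09, 0.95, 0.28]])
I0 = 255.0

def stain_normalize(rgb, A_src=A_SOURCE, A_ref=A_REF):
    """Recompose an RGB image with reference absorbance coefficients.

    Applies the Beer-Lambert model I_c = I0 * exp(-(A^T C)_c): convert
    pixels to optical density, estimate the stain concentrations C by
    least squares, then rebuild the image with A_ref.
    """
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / I0)  # optical density
    conc, *_ = np.linalg.lstsq(A_src.T, od.T, rcond=None)        # (2, N) concentrations
    od_norm = A_ref.T @ np.clip(conc, 0.0, None)                 # recomposed density
    out = I0 * np.exp(-od_norm) - 1.0
    return np.clip(out.T.reshape(rgb.shape), 0, 255).astype(np.uint8)
```

Normalizing with A_ref equal to A_src is (up to rounding) the identity, which is a convenient sanity check for such an implementation.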

Domain-Adversarial Neural Network.

Since every digital slide results from a unique combination of preparation parameters, we assume that all the image patches extracted from the same slide come from the same unique data distribution and thus constitute a domain. Domain-adversarial neural networks (DANNs) make it possible to learn a classification task while ensuring that the domain of origin of any sample of the training data cannot be recovered from the learned feature representation [3]. Such a domain-agnostic representation improves the cross-domain generalization of the trained models.

Any image patch extracted from the training data can be given two labels: its class label (assigned "1" if the patch is centered on a mitotic figure, "0" otherwise) and its domain label (a unique identifier of the slide from which the patch originates).

Figure 1: Architecture of the domain-adversarial neural network. The domain classification branch (red) bifurcates from the baseline network (blue) at the second and fourth layers.

The training of the mitosis classifier introduced in Section 2.2 is performed by minimizing the cross-entropy loss L_M(θ_M), where θ_M are the parameters of the network.

The DANN is made of a second CNN that takes as input the activations of the second and fourth layers of the mitosis classifier and predicts the domain identifier d. This network is constructed in parallel to the mitosis classifier, with the same corresponding architecture (Figure 1). Multiple bifurcations are used to make domain classification possible from different levels of abstraction and to improve training stability, as in [4]. The cross-entropy loss of the domain classifier is L_D(θ_M, θ_D), where θ_D are the parameters of the bifurcated network (note, however, that the loss is also a function of θ_M).

The weights of the whole network are optimized via gradient back-propagation during an iterative training process that consists of three successive update rules, with learning rate λ:

θ_M ← θ_M − λ ∇_{θ_M} L_M(θ_M)   (1)

θ_D ← θ_D − λ ∇_{θ_D} L_D(θ_M, θ_D)   (2)

θ_M ← θ_M + α λ ∇_{θ_M} L_D(θ_M, θ_D)   (3)

The update rules (Equation 1) and (Equation 3) work in an adversarial way: with (Equation 1), the parameters θ_M are updated for the mitosis detection task (by minimizing L_M), and with (Equation 3), the same parameters are updated to prevent the domain of origin from being recovered from the learned representation (by maximizing L_D). The parameter α controls the strength of the adversarial component.
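
The interplay of the three rules can be made concrete on a toy model: a shared linear feature extractor (parameters θ_M) feeds a mitosis head and a scalar domain head (parameter θ_D). Everything below (the model, shapes, step sizes and the `dann_step` helper) is an illustrative assumption, not the network of the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dann_step(theta_m, theta_d, x, y_class, y_domain, lam=0.01, alpha=0.5):
    """One iteration of the three update rules on a toy model.

    A shared feature h = theta_m . x feeds a mitosis head sigmoid(h) and a
    domain head sigmoid(theta_d * h). Model, shapes and step sizes are
    illustrative assumptions.
    """
    # (1) update theta_m to minimize the mitosis cross-entropy loss
    p_m = sigmoid(theta_m @ x)
    theta_m = theta_m - lam * (p_m - y_class) * x
    # (2) update theta_d to minimize the domain cross-entropy loss
    h = theta_m @ x
    p_d = sigmoid(theta_d * h)
    theta_d = theta_d - lam * (p_d - y_domain) * h
    # (3) update theta_m to *maximize* the domain loss (adversarial step,
    #     strength controlled by alpha)
    p_d = sigmoid(theta_d * h)
    theta_m = theta_m + alpha * lam * (p_d - y_domain) * theta_d * x
    return theta_m, theta_d

theta_m, theta_d = dann_step(np.array([0.5, -0.2]), 0.3, np.array([1.0, 2.0]),
                             y_class=1.0, y_domain=0.0)
```

Note that step (3) performs gradient *ascent* on the domain loss with respect to the shared parameters, which is the adversarial component; setting alpha to zero disables it.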

2.4 Evaluation

The performance of the mitosis detection models was evaluated with the F1-score, as described in [2]. We used the trained classifiers to produce dense mitosis probability maps for all test images. All local maxima above an operating point were considered detected mitotic figures. The operating point was determined as the threshold that maximizes the F1-score over the validation set.
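
Computing the F1-score for detection requires matching detected maxima to ground-truth mitoses, typically counting a detection as a true positive when it lies within a small distance of an unmatched annotation. A greedy sketch (the matching radius and the `f1_score` helper are illustrative; [2] defines the exact criterion):

```python
import numpy as np

def f1_score(detections, ground_truth, radius=30.0):
    """Greedy detection F1: each ground-truth mitosis matches at most one
    detection within `radius` pixels. `detections` and `ground_truth` are
    (N, 2) arrays of (row, col) coordinates; the radius is an illustrative
    assumption.
    """
    unmatched = list(range(len(ground_truth)))
    tp = 0
    for d in detections:
        if not unmatched:
            break
        dists = [np.hypot(*(d - ground_truth[g])) for g in unmatched]
        best = int(np.argmin(dists))
        if dists[best] <= radius:
            tp += 1
            unmatched.pop(best)
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Sweeping the operating point over the validation set and keeping the threshold with the highest such F1 gives the operating point used at test time.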

We used the t-distributed stochastic neighbor embedding (t-SNE) [5] method for low-dimensional feature embedding, to qualitatively compare the domain overlap of the learned feature representation for the different methods.

3 Experiments and Results

For every possible combination of the three approaches described in Section 2.3, we trained three convolutional neural networks with the same baseline architecture and the same training procedure, but with different random initialization seeds, to assess the consistency of the approaches.

Baseline Training.

Training was performed with stochastic gradient descent with momentum and with the following parameters: batch size of 64 (with balanced class distribution), learning rate of 0.01 with a decay factor of 0.9 every 5000 iterations, weight decay of 0.0005 and momentum of 0.9. The training was stopped after 40000 iterations.

Because the training set has a high class imbalance, hard negative mining was performed as previously described [2]. For this purpose, an initial classifier was trained with the baseline CNN model. A set of hard-negative patches was then obtained by probabilistically sampling the probability maps produced by this first classifier (excluding ground-truth locations). We used the same set of hard-negative samples for all experiments.
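
The probabilistic sampling step can be sketched as follows: probabilities in a disk around each ground-truth mitosis are zeroed, and negative patch centers are then drawn with likelihood proportional to the remaining probability mass. The exclusion radius and the `sample_hard_negatives` helper are illustrative assumptions:

```python
import numpy as np

def sample_hard_negatives(prob_map, gt_coords, n_samples, exclusion_radius=15, rng=None):
    """Probabilistically sample hard-negative patch centers.

    Pixels within `exclusion_radius` of a ground-truth mitosis are excluded;
    the rest are drawn with probability proportional to the classifier's
    output. Radius and sample count are illustrative assumptions.
    """
    if rng is None:
        rng = np.random.default_rng()
    weights = prob_map.astype(float).copy()
    rows, cols = np.indices(prob_map.shape)
    for r, c in gt_coords:  # zero out a disk around each annotation
        weights[(rows - r) ** 2 + (cols - c) ** 2 <= exclusion_radius ** 2] = 0.0
    flat = weights.ravel()
    idx = rng.choice(flat.size, size=n_samples, replace=False, p=flat / flat.sum())
    return np.stack(np.unravel_index(idx, prob_map.shape), axis=1)  # (n, 2) row/col
```

Sampling without replacement and proportionally to the probability map concentrates the negatives on the regions where the first classifier is most confidently wrong.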

Domain-Adversarial Training.

Every training iteration of the DANN models involves two passes. The first pass is performed in the same manner as the baseline training procedure and it involves the update (Equation 1). The second pass uses batches balanced over the domains of the training set, and is used for updates (Equation 2) and (Equation 3). Given that the training set includes eight domains, the batches for the second pass are therefore made of 8 random patches from each training case. The learning rate for these updates was fixed at 0.0025.

As remarked in [3], domain-adversarial training is an unstable process. We therefore use a cyclic scheduling of the parameter α involved in the adversarial update (Equation 3). This allows alternating between phases in which both branches learn their respective tasks without interfering, and phases in which domain-adversarial training occurs. In order to avoid getting stuck in local maxima and to ensure that domain information is not recovered over iterations in the main branch, the weights of the domain classifier are reinitialized at the beginning of every cycle.
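
One possible shape for such a cyclic schedule is sketched below; the cycle length, warm-up fraction and linear ramp are illustrative assumptions, as the exact schedule is not specified here:

```python
def cyclic_alpha(iteration, cycle_length=2000, warmup_fraction=0.5, alpha_max=1.0):
    """Cyclic weight for the adversarial update (illustrative shape and
    constants).

    Each cycle starts with a warm-up phase (alpha = 0, both branches learn
    their tasks undisturbed), then alpha ramps linearly to alpha_max. The
    domain classifier weights would be reinitialized at each cycle start.
    """
    t = (iteration % cycle_length) / cycle_length  # position within the cycle
    if t < warmup_fraction:
        return 0.0
    return alpha_max * (t - warmup_fraction) / (1.0 - warmup_fraction)
```

Evaluating this schedule at each training iteration yields the α used in update (Equation 3) for that step.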

Performance.

The F1-scores for all three approaches and their combinations are given in Table 1. t-SNE embeddings of the feature representations learned by the baseline model and the three investigated approaches are given in Figure 2. Although the t-SNE embeddings in the first row only show two domains for clarity, the same observations can be made for almost all pairs of domains.

Table 1: Mean and standard deviation of the F1-score over the three repeated experiments. Every column of the table represents the performance of one method (CA, SN or DANN) on the internal test set (ITS; from the same pathology lab) and the external test sets (ETS; from different pathology labs). The squares indicate the different investigated methods; multiple squares indicate a combination of methods.
Figure 2: t-SNE embeddings of 80 patches represented by the learned features at the fourth layer of the mitosis classifier. First row: patches are balanced across classes (mitosis: disk, non-mitosis: circle) and are equally sampled from two different slides of the training set (red/blue). Second row: patches of mitotic figures sampled from slides of the internal (orange) and external (green) test sets. Each column corresponds to one approach: (a) baseline, (b) SN, (c) CA, (d) DANN.

4 Discussion and Conclusions

On the internal test set, all methods and combinations perform well, in line with previously reported results [2]. The combination of color augmentation and domain-adversarial training has the best performance. The staining normalization method and its combinations with other methods have the worst performance, with F1-scores lower than the baseline method.

As with the internal test set, the best performance on the external test set is achieved by the combination of color augmentation and domain-adversarial training. On the external test set, all three investigated methods show improvement, since the baseline method has the worst performance.

The intra-lab t-SNE embeddings presented in the first row of Figure 2 show that the baseline model learns a feature representation that is informative of the domain, as evidenced by the well-defined clusters corresponding to the domains of the embedded image patches. In contrast, each of the three approaches produces some domain confusion in the model representation, since such domain clusters are not produced by t-SNE under the same conditions.

While staining normalization improves the generalization of the models to data from an external pathology lab, it clearly has an adverse effect when combined with other methods, compared to the corresponding combinations without it. A possible reason is that staining normalization reduces the variability of the training dataset to a point that makes overfitting more likely.

For both test datasets, the best individual method is color augmentation. The t-SNE embeddings in the second row of Figure 2 show that the models trained with CA produce a feature representation that is more independent of the lab of origin than the baseline, SN or DANN models. This is in line with the observation that the appearance variability in histopathology images is mostly manifested as staining variability.

The best performance for both datasets is achieved by the combination of color augmentation and domain-adversarial training. This complementary effect indicates the ability of domain-adversarial training to account for sources of variability other than color.

In conclusion, we investigated DANNs as an alternative to standard augmentation and normalization approaches and made a comparative analysis. The combination of color augmentation and DANNs had the best performance, confirming the relevance of domain-adversarial approaches in histopathology image analysis. This study is based on a single histopathology image analysis problem, and only one staining normalization method was investigated. These are limiting factors, and further confirmation of our conclusions is warranted.

Footnotes

  1. This test set is identical to the one used in the AMIDA13 challenge [9].

References

  1. Tumor proliferation assessment challenge 2016. http://tupac.tue-image.nl
  2. Cireşan, D.C., Giusti, A., Gambardella, L.M., Schmidhuber, J.: Mitosis detection in breast cancer histology images with deep neural networks. In: MICCAI 2013. pp. 411–418 (2013)
  3. Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. Journal of Machine Learning Research 17(59), 1–35 (2016)
  4. Kamnitsas, K., Baumgartner, C.F., Ledig, C., Newcombe, V.F.J., Simpson, J.P., Kane, A.D., Menon, D.K., Nori, A.V., Criminisi, A., Rueckert, D., Glocker, B.: Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In: IPMI, 2017 (2017)
  5. Maaten, L.v.d., Hinton, G.: Visualizing data using t-SNE. Journal of Machine Learning Research 9, 2579–2605 (2008)
  6. Macenko, M., Niethammer, M., Marron, J., Borland, D., Woosley, J.T., Guan, X., Schmitt, C., Thomas, N.E.: A method for normalizing histology slides for quantitative analysis. In: IEEE ISBI 2009. pp. 1107–1110 (2009)
  7. Ruifrok, A.C., Johnston, D.A., et al.: Quantification of histochemical staining by color deconvolution. Anal. Quant. Cytol. 23(4), 291–299 (2001)
  8. Veta, M., van Diest, P.J., Jiwa, M., Al-Janabi, S., Pluim, J.P.: Mitosis counting in breast cancer: Object-level interobserver agreement and comparison to an automatic method. PloS one 11(8), e0161286 (2016)
  9. Veta, M., Van Diest, P.J., Willems, S.M., Wang, H., Madabhushi, A., Cruz-Roa, A., Gonzalez, F., Larsen, A.B., Vestergaard, J.S., Dahl, A.B., et al.: Assessment of algorithms for mitosis detection in breast cancer histopathology images. Medical Image Analysis 20(1), 237–248 (2015)