Semantic Segmentation of Colon Glands with Deep Convolutional Neural Networks and Total Variation Segmentation

Philipp Kainz, Michael Pfeiffer (Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland), Martin Urschler
Abstract

Segmentation of histopathology sections is a ubiquitous requirement in digital pathology, and due to the large variability of biological tissue, machine learning techniques have shown superior performance over standard image processing methods. As part of the GlaS@MICCAI2015 colon gland segmentation challenge, we present a learning-based algorithm to segment glands in tissue of benign and malignant colorectal cancer. Images are preprocessed according to the Hematoxylin-Eosin staining protocol and two deep convolutional neural networks (CNN) are trained as pixel classifiers. The CNN predictions are then regularized using a figure-ground segmentation based on weighted total variation to produce the final segmentation result. On the two challenge test sets, our approach additionally achieves high accuracy in classifying tissue as benign or malignant, making use of the inherent capability of our system to distinguish between the two tissue types.

1 Introduction

The variability of glandular structures in biological tissue poses a challenge to the automated analysis of histopathology slides. Automated segmentation has become a key requirement for quantitative morphology assessment and for supporting cancer grading. Even when considering non-pathological cases only, segmentation algorithms must already cope with significant variability in the shape, size, location, texture, and staining of glands. Moreover, in pathological cases gland objects can differ tremendously from non-pathological and benign glands, which further exacerbates the search for a general solution to the segmentation problem.

Previous work on gland segmentation in colon tissue has used graphical models [1, 2, 3] or textural features [4]. Others have worked on segmentation in prostatic cancer tissue using an integrated low-level, high-level, and contextual segmentation model [5], probabilistic Markov models [6], k-means clustering and region growing [7], or the spatial association of nuclei to the gland lumen [8, 9]. The reader is referred to the work of Sirinukunwattana et al. [3] for a more detailed overview of work related to glandular structure segmentation. Deep learning methods, especially convolutional neural networks (CNNs) [10], have found applications in biomedical image analysis for various tasks: semantic segmentation [11], mitosis detection [12] and classification [13], and blood cell counting [14].

Fig. 1: Samples of (a) benign and (b) malignant colorectal cancer sections in the Warwick-QU dataset. Ground truth labels in each image are available for each pixel and overlaid in different colors for individual objects.

In this work, we propose a learning-based strategy to semantically segment glands in the Warwick-QU dataset, presented at the GlaS@MICCAI2015 challenge (http://www2.warwick.ac.uk/fac/sci/dcs/research/combi/research/bic/glascontest/). It contains 161 annotated images of benign and malignant colorectal adenocarcinoma, stained with Hematoxylin-Eosin (H&E) and digitized at a single magnification. Fig. 1 shows some example images and their ground truth annotation: in each image, all pixels of an individual object carry the same label, illustrated by the different colors. To the challenge participants, information on whether an image shows benign or malignant tissue is only available for the training dataset. Three non-overlapping datasets were released during the contest: a training set of 85 annotated images and two test sets, A and B, each containing benign and malignant cases with individually annotated glands.

The contributions of our work are twofold: (i) we present a novel deep learning scheme to generate classifier predictions for malignant and benign object and background pixels accompanied by a dedicated gland-separating refinement classifier that is able to distinguish touching objects, which pose a challenge for later segmentation. (ii) We use these classification results as the input for a simple, yet effective, globally optimal figure-ground segmentation approach based on a convex geodesic active contour formulation that regularizes the classifier predictions according to a minimal contour-length principle. Both technological contributions are described in section 2, while the subsequent sections show and discuss the results of our novel approach applied to the datasets of the GlaS@MICCAI2015 challenge.

2 Methods

We present a segmentation method for Hematoxylin-Eosin (H&E) stained histopathological sections that proceeds in three steps: The raw RGB images are preprocessed to extract a robust representation of the tissue structure. Subsequently, two classifiers are trained to predict glands (Object-Net) and gland-separating structures (Separator-Net) from the image. Finally, the outputs of the classifiers are combined and a figure-ground segmentation based on weighted total variation is used to produce the segmentation result.

2.1 Preprocessing H&E Slides

Prior to classification, the RGB images are preprocessed as shown in Fig. 2. A standard color deconvolution [15] is performed for the specific H&E staining used in the provided dataset (we used the H&E 2 setting of the implementation by G. Landini, available in Fiji [16]). It separates tissue components according to their staining, emphasizes the structure, and inherently performs data whitening. The first (red) channel of the deconvolved RGB image contains most of the tissue structure information, so the other channels can be omitted. To account for different staining contrasts and lighting conditions during image acquisition, contrast limited adaptive histogram equalization (CLAHE) [17] is applied.

Fig. 2: Preprocessing of the RGB images. Color deconvolution [15] separates the Hematoxylin-Eosin stained tissue components. The red channel of the deconvolved image is processed by CLAHE [17] and taken as input to the pixel classifiers.
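For illustration, a rough equivalent of this preprocessing step can be written with scikit-image. Note that this is only a sketch: rgb2hed uses the generic Ruifrok-Johnston stain vectors [15] rather than Fiji's H&E 2 preset, and the CLAHE clip limit is an assumed value.

```python
from skimage.color import rgb2hed
from skimage.exposure import equalize_adapthist, rescale_intensity

def preprocess_he(rgb_image):
    """Color deconvolution followed by CLAHE, returning a single structure channel in [0, 1]."""
    hed = rgb2hed(rgb_image)                                      # separate the H&E stain components
    hema = rescale_intensity(hed[:, :, 0], out_range=(0.0, 1.0))  # structure-rich deconvolved channel
    return equalize_adapthist(hema, clip_limit=0.01)              # contrast-limited adaptive hist. equalization
```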

2.2 Learning Pixel Classifiers

Given the large variability of both benign and malignant tissue in the Warwick-QU dataset, we opted for CNNs because of their recently demonstrated, convincing performance in pixelwise classification of histopathology images [12] and their ability to learn a rich set of features directly from the images.

(a) Object-Net architecture
(b) Separator-Net architecture
Fig. 3: CNN classifier architectures of (a) the Object-Net and (b) the Separator-Net. Both architectures have seven layers and are identical in the number of convolutional (Conv), max-pooling (Sub), and fully connected (FC) layers, but differ in the convolution kernel sizes, the size and number of the feature maps, and the number of output units. Each CNN predicts the probability distribution over the labels of the center pixel (marked as a red cross in the input patch).

The general architecture of both CNNs is motivated by the classical LeNet-5 architecture [18] and consists of seven layers: four convolutional layers (Conv) for feature learning and three fully connected (FC) layers acting as feature classifier, see Fig. 3. The rectified linear unit (ReLU) nonlinearity, $\max(0, x)$, is used as the activation function throughout all layers of the networks. All convolutional layers consist of a set of learnable square 2D filters applied with a pixel stride of 1, followed by the ReLU activation. Subsampling (max-pooling) layers (Sub), accounting for translation invariance, are used after the first three convolutional layers and are counted as part of the respective convolutional layer. The final pixelwise classification of an input image is obtained by sliding a window over the image and classifying the center pixel of each window.
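The sliding-window inference can be sketched as follows. This is an illustrative PyTorch/NumPy snippet, not the original Pylearn2 implementation; the patch size of 65 pixels is a placeholder, and `model` is assumed to map batches of patches to per-class log-probabilities.

```python
import numpy as np
import torch

def predict_pixelwise(model, image, patch=65, n_classes=4):
    """Classify every pixel by sliding a window over the image and predicting its center pixel.
    `model` is assumed to map (N, 1, patch, patch) tensors to (N, n_classes) log-probabilities."""
    pad = patch // 2
    padded = np.pad(image, pad, mode='reflect')
    probs = np.zeros((n_classes,) + image.shape, dtype=np.float32)
    model.eval()
    with torch.no_grad():
        for y in range(image.shape[0]):
            # batch all windows of one image row for efficiency
            windows = np.stack([padded[y:y + patch, x:x + patch]
                                for x in range(image.shape[1])])
            batch = torch.from_numpy(windows).float().unsqueeze(1)
            log_p = model(batch)                      # (width, n_classes)
            probs[:, y, :] = log_p.exp().numpy().T    # one probability map per class
    return probs
```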

For training, minibatch stochastic gradient descent (MBSGD) with momentum, weight decay, and dropout regularization is used to minimize a negative log-likelihood loss function.

2.2.1 Object-Net: Classifying Gland Objects

The goal of the Object-Net is to predict the probability of a pixel belonging to a gland or to the background. One could define a binary classification problem, but malignant and benign tissue express unique features that are not found in the other tissue type, which can complicate such a learning problem. We therefore formulate an alternative four-class classification problem with classes $c_i$, $i \in \{1,\dots,4\}$: background benign ($c_1$), gland benign ($c_2$), background malignant ($c_3$), and gland malignant ($c_4$). To do so, the provided ground truth labels must be transformed to reflect benignity and malignancy as well: the annotation images are binarized and a new label is assigned to the pixels of each class $c_i$, see Fig. 4.

Fig. 4: Ground truth transformation for learning the four-class classification on the preprocessed images with the Object-Net. The first row shows a benign case, the second row a malignant case. (a) Preprocessed images with overlaid individual ground truth object annotations. (b) The provided annotations were transformed into four labels for benign background ($c_1$), benign gland ($c_2$), malignant background ($c_3$), and malignant gland ($c_4$).

The input to the CNN is an image patch centered at an image location $x \in \Omega$, where $\Omega$ denotes the image domain. A given patch is convolved with 80 filters in the first convolutional layer, with 96 filters in the second, 128 filters in the third, and 160 filters in the last layer, see Fig. 3(a). The three subsequent fully connected layers FC5-FC7 of the classifier contain 1024, 512, and four output units, respectively. The output of FC7 is fed into a softmax function, producing the center pixel's probability distribution over the four labels. The probability for each class $c_i$ is stored in a corresponding probability map $P_{c_i}$.
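A PyTorch sketch of such an architecture is shown below. The filter and unit counts follow the description above, while the kernel sizes and the dropout rate are assumptions; LazyLinear is used so the sketch works for any reasonable input patch size.

```python
import torch.nn as nn

class ObjectNet(nn.Module):
    """Four Conv(+ReLU) layers with max-pooling after the first three, followed by three FC layers."""
    def __init__(self, n_classes=4, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 80, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),    # Conv1 + Sub
            nn.Conv2d(80, 96, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # Conv2 + Sub
            nn.Conv2d(96, 128, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # Conv3 + Sub
            nn.Conv2d(128, 160, kernel_size=3), nn.ReLU(),                  # Conv4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1024), nn.ReLU(), nn.Dropout(dropout),   # FC5
            nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(dropout),  # FC6
            nn.Linear(512, n_classes),                             # FC7
            nn.LogSoftmax(dim=1),                                  # softmax over the class labels
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# The Separator-Net (Sec. 2.2.2) differs only in the first layer (64 filters) and in n_classes=2.
```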

2.2.2 Separator-Net: Classifying Gland-separating Structures

Initial experiments showed that pixelwise predictions from the Object-Net alone were insufficient to separate very close gland objects. Hence, a second CNN, the Separator-Net, is trained to predict structures in the image that separate such objects. This learning problem is formulated as a binary classification task.

As depicted in Fig. 3(b), the CNN structure is similar to the Object-Net: a given input image patch is convolved with 64 filters in the first convolutional layer, with 96 filters in the second, 128 filters in the third, and 160 filters in the last layer. The three subsequent fully connected layers FC5-FC7 of the classifier contain 1024, 512, and two output units, respectively. The output of the last layer (FC7) is fed into a softmax function to produce the probability distribution over the two labels for the center pixel. The probability of a pixel belonging to a gland-separating structure is stored in the corresponding probability map $P_{sep}$.

2.2.3 Refining CNN Outputs

Once all probability maps have been obtained, the Object-Net predictions are refined with the Separator-Net predictions to emphasize the gland borders and prevent merging of close objects. The subsequent figure-ground segmentation algorithm requires a single foreground map and a single background map to produce the final segmentation result, so the outputs are combined as follows.

The foreground probability map $P_{fg}$ is constructed from the Object-Net gland maps and the separator predictions according to Eq. (1), where a weighting factor $w_{sep}$ controls the influence of the refinement by the separator predictions. Eq. (2) analogously produces the background probability map $P_{bg}$.
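Since the combination rule itself is given by Eqs. (1) and (2), the following NumPy sketch only illustrates one plausible form of this refinement; the additive/subtractive use of the separator map and the clipping to [0, 1] are assumptions made for illustration, not the paper's exact equations.

```python
import numpy as np

def combine_probability_maps(p_c, p_sep, w_sep=1.0):
    """p_c: dict with the four Object-Net maps ('c1'..'c4'), p_sep: Separator-Net map.
    Assumed refinement: gland probabilities are attenuated and background probabilities
    boosted by the separator predictions, then clipped to a valid probability range."""
    p_fg = np.clip(p_c['c2'] + p_c['c4'] - w_sep * p_sep, 0.0, 1.0)  # benign + malignant glands
    p_bg = np.clip(p_c['c1'] + p_c['c3'] + w_sep * p_sep, 0.0, 1.0)  # benign + malignant background
    return p_fg, p_bg
```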

2.3 Total Variation Segmentation

To generate the final segmentation, the following continuous, non-smooth energy functional [19, 20] is minimized:

$$\min_{u} \int_{\Omega} g(x)\,|\nabla u(x)|\,dx \;+\; \lambda \int_{\Omega} u(x)\,f(x)\,dx, \qquad u(x) \in [0,1], \qquad (3)$$

where $\Omega$ denotes the image domain and $u$ is the sought, relaxed segmentation. The first term denotes the $g$-weighted total variation (TV) semi-norm, which is a reformulation of the geodesic active contour energy [21]. The edge function $g$ is defined as

$$g(x) = \exp\!\left(-\alpha\,\|\nabla I(x)\|^{\beta}\right), \qquad (4)$$

where $\nabla I$ is the gradient of the input image, thus attracting the segmentation boundary towards large image gradients. The second term in Eq. (3) is the data term, with $f$ describing a weighting map: its values have to be chosen negative where a pixel should be foreground and positive where it should be background. If the values in $f$ are set to zero, the pure weighted TV energy is minimized, seeking a segmentation of minimal contour length. We use the refined outputs from the previous classification step (Eqs. (1) and (2)) and introduce a threshold $\theta$ that enforces a minimum class confidence in a probability map (Eq. (5)). The weighting map $f$ is then derived by applying the logit transformation to the thresholded foreground and background probabilities (Eq. (6)).

The regularization parameter $\lambda$ defines the trade-off between the data term and the weighted TV semi-norm. The stated convex problem in Eq. (3) can be solved for its global optimum using the primal-dual algorithm [22], which can be implemented very efficiently with NVidia CUDA, thus exploiting the parallel computing power of recent GPUs. As the resulting segmentation $u$ is continuous, the final binary segmentation is obtained by thresholding $u$. We optimize the free parameters $\lambda$, $\alpha$, and $\beta$ by a grid search over a suitable range of values, where all 85 annotated training images are used to tune these parameters based on the Dice coefficient.
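For illustration, the sketch below implements a plain NumPy (CPU) version of this weighted-TV figure-ground segmentation with the primal-dual algorithm of Chambolle and Pock [22]. The concrete forms of the confidence threshold (Eq. (5)) and the logit weighting (Eq. (6)), as well as all parameter values, are assumptions chosen for illustration and do not reproduce the paper's tuned settings.

```python
import numpy as np

def edge_function(image, alpha=10.0, beta=0.55):
    """Edge-stopping function g(x) = exp(-alpha * |grad I(x)|^beta); alpha, beta are illustrative."""
    gy, gx = np.gradient(image.astype(np.float64))
    return np.exp(-alpha * np.hypot(gx, gy) ** beta)

def weighting_map(p_fg, p_bg, theta=0.6, eps=1e-6):
    """Negative where foreground is likely, positive where background is likely (assumed form of Eqs. (5)-(6)).
    Pixels where neither class reaches confidence theta get weight 0, so pure weighted TV decides there."""
    f = np.log((p_bg + eps) / (p_fg + eps))            # logit-style log-ratio of the two maps
    f[np.maximum(p_fg, p_bg) < theta] = 0.0
    return f

def tv_segmentation(image, p_fg, p_bg, lam=1.0, n_iter=500):
    """Primal-dual minimization of  min_{u in [0,1]}  int g |grad u| + lam * int u * f."""
    g = edge_function(image)
    f = weighting_map(p_fg, p_bg)
    u = np.full(image.shape, 0.5)
    u_bar = u.copy()
    px = np.zeros_like(u)
    py = np.zeros_like(u)
    tau, sigma = 0.25, 0.5                             # step sizes with tau * sigma * 8 <= 1
    for _ in range(n_iter):
        # dual ascent with reprojection onto the constraint |p(x)| <= g(x)
        dx = np.zeros_like(u_bar); dx[:, :-1] = u_bar[:, 1:] - u_bar[:, :-1]
        dy = np.zeros_like(u_bar); dy[:-1, :] = u_bar[1:, :] - u_bar[:-1, :]
        px += sigma * dx
        py += sigma * dy
        scale = np.maximum(1.0, np.hypot(px, py) / np.maximum(g, 1e-8))
        px /= scale
        py /= scale
        # primal descent with clamping of u to [0, 1]
        div = np.zeros_like(u)
        div[:, 0] += px[:, 0]; div[:, 1:] += px[:, 1:] - px[:, :-1]
        div[0, :] += py[0, :]; div[1:, :] += py[1:, :] - py[:-1, :]
        u_old = u
        u = np.clip(u + tau * (div - lam * f), 0.0, 1.0)
        u_bar = 2.0 * u - u_old
    return u >= 0.5                                    # threshold the continuous segmentation
```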

2.4 Implementation Details

2.4.1 Training Dataset Sampling

For the sake of execution speed with the sliding-window approach, the images are rescaled to half resolution prior to classification, and the results are upsampled with bilinear interpolation to the original size afterwards. The size of the input patch is chosen such that sufficient contextual information is available to classify the center pixel.
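A sketch of this half-resolution inference with scikit-image is given below; it assumes a `predict_fn` that returns one probability map per class, and the anti-aliasing setting is an assumption.

```python
from skimage.transform import rescale, resize

def classify_at_half_resolution(predict_fn, preprocessed):
    """Run a pixel classifier on a downscaled image and upsample its output bilinearly."""
    small = rescale(preprocessed, 0.5, anti_aliasing=True)
    prob_small = predict_fn(small)                 # assumed shape: (n_classes, H/2, W/2)
    # order=1 corresponds to bilinear interpolation
    return resize(prob_small, prob_small.shape[:1] + preprocessed.shape, order=1)
```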

The majority of the training images (79) share the same dimensions, which are halved by the resizing. If we considered only the valid part of each image, without border extension, when sampling patches for the training dataset, we would lose a considerable fraction of the labeled pixels near the image borders. On the other hand, artificially extending the borders to make use of all labeled pixels would introduce a significant number of boundary artifacts. Fortunately, most images are tiles of a bigger image and can thus be stitched seamlessly into a total of 19 images (Fig. 5), from which enough patches can be sampled without relying heavily on artificial border extension. (In one case, stitching was not possible, since only 3 tiles were available; these 3 tiles, and the remaining 6 images that were not part of a bigger image, were treated as individual images.)

In principle, we pursued the same sampling strategy for the Separator-Net, but had to create the ground truth labels manually: we annotated all pixels that belong to a structure lying very close to two or more gland borders. The green lines in Fig. 5 illustrate this additional manual annotation of the separating structures. Because the number of foreground samples is low compared to the Object-Net, it was artificially increased by exploiting the problem's requirement for rotation invariance and adding nine additional rotated versions of each patch, i.e., one every 36°.
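Such a rotation augmentation can be sketched as follows; reflective padding of the rotated corners is an assumption, and in practice one might instead sample larger patches and crop after rotation.

```python
from skimage.transform import rotate

def augment_by_rotation(patch, n_rot=9):
    """Return the original patch plus n_rot rotated copies, evenly spaced over 360 degrees."""
    step = 360.0 / (n_rot + 1)                     # 36 degrees for n_rot = 9
    return [patch] + [rotate(patch, k * step, mode='reflect', preserve_range=True)
                      for k in range(1, n_rot + 1)]
```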

Fig. 5: Manual ground truth annotations for gland-separating structures. Stitched images from four tiles (numbers in red boxes), red lines denote the tile borders. Manual annotations of pixels belonging to gland-separating structures are shown as green lines, the thickness of lines is increased for better illustration.

2.4.2 CNN Training

Fig. 6: CNN training progress. Classification error over epochs on a subset of the training data (training error) and on the held-out test set. The Object-Net reaches its lowest test error after 43 epochs, the Separator-Net after 119 epochs.

Both CNNs were trained on balanced training sets containing an equal number of image patches per class. Patches were sampled at random from the available pool of training images, and the training and test sets reflect approximately the same distribution of samples over images. A fixed minibatch size was used for MBSGD and the networks were trained until the stopping criterion was met: no further improvement of the error rate on a held-out test set over 20 epochs. The initial learning rate was decayed linearly, saturating after 100 epochs. A fixed weight decay and dropout rate were used for all layers. We employed an adaptive momentum term that increases after 50 epochs, such that with progressing training the updates are influenced by a larger number of samples than at the beginning.
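The optimizer configuration and the schedules can be sketched in PyTorch as below. All numeric values (patch size of the stand-in model, learning rates, momentum values, weight decay) are illustrative placeholders, not the settings used in the paper.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(65 * 65, 4), nn.LogSoftmax(dim=1))  # stand-in classifier
criterion = nn.NLLLoss()                                                          # negative log-likelihood
lr0, lr_end, m0, m_end, wd = 1e-2, 1e-4, 0.5, 0.9, 5e-4                            # assumed values
optimizer = torch.optim.SGD(model.parameters(), lr=lr0, momentum=m0, weight_decay=wd)

for epoch in range(200):
    # linear learning-rate decay that saturates after 100 epochs
    frac = min(epoch / 100.0, 1.0)
    for group in optimizer.param_groups:
        group['lr'] = (1.0 - frac) * lr0 + frac * lr_end
        group['momentum'] = m0 if epoch < 50 else m_end   # momentum ramp: later updates average more samples
    # ... iterate over balanced minibatches, evaluate criterion(model(x), y), backpropagate,
    #     and stop early once the held-out error has not improved for 20 epochs
```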

Fig. 6 shows the classification error rate as a function of the training duration in epochs. Each class was represented by the same number of samples in the respective held-out test sets of the Object-Net and the Separator-Net. The training error is estimated on a fixed subset of the training data to get an indication of when overfitting starts. The Object-Net achieves its best performance after 43 epochs, while training of the Separator-Net continued until the lowest test error was reached after 119 epochs. Fig. 7 shows the learned filters of the first convolutional layer in both networks. The CNN models were implemented in Pylearn2 [23], a machine learning library built on top of Theano [24, 25].

Fig. 7: CNN training results. (a) 80 filters of the first layer in Object-Net and (b) 64 filters in the Separator-Net.

3 Results

3.1 Colon Gland Segmentation

The grid search yielded the values of $\lambda$, $\alpha$, and $\beta$ that optimize the TV segmentation with respect to the Dice score. The confidence threshold $\theta$ for foreground and background was determined empirically and kept fixed. Separator predictions were fully considered for refining the Object-Net predictions ($w_{sep} = 1$).

Dataset Precision Recall F1-score Object-Dice Hausdorff
without separator refinement
Training 0.97(0.09) 0.67(0.21) 0.78(0.17) 0.81(0.16) 116.89(115.18)
Test A 0.83(0.22) 0.60(0.24) 0.67(0.20) 0.70(0.15) 137.44(78.53)
Test B 0.70(0.35) 0.48(0.30) 0.50(0.26) 0.58(0.19) 249.37(114.69)
with separator refinement
Training 0.91(0.15) 0.85(0.14) 0.87(0.12) 0.88(0.09) 61.36(61.36)
Test A 0.67(0.24) 0.77(0.22) 0.68(0.20) 0.75(0.13) 103.49(72.38)
Test B 0.51(0.30) 0.70(0.32) 0.55(0.28) 0.61(0.22) 213.58(119.15)

Metrics are reported as mean (standard deviation); best results are printed in bold. Performance on the training set is reported on all 85 training images; test sets A and B are the two official challenge test sets. Except for the Hausdorff distance, higher values are better.

Table 1: Segmentation performance metrics for the Warwick-QU dataset used in the GlaS@MICCAI2015 challenge.

In Table 1, we report performance metrics for detection (precision, recall, F1-score), segmentation (object-level Dice), and shape (Hausdorff distance) on the training set as well as on test sets A and B, given as mean and standard deviation (SD). The evaluation scripts were kindly provided by the contest organizers and are available from http://www2.warwick.ac.uk/fac/sci/dcs/research/combi/research/bic/glascontest/evaluation/. Blobs below a minimum area were removed and all remaining blobs were labeled with unique identifiers before computing the measures.
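This post-processing step can be sketched with scikit-image as follows; the minimum-area value is a placeholder, as the actual threshold is not reproduced here.

```python
from skimage.measure import label
from skimage.morphology import remove_small_objects

def postprocess_segmentation(binary_mask, min_area=500):
    """Remove small blobs and assign a unique integer identifier to each remaining blob."""
    cleaned = remove_small_objects(binary_mask.astype(bool), min_size=min_area)
    return label(cleaned, connectivity=2)
```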

Compared to using predictions only from the Object-Net, the segmentation performance improved with separator refinement. Malignant cases are harder to segment due to their irregular shape and pathological variations in the tissue. Fig. 8 illustrates some qualitative example segmentation results on the training data set, Fig. 9 and Fig. 10 show results on test set A and B, respectively.

The average total runtime for segmenting a single image is 5 minutes using an NVidia GeForce Titan Black 6 GB GPU.

(a) benign (b) malignant (c) benign
(d) malignant (e) malignant (f) benign
Fig. 8: Qualitative segmentation results on images of the training dataset. Even rows show the outline of ground truth in green and the segmentation result in blue. The numbers refer to the unique objects within the image. Odd rows show the segmentation difference: false negative pixels are colored in cyan, and false positives are colored in yellow. (a-c) show examples, where our segmentation algorithm works well, (d-f) show different types of segmentation errors.
(a) benign (b) malignant (c) benign
(d) benign (e) benign (f) malignant
Fig. 9: Qualitative segmentation results on images of test dataset A. Even rows show the segmentation (blue outline) and ground truth (green outline), odd rows show the differences, where false negative pixels are cyan, and false positive pixels are yellow. (a-c) show reasonable segmentation results, in (d-f) different segmentation errors are shown.
(a) malignant (b) benign (c) malignant
(d) benign (e) malignant (f) malignant
Fig. 10: Qualitative segmentation results on images of test dataset B. Even rows show the segmentation (blue outline) and ground truth (green outline), odd rows show the differences, where false negative pixels are cyan, and false positive pixels are yellow. (a-c) show reasonable segmentation results, in (d-f) different segmentation errors are shown.

3.2 Benignity and Malignancy Classification

In the proposed approach, the Object-Net inherently learns to discriminate benign ($c_1$, $c_2$) from malignant ($c_3$, $c_4$) tissue, since the labels for benign and malignant cases are available in the training dataset and we defined a four-class classification problem. Instead of combining the probability maps for glands and background as done for segmentation, we combine the maps for benignity and malignancy. The average probability for a benign case is computed as

$$\bar{p}_{\mathrm{benign}} = \frac{1}{|\Omega|} \sum_{x \in \Omega} \left( P_{c_1}(x) + P_{c_2}(x) \right), \qquad (7)$$

and for a malignant case as

$$\bar{p}_{\mathrm{malignant}} = \frac{1}{|\Omega|} \sum_{x \in \Omega} \left( P_{c_3}(x) + P_{c_4}(x) \right), \qquad (8)$$

where $|\Omega|$ is the number of pixels in the image domain $\Omega$. The maximum of both values finally indicates the prediction:

$$\hat{y} = \underset{l \in \{\mathrm{benign},\,\mathrm{malignant}\}}{\arg\max} \; \bar{p}_{l}. \qquad (9)$$
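In code, this decision amounts to comparing two spatial averages (NumPy sketch; the map names follow the class notation introduced in Sec. 2.2.1):

```python
import numpy as np

def classify_case(p_c):
    """p_c: dict with the four Object-Net probability maps 'c1'..'c4' for one image."""
    p_benign = float(np.mean(p_c['c1'] + p_c['c2']))       # Eq. (7): average benign probability
    p_malignant = float(np.mean(p_c['c3'] + p_c['c4']))    # Eq. (8): average malignant probability
    return 'benign' if p_benign >= p_malignant else 'malignant'   # Eq. (9)
```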

We evaluated the classification performance for benign and malignant tissue on the two test sets A and B and achieved high accuracy on both. In addition, we recorded the average (SD) decision confidence for benign and malignant decisions on each test set.

4 Discussion and Conclusions

This paper presented a method to segment glands in H&E stained histopathological images of colorectal cancer using deep convolutional neural networks and total variation segmentation. As our main contribution, we showed that segmentation results can be greatly improved when the predictions of the Object-Net are refined with the gland-separating structures learned by the Separator-Net. Adding the separators not only shifts the trade-off between precision and recall, but also improves the overall scores for detection (F1-score), segmentation (object-level Dice), and shape (Hausdorff distance). The final ranking as well as the test set performance of the other algorithms participating in this challenge are available on the contest website (http://www2.warwick.ac.uk/fac/sci/dcs/research/combi/research/bic/glascontest/results/), which is continuously updated with results from new participating groups.

Our approach inherently allows benign and malignant cases to be discriminated very accurately, because the Object-Net was trained on labels for both cases, and the average confidence of a decision towards benignity or malignancy is acceptable. Nevertheless, we cannot distinguish more detailed histologic grades among these cases, since no such information (e.g., high or low grade) was available in addition to the segmentation ground truth.

Acknowledgements

The authors are grateful to the organizers of the GlaS@MICCAI2015 challenge for providing (i) the Warwick-QU image dataset and (ii) the MATLAB evaluation scripts for computing performance measures that are comparable among the participating teams. Further thanks go to Julien Martel for fruitful discussions in the early phases of this challenge.

References

  • [1] C. Gunduz-Demir, M. Kandemir, A. B. Tosun, and C. Sokmensuer. Automatic segmentation of colon glands using object-graphs. Medical Image Analysis, 14(1):1 – 12, 2010.
  • [2] A.B. Tosun and C. Gunduz-Demir. Graph run-length matrices for histopathological image segmentation. IEEE Transactions on Medical Imaging, 30(3):721–732, March 2011.
  • [3] K. Sirinukunwattana, D.R.J. Snead, and N.M. Rajpoot. A stochastic polygons model for glandular structures in colon histology images. IEEE Transactions on Medical Imaging, PP(99):1–1, 2015.
  • [4] R. Farjam, H. Soltanian-Zadeh, K. Jafari-Khouzani, and R. A. Zoroofi. An image analysis approach for automatic malignancy determination of prostate pathological images. Cytometry Part B: Clinical Cytometry, 72B(4):227–240, February 2007.
  • [5] S. Naik, S. Doyle, S. Agner, A. Madabhushi, M. Feldman, and J. Tomaszewski. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In IEEE International Symposium on Biomedical Imaging – ISBI, pages 284–287, May 2008.
  • [6] J. P. Monaco, J. E. Tomaszewski, M. D. Feldman, I. Hagemann, M. Moradi, P. Mousavi, A. Boag, C. Davidson, P. Abolmaesumi, and A. Madabhushi. High-throughput detection of prostate cancer in histological sections using probabilistic pairwise markov models. Medical Image Analysis, 14(4):617–629, August 2010.
  • [7] Y. Peng, Y. Jiang, L. Eisengart, M. Healy, F. Straus, and X. Yang. Computer-aided identification of prostatic adenocarcinoma: Segmentation of glandular structures. Journal of Pathology Informatics, 2(1), 2011.
  • [8] K. Nguyen, A. Sarkar, and A. K. Jain. Structure and context in prostatic gland segmentation and classification. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, pages 115–123, 2012.
  • [9] S. Rashid, L. Fazli, A. Boag, R. Siemens, P. Abolmaesumi, and S. E. Salcudean. Separation of benign and malignant glands in prostatic adenocarcinoma. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013, pages 461–468, 2013.
  • [10] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In IEEE International Symposium on Circuits and Systems – ISCAS, pages 253–256, May 2010.
  • [11] B. Pang, Y. Zhang, Q. Chen, Z. Gao, Q. Peng, and X. You. Cell nucleus segmentation in color histopathological imagery using convolutional networks. In Chinese Conference on Pattern Recognition – CCPR, pages 1–5, October 2010.
  • [12] D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Mitosis detection in breast cancer histology images with deep neural networks. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013, pages 411–418, 2013.
  • [13] C. D. Malon and E. Cosatto. Classification of mitotic figures with convolutional neural networks and seeded blob features. Journal of Pathology Informatics, 4:9, May 2013.
  • [14] M. Habibzadeh, A. Krzyżak, and T. Fevens. White blood cell differential counts using convolutional neural networks for low resolution images. In Artificial Intelligence and Soft Computing, pages 263–274. Springer Berlin Heidelberg, 2013.
  • [15] A. C. Ruifrok and D. A. Johnston. Quantification of histochemical staining by color deconvolution. Analytical and Quantitative Cytology and Histology, 23(4):291–299, August 2001.
  • [16] J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona. Fiji: an open-source platform for biological-image analysis. Nature Methods, 9(7):676–682, July 2012.
  • [17] K. Zuiderveld. Contrast limited adaptive histogram equalization. In Graphics gems IV, pages 474–485. Academic Press Professional, Inc., 1994.
  • [18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
  • [19] C. Reinbacher, T. Pock, C. Bauer, and H. Bischof. Variational Segmentation of Elongated Volumetric Structures. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3177–3184, 2010.
  • [20] K. Hammernik, T. Ebner, D. Stern, M. Urschler, and T. Pock. Vertebrae segmentation in 3D CT images based on a variational framework. In Recent Advances in Computational Methods and Clinical Applications for Spine Imaging, pages 227–233. Springer, 2015.
  • [21] X. Bresson, S. Esedoglu, P. Vandergheynst, J.-P. Thiran, and S. Osher. Fast global minimization of the active contour/snake model. Journal of Mathematical Imaging and Vision, 28(2):151–167, 2007.
  • [22] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
  • [23] I. J. Goodfellow, D. Warde-Farley, P. Lamblin, V. Dumoulin, M. Mirza, R. Pascanu, J. Bergstra, F. Bastien, and Y. Bengio. Pylearn2: a machine learning research library. arXiv preprint arXiv:1308.4214, 2013.
  • [24] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010.
  • [25] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.