3D Dense Separated Convolution Module for Volumetric Image Analysis

Lei Qu, Changfeng Wu, Liang Zou
Anhui University
China University of Mining and Technology
qulei@ahu.edu.cn, ahwcf1995@163.com, liangzou@ece.ubc.ca

Abstract

With the thriving of deep learning, 3D Convolutional Neural Networks have become a popular choice in volumetric image analysis due to their impressive 3D context mining ability. However, 3D convolutional kernels introduce a significant increase in the number of trainable parameters. Since training data are often limited in biomedical tasks, a tradeoff has to be made between model size and representational power. To address this concern, in this paper we propose a novel 3D Dense Separated Convolution (3D-DSC) module to replace the original 3D convolutional kernels. The 3D-DSC module is constructed from a series of densely connected 1D filters. The decomposition of the 3D kernel into 1D filters reduces the risk of over-fitting by removing the redundancy of 3D kernels in a topologically constrained manner, while providing the infrastructure for deepening the network. By further introducing nonlinear layers and dense connections between the 1D filters, the network's representational power can be significantly improved while maintaining a compact architecture. We demonstrate the superiority of 3D-DSC on volumetric image classification and segmentation, two challenging tasks often encountered in biomedical image computing.


1 Introduction

During the last few years, Deep Learning (DL), and especially Convolutional Neural Networks (CNNs), have revolutionized computer vision and set new standards for various challenging tasks, such as image classification and semantic segmentation. Since these tasks are also shared by diagnostics, pathology, high-throughput screening, cellular and molecular image analysis and more, deep learning has also thrived in the field of biomedical image analysis [?].

However, compared with the 2D images prevalent in computer vision, image data encountered in the biomedical field are often volumetric. The substantial difficulty of annotating and interpreting 3D volumetric data generally results in much smaller training sets than in computer vision. In addition, effectively exploring the 3D context, which is essential for volumetric data analysis, requires considerable effort in network design. Current approaches often lead either to a significant increase in the number of learnable parameters or to complexity in network design and training. When dealing with large 3D image volumes, the computational cost and memory requirements can be prohibitive even with cutting-edge hardware. Therefore, how to effectively explore the 3D context and train an efficient volumetric network with limited training data remains an open problem in volumetric image analysis.

Figure 1: Overview of 3D-DSC. For demonstration purposes, we show only one channel of the input and the feature volumes.

In order to process 3D volumes using CNNs, many schemes have been proposed in the past few years. One straightforward solution is to apply conventional 2D CNNs to each volume slice separately [?]. This is clearly a suboptimal use of the volumetric data, since the contextual information along the third dimension is disregarded. To make better use of the 3D context, tri-planar schemes [?] apply 2D CNNs on three orthogonal planes (i.e., the xy, xz and yz planes). Since the inter-slice information is utilized only through a selective choice of input data, only a small fraction of the 3D information is explored [?]. By viewing adjacent volume slices as a time series, recurrent neural networks (RNNs) have been adopted to distill the 3D context from a sequence of abstracted 2D contexts [?]. Due to the asymmetric nature of this design, the intra- and inter-slice information cannot be treated and explored equally.

Currently, 3D CNNs, which take 3D convolutional kernels as their basic unit [?], and their hybrids with 2D CNNs [?], have become the most popular choice for volumetric network design. In addition to their impressive 3D context mining ability, the popularity of 3D CNNs is also due to the structural simplicity of 3D operations (e.g., 3D convolution, 3D pooling and 3D up-convolution) and their usage being similar to the corresponding 2D operations. As a commonly adopted strategy, a 3D CNN can be constructed from a modern 2D CNN by replacing the 2D operations with their 3D counterparts.

However, the use of 3D operations, especially 3D convolutional kernels, introduces a huge increase in the number of trainable parameters, as well as significant memory and computational requirements [?]. Considering the limited training data often encountered in biomedical tasks, a tradeoff has to be made between model size and representational power to avoid over-fitting. Under these limitations, existing 3D CNNs tend to contain far fewer layers than modern 2D CNNs. Since network depth has been extensively demonstrated to improve performance in computer vision [??], there is still much room to tap the potential of 3D CNNs and improve their representational power.

In this paper, instead of modifying the network's overall architecture to circumvent the tradeoff between model size and representational power, we address this dilemma by looking into the very basic unit of 3D CNNs, the 3D convolutional kernel, and proposing to replace it with a compact module that possesses better parameter efficiency and stronger nonlinear representational power. We name the proposed module 3D Dense Separated Convolution (3D-DSC); Figure 1 illustrates its layout schematically.

The 3D-DSC module is constructed from a series of densely connected 1D filters. The decomposition of the 3D kernel into 1D filters alleviates the risk of over-fitting by removing the redundancy within 3D kernels in a topologically constrained manner, while providing the infrastructure for deepening the network. The nonlinear layers inserted between the 1D filters boost the block's nonlinearity as well as its representational power. The dense connections between the 1D filters ensure efficient propagation of information and gradient flow, thus facilitating the training of the deepened network. Finally, the 1×1×1 convolution attached at the end of the block acts as a bottleneck layer to reduce the number of output feature volumes.

Compared with direct 3D convolution, 3D-DSC not only effectively deepens the network, thus improving its representational power, but also considerably reduces the number of parameters. This is especially useful when training data is limited. Note that 3D-DSC is not tied to any specific architecture or application; it can be used to boost performance by directly substituting for the original 3D convolutional kernels.

2 Related Work

As network depth in computer vision has become saturated, research on designing more compact and parameter-efficient architectures has recently received increasing attention. In biomedical image computing, however, little effort has been dedicated to this aspect. The present work mainly builds on the following efforts in the field of computer vision.

On the way to improving parameter efficiency, CNNs [?] set the first milestone by introducing a parameter-sharing mechanism: in the form of convolution, all neurons in a single depth slice are forced to share the same parameters, considerably reducing the overall number of parameters. Another prominent design pattern is the bottleneck unit introduced in ResNet [?]. Constituted by two 1×1 convolutional layers, the bottleneck pattern exploits channel-wise redundancy in a shrink-and-expand manner. This idea was later adopted in ShuffleNet [?] and Inception-v4 [?] to reduce computation and memory consumption. In [?], a spatial separation of the convolution operator was proposed, where 3×3 kernels are separated into two consecutive kernels of shapes 3×1 and 1×3. Recently, MobileNet [?] took this a step further and proposed depthwise separable convolutions; the resulting structure is many times more efficient in terms of memory and computation. The bypassing pattern was initially proposed in Highway Networks [?] to facilitate the training of deeper networks; its impressive parameter efficiency has also been discovered and confirmed by ResNet [?] and DenseNet [?].

Rather than designing a parameter-efficient network, another group of works resorts to exploring network redundancy in a post-processing manner. Among these efforts, the Low-Rank Approximation (LRA) methods are most relevant to ours. By viewing convolutional layers as high-order tensors, these methods compress the convolutional layers of pre-trained networks by finding appropriate low-rank approximations. Using low-rank decomposition to accelerate convolution was first suggested by [?] in codebook learning. In the context of CNNs, [?] proposed a canonical polyadic (CP) decomposition and clustering scheme for the convolutional kernels: pre-trained 3D filters are approximated by consecutive 1D filters, and the approximation error is minimized using clustering and post-training. [?] suggested using different tensor decomposition schemes, employing an iterative scheme to obtain an approximate local solution. [?] further extended the use of CP decomposition and proposed a different low-rank architecture that enables both approximating an already trained network and training from scratch. Since LRA methods mainly aim to speed up CNNs by approximating the weights of pretrained convolutional layers, improving the network's nonlinear representational power is generally disregarded or even sacrificed.

3 Methods

We start this section by discussing the 3D separability of 3D convolutional kernels and the issues it may raise. Then, based on the infrastructure provided by the spatial decomposition of kernels, we detail the construction of our 3D-DSC module.

Figure 2: Visualization of the separated convolution. The white cube is the original rank-R 3D convolutional kernel and the 1D kernels are the CP decomposition of the white cube.

3.1 3D Separability of Convolutional Kernels

Given a volumetric image, the 3D convolution operation with stride one can be formulated element-wise as below:

$$\left(K_{jk}^{l} * V_k^{l-1}\right)_{x,y,z} = \sum_{a=1}^{d}\sum_{b=1}^{d}\sum_{c=1}^{d} K_{jk}^{l}(a,b,c)\,\left(V_k^{l-1}\right)_{x+a,\,y+b,\,z+c} \tag{1}$$

where $K_{jk}^{l}$ is the 3D kernel of size $d\times d\times d$ in the $l$-th layer that connects the $k$-th input feature volume $V_k^{l-1}$ of the previous layer to the $j$-th output feature volume $V_j^{l}$, and $K_{jk}^{l}(a,b,c)$ is the element-wise value of the 3D convolution kernel. Assume the $l$-th layer has $C$ input feature volumes, and let $\sigma(\cdot)$ denote the element-wise nonlinear activation function and $b_j^{l}$ the corresponding bias term; the output feature volume is obtained as:

$$V_j^{l} = \sigma\!\left(\sum_{k=1}^{C} K_{jk}^{l} * V_k^{l-1} + b_j^{l}\right) \tag{2}$$

Mathematically, the 3D kernel tensor $K_{jk}^{l}$ can be factorized into a linear combination of rank-one tensors according to the CP decomposition:

$$K_{jk}^{l} = \sum_{r=1}^{R} \mathbf{x}_r \circ \mathbf{y}_r \circ \mathbf{z}_r \tag{3}$$

where $R$ is the rank of $K_{jk}^{l}$, $\circ$ denotes the outer product operation, and $\mathbf{x}_r$, $\mathbf{y}_r$, $\mathbf{z}_r$ are 1D vectors. Element-wise, the above equation can be rewritten as:

$$K_{jk}^{l}(a,b,c) = \sum_{r=1}^{R} \mathbf{x}_r(a)\,\mathbf{y}_r(b)\,\mathbf{z}_r(c) \tag{4}$$

Substituting (4) into (1) gives the following equivalent expression for the evaluation of the 3D convolution:

$$\left(K_{jk}^{l} * V_k^{l-1}\right)_{x,y,z} = \sum_{r=1}^{R}\sum_{c=1}^{d}\mathbf{z}_r(c)\left(\sum_{b=1}^{d}\mathbf{y}_r(b)\left(\sum_{a=1}^{d}\mathbf{x}_r(a)\,\left(V_k^{l-1}\right)_{x+a,\,y+b,\,z+c}\right)\right) \tag{5}$$

With this formulation, the 3D convolution can be recast as a sequence of 1D convolutions. Reading the calculation within the parentheses from the inside out: first convolve the feature volume with a 1D filter along the X dimension, then apply the 1D convolutions along the Y and Z dimensions successively.

Assuming the rank of the kernel tensor equals one (i.e., R = 1), the 3D convolution can be decomposed into a sequence of three 1D convolutions, as shown in Figure 2. Note that since convolution is a linear operator, the 1D filters shown in Figure 2 can be arranged in any order.

Rank-1 is a strong assumption, and the intrinsic rank of a kernel is generally higher than one in practice. Equation (3) shows that a rank-R tensor is the sum of R rank-1 tensors, which suggests that the rank-R topology can be constructed by simply concatenating R copies of the rank-1 case, as shown in Figure 2.
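As a sanity check on the rank-1 case, the equivalence in Eq. (5) can be verified numerically: a 3D kernel built as the outer product of three 1D vectors produces the same (valid) convolution output whether applied directly or as three successive 1D convolutions. The following is an illustrative sketch, not the authors' code; the array sizes and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
x, y, z = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(d)
K = np.einsum('a,b,c->abc', x, y, z)   # rank-1 3D kernel (outer product)
V = rng.standard_normal((8, 8, 8))     # toy input volume

# direct 3D convolution (correlation form, 'valid' mode), as in Eq. (1)
out_direct = np.zeros((6, 6, 6))
for i in range(6):
    for j in range(6):
        for k in range(6):
            out_direct[i, j, k] = np.sum(K * V[i:i+d, j:j+d, k:k+d])

def conv1d_along(vol, w, axis):
    """'Valid' 1D correlation with filter w along one axis of a 3D volume."""
    return np.apply_along_axis(lambda s: np.correlate(s, w, mode='valid'), axis, vol)

# equivalent sequence of three 1D convolutions (Eq. (5) with R = 1)
out_sep = conv1d_along(conv1d_along(conv1d_along(V, x, 0), y, 1), z, 2)

assert np.allclose(out_direct, out_sep)   # identical up to floating-point error
```

The order of the three 1D convolutions can be permuted freely here, exactly because no nonlinearity has been inserted between them yet.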

3.2 3D Dense Separated Convolution Module

Although the 3D separated convolution topology described in the previous section is mathematically equivalent to direct 3D convolution, this decomposition brings the following benefits:

First, a rank constraint on the 3D convolutional kernels can be easily encoded in the network topology by stacking k (k ≤ R) groups of 1D convolutions (see Figure 2). Once the model structure is defined, we can leverage standard CNN training to learn more compact weights from scratch, thus avoiding the post-processing stage of LRA. In addition, the information loss and performance degradation caused by the low-rank constraint are minimized as a whole during training. We will show in the experiment section that accuracy can even be increased.

Second, when a rank-k topology replaces an original full-rank 3D convolutional kernel of size d×d×d, the number of independent parameters per filter is reduced from d³ to 3kd, which results in a significant reduction of overall learnable parameters for small k, considering the huge number of filters deployed in the network. Since the training data in many biomedical tasks are much smaller than in computer vision, this reduction alleviates the risk of over-fitting and further enables deeper network designs.
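To make the counting concrete, here is a small sketch comparing per-filter parameter counts for a d×d×d kernel against the rank-k separated form (k groups of three length-d 1D filters); biases are ignored, and the symbols d and k follow the text above.

```python
# Per-filter parameter counts (biases ignored).
def params_full_3d(d):
    """Full d x d x d convolutional kernel."""
    return d ** 3

def params_rank_k(d, k):
    """Rank-k separated form: k groups of three length-d 1D filters."""
    return 3 * k * d

for d in (3, 5):
    for k in (1, 2):
        print(f"d={d}, k={k}: full={params_full_3d(d)}, separated={params_rank_k(d, k)}")
# e.g. a 3x3x3 kernel: 27 parameters directly vs. 9 for the rank-1 separated form
```

The gap widens quickly with kernel size: a 5×5×5 kernel has 125 parameters directly but only 15 in rank-1 separated form.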

Finally, the cascaded 1D convolutions naturally provide the infrastructure for further improving nonlinear representational power. Since a linear combination of convolutions is still linear, the decomposed topology alone only increases the network's visual depth, not its effective depth. However, with this decomposed structure, the effective depth can be easily increased by inserting nonlinear activation layers (e.g., LeakyReLU) between the concatenated 1D convolutions, thus increasing the network's nonlinearity and encouraging the learning of more discriminative features.

However, two issues are inherent in this kernel decomposition. First, the serialized model with 1D convolutions is more vulnerable to the vanishing gradient problem than standard 3D CNNs: as the network deepens, longer gradient propagation paths may result in fast gradient decay and difficult optimization. Second, once nonlinear activation layers are inserted between the 1D filters, different orderings of the 1D filters are no longer equivalent. Inspired by the recent success of densely connected networks [?], we propose to extend the 3D separated convolution discussed in the previous section by further introducing dense connections between the 1D filters. Figure 1 schematically illustrates the layout of the resulting rank-R 3D Dense Separated Convolution (3D-DSC) module.

Similar to DenseNet, we introduce direct connections from each layer to all subsequent layers within a block. To maximize information flow, the features are concatenated and then passed through composite operations, including Batch Normalization (BN) and leaky rectified linear units (LeakyReLU), before being fed to the next layer. In our implementation, we restrict each layer to produce the same number of feature volumes as its input. Assuming there are k feature volumes in the input layer, the concatenation after the last 1D convolution layer accumulates 4k feature volumes. To make the number of output feature volumes consistent with that of direct 3D convolution, an additional bottleneck layer is appended after the last 1D convolution layer. With this design, the extension from rank-1 3D-DSC to the rank-k case is the same as the method discussed in the previous section, i.e., simply stacking k copies of the rank-1 topology.

By introducing dense connections within the block, each 1D kernel is given the opportunity to directly access the input feature volume, which to some extent alleviates the ordering problem. In addition, the dense connections bring three further benefits that address our previous concerns point by point. First, direct connections between all layers improve the flow of information and gradients through the network, alleviating the vanishing gradient problem. Second, short paths to all feature volumes in the architecture introduce an implicit deep supervision. Third, dense connections have a regularizing effect; combined with the parameter reduction introduced by 3D-DSC, this joint effect substantially reduces the risk of over-fitting under limited training data.

Since the size and number of feature volumes of our 3D-DSC module are consistent with those of direct 3D convolution, we can directly substitute 3D-DSC for the 3D convolution and enjoy its benefits. Using a high-level library such as Keras or TensorFlow-Slim, this takes only a few lines of code.
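To illustrate the dataflow described above, here is a toy, single-channel numpy sketch of a rank-1 3D-DSC forward pass: three 1D convolutions along X, Y and Z, each followed by a LeakyReLU and densely connected to all previous feature volumes, with a 1×1×1 bottleneck restoring a single output channel. This is not a Keras implementation and not the authors' code; in particular, the uniform averaging over concatenated feature volumes is a stand-in for the learned channel mixing, and BN is omitted.

```python
import numpy as np

def conv1d_along(vol, w, axis):
    """'Same'-padded 1D correlation with filter w along one axis of a 3D volume."""
    pad = len(w) // 2
    padded = np.pad(vol, [(pad, pad) if a == axis else (0, 0) for a in range(3)])
    return np.apply_along_axis(lambda s: np.correlate(s, w, mode='valid'), axis, padded)

def leaky_relu(v, alpha=0.1):
    return np.where(v > 0, v, alpha * v)

def dsc_rank1_forward(vol, wx, wy, wz, w_bottleneck):
    """Rank-1 3D-DSC dataflow: densely connected 1D convolutions + bottleneck."""
    feats = [vol]
    for w, axis in ((wx, 0), (wy, 1), (wz, 2)):
        # each 1D filter sees the concatenation of all previous feature volumes;
        # uniform averaging stands in for learned channel mixing
        mixed = np.stack(feats).mean(axis=0)
        feats.append(leaky_relu(conv1d_along(mixed, w, axis)))
    # 1x1x1 bottleneck: weighted sum over the 4 accumulated feature volumes
    return np.tensordot(w_bottleneck, np.stack(feats), axes=1)

rng = np.random.default_rng(1)
vol = rng.standard_normal((8, 8, 8))
out = dsc_rank1_forward(vol, rng.standard_normal(3), rng.standard_normal(3),
                        rng.standard_normal(3), rng.standard_normal(4))
print(out.shape)   # (8, 8, 8)
```

Because 'same' padding and the bottleneck keep the output shape equal to the input shape, a module of this form can be dropped in wherever a same-padded 3D convolution was used.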

Input(3D multi-channel MRI)
3D-conv3-32 3D-conv3-32 3D-conv3-32
3D-conv3-32 3D-conv3-32 3D-conv3-32
Maxpooling
3D-conv3-64 3D-conv3-64 3D-conv3-64
3D-conv3-64 3D-conv3-64 3D-DSC-64
Maxpooling
3D-conv3-128 3D-conv3-128 3D-conv3-128
3D-conv3-128 3D-DSC-128 3D-DSC-128
Maxpooling
3D-conv3-256 3D-conv3-256 3D-conv3-256
3D-conv3-256 3D-DSC-256 3D-DSC-256
Maxpooling
3D-conv3-512 3D-conv3-512 3D-conv3-512
3D-conv3-512 3D-DSC-512 3D-DSC-512
Global-Average-Pooling
3D-conv1-2
Soft-max
Table 1: Architecture overview (shown in columns), covering the networks with normal 3D convolution and the networks with 3D-DSC; the subscript denotes the number of additional convolution layers. Layer parameters are written as "3D-conv⟨kernel size⟩-⟨channels⟩" or "3D-DSC-⟨channels⟩". The LeakyReLU activation and Batch Normalization layers are omitted for brevity.

4 Experiments and Results

In this section, we evaluate the proposed module on two different volumetric image analysis tasks (Attention Deficit Hyperactivity Disorder diagnosis and brain tumor segmentation) with comparison to several state-of-the-art methods. In addition to the accuracy evaluation, component, depth and overfitting analyses are also provided to illustrate the effectiveness and superiority of our method.

4.1 Attention Deficit Hyperactivity Disorder Diagnosis

Attention Deficit Hyperactivity Disorder (ADHD) is one of the most common mental-health disorders, affecting around 5%-10% of school-age children. To automatically diagnose this disorder, MR images, including structural MRI (sMRI) and functional MRI (fMRI), have been investigated in many studies. The MRI data analyzed in this paper are from the ADHD-200 consortium [?], which released a large training dataset of 776 samples, comprising 491 typically developing individuals and 285 patients with ADHD. For each sample, both fMRI scans and the associated T1-weighted structural scans are provided. In addition, three kinds of voxel-based morphometric features, namely gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF), are provided in [?]. In our experiment, these features are treated as three individual input channels of the network.

Figure 3: The validation loss curves of the two compared networks. Dense connections and extra activation layers are not employed in the separated convolutions of this network.

Network Architecture.

Table 1 shows the configurations of the baseline models and their 3D-DSC enhanced versions. All networks start with two 3D convolutional layers and one pooling layer. The difference between the two enhanced variants is whether 3D-DSC modules are used between the first two pooling layers. Starting from the second max-pooling layer, both variants are constructed by repeating a combination of one 3D convolution layer, n 3D-DSC layers and one pooling layer. Then, a global average pooling layer and a 1×1×1 convolution are applied to the feature volumes, and SoftMax is employed as the last layer for classification.

Training and evaluation setting.

We employ cross-entropy as the loss function. It is worth noting that adaptive optimization methods perform better in the early stage of training but are outperformed by SGD at later stages. To minimize the effect of random initialization, we first train the model from random initialization with the Adam optimizer and then refine it with SGD. An early-stopping strategy is used with a patience of 50 epochs. We define the mean difference between the training loss and the validation loss within the last 50 epochs as the Overfitting Distance (OD), which is used to evaluate the network's ability to cope with overfitting. In our experiments, we employ 5-fold cross-validation to evaluate the proposed method.
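The OD metric defined above can be computed directly from the recorded loss histories. A minimal sketch, with hypothetical linear loss curves purely for illustration:

```python
# Overfitting Distance (OD): mean gap between validation and training loss
# over the final `window` epochs, per the definition in the text.
def overfitting_distance(train_loss, val_loss, window=50):
    t, v = train_loss[-window:], val_loss[-window:]
    return sum(vl - tl for tl, vl in zip(t, v)) / len(t)

# hypothetical histories: training loss falls faster than validation loss
train = [0.5 - 0.004 * e for e in range(100)]
val   = [0.5 - 0.002 * e for e in range(100)]
print(round(overfitting_distance(train, val), 4))   # 0.149
```

A larger OD indicates a wider train/validation gap at convergence, i.e., stronger overfitting.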

Network Accuracy OD
73.17% 0.2785
74.89% 0.2806
74.53% 0.2807
71.94% 0.2961
75.22% 0.2526
75.74% 0.2809
76.70% 0.2580
Table 2: Performance comparison based on 5-fold cross-validation. The first group are normal 3D CNNs with different depths; the remaining networks are separated 3D CNNs with 3D-DSC. OD denotes the overfitting distance.

Accuracy and analysis of network’s depth.

It is well known that network depth has a big impact on performance. Table 2 shows the accuracy and OD score of the networks under different depth configurations. For the baseline networks, the shallower configuration achieves the best result, and performance declines as the network deepens. We believe that aggravated overfitting is responsible for this degradation, since the number of parameters increases dramatically with deeper networks. In contrast, we observe a stable performance improvement with our 3D-DSC enhanced versions even at greater depths. Interestingly, note that the compared configurations have a similar number of parameters. These results confirm that deeper networks can be obtained and effectively trained with 3D-DSC, thus improving the network's representational power. Moreover, as shown in Table 4, compared with several state-of-the-art methods attempting to assist the diagnosis, our network outperforms the others by a large margin on ADHD-200, even though only a single modality of the dataset is used in our method.

Network DC Activation Accuracy OD
no no 74.26% 0.2733
no yes 74.83% 0.2638
yes yes 75.22% 0.2580
Table 3: Ablation studies on applying dense connections (DC) and activation layers (Activation) in the proposed 3D-DSC. OD: overfitting distance.
Method Classifier Accuracy
 [?] MKL 61.54%
 [?] SVM 63.57%
 [?] SM 3D CNN 66.04%
 [?] MM 3D CNN 69.15%
3D-DSC SM 3D CNN 73.68%
Table 4: Diagnosis performance comparisons between the proposed method and state-of-the-art methods based on the ADHD-200 dataset. MKL: multi kernel learning. SVM: support vector machine. SM: single modality. MM: multi modalities.

Ablation studies.

To investigate the effect of the nonlinear activation layers and dense connections inserted between the separated 1D filters, we report the performance with and without them in Table 3. Both contribute to the performance improvement, and the best result is obtained by combining them.

Overfitting.

To further confirm the ability of 3D-DSC to cope with overfitting, we compare the baseline method with ours after removing the Batch Normalization (BN) layers. For demonstration purposes, we set the batch size to 1; the learning rate is initially set to 0.0001 and decreased by a factor of 10 when the validation error stops decreasing. Figure 3 shows the validation loss curves of the two networks (without BN layers). The validation loss of the baseline increases rapidly after 60 epochs, while that of our network remains stable even after 100 epochs. Furthermore, the OD score of the baseline is 0.9962, which is significantly larger than the 0.2946 of ours. Compared with the baseline, the more compact structure and far fewer learnable parameters make our network less susceptible to overfitting.

4.2 The Brain Tumor Segmentation on BRATS 2017

In this section, we evaluate the proposed method on another challenging task, brain tumor segmentation, using the publicly available dataset of the BRATS 2017 challenge [?]. The training dataset contains 285 multisequence MRIs of patients diagnosed with low-grade or high-grade gliomas. For each case, four MR sequences are available: T1, T1+gadolinium, T2 and FLAIR. In our experiment, we resize all volumes to a fixed size, and the four MRI sequences of each sample are combined into a multichannel volume as input. Three state-of-the-art methods, including 3D U-net [?] (along with its dropout and stride-2 convolution enhanced versions), V-net [?] and the method proposed in [?], are evaluated and compared.

Figure 4: The basic block of 3D U-net (shown in subfigure (a)) and the basic block of the network with 3D-DSC (shown in subfigure (b)), where we replace the second normal 3D convolution with 2 3D-DSC.

A 5-fold cross-validation strategy is employed in this experiment: in each fold, 228 samples are used for training and 57 for validation. Our 3D-DSC enhanced version is based on 3D U-net. We replace the original 3D U-net block shown in Figure 4(a) with its 3D-DSC enhanced version shown in Figure 4(b). Note that only the second 3D convolution in the 3D U-net block is substituted by two consecutive 3D-DSC modules, and the overall network architecture remains unchanged. The Dice scores obtained by the different methods are shown in Table 5. Our method achieves the best performance with a Dice score of 0.7932, outperforming the others by a large margin. It is worth noting that we did not adopt any tricks in our method, such as dropout or replacing pooling with stride-2 convolution (s2-conv), although these tricks can slightly improve the performance of 3D U-net, as reported in Table 5. Qualitative segmentation results of the different methods are presented in Figure 5; fine details are better recovered by our method.

Method Dice score
3D U-net 0.7554
3D U-net (Dropout) 0.7592
3D U-net (s2-conv) 0.7593
 [?] 0.7655
V-net [?] 0.7685
3D U-net (3D-DSC) 0.7932
Table 5: Dice scores of the proposed method and state-of-the-art methods. We train and evaluate all methods with the same strategy on the BRATS 2017 dataset.
(a) Ground truth
(b) 3D U-net
(c) V-net
(d) ours
Figure 5: Example segmentation results on the BRATS 2017 Challenge dataset. From left to right: the ground truth, the segmentation result of 3D U-net, the segmentation result of V-net and the segmentation result of the proposed method.

5 Conclusion and Discussion

The effective and efficient exploration of 3D contextual information is essential in volumetric data analysis. Although the performance of CNNs in 2D image analysis is impressive, the predictive power of their 3D generalization (i.e., 3D CNNs) is often constrained by the number of training samples, especially in biomedical image analysis. Considering the conflict between the huge number of parameters to learn in 3D CNNs and the limited training samples, which quickly leads to overfitting, we propose a novel 3D-DSC module to replace traditional 3D convolutional kernels. The proposed 3D-DSC module consists of a series of densely connected 1D filters. This architecture removes the redundancy within 3D kernels while providing room for deepening the network, and therefore effectively reduces the risk of overfitting. In addition, inspired by the recent success of residual and densely connected networks, we extend the 3D separated convolution block by introducing dense connections within and between blocks. The dense connections provide an effective way to combine subsequent layers and facilitate the flow of information. Furthermore, we investigate the effect of nonlinear activation layers between the concatenated 1D filters, which have the potential to increase the representational power of the network and facilitate the learning of discriminative features. Experimental results on ADHD classification and brain tumor segmentation demonstrate the superiority of the proposed 3D-DSC for volumetric image analysis. Note that 3D-DSC is not limited to any specific architecture or application and can be used to boost performance by directly substituting for the original 3D convolutional kernels.

References

  • [ADHD-200, 2018] ADHD-200. The adhd-200 global competition. http://fcon_1000.projects.nitrc.org/indi/adhd200/results.html, 2018.
  • [Chen et al., 2016] Jianxu Chen, Lin Yang, Yizhe Zhang, Mark S Alber, and Danny Z Chen. Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation. In NIPS, pages 3036–3044, 2016.
  • [Chen et al., 2017] Muyuan Chen, Wei Dai, Stella Y Sun, Darius Jonasch, Cynthia Y He, et al. Convolutional neural networks for automated annotation of cellular cryo-electron tomograms. Nature Methods, 14(10):983, 2017.
  • [Choromanska et al., 2015] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann Lecun. The loss surfaces of multilayer networks. In AISTATS, pages 192–204, 2015.
  • [Cicek et al., 2016] Ozgun Cicek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: Learning dense volumetric segmentation from sparse annotation. In MICCAI, pages 424–432, 2016.
  • [Dai et al., 2012] Dai Dai, Jieqiong Wang, and Huiguang He. Classification of adhd children through multimodal magnetic resonance imaging. Frontiers in systems neuroscience, 6:63, 2012.
  • [Denton et al., 2014] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann Lecun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, pages 1269–1277, 2014.
  • [Guo et al., 2014] Xiaojiao Guo, An Xiu, Deping Kuang, Yilu Zhao, and Lianghua He. ADHD-200 Classification Based on Social Network Method. 2014.
  • [He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
  • [Howard et al., 2017] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, et al. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv: Computer Vision and Pattern Recognition, 2017.
  • [Huang et al., 2017] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, pages 2261–2269, 2017.
  • [IPCAS, 2018] The Magnetic Resonance Imaging Research Center IPCAS. The r-fmri maps project. http://mrirc.psych.ac.cn/RfMRIMaps, 2018.
  • [Isensee et al., 2017] Fabian Isensee, Philipp Kickingereder, Wolfgang Wick, Martin Bendszus, and Klaus H Maierhein. Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. In MICCAI, pages 287–297, 2017.
  • [Jaderberg et al., 2014] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
  • [Khosravan and Bagci, 2018] Naji Khosravan and Ulas Bagci. S4nd : Single-shot single-scale lung nodule detection. In MICCAI, pages 794–802, 2018.
  • [Lai, 2015] Matthew Lai. Deep learning for medical image segmentation. arXiv: Learning, 2015.
  • [Lebedev et al., 2015] Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan V Oseledets, and Victor S Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. In ICLR, 2015.
  • [Lecun et al., 1998] Yann Lecun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [Lee et al., 2015] Kisuk Lee, Aleksandar Zlateski, Ashwin Vishwanathan, and H Sebastian Seung. Recursive training of 2d-3d convolutional networks for neuronal boundary detection. In NIPS, pages 3573–3581, 2015.
  • [Liang et al., 2017] Zou Liang, Jiannan Zheng, Chunyan Miao, Martin J. Mckeown, and Z. Jane Wang. 3d cnn based automatic diagnosis of attention deficit hyperactivity disorder using functional and structural mri. IEEE Access, 5(99):23626–23636, 2017.
  • [Menze et al., 2015] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, and K. Farahani et al. The multimodal brain tumor image segmentation benchmark (brats). IEEE Transactions on Medical Imaging, 34(10):1993–2024, 2015.
  • [Milletari et al., 2016] Fausto Milletari, Nassir Navab, and Seyed Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Fourth International Conference on 3d Vision, 2016.
  • [Rigamonti et al., 2013] Roberto Rigamonti, Amos Sironi, Vincent Lepetit, and Pascal Fua. Learning separable filters. pages 2754–2761, 2013.
  • [Srivastava et al., 2015] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv: Machine Learning, 2015.
  • [Szegedy et al., 2016a] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, pages 4278–4284, 2016.
  • [Szegedy et al., 2016b] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, pages 2818–2826, 2016.
  • [Wainberg et al., 2018] Michael Wainberg, Daniele Merico, and Andrew Delong & Brendan J Frey. Deep learning in biomedicine. Nature biotechnology, 36(9):829, 2018.
  • [Wolterink et al., 2016] Jelmer M Wolterink, Tim Leiner, Max A Viergever, and Ivana Isgum. Dilated convolutional neural networks for cardiovascular mr segmentation in congenital heart disease. arXiv: Computer Vision and Pattern Recognition, pages 95–102, 2016.
  • [Zhang et al., 2018] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, pages 6848–6856, 2018.
  • [Zheng et al., 2019] Hao Zheng, Yizhe Zhang, Lin Yang, Peixian Liang, Zhuo Zhao, Chaoli Wang, and Danny Z Chen. A new ensemble learning framework for 3d biomedical image segmentation. In AAAI, 2019.