Exploring Feature Reuse in DenseNet Architectures
Densely Connected Convolutional Networks (DenseNets)  have been shown to achieve state-of-the-art results on image classification tasks while using fewer parameters and computation than competing methods. Since each layer in this architecture has full access to the feature maps of all previous layers, the network is freed from the burden of having to relearn previously useful features, thus alleviating issues with vanishing gradients. In this work we explore the question: To what extent is it necessary to connect to all previous layers in order to reap the benefits of feature reuse? To this end, we introduce the notion of local dense connectivity and present evidence that less connectivity, allowing for increased growth rate at a fixed network capacity, can achieve a more efficient reuse of features and lead to higher accuracy in dense architectures.
Deep networks have been getting deeper in recent years [2,3,4], and with increased depth, challenges such as vanishing gradients can arise. To combat these issues, architectures have been proposed that connect more distant layers directly to one another. Generally referred to as identity connections or skip connections, these methods vary not only in network topology but also in the nature of the connectivity itself.
Inspired by the gating mechanism found in LSTMs , layers in Highway Networks  learn to regulate information flow across local skip connections. Rather than learning to gate local connections, ResNets  achieve a similar kind of regulation by learning residual functions; in this case, information from previous layers is carried forward through the addition operation.
Other networks feature connectivity patterns that carry information directly across larger depths. Training deep ResNets with Stochastic Depth  does this implicitly: subsets of layers are randomly dropped (for each mini-batch) and replaced with the identity function. With Drop-Path training of FractalNets , where connections in a fractal-inspired architecture are dropped during training, ResNet-level  performance is achieved without residual connections.
This trend towards increased connectivity and across greater depths culminated in the DenseNet architecture  wherein every layer is connected to every other layer.
In this work, we relax the fully-dense connectivity of DenseNet by introducing network architectures where dense connectivity within each dense block is limited to only previous layers. Since we parameterize these networks by this dense window size , we refer to these architectures as WinDenseNet-N. Compared to the baseline DenseNet-40 model  on CIFAR-10 , we show that limiting connectivity in this way can greatly reduce the number of parameters (and thus training time) with only a small reduction in accuracy. More importantly, we provide evidence that WinDenseNets, at various window sizes, can utilize parameter capacity more effectively than DenseNets: for a fixed capacity, networks with lower dense connectivity and higher growth rate can outperform their fully-dense counterparts. Further, we offer insight into why this may be the case by visualizing feature reuse in these dense architectures.
The DenseNet architecture  consists of a series of dense blocks where each layer within a dense block is densely connected to all preceding layers (one dense block shown at top of figure 1). Notably, and in contrast to the additive connectivity of ResNets, information flow in DenseNets occurs using feature map concatenation. Since feature concatenation can only be performed with feature maps of the same size, dense blocks are separated by transition layers wherein down-sampling occurs via pooling. Capacity is parameterized by the number of new convolutional feature maps generated at each layer: the growth rate of the network.
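As a rough illustration (ours, not the authors' code), the per-layer input width implied by this concatenation scheme can be sketched in a few lines; the channel counts in the comment are hypothetical:

```python
def dense_input_channels(layer_idx, block_input_channels, growth_rate):
    """Input channels seen by layer `layer_idx` (0-indexed) in a fully-dense
    block: the block's input maps plus one growth_rate-sized output from each
    of the `layer_idx` preceding layers, all concatenated together."""
    return block_input_channels + layer_idx * growth_rate

# With hypothetical values (16 input channels, growth rate 12), layer 3
# convolves over 16 + 3 * 12 = 52 concatenated channels.
```

This linear growth in input width with depth is why concatenation-based connectivity, unlike ResNet addition, makes later layers progressively wider.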
Our proposed architecture limits the connectivity of a target layer within a dense block to only previous (source) layers. For example, with a dense connectivity window of a given size, each layer in a dense block takes as input the concatenation of feature maps from at most that many preceding source layers (bottom of figure 1). Note that the 1st and 2nd layers have only one and two source layers available, respectively, and take all of them as input. Further, note that the (transition or final) layer immediately following the dense block is limited to the same window of prior layers.
For a dense block having layers, a dense connectivity window of size is required for the layer immediately following a dense block to be able to reach back to include feature maps that first entered the dense block (orange maps in figure 1) and, in this case, the network has the equivalent of full DenseNet connectivity. Since maps first entering dense blocks can often number in the hundreds, lowering dense connectivity even by one can lead to a dramatic drop in the number of trainable parameters. For a fixed capacity, the notion of limited dense connectivity explores the idea of dropping distant (potentially less useful) features in exchange for the improvement that can be gained by increasing network growth rate (number of filters/feature maps at each layer). In the following section we show how our proposed locally dense-connected networks can lead to a more efficient use of parameters and, subsequently, higher accuracy than DenseNet-40 given a fixed capacity.
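A minimal sketch of this bookkeeping, assuming 0-indexed layers and treating the block input as the oldest source (the channel counts below are hypothetical, not taken from the paper):

```python
def windowed_input_channels(layer_idx, window, block_input_channels, growth_rate):
    """Input channels for layer `layer_idx` (0-indexed) when it may only
    concatenate its `window` most recent sources. Sources, oldest first:
    the block input, then the outputs of layers 0 .. layer_idx - 1
    (growth_rate channels each)."""
    used = min(window, layer_idx + 1)      # sources actually reachable
    if used > layer_idx:                   # window reaches back to block input
        return block_input_channels + layer_idx * growth_rate
    return used * growth_rate              # only prior layer outputs remain

# Treating the transition layer after a 12-layer block as "layer 12":
# with 168 hypothetical input maps and growth rate 12, a window of 13
# keeps all 168 + 12*12 = 312 channels, while shrinking the window by
# just one (to 12) drops the block input entirely, leaving 12*12 = 144.
```

The usage note above illustrates the point in the text: because block inputs can number in the hundreds of maps, reducing the window by one removes a disproportionately large slice of the parameters.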
For our experiments we begin with the reference DenseNet-40 model from  with the default dense block growth rate of . This base architecture contains dense blocks, each with densely connected layers composed of Batch Normalization, ReLU, and a 3x3 convolution. The total number of trainable parameters of this network is . Neither bottleneck layers nor transition layer compression is used in our experiments. Training times are reported from training on an NVIDIA GeForce GTX 1080 Ti graphics card.
In all our experiments, unless otherwise noted, all training hyper-parameters were kept the same as in the original paper. Specifically, SGD was used with a batch size of , momentum of , weight decay of , and dropout set to retain . The learning rate schedule begins at , changes to at epoch , and is finally lowered to at epoch . Training on CIFAR-10  ends after epochs and accuracy on the test set is reported. We do not use data augmentation. For code, we expand upon the TensorFlow  port  of the original Torch implementation .
The original DenseNet paper  reported a no-augmentation DenseNet-40 accuracy of on CIFAR-10. We achieved a comparable result of . This small difference may be due to using TensorFlow instead of the original Torch implementation, or to other factors such as random initialization.
3.1 Dense Window Connectivity
Leaving all other factors unchanged, we modify the reference DenseNet architecture by varying the amount of local dense connectivity and measuring the effect on accuracy, training time, and the number of trainable network parameters. For these experiments we vary the dense window size from to . The smallest value is similar to a traditional feed-forward convolutional architecture, where the filters of any given layer convolve only over the previous layer's feature maps. The largest dense window value corresponds to each layer within a dense block having convolutional access to all previous layers within the dense block. Note that this largest value is necessary for the transition or final layer that follows the dense block to have access to that dense block's input maps (with a smaller dense window, the subsequent transition layer can no longer reach the maps that were first input into the block).
The results in table 1 and figure 3 demonstrate that our proposed dense windowed connectivity pattern greatly reduces the number of network parameters and the training time, with only modest reductions in accuracy across a range of window values. This provides evidence that the full connectivity pattern proposed in DenseNet  may not always be necessary in order to achieve good performance. Reduced training time allows for a broader hyper-parameter grid search, and smaller models are more amenable to end-user applications (e.g. mobile); such benefits have driven the popularity of network compression in recent years.
3.2 Capacity Normalization
In the previous section, our proposed method (WinDenseNet) was compared directly to DenseNet despite having far fewer trainable parameters (figure 3). Here we wish to compare DenseNet-40 and WinDenseNet by normalizing for network capacity.
To normalize a network A to have the same capacity as a network B, we vary the growth rate of A until it has the same number of parameters as B. However, since A and B have different structures, it is unlikely that any integer growth rate will match the capacity of B exactly. For a fair comparison, we therefore find the two consecutive integer growth rates whose parameter counts bracket that of B, train A at both, and obtain both test accuracies. Finally, we linearly interpolate between the two capacity bounds in order to obtain a good estimate for the accuracy of a network with the same capacity as B.
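The bracketing-and-interpolation step can be sketched as follows (the function and variable names are ours, for illustration only):

```python
def estimate_accuracy_at_capacity(params_lo, acc_lo, params_hi, acc_hi,
                                  params_target):
    """Linearly interpolate between two trained (parameter count, accuracy)
    points that bracket the target capacity, estimating the accuracy of a
    network with exactly `params_target` parameters."""
    t = (params_target - params_lo) / (params_hi - params_lo)
    return acc_lo + t * (acc_hi - acc_lo)

# If growth rates g and g+1 give 0.8M/92.0% and 1.2M/93.0%, a 1.0M-parameter
# target capacity is estimated at the midpoint of the two accuracies.
```

This assumes accuracy varies roughly linearly between the two nearby capacities, which is reasonable given how close consecutive integer growth rates are.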
Having a normalization strategy in place, we capacity-normalize WinDenseNets at varying window sizes to each have the same capacity as the full DenseNet-40 (number of trainable parameters = ), and compare accuracy. The results are shown in dark blue in figure 4. For certain dense window sizes, WinDenseNets not only benefit from the increased capacity but utilize this capacity more effectively than full DenseNet connectivity (light orange line). In other words, it can be more effective (say at window size ) to allocate network capacity toward increased growth rate rather than allocate those same parameters toward increased dense connectivity.
Next, instead of increasing the capacity of WinDenseNets to match that of DenseNet-40, we decrease the capacity of DenseNet-40 to match the reduced capacity of WinDenseNet at each window size, and measure performance. These results are shown in dark orange in figure 4. For small window sizes (less than ), and at their correspondingly very low capacities (see table 1), the full connectivity provided by DenseNet prevails. However, for larger dense window sizes, figure 4 demonstrates that full dense connectivity can come at a cost: parameter resources allocated toward connectivity may, again, be better applied toward increasing the number of feature maps at each layer (increasing growth rate).
One can measure the relative importance that a given target layer places on the feature maps its filters convolve over by measuring the relative mean filter strengths corresponding to those source feature maps. This provides some insight into how much a given layer reuses features from previous layers. A target layer within a network having local dense connectivity is connected to at most that many preceding source layers. Each of the target layer's filters convolves across the concatenation of feature maps from these source layers. We take the mean value of all learned filter weights that correspond to feature maps from a given source layer, which yields a single number per source layer. Finally, we normalize these mean filter strengths so that the maximum value is 1 (each column is independently normalized, so at least one value must be 1). Each column in figure 5 therefore corresponds to a given target layer's relative interest in (or reliance on) its input feature maps; in other words, how much a target layer reuses feature maps from previous layers.
In figure 5, one can see that networks with small dense window sizes have learned to benefit most from using feature maps from earlier layers (for each column, the highest value (dark red) occurs at the lowest source layer connection). This remains true within all three dense blocks.
At larger dense window sizes, networks begin to display the opposite effect: a tendency toward the strongest feature reuse originating from the nearest source layers (for each column, the dark red value occurs at the bottom).
Another interesting observation is that, as the dense window size increases, feature reuse within each dense block becomes more diverse: dense block 1 exhibits more random feature reuse, dense block 2 exhibits a more consistent and strong reuse of prior features, and dense block 3 shows diminishing interest in features from more distant source layers.
Also, one can see in figure 5 that different dense blocks reuse the features that first enter each dense block to varying degrees (top rows correspond to maps entering dense blocks). Dense block 1 exhibits strong interest in this input at almost all dense window sizes. Dense block 2 shows strong interest at small window sizes and lower interest at large window sizes (dense block 3 even less so).
This analysis provides some evidence as to why it may be unnecessary to densely connect networks to the fullest extent as in : if a network can learn to achieve good performance by reusing features only within a local window, then network capacity allocated toward further connectivity would be better applied to added representational expression (more filters/feature maps). This is especially visible in dense block 3 and provides some justification for our results in figure 4, where mid-sized dense connectivity networks led to the highest accuracy when capacity-normalized.
Lastly, figure 6 displays the normalized mean filter strength within each dense block for varying dense window sizes (the mean of the colored values in each block shown in figure 5). Once again one can see a declining trend in the reuse of features from earlier layers, especially for dense block 3 but also in the mean over all dense blocks (dashed line).
5 Conclusion/Future Work
The success of full dense connectivity  rests on the idea that reusing features from previous layers can be more important than adding new features at each layer (full connectivity with low growth rates). In this work, by introducing the notion of local dense connectivity, we have shown that there is indeed a trade-off between the amount of dense connectivity and the amount of representational expression available at each layer (number of filters/feature maps). In other words, the fully-dense connectivity pattern of DenseNet may not always be necessary, and network parameter resources may be better spent on a combination of local dense connectivity and increased growth rate. These findings were further supported by an analysis of the extent to which features were reused at various layers. In section 4.3 and figure 3 of , the authors show that DenseNets can make more efficient use of parameters than ResNets ; our proposed local dense connectivity pattern builds upon this, showing that even further parameter-efficiency gains are possible.
Our examination of feature reuse provides some evidence that different dense blocks could benefit from having different amounts of dense connectivity - an interesting avenue for future work. As well, it may be fruitful to explore the interaction between locally dense networks and bottleneck and transition layer compression as part of an expanded study of the other networks found in . The potential parameter efficiency of locally dense networks should also be useful for other tasks that utilize dense networks such as semantic segmentation .
 G. Huang, Z. Liu, L. van der Maaten, and K. Weinberger. “Densely Connected Convolutional Networks,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
 A. Krizhevsky, I. Sutskever, and G. E. Hinton. "Imagenet classification with deep convolutional neural networks," Neural Information Processing Systems (NIPS), 2012.
 K. Simonyan, A. Zisserman. "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
 K. He, X. Zhang, S. Ren, and J. Sun. "Deep residual learning for image recognition," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
 S. Hochreiter, and J. Schmidhuber. "Long short-term memory," Technical Report FKI-207-95, Technische Universitaet Muenchen, 1995.
 R. K. Srivastava, K. Greff, and J. Schmidhuber. "Training very deep networks," Neural Information Processing Systems (NIPS), 2015.
 G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. "Deep Networks with Stochastic Depth," European Conference on Computer Vision (ECCV), 2016.
 G. Larsson, M. Maire, and G. Shakhnarovich. "FractalNet: Ultra-deep neural networks without residuals," arXiv preprint arXiv:1605.07648, 2016.
 TensorFlow - https://www.tensorflow.org
 A. Krizhevsky, and G. Hinton. "Learning multiple layers of features from tiny images," Technical Report, 2009.
 S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio. "The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.