A Peek Into the Hidden Layers of a Convolutional Neural Network Through a Factorization Lens
Abstract.
Despite their increasing popularity and success in a variety of supervised learning problems, deep neural networks are extremely hard to interpret and debug: Given an already trained deep neural network, and a set of test inputs, how can we gain insight into how those inputs interact with different layers of the neural network? Furthermore, can we characterize a given deep neural network based on its observed behavior on different inputs? In this paper, we propose a novel factorization-based approach to understanding how different deep neural networks operate. In our preliminary results, we identify fascinating patterns that link the factorization rank (typically used as a measure of interestingness in unsupervised data analysis) with how well or poorly the deep network has been trained. Finally, our proposed approach can help provide visual insights into how high-level, interpretable patterns of the network’s input behave inside the hidden layers of the deep network.
1. Introduction
Deep neural networks have gained enormous popularity in machine learning and data science alike, and rightfully so, since they have demonstrated impressive performance in a variety of supervised learning tasks, especially a number of computer vision problems; prominent examples include (He et al., 2017; Huang et al., 2017). Although very successful in providing accurate classifications, deep neural networks are notorious for being hard to interpret, explain, and debug, a problem amplified by their increasing complexity. This is an extremely challenging problem, and the jury is still out on whether it can be solved in its entirety.
Within the confines of interpreting and debugging deep neural networks, we are interested in answering the following questions: Given an already trained deep neural network, and a set of test inputs, how can we gain insight into how those inputs interact with different layers of the neural network? Furthermore, can we characterize a given deep neural network based on its observed behavior on different inputs?
To the best of our knowledge, there is very limited prior work on the topic. One line of work that attempts to answer such questions is “Network Dissection” by Bau et al. (Bau et al., 2017) and “The Building Blocks of Interpretability” by Olah et al. (Olah et al., 2018). Network Dissection is a framework that quantifies the interpretability of activations of hidden layers of CNNs. It does so by evaluating the alignment between neural activations in the hidden units and a set of semantic concepts. The work by Olah et al. focuses on learning what each neuron or group of neurons detects based on feature visualization, and then attempts spatial attribution and channel attribution in order to explain how the network assembles these pieces to arrive at a decision. More recently, Raghu et al. (Raghu et al., 2017) introduced a Canonical Correlation Analysis (CCA) based study that jointly analyzes the hidden layers of a CNN; however, this analysis does not relate the derived CCA representations to the input data, and thus may not be able to provide an end-to-end characterization and visualization. Finally, a concurrent study to ours by Sedghi et al. (Sedghi et al., 2018) focuses on analyzing the singular values of the convolutional layers of a CNN towards better regularization and quality improvement. To the best of our knowledge, our work is the first approach towards characterizing the quality of a CNN through a joint factorization lens.
In this work, we propose a novel research direction that leverages factorization towards answering the above questions. The key idea behind our work is the following: we jointly factorize the raw inputs to the deep neural network and the outputs of each layer, into the same low-dimensional space. Intuitively, such a factorization will seek to identify commonalities in different parts of the raw input and how those are reflected and processed within the network. For instance, if we are dealing with a deep convolutional neural network that is classifying handwritten digits, such a joint latent factor will seek to identify different shapes that are common in a variety of input classes (e.g., round shapes for “0”, “6”, and “9”) and identify potential correlations in how different layers behave collectively for such high-level latent shapes.
This paper reports very preliminary work in that direction. The main contributions of this paper are:

Novel problem formulation & modeling: We propose a novel problem formulation on providing insights into a deep neural network via a joint factorization model.

Experimental observations: In three experimental case studies, we train a Convolutional Neural Network in various problematic ways, and our proposed formulation reveals a persistent pattern that indicates a relation between the rank of the joint factorization and the quality of training. It is very important to note that those patterns are revealed without using labels for the test data.

Visualization Tool: In addition to the link between the factorization rank and the training quality, our proposed method can produce visualizations that offer insight into how different high-level shapes/parts of the input data traverse the network.
2. Proposed Method
As mentioned in the introduction, given an already trained neural network and a set of test data (without their labels), we seek to factorize the input data and the output of each hidden layer for the same data, into a joint low-dimensional space. A high-level overview of our proposed modeling is shown in Figure 1.
The above formulation can be seen as capturing the joint latent factors that characterize the input data and the nonlinear transformations thereof through the deep neural network.
In the following lines we provide details of our model and the fitting algorithm.
2.1. Model Details
The objective function for our coupled nonnegative factorization is as follows:
(1) $\min_{\{A_c\},\, \{B_l\},\, S \,\geq\, 0} \quad \sum_{c=0}^{C-1} \left\lVert P_c - A_c S^T \right\rVert_F^2 \;+\; \sum_{l=1}^{L} \left\lVert O_l - B_l S^T \right\rVert_F^2$,
where $C$ is the number of channels in an input image, $L$ is the number of layers of the neural network being analyzed, and $P = \{P_c\}$ and $O = \{O_l\}$ are sets of matrices. Each $P_c$ holds the $c$-th channel of the input images to the neural network: each column of $P_c$ is the $c$-th channel of an image in vectorized form, so each row of $P_c$ corresponds to a pixel or location in the original image. For grayscale images the number of channels is 1, hence in such a scenario each column of $P_0$ represents an input image fed to the neural network. Similarly, each $O_l$ is the matrix of activations of the $l$-th layer of the neural network: the $i$-th column of $O_l$, i.e., $O_l(:, i)$, is the activation of layer $l$ of the network for the $i$-th input image, whose $c$-th channel is represented by $P_c(:, i)$.
Each $A_c$ is a matrix that stores the latent representation of each pixel (for the $c$-th channel) in its rows, and each $B_l$ is a matrix that stores the latent representation of each neuron (or activation) of layer $l$ in its rows. Finally, $S$ is a matrix that stores the latent representation of each image fed to the neural network in its rows.
The first summation of the objective function is geared toward finding pixel-level structure in the input images (across all channels), while the second summation term seeks patterns among the neural activations for various inputs. The coupling matrix $S$ propagates information between the two summations; our goal in doing so is to infer correlations between the clustering of input images and the clustering of neurons (across all layers), i.e., to investigate whether the same cluster of neurons fires for similar (yet not identical) inputs. This approach is inspired by the broader goal of understanding the relation between the discriminative power of neural networks and their interpretability.
We solve equation (1) using the algorithm provided in (Lee and Seung, 2001) and the update steps are as follows:
$$A_c \leftarrow A_c \circledast \frac{P_c S}{A_c S^T S}, \qquad B_l \leftarrow B_l \circledast \frac{O_l S}{B_l S^T S}, \qquad S \leftarrow S \circledast \frac{\sum_{c} P_c^T A_c + \sum_{l} O_l^T B_l}{S \left( \sum_{c} A_c^T A_c + \sum_{l} B_l^T B_l \right)},$$
where $\circledast$ stands for the Hadamard (element-wise) product and the fraction bar denotes element-wise division. For numerical stability, a small constant is added element-wise to each denominator matrix. We initialize $S$, the $A_c$'s, and the $B_l$'s randomly with component values between 0 and 1.
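For concreteness, the coupled multiplicative updates can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function and variable names (`coupled_nmf`, `P_list`, `O_list`) are our own, and the code assumes the coupled model $P_c \approx A_c S^T$, $O_l \approx B_l S^T$ described above.

```python
import numpy as np

def coupled_nmf(P_list, O_list, rank, n_iter=200, eps=1e-9, seed=0):
    """Coupled nonnegative factorization: P_c ~ A_c S^T and O_l ~ B_l S^T,
    with the image-factor matrix S shared across all blocks.
    Fitted with Lee-Seung-style multiplicative updates."""
    rng = np.random.default_rng(seed)
    T = P_list[0].shape[1]  # number of test images
    A = [rng.random((P.shape[0], rank)) for P in P_list]
    B = [rng.random((O.shape[0], rank)) for O in O_list]
    S = rng.random((T, rank))
    for _ in range(n_iter):
        # Update pixel factors A_c and neuron factors B_l (standard NMF step).
        A = [Ac * (Pc @ S) / (Ac @ (S.T @ S) + eps) for Ac, Pc in zip(A, P_list)]
        B = [Bl * (Ol @ S) / (Bl @ (S.T @ S) + eps) for Bl, Ol in zip(B, O_list)]
        # Update the coupling matrix S: numerator and denominator sum over all blocks.
        num = sum(Pc.T @ Ac for Ac, Pc in zip(A, P_list)) + \
              sum(Ol.T @ Bl for Bl, Ol in zip(B, O_list))
        den = S @ (sum(Ac.T @ Ac for Ac in A) + sum(Bl.T @ Bl for Bl in B)) + eps
        S = S * num / den
    return A, B, S

def rmse(P_list, O_list, A, B, S):
    """Root-mean-square reconstruction error over all blocks of the model."""
    sq = sum(np.sum((Pc - Ac @ S.T) ** 2) for Pc, Ac in zip(P_list, A))
    sq += sum(np.sum((Ol - Bl @ S.T) ** 2) for Ol, Bl in zip(O_list, B))
    n = sum(P.size for P in P_list) + sum(O.size for O in O_list)
    return np.sqrt(sq / n)
```

Note that the $S$ update is exactly the Lee-Seung step for the stacked matrix $[P_0; \dots; O_L] \approx [A_0; \dots; B_L]\, S^T$, so each iteration does not increase the objective.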
The model above is a proof-of-concept and does not leverage higher-order correlations in the input and the activation layers; such higher-order dynamics can be exploited via tensor modeling, which is an active direction in which we are extending this model.
3. Experimental Analysis
In this section we present our analysis of the neural network via our coupled nonnegative factorization framework. We proceed as follows:

We first provide details about the experimental setup: Dataset and the Neural Network.

We describe how we set up our model for analysis of the network.

Next, we study the behavior of the network on a fixed test set, with respect to variations in the amount of training data, via our model (1).

We study similar behavior as above, though this time we train the network only on a subset of categories.
3.1. Dataset and The Network
We used the raw MNIST dataset (https://github.com/myleott/mnist_png), which contains about 60,000 training images and 6,000 test images. Each image is single-channel grayscale with a resolution of 28 by 28 pixels.
The network we analyze consists of 2 consecutive convolutional layers, each with max-pooling and ReLU, followed by a fully connected layer that feeds into a softmax output. For our study we focus on analyzing the 2 convolutional layers. The first convolutional layer has 1 input channel and yields 10 output channels with a kernel size of , which leads to a max-pool and subsequently to a ReLU output. This output is fed to the second convolutional layer, which takes 10 input channels and outputs 20 channels, again with a kernel size of with a subsequent max-pool and ReLU. We refer to the output of the ReLU as the activations for that layer, given the input.
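Since the exact kernel size is not stated above, the following back-of-the-envelope sketch assumes 5×5 valid convolutions with 2×2 max-pooling (a common choice for this 10/20-channel MNIST architecture; an assumption for illustration only) to count how many activations each layer produces for a 28×28 input:

```python
# Count of activations ("neurons") per conv block for a 28x28 grayscale input.
# Kernel size 5x5 with no padding and 2x2 max-pooling is assumed, not stated
# in the text.
def conv_block(h, w, c_out, k=5, pool=2):
    h, w = h - k + 1, w - k + 1   # valid convolution shrinks each spatial dim
    h, w = h // pool, w // pool   # 2x2 max-pool halves each spatial dim
    return h, w, h * w * c_out    # spatial dims and total activation count

h, w, n1 = conv_block(28, 28, c_out=10)  # layer 1: 12 x 12 x 10 = 1440 activations
h, w, n2 = conv_block(h, w, c_out=20)    # layer 2:  4 x  4 x 20 =  320 activations
print(n1, n2)  # -> 1440 320
```

These per-layer counts are the row dimensions of the activation matrices constructed in the next subsection.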
3.2. Setting up the model
In this section we describe how we construct the matrices $P_c$ and $O_l$. For simplicity we consider only grayscale (single-color-channel) input images. We take an input image that is fed to the network, vectorize it, and stack it as a column of $P_0$; note that the subscript is 0 since the input is a single-channel image. Thus, when we take the $i$-th input image fed to the network, we vectorize it and set the $i$-th column of $P_0$, i.e., $P_0(:, i)$, equal to the vectorized form of that image. For this image, we vectorize the activations of the $l$-th layer, then we store the vectorized activations of the $l$-th layer for the $i$-th image in $O_l(:, i)$. We repeat this process for all images in the test set. If the input image is of size $m \times n \times 1$ (single channel), and the number of images in the test set is $T$, then $P_0 \in \mathbb{R}_+^{mn \times T}$. Let $n_l$ be the number of neurons in layer $l$ of the network. Then $O_l \in \mathbb{R}_+^{n_l \times T}$.
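The construction above amounts to a couple of reshape-and-transpose operations. The sketch below assumes the test images and per-layer activations are already available as NumPy arrays; all names and shapes are illustrative:

```python
import numpy as np

# Assemble P_0 (pixels x images) and the O_l's (neurons x images) from a batch
# of grayscale test images and per-layer activation tensors. `images` is
# (T, m, n); each activation tensor is (T, channels, height, width).
def build_matrices(images, activations_per_layer):
    T = images.shape[0]
    P0 = images.reshape(T, -1).T              # column i = vectorized image i
    O = [act.reshape(T, -1).T                 # column i = vectorized activations
         for act in activations_per_layer]    # of layer l for image i
    return P0, O
```

For a 28×28 MNIST test set of $T$ images, `P0` has shape `(784, T)`, and each `O[l]` has one row per neuron of layer $l$.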
3.3. Case Study I
In this section we describe the behavior of the network when we provide it with increasing amounts of training data, thereby improving its performance on our test set. In this evaluation exercise, we train the network on dataset sizes varying from 25% to 100% of the training set, in increments of 25%. The accuracy on the test set for each sample is as follows: for 25% of the data, 83%; for 50%, 89%; for 75%, 93%; for 100%, 95%. For each sample size, we only train over 1 epoch of the data to maintain uniformity. We store the test set and the activations of the network over the test set in the $P_c$'s and $O_l$'s, respectively, as explained earlier. We run the coupled nonnegative factorization model once we obtain the $P_c$'s and $O_l$'s for a particular instance of the experimental exercise. The number of latent factors in (1) is varied from 10 to 50 in increments of 10.
The results are tabulated in Figure 3. We observe that the outputs of the network, when trained on smaller datasets, tend to be more compressible, i.e., they require a lower number of latent dimensions to explain, as evinced by the lower RMSE for smaller datasets over all latent dimension values. This understanding is further reinforced when we look at the top singular values of all activation matrices $O_l$ of the network under different training scenarios in Figures 4 and 5. Especially when we look at the singular value plots of the deepest layer of the network, given by Figure 5, we observe a clear difference between the singular value spectra in the various training scenarios. The final layers of a network are usually fully connected layers followed by softmax or sigmoid nonlinearities, and the goal of the preceding layers is to nonlinearly project the input vector into a space where vectors belonging to the various classes are easily separable by applying a fully connected layer with softmax. It becomes amply clear that a network with poorer performance transforms the dataset into a much lower-dimensional subspace than a well-trained, high-performance network. We would like to emphasize that, if this assertion is accurate and holds broadly, our model does not need test-data annotations to investigate the relative performance of neural networks.
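The spectral contrast can be illustrated on synthetic data: an activation matrix whose columns concentrate in a low-dimensional subspace has a sharply collapsing singular-value spectrum, while one with activations spread over many directions decays slowly. The toy sketch below (our own illustration, not the paper's experiment; the sizes and noise level are arbitrary) mimics that difference:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 320, 500  # neurons x images (toy sizes)

# "Poorly trained": activations concentrated in a rank-5 subspace, plus noise.
low = rng.random((n, 5)) @ rng.random((5, T)) + 0.01 * rng.random((n, T))
# "Well trained": activations spread over many directions.
full = rng.random((n, T))

def spectrum(M):
    """Singular values normalized by the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return s / s[0]

s_low, s_full = spectrum(low), spectrum(full)
# The low-rank spectrum collapses after the 5th singular value; the
# full-rank spectrum decays far more gradually.
print(s_low[5], s_full[5])
```

In the paper's setting, `low` and `full` would be the deepest-layer activation matrices of a poorly and a well trained network, respectively.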
In order to investigate the generality of the observed pattern, we also applied our factorization to a modified version of AlexNet (Krizhevsky et al., 2012) (we obtained the modified architecture definition from https://github.com/bearpaw/pytorch-classification, where the modification was due to changing the size of the input), both on the MNIST and CIFAR-10 datasets, and the observations persisted. Figure 6 shows the RMSE for varying percentages of the training input for CIFAR-10 on AlexNet.
3.4. Case Study II
In this section we describe the behavior of the network when we provide it with a subset of the original classes of the training data. We train the network on the following subsets of digits: {0}, {0,1,2,3,4}, {5,6,7,8,9}, and {0,1,2,3,4,5,6,7,8,9}. The test set accuracies for the respective cases are 9%, 50%, 47%, and 93%. The number of training examples for digits 0 through 4 was slightly larger than for digits 5 through 9, hence the slight variation in accuracy. For each subset, we only train over 1 epoch of the relevant data to maintain uniformity. As explained earlier, we store the test set and the activations of the network over the test set in the $P_c$'s and $O_l$'s, respectively, and the rest of the setup is the same as in subsection 3.3. The results of our model's analysis on this training setup are shown in Figure 7. We can clearly see that when the training data is small and/or only a class-based subset of the original training data, the outputs of the network are much more compressible, as indicated by the lower RMSE for the respective cases. This observation is further consolidated by evidence from the singular value spectra (Figure 8) of the final activation layer of the network when trained with different subsets of classes.
3.5. Case Study III
In this study, we train the network in such a way that the input training examples are not fed in an arbitrary order, but instead on a class-by-class basis. For example, first all the images for digit 1 would be given as input to the network for training, then all images for digit 2, and so on. For the purpose of this study, the first training class was the digit 9. The resulting accuracy of the network on the test set was poor: the network correctly recognizes mostly the digit 9. As before, this is done only for 1 epoch over the dataset.
What makes this study interesting from our point of view is that, although the network has been trained on the entire dataset, having been trained in such an orderly fashion it is only good at recognizing the first class it was trained on. It would therefore be interesting to see which neurons fire for the other digits, and whether there are any patterns (similarities or contrasts) between the firing of neurons of a layer for various input digits.
During our experimentation with the MNIST dataset under this setting, the visualizations for the final convolutional layer in one of the test setups yielded the following results:

First, considering the case in which the network performs well, i.e., when the input is the digit 9, the neurons in the final layer that were active when the input was the digit 9 (see (a)) were also active when the input images were certain other digits (see (b) and (c), respectively), indicating a commonality of structure among these digits.
Finally, in Figure 9 we show the latent factor heatmap for each neuron, for both layers of our CNN, for a rank-10 factorization. Even at a quick glance at the heatmaps, it becomes apparent that most of the network is not properly utilized in the case where we do not shuffle during training (which further corroborates our low-rank observation). We reserve further investigation of the visualization capabilities of our formulation for future work.
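One simple numeric proxy for "most of the network is not properly utilized" is the fraction of neurons whose entire row in a neuron factor matrix $B_l$ is negligible. The sketch below is our own illustrative heuristic, not from the paper; the function name and the 5% relative threshold are assumptions:

```python
import numpy as np

# Treat row i of B (neurons x rank) as the latent "firing profile" of neuron i,
# and call a neuron unused when its entire row is negligible relative to the
# largest loading in B. The 5% threshold is an illustrative choice.
def unused_fraction(B, rel_tol=0.05):
    row_peak = B.max(axis=1)                  # strongest loading per neuron
    return np.mean(row_peak < rel_tol * B.max())

rng = np.random.default_rng(0)
B = rng.random((320, 10))   # toy neuron-factor matrix, rank-10 factorization
B[:200] *= 1e-4             # simulate a network where most neurons never fire
print(unused_fraction(B))
```

Applied to the factor matrices behind Figure 9, such a statistic would let the "unshuffled training" effect be tracked quantitatively rather than by visual inspection alone.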
4. Conclusions
In this paper, we introduce a novel factorization-based method for providing insights into a deep convolutional neural network. In three experimental case studies, we identify a prominent pattern that links the rank of the factorization (roughly, a measure of the degree of “interestingness” in a high-dimensional dataset) with the quality with which the network was trained: the poorer the training, the lower the rank. Importantly, this observation is derived in the absence of test labels. We intend to further investigate whether this observation holds in a wide variety of cases, and what other implications that would entail. Finally, we provide a visualization tool that helps shed light on how different cohesive high-level patterns in the input data traverse the hidden layers of the network.
5. Acknowledgements
The authors would like to thank NVIDIA for a GPU grant which facilitated computations in this work.
References
 (1)
 Bau et al. (2017) David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. Network Dissection: Quantifying Interpretability of Deep Visual Representations. In Computer Vision and Pattern Recognition.
 He et al. (2017) Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. 2017. Mask RCNN. In Proceedings of the International Conference on Computer Vision (ICCV).
 Huang et al. (2017) Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097–1105.
 Lee and Seung (2001) Daniel D. Lee and H. Sebastian Seung. 2001. Algorithms for Non-negative Matrix Factorization. In Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich, and V. Tresp (Eds.). MIT Press, 556–562. http://papers.nips.cc/paper/1861-algorithms-for-non-negative-matrix-factorization.pdf
 Olah et al. (2018) Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. 2018. The Building Blocks of Interpretability. Distill (2018). DOI:http://dx.doi.org/10.23915/distill.00010 https://distill.pub/2018/building-blocks.
 Raghu et al. (2017) Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. 2017. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. In Advances in Neural Information Processing Systems. 6078–6087.
 Sedghi et al. (2018) Hanie Sedghi, Vineet Gupta, and Philip M Long. 2018. The Singular Values of Convolutional Layers. arXiv preprint arXiv:1805.10408 (2018).