Understanding Individual Decisions of CNNs via Contrastive Backpropagation

Jindong Gu The University of Munich, Munich, Germany
Siemens AG, Corporate Technology, Munich, Germany
   Yinchong Yang The University of Munich, Munich, Germany
Siemens AG, Corporate Technology, Munich, Germany
   Volker Tresp The University of Munich, Munich, Germany
Siemens AG, Corporate Technology, Munich, Germany

A number of backpropagation-based approaches such as DeConvNets, vanilla Gradient Visualization and Guided Backpropagation have been proposed to better understand individual decisions of deep convolutional neural networks. The saliency maps produced by these approaches have been shown to be non-discriminative. Recently, the Layer-wise Relevance Propagation (LRP) approach was proposed to explain the classification decisions of rectifier neural networks. In this work, we evaluate the discriminativeness of the explanations it generates and analyze the theoretical foundation of LRP, i.e., Deep Taylor Decomposition. Our experiments and analysis show that the explanations generated by LRP are not class-discriminative. Based on LRP, we propose Contrastive Layer-wise Relevance Propagation (CLRP), which is capable of producing instance-specific, class-discriminative, pixel-wise explanations. In the experiments, we use CLRP to explain classification decisions and to understand the difference between neurons in individual classification decisions. We also evaluate the explanations quantitatively with a pointing game and an ablation study. Both qualitative and quantitative evaluations show that CLRP generates better explanations than LRP. The code is available at https://github.com/Jindong-Explainable-AI/Contrastive-LRP.

Keywords: Explainable Deep Learning · LRP · Discriminative Saliency Maps

1 Introduction

Deep convolutional neural networks (DCNNs) achieve state-of-the-art performance on many tasks, such as visual object recognition [10, 26, 29] and object detection [7, 18]. However, since they lack transparency, they are considered "black box" solutions. Recently, research on explainable deep learning has received increased attention, and many approaches have been proposed to crack the "black box". Some of them aim to interpret the components of a deep-architecture model and to understand the image representations extracted by deep convolutional architectures [12, 5, 14]; examples are Activation Maximization [6, 25] and the DeConvNets Visualization [32]. Others focus on explaining individual classification decisions; examples are Prediction Difference Analysis [21, 24], Guided Backpropagation [25, 27], Layer-wise Relevance Propagation (LRP) [3, 15], Class Activation Mapping [36, 23] and Local Interpretable Model-agnostic Explanations [20, 19].

More concretely, the models in [17, 36] were originally proposed to detect objects using only category labels. They work by producing saliency maps of objects corresponding to the category labels, and these saliency maps can, to some degree, also explain the classification decisions. However, these approaches only work on models with specific architectures; for instance, they might require a fully convolutional layer followed by a max-pooling layer, a global average-pooling layer or an aggregation layer before a final softmax output layer. This requirement does not hold for most off-the-shelf models, e.g., those in [10, 26]. The perturbation methods [20, 19, 21] require no specific architecture. For a single input image, however, they require many forward inferences to find the corresponding classification explanation, which is computationally expensive.

The backpropagation-based approaches [25, 27, 3] propagate a signal from the output neuron backward through the layers to the input space in a single pass, which is computationally efficient compared to the perturbation methods. They can also be applied to off-the-shelf models. In this paper, we focus on the backpropagation approaches. Their outputs are instance-specific because these approaches leverage instance-specific structure information (ISSInfo). The ISSInfo, equivalent to the bottleneck information in [13], consists of selected information extracted during the forward inference, i.e., the pooling switches and the ReLU masks. With the ISSInfo, the backpropagation approaches can generate instance-specific explanations. A note on terminology: although the terms "sensitivity map", "saliency map", "pixel attribution map" and "explanation heatmap" may have different meanings in different contexts, in this paper we do not distinguish them and use the terms "saliency map" and "explanation" interchangeably.

Early backpropagation-based approaches, e.g., the vanilla Gradient Visualization [25] and the Guided Backpropagation [27], are proven to be inappropriate for studying the neurons of networks because they produce non-discriminative saliency maps [13]. The saliency maps they generate mainly depend on the ISSInfo instead of neuron-specific information. In other words, the generated saliency maps are not class-discriminative with respect to the class-specific neurons in the output layer; they are selective of any recognizable foreground object in the image [13]. Furthermore, these approaches cannot be applied to understand neurons in intermediate layers of DCNNs either. In [32, 8], the differences between neurons of an intermediate layer are demonstrated using a large dataset: the neurons are often activated by certain specific patterns. However, the difference between single neurons in an individual classification decision has not been explored yet. In this paper, we also shed new light on this topic.

The recently proposed Layer-wise Relevance Propagation (LRP) approach has been shown to outperform the gradient-based approaches [15]. Apart from explaining image classifications [11, 15], LRP has also been applied to explain classifications and predictions in other tasks [28, 2]. However, the explanations generated by the approach have not been fully verified. We summarise our three-fold contributions as follows:

  • We first evaluate the explanations generated by LRP for individual classification decisions. Then, we analyze the theoretical foundation of LRP, i.e., Deep Taylor Decomposition and shed new insight on LRP.

  • We propose Contrastive Layer-wise Relevance Propagation (CLRP). To generate class-discriminative explanations, we propose two ways to model the contrastive signal (i.e., an opposite visual concept). For individual classification decisions, we illustrate explanations of the decisions and the difference between neuron activations using the proposed approach.

  • We build a GPU implementation of LRP and CLRP using the PyTorch framework, which alleviates the inefficiency problem addressed in [34, 24].

Related work is reviewed in the next section. Section 3 analyzes LRP theoretically and experimentally. In Section 4, the proposed CLRP approach is introduced. Section 5 presents experimental results that evaluate CLRP qualitatively and quantitatively on two tasks, namely, explaining image classification decisions and understanding the difference of neuron activations in a single forward inference. The last section contains conclusions and discusses future work.

2 Related Work

The DeConvNets were originally proposed for unsupervised feature learning tasks [33] and were later applied to visualize units in convolutional networks [32]. The DeConvNets map the feature activity back to the input space using the ISSInfo and the weight parameters of the forward pass. [25] proposed taking the vanilla gradients of the output with respect to the input variables as their relevance; the work also showed its relation to the DeConvNets. The two approaches use the ISSInfo in the same way, except for the handling of the rectified linear unit (ReLU) activations. The Guided Backpropagation [27] combines the two approaches to visualize the units in higher layers.

The paper [3] proposed LRP to generate explanations for classification decisions. The LRP propagates the class-specific score layer by layer until the input space is reached. Different propagation rules are applied according to the domain of the activation values. [15] proved that Taylor expansions of the function at different points result in the different propagation rules. Recently, one of the propagation rules of LRP, the z-rule, has been proven to be equivalent to the vanilla gradients (the saliency map of [25]) multiplied elementwise with the input [9]. The vanilla Gradient Visualization and the Guided Backpropagation are shown to be not class-discriminative in [13]. This paper rethinks the LRP approach and evaluates the explanations it generates.

Existing work based on discriminative, pixel-wise explanations includes [4, 34, 23]. Guided Grad-CAM [23] combines the low-resolution map of CAM and the pixel-wise map of Guided Backpropagation to generate a pixel-wise and class-discriminative explanation. To localize the most relevant neurons in the network, a biologically inspired attention model is proposed in [31]; the work uses a top-down (from the output layer to the intermediate layers) Winner-Take-All process to generate binary attention maps. The work [34] formulates the top-down attention of a CNN classifier as a probabilistic Winner-Take-All process and also uses a contrastive top-down attention formulation to enhance the discriminativeness of the attention maps. Based on their work and on LRP, we propose Contrastive Layer-wise Relevance Propagation (CLRP) to produce class-discriminative and pixel-wise explanations. Another publication related to our approach is [4], which is also able to produce class-discriminative attention maps. While [4] requires modifying traditional CNNs by adding extra feedback layers and optimizing these layers during the backpropagation, our proposed methods can be applied to all existing CNNs without any modification or further optimization.

3 Rethinking Layer-wise Relevance Propagation

Each neuron in DCNNs represents a nonlinear function $x_k = \sigma(\sum_j x_j w_{jk} + b_k)$, where $\sigma$ is an activation function and $b_k$ is a bias for the neuron $k$. The inputs of the nonlinear function corresponding to a neuron are the activation values of the previous layer or the raw input of the network. The output of the function is the activation value of the neuron $k$. The whole network is composed of these nested nonlinear functions.

To identify the relevance of each input variable, LRP propagates the activation value of a single class-specific neuron back into the input space, layer by layer. The logit before softmax normalization is taken, as explained in [25, 3]. In each layer of the backward pass, given the relevance scores $R_k^{(l+1)}$ of the neurons in layer $l+1$, the relevance $R_j^{(l)}$ of neuron $j$ in layer $l$ is computed by redistributing the relevance scores using local redistribution rules. The most often used rules are the $z^+$-rule and the $z^{\mathcal{B}}$-rule, which are defined as follows:

$$R_j^{(l)} = \sum_k \frac{x_j w_{jk}^{+}}{\sum_{j'} x_{j'} w_{j'k}^{+}} R_k^{(l+1)}, \qquad R_j^{(l)} = \sum_k \frac{x_j w_{jk} - l_j w_{jk}^{+} - h_j w_{jk}^{-}}{\sum_{j'} \big( x_{j'} w_{j'k} - l_{j'} w_{j'k}^{+} - h_{j'} w_{j'k}^{-} \big)} R_k^{(l+1)},$$

where $w_{jk}$, connecting the neurons $j$ and $k$, is a parameter of the $l$-th layer, $w_{jk}^{+} = \max(0, w_{jk})$ and $w_{jk}^{-} = \min(0, w_{jk})$, and the interval $[l_j, h_j]$ is the domain of the activation value $x_j$.
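For a fully-connected layer, the $z^+$-rule can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the stabilizer `eps`, added to avoid division by zero, is an assumption.

```python
import numpy as np

def lrp_zplus(x, W, R_out, eps=1e-9):
    """Redistribute the relevance R_out of the output neurons onto the
    inputs x of a linear layer, using only positive weights (z+-rule)."""
    Wp = np.maximum(W, 0.0)        # w_jk^+ : positive part of each weight
    z = x @ Wp                     # z_k = sum_j x_j * w_jk^+
    s = R_out / (z + eps)          # stabilized element-wise division
    return x * (Wp @ s)            # R_j = x_j * sum_k w_jk^+ * s_k

# Conservation: the relevance arriving at the inputs sums to the
# relevance that was redistributed (up to the stabilizer).
x = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
R_in = lrp_zplus(x, W, np.array([1.0, 1.0]))
```

Note that only activated inputs ($x_j > 0$) can receive relevance under this rule, which is exactly the ISSInfo dependence discussed in Section 3.2.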

3.1 Evaluation of the Explanations Generated by the LRP

(a) The explanations generated by LRP on AlexNet.
(b) The explanations generated by LRP on VGG16 Network.
(c) The explanations generated by LRP on GoogLeNet.
Figure 1: The images from the validation dataset of ImageNet are classified using the off-the-shelf models pre-trained on ImageNet. The classifications are explained by the LRP approach. For each image, we generate four explanations that correspond to the top-3 predicted classes and a randomly chosen set of multiple classes.

The explanations generated by LRP are known to be instance-specific. However, the discriminativeness of the explanations has not been evaluated yet. Ideally, the visualized objects in the explanation should correspond to the class that the class-specific neuron represents. We evaluate the explanations generated by LRP on off-the-shelf models, specifically AlexNet [10], VGG16 [26] and GoogLeNet [29], pre-trained on the ImageNet dataset [22].

The experimental settings are similar to [15]. The $z^{\mathcal{B}}$-rule is applied to the first convolutional layer. For all higher convolutional layers and fully-connected layers, the $z^+$-rule is applied. In the max-pooling layers, the relevance is only redistributed to the neuron with the maximal value inside the pooling region, while it is redistributed evenly to the corresponding neurons in the average-pooling layers. The biases and the normalization layers are bypassed in the relevance propagation pass.

The results are shown in Figure 1. For each test image, we create four saliency maps as explanations. The first three explanation maps are generated for the top-3 predictions, respectively. The fourth one is created for 10 classes randomly chosen from the top-100 predicted classes (which ensures that the score to be propagated is positive). The white text in each explanation map indicates the class the output neuron represents and the corresponding classification probability. The explanations generated on AlexNet are blurry, which we attribute to incomplete learning owing to the model's limited expressive power. The explanations of the VGG16 classifications are sharper than the ones created on GoogLeNet. The reason is that VGG16 contains only max-pooling layers, whereas GoogLeNet also contains several average-pooling layers.

The generated explanations are instance-specific, but not class-discriminative: they are independent of the class information. The explanations for different target classes, even for randomly chosen classes, are almost identical. This observation is consistent with the conclusion of [5, 1], namely, that almost all information about the input image is contained in the pattern of non-zero activations, not in their precise values. The high similarity of the explanations results from leveraging the same ISSInfo (see Section 3.2). In summary, the explanations are not class-discriminative; the generated maps highlight the same foreground objects instead of class-discriminative ones.

3.2 Theoretical Foundation: Deep Taylor Decomposition

Motivated by the divide-and-conquer paradigm, Deep Taylor Decomposition decomposes a deep neural network (i.e., the nested nonlinear functions) iteratively [15]. The propagation rules of LRP are derived from the Deep Taylor Decomposition of rectifier neural networks. The function represented by a single neuron is $x_k = \max(0, \sum_j x_j w_{jk} + b_k)$, and the relevance $R_k$ of the neuron is given. The Deep Taylor Decomposition assumes $R_k = x_k$. The function is expanded as a Taylor series at a root point $\tilde{\bm{x}}$ subject to $x_k(\tilde{\bm{x}}) = 0$. The LRP propagation rules result from the first-order terms of the expansion.

One may hypothesize that the non-discriminativeness of LRP is caused by the first-order approximation error of Deep Taylor Decomposition. We prove that, under the given assumption, the same propagation rules are derived even when all higher-order terms are taken into consideration (see the proof in the supplementary material). Furthermore, we find the theoretical foundation provided by Deep Taylor Decomposition to be inappropriate: the assumption $R_k = x_k$ does not hold at any layer except the last one, since it implies that the relevance value is equal to the activation value for all neurons, which, we argue, is not true.

In our opinion, the explanations generated by LRP result from the ISSInfo (ReLU masks and pooling switches). The activation values of the neurons are required to create explanations using LRP. In the forward pass, the network outputs a vector of class scores. In the backward pass, the activation value of the chosen class is backpropagated layer-wise into the input space. In fully-connected layers, only the activated neurons can receive relevance according to any LRP propagation rule. In the max-pooling layers, the backpropagation conducts an unpooling process, where only the neuron with the maximal activation inside the corresponding pooling region can receive relevance. In the convolutional layers, only a specific subset of the neurons in each feature map has non-zero relevance in the backward pass. Only the input pixels that lie in the convolutional regions of those neurons receive the propagated relevance, and the pattern formed by these pixels is the explanation generated by LRP.
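The routing role of the ISSInfo in the backward pass can be illustrated with a toy sketch (the helper names are ours, and a 2x2 max-pooling region is assumed): relevance passes a ReLU only where the unit fired, and passes a max-pooling layer only at the argmax position recorded during the forward pass.

```python
import numpy as np

def relu_backward(R, activations):
    """Relevance flows only through neurons that fired in the forward pass."""
    return R * (activations > 0)

def maxpool_backward(R_pooled, x, pool=2):
    """Route each pooled relevance value to the argmax of its pooling region
    (the 'pooling switch' recorded during the forward pass)."""
    R = np.zeros_like(x)
    for i in range(0, x.shape[0], pool):
        for j in range(0, x.shape[1], pool):
            region = x[i:i + pool, j:j + pool]
            r, c = np.unravel_index(np.argmax(region), region.shape)
            R[i + r, j + c] = R_pooled[i // pool, j // pool]
    return R

x = np.array([[1.0, 3.0],
              [2.0, 0.0]])
R = maxpool_backward(np.array([[4.0]]), x)   # all relevance goes to the 3.0
```

Whatever class score is propagated, the same switches and masks gate the flow, which is why the non-zero pattern of the resulting map barely changes across classes.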

The backward pass for a different class is similar to that of the originally chosen class. The neurons that receive non-zero relevance are the same in both cases, even though their absolute relevance values may differ slightly. Regardless of the class chosen for the backpropagation, the neurons of each layer that receive non-zero relevance always stay the same. In other words, the explanations generated by LRP are independent of the class information, i.e., they are not class-discriminative.

In summary, in deep convolutional rectifier networks, the ReLU masks and the pooling switches determine the pattern visualized in the explanation, and this pattern is independent of the class information. That is the reason why the explanations generated by LRP on DCNNs are not class-discriminative. The analysis also explains the non-discriminative explanations generated by other backpropagation approaches, such as the DeConvNets Visualization [32], the vanilla Gradient Visualization [25] and the Guided Backpropagation [27].

4 Contrastive Layer-wise Relevance Propagation

Before introducing our CLRP, we first discuss the conservation property of LRP. In a DNN, given the input $X$, the output $f(X)$ and the score $S_y$ (the activation value of the neuron $y$ before the softmax layer), LRP generates an explanation for the class $y$ by redistributing the score layer-wise back to the input space. The relevance value assigned to the $i$-th input neuron is $R_i$. The conservation property is defined as follows:

Definition 1

The generated saliency map is conservative if the sum of the assigned relevance values of the input neurons is equal to the score of the class-specific neuron, i.e., $\sum_i R_i = S_y$.

Figure 2: The figure shows an overview of our CLRP. For each predicted class, the approach generates a class-discriminative explanation by comparing two signals. The blue lines denote the signal corresponding to the predicted class; the red lines model a dual concept opposite to the predicted class. The final explanation is the difference between the two saliency maps that the two signals generate.

In this section, we consider redistributing the same score from different class-specific neurons, respectively. The assigned relevance values are different due to the different weight connections. However, the non-zero patterns of those relevance vectors are almost identical, which is why LRP generates almost the same explanations for different classes. The sum of each relevance vector is equal to the redistributed score, according to the conservation property. The input variables that are discriminative for a target class form a subset of the input neurons. The challenge in producing a class-discriminative explanation is to identify those discriminative pixels for the corresponding class.

In explanations of image classifications, pixels on salient edges always receive higher relevance values than other pixels, including all or part of the discriminative pixels. Pixels with high relevance values are thus not necessarily discriminative for the corresponding target class. We observe, however, that the discriminative pixels receive higher relevance values than the same pixels do in explanations for other classes. In other words, we can identify the discriminative pixels by comparing two explanations of two classes: one class is the target class to be explained, and the other class serves as an auxiliary. To identify the discriminative pixels more accurately, we construct a virtual class instead of selecting another class from the output layer. We propose two ways to construct the virtual class.

An overview of the CLRP is shown in Figure 2. We describe the CLRP formally as follows. The $j$-th class-specific neuron $y_j$ is connected to the input variables by the weights $\bm{W} = \{\bm{W}^1, \bm{W}^2, \cdots, \bm{W}^{L-1}, \bm{W}^L_j\}$ of the layers between them, where $\bm{W}^l$ means the weights connecting the $l$-th and the $(l+1)$-th layer, and $\bm{W}^L_j$ means the weights connecting the $(L-1)$-th layer and the $j$-th neuron in the $L$-th layer. The neuron $y_j$ models a visual concept $O$. For an input example $X$, the LRP maps the score $S_{y_j}$ of the neuron back into the input space to obtain a relevance vector $\bm{R}$.

We construct a dual virtual concept $\overline{O}$, which models the visual concept opposite to $O$. For instance, if the concept $O$ models the zebra, the constructed dual concept $\overline{O}$ models the non-zebra. One way to model $\overline{O}$ is to select all classes except for the target class representing $O$. The concept $\overline{O}$ is then represented by the selected classes with the weights $\{\bm{W}^1, \bm{W}^2, \cdots, \bm{W}^{L-1}, \bm{W}^L_{\setminus j}\}$, where $\bm{W}^L_{\setminus j}$ means the weights connected to the output layer excluding those of the $j$-th neuron. For example, the dashed red lines in Figure 2 are connected to all classes except for the target class zebra. Next, the score $S_{y_j}$ of the target class is uniformly redistributed to the other classes. Given the same input example $X$, the LRP generates an explanation $\bm{R}_{dual}$ for the dual concept. The Contrastive Layer-wise Relevance Propagation is defined as follows:

$$\bm{R}_{CLRP} = \max(0, \; \bm{R} - \bm{R}_{dual})$$

where the function $\max(0, \cdot)$ means replacing the negative elements with zeros. The difference between the two saliency maps cancels their common parts. Without the dominant common parts, the non-zero elements in $\bm{R}_{CLRP}$ are the most relevant, class-discriminative pixels. If the neuron $y_j$ lives in an intermediate layer of a neural network, the constructed $\bm{R}_{CLRP}$ can be used to understand the role of that neuron.
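Given the two relevance maps, the contrastive step itself is a one-liner; the sketch below uses illustrative values and is not tied to any particular network.

```python
import numpy as np

def clrp(R_target, R_dual):
    """Cancel the parts shared by the target map and the dual map,
    then clip negatives to zero, as in the CLRP definition."""
    return np.maximum(0.0, R_target - R_dual)

# Pixels that are equally relevant to the target and the dual concept
# are cancelled; only the class-discriminative surplus survives.
R = clrp(np.array([0.9, 0.5, 0.1]), np.array([0.2, 0.5, 0.4]))
```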

Similar to [34], the other way to model the concept $\overline{O}$ is to negate the weights $\bm{W}^L_j$: the concept $\overline{O}$ is then represented by the weights $\{\bm{W}^1, \bm{W}^2, \cdots, \bm{W}^{L-1}, -\bm{W}^L_j\}$. All the weights are the same as for the concept $O$, except that the weights of the last layer are negated. In the experiments section, we call the first modeling method CLRP1 and the second one CLRP2. The contrastive formulation in [34] can be applied to other backpropagation approaches by normalizing and subtracting the two generated saliency maps. However, the normalization strongly depends on the maximal value, which could stem from a single noisy pixel. Thanks to the conservation property of LRP, such normalization is avoided in the proposed CLRP.
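Both ways of constructing the dual concept act only on the output layer. Assuming a final linear layer whose weight matrix `W_last` has one column per class (an assumption for this sketch), they can be written as:

```python
import numpy as np

def clrp1_dual_scores(logits, j):
    """CLRP1: redistribute the target score uniformly over all other
    classes; this score vector is backpropagated for the dual concept."""
    scores = np.zeros_like(logits)
    mask = np.arange(len(logits)) != j
    scores[mask] = logits[j] / (len(logits) - 1)
    return scores

def clrp2_dual_weights(W_last, j):
    """CLRP2: keep all weights, but negate those feeding class-neuron j."""
    W_dual = W_last.copy()
    W_dual[:, j] = -W_dual[:, j]
    return W_dual

logits = np.array([2.0, 6.0, 1.0])
dual = clrp1_dual_scores(logits, 1)   # [3.0, 0.0, 3.0]
```

In both cases the same total score is propagated for the target and the dual signal, so the two resulting maps are directly comparable without normalization.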

5 Experiments and Analysis

In this section, we conduct experiments to evaluate our proposed approach. The first experiment aims to generate class-discriminative explanations for individual classification decisions. The second experiment evaluates the generated explanations quantitatively on the ILSVRC2012 validation dataset. The discriminativeness of the generated explanations is evaluated via a Pointing Game and an ablation study. The last experiment aims to understand the difference between neurons in a single classification forward pass.

5.1 Explaining Classification Decisions of DNNs

Figure 3: Images of multiple objects are classified using the VGG16 network pre-trained on ImageNet. The explanations for the two relevant classes are generated by LRP and CLRP. The CLRP generates class-discriminative explanations, while LRP generates almost the same explanation for both classes.

In this experiment, LRP, CLRP1 and CLRP2 are applied to generate explanations for different classes. The experiments are conducted on a pre-trained VGG16 network [26]. The propagation rules used in each layer are the same as in Section 3.1. We classify images containing multiple objects and generate explanations for the two most relevant predicted classes, respectively. Figure 3 shows the explanations for two such classes (e.g., zebra and elephant). The explanations generated by LRP are the same for the two classes: each explanation visualizes both objects, i.e., it is not class-discriminative. By contrast, both CLRP1 and CLRP2 identify only the discriminative pixels related to the corresponding class. For the target class zebra, only the pixels on the zebra objects are visualized. Even for complicated images in which a zebra herd and an elephant herd co-exist, the CLRP methods are still able to find the class-discriminative pixels.

We evaluate the approach on a large number of images with multiple objects. The explanations generated by CLRP are always class-discriminative, but not necessarily semantically meaningful for every class. One of the reasons is that the VGG16 network is not trained for multi-label classification. Other reasons could be incomplete learning and bias in the training dataset [30].

The implementation of LRP is not trivial. The one provided by the authors only supports CPU computation. For the VGG16 network, it takes about 30 seconds to generate one explanation on an Intel Xeon 2.90GHz machine. This computational expense makes the evaluation of LRP on a large dataset impractical [34]. We implemented a GPU version of the LRP approach, which reduces the time from about 30 seconds to 0.1824 seconds per explanation on a single NVIDIA Tesla K80 GPU. The implementation alleviates the inefficiency problem addressed in [34, 24] and makes the quantitative evaluation of LRP on a large dataset possible.

5.2 Evaluating the explanations

In these experiments, we quantitatively evaluate the generated explanations on the ILSVRC2012 validation dataset, which contains 50,000 images. A pointing game and an ablation study are used to evaluate the proposed approach.

(a) Pointing Accuracy On the AlexNet
(b) Pointing Accuracy On the VGG16
Figure 4: The figure shows the localization ability of the saliency maps generated by the LRP, the CLRP1, the CLRP2, the vanilla Gradient Visualization and the Guided Backpropagation. On the pre-trained models, AlexNet and VGG16, the localization ability is evaluated at different thresholds. The x-axis corresponds to the threshold that keeps a certain percentage of energy left, and the y-axis corresponds to the pointing accuracy.

Pointing Game: To evaluate the discriminativeness of saliency maps, the paper [34] proposes a pointing game in which the maximum point of the saliency map is extracted and evaluated. For images with a single object, a hit is counted if the maximum point lies in the bounding box of the target object, otherwise a miss is counted. The localization accuracy is measured by $Acc = \frac{\#Hits}{\#Hits + \#Misses}$. On the ILSVRC2012 dataset, naively pointing at the center of the image already achieves a surprisingly high accuracy. For this reason, we extend the pointing game to a more difficult setting. In the new setting, the saliency map is first thresholded so that the remaining foreground area covers a given percentage of the total energy of the saliency map (where the energy is the sum of all pixel values of the saliency map). A hit is counted if the remaining foreground area lies entirely within the bounding box of the target object, otherwise a miss is counted.
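The extended pointing game for one image can be sketched as follows. The helper names are ours; the bounding box is assumed to be given as (y0, x0, y1, x1) and the saliency map is assumed non-negative.

```python
import numpy as np

def energy_threshold(sal, keep):
    """Binary mask of the strongest pixels holding `keep` of the total energy."""
    flat = np.sort(sal.ravel())[::-1]          # pixel values, descending
    cum = np.cumsum(flat)
    idx = np.searchsorted(cum, keep * cum[-1]) # smallest prefix with enough energy
    return sal >= flat[min(idx, len(flat) - 1)]

def pointing_hit(sal, bbox, keep=0.5):
    """Hit iff every remaining foreground pixel lies inside the bounding box."""
    y0, x0, y1, x1 = bbox
    ys, xs = np.nonzero(energy_threshold(sal, keep))
    return bool(np.all((ys >= y0) & (ys < y1) & (xs >= x0) & (xs < x1)))

sal = np.zeros((4, 4))
sal[1, 1], sal[3, 3] = 9.0, 1.0
```

With a low energy threshold only the dominant peak survives, while a high threshold also keeps weaker responses that may fall outside the box, which is why accuracy drops as more energy is kept.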

Figure 4 shows the localization accuracy of the different approaches at different thresholds. With more energy kept, the remaining pixels are less likely to fall entirely into the ground-truth bounding box, so the localization accuracy correspondingly drops. CLRP1 and CLRP2 consistently show much better pointing accuracy than LRP. These positive results indicate that the pixels cancelled by the contrastive backpropagation lie on the cluttered background or on non-target objects. The CLRP can focus on the class-discriminative parts, which improves on LRP. The CLRP is also better than the other backpropagation-based approaches, with one exception: the Guided Backpropagation shows a better localization accuracy on the VGG16 network at high thresholds. In addition, the localization accuracies of CLRP1 and CLRP2 are similar in the deep VGG16 network, which indicates the equivalence of the two methods of modeling the opposite visual concept.

          Random   vanilGrad [25]   GuidedBP [27]   LRP [3]   CLRP1    CLRP2
AlexNet   0.0766   0.1716           0.1843          0.1624    0.2093   0.2030
VGG16     0.0809   0.3760           0.4480          0.3713    0.3844   0.3913
Table 1: Ablation study on the ImageNet validation dataset. The table shows the activation drop after the corresponding ablation.

Ablation Study: In the pointing game above, we evaluated the discriminativeness of the explanations via their localization ability. In this ablation study, we evaluate the discriminativeness from another perspective: we observe the change of the activation when the identified discriminative pixels are ablated. The activation value of the class-specific neuron should drop if the ablated pixels are discriminative for the corresponding class.

For an individual image classification decision, we first generate a saliency map for the ground-truth class and identify the maximum point of the saliency map as the most discriminative position. Then, we ablate the input image at the identified position with an image patch whose pixel values are the means of the pixel values at the same positions across the whole dataset. We classify the perturbed image and observe the activation value of the neuron corresponding to the ground-truth class. The dropped activation value is computed as the difference between the activations of the neuron before and after the perturbation, averaged over all images in the dataset.
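The ablation of a single decision can be sketched as follows. Grayscale H x W arrays are assumed for simplicity, and the patch size is a free parameter of the sketch; the activation drop would then be measured by re-running the classifier on the returned image.

```python
import numpy as np

def ablate_max_point(image, saliency, mean_image, size=9):
    """Replace a size x size patch centred on the saliency maximum with the
    dataset-mean pixel values at the same positions."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    r = size // 2
    y0, y1 = max(0, y - r), min(image.shape[0], y + r + 1)
    x0, x1 = max(0, x - r), min(image.shape[1], x + r + 1)
    out = image.copy()
    out[y0:y1, x0:x1] = mean_image[y0:y1, x0:x1]
    return out

img = np.ones((8, 8))
sal = np.zeros((8, 8)); sal[4, 4] = 1.0
abl = ablate_max_point(img, sal, np.zeros((8, 8)), size=3)
```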

The experimental results of the different approaches are shown in Table 1. For comparison, we also ablate the image at a randomly chosen position; this random ablation has hardly any impact on the output. The saliency maps of all other approaches find relevant pixels, since the activations of the class-specific neurons drop considerably after the corresponding ablation. In both networks, CLRP1 and CLRP2 show better scores than LRP, which means the discriminativeness of the explanations generated by CLRP is better than that of LRP. On VGG16, the Guided Backpropagation again shows a better score than CLRP. This ablation study only considers the discriminativeness of the pixel with the maximal relevance value, which corresponds to a special case of the pointing game, namely, where only the pixel with maximal relevance is left after thresholding. The two experiments show the consistent result that the Guided Backpropagation is better than LRP in this special case. We do not report the performance of GoogLeNet in these experiments: the zero-padding operations of the convolutional layers have a large impact on the output of the GoogLeNet model provided by PyTorch, which leads to problematic saliency maps (see supplementary material).

5.3 Understanding the Difference between Neurons

The neurons of DNNs have been studied via their activation values. The DeConvNets [32] visualize the patterns and collect the images that maximally activate the neurons, given an image set. The activation maximization methods [6, 16] aim to generate an image in the input space that maximally activates a single neuron or a group of neurons. Furthermore, the works [35, 8] examine the semantic concepts of neurons with an annotated dataset. In this experiment, we aim to study the difference among neurons in a single classification decision.

The neurons of lower layers may have different local receptive fields, so the difference between them could simply be caused by different input stimuli. We therefore visualize the high-level concepts learned by neurons that have the same receptive field, e.g., single neurons in a fully-connected layer. For a single test image, LRP and CLRP2 are applied to visualize the stimuli that activate a specific neuron. We do not use CLRP1 because the opposite visual concept cannot be modeled by the remaining neurons in the same layer.

In the VGG16 network, we visualize 8 activated neurons from a fully-connected layer. The visualized maps are shown in Figure 5. The image is classified by the VGG16 network; its receptive field (the input image) is shown in the center, and the 8 explanation maps are shown around it. While LRP produces an almost identical saliency map for each of the 8 neurons (see Figure 5(a)), CLRP2 gains meaningful insight into their differences, showing that different neurons focus on different parts of the image (see Figure 5(b)): the individual neurons are activated by distinct object parts, while the last two neurons focus on similar patterns.

To our knowledge, there is no prior work on the difference between neurons in an individual classification decision, and also no evaluation metric for it. We therefore evaluate the found differences with an ablation study. More concretely, we first find the discriminative patch for each neuron (e.g., ) using CLRP2. Then we ablate the patch and observe the changes of the neuron activations in the forward pass. The discriminative patch of a neuron is identified by the point with maximal value in its explanation map created by CLRP2. The neighboring pixels around this maximum point are replaced with the mean of the pixel values at the same positions across the whole dataset.
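The patch ablation just described can be sketched as below. This is a hedged sketch: the patch size and the name `mean_image` (assumed to be the pixel-wise mean over the whole dataset, same shape as the image) are illustrative assumptions.

```python
import numpy as np

def ablate_patch(image, clrp_map, mean_image, size=9):
    """Replace the size x size neighbourhood around the maximum of a
    neuron's CLRP2 explanation map with the per-position dataset mean.
    `clrp_map` is a 2-D relevance map; `image` and `mean_image` are
    channels-first arrays of the same spatial size."""
    y, x = np.unravel_index(np.argmax(clrp_map), clrp_map.shape)
    h, w = clrp_map.shape
    half = size // 2
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    out = image.copy()
    out[..., y0:y1, x0:x1] = mean_image[..., y0:y1, x0:x1]
    return out
```

Replacing with the dataset mean, rather than zeros, keeps the ablated region close to the input statistics the network was trained on.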

The ablation study results are shown in figure 5(c). A positive value in the grid means the activation decreased, and a negative one means the activation increased after the corresponding ablation. For the ablation targeting neuron , we see that the activation of drops significantly (it may even become non-activated). The maximal drop in each row usually occurs on the diagonal. We also experimented with other ablation sizes and other neurons, with similar results. The ablations for the last two neurons and are identical because their explanation maps are similar; the changes of the activations of all other neurons are also the same for the same ablation. We found that many activated neurons correspond to the same explanation maps.
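The ablation grid described above is simply a matrix of activation differences; a minimal sketch (names are hypothetical, not the paper's code):

```python
import numpy as np

def ablation_grid(act_before, acts_after):
    """Entry (i, j) is the drop of neuron j's activation after the ablation
    targeted at neuron i (positive => activation decreased). `act_before`
    is the activation vector of the unablated image; `acts_after[i]` is
    the activation vector recorded after ablation i."""
    return np.asarray(act_before)[None, :] - np.asarray(acts_after)
```

If the per-neuron patches are truly discriminative, the largest entry of each row of this grid should lie on the diagonal.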

(a) Explanations by LRP
(b) Explanations by CLRP
(c) Ablation Study
Figure 5: The figures show the explanation maps of neurons in the layer. The explanations generated by LRP are not discriminative. By contrast, those generated by CLRP explain the differences between the neurons.

6 Conclusion

We evaluate the explanations generated by LRP and find that they are not class-discriminative. We discuss the theoretical foundation and provide our justification for this non-discriminativeness. To improve the discriminativeness of the generated explanations, we propose Contrastive Layer-wise Relevance Propagation. Qualitative and quantitative evaluations confirm that CLRP is better than LRP. We also use CLRP to shed light on the role of individual neurons in DCNNs.

We propose two ways to model the opposite visual concept that a class-specific neuron represents; however, there could be other, more appropriate modeling methods. Even though our approach produces pixel-wise explanations for individual classification decisions, the explanations for similar classes remain similar. Fine-grained discriminativeness is needed to explain intra-class classifications. We leave further exploration to future work.


  • [1] P. Agrawal, R. Girshick, and J. Malik (2014) Analyzing the performance of multilayer neural networks for object recognition. In European Conference on Computer Vision, pp. 329–344.
  • [2] L. Arras, G. Montavon, K. Müller, and W. Samek (2017) Explaining recurrent neural network predictions in sentiment analysis. arXiv preprint arXiv:1706.07206.
  • [3] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10 (7), pp. e0130140.
  • [4] C. Cao, X. Liu, Y. Yang, Y. Yu, J. Wang, Z. Wang, Y. Huang, L. Wang, C. Huang, W. Xu, et al. (2015) Look and think twice: capturing top-down visual attention with feedback convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2956–2964.
  • [5] A. Dosovitskiy and T. Brox (2016) Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4829–4837.
  • [6] D. Erhan, Y. Bengio, A. Courville, and P. Vincent (2009) Visualizing higher-layer features of a deep network. University of Montreal 1341 (3), pp. 1.
  • [7] R. Girshick, J. Donahue, T. Darrell, and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587.
  • [8] A. Gonzalez-Garcia, D. Modolo, and V. Ferrari (2018) Do semantic parts emerge in convolutional neural networks?. International Journal of Computer Vision 126 (5), pp. 476–494.
  • [9] P. Kindermans, K. Schütt, K. Müller, and S. Dähne (2016) Investigating the influence of noise and distractors on the interpretation of neural networks. arXiv preprint arXiv:1611.07270.
  • [10] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
  • [11] S. Lapuschkin, A. Binder, K. Müller, and W. Samek (2017) Understanding and comparing deep neural networks for age and gender classification. arXiv preprint arXiv:1708.07689.
  • [12] A. Mahendran and A. Vedaldi (2014) Understanding deep image representations by inverting them. CoRR abs/1412.0035.
  • [13] A. Mahendran and A. Vedaldi (2016) Salient deconvolutional networks. In European Conference on Computer Vision, pp. 120–135.
  • [14] A. Mahendran and A. Vedaldi (2016) Visualizing deep convolutional neural networks using natural pre-images. International Journal of Computer Vision 120 (3), pp. 233–255.
  • [15] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K. Müller (2017) Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65, pp. 211–222.
  • [16] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, pp. 3387–3395.
  • [17] M. Oquab, L. Bottou, I. Laptev, and J. Sivic (2015) Is object localization for free? Weakly-supervised learning with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 685–694.
  • [18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
  • [19] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Nothing else matters: model-agnostic explanations by identifying prediction invariance. arXiv preprint arXiv:1611.05817.
  • [20] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Why should I trust you?: explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144.
  • [21] M. Robnik-Šikonja and I. Kononenko (2008) Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering 20 (5), pp. 589–600.
  • [22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) ImageNet large scale visual recognition challenge. IJCV 115, pp. 211–252.
  • [23] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2016) Grad-CAM: visual explanations from deep networks via gradient-based localization. arXiv preprint arXiv:1610.02391.
  • [24] A. Shrikumar, P. Greenside, and A. Kundaje (2017) Learning important features through propagating activation differences. arXiv preprint arXiv:1704.02685.
  • [25] K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
  • [26] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [27] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806.
  • [28] V. Srinivasan, S. Lapuschkin, C. Hellge, K. Müller, and W. Samek (2017) Interpretable human action recognition in compressed domain. In Acoustics, Speech and Signal Processing, 2017 IEEE International Conference on, pp. 1692–1696.
  • [29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9.
  • [30] A. Torralba and A. A. Efros (2011) Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1521–1528.
  • [31] J. K. Tsotsos, S. M. Culhane, W. Y. K. Wai, Y. Lai, N. Davis, and F. Nuflo (1995) Modeling visual attention via selective tuning. Artificial Intelligence 78 (1-2), pp. 507–545.
  • [32] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833.
  • [33] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus (2010) Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 2528–2535.
  • [34] J. Zhang, Z. Lin, J. Brandt, X. Shen, and S. Sclaroff (2016) Top-down neural attention by excitation backprop. In European Conference on Computer Vision, pp. 543–559.
  • [35] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2014) Object detectors emerge in deep scene CNNs. arXiv preprint arXiv:1412.6856.
  • [36] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pp. 2921–2929.