Improving Transparency of Deep Neural Inference Process

Hiroshi Kuwajima, DENSO CORPORATION / Tokyo Institute of Technology (hiroshi_kuwajima@denso.co.jp)
Masayuki Tanaka, National Institute of Advanced Industrial Science and Technology (masayuki.tanaka@aist.go.jp)
Masatoshi Okutomi, Tokyo Institute of Technology (mxo@sc.e.titech.ac.jp)
Received: 06 Sep 2018 / Accepted: 26 Feb 2019
Abstract

Deep learning techniques have advanced rapidly in recent years and are becoming a necessary component of widespread systems. However, the inference process of deep learning is a black box, which makes it poorly suited to safety-critical systems that must exhibit high transparency. In this paper, to address this black-box limitation, we develop a simple analysis method which consists of 1) structural feature analysis: lists of the features contributing to the inference process, 2) linguistic feature analysis: lists of the natural-language labels describing the visual attributes of each feature contributing to the inference process, and 3) consistency analysis: measuring the consistency among the input data, the inference (label), and the results of our structural and linguistic feature analysis. Our analysis is kept simple so that it reflects the actual inference process for high transparency, and it does not rely on additional black-box mechanisms such as LSTMs to produce highly human-readable results. We conduct experiments, discuss the results of our analysis qualitatively and quantitatively, and conclude that our work improves the transparency of neural networks. Evaluated through 12,800 human tasks, 75% of workers answer that the input data and the result of our feature analysis are consistent, and 70% of workers answer that the inference (label) and the result of our feature analysis are consistent. In addition to evaluating the proposed analysis, we find that it also provides suggestions, or possible next actions, such as expanding the neural network complexity or collecting more training data to improve a neural network.

Keywords:
transparency, deep neural network, black box, Explainable AI, visualization, visual attribute
journal: Progress in Artificial Intelligence

Figure 5: Feature analysis example. (a) Input image; (b) inference (label); (c) structural & linguistic feature analysis; (d) side information (optional).

1 Introduction

Machine learning techniques such as deep neural networks have led to the widespread application of systems that assign advanced environmental perception and decision-making to computer logic learned from big data, instead of manually built rule-based logic [1, 2, 3, 4, 5, 6]. Deep learning in particular achieves unprecedented performance on several tasks; for example, it has outperformed humans on the visual object recognition task [7].

Machine-learning models are becoming indispensable components even in systems that require safety-critical environmental perception and decision making, such as automated driving systems [8]. To build high credibility for machine-learning models, both high performance and transparency are important. In particular, safety-critical systems must exhibit transparency [9]. However, the inference processes of machine-learning models such as neural networks are considered black boxes. In this paper, a black box refers to a situation where, although feature activation can be observed, the actual phenomenon cannot be understood. In other words, machine-learning models show high performance but low transparency. Thus, it is difficult to apply black-box deep learning to safety-critical systems such as automated driving, in which the results of deep learning models can directly cause hazards [10].

Explainable AI (XAI) is a related research area that has attracted attention and advanced rapidly in recent years [11]. There are studies in XAI in which inference networks give human-understandable explanations as well as the inference (label). For example, image caption generation and visual explanation aim to provide highly human-understandable natural-language descriptions. Caption generation is a verbalization method which describes the objects and the circumstances in the input image with natural-language sentences [12, 13]. Visual explanations are generated by black-box explaining models such as LSTMs to explain the rationales for classification decisions [14]. These methods generate highly human-readable explanations, but through mechanisms that do not reflect the actual inference process, because explanation generation and classification are performed by different neural networks that possibly share features, inference results (labels), etc. Even when features are shared, the explanation generation is done by black-box models (neural networks), and we cannot know whether it reflects the actual inference process. Inference networks that generate explanations therefore have high performance but low transparency.

Figure 6: Consistency analysis concept

In this paper, to address the black-box property of deep learning, we develop a simple analysis method which improves the transparency of the inference processes of convolutional neural networks (CNNs) [15, 16], as an example of deep learning models. We consider three types of analysis for the inference process: 1) structural feature analysis, 2) linguistic feature analysis, and 3) consistency analysis. The results of structural feature analysis are lists of the features contributing to the inference process. The feature numbers are not human readable, but they are useful when systems programmatically manage the inference process at test time. The results of linguistic feature analysis are lists of the natural-language labels describing the visual attributes of each feature obtained through the structural feature analysis; they help humans understand the inference process. Figure 5 shows an example result of our feature analysis; the left and right columns in Fig. 5(c) show the structural feature analysis and the linguistic feature analysis, respectively. Consistency analysis measures the consistency among the input data, the inference (label), and the result of the feature analysis. It is useful for discussion, such as identifying the cause of an incorrect inference (label) and possible next actions to fix problems. Figure 6 shows the concept of consistency analysis. To show the usefulness of our proposed method, we conduct experiments including human evaluation and discuss the experimental results.

This paper is an extended version of our previous workshop paper presented at Transparent and Interpretable Machine Learning in Safety Critical Environments, a NIPS 2017 workshop [17].

2 Related Works

DARPA started the Explainable Artificial Intelligence program in 2017 [18]. It defines three approaches: Deep Explanation, Interpretable Models, and Model Induction. The first and second are ex-ante approaches which design explainable features and explainable causal models before training. The third is an ex-post approach which automatically derives new explainable models after training. Our transparent analysis is an ex-post (after training) approach, but it does not derive new models; it directly analyzes the actual activation observed.

Visualization of deep neural networks has recently been an active study area [19, 20]. Earlier studies basically identify attention (focus) areas of the input data as receptive fields or heat maps [21, 22, 23], indicating the areas in the input data that the model is looking at [24] at test time. The attention area of an input image covers only the very beginning of the CNN inference process, and it is what visualization methods reveal. In this paper, on the other hand, we would like to provide an analysis not only of the input data, but also of the inference process of neural networks. We exploit receptive fields as side information indicating the locations of the visual attributes in the input data.

There is another type of work focusing on visual attributes and intermediate features, i.e., the activation of neural network nodes. One past work analyzed the visual attributes of each node and revealed that low-level attributes, such as black, brown, and furry, are associated with neural network nodes [25]. Another work interprets receptive fields as visual attributes of neural networks and quantifies interpretability by the number of human-interpretable visual semantic concepts learned at each hidden layer [26]. Among these visualization techniques, we use visual attributes for our transparent analysis.

Pointing and Justification-based Explanation (PJ-X) is one of the latest explanation methods in XAI [27]. It provides highly human-understandable explanations by producing attention areas of the input data space as introspective explanations (true explanations) and justification explanations at the same time. The former explains the input space but provides no analysis of the inference process. The latter does not address the black-box property of the target model, because it uses another black-box method, an LSTM, to generate the explanation. PJ-X does not analyze the relationship between the inference results and the features of the neural network, and introducing an additional black-box model for explanation cannot address transparency. Therefore, the purpose of PJ-X is not an analysis for improving the transparency of the deep neural inference process.

3 Observation of Feature Contribution

We observed the conv5 features of CaffeNet [28, 16] on selected ImageNet training data to understand the behavior of the features. Although ImageNet has many training images per class, for simplicity we selected, for each class, the examples with the highest softmax probability on the ground-truth class.

We first made the natural assumption that the inference (label) is based not on inactive features but on highly activated features, and derived the following assumption.

  • Assumption 1. Features highly activated in the inference process have contributions to inference (label).

This assumption applies especially to ReLU, which CaffeNet uses as its activation function, because ReLU is a half-linear, positive, monotonic function.

3.1 Magnitude of Feature Activation

We then looked into the activation on each feature map in conv5 and found that the magnitude of activation differs across features. Therefore, the definition of high activation varies depending on the feature. Figure 7 shows the histograms of activation on two example feature maps, which have the smallest and largest mean values, respectively. The modes of the activation magnitude differ from each other. These distributions are not Gaussian, because negative values are cancelled by the ReLU activation function. It is clear that the distributions of activation on the two feature maps are different. From this analysis, we derived the following assumption.

  • Assumption 2. Activations of different features have different dynamic ranges.

Figure 7: Dynamic ranges of activation on different feature maps. The modes of activation on the two feature maps differ, and the shapes of their distributions are also different.

3.2 Features and Visual Attributes

Figure 10 includes three visual attributes, furly, rubber tires, and fine cell patterns, but only two feature maps, 226 and 230. These feature maps share the visual attribute furly, while each also has another visual attribute different from the other. This observation implies the following assumption.

  • Assumption 3. Visual attributes and features are in a many-to-many relationship.

4 Proposed Analysis

In this section, we propose a transparent analysis method to improve the transparency of deep neural inference processes based on the above assumptions. We carry out feature analysis at both training time and test time to obtain three types of features, as described in Section 4.1. We perform manual feature annotation to associate features with visual attributes, as described in Section 4.2. Then, three types of consistency ratios among the input image, the result of our proposed feature analysis, and the inference (label) are measured through human tasks, as described in Section 4.3.

4.1 Structural Feature Analysis

We propose three concepts of features: 1) activated feature, 2) class frequent feature, and 3) inference feature, as depicted in Figs. 11, 12, and 13. The activated feature and the inference feature are defined for each inference, whereas the class frequent feature is defined for each class.

(a) Feature map 226: furly (left), rubber tires (right)
(b) Feature map 230: furly (left), fine cell patterns (right)
Figure 10: Visual attributes associated with features. Left column: furly; right column: rubber tires and fine cell patterns. These visual attributes appear on feature maps 226 and 230.
Figure 11: Activated feature
Figure 12: Class frequent feature
Figure 13: Inference feature

To analyze the inference process, we focus on the activation of the intermediate feature called conv5, which is the final convolved feature in CaffeNet. It is reported that conv5 of AlexNet, which is also the final convolution layer, learns high-level visual concepts such as objects and parts, and that these are interpretable by humans [26]. Let x and y denote the input and the output of CaffeNet; specifically, (x_train, y_train) and (x_test, y_test) denote those of the training data and the test data, respectively.

The activated feature in Fig. 11 is the binarized feature vector generated from conv5. The activated feature is a binary feature vector, whereas CNN feature maps have spatial dimensions. We decided to ignore the location of activation for simplicity and applied global max pooling to contract conv5, originally a three-dimensional tensor of 256 spatial feature maps, into a 256-dimensional feature vector, as this is the simplest way to obtain a vector from a tensor. Therefore, we regard a feature map, with its spatial feature elements, as a single feature. An element of the vector is one if the associated feature, i.e., the feature map in conv5, is activated. Based on Assumption 2, in order to judge whether a feature is activated or not, it is necessary to use statistical information such as the mean, variance, or higher moments to capture the differences among features. In this paper, we use mean normalization and thresholding. We compute a mean-normalized feature vector from the feature vector, since each element has a different dynamic range and normalization makes the elements comparable with each other. Thresholding the mean-normalized vector then gives the binarized feature vector.
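The following Python/NumPy sketch illustrates one possible implementation of this binarization. The conv5 shape of 256 feature maps of 13x13 follows CaffeNet, while the threshold value and the per-feature means shown here are placeholders rather than the settings used in our experiments.

```python
import numpy as np

def activated_feature(conv5, feature_means, tau=1.0):
    """Binarize one image's conv5 activation tensor into a 256-dim activated feature.

    conv5:         (256, 13, 13) conv5 activation tensor (CaffeNet).
    feature_means: (256,) mean activation of each feature map over the training data.
    tau:           binarization threshold on the mean-normalized activation
                   (placeholder value, not the threshold used in the experiments).
    """
    # Global max pooling: ignore spatial location, one scalar per feature map.
    v = conv5.reshape(conv5.shape[0], -1).max(axis=1)       # (256,)
    # Mean normalization: features with different dynamic ranges become comparable.
    v_norm = v / (feature_means + 1e-8)
    # Thresholding: an element is 1 if the corresponding feature map is "activated".
    return (v_norm > tau).astype(np.uint8)

# Example with random non-negative activations standing in for real conv5 outputs.
rng = np.random.default_rng(0)
conv5 = np.abs(rng.normal(size=(256, 13, 13)))
means = conv5.reshape(256, -1).max(axis=1) * 0.5            # dummy per-feature means
print(activated_feature(conv5, means).sum(), "features activated")
```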

The class frequent feature in Fig. 12 is a binary vector indicating the frequently activated features of each class. We hypothesize that each class has a different frequent activation pattern, which is obtained by the following procedure. Figure 12 shows how to compute the class frequent feature for an example class, dog. The training data of the dog class are binarized into activated features, and their summation counts how many times each feature is activated for the dog class in the training data. After summation, we select the top-K frequent features, which constitute the class frequent feature for the dog class. Class frequent features are computed for each class at training time and stored in a lookup table to be used at test time.
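A minimal sketch of this per-class counting is shown below, assuming the activated features of one class's training images are already stacked into a binary matrix; the value of the top-k parameter is a placeholder.

```python
import numpy as np

def class_frequent_feature(activated_features, k=5):
    """Compute the class frequent feature of one class.

    activated_features: (n_images, 256) binary activated features of the
                        class's training images.
    k:                  number of most frequently activated features to keep
                        (placeholder value).
    """
    counts = activated_features.sum(axis=0)              # activation frequency per feature
    top_k = np.argsort(counts)[::-1][:k]                 # indices of the top-k features
    frequent = np.zeros(activated_features.shape[1], dtype=np.uint8)
    frequent[top_k] = 1
    return frequent

# At training time the result is stored in a lookup table, e.g.
# lookup = {c: class_frequent_feature(B[c]) for c in classes}, and queried at test time.
```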

To check the validity of class frequent features, we made two modified CaffeNets: in one, randomly selected feature maps in conv5 are replaced by zero, and in the other, the feature maps frequently activated for a class are replaced by zero. Figure 14 shows how the CNN accuracy for a sample class decays when we delete random feature maps or the class frequent feature maps of that class. We see fast accuracy decay when the deleted feature maps are the ones frequently activated for the class. On the other hand, the convolutional neural network is robust against deleting randomly selected feature maps; the original CaffeNet probably has redundant feature maps in conv5. This observation shows that class frequent features play important roles in inference.
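The deletion experiment can be sketched as follows; `forward_from_conv5` is a hypothetical callable standing in for the CaffeNet layers above conv5, not an actual Caffe API.

```python
import numpy as np

def accuracy_after_deletion(conv5_batch, labels, deleted_maps, forward_from_conv5):
    """Zero out selected conv5 feature maps and measure classification accuracy.

    conv5_batch:        (n, 256, 13, 13) conv5 activations for test images of one class.
    labels:             (n,) ground-truth class indices.
    deleted_maps:       indices of feature maps to replace with zero, either randomly
                        selected or taken from the class frequent feature.
    forward_from_conv5: hypothetical callable running the remaining layers and
                        returning (n, n_classes) class scores.
    """
    x = conv5_batch.copy()
    x[:, deleted_maps, :, :] = 0.0                        # delete the selected feature maps
    scores = forward_from_conv5(x)
    return float((scores.argmax(axis=1) == labels).mean())
```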

Figure 14: Accuracy decay with feature map deletion
Figure 15: Receptive fields of the feature maps included in the class frequent feature for the ambulance class, shown left to right for each of the feature maps.

To understand the relationship between class frequent features and the inference (label), we display the areas on which the active elements of the feature maps included in a class frequent feature focus. The ambulance class has a class frequent feature consisting of five feature maps, and the corresponding receptive fields are visualized in Fig. 15. Receptive fields are generated for each feature map, which we call the target feature map below. For simplicity, we generated receptive fields by 1) binarizing the target feature map, i.e., replacing its elements with the activated features, 2) replacing the off-target feature maps (any feature map other than the target) with zero, 3) back-propagating the modified feature, including both the target and off-target feature maps, to the input space with unpooling using the pooled locations stored at the max pooling layers, and 4) post-processing including image binarization and dilation by a disk shape. In Fig. 15, two feature maps respond to the white-red (or orange) two-tone color, one responds to windows, and two respond to tires. We observe key parts of ambulance vehicles in the receptive fields. This suggests that the deep neural inference process is based on these key parts, and we derived the following assumption.

  • Assumption 4. Features frequently activated for the class of inference (label) have high contributions to inference (label).

The inference feature in Fig. 13 is the overlap, i.e., the element-wise product, between the activated feature and the class frequent feature for a single test data point. Based on Assumption 1 and Assumption 4, features contributing to the inference process should be part of both the activated feature and the class frequent feature. The dotted box in Fig. 13 is the conventional inference without feature analysis. The activated feature is computed from the test input, whereas the class frequent feature is simply looked up by the inference (label) given by CaffeNet, because the ground truth is unknown at test time. The overlap of the two is the result of the structural feature analysis. The number of feature vector elements in an inference feature generally varies for each inference. For human readability, we show at most a fixed maximum number of feature vector elements of an inference feature, chosen by maximum mean-normalized activation.
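At test time, the structural feature analysis then amounts to the element-wise product sketched below; this is a simplified illustration, and the number of retained elements is a placeholder.

```python
import numpy as np

def inference_feature(activated, class_frequent_lookup, predicted_class,
                      mean_normalized, max_elements=5):
    """Overlap between the activated feature and the class frequent feature.

    activated:             (256,) binary activated feature of the test image.
    class_frequent_lookup: dict mapping class index -> (256,) class frequent feature.
    predicted_class:       inference (label) given by the network; the ground truth
                           is unknown at test time.
    mean_normalized:       (256,) mean-normalized activations, used for ranking.
    max_elements:          at most this many elements are reported for readability
                           (placeholder value).
    """
    overlap = activated * class_frequent_lookup[predicted_class]   # element-wise product
    candidates = np.flatnonzero(overlap)
    # Keep the elements with the largest mean-normalized activation.
    ranked = candidates[np.argsort(mean_normalized[candidates])[::-1]]
    return ranked[:max_elements]
```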

4.2 Linguistic Feature Analysis

To generate human-readable analysis, we annotate visual attributes for each feature by looking at the input samples on which it is activated in the focused network. Although there are many ways to obtain human-readable visual attributes, we conduct human annotation because it is the simplest method.

Annotation data are prepared using the training data set. First, we select a subset of the training data suitable for feature annotation. Then, for each feature, we sample images such that their inference features (identified with the ground-truth labels) include it. We generated receptive fields so that the human annotator can understand the part of the image where the visual attribute appears, as shown in Fig. 16.
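The sampling of annotation data can be sketched as follows; the per-feature image budget is a placeholder, and the mapping from images to inference features is assumed to have been computed with the ground-truth labels as described above.

```python
def annotation_folders(inference_features_per_image, n_features=256, images_per_feature=10):
    """Group training images by the features appearing in their inference features.

    inference_features_per_image: dict mapping image id -> iterable of feature indices
                                  in its inference feature (ground-truth label used).
    images_per_feature:           images sampled per feature (placeholder value).
    """
    folders = {f: [] for f in range(n_features)}
    for image_id, feats in inference_features_per_image.items():
        for f in feats:
            if len(folders[f]) < images_per_feature:
                folders[f].append(image_id)   # this image illustrates feature f
    return folders                            # one folder of sample images per feature
```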

The annotation process is an iterative way to annotate combinations of multiple features representing a single visual attribute, and vice versa. In order to annotate this many-to-many relationship based on Assumption 3, the feature annotation is repeatedly refined, starting from free description. We defined a process consisting of three steps: 1) open annotation, 2) label organization, and 3) closed annotation. Open annotation is the first step, where a human annotator annotates all features by free description; the annotation may fluctuate at this step. Second, in the label organization step, similar visual attributes are merged, different visual attributes with the same label are split, new visual attributes are introduced, etc., so that the fluctuating labels of the feature annotation are well organized. Then the human annotator performs closed annotation to classify features into a set of visual attributes (multiple answers allowed) chosen from all the visual attributes defined in the previous step. The label organization and closed annotation steps are repeated to refine the feature annotation, as depicted in Fig. 17.

Figure 16: Sample images and receptive fields for feature annotation. Each feature has a folder, and the folder contains images with high activation on that feature. Workers see the receptive fields in a folder and annotate them.
Figure 17: Feature annotation process

4.3 Consistency Analysis

To gain further insight, we measure the consistency among the input data, the inference (label), and the result of our proposed feature analysis, i.e., the inference feature. This measurement supports discussion, such as checking whether our analysis method or the target neural network is at fault when we get an incorrect analysis, identifying possible next actions to fix problems, etc.

We propose the physical consistency ratio (PCR) and the logical consistency ratio (LCR), which are the consistency between the inference feature and the input data, and the consistency between the inference feature and the inference (label), respectively (Fig. 6). These two ratios are measured through human tasks. In addition to these two measures, we use the softmax probability corresponding to the class of the inference (label), i.e., the maximum softmax probability, as the inference consistency ratio (ICR), the consistency between the input data and the inference (label). All the ratios are in the range of 0 to 1.

5 Experiments

In this section, we conduct experiments to test our proposed analysis method. We analyze the inference processes of the publicly available CaffeNet with weights pre-trained on ImageNet. The feature vectors were binarized by the method introduced above with a fixed binarization threshold. We also fixed the number of feature vector elements in a class frequent feature and the maximum number of feature vector elements in an inference feature. Receptive fields for each feature vector element in the inference features accompany the results of the feature analysis, serving as an informative clue for human feature annotation and as side information to support the analysis.

As with the feature analysis, the selected training images per class are used for computing the mean values of each feature map in conv5, for computing the class frequent features, and for human annotation of the visual attributes. On the other hand, we reduced the object categories of ImageNet to 32 for testing, because it is difficult for humans to distinguish the full set of categories and understand the corresponding analysis precisely. The 32 classes are a subset of the ImageNet classes, programmatically selected according to the WordNet [29] hierarchy such that each new class has approximately the same number of WordNet synsets.

Human evaluation was done on Amazon Mechanical Turk. For each input image, we asked two questions about the physical consistency and the logical consistency of our feature analysis.

  1. Is the inference feature relevant to the whole or parts of the input image?

  2. If an object satisfies the inference feature, is it an object in the class of inference (label)?

The first question was asked without showing the inference (label), and the second one without showing the input image. The response alternatives shown to workers were strongly agree, agree, disagree, and strongly disagree. After obtaining the results from the workers, we merged the former two into agree and the latter two into disagree. The answers to the two questions are used to evaluate the physical consistency ratio and the logical consistency ratio, respectively. Each question is redundantly assigned to distinct workers to eliminate individual biases, and the averaged ratios are in the range of 0 (all workers disagree) to 1 (all workers agree). To evaluate these ratios, we conducted 12,800 human tasks in total (Table 1). We also recorded the softmax probability of the class of the inference (label) as the inference consistency ratio.
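A minimal sketch of how the worker responses for one question are aggregated into a consistency ratio is given below; the example responses are illustrative only.

```python
def consistency_ratio(responses):
    """Average worker agreement into a ratio in [0, 1].

    responses: Likert answers of the workers assigned to one question, e.g.
               ["strongly agree", "agree", "disagree", ...].
    """
    agree = {"strongly agree", "agree"}           # merged into a single agree category
    votes = [1.0 if r in agree else 0.0 for r in responses]
    return sum(votes) / len(votes)                # 0 = all disagree, 1 = all agree

print(consistency_ratio(["agree", "strongly agree", "disagree"]))   # 0.666...
```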

Measure Class Sample Worker Total tasks
PCR 32 10 20 6,400
LCR 32 10 20 6,400
Table 1: Number of human tasks to evaluate the consistency measures. The redundancy added by distinct samples and workers eliminates individual biases. The inference consistency ratio (maximum softmax probability) is computed automatically.

Figures 20(a) and 20(b) show the joint discrete probability distribution of the physical consistency ratio and the logical consistency ratio for correct inference and incorrect inference, respectively.

(a) Distribution on correct inference
(b) Distribution on incorrect inference
Figure 20: Joint discrete probability distribution of the physical consistency ratio and the logical consistency ratio.

When the inference is incorrect, the peak is low, although it is still located in the high-consistency region, and both the physical consistency ratio and the logical consistency ratio spread from low consistency to high consistency. On the other hand, both ratios tend to be high when the inference is correct. The distribution for correct inference clearly shows higher contrast than that for incorrect inference. Therefore, our method provides better analysis for correct inference than for incorrect inference. The mean values of the physical consistency ratio and the logical consistency ratio over the entire experimental data, even including incorrect inference, were 0.75 and 0.70, respectively. According to these results, our method gained consensus among human workers, and the overall distribution of the consistency ratios is reasonable.

6 Discussion

In this section, we discuss whether our proposed simple analysis improves the transparency of the inference processes of convolutional neural networks, and we study what practical discussion becomes possible for a neural network in a machine-learning training and testing process thanks to that improved transparency.

Let us assume that we have a CNN that we are currently training and testing. We have the inference (label) by the currently trained model and the results of our proposed analysis. Figure 23 and Fig. 24 show the results of the analysis for correct inference and incorrect inference, respectively. The images in Fig. 23 and Fig. 24 are converted to the actual input size of CaffeNet, and the receptive fields are omitted due to space limitations.

(a) Analysis results with low ICR
(b) Analysis results with high ICR
Figure 23: Feature and consistency analysis on images with correct inference (label). Left to right: PCR increases; bottom to top: LCR increases.
Figure 24: Results of analysis on images with incorrect inference (label)

We walk through these results of analysis to see what we can read from them.

For the images in the left column of Fig. 23(a), which have analysis results with low physical consistency ratio and low inference consistency ratio, the number of feature vector elements in the inference features can be less than the maximum number of feature vector elements in an inference feature. Human workers may have rated these images' physical consistency ratios low because they saw few feature vector elements in the inference features. It is interesting that the inference consistency, i.e., the maximum softmax probability, is also low when the number of feature vector elements in the inference features is small. This suggests that the inference process we hypothesized in this work is not far from the actual process in the neural network.

The images in the right column of Fig. 23(a) have a high physical consistency ratio and a low inference consistency ratio. The bottom one has a low logical consistency ratio because the labels of the visual attributes are not appropriate. Cassette players should have two large speakers on the left and right, and two of the feature maps may represent them. However, the labels (visual attributes) for these feature maps are rubber tires or rounded (shape), and human workers may not have been able to associate them. There is room to improve the labels of the visual attributes so that humans can more easily comprehend the linguistic feature analysis.

The top-right image in Fig. 23(b) has high physical, logical, and inference consistency. We see three types of visual attributes in the linguistic feature analysis: 1) shape (fine lattice patterns, accumulated fine boxes/circles, leopard patterns), 2) color (two-tone red/white), and 3) concrete objects (black square windows, faces of small animals), and these visual attributes are relevant to ambulance vehicles for humans, too. This is one of the best examples.

The image at the bottom right in Fig. 23(b) has high physical and inference consistency ratios and a low logical consistency ratio. The logical consistency ratio is low because it is difficult for humans to associate the visual attributes in the linguistic feature analysis with the inference class, baseball. However, the inference consistency ratio, i.e., the maximum softmax probability, is high, so the neural network is very confident in this inference. This example shows that the trained neural network may, in some cases, work with inference processes that humans cannot understand.

The image at the top left in Fig. 23(b) has high logical and inference consistency ratios and a low physical consistency ratio. We can see fur-like visual attributes in the result of the linguistic feature analysis, but they are not found in the input image. This example shows that there are features in the trained model which humans cannot understand.

The image at the bottom left in Fig. 23(b) has low physical and logical consistency ratios and a high inference consistency ratio. The linguistic feature analysis indicates sharp roofs/caps and accumulated fine boxes/circles or rubber tires, but humans may not find these visual attributes in the image. Moreover, even if these visual attributes are in the scene, humans cannot understand why they are associated with the inference (label), barber chair. This example combines the above two situations. These three examples show the limitations of deep neural networks in terms of transparency: there is an essential complexity of deep neural networks which we cannot make transparent.

In the second example from the left in the first row of Fig. 24, the inference (label) of the CNN is snake, but the correct label is brambling, a type of bird. Our analysis indicates that the inference feature includes feature vector elements for "squiggle" visual attributes. This is an example of an understandable mistake by the CNN. Although the inference (label) is incorrect, we see squiggles in the input image, and a squiggle is likely to be a snake. We assume that the size of the bird was too small compared with the size of the squiggle patterns, and the CNN may have put high priority on the snake class. If the squiggle patterns, which are made by roof tiles, are larger than the bird, there is room for discussion on whether the ground-truth class should be roof tile rather than bird.

In the second example from the right in the first row of Fig. 24, the inference (label) of the CNN is flat-coated retriever, but the correct label is groenendael; both are black dogs. The inference feature for this incorrect inference (label), flat-coated retriever, is very similar to the inference feature for the correct label, groenendael. This is another pattern of understandable mistake by the CNN: the currently learned visual attributes are not enough to distinguish the two classes, and we need to collect more training data to acquire the relevant visual attributes. If the inference (label) is incorrect while the inference features are correct, this suggests insufficient training data to learn the relevant visual attributes for these classes; a possible action in this case is to collect additional training data for these classes. If 1) the inference features are correct for the input image, and 2) the inference (label) is correct for the inference features, but 3) the inference (label) is incorrect for the input image, then an inaccurate ground-truth label is suggested; a possible action in this case is to review and fix the ground-truth label.

It is important in practice to know the actions we should take next. A low physical consistency ratio suggests that the feature extraction part of the neural network is not well trained to capture enough visual attributes. On the other hand, a low logical consistency ratio suggests that the decision-making part of the neural network, such as classification or regression, is not well trained. A possible action for the former case is to increase the layers in the feature extraction part, which we consider to be the layers before conv5 in CaffeNet. A possible action for the latter case is to increase the layers in the decision-making part, which we consider to be the layers after conv5.

7 Conclusion

In this paper, we developed three types of simple analysis, 1) structural feature analysis, 2) linguistic feature analysis, and 3) consistency analysis, which improve the transparency of the deep neural inference process, to address the black-box property of deep neural networks for safety-critical applications. We then evaluated and discussed our analysis methods and results both qualitatively and quantitatively, and demonstrated the usefulness of our proposed analysis by showing how to use the analysis results in the development process of deep learning models.

It is known that quantitative evaluation of the transparency of algorithms is challenging [30], and we cannot say our work solves the problem completely. However, the deep neural inference process has been a black box until now, and the experiments and discussion in this paper show that our work moves it toward transparency. For example, there used to be no clue for improving a neural network when it produced an incorrect inference (label); now our method gives suggestions, or possible next actions, such as expanding the network or collecting more training data.

References

  • [1] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A.C. Berg, L. Fei-Fei, International Journal of Computer Vision (IJCV) 115(3), 211 (2015). DOI 10.1007/s11263-015-0816-y
  • [2] T. Mikolov, I. Sutskever, K. Chen, G.S. Corrado, J. Dean, in Advances in Neural Information Processing Systems 26, ed. by C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, K.Q. Weinberger (Curran Associates, Inc., 2013), pp. 3111–3119
  • [3] J. Pennington, R. Socher, C.D. Manning, in Empirical Methods in Natural Language Processing (EMNLP) (2014), pp. 1532–1543
  • [4] T. Young, D. Hazarika, S. Poria, E. Cambria, CoRR abs/1708.02709 (2017)
  • [5] A. Graves, N. Jaitly, A. Mohamed, in 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, Olomouc, Czech Republic, December 8-12, 2013 (IEEE, 2013), pp. 273–278. DOI 10.1109/ASRU.2013.6707742
  • [6] K. Uchida, M. Tanaka, M. Okutomi, Neural Networks 105, 197 (2018)
  • [7] IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015 (IEEE Computer Society, 2015)
  • [8] M. Bojarski, D.D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L.D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, K. Zieba, CoRR abs/1604.07316 (2016)
  • [9] H. Kuwajima, H. Yasuoka, T. Nakae, in Joint Workshop between ICML, AAMAS and IJCAI on Deep (or Machine) Learning for Safety-Critical Applications in Engineering (2018)
  • [10] P. Koopman, M. Wagner, SAE International Journal of Transportation Safety 4(2016-01-0128), 15 (2016)
  • [11] T. Miller, CoRR (2017)
  • [12] K. Xu, J. Ba, R. Kiros, K. Cho, A.C. Courville, R. Salakhutdinov, R.S. Zemel, Y. Bengio, in Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, JMLR Workshop and Conference Proceedings, vol. 37, ed. by F.R. Bach, D.M. Blei (JMLR.org, 2015), pp. 2048–2057
  • [13] O. Vinyals, A. Toshev, S. Bengio, D. Erhan, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015 (2015), pp. 3156–3164. DOI 10.1109/CVPR.2015.7298935
  • [14] L.A. Hendricks, Z. Akata, M. Rohrbach, J. Donahue, B. Schiele, T. Darrell, in Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, Lecture Notes in Computer Science, vol. 9908, ed. by B. Leibe, J. Matas, N. Sebe, M. Welling (Springer, 2016), pp. 3–19. DOI 10.1007/978-3-319-46493-0_1
  • [15] K. Fukushima, S. Miyake, Pattern Recognition 15(6), 455 (1982). DOI 10.1016/0031-3203(82)90024-3
  • [16] A. Krizhevsky, I. Sutskever, G.E. Hinton, in Advances in Neural Information Processing Systems 25, ed. by F. Pereira, C.J.C. Burges, L. Bottou, K.Q. Weinberger (Curran Associates, Inc., 2012), pp. 1097–1105
  • [17] H. Kuwajima, M. Tanaka, in Transparent and interpretable Machine Learning in Safety Critical Environments (NIPS2017 Workshop) (2017)
  • [18] D. Gunning. Explainable artificial intelligence (xai) (2016). URL https://www.darpa.mil/program/explainable-artificial-intelligence
  • [19] F. Grün, C. Rupprecht, N. Navab, F. Tombari, CoRR abs/1606.07757 (2016)
  • [20] G. Montavon, W. Samek, K. Müller, CoRR abs/1706.07979 (2017)
  • [21] A. Shrikumar, P. Greenside, A. Shcherbina, A. Kundaje, CoRR abs/1605.01713 (2016)
  • [22] A. Binder, G. Montavon, S. Lapuschkin, K. Müller, W. Samek, in Artificial Neural Networks and Machine Learning - ICANN 2016 - 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II, Lecture Notes in Computer Science, vol. 9887, ed. by A.E.P. Villa, P. Masulli, A.J.P. Rivero (Springer, 2016), pp. 63–71. DOI 10.1007/978-3-319-44781-0_8
  • [23] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, K. Müller, Pattern Recognition 65, 211 (2017). DOI 10.1016/j.patcog.2016.11.008
  • [24] M. Bojarski, A. Choromanska, K. Choromanski, B. Firner, L.D. Jackel, U. Muller, K. Zieba, CoRR abs/1611.05418 (2016)
  • [25] V. Escorcia, J.C. Niebles, B. Ghanem, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015 (2015), pp. 1256–1264. DOI 10.1109/CVPR.2015.7298730
  • [26] D. Bau, B. Zhou, A. Khosla, A. Oliva, A. Torralba, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
  • [27] D.H. Park, L.A. Hendricks, Z. Akata, B. Schiele, T. Darrell, M. Rohrbach, CoRR abs/1612.04757 (2016)
  • [28] W. Ding, R. Wang, F. Mao, G. Taylor, arXiv preprint arXiv:1412.2302 (2014)
  • [29] G.A. Miller, R. Beckwith, C. Fellbaum, D. Gross, K.J. Miller, International Journal of Lexicography 3(4), 235 (1990)
  • [30] H.K. Dam, T. Tran, A. Ghose, in Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results, ICSE (NIER) 2018, Gothenburg, Sweden, May 27 - June 03, 2018, ed. by A. Zisman, S. Apel (ACM, 2018), pp. 53–56. DOI 10.1145/3183399.3183424