Abstract

Using machine learning in high-stakes applications often requires predictions to be accompanied by explanations comprehensible to the domain user, who has ultimate responsibility for decisions and outcomes. Recently, a new framework called TED (Hind et al., 2019) has been proposed to provide meaningful explanations for predictions. This framework augments training data to include explanations elicited from domain users, in addition to features and labels. This approach ensures that explanations for predictions are tailored to the complexity expectations and domain knowledge of the consumer.

In this paper, we build on this foundational work by exploring more sophisticated instantiations of the TED framework and empirically evaluating their effectiveness in two diverse domains: chemical odor and skin cancer prediction. Results demonstrate that meaningful explanations can be reliably taught to machine learning algorithms and, in some cases, can also improve modeling accuracy.


 

Teaching AI to Explain its Decisions
Using Embeddings and Multi-Task Learning

 

Noel C. F. Codella* 1  Michael Hind* 1  Karthikeyan Natesan Ramamurthy* 1  Murray Campbell1  Amit Dhurandhar1  Kush R. Varshney1  Dennis Wei1  Aleksandra Mojsilović1 


*Equal contribution. 1IBM Research AI, Yorktown Heights, NY, USA. Correspondence to: Michael Hind <hindm@us.ibm.com>.
2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA. Copyright by the author(s).
1. Introduction

New regulations call for automated decision making systems to provide “meaningful information” on the logic used to reach conclusions (Goodman & Flaxman, 2016; Wachter et al., 2017; Selbst & Powles, 2017). Selbst & Powles (2017) interpret the concept of “meaningful information” as information that should be understandable to the audience (potentially individuals who lack specific expertise), is actionable, and is flexible enough to support various technical approaches.

Recently, Hind et al. (2019) introduced a new framework, called TED (Teaching Explanations for Decisions), for providing meaningful explanations for machine learning predictions. The framework requires the training dataset to include an explanation (E), along with the features (X) and target label (Y). A model is learned from this training set so that predictions include a label and an explanation, both of which are from the set of possible labels (Y) and explanations (E) that are provided in the training data.

This approach has several advantages. The explanations provided should match the complexity capability and domain knowledge of the target users because they are the ones providing the training explanations. The explanations should be as accurate as the underlying predictions. The framework is general in that it can be applied to any supervised machine learning algorithm.

In addition to describing the framework and other advantages, Hind et al. (2019) also describe a simple instantiation of the framework, based on the Cartesian product, that combines the label and explanation to form a new training dataset (X, Y × E), which is then fed into any machine learning algorithm. The resulting model produces predictions in Y × E that can be decomposed into the Y and E components. This instantiation is evaluated on two synthetic datasets for playing tic-tac-toe and loan repayment. Their results show "(1) To the extent that user explanations follow simple logic, very high explanation accuracy can be achieved; (2) Accuracy in predicting Y not only does not suffer but actually improves." A minimal sketch of this Cartesian-product instantiation appears below.
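The following sketch illustrates the Cartesian-product instantiation under simple assumptions; it is not the authors' code, the classifier choice is arbitrary, and the encode/decode helpers are hypothetical (labels and explanations are assumed to be integer-coded).

```python
# Illustrative sketch of the Cartesian-product TED instantiation:
# combine (label, explanation) into one class, train any classifier,
# then decompose predictions back into label and explanation.
from sklearn.ensemble import RandomForestClassifier

def encode(y, e, num_explanations):
    """Hypothetical helper: map a (label, explanation) pair to a class in Y x E."""
    return y * num_explanations + e

def decode(ye, num_explanations):
    """Recover the label and explanation components of a Y x E class."""
    return ye // num_explanations, ye % num_explanations

def fit_ted_cartesian(X_train, y_train, e_train, num_explanations):
    ye_train = [encode(y, e, num_explanations) for y, e in zip(y_train, e_train)]
    clf = RandomForestClassifier()          # any supervised classifier works here
    clf.fit(X_train, ye_train)
    return clf

def predict_ted_cartesian(clf, X_test, num_explanations):
    return [decode(ye, num_explanations) for ye in clf.predict(X_test)]
```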

In this work, we explore more sophisticated instantiations of the TED framework that leverage training explanations to improve the accuracy of prediction as well as the generation of explanations. Specifically, we explore bringing together the labels and explanations in a multi-task setting, as well as building upon the tradition of similarity metrics, case-based reasoning and content-based retrieval.

Existing approaches that only have access to features and labels are unable to find meaningful similarities. However, with the advantage of having training features, labels, and explanations, we propose to learn feature embeddings guided by labels and explanations. This allows us to infer explanations for new data using nearest neighbor approaches. We present a new objective function to learn an embedding to optimize k-nearest neighbor (k-NN) search for both prediction accuracy as well as holistic human relevancy to enforce that returned neighbors present meaningful information. The proposed embedding approach is easily portable to a diverse set of label and explanation spaces because it only requires a notion of similarity between examples in these spaces. Since any predicted explanation or label is obtained from a simple combination of training examples, complexity and domain match is achieved with no further effort. We also demonstrate the multi-task instantiation wherein labels and explanations are predicted together from features. In contrast to the embedding approach, we need to change the structure of the ML model for this method due to the modality and type of the label and explanation space.

We demonstrate the proposed paradigm using the two instantiations on a publicly-available olfactory pleasantness dataset (Keller et al., 2017) and a melanoma classification dataset (Codella et al., 2018a). (An extended version of this paper, Codella et al. (2018c), also evaluates an image aesthetics dataset (Kong et al., 2016).) Teaching explanations requires a training set that contains explanations. Since such datasets are not readily available, we use the attributes given with the pleasantness dataset in a unique way: as collections of meaningful explanations. For the melanoma classification dataset, we use the groupings given by human users described in Codella et al. (2018b) as the explanations.

The main contributions of this work are:

  • Instantiations and evaluation of several candidate TED approaches, some that learn efficient embeddings that can be used to infer labels and explanations for novel data, and some that use multi-task learning to predict labels and explanations.

  • Evaluation on disparate datasets with diverse label and explanation spaces demonstrating the efficacy of the paradigm.

2. Related Work

Prior work in providing explanations can be partitioned into several areas:

  1. Making existing or enhanced models interpretable, i.e., providing a precise description of how the model determined its decision (e.g., Ribeiro et al. (2016); Montavon et al. (2017); Lundberg & Lee (2017)).

  2. Creating a simpler-to-understand model, such as a small number of logical expressions, that mostly matches the decisions of the deployed model (e.g., Bastani et al. (2018); Caruana et al. (2015)), or directly training a simple-to-understand model from the data (e.g., Dash et al. (2018); Wang et al. (2017); Cohen (1995); Breiman (2017)).

  3. Leveraging “rationales”, “explanations”, “attributes”, or other “privileged information” in the training data to help improve the accuracy of the algorithms (e.g., Sun & DeJong (2005); Zaidan & Eisner (2008); Donahue & Grauman (2011); Peng et al. (2016)).

  4. Work in the natural language processing and computer vision domains that generate rationales/explanations derived from input text (e.g., Lei et al. (2016); Ainur et al. (2010); Hendricks et al. (2016)).

  5. Content-based retrieval methods that provide explanations as evidence employed for a prediction, i.e., k-nearest neighbor classification and regression (e.g., Wan et al. (2014); Jimenez-del-Toro et al. (2015); Li et al. (2018); Sun et al. (2012)).

The first two groups attempt to precisely describe how a machine learning decision was made, which is particularly relevant for AI system builders. This insight can be used to improve the AI system and may serve as the seeds for an explanation to a non-AI expert. However, work still remains to determine if these seeds are sufficient to satisfy the needs of a non-AI expert. In particular, when the underlying features are not human comprehensible, these approaches are inadequate for providing human consumable explanations.

The third group, like this work, leverages additional information (explanations) in the training data, but with different goals. The third group uses the explanations to create a more accurate model; the TED framework leverages the explanations to teach how to generate explanations for new predictions.

The fourth group seeks to generate textual explanations with predictions. For text classification, this involves selecting the minimal necessary content from a text body that is sufficient to trigger the classification. For computer vision (Hendricks et al., 2016), this involves utilizing textual captions to automatically generate new textual captions of images that are both descriptive as well as discriminative. While serving to enrich an understanding of the predictions, these systems do not necessarily facilitate an improved ability for a human user to understand system failures.

The fifth group creates explanations in the form of decision evidence: using some feature embedding to perform k-nearest neighbor search, using those k neighbors to make a prediction, and demonstrating to the user the nearest neighbors and any relevant information regarding them. Although this approach is fairly straightforward and holds a great deal of promise, it has historically suffered from the issue of the semantic gap: distance metrics in the realm of the feature embeddings do not necessarily yield neighbors that are relevant for prediction. More recently, deep feature embeddings, optimized for generating predictions, have made significant advances in reducing the semantic gap. However, there still remains a “meaning gap” — although systems have gotten good at returning neighbors with the same label as a query, they do not necessarily return neighbors that agree with any holistic human measures of similarity. As a result, users are not necessarily inclined to trust system predictions.

Doshi-Velez et al. (2017) discuss the societal, moral, and legal expectations of AI explanations, provide guidelines for the content of an explanation, and recommend that explanations of AI systems be held to a similar standard as humans. Our approach is compatible with their view. Biran & Cotton (2017) provide an excellent overview and taxonomy of explanations and justifications in machine learning.

Miller (2017) and Miller et al. (2017) argue that explainable AI solutions need to meet the needs of the users, an area that has been well studied in philosophy, psychology, and cognitive science. They provide a brief survey of the most relevant work in these fields to the area of explainable AI. They, along with Doshi-Velez & Kim (2017), call for more rigor in this area.

3. Teaching Explanations

The primary motivation of the TED paradigm is to provide meaningful explanations to consumers by leveraging the consumers' knowledge of what will be meaningful to them. Section 3.1 formally describes the problem space that defines the TED approach.

This paper focuses on instantiations of the TED approach that leverage the explanations to improve model prediction and possibly explanation accuracy. Section 3.2 takes this approach, learning feature embeddings and explanation embeddings in a joint and aligned way to permit neighbor-based explanation prediction. It presents a new objective function to learn an embedding that optimizes k-NN search for both prediction accuracy and holistic human relevancy, enforcing that returned neighbors present meaningful information. We also discuss multi-task learning in the label and explanation space as another instantiation of the TED approach, which we use for comparison.

3.1. Problem Formulation

Let X × Y denote the input-output space, with p(x, y) denoting the joint distribution over this space, where x ∈ X and y ∈ Y. Then typically, in supervised learning one wants to estimate p(y | x).

In our setting, we have a triple X × Y × E that denotes the input space, output space, and explanation space, respectively. We then assume that we have a joint distribution p(x, y, e) over this space, where x ∈ X, y ∈ Y, and e ∈ E. In this setting we want to estimate p(y, e | x). Thus, we not only want to predict the labels y, but also the corresponding explanations e for the specific x and y, based on historical explanations given by human experts. The space E in most of these applications is quite different from X and has similarities with Y in that it requires human judgment.

We provide methods to solve the above problem. Although these methods can be used even when X is human-understandable, we envision the most impact for applications where this is not the case, such as the olfaction dataset described in Section 4.1.

3.2. Candidate Approaches

We propose several candidate implementation approaches to teach labels and explanations from training data, and predict them for unseen test data. We describe the baseline regression and embedding approaches below. The particular parameters and specific instantiations are provided in Section 4.

3.2.1. Baseline: Predicting Y or E from X

To set the baseline, we trained a regression (classification) network on the datasets to predict Y from X using the mean-squared error (cross-entropy) loss. This cannot be used to infer E for a novel X. A similar learning approach was used to predict E from X. If E is vector-valued, we used multi-task learning.

3.2.2. Multi-task Prediction of Y and E

We trained a multi-task network to predict Y and E together from X. Similar to the previous case, we used appropriate loss functions.

3.2.3. Embeddings from Baseline Networks + k-NN

We propose to use the activations from the last fully connected hidden layer of the network trained to predict Y or E as embeddings for X. Given a novel X, we obtain its k nearest neighbors in the embedding space from the training set, and use the corresponding Y and E values to obtain predictions as weighted averages. The weights are determined using a Gaussian kernel on the distances in the embedding space of the novel X to its neighbors in the training set. This procedure is used with all the k-NN-based prediction approaches.
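A minimal sketch of this weighted k-NN readout is given below, assuming embeddings and training targets are stored as NumPy arrays; the Euclidean distance and the kernel bandwidth `sigma` are assumptions, since the text does not specify them here.

```python
import numpy as np

def knn_predict(query_emb, train_emb, train_targets, k=5, sigma=1.0):
    """Predict a target (Y or E) for a query embedding as a Gaussian-weighted
    average of the targets of its k nearest training neighbors."""
    dists = np.linalg.norm(train_emb - query_emb, axis=1)     # distances to all training points
    nn_idx = np.argsort(dists)[:k]                            # indices of the k nearest neighbors
    weights = np.exp(-dists[nn_idx] ** 2 / (2 * sigma ** 2))  # Gaussian kernel on distances
    weights /= weights.sum()
    # Weighted average of neighbor targets (works for scalar or vector-valued targets)
    return np.average(train_targets[nn_idx], axis=0, weights=weights)
```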

3.2.4. Pairwise Loss for Improved Embeddings + k-NN

Since our key instantiation is to predict Y and E using the k-NN approach described above, we propose to improve upon the embeddings of X from the regression network by explicitly ensuring that points with similar Y and E values are mapped close to each other in the embedding space. For a pair of data points (x_1, y_1, e_1) and (x_2, y_2, e_2), we define the following pairwise loss functions for creating the embedding f(x), where the shorthand f_i for f(x_i) is used for clarity below:

$$
L_Y(x_1, x_2) =
\begin{cases}
1 - \cos(f_1, f_2), & y_1, y_2 \text{ neighbors}, \\
\max\bigl(0, \cos(f_1, f_2) - m_Y\bigr), & y_1, y_2 \text{ non-neighbors},
\end{cases}
\qquad (1)
$$

$$
L_E(x_1, x_2) =
\begin{cases}
1 - \cos(f_1, f_2), & e_1, e_2 \text{ neighbors}, \\
\max\bigl(0, \cos(f_1, f_2) - m_E\bigr), & e_1, e_2 \text{ non-neighbors},
\end{cases}
\qquad (2)
$$

The cosine similarity cos(f_1, f_2) = (f_1 · f_2) / (||f_1|| ||f_2||), where · denotes the dot product between the two vector embeddings and ||·|| denotes the norm. Eqn. (1) defines the embedding loss based on similarity in the Y space. If y_1 and y_2 are close (neighbors), the cosine distance between f_1 and f_2 will be minimized. If y_1 and y_2 are far (non-neighbors), the cosine similarity will be minimized (up to some margin m_Y ≥ 0), thus maximizing the cosine distance. It is possible to set m_Y > 0 to create a clear buffer between neighbors and non-neighbors. The loss function (2) based on similarity in the E space is exactly analogous, with margin m_E. We combine the losses using Y and E similarities as

$$
L_{Y,E}(x_1, x_2) = w_Y \, L_Y(x_1, x_2) + w_E \, L_E(x_1, x_2),
\qquad (3)
$$

where w_Y and w_E denote the scalar weights on the respective losses; their values were fixed per dataset in our experiments. The neighborhood criteria on y and e in (1) and (2) are only valid if they are continuous valued. If they are categorical, we adopt a different neighborhood criterion, whose specifics are discussed in the relevant experiment below.
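Below is a minimal PyTorch sketch of these pairwise losses, assuming precomputed boolean neighbor indicators for each pair; the default weights and margins are placeholders, not the values used in the experiments. PyTorch's built-in `nn.CosineEmbeddingLoss` implements the same per-pair form for a ±1 target.

```python
import torch
import torch.nn.functional as F

def pairwise_cosine_loss(f1, f2, is_neighbor, margin=0.0):
    """Per-pair loss of Eqns. (1)-(2): pull neighbors together (1 - cos),
    push non-neighbors apart up to a margin (max(0, cos - margin))."""
    cos = F.cosine_similarity(f1, f2, dim=-1)
    return torch.where(is_neighbor, 1.0 - cos, torch.clamp(cos - margin, min=0.0))

def combined_pairwise_loss(f1, f2, y_neighbor, e_neighbor,
                           w_y=0.5, w_e=0.5, margin_y=0.0, margin_e=0.0):
    """Weighted sum of the Y- and E-based losses, as in Eqn. (3).
    Weights and margins here are placeholders tuned on a validation set."""
    loss_y = pairwise_cosine_loss(f1, f2, y_neighbor, margin_y)
    loss_e = pairwise_cosine_loss(f1, f2, e_neighbor, margin_e)
    return (w_y * loss_y + w_e * loss_e).mean()
```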

4. Evaluation

To evaluate the TED instantiations presented in this work, we focus on two fundamental questions:

  1. Does the instantiation provide useful explanations?

  2. How is the prediction accuracy impacted by incorporating explanations into the training?

Since the TED approach can be incorporated into many kinds of learning algorithms, tested against many datasets, and used in many different situations, a definitive answer to these questions is beyond the scope of this paper. Instead we try to address these two questions on two datasets, evaluating accuracy in the standard way.

Determining if any approach provides useful explanations is a challenge and no consensus metric has yet emerged (Doshi-Velez et al., 2017). However, the TED approach has a unique advantage in dealing with this challenge. Specifically, since it requires that explanations be provided for the target dataset (training and testing), one can evaluate the accuracy of a model's explanation (E) in a similar way that one evaluates the accuracy of a predicted label (Y). We provide more details on the metrics used in Section 4.2. In general, we expect several metrics of explanation efficacy to emerge, including those involving the target explanation consumers (Dhurandhar et al., 2017).

4.1. Datasets

The TED approach requires a training set that contains explanations. Since such datasets are not readily available, we evaluate the approach on two publicly available datasets in a unique way: Olfactory (Keller et al., 2017) and Melanoma detection (Codella et al., 2018a).

The Olfactory dataset (Keller et al., 2017) is a challenge dataset describing various scents (chemical bondings and labels). Each of the 476 rows represents a molecule described by a large set of chemoinformatic features (X) (angles between bonds, types of atoms, etc.). Each row also contains 21 human perceptions of the molecule, such as intensity, pleasantness, sour, musky, and burnt. These are average values among 49 diverse individuals. We take Y to be the pleasantness perception and E to be the remaining 19 perceptions except for intensity, since these 19 are known to be more fundamental semantic descriptors while pleasantness and intensity are holistic perceptions (Keller et al., 2017). We use the standard training, test, and validation sets provided by the challenge organizers.

Figure 1: Example images from the ISIC Melanoma detection dataset. The visual similarity between melanoma and non-melanoma images is seen from the left and middle images. In the right image, the visually similar lesions are placed in the same group (i.e., have the same E value).

The 2017 International Skin Imaging Collaboration (ISIC) challenge on Skin Lesion Analysis Toward Melanoma Detection dataset (Codella et al., 2018a) is a public dataset of dermoscopic images with separate training and test sets. Each image belongs to one of three classes: melanoma (513 images), seborrheic keratosis (339 images), and benign nevus (1748 images). We use a version of this dataset described by Codella et al. (2018b), where the melanoma images were partitioned into 20 groups, the seborrheic keratosis images were divided into 12 groups, and 15 groups were created for benign nevus, by a non-expert human user. We show some example images from this dataset in Figure 1. We take the class labels to be Y and the total set of groups to be E. In this dataset, each E maps to a unique Y. We partition the original training set into training and validation sets for use in our experiments, and continue using the original test set.

4.2. Metrics

An open question that we do not attempt to resolve here is the precise form that explanations should take. It is important that they match the mental model of the explanation consumer. For example, one may expect explanations to be categorical (as in loan approval reason codes or our melanoma dataset) or discrete ordinal, as in human ratings. Explanations may also be continuous in crowdsourced environments, where the final rating is a (weighted) average over the human ratings. This is the case in the Olfactory dataset that we consider, where each explanation is averaged over 49 individuals.

In the Olfactory dataset, since we use the existing continuous-valued attributes as explanations, we choose to treat them both as-is and discretized into three bins representing negative, neutral, and positive values. The latter mimics human ratings (e.g., not pleasing, neutral, or pleasing). Specifically, we train on the original continuous values and report the mean absolute error (MAE) between Y and a continuous-valued prediction. We also similarly discretize Y and its prediction into the three bins. We then report both the absolute error in the discretized values and the 0-1 error, where the latter corresponds to conventional classification accuracy. For Olfactory, the bin thresholds are chosen to partition the scores in the training data into thirds.

The explanations E are treated similarly to Y by computing distances (sum of absolute differences over attributes) before and after discretizing to the three bins. We do not, however, compute the 0-1 error for E. The thresholds for E on Olfactory are likewise chosen to roughly partition the values into thirds based on the training data. A small sketch of this discretization and the resulting metrics follows.
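The sketch below shows one way to compute the discretized and continuous metrics described above; the threshold values `lo` and `hi` are placeholders for the dataset-specific cut points, which are not reproduced here.

```python
import numpy as np

def discretize(values, lo, hi):
    """Map continuous scores to bins {-1, 0, 1} (negative / neutral / positive)
    using thresholds lo < hi chosen to split the training data into thirds."""
    return np.where(values < lo, -1, np.where(values < hi, 0, 1))

def report_metrics(y_true, y_pred, lo, hi):
    mae_cont = np.mean(np.abs(y_true - y_pred))          # MAE on continuous values
    yt, yp = discretize(y_true, lo, hi), discretize(y_pred, lo, hi)
    mae_disc = np.mean(np.abs(yt - yp))                  # MAE on discretized bins
    acc = np.mean(yt == yp)                              # 0-1 accuracy on bins
    return mae_cont, mae_disc, acc
```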

For the melanoma classification dataset, since both Y and E are categorical, we use classification accuracy as the performance metric for both Y and E.

Algorithm        | Weight or K | Y Accuracy | E Accuracy
-----------------|-------------|------------|-----------
Baseline (Y)     | NA          | 0.7045     | NA
Baseline (E)     | NA          | 0.6628     | 0.4107
Multi-task       | 0.01        | 0.6711     | 0.2838
classification   | 0.1         | 0.6644     | 0.2838
(Y & E)          | 1           | 0.6544     | 0.4474
                 | 10          | 0.6778     | 0.4274
                 | 25          | 0.7145     | 0.4324
                 | 50          | 0.6694     | 0.4057
                 | 100         | 0.6761     | 0.4140
                 | 250         | 0.6711     | 0.3957
                 | 500         | 0.6327     | 0.3907
Embedding (Y)    | 1           | 0.6962     | 0.2604
+ k-NN           | 2           | 0.6995     | 0.2604
                 | 5           | 0.6978     | 0.2604
                 | 10          | 0.6962     | 0.2604
                 | 15          | 0.6978     | 0.2604
                 | 20          | 0.6995     | 0.2604
Embedding (E)    | 1           | 0.6978     | 0.4357
+ k-NN           | 2           | 0.6861     | 0.4357
                 | 5           | 0.6861     | 0.4357
                 | 10          | 0.6745     | 0.4407
                 | 15          | 0.6828     | 0.4374
                 | 20          | 0.6661     | 0.4424
Pairwise (Y)     | 1           | 0.7162     | 0.1619
+ k-NN           | 2           | 0.7179     | 0.1619
                 | 5           | 0.7179     | 0.1619
                 | 10          | 0.7162     | 0.1619
                 | 15          | 0.7162     | 0.1619
                 | 20          | 0.7162     | 0.1619
Pairwise (E)     | 1           | 0.7245     | 0.3406
+ k-NN           | 2           | 0.7279     | 0.3406
                 | 5           | 0.7229     | 0.3389
                 | 10          | 0.7279     | 0.3389
                 | 15          | 0.7329     | 0.3372
                 | 20          | 0.7312     | 0.3356
                       |    |        Performance on Y               |    Performance on E
Algorithm              | K  | Class. Acc. | MAE (Disc.) | MAE (Cont.) | MAE (Disc.) | MAE (Cont.)
-----------------------|----|-------------|-------------|-------------|-------------|------------
Baseline LASSO (Y)     | NA | 0.4928      | 0.5072      | 8.6483      | NA          | NA
Baseline RF (Y)        | NA | 0.5217      | 0.4783      | 8.9447      | NA          | NA
Multi-task regression  | NA | 0.4493      | 0.5507      | 11.4651     | 0.5034      | 3.6536
(Y & E)                |    |             |             |             |             |
Multi-task regression  | NA | NA          | NA          | NA          | 0.5124      | 3.3659
(E only)               |    |             |             |             |             |
Embedding (Y)          | 1  | 0.5362      | 0.5362      | 11.7542     | 0.5690      | 4.2050
+ k-NN                 | 2  | 0.5362      | 0.4928      | 9.9780      | 0.4950      | 3.6555
                       | 5  | 0.6087      | 0.4058      | 9.2840      | 0.4516      | 3.3488
                       | 10 | 0.5652      | 0.4783      | 10.1398     | 0.4622      | 3.4128
                       | 15 | 0.5362      | 0.4928      | 10.4433     | 0.4798      | 3.4012
                       | 20 | 0.4783      | 0.5652      | 10.9867     | 0.4813      | 3.4746
Pairwise (Y)           | 1  | 0.6087      | 0.4783      | 10.9306     | 0.5515      | 4.3547
+ k-NN                 | 2  | 0.5362      | 0.5072      | 10.9274     | 0.5095      | 3.9330
                       | 5  | 0.5507      | 0.4638      | 10.4720     | 0.4935      | 3.6824
                       | 10 | 0.5072      | 0.5072      | 10.7297     | 0.4912      | 3.5969
                       | 15 | 0.5217      | 0.4928      | 10.6659     | 0.4889      | 3.6277
                       | 20 | 0.4638      | 0.5507      | 10.5957     | 0.4889      | 3.6576
Pairwise (E)           | 1  | 0.6087      | 0.4493      | 11.4919     | 0.5728      | 4.2644
+ k-NN                 | 2  | 0.4928      | 0.5072      | 9.7964      | 0.5072      | 3.7131
                       | 5  | 0.5507      | 0.4493      | 9.6680      | 0.4767      | 3.4489
                       | 10 | 0.5507      | 0.4493      | 9.9089      | 0.4897      | 3.4294
                       | 15 | 0.4928      | 0.5072      | 10.1360     | 0.4844      | 3.4077
                       | 20 | 0.4928      | 0.5072      | 10.0589     | 0.4760      | 3.3877
Pairwise (Y & E)       | 1  | 0.6522      | 0.3913      | 10.4714     | 0.5431      | 4.0833
+ k-NN                 | 2  | 0.5362      | 0.4783      | 10.0081     | 0.4882      | 3.6610
                       | 5  | 0.5652      | 0.4638      | 10.0519     | 0.4622      | 3.4735
                       | 10 | 0.5072      | 0.5217      | 10.3872     | 0.4653      | 3.4786
                       | 15 | 0.5072      | 0.5217      | 10.7218     | 0.4737      | 3.4955
                       | 20 | 0.4493      | 0.5797      | 10.8590     | 0.4790      | 3.5027
Table 1: Accuracy of predicting Y and E for ISIC (top) and Olfactory (bottom) using different methods (Section 3.2). For ISIC, the baselines for Y and E are classification networks. For Olfactory, Baseline LASSO and RF predict Y from X; multi-task LASSO regression with regularization on the coefficient matrix predicts Y & E together, or just E. For both datasets, multi-task learning predicts Y and E together, and Embedding + k-NN uses the embedding from the last hidden layer of the baseline network that predicts Y (or E, where indicated). Pairwise (Y) + k-NN and Pairwise (E) + k-NN use the cosine embedding losses in (1) and (2), respectively, to optimize the embeddings of X. Pairwise (Y & E) + k-NN uses the sum of cosine embedding losses in (3) to optimize the embeddings of X.
4.3. Melanoma Classification Results

We use all the approaches proposed in Section 3.2 to obtain results for the Melanoma dataset: (a) baseline networks for predicting Y and for predicting E, (b) multi-task classification to predict Y and E together, (c) k-NN using embeddings from the baseline network for Y and from the baseline network for E, (d) k-NN using embeddings optimized for the pairwise loss using Y, and using E. We do not obtain embeddings using the weighted pairwise loss with Y and E because there is a one-to-one map from E to Y in this dataset.

The networks used a modified PyTorch implementation of AlexNet for fine-tuning (Krizhevsky et al., 2012). We simplified the fully connected layers to 1024-ReLU-Dropout-64-n, where n = 3 (the number of classes) for predicting Y and n = 47 (the total number of groups) for predicting E. We used cross-entropy losses.

In the multi-task case for predicting Y and E together, the convolutional layers were shared and two separate sets of fully connected layers with 3 and 47 outputs were used. The multi-task network used a weighted sum of the Y and E losses, with the weight varied as shown in Table 1. All of these single-task and multi-task networks were trained with a batch size of 64, with the 64-dimensional embedding layer and the remaining layers given separate learning rates. For training the embeddings using the pairwise losses, we used pairs chosen from the training data, and the hyper-parameters, including the margins and weights, were chosen based on validation set performance. For the loss (1), x_1 and x_2 were said to be neighbors if y_1 = y_2 and non-neighbors otherwise. For the loss (2), x_1 and x_2 were said to be neighbors if e_1 = e_2 and non-neighbors if y_1 ≠ y_2; pairs where y_1 = y_2 but e_1 ≠ e_2 were not considered. A sketch of the shared-trunk multi-task architecture is given below.
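The following is a minimal sketch of such a shared-trunk, two-head network, assuming a torchvision AlexNet backbone; the output sizes (3 classes, 47 groups) follow the dataset description above, while the pooling details and the placement of the loss weight are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MultiTaskTED(nn.Module):
    """Shared convolutional trunk with separate fully connected heads for the
    label Y and the explanation E (1024-ReLU-Dropout-64-n per head)."""
    def __init__(self, num_classes_y=3, num_groups_e=47):
        super().__init__()
        backbone = models.alexnet(weights="IMAGENET1K_V1")
        self.features = backbone.features          # shared convolutional layers
        self.pool = nn.AdaptiveAvgPool2d((6, 6))

        def head(num_out):
            return nn.Sequential(
                nn.Flatten(),
                nn.Linear(256 * 6 * 6, 1024), nn.ReLU(), nn.Dropout(),
                nn.Linear(1024, 64),                # 64-d embedding layer
                nn.Linear(64, num_out),
            )

        self.head_y = head(num_classes_y)
        self.head_e = head(num_groups_e)

    def forward(self, x):
        z = self.pool(self.features(x))
        return self.head_y(z), self.head_e(z)

def multitask_loss(logits_y, logits_e, y, e, w=1.0):
    """Weighted combination of the Y and E cross-entropy losses; w is swept in Table 1."""
    return F.cross_entropy(logits_y, y) + w * F.cross_entropy(logits_e, e)
```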

The top part of Table 1 provides accuracy numbers for Y and E using the proposed approaches. The Y and E accuracies for several of the multi-task and k-NN configurations are better than the baselines, which indicates the value of sharing information between Y and E. The best accuracy on Y is obtained using the Pairwise (E) + k-NN approach, which is not surprising since E carries the information in Y and is more granular than Y. The Pairwise (Y) + k-NN approach performs poorly on E since the information in Y is too coarse for predicting E well.

4.4. Olfactory Pleasantness Results

Since random forest was the winning entry on this dataset (Keller et al., 2017), we used random forest regression to pre-select a subset of the features for subsequent modeling. From these features, we created a base regression network with a fully connected hidden layer of 64 units (the embedding layer), which was then connected to an output layer. No non-linearities were employed, but the data was first transformed and the features were then standardized to zero mean and unit variance. The batch size was 338, and the network with the pairwise loss was trained with the learning rate, number of epochs, and loss weights chosen to maximize performance on the validation set. A sketch of this pipeline appears below.
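Below is a minimal sketch of this pipeline under stated assumptions: the number of retained features `n_keep` and the random forest settings are placeholders, since the exact values are not reproduced here.

```python
import numpy as np
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

def preselect_features(X, y, n_keep=200):
    """Rank features by random forest importance and keep the top n_keep
    (n_keep is a placeholder, not the value used in the paper)."""
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    return np.argsort(rf.feature_importances_)[-n_keep:]

class OlfactoryNet(nn.Module):
    """Linear network with a 64-unit embedding layer and no non-linearities,
    as described above; the embedding is reused for k-NN retrieval."""
    def __init__(self, n_features, n_outputs=1):
        super().__init__()
        self.embed = nn.Linear(n_features, 64)   # 64-unit embedding layer
        self.out = nn.Linear(64, n_outputs)      # output layer (Y or E)

    def forward(self, x):
        z = self.embed(x)
        return self.out(z), z                    # prediction and embedding
```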

The bottom part of Table 1 provides accuracy numbers in a similar format. The results show, once again, improved accuracy over the baselines for Pairwise (Y) + k-NN and Pairwise (Y & E) + k-NN, with a corresponding improvement in MAE for Y. This performance improvement can be attributed to how well Y can be predicted from E under both baselines. Once again, the accuracy of E varies among the k-NN techniques with no clear advantage. The multi-task linear regression does not perform as well as the pairwise-loss-based approaches.

5. Discussion

Hind et al. (2019) discuss the additional labor required for providing training explanations. Researchers (Zaidan & Eisner, 2008; Zhang et al., 2016; McDonnell et al., 2016) have found that, for an expert SME, the time to add labels and explanations is often the same as adding labels alone, and they cite other benefits, such as improved quality and consistency of the resulting training data set. Furthermore, in some instances, the k-NN instantiation of TED may require no extra labor. For example, when embeddings are used as search criteria for evidence-based predictions of queries, end users will, on average, naturally interact with search results that are similar to the query in explanation space. This query-result interaction activity inherently provides similar and dissimilar pairs in the explanation space that can be used to refine an embedding initially optimized for the predictions alone. This reliance on relative distances in explanation space is also what distinguishes this method from multi-task learning objectives, since absolute labels in explanation space need not be defined.

6. Conclusion

The societal demand for "meaningful information" on automated decisions has sparked significant research in AI explainability. This paper describes two novel instantiations of the TED framework. The first learns feature embeddings using label and explanation similarities in a joint and aligned way to permit neighbor-based explanation prediction. The second uses labels and explanations together in a multi-task setting. We have demonstrated these two instantiations on a publicly-available olfactory pleasantness dataset (Keller et al., 2017) and a melanoma detection dataset (Codella et al., 2018a). We hope this work will inspire other researchers to further enrich this paradigm.

References

  • Ainur et al. (2010) Ainur, Y., Choi, Y., and Cardie, C. Automatically generating annotator rationales to improve sentiment classification. In Proceedings of the ACL 2010 Conference Short Papers, pp. 336–341, 2010.
  • Bastani et al. (2018) Bastani, O., Kim, C., and Bastani, H. Interpreting blackbox models via model extraction. arXiv preprint arXiv:1705.08504, 2018.
  • Biran & Cotton (2017) Biran, O. and Cotton, C. Explanation and justification in machine learning: A survey. In IJCAI-17 Workshop on Explainable AI (XAI), 2017.
  • Breiman (2017) Breiman, L. Classification and regression trees. Routledge, 2017.
  • Caruana et al. (2015) Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In ACM SIGKDD International Conference Knowledge Discovery and Data Mining, pp. 1721–1730, Sydney, Australia, August 2015.
  • Codella et al. (2018a) Codella, N. C., Gutman, D., Celebi, M. E., Helba, B., Marchetti, M. A., Dusza, S. W., Kalloo, A., Liopyris, K., Mishra, N., Kittler, H., et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, pp. 168–172, 2018a.
  • Codella et al. (2018b) Codella, N. C., Lin, C.-C., Halpern, A., Hind, M., Feris, R., and Smith, J. R. Collaborative Human-AI (CHAI): Evidence-based interpretable melanoma classification in dermoscopic images. In MICCAI 2018, Workshop on Interpretability of Machine Intelligence in Medical Image Computing (IMIMIC), 2018b. arXiv preprint arXiv:1805.12234.
  • Codella et al. (2018c) Codella, N. C. F., Hind, M., Ramamurthy, K. N., Campbell, M., Dhurandhar, A., Varshney, K. R., Wei, D., and Mojsilovic, A. Teaching meaningful explanations, 2018c. arXiv preprint arXiv:1805.11648.
  • Cohen (1995) Cohen, W. W. Fast effective rule induction. In Machine Learning Proceedings 1995, pp. 115–123. Elsevier, 1995.
  • Dash et al. (2018) Dash, S., Gunluk, O., and Wei, D. Boolean decision rules via column generation. In Advances in Neural Information Processing Systems, pp. 4655–4665, 2018.
  • Dhurandhar et al. (2017) Dhurandhar, A., Iyengar, V., Luss, R., and Shanmugam, K. A formal framework to characterize interpretability of procedures. In ICML Workshop on Human Interpretable Machine Learning (WHI), pp. 1–7, Sydney, Australia, August 2017.
  • Donahue & Grauman (2011) Donahue, J. and Grauman, K. Annotator rationales for visual recognition. In ICCV, 2011.
  • Doshi-Velez & Kim (2017) Doshi-Velez, F. and Kim, B. Towards a rigorous science of interpretable machine learning, 2017. arXiv preprint arXiv:1702.08608v2.
  • Doshi-Velez et al. (2017) Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., Schieber, S., Waldo, J., Weinberger, D., and Wood, A. Accountability of AI under the law: The role of explanation, 2017. arXiv preprint arXiv:1711.01134.
  • Goodman & Flaxman (2016) Goodman, B. and Flaxman, S. EU regulations on algorithmic decision-making and a 'right to explanation'. In ICML Workshop on Human Interpretability in Machine Learning (WHI), pp. 26–30, New York, NY, June 2016.
  • Hendricks et al. (2016) Hendricks, L. A., Akata, Z., Rohrbach, M., Donahue, J., Schiele, B., and Darrell, T. Generating visual explanations. In European Conference on Computer Vision, 2016.
  • Hind et al. (2019) Hind, M., Wei, D., Campbell, M., Codella, N. C. F., Dhurandhar, A., Mojsilovic, A., Ramamurthy, K. N., and Varshney, K. R. TED: Teaching AI to explain its decisions. In AAAI/ACM conference on Artificial Intelligence, Ethics, and Society, 2019.
  • Jimenez-del-Toro et al. (2015) Jimenez-del-Toro, O., Hanbury, A., Langs, G., Foncubierta–Rodriguez, A., and Muller, H. Overview of the visceral retrieval benchmark 2015. In Multimodal Retrieval in the Medical Domain (MRMD) Workshop, in the 37th European Conference on Information Retrieval (ECIR), 2015.
  • Keller et al. (2017) Keller, A., Gerkin, R. C., Guan, Y., Dhurandhar, A., Turu, G., Szalai, B., Mainland, J. D., Ihara, Y., Yu, C. W., Wolfinger, R., Vens, C., Schietgat, L., De Grave, K., Norel, R., Stolovitzky, G., Cecchi, G. A., Vosshall, L. B., and Meyer, P. Predicting human olfactory perception from chemical features of odor molecules. Science, 355(6327):820–826, 2017.
  • Kong et al. (2016) Kong, S., Shen, X., Lin, Z., Mech, R., and Fowlkes, C. Photo aesthetics ranking network with attributes and content adaptation. In European Conference on Computer Vision, pp. 662–679, Amsterdam, Netherlands, October 2016.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 25, pp. 1097–1105. 2012.
  • Lei et al. (2016) Lei, T., Barzilay, R., and Jaakkola, T. Rationalizing neural predictions. In EMNLP, 2016.
  • Li et al. (2018) Li, Z., Zhang, X., Muller, H., and Zhang, S. Large-scale retrieval for medical image analytics: A comprehensive review. In Medical Image Analysis, volume 43, pp. 66–84, 2018.
  • Lundberg & Lee (2017) Lundberg, S. and Lee, S.-I. A unified approach to interpreting model predictions. In Advances of Neural Information Processing Systems, 2017.
  • McDonnell et al. (2016) McDonnell, T., Lease, M., Kutlu, M., and Elsayed, T. Why is that relevant? Collecting annotator rationales for relevance judgments. In AAAI Conference on Human Computation and Crowdsourcing, 2016.
  • Miller (2017) Miller, T. Explanation in artificial intelligence: Insights from the social sciences. arXiv preprint arXiv:1706.07269, June 2017.
  • Miller et al. (2017) Miller, T., Howe, P., and Sonenberg, L. Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences. In Proc. IJCAI Workshop Explainable Artif. Intell., Melbourne, Australia, August 2017.
  • Montavon et al. (2017) Montavon, G., Samek, W., and Müller, K.-R. Methods for interpreting and understanding deep neural networks. Digital Signal Processing, 2017.
  • Peng et al. (2016) Peng, P., Tian, Y., Xiang, T., Wang, Y., and Huang, T. Joint learning of semantic and latent attributes. In ECCV 2016, Lecture Notes in Computer Science, volume 9908, 2016.
  • Ribeiro et al. (2016) Ribeiro, M. T., Singh, S., and Guestrin, C. “Why should I trust you?”: Explaining the predictions of any classifier. In Proc. ACM SIGKDD Int. Conf. Knowl. Disc. Data Min., pp. 1135–1144, San Francisco, CA, August 2016.
  • Selbst & Powles (2017) Selbst, A. D. and Powles, J. Meaningful information and the right to explanation. International Data Privacy Law, 7(4):233–242, November 2017.
  • Sun et al. (2012) Sun, J., Wang, F., Hu, J., and Edabollahi, S. Supervised patient similarity measure of heterogeneous patient records. In SIGKDD Explorations, 2012.
  • Sun & DeJong (2005) Sun, Q. and DeJong, G. Explanation-augmented svm: an approach to incorporating domain knowledge into svm learning. In 22nd International Conference on Machine Learning, 2005.
  • Wachter et al. (2017) Wachter, S., Mittelstadt, B., and Floridi, L. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2):76–99, May 2017.
  • Wan et al. (2014) Wan, J., Wang, D., Hoi, S., Wu, P., Zhu, J., Zhang, Y., and Li, J. Deep learning for content-based image retrieval: A comprehensive study. In Proceedings of the ACM International Conference on Multimedia, 2014.
  • Wang et al. (2017) Wang, T., Rudin, C., Doshi-Velez, F., Liu, Y., Klampfl, E., and MacNeille, P. A bayesian framework for learning rule sets for interpretable classification. Journal of Machine Learning Research, 18(70):1–37, 2017.
  • Zaidan & Eisner (2008) Zaidan, O. F. and Eisner, J. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of EMNLP 2008, pp. 31–40, October 2008.
  • Zhang et al. (2016) Zhang, Y., Marshall, I. J., and Wallace, B. C. Rationale-augmented convolutional neural networks for text classification. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016.