Semantic Explanations of Predictions

Freddy Lécué
INRIA, Sophia Antipolis, France
Accenture Labs, Dublin, Ireland
freddy.lecue@inria.fr
Jiewen Wu
A*STAR Artificial Intelligence Initiative
Institute for InfoComm Research
Singapore
jiewen.wu@acm.org
Authors contributed equally to the work.
Abstract

The main objective of explanations is to transmit knowledge to humans. This work proposes to construct informative explanations for predictions made by machine learning models. Motivated by observations from the social sciences, our approach selects data points from the training sample that exhibit special characteristics crucial for explanation, for instance, ones contrastive to the classification prediction and ones representative of the models. Subsequently, semantic concepts are derived from the selected data points through the use of domain ontologies. These concepts are filtered and ranked to produce informative explanations that improve human understanding. The main features of our approach are that (1) knowledge about explanations is captured in the form of ontological concepts, (2) explanations include contrastive evidences in addition to normal evidences, and (3) explanations are user relevant.

1 Introduction

Machine learning, particularly deep learning, has attracted attention from both industry and academia over the years. Algorithmic advances have spurred applications with near-human accuracy, such as neural machine translation [\citeauthoryearWu et al.2016], as well as novel methods including Generative Adversarial Networks [\citeauthoryearSalimans et al.2016] and Deep Reinforcement Learning [\citeauthoryearMnih et al.2015]. Although highly scalable, accurate, and efficient, most, if not all, machine learning models exhibit limited interpretability [\citeauthoryearLou, Caruana, and Gehrke2012], which implies that humans can hardly explain their final predictions [\citeauthoryearShmueli and others2010]. The lack of meaningful explanations of predictions becomes even more problematic when the models are deployed in financial, medical, and public safety domains, among many others. Explanations are indispensable for building trust between human decision makers and intelligent systems making predictions. For instance, both the context and the rationale of any prediction in medical diagnosis [\citeauthoryearCaruana et al.2015] need to be understood, as some of its consequences may be disastrous. In addition to trust, business owners can demand explanations for more informed decision making, and developers can leverage explanations for debugging and maintenance. More stringent requirements have been dictated by legislation to safeguard fair and ethical decision making in general, notably the European Union General Data Protection Regulation (GDPR), which warrants users a “right to explanation” in algorithmic decision-making [\citeauthoryearGoodman and Flaxman2016].

Although there has been a lack of consensus on the definition of explanations, we have witnessed multiple avenues of research. As argued in [\citeauthoryearLipton2016], these efforts generally fall into two (not necessarily disjoint) categories: one aims at improving the transparency of decision making by unveiling the internal mechanisms of machine learning models, while the other provides post hoc explanations that justify the predictions generated by the models. The first category is sometimes referred to as interpretability. Our paper positions itself in the post hoc explanation category, in response to this specific question:

Why was input x labeled ℓ? (1)

It is obvious that answers to such a question can be subjective. This paper, instead of deriving a complete solution to “correct” explanations, addresses the issue of informativeness [\citeauthoryearLipton2016]. Towards informative explanations, we investigate salient properties of elucidating predictions to human users. In particular, we build on the following findings from a survey of the social science literature on explanations [\citeauthoryearMiller2017].

  • Human explanations imply social interaction [\citeauthoryearHilton1990]. The implication is that, for machine-generated explanations, it is indispensable to associate semantic information with an explanation (or the elements therein) for effective knowledge transmission to users.

  • Users favor contrastive explanations for understanding causes [\citeauthoryearTemple1988, \citeauthoryearLipton1991]. That is, (1) often implies the question:

    Why was input x labeled ℓ instead of ℓ′? (2)
  • Users select explanations. Due to the large space of possible explanations and a specific user’s understanding of the context, she selects the explanations based on what she believes to be the most relevant to her, rather than the most direct or probable causes [\citeauthoryearHilton1990]. The subjectivity of human choices implies that informative explanations may need to consider personalisation or contextualisation.

This paper proposes a method that leverages semantic concepts drawn from data instances to characterize the aforementioned three observations for explanations, thus enabling more effective human understanding of predictions.

Most existing approaches focus on data-driven explanation and lack semantic interpretation, which defeats the objective of human-centric explanations. Instead, our proposed approach exploits the semantics of representative data points in the training sample. It works by (i) selecting representative data points that elaborate the decision boundary of the classifier, (ii) extracting and encoding the semantics of such data points using domain ontologies, and (iii) computing informative explanations by optimizing certain criteria learned from humans’ daily explanations.

The remainder of the paper first reviews the basics and introduces the problem. Then, we describe how representative data points are extracted and show how the semantics of data points is exploited to derive explanations. Conclusions and future research directions are given at the end.

2 Related Work

Interpreting models or predictions dates back at least twenty years. The resurgence of neural nets has also attracted much recent research into the interpretability of such deep models. To position our proposed approach, we discuss a few representative works in the field of machine learning, from the angles outlined by the three observations in the introduction.

Decision trees and random forests have been studied to extract various levels of model interpretation [\citeauthoryearCraven and Shavlik1995, \citeauthoryearPalczewska et al.2013] together with some degree of prediction explanation [\citeauthoryearPang et al.2006]. Although their explanations are tuned to complex stochastic and uncertain rules, they naturally expose high visibility on the decision process. [\citeauthoryearWang et al.2016] exploit the characteristics of classification models by relaxing their decision boundaries to approximate explanations. However, the explanations remain handcrafted from features and raw data, often as rules that are very difficult to generalize. [\citeauthoryearLi et al.2016] targeted neural networks and observed the effects of erasing various parts of the data and its features on the model to derive a minimal but representative set, qualified as an explanation. [\citeauthoryearLei, Barzilay, and Jaakkola2016] study similar models and aim at identifying candidate rationales, i.e., core elements of the model that aim at generalizing any prediction. Instead, [\citeauthoryearKim, Shah, and Doshi-Velez2015] focused on placing interpretability criteria directly into the model to ensure fine-grained exploration and generation of explanations. [\citeauthoryearRibeiro, Singh, and Guestrin2016] elaborated a model-agnostic technique: any test data point is re-sampled and approximated using training data, which is then used as a view, or explanation, of the predictions and the model. Note that our work is not model-agnostic and focuses more on semantic interpretation to achieve a higher level of informativeness. Leveraging contrastive information for explaining predictions has also been applied to image classification, e.g., [\citeauthoryearVedantam et al.2017], which justifies why an image describes a particular, fine-grained concept as opposed to a distractor concept.

Towards human-centric explanations, [\citeauthoryearTintarev and Masthoff2007] designed some general properties of effective explanations in recommender systems. [\citeauthoryearBiran and McKeown2017] focus on combining instance-level and feature-level information to provide a framework that generalizes several types of explanations. A more complete survey on human-centric explanations is available in [\citeauthoryearMiller2017], highlighting research findings from the social sciences.

3 Problem Statement

We focus on the predictions given by (w.l.o.g., binary) classifiers, where data points are partitioned into sets, each of which belongs to one class. The partition surface is a decision boundary. Before delving into the technical details, we first define the problem to be addressed.

3.1 Ontology

An ontology O describes the concept hierarchy of domain knowledge. A concept, denoted by C, represents a type of object. The most common relationship between concepts is subsumption (is-a), denoted ⊑. For instance, Human ⊑ Animal w.r.t. some ontology. An ontology can define many different types of semantic relationships beyond the is-a relationship, e.g., hasChild, hasParent, and so on. The hierarchical relations of concepts in O can be described as a graph for easy manipulation: each concept is a vertex, while each semantic relationship is a directed edge. An edge may be weighted to indicate how strong the semantic relation between the concepts is.
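For illustration, the graph view of an ontology can be sketched with networkx as follows; the concept names, relations, and edge weights are purely illustrative and not drawn from any particular ontology.

# A toy ontology graph: concepts as vertices, weighted semantic relations as edges.
import networkx as nx

onto = nx.DiGraph()
onto.add_edge("Patient", "Human", relation="is-a", weight=0.9)   # Patient is-a Human
onto.add_edge("Human", "Animal", relation="is-a", weight=1.0)    # Human is-a Animal
onto.add_edge("Child", "Parent", relation="hasParent", weight=0.5)

# Hierarchical reasoning then reduces to graph traversal, e.g., hop distance:
print(nx.shortest_path_length(onto, "Patient", "Animal"))        # -> 2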

3.2 Explanation Problem Statement

An informative explanation, the objective of this paper, can be defined based on the three observations above. Without loss of generality, consider a binary classifier f and the prediction for some given data point x; intuitively, the aim is to find a set of human-understandable descriptions of x with respect to f and the training data. To ease the presentation, a classifier is also abused as a function. That is, f(x) = ℓ means that x is predicted to be of label ℓ by the classifier f. A formal definition now follows.

Definition 1.

(Informative Explanation)
Let f be a binary classifier, X be the set of training data points, x be a test data point with f(x) = ℓ, and C be a set of concepts. We define a data point selection function σ, which maps f, X, and x to a subset of X, and a semantic uplift function μ, which maps subsets of X to subsets of C.

An informative explanation is E = μ(X_R), where X_R = σ(f, X, x) and E ⊆ C.

Observe that the definition of the semantic uplift function μ implies that a set of data points can be assigned multiple ontological concepts.

Algorithmic considerations In addition to defining the functions σ and μ, two more conditions are imposed on the algorithm design: (1) E must be concise to meet the observations above, so the size of E needs to be optimized using quantifiable measures; (2) the content of E needs to show contrastive information, and ranking is necessary to allow for user choices based on relevancy, as discussed above.

Assumption To semantically interpret data points, the raw features must be (at least partially) semantically meaningful, so that semantics is available from the beginning. In practice most datasets have textual descriptions, and, in the rare cases where the raw features lack any description, advice from dataset owners or domain experts can be sought. Note that the proposed approach assumes nominal features are expressed in one-hot encoding.

Example 1.

(Explanation of Classification Prediction)
The rest of the paper uses the dataset Haberman’s Survival from the UCI repository (http://archive.ics.uci.edu/ml/datasets/) as a running example. The dataset contains cases from a study conducted between 1958 and 1970 at the University of Chicago’s Billings Hospital on the survival of patients who had undergone surgery for breast cancer. The task is to classify patients into those who (1) survived 5 years or longer or (2) died within 5 years, using the predictors: age, year of operation, and number of positive axillary nodes detected. We aim at identifying informative explanations, as in Definition 1, for any predicted data point x w.r.t. a classifier f, the training data points X, and a domain ontology O.

3.3 Organization

The organization of the remainder is as follows. We first describe the data point selection function σ, which chooses the most interesting data points for the defined problem. From the chosen data points, we then show how concepts can be drawn, enhanced by consulting ontologies, reduced for succinctness, and finally ranked for user choices.

4 Identifying Representative Training Data

To generate an informative explanation, we first need to consider how to find representative data points, i.e., the function σ. For the sake of clarity, we concentrate on binary classification (positive or negative) problems and two representative machine learning models: a linear classifier, Logistic Regression (LR), and a non-linear classifier, k-Nearest Neighbour (k-NN). The approach can be easily extended to multi-class classifiers.

4.1 Decision Boundary of Classification Models

We now review how to compute the decision boundaries of the two models.

LR Models are captured as follows:

p(x) = P(y = 1 | x) = 1 / (1 + e^−(β₀ + β₁x₁ + ⋯ + βₙxₙ))   (3)

where p(x) denotes the probability of x being in the positive class, and β₀, β₁, …, βₙ are the parameters for the given predictors x₁, …, xₙ; in particular, β₀ is the constant term, i.e., the intercept.

LR Decision Boundary is computed using (4), considering the positive class with p(x) = 0.5 in (3).

β₀ + β₁x₁ + ⋯ + βₙxₙ = 0   (4)
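As a concrete illustration, the sketch below (not the authors’ code) fits an LR model with scikit-learn on a few toy rows shaped like the Haberman’s Survival predictors and computes each point’s Euclidean distance to the decision boundary in (4); the toy values and labels are made up.

import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([[30, 64, 1], [52, 65, 4], [65, 58, 0], [40, 60, 10]])  # age, year, nodes
y_train = np.array([1, 1, 0, 0])                                           # toy labels

lr = LogisticRegression().fit(X_train, y_train)
w, b = lr.coef_[0], lr.intercept_[0]

# Distance of each training point to the hyperplane w.x + b = 0, cf. (4)
dist_to_boundary = np.abs(X_train @ w + b) / np.linalg.norm(w)
print(dist_to_boundary)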

k-Nearest Neighbour (kNN) Models are captured as:

ŷ(x) = (1/k) Σ_{xᵢ ∈ Nₖ(x)} yᵢ   (5)

where Nₖ(x) is the neighbourhood of x defined by the k closest data points xᵢ in the training sample and yᵢ are the respective responses of those points.

k-NN Decision Boundary for the data points belonging to the positive / negative class is computed by elaborating the convex hull [\citeauthoryearChazelle1993] of all points in the respective class. Given a dataset of n points in d dimensions, the convex hull, also known as the smallest convex envelope that contains all the points, can be efficiently computed in O(n log n) time for d ≤ 3. However, in the case of high dimensions, the worst-case complexity becomes O(n^⌊d/2⌋). Moreover, for many common input distributions, the expected size of the convex hull is sublinear in n [\citeauthoryearDwyer1988]. In practice, it is therefore reasonable to use an approximation of the convex hull for high-dimensional datasets, for instance the algorithm proposed in [\citeauthoryearSartipizadeh and Vincent2016], whose time complexity is insensitive to d and where the size of the approximate convex hull is user-specified.
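A minimal sketch of the exact hull computation with scipy follows; the random 3-D points stand in for the training points of one class, and for truly high-dimensional data an approximate hull such as the one cited above would be substituted.

import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))       # toy 3-D data for one class

hull = ConvexHull(points)
boundary_points = points[hull.vertices]  # hull vertices = extreme points of the class
print(len(boundary_points), "of", len(points), "points lie on the hull")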

We will use a single dataset throughout this paper as an example to illustrate how the proposed approach is applied to a classifier to obtain explanations.

Example 2.

(k-NN Decision Boundary)
Among the 306 data points of the Haberman’s Survival dataset, a random point is chosen to be predicted, while the remaining points are used as the training sample. The convex hull, which serves as the decision boundary of the model and is computed from all the training data points, consists of 42 points, shown as blue triangles in Figure 1. The red cross-squared point in the lower left is the test data point, and the remaining training data points, shown as yellow circles, are enclosed by the convex hull.

Figure 1: k-NN Decision Boundary.

4.2 Representative Data via Decision Boundary

We illustrate our approach on identifying representative data points via the decision boundary. The set of representative data points, X_R, is computed from the decision boundary of the model, reflecting the extreme cases, and from the neighbours of the test data point, reflecting the local context. Technically, we define X_R = X_b ∪ X_n, where X_b contains points near the decision boundary and X_n contains neighbours of the test point, and we compute the two subsets separately. To distinguish between the different classifiers, these set notations carry superscripts accordingly (e.g., for LR and k-NN). By combining both types of data points, we aim at identifying elements for explanations that meet the objectives above.

LR Representative Data Points Consider a test data point x predicted to be of class ℓ. The boundary set X_b is constructed by selecting data points that have a (standard Euclidean) distance to the decision boundary, i.e., a hyperplane, within a certain threshold ε_b. We also consider the proximity between the training data points and the test data point x: neighbouring points of x will be included, denoted X_n. The distance between a neighbour and x should be within the threshold ε_n.

The set X_b is obtained as follows. All instances on or close to the decision boundary are potential elements of X_b, and there is a tradeoff between the size of X_b and its representativeness. To balance this tradeoff, we spread the selected points over the feature that has the largest variance among all features (or the most important feature, if feature importance is available from the model). First, all data points that are close enough (determined by the threshold ε_b) to the decision boundary are collected. In the context of LR, we have:

X_b = { xᵢ ∈ X : f(xᵢ) = ℓ, |β₀ + β₁xᵢ₁ + ⋯ + βₙxᵢₙ| / ‖β‖ ≤ ε_b }   (6)
X_n = { xᵢ ∈ X : f(xᵢ) = ℓ, ‖xᵢ − x‖ ≤ ε_n }   (7)

Note that the class labels of the points in X_b and X_n must be the same as that of the input x, as enforced in (6) and (7). Finally, we obtain the set of representative points by combining the two sets: X_R = X_b ∪ X_n. It also follows that an arbitrary training data point is selected by σ if it belongs to X_R, and discarded otherwise. The definition of X_n in the k-NN case is the same, and is thus omitted.
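A sketch of this selection, under the notation introduced above, is given below; the threshold values eps_b and eps_n and the helper name are our own, and the label constraint of (6)–(7) is applied explicitly.

import numpy as np

def lr_representative_points(X, y, w, b, x_test, label, eps_b=0.5, eps_n=1.0):
    dist_boundary = np.abs(X @ w + b) / np.linalg.norm(w)  # distance to the hyperplane, cf. (6)
    dist_test = np.linalg.norm(X - x_test, axis=1)         # distance to the test point, cf. (7)
    same_label = (y == label)
    X_b = X[same_label & (dist_boundary <= eps_b)]         # boundary evidence
    X_n = X[same_label & (dist_test <= eps_n)]             # local evidence
    return np.unique(np.vstack([X_b, X_n]), axis=0)        # X_R = X_b ∪ X_n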

k-NN Representative Data Points. The local neighbours in k-NN, represented as X_n, are computed in the same manner as in (7). A simple version of X_b is defined as follows:

X_b = the set of vertices of the convex hull of { xᵢ ∈ X : f(xᵢ) = ℓ }   (8)

Observe that X_b might contain, in the worst-case scenario and particularly for high-dimensional data, exactly all points of class label ℓ on the convex hull. Therefore the set of representative data points can be large. Its size can be further reduced by selecting points on the decision boundary. To this end, we consider how these points spread over the decision boundary and aim to sample points that best represent it.

To achieve this, we consider the feature that has the largest variance. Data points in X_b are linearly projected onto this dimension. For each of a number of equally-spaced values on this dimension, a random sampling is performed on the data points in X_b whose projected values are close to that value. Ultimately, a small set of data points is selected for each value. This way, the representative data points will be spread over the decision boundary. Alternatively, data points can be obtained by iteratively using the features ranked by variance or any other metric (such as feature importance); this would work like a k-dimensional tree, recursing until a single data point can be found, in which case no random sampling is required.

The final step is to weigh the points in X_b with respect to a threshold value. The rationale is that contour points closer to the test data point x are more useful in explaining the prediction for x. The weighting can be achieved using the distance between the points and x:

(9)

where each selected point and x are compared via their values projected onto the chosen feature.
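The sketch below illustrates the sampling and weighting just described: hull vertices of the predicted class are binned along the highest-variance feature, one point is sampled per bin, and each sampled point is weighted by its projected distance to the test point. Since (9) is not reproduced above, the inverse-distance weight is only one plausible choice, and the bin count is arbitrary.

import numpy as np
from scipy.spatial import ConvexHull

def knn_boundary_evidence(X_class, x_test, n_bins=8, seed=0):
    rng = np.random.default_rng(seed)
    hull_pts = X_class[ConvexHull(X_class).vertices]       # decision-boundary points, cf. (8)
    j = np.argmax(X_class.var(axis=0))                     # feature with the largest variance
    edges = np.linspace(hull_pts[:, j].min(), hull_pts[:, j].max(), n_bins + 1)
    sampled = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = hull_pts[(hull_pts[:, j] >= lo) & (hull_pts[:, j] <= hi)]
        if len(in_bin):
            sampled.append(in_bin[rng.integers(len(in_bin))])   # one random point per bin
    sampled = np.array(sampled)
    weights = 1.0 / (1.0 + np.abs(sampled[:, j] - x_test[j]))   # closer to x => larger weight
    return sampled, weights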

Uniform and Contrastive Explanations. Computing representative data points depends on the input class label, e.g., the predicted label ℓ in the previous descriptions. We compute not only representative points that are of the same class label ℓ, but also points that are predicted with the other class label. By the second observation in the introduction, humans need to see more than just uniform explanations; they also need contrastive explanations, i.e., answers to why the input is not labeled otherwise. For binary classification, we define the class predicted for the input to be the positive class and the other to be the negative class. The uniform and contrastive explanations are, for simplicity, called positive and negative explanations, respectively. Consequently, the representative data points with respect to ℓ are the positive points, and the negative points can be computed analogously: we just need to swap the class labels in (6), (7), and (8). The two sets of data points thus serve as positive and negative evidences for explaining why x is of class ℓ.

Example 3.

(k-NN Representative Data Points)
Figure 2 shows the various steps of representative data point discovery. Assume the test data point is predicted to be in the positive class and no more than 8 points are to be chosen in each step.

Figure 2(a) first shows some points with positive labels that spread over the convex hull (decision boundary). Note that there are 42 data points in the convex hull, but only 8 points are sampled from those 42 to approximate the convex hull, based on the spread feature, the age of patients. These 8 points are considered to be the uniform evidences, i.e., they form the weighted set given in (9).

Figure 2(b) then further shows the neighbouring points (positive local evidences) that are also in the positive class. These points, denoted by plus signs, form the set X_n.

After collecting all the positive evidences, Figures 2(c) and 2(d) show the additional data points in the negative class (contrastive information) based on the convex hull and the local neighborhood, which form the negative extreme and local evidences, respectively.

Figure 2(d) gives a nice visualisation of our representative selection idea: extreme evidences are spread globally, while local evidences gather around the test data point. In addition, negative evidences appear visually more distant from the test point than positive ones.

(a) Extreme Positive
(b) Local Positive
(c) Extreme Negative
(d) Local Negative
Figure 2: k-NN Representative Data Points. The cross-squared point is the test point.

5 Explaining Predictions

Definition 1 provides the basis for constructing informative explanations. To design the explanation algorithm, a few prerequisites need to be elaborated. In particular, the role of a domain knowledge base (or ontology) is indispensable in that semantic abstraction of data points is drawn from the knowledge base.

Context: The explanation algorithm takes as input a classifier f, the set of training data points X, the input data point x, a domain ontology O, and the semantic uplift function μ given in Definition 1. From f, X, and x, (6–7) and (8–9) provide a way to compute two sets of data points (evidences) based on the decision boundaries of f. The positive and negative evidences, denoted X⁺ and X⁻ respectively, can then be used to drive the extraction of relevant information for explanation. We discuss how an ontology can be used to abstract the semantics of these data points, which is leveraged to compute explanations. Our approach can be applied independently to X⁺ and X⁻; thus, the general notation X_R denotes either set.

Notations: To ease the presentation, X_R is given as a matrix-like structure of size m × (n+1), where m = |X_R| and there are n predictors. A single row in X_R is a data point, represented as a set of feature-value pairs together with a weight computed from (9), as illustrated in (10).

(10)

Semantic Uplift of Data Points: There has been much prior work on deducing concepts from relational-style data, and we build on it to uplift the data semantically. For each data point in X_R, we aim at finding its Basic-level Categorization [\citeauthoryearWang et al.2015a], denoted blc, with respect to a domain ontology O. Categorization is achieved in two steps. First, concepts are identified in a large knowledge graph, i.e., a graph dominated by instance and is-a relationships such as DBpedia [\citeauthoryearLehmann et al.2015] and the Microsoft Concept Graph [\citeauthoryearWu et al.2012], following [\citeauthoryearWang et al.2015b]. Then a mapping step from these concepts to concepts in the domain ontology O is required to contextualize the categorization in the targeted domain. In other words, a concept is identified such that:

(11)

where the mapping function maps knowledge-graph concepts to the domain ontology O. The concept mapping follows [\citeauthoryearEhrig and Staab2004], where both syntactic and semantic similarities (distances among similar concepts) are considered. Therefore the semantic uplift function μ in Definition 1 is computed as the composition of this mapping function and blc. We adopted a two-step process to ensure maximum coverage of X_R in O: a more direct approach from the raw data to O could result in no mapping, and hence no semantic association, for some data points. The knowledge graph layer provides a much larger input set to be mapped into O, and thus a better semantic coverage for X_R.
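As a purely illustrative sketch of this uplift on the Haberman’s Survival features, the lookup below maps a patient’s age and operation year to a generation concept; the hard-coded birth-year ranges stand in for the knowledge-graph lookup and ontology-mapping steps described above and only approximate Wikipedia’s list of generations.

def uplift_age(age, operation_year):
    # Map (age, operation year) to a generation concept; years are two-digit in the dataset.
    birth_year = 1900 + operation_year - age
    generations = [                       # approximate, illustrative concept boundaries
        (1901, 1927, "GIGeneration"),
        (1928, 1945, "TheSilentGeneration"),
        (1946, 1964, "BabyBoomers"),
    ]
    for start, end, concept in generations:
        if start <= birth_year <= end:
            return concept
    return None                           # no semantic association found

print(uplift_age(age=30, operation_year=60))   # -> "TheSilentGeneration"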

Representation of Data Points: Since not all feature-value pairs can be automatically matched with concepts in O, we differentiate the semantic and non-semantic parts. To this end we assume the final representation of a data point can be defined as two components, which are the projections of the point onto the feature-value pairs that cannot be matched with ontological concepts and onto the semantic counterpart, respectively.

(12)

Projections are applied to each row in X_R, and the concept components of all rows form a weighted set of concepts that serves as the input to our explanation approach (cf. Algorithm 1).

(13)

Note that for duplicate concepts across data points, the weights of these concepts are accumulated accordingly. For the positive data points X⁺, the resulting set of concepts is denoted C⁺; similarly, C⁻ is computed for the negative data points X⁻.
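A small sketch of this weight accumulation follows; the concept names and weights are illustrative.

from collections import defaultdict

def weighted_concepts(uplifted_rows):
    # uplifted_rows: iterable of (set of concepts, weight) pairs, one per data point
    totals = defaultdict(float)
    for concepts, weight in uplifted_rows:
        for concept in concepts:
            totals[concept] += weight     # duplicate concepts accumulate weight
    return dict(totals)

positive_rows = [({"TheSilentGeneration", "OperationIn1960s"}, 0.8),
                 ({"TheSilentGeneration", "OnePosAxillaryNode"}, 0.5)]
print(weighted_concepts(positive_rows))   # TheSilentGeneration accumulates 1.3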

Example 4.

(Semantic Uplift)
The features of the Haberman’s Survival dataset have a natural categorization of values using Wikipedia as in (11). For instance, TheSilentGeneration denotes people born between 1928 and 1945. A patient classified as one who survived 5 years or longer is described as (14), and as (15) after semantic encoding of the data point. Note that the study was conducted between 1958 and 1970, which, together with a patient’s age, determines the birth year and hence the generation of the patient.

(14)
(15)

Semantic representations of human generations have been extracted from Wikipedia (https://en.wikipedia.org/wiki/Generation#List_of_generations) for the appropriate semantic mapping.

Explanation Concept Completion The input concepts given in (13) are not necessarily easy to understand, for two reasons: (a) they tend to be loosely connected to each other, as not all feature-value pairs can be semantically uplifted; (b) these concepts may be data-specific due to the semantic uplift, so humans may not understand such low-level concepts well. To address these issues, we show how to introduce more human-comprehensible and semantically connected concepts from the ontology. Furthermore, to optimize the completion process and to ensure the final explanation concepts are succinct, the following constraints are stipulated:

  • Minimize the size of the output concept set for succinct explanations.

  • Maximize the number of matchings between input concepts and ontological concepts.

  • Maximize the total weight of the matchings.

Note that the concepts in the input are weighted. To fully leverage the semantics of concepts, the structure of O is used to find concepts that can abstract the input concepts. Our graph-based traversal requires the following notions for defining relationships among concepts.

Definition 2.

(Distance between Concepts)
Suppose O is an ontology represented as a graph G. Given two weighted concepts C₁ and C₂ over the graph G, the distance between C₁ and C₂ is defined to be the minimum length of a path from C₁ to C₂.

Definition 3.

(Concept Matching)
Let a mapping be a partial function over the concepts of G, which defines a set of matchings between concepts in G. A matching from a concept C₁ to a concept C₂ is the shortest path, following any labelled edges in G, from C₁ to C₂.

Note that each edge on the path of a matching carries a weight that denotes the semantic relatedness between the concepts it connects. For a matching of distance one, an aggregated weight can be computed: for a concept matched from several concepts in the input, each of which has a weight, the weights are aggregated over the corresponding edges. In the initial case, i.e., for the concepts in (13), the weights are those computed from the data points.
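A sketch of these two notions over the ontology graph of Section 3.1 is given below; the exact aggregation used in the paper is not reproduced, so the weighted sum here is only one plausible choice.

import networkx as nx

def concept_distance(onto, c1, c2):
    # Definition 2: minimum path length between two concepts
    return nx.shortest_path_length(onto, source=c1, target=c2)

def aggregated_weight(onto, target, weighted_inputs):
    # Aggregated weight of a distance-one matching: combine the weights of the
    # matched input concepts with the corresponding edge weights (assumed choice).
    total = 0.0
    for concept, w in weighted_inputs.items():
        if onto.has_edge(concept, target):
            total += w * onto[concept][target].get("weight", 1.0)
    return total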

Following Definitions 2 and 3, we are ready to instantiate the constraints above. Let the number of hops between two nodes in G and the weight of a concept be as defined above. The algorithm aims to find a set of output concepts defined as follows:

(16)

For tractable computation, our algorithm uses random hill climbing to find this set, as discussed in the next section.

5.1 Computing Informative Explanations

The main algorithm, Algorithm 1, computes the explanation concept completion based on (16).

Input: The input to the algorithm includes the set of weighted concepts from (13), an ontology O represented as a graph, a given integer d to restrict the depth for traversing O, and two additional control parameters: r, the number of random restarts, and t, the number of concepts to keep at each expansion step.

Input : C_R (the weighted concepts from (13)), O, d, r, t
Output : O*
1 O* ← ∅;
2 sort C_R decreasingly by weight;
3 for each concept C in C_R do
4       if there is a matching from C to another concept in C_R then
5             remove C from C_R;
6       end if
7 end for
8 w* ← 0;
9 for i ← 1 to r do
10       S ← a random subset of C_R;
11       E ← S;
12       while S ≠ ∅ and the traversal depth is at most d do
13             M ← concepts in O with a matching of distance 1 from at least two concepts in S;
14             sort M by aggregated weight and keep the first t concepts;
15             S ← M;
16             if F(S) > F(E) then E ← S;
17       end while
18      if F(E) > w* then O* ← E; w* ← F(E);
19 end for
Algorithm 1 The Algorithm for Explanation Concept Completion. We define F to be the objective function in (16).

Algorithm 1: Line 2 sorts the concepts decreasingly by weight. Lines 3-7 remove concepts that are subsumed by other concepts in the input because the purpose of our algorithm is to uplift specific concepts into more general ones. The loop in lines 9-19 is a random-restart step that reduces the exponential search space of subsets of the input to r restarts. Here, r specifies the number of restarts desired. Line 10 obtains a random subset of the input, which is then used to find matching successors in O as shown in lines 12-17. Line 13 collects matching concepts that can match at least two different concepts so as to further limit the search space. There is also an implicit condition to ensure correctness: only concepts that have never been collected before in the current restart are added, due to possible cycles in the ontological graph. Line 14 first sorts the matching concepts and then picks the first t. Here t specifies the number of concepts to be chosen for the next matching step; this step is necessary as potentially all concepts in O could be collected, so a constant t significantly reduces the search space. The output is the set of matching concepts that maximizes the weight.
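The sketch below (not the authors’ implementation) follows the description above: concepts are sorted by weight, random subsets seed each restart, distance-one matchings that cover at least two concepts are expanded up to the depth limit, and the best-scoring set is kept. The random-subset size, the edge-weight aggregation, and the objective F (a stand-in for (16), rewarding matched weight and penalizing size) are our own choices; the ontology is assumed to be a networkx-style directed graph.

import random

def F(concept_weights, alpha=0.1):
    # Stand-in objective for (16): total matched weight minus a size penalty.
    return sum(concept_weights.values()) - alpha * len(concept_weights)

def complete_concepts(onto, input_weights, depth=3, restarts=10, top_t=5, seed=0):
    rng = random.Random(seed)
    ordered = sorted(input_weights, key=input_weights.get, reverse=True)   # line 2
    best, best_score = dict(input_weights), F(input_weights)
    for _ in range(restarts):                                              # lines 9-19
        subset = rng.sample(ordered, k=max(1, len(ordered) // 2))          # line 10
        frontier = {c: input_weights[c] for c in subset}
        seen = set(frontier)
        for _ in range(depth):
            agg, hits = {}, {}
            for c, w in frontier.items():                                  # line 13
                for succ in (onto.successors(c) if c in onto else []):
                    edge_w = onto[c][succ].get("weight", 1.0)
                    agg[succ] = agg.get(succ, 0.0) + w * edge_w
                    hits[succ] = hits.get(succ, 0) + 1
            cands = [c for c in agg if hits[c] >= 2 and c not in seen]
            cands.sort(key=agg.get, reverse=True)                          # line 14
            frontier = {c: agg[c] for c in cands[:top_t]}
            seen |= set(frontier)
            if not frontier:
                break
            if F(frontier) > best_score:                                   # keep the best set
                best, best_score = frontier, F(frontier)
    return set(best)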

Contraction Applying Algorithm 1 to the positive concepts C⁺ results in the uniform group of explanation concepts G⁺. For contrast, we also apply Algorithm 1 to C⁻ and obtain G⁻. The two sets of concepts form the basis to provide both uniform and contrastive explanations, each set being a group of explanations in that class. Note that in the case of multi-class classification, there will be many groups of contrastive explanations. To avoid excessive contrastive evidences, it is reasonable to restrict the groups of contrastive explanations to one or two. This can be realized by selecting the next one or two most probable class labels predicted by the classifier.

Now consider the binary classification case. The uniform explanations may contain knowledge already entailed by the contrastive explanations. It makes sense to keep only the essential information in the uniform explanations for succinctness. As an example, assume the uniform explanations have only one concept and the contrastive explanations have another concept that subsumes it w.r.t. a common-sense ontology; a more informative uniform explanation would then be the difference between the two concepts rather than the original concept.

For this purpose, concept difference [\citeauthoryearBaader et al.2003] is used to find the concepts that entail some positive concept but not any of the negative concepts. Given concepts from the uniform and contrastive groups G⁺ and G⁻, respectively, the difference between two concepts is computed as follows:

(17)

The subsumption relation may introduce many unseen ontological concepts as difference concepts. To select the useful difference concepts, these concepts are ranked according to a weight that indicates how closely a difference concept is semantically related to the concepts found in the data. We define the importance of each difference concept to be:

(18)

where the normalizing factor is the size of the input concept set. The final set of difference concepts used to replace the group G⁺, subject to a defined importance threshold, is:

(19)
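Since the concrete forms of (17)-(19) are not reproduced above, the sketch below only illustrates the importance-based filtering: a difference concept’s importance is taken as the weight of the input concepts connected to it in the ontology graph, normalized by the size of the input, and a threshold decides which difference concepts to keep. All of these modelling choices are our own.

import networkx as nx

def important_differences(onto, diff_concepts, input_weights, threshold=0.3):
    kept = []
    for d in diff_concepts:
        related = [c for c in input_weights
                   if c in onto and d in onto and nx.has_path(onto, c, d)]
        importance = sum(input_weights[c] for c in related) / max(1, len(input_weights))
        if importance >= threshold:
            kept.append((d, importance))
    return kept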

Ranking The grouping step generates two groups, G⁺ and G⁻. Recall that humans select explanations relevant to them. To allow for human subjectivity, we associate all concepts with a rank order within the same group such that concepts across different groups with the same rank order represent a single rational explanation. We show one possible method to compute the rank orders (a sketch follows the steps below):

  1. A dense rank is computed for the difference concepts based on the importance defined in (18).

  2. For any contrastive concept used in computing a difference concept in (19), its rank order is the same as that of the majority of the resulting difference concepts. Note that a single concept may be used to compute many difference concepts.

  3. For any contrastive concept not used in (19), its rank order is the same as the majority order of the most similar difference concepts. Note that an upper-limit similarity threshold is set to ensure concepts are sufficiently similar to some difference concepts; otherwise, the next step is required.

  4. Otherwise, the rank order of such a concept is the next order after the lowest order assigned so far.

After ranking, concepts of the same order from both the uniform and contrastive explanations form an explanation of that rank order. Human users can choose, among these succinct, informative explanations, the ones that they believe to be most relevant using the rank order.
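A small sketch of the dense-rank step and the majority-rank propagation is given below; the tie-breaking and the fallback behaviour are our own choices, and the example importances are made up.

from collections import Counter

def dense_rank(importance):
    # importance: {difference concept: importance}; rank 1 = most important (Step 1)
    distinct = sorted(set(importance.values()), reverse=True)
    return {c: distinct.index(v) + 1 for c, v in importance.items()}

def propagate_rank(neg_concept, used_in, diff_ranks, fallback_rank):
    # Step 2: majority rank of the difference concepts this contrastive concept helped
    # compute; otherwise fall back to the next order after the lowest one (Step 4).
    ranks = [diff_ranks[d] for d in used_in.get(neg_concept, []) if d in diff_ranks]
    if ranks:
        return Counter(ranks).most_common(1)[0][0]
    return fallback_rank

diff_ranks = dense_rank({"OperationIn1960s": 0.6, "TheSilentGeneration": 0.4})
print(diff_ranks)   # -> {'OperationIn1960s': 1, 'TheSilentGeneration': 2}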

Example 5.

(Informative Explanations)
Consider the test data point given in (20), representing an individual whose operation occurred in the 1960s and who had one positive axillary node detected.

(20)

Although concepts are not drawn from the test point itself, it is easy to see that the test point is also in "TheSilentGeneration", had "OperationIn1960s," and had "OnePosAxillaryNode."

The objective is to understand why such an individual is classified as positive, i.e., survived 5 years or longer. The concepts from the representative data points fall into positive (21) and negative (22) groups. Computing the concept difference in this example discards the concept “NoPosAxillaryNode”, though it does not introduce new concepts (23):

(21)
(22)
(23)

The weights are computed during the semantic uplift and also reflect the proportion of data points that share the concepts, cf. (18); e.g., among the positive data points there are more patients in the Silent Generation, while the G. I. Generation is the majority among the negative data points.

The final explanations were then ranked within each group.
The rank orders of the concepts in the contrastive explanations are derived from their semantic similarity to the only uniform explanation concept, as given in Steps 3 and 4 of the ranking process.

6 Conclusions and Future Work

Our approach, exploiting the semantics of data points, tackles the problem of explaining predictions in an informative manner to human users. Semantic reasoning and machine learning are combined by revisiting decision boundaries and their representative elements as semantic characteristics of explanations. Such characteristics are then leveraged to derive informative explanations with respect to domain ontologies. The core contributions of our approach include its ability to capture interesting data points that exhibit extremity (in the form of decision boundaries) and the local context of the test data points (in the form of neighborhoods), and its manipulation of semantics to enhance informativeness for human-centric explanations.

To generalize our approach to multi-label classifiers, the key difference lies in the computation of the positive and negative evidences. For a particular test point, all the predicted labels are considered to be positive classes. Computing the positive evidences is equivalent to computing the union of the evidences for each positive class. The negative evidences can be computed for each negative class separately, so the contrastive evidences are for all positive labels against each and every negative label. However, we advise choosing only the top few negative classes (by user specification or some predefined popularity metric of classes).

There are several extensions to be considered in our future work. First, explanation relevancy can be improved by considering user profiles, instead of allowing for user choices. Second, we will investigate other types of machine learning models, for instance, random forest classifiers.

References

  • [\citeauthoryearBaader et al.2003] Baader, F.; Calvanese, D.; McGuinness, D. L.; Nardi, D.; and Patel-Schneider, P. F., eds. 2003. The Description Logic Handbook: Theory, Implementation, and Applications. New York, NY, USA: Cambridge University Press.
  • [\citeauthoryearCaruana et al.2015] Caruana, R.; Lou, Y.; Gehrke, J.; Koch, P.; Sturm, M.; and Elhadad, N. 2015. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In SIGKDD 2015, Australia, August 10-13, 2015, 1721–1730.
  • [\citeauthoryearChazelle1993] Chazelle, B. 1993. An optimal convex hull algorithm in any fixed dimension. Discrete & Computational Geometry 10(4):377–409.
  • [\citeauthoryearCraven and Shavlik1995] Craven, M. W., and Shavlik, J. W. 1995. Extracting tree-structured representations of trained networks. NIPS’95, 24–30. Cambridge, MA, USA: MIT Press.
  • [\citeauthoryearDwyer1988] Dwyer, R. A. 1988. Average-case Analysis of Algorithms for Convex Hulls and Voronoi Diagrams. Ph.D. Dissertation, Pittsburgh, PA, USA. Order No. GAX88-17713.
  • [\citeauthoryearEhrig and Staab2004] Ehrig, M., and Staab, S. 2004. Qom-quick ontology mapping. In International Semantic Web Conference, volume 3298, 683–697. Springer.
  • [\citeauthoryearGoodman and Flaxman2016] Goodman, B., and Flaxman, S. 2016. EU regulations on algorithmic decision-making and a ”right to explanation”. CoRR abs/1606.08813.
  • [\citeauthoryearHilton1990] Hilton, D. J. 1990. Conversational processes and causal explanation. Psychological Bulletin 65–81.
  • [\citeauthoryearKim, Shah, and Doshi-Velez2015] Kim, B.; Shah, J. A.; and Doshi-Velez, F. 2015. Mind the gap: A generative approach to interpretable feature selection and extraction. In NIPS 2015, December 7-12, 2015, Montreal, Quebec, Canada, 2260–2268.
  • [\citeauthoryearLehmann et al.2015] Lehmann, J.; Isele, R.; Jakob, M.; Jentzsch, A.; Kontokostas, D.; Mendes, P. N.; Hellmann, S.; Morsey, M.; van Kleef, P.; Auer, S.; and Bizer, C. 2015. Dbpedia - A large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web 6(2):167–195.
  • [\citeauthoryearLei, Barzilay, and Jaakkola2016] Lei, T.; Barzilay, R.; and Jaakkola, T. S. 2016. Rationalizing neural predictions. In EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, 107–117.
  • [\citeauthoryearLi et al.2016] Li, J.; Chen, X.; Hovy, E. H.; and Jurafsky, D. 2016. Visualizing and understanding neural models in NLP. In NAACL HLT 2016, San Diego California, USA, June 12-17, 2016, 681–691.
  • [\citeauthoryearLipton1991] Lipton, P. 1991. Contrastive explanation and causal triangulation. Philosophy of Science 58(4):687–697.
  • [\citeauthoryearLipton2016] Lipton, Z. C. 2016. The mythos of model interpretability. In ICML Workshop on Human Interpretability of Machine Learning.
  • [\citeauthoryearLou, Caruana, and Gehrke2012] Lou, Y.; Caruana, R.; and Gehrke, J. 2012. Intelligible models for classification and regression. In SIGKDD ’12, Beijing, China, August 12-16, 2012, 150–158.
  • [\citeauthoryearMiller2017] Miller, T. 2017. Explanation in artificial intelligence: Insights from the social sciences. CoRR abs/1706.07269.
  • [\citeauthoryearMnih et al.2015] Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S.; and Hassabis, D. 2015. Human-level control through deep reinforcement learning. Nature 518(7540):529–533.
  • [\citeauthoryearBiran and McKeown2017] Biran, O., and McKeown, K. 2017. Human-centric justification of machine learning predictions. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 1461–1467.
  • [\citeauthoryearPalczewska et al.2013] Palczewska, A.; Palczewski, J.; Robinson, R. M.; and Neagu, D. 2013. Interpreting random forest models using a feature contribution method. In IRI 2013, San Francisco, CA, USA, August 14-16, 2013, 112–119.
  • [\citeauthoryearPang et al.2006] Pang, H.; Lin, A.; Holford, M.; Enerson, B. E.; Lu, B.; Lawton, M. P.; Floyd, E.; and Zhao, H. 2006. Pathway analysis using random forests classification and regression. Bioinformatics 22(16):2028–2036.
  • [\citeauthoryearRibeiro, Singh, and Guestrin2016] Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. ”why should i trust you?”: Explaining the predictions of any classifier. In SIGKDD 2016, KDD ’16, 1135–1144. New York, NY, USA: ACM.
  • [\citeauthoryearSalimans et al.2016] Salimans, T.; Goodfellow, I. J.; Zaremba, W.; Cheung, V.; Radford, A.; and Chen, X. 2016. Improved techniques for training gans. In NIPS2016, December 5-10, 2016, Barcelona, Spain, 2226–2234.
  • [\citeauthoryearSartipizadeh and Vincent2016] Sartipizadeh, H., and Vincent, T. L. 2016. Computing the approximate convex hull in high dimensions.
  • [\citeauthoryearShmueli and others2010] Shmueli, G., et al. 2010. To explain or to predict? Statistical science 25(3):289–310.
  • [\citeauthoryearTemple1988] Temple, D. 1988. The contrast theory of why-questions. Philosophy of Science 55(1):141–151.
  • [\citeauthoryearTintarev and Masthoff2007] Tintarev, N., and Masthoff, J. 2007. Effective explanations of recommendations: User-centered design. RecSys ’07, 153–156. New York, NY, USA: ACM.
  • [\citeauthoryearVedantam et al.2017] Vedantam, R.; Bengio, S.; Murphy, K.; Parikh, D.; and Chechik, G. 2017. Context-aware captions from context-agnostic supervision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • [\citeauthoryearWang et al.2015a] Wang, Z.; Wang, H.; Wen, J.; and Xiao, Y. 2015a. An inference approach to basic level of categorization. In CIKM 2015, Melbourne, VIC, Australia, October 19 - 23, 2015, 653–662.
  • [\citeauthoryearWang et al.2015b] Wang, Z.; Zhao, K.; Wang, H.; Meng, X.; and Wen, J. 2015b. Query understanding through knowledge-based conceptualization. In IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, 3264–3270.
  • [\citeauthoryearWang et al.2016] Wang, T.; Rudin, C.; Doshi-Velez, F.; Liu, Y.; Klampfl, E.; and MacNeille, P. 2016. Bayesian rule sets for interpretable classification. In IEEE 16th International Conference on Data Mining, ICDM 2016, December 12-15, 2016, Barcelona, Spain, 1269–1274.
  • [\citeauthoryearWu et al.2012] Wu, W.; Li, H.; Wang, H.; and Zhu, K. Q. 2012. Probase: a probabilistic taxonomy for text understanding. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2012, Scottsdale, AZ, USA, May 20-24, 2012, 481–492.
  • [\citeauthoryearWu et al.2016] Wu, Y.; Schuster, M.; Chen, Z.; Le, Q. V.; Norouzi, M.; Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K.; Klingner, J.; Shah, A.; Johnson, M.; Liu, X.; Kaiser, L.; Gouws, S.; Kato, Y.; Kudo, T.; Kazawa, H.; Stevens, K.; Kurian, G.; Patil, N.; Wang, W.; Young, C.; Smith, J.; Riesa, J.; Rudnick, A.; Vinyals, O.; Corrado, G.; Hughes, M.; and Dean, J. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144.