A Symbolic Approach to Explaining Bayesian Network Classifiers


Andy Shih    Arthur Choi    Adnan Darwiche
Computer Science Department
University of California, Los Angeles
{andyshih,aychoi,darwiche}@cs.ucla.edu
Abstract

We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form. We introduce two types of explanations for why a classifier may have classified an instance positively or negatively and suggest algorithms for computing these explanations. The first type of explanation identifies a minimal set of the currently active features that is responsible for the current classification, while the second type of explanation identifies a minimal set of features whose current state (active or not) is sufficient for the classification. We consider in particular the compilation of Naive and Latent-Tree Bayesian network classifiers into Ordered Decision Diagrams (ODDs), providing a context for evaluating our proposal using case studies and experiments based on classifiers from the literature.


1 Introduction

Recent progress in artificial intelligence and the increased deployment of AI systems have highlighted the need for explaining the decisions made by such systems, particularly classifiers; see, e.g., [????].¹ For example, one may want to explain why a classifier decided to turn down a loan application, rejected an applicant for an academic program, or recommended surgery for a patient. Answering such why? questions is particularly central to assigning blame and responsibility, which lies at the heart of legal systems and may be required in certain contexts.²

¹It is now recognized that opacity, or lack of explainability, is "one of the biggest obstacles to widespread adoption of artificial intelligence" (The Wall Street Journal, August 10, 2017).

²See, for example, the EU general data protection regulation, which has a provision relating to explainability, https://www.privacy-regulation.eu/en/r71.htm.

In this paper, we propose a symbolic approach to explaining Bayesian network classifiers, which is based on the following observation. Consider a classifier that labels a given instance either positively or negatively based on a number of discrete features. Regardless of how this classifier is implemented, e.g., using a Bayesian network, it specifies a symbolic function that maps features into a yes/no decision (yes for a positive instance). We refer to this function as the classifier's decision function since it unambiguously describes the classifier's behavior, independently of how the classifier is implemented. Our goal is then to obtain a symbolic and tractable representation of this decision function, to enable symbolic and efficient reasoning about its behavior, including the generation of explanations for its decisions. In fact, [Chan and Darwiche, 2003] showed how to compile the decision functions of naive Bayes classifiers into a specific symbolic and tractable representation, known as Ordered Decision Diagrams (ODDs). This representation extends Ordered Binary Decision Diagrams (OBDDs) to use multi-valued variables (discrete features), while maintaining the tractability and properties of OBDDs [Bryant, 1986; Meinel and Theobald, 1998; Wegener, 2000].

We show in this paper how compiling decision functions into ODDs can facilitate the efficient explanation of classifiers and propose two types of explanations for this purpose.

The first class of explanations we consider are minimum-cardinality explanations. To motivate these explanations, consider a classifier that has diagnosed a patient with some disease based on some observed test results, some of which were positive and others negative. Some of the positive test results may not be necessary for the classifier’s decision: the decision would remain intact if these test results were negative. A minimum-cardinality explanation then tells us which of the positive test results are the culprits for the classifier’s decision, i.e., a minimal subset of the positive test results that is sufficient for the current decision.

The second class of explanations we consider are prime-implicant explanations. These explanations answer the following question: what is the smallest subset of features that renders the remaining features irrelevant to the current decision? In other words, which subset of features—when fixed—would allow us to arbitrarily toggle the values of other features, while maintaining the classifier’s decision?

This paper is structured as follows. In Section 2, we review the compilation of naive Bayes classifiers into ODDs and propose a new algorithm for compiling latent-tree classifiers into ODDs. In Section 3, we introduce minimum-cardinality explanations, propose an algorithm for computing them, and provide a case study on a real-world classifier. In Section 4, we do the same for prime-implicant explanations. In Section 5, we discuss the relationship between the two types of explanations and show that they coincide for monotone classifiers. We then discuss related work in Section 6 and conclude in Section 7.

2 Compiling Bayesian Network Classifiers

Figure 1: A naive Bayes classifier, specified using the class prior, in addition to the false positive and false negative rates of the features. The class variable and features are all binary.
Figure 2: An OBDD (decision function) of the classifier in Figure 1.

Consider Figure 1, which depicts a naive Bayes classifier for detecting pregnancy. Given results for the three tests, if the probability of pregnancy passes a given threshold, we obtain a "yes" decision on pregnancy.

Figure 2 depicts the decision function of this classifier, in the form of an Ordered Binary Decision Diagram (OBDD). Given some test results, we make a corresponding decision on pregnancy by simply navigating the OBDD. We start at the root, which is labeled with the Urine (U) test. Depending on the outcome of this test, we follow the edge labeled positive or the edge labeled negative. We repeat this for the test at the next node. Eventually, we reach a leaf node labeled "yes" or "no," which provides the resulting classification.
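To make this navigation concrete, the following is a minimal sketch of classifying an instance by walking such a diagram. The tuple-based node encoding and the second test name ("Blood") are illustrative assumptions of this sketch, not the representation or the figure used in the paper.

```python
# Classify an instance by navigating a decision diagram from the root to a sink.
# Node encoding (an assumption for this sketch):
#   internal node: ("node", feature_name, {feature_value: child, ...})
#   sink:          ("sink", "yes") or ("sink", "no")

def classify(node, instance):
    """Follow the edges labeled by the instance's feature values."""
    while node[0] == "node":
        _, feature, children = node
        node = children[instance[feature]]
    return node[1]  # "yes" or "no"

# A tiny diagram over two hypothetical tests (not the diagram of Figure 2).
yes, no = ("sink", "yes"), ("sink", "no")
blood = ("node", "Blood", {"+": yes, "-": no})
urine = ("node", "Urine", {"+": yes, "-": blood})

print(classify(urine, {"Urine": "-", "Blood": "+"}))  # yes
```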

The decisions rendered by this OBDD are guaranteed to match those obtained from the naive Bayes classifier. We have thus converted a probabilistic classifier into an equivalent classifier that is symbolic and tractable. We will later see how this facilitates the efficient generation of explanations.

We will later discuss compiling Bayesian network classifiers into ODDs, after formally treating classifiers and ODDs.

2.1 Bayesian Network Classifiers

A Bayesian network classifier is a Bayesian network containing a special set of variables: a single class variable C and feature variables X1, …, Xn. The class is usually a root in the network and the features are usually leaves. In this paper, we assume that the class variable is binary, with two values that correspond to positive and negative classes, respectively (i.e., "yes" and "no" decisions). An instantiation e of the feature variables is called an instance. A Bayesian network classifier specifying the probability distribution Pr will classify an instance e positively iff Pr(positive class | e) ≥ T, where T is called the classification threshold.

Definition 1 (Decision Function)

Suppose that we have a Bayesian network classifier with features X1, …, Xn, class variable C and a threshold T. Let f be a function that maps instances e into {+, −}. We say that f is the classifier's decision function iff f(e) = + precisely when Pr(positive class | e) ≥ T.

Instance e is positive if f(e) = + and negative if f(e) = −.
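As an illustration of Definition 1, the following brute-force sketch derives the decision function induced by a small naive Bayes classifier with binary features; the parameters (prior, rates, threshold) and feature names are illustrative, not taken from the paper.

```python
from itertools import product

# A brute-force sketch of Definition 1 for a naive Bayes classifier with binary
# features: an instance is positive iff Pr(class=+ | instance) >= T.
prior = 0.3                    # Pr(class = +)          (illustrative)
fp = {"A": 0.1, "B": 0.2}      # Pr(feature=+ | class=-), false positive rates
fn = {"A": 0.2, "B": 0.1}      # Pr(feature=- | class=+), false negative rates
T = 0.5                        # classification threshold

def posterior(instance):
    """Pr(class=+ | instance) under the naive Bayes model."""
    like_pos, like_neg = prior, 1.0 - prior
    for f, value in instance.items():
        like_pos *= (1.0 - fn[f]) if value == "+" else fn[f]
        like_neg *= fp[f] if value == "+" else (1.0 - fp[f])
    return like_pos / (like_pos + like_neg)

def decision(instance):
    return "+" if posterior(instance) >= T else "-"

# The decision function is the induced mapping from instances to {+, -}.
for values in product("+-", repeat=2):
    e = dict(zip(["A", "B"], values))
    print(e, decision(e))
```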

The naive Bayes classifier is a special type of Bayesian network classifier, where edges extend from the class to the features (there are no other nodes or edges). Figure 1 depicts a naive Bayes classifier. A latent-tree classifier is a tree-structured Bayesian network whose root is the class variable and whose leaves are the features.

2.2 Monotone Classifiers

The class of monotone classifiers is relevant to our discussion, particularly when relating the two types of explanations we shall propose. We will define these classifiers next, while assuming binary features to simplify the treatment. Intuitively, a monotone classifier satisfies the following. A positive instance remains positive if we flip some of its features from − to +. Moreover, a negative instance remains negative if we flip some of its features from + to −.

More formally, consider two instances e and e*. We write e* ⊆ e to mean: the features set to + in e* are a subset of those set to + in e. Monotone classifiers are then characterized by the following property of their decision functions, which is well-known in the literature on Boolean functions.

Definition 2

A decision function f is monotone iff e* ⊆ e implies that f(e) = + whenever f(e*) = +.

One way to read this formal definition is as follows: if the positive features in instance e contain those in instance e*, then instance e must be positive if instance e* is positive.

It is generally difficult to decide whether a Bayesian network classifier is monotone; see, e.g., [van der Gaag et al., 2004]. However, if the decision function of the classifier is an OBDD, then monotonicity can be decided in time quadratic in the OBDD size [Horiyama and Ibaraki, 2002].
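For intuition, the following brute-force sketch checks Definition 2 directly by enumerating all pairs of instances; it is not the quadratic OBDD-based test mentioned above, and is only meant to illustrate the definition.

```python
from itertools import product

def is_monotone(f, n):
    """Brute-force check of Definition 2: f maps a tuple of n bits to 0/1,
    and is monotone iff x <= y (featurewise) implies f(x) <= f(y)."""
    instances = list(product((0, 1), repeat=n))
    for x in instances:
        for y in instances:
            if all(xi <= yi for xi, yi in zip(x, y)) and f(x) > f(y):
                return False
    return True

# Example: a threshold function (monotone) versus parity (not monotone).
print(is_monotone(lambda x: int(sum(x) >= 2), 3))   # True
print(is_monotone(lambda x: sum(x) % 2, 3))         # False
```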

2.3 Ordered Decision Diagrams

An Ordered Binary Decision Diagram (OBDD) is based on an ordered set of binary variables X1, …, Xn. It is a rooted, directed acyclic graph with two sinks, called the 1-sink and the 0-sink. Every node in the OBDD (except the sinks) is labeled with a variable and has two outgoing edges, one labeled 1 and the other labeled 0. If there is an edge from a node labeled Xi to a node labeled Xj, then Xi must come before Xj in the variable order. An OBDD is defined over binary variables, but can be extended to discrete variables with arbitrary values. The result is called an ODD: a node labeled with variable X has one outgoing edge for each value of X. Hence, an OBDD/ODD can be viewed as representing a function that maps instances into {0, 1}. Figure 2 depicted an OBDD. Note: in this paper, we use positive/yes/1 and negative/no/0 interchangeably.

An OBDD is a tractable representation of a function f as it can be used to efficiently answer many queries about the function. For example, one can count in linear time the number of positive instances (i.e., instances e with f(e) = 1), called the models of f. One can also conjoin, disjoin and complement OBDDs efficiently. This tractability, which carries over to ODDs, will be critical for efficiently generating explanations. For more on OBDDs, see [Meinel and Theobald, 1998; Wegener, 2000].
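As an example of this tractability, the following sketch counts the models of an OBDD with one memoized bottom-up pass, so the work is linear in the diagram's size; the node encoding is again an illustrative assumption.

```python
from functools import lru_cache

# Count the models (positive instances) of an OBDD over variables 0..n-1.
# Node encoding (an assumption for this sketch):
#   ("sink", 0 or 1)  or  ("node", var_index, low_child, high_child),
# where children test variables with strictly larger indices.

def model_count(root, n):
    @lru_cache(maxsize=None)
    def count_from(node, level):
        # `level` is the index of the next variable still to be assigned.
        if node[0] == "sink":
            return node[1] * 2 ** (n - level)
        _, var, low, high = node
        skipped = 2 ** (var - level)   # variables not tested on this edge
        return skipped * (count_from(low, var + 1) + count_from(high, var + 1))
    return count_from(root, 0)

# Example: f(x0, x1) = x0 OR x1 has three models.
zero, one = ("sink", 0), ("sink", 1)
x1 = ("node", 1, zero, one)
print(model_count(("node", 0, x1, one), 2))   # 3
```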

2.4 Compiling Decision Functions

[Chan and Darwiche, 2003] proposed an algorithm for compiling a naive Bayes classifier into an ODD, while guaranteeing an upper bound on the time of compilation and the size of the resulting ODD. In particular, the algorithm guarantees bounds on the size of the compiled ODD and on the compilation time that depend only on the number of features n and the maximum number of values d that a feature may have. The actual time and space complexity can be much less, depending on the classifier's parameters and the variable order used for the ODD (as observed experimentally).

The algorithm is based on the following insights. Let X1, …, Xn be all the features. Observing features X1, …, Xi leads to another naive Bayes classifier, with features Xi+1, …, Xn and an adjusted class prior. Consider now a decision tree over the features and a node in the tree that was reached by a partial instantiation of X1, …, Xi. We annotate this node with the corresponding naive Bayes classifier found by observing this partial instantiation, and then merge nodes with equivalent classifiers (those having equivalent decision functions), as described by [Chan and Darwiche, 2003]. Implementing this idea carefully leads to an ordered decision diagram (ODD) with the corresponding bounds.³

³[Chan and Darwiche, 2003] uses a sophisticated, but conceptually simple, technique for identifying equivalent classifiers.
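A small sketch of this insight, under assumed naive Bayes parameters: absorbing an observed feature multiplies the class odds by that feature's likelihood ratio, yielding a smaller naive Bayes classifier over the remaining features with an adjusted prior. The parameter names and numbers are illustrative.

```python
# Conditioning a naive Bayes classifier on observed features: the posterior
# over the class becomes the "prior" of a smaller classifier over the
# remaining features. Parameters here are illustrative assumptions.

def condition(prior, fp, fn, observation):
    """Absorb observed features; return (new_prior, remaining fp, remaining fn)."""
    odds = prior / (1.0 - prior)
    for f, value in observation.items():
        if value == "+":
            odds *= (1.0 - fn[f]) / fp[f]      # likelihood ratio of a positive reading
        else:
            odds *= fn[f] / (1.0 - fp[f])      # likelihood ratio of a negative reading
    new_prior = odds / (1.0 + odds)
    rest = [f for f in fp if f not in observation]
    return new_prior, {f: fp[f] for f in rest}, {f: fn[f] for f in rest}

prior = 0.3
fp = {"A": 0.1, "B": 0.2, "C": 0.3}
fn = {"A": 0.2, "B": 0.1, "C": 0.2}
print(condition(prior, fp, fn, {"A": "+"}))    # a smaller classifier over B, C
```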

input: A naive Bayes classifier N

output: An ODD for the decision function of N

main:

1:  ODD ← empty decision graph
2:  for each feature X of classifier N do
3:     expand-then-merge(ODD, X)
4:  return ODD
Algorithm 1 compile-naive-bayes(N)

input: A latent-tree classifier N

output: An ODD for the decision function of N

main:

1:  ODD ← empty decision graph
2:  n ← root of the tree
3:  while n has unprocessed children do
4:     if n has a single internal and unprocessed child c then
5:        n ← c
6:     else
7:        c ← child of n with the smallest number of leaves
8:        for each leaf X under c do
9:           expand-then-merge(ODD, X)
10:       mark c as processed
11:  return ODD
Algorithm 2 compile-latent-tree(N)

Algorithm 1 is a simpler variation on the algorithm of [Chan and Darwiche, 2003]; it has the same complexity bounds, but may be less efficient in practice. It uses the procedure expand-then-merge, which expands the partial decision graph by a feature and then merges nodes that correspond to equivalent classifiers.
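The following is a brute-force sketch of the expand-then-merge idea: process one feature at a time and merge frontier nodes whose conditioned classifiers have the same residual decision function. Equivalence is tested here naively by enumerating completions, which is only meant to illustrate the idea; the cited compilation algorithm identifies equivalent classifiers far more efficiently.

```python
from itertools import product

def compile_odd(features, decision):
    """Compile an ordered decision graph for `decision`, a function mapping a
    dict {feature: '+'/'-'} to '+' or '-'. Nodes whose conditioned classifiers
    have the same residual decision function are merged (naive equivalence
    test by enumeration; for illustration only)."""
    cache = {}                                      # signature -> node

    def signature(assignment, i):
        # The residual decision function on features[i:], as a tuple.
        rest = features[i:]
        return tuple(decision({**assignment, **dict(zip(rest, vals))})
                     for vals in product("+-", repeat=len(rest)))

    def build(assignment, i):
        sig = signature(assignment, i)
        if sig in cache:                            # merge equivalent nodes
            return cache[sig]
        if i == len(features):
            node = ("sink", sig[0])
        else:
            f = features[i]
            node = ("node", f,
                    build({**assignment, f: "+"}, i + 1),
                    build({**assignment, f: "-"}, i + 1))
        cache[sig] = node
        return node

    return build({}, 0)

# Example: the decision function "at least one positive feature".
odd = compile_odd(["A", "B"], lambda e: "+" if "+" in e.values() else "-")
```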

Using this procedure, we propose Algorithm 2 for compiling a latent-tree classifier into an ODD. The key insight is the following. Let n be a node in a latent-tree classifier where all features outside the subtree of n have been observed, and let c be a child of n. Observing all features under c leads to a new latent-tree classifier without the subtree rooted at c and with an adjusted class prior. Algorithm 2 uses this observation by iteratively choosing such a node c and then shrinking the classifier by instantiating the features under it, allowing us to compile an ODD in a fashion similar to [Chan and Darwiche, 2003]. The specific choice of internal nodes by Algorithm 2 leads to the following complexity.

Theorem 1

Given a latent-tree classifier with variables, each with at most values, the ODD computed by Algorithm 2 has size and can be obtained in time .

If one makes further assumptions about the structure of the latent tree (e.g., if the root has children, and each child of the root has features), then one obtains the size bound of and time bound of for naive Bayes classifiers. We do not expect a significantly better upper bound on the time complexity due to the following result.

Theorem 2

Given a naive Bayes classifier, compiling an ODD representing its decision function is NP-hard.

3 Minimum Cardinality Explanations

We now consider the first type of explanations for why a classifier makes a certain decision. These are called minimum-cardinality explanations or MC-explanations. We will first assume that the features are binary and then generalize later.

Consider two instances e and e*. As we did earlier, we say that e* is 1-contained in e when the features set to 1 in e* are a subset of those set to 1 in e; 0-containment is defined analogously. Moreover, the 1-cardinality of an instance is the number of its features set to 1, and its 0-cardinality is defined analogously.

Definition 3 (MC-Explanation)

Let f be a given decision function. An MC-explanation of a positive instance e is another positive instance e* that is 1-contained in e, such that no other positive instance that is 1-contained in e has a smaller 1-cardinality than e*. An MC-explanation of a negative instance e is another negative instance e* that is 0-contained in e, such that no other negative instance that is 0-contained in e has a smaller 0-cardinality than e*.

Intuitively, an MC-explanation of a positive decision on instance e answers the question: which positive features of e are responsible for this decision? Similarly, an MC-explanation of a negative decision on instance e answers: which negative features of e are responsible for this decision? MC-explanations are not necessarily unique, as we shall see later. However, MC-explanations of positive decisions must all have the same number of 1-features, and those of negative decisions must all have the same number of 0-features.
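The following brute-force sketch computes MC-explanations of a positive instance directly from Definition 3 (the negative case is symmetric, with the roles of + and - exchanged). It only illustrates the semantics; Section 3.1 computes the same explanations on the compiled OBDD. On the admissions classifier of Table 1 below, this sketch would return the single explanation (+ - - +) for the instance (+ + + +).

```python
from itertools import combinations

def mc_explanations_positive(f, instance):
    """Brute-force MC-explanations of a positive instance with binary features.
    f maps a dict {feature: '+'/'-'} to '+'/'-'; `instance` must be positive.
    Keep the negative features fixed, try ever smaller subsets of the positive
    features, and return the positive instances found at the smallest size."""
    assert f(instance) == "+"
    pos = [x for x, v in instance.items() if v == "+"]
    for k in range(len(pos) + 1):                 # smallest cardinality first
        found = []
        for keep in combinations(pos, k):
            e = {x: ("+" if x in keep else "-") for x in instance}
            if f(e) == "+":
                found.append(e)
        if found:
            return found
    return []
```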

MC-explanations are perhaps best illustrated using a monotone classifier. As a running example, consider a (monotone) classifier for deciding whether a student will be admitted to a university. The class variable is admission and the features of an applicant are:

  • has prior work experience.

  • did not apply before (i.e., is a first-time applicant).

  • passed the entrance exam.

  • has met the university's expected GPA.

All variables are either positive (+) or negative (-).

Consider a naive Bayes classifier that specifies a false positive and a false negative rate for each feature. To completely specify the classifier, we also need the prior probability of admission and a decision threshold: an applicant is admitted if the probability of admission given their features meets the threshold. The false positive and false negative rates are such that a positively observed feature increases the probability of a positive classification, while a negatively observed feature increases the probability of a negative classification (hence, the classifier is monotone).

instance    Pr(admit | instance)   decision   MC-explanations
- - - -     0.0002                 -          (- - + +) (- + - +) (- + + -) (+ - + -) (+ + - -)
- - - +     0.0426                 -          (- - + +) (- + - +)
- - + -     0.0006                 -          (- - + +) (- + + -) (+ - + -)
- - + +     0.1438                 -          (- - + +)
- + - -     0.0016                 -          (- + - +) (- + + -) (+ + - -)
- + - +     0.2933                 -          (- + - +)
- + + -     0.0060                 -          (- + + -)
- + + +     0.6105                 +          (- + + +)
+ - - -     0.0354                 -          (+ + - -) (+ - + -)
+ - - +     0.9057                 +          (+ - - +)
+ - + -     0.1218                 -          (+ - + -)
+ - + +     0.9732                 +          (+ - - +)
+ + - -     0.2552                 -          (+ + - -)
+ + - +     0.9890                 +          (+ - - +)
+ + + -     0.5642                 +          (+ + + -)
+ + + +     0.9971                 +          (+ - - +)
Table 1: A decision function with MC-explanations.

Table 1 depicts the decision function for this naive Bayes classifier, with MC-explanations for all instances.

Consider, for example, a student (+ + + +) who was admitted by this decision function. There is a single MC-explanation for this decision, (+ - - +), with cardinality 2. According to this explanation, work experience and a good GPA were the reasons for admission. That is, the student would still have been admitted even if they had applied before and had not passed the entrance exam.

For another example, consider a student (- - - +) who was rejected. There are two MC-explanations for this decision. The first, (- - + +), says that the student would not have been admitted, even if they passed the entrance exam. The second explanation, (- + - +), says that the student would not have been admitted, even if they were a first-time applicant.

Finally, we remark that while MC-explanations are more intuitive for monotone classifiers, they also apply to classifiers that are not monotone, as we shall see in Section 3.2.

3.1 Computing MC-Explanations

We will now present an efficient algorithm for computing the MC-explanations of a decision, assuming that the decision function has a specific form. Our treatment assumes that the decision function is represented as an OBDD, but it actually applies to a broader class of representations which includes OBDDs as a special case. More on this later.

Our algorithm uses a key operation on decision functions.

Definition 4 (Cardinality Minimization)

For v ∈ {0, 1}, the v-minimization of a decision function f is another decision function f_v defined as follows: f_v(e) = 1 iff (a) f(e) = 1 and (b) the v-cardinality of e is no greater than the v-cardinality of any other instance e* with f(e*) = 1.

The 1-minimization of a decision function renders positive decisions only on the positive instances of the function having a minimal number of 1-features. Similarly, the 0-minimization renders positive decisions only on the positive instances having a minimal number of 0-features. Cardinality minimization was discussed and employed for other purposes in [Darwiche, 2001; Darwiche and Marquis, 2002].
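A brute-force sketch of Definition 4 over binary features follows; the OBDD-based operation used by Algorithm 3 below computes the same function in time linear in the diagram's size.

```python
from itertools import product

def minimize(f, n, v):
    """Brute-force v-minimization of f over n binary features: keep only the
    positive instances of f that have the fewest features set to v."""
    positives = [x for x in product((0, 1), repeat=n) if f(x)]
    if not positives:
        return lambda x: 0
    best = min(sum(1 for b in x if b == v) for x in positives)
    minimal = {x for x in positives if sum(1 for b in x if b == v) == best}
    return lambda x: int(tuple(x) in minimal)
```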

input: An OBDD Δ and instance e.

output: An OBDD Γ where Γ(e*) = 1 iff e* is an MC-explanation of decision Δ(e).

main:

1:  α ← Δ(e)
2:  t ← instantiation of the features that e sets to the value other than α
3:  complement the function Δ if α = 0
4:  return the α-minimization of Δ ∧ t
Algorithm 3 find-mc-explanation(Δ, e)

Algorithm 3 computes the MC-explanations of a decision Δ(e). The set of computed explanations is encoded by another decision function Γ. In particular, Γ(e*) = 1 iff e* is an MC-explanation of the decision Δ(e).

Suppose we want to compute the MC-explanations of a positive decision. The algorithm first finds the portion t of instance e whose variables are set to 0. It then conjoins⁴ t with Δ and 1-minimizes the result. The obtained decision function encodes the MC-explanations in this case.

⁴Conjoining Δ with t leads to a function Γ such that Γ(e*) = 1 iff Δ(e*) = 1 and e* is compatible with t.

An OBDD can be complemented and conjoined with a variable instantiation in linear time. It can also be minimized in linear time. This leads to the following complexity for generating MC-explanations based on OBDDs.

Theorem 3

When the decision function Δ is represented as an OBDD, the time and space complexity of Algorithm 3 is linear in the size of Δ, while guaranteeing that the output function is also an OBDD.

Given OBDD properties, one can count MC-explanations in linear time, and enumerate each in linear time.⁵ Hence, DNNFs, d-DNNFs and SDDs could also have been used for supporting MC-explanations, except that we would need a different algorithm for compiling classifiers. Moreover, beyond OBDDs, only SDDs support complementation in linear time. Hence, efficiently computing MC-explanations of negative decisions requires that we efficiently complement the decision functions represented by DNNFs or d-DNNFs.

⁵Minimization, conjoining with an instantiation, and model enumeration are all linear-time operations on DNNFs, a superset of OBDDs [Darwiche, 2001; Darwiche and Marquis, 2002]. Moreover, OBDD ⊆ SDD ⊆ d-DNNF ⊆ DNNF, where we read ⊆ as "is-a-subclass-of."

3.2 Case Study: Votes Classifier

We now consider the Congressional Voting Records dataset from the UCI machine learning repository [Bache and Lichman, 2013]. This dataset consists of 16 key votes by Congressmen of the U.S. House of Representatives. The class label is the party of the Congressman (positive if Republican and negative if Democrat). A naive Bayes classifier trained on this dataset obtains 91.0% accuracy. We compiled this classifier into an OBDD, which has a size of 630 nodes.

The following Congressman from the dataset voted on all 16 issues and was classified correctly as a Republican:

(0 1 0 1 1 1 0 0 0 0 0 0 1 1 0 1)

This decision has five MC-explanations of cardinality 3, e.g.:

(0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0)

The MC-explanation tells us that this Congressman could have reversed four of their yes-votes, and the classifier would still predict that this Congressman was a Republican.

For a problem of this size, we can enumerate all instances of the classifier. We computed the MC-explanations for each of the positive instances. Among these MC-explanations, the one that appeared most frequently was the MC-explanation from the above example. This explanation corresponded to yes-votes on three issues. Further examination of the dataset revealed that these issues were the three with the fewest Republican no-votes.

4 Prime Implicant Explanations

We now consider the second type of explanations, called prime-implicant explanations or PI-explanations for short.

Let t and s be instantiations of some of the features and call them partial instances. We say that t extends s if t includes s but may set some additional features.

Definition 5 (PI-Explanation)

Let f be a given decision function. A PI-explanation of a decision f(e) is a partial instance t such that

  (a) instance e extends t,

  (b) f(e*) = f(e) for every instance e* that extends t, and

  (c) no strict subset of t satisfies (a) and (b).

Intuitively, a PI-explanation of the decision on instance e is a minimal subset of e that renders the features outside it irrelevant to the decision. That is, we can toggle any feature that does not appear in the explanation while maintaining the current decision. The number of features appearing in a PI-explanation will be called the length of the explanation. As we shall see later, PI-explanations of the same decision may have different lengths.
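The following brute-force sketch computes PI-explanations directly from Definition 5 for binary features; the enumeration is exponential and only illustrates the semantics, whereas Section 4.1 computes PI-explanations on the compiled ODD.

```python
from itertools import combinations, product

def pi_explanations(f, instance):
    """Brute-force PI-explanations of the decision on `instance`.
    f maps a dict {feature: '+'/'-'} to a label; returns partial instances."""
    label = f(instance)
    names = list(instance)

    def forced(subset):
        # True if fixing `subset` of the instance forces the decision `label`.
        free = [x for x in names if x not in subset]
        for vals in product("+-", repeat=len(free)):
            e = {**{x: instance[x] for x in subset}, **dict(zip(free, vals))}
            if f(e) != label:
                return False
        return True

    sufficient = [set(s) for k in range(len(names) + 1)
                  for s in combinations(names, k) if forced(s)]
    # Keep only the subset-minimal sufficient subsets.
    return [{x: instance[x] for x in s} for s in sufficient
            if not any(t < s for t in sufficient)]
```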

instance    Pr(admit | instance)   decision   PI-explanations
- - - -     0.0002                 -          () () () () ()
- - - +     0.0426                 -          () ()
- - + -     0.0006                 -          () () ()
- - + +     0.1438                 -          ()
- + - -     0.0016                 -          () () ()
- + - +     0.2933                 -          ()
- + + -     0.0060                 -          ()
- + + +     0.6105                 +          ()
+ - - -     0.0354                 -          () ()
+ - - +     0.9057                 +          ()
+ - + -     0.1218                 -          ()
+ - + +     0.9732                 +          ()
+ + - -     0.2552                 -          ()
+ + - +     0.9890                 +          ()
+ + + -     0.5642                 +          ()
+ + + +     0.9971                 +          () () ()
Table 2: A decision function with PI-explanations.

Table 2 depicts the decision function for the admissions classifier, with PI-explanations for all 16 instances.

Consider a student (+ + - -) who was not admitted by this decision function. There is a single PI-explanation for this decision, which can be visualized as (* * - -). According to this explanation, it is sufficient to have a poor entrance exam and a poor GPA to be rejected; it does not matter whether the applicant has work experience or is a first-time applicant. That is, we can set these two features to any value, and the applicant would still be rejected.

Consider now a student (+ + + +) who was admitted. There are three PI-explanations for this decision, with different lengths; they can be visualized as (+ * * +), (+ + + *) and (* + + +). This is in contrast to the single MC-explanation (+ - - +) obtained previously.

4.1 Computing Prime Implicant Explanations

Algorithms exist for converting an OBDD for a function f into an ODD that encodes the prime implicants of f [Coudert and Madre, 1993; Coudert et al., 1993; Minato, 1993].⁶ The resulting ODD has three values for each variable: 0, 1 and * (don't care). The ODD encodes partial instances, which correspond to the PI-explanations of positive instances (to get the PI-explanations of negative instances, we complement the OBDD). These algorithms recurse on the structure of the input OBDD, computing prime implicants of sub-OBDDs. If X is the variable labeling the root of OBDD Δ, then Δ|X=0 denotes its 0-child and Δ|X=1 denotes its 1-child. Algorithm 4 computes prime implicants by recursively computing prime implicants for Δ|X=0, Δ|X=1 and Δ|X=0 ∧ Δ|X=1 [?].

⁶These algorithms compute prime-implicant covers.

As we are interested in explaining a specific instance e, we only need the prime implicants compatible with e (a function may have exponentially many prime implicants, but those compatible with a given instance may be few). We exploit this observation in Algorithm 5, which computes the PI-explanations of a given positive instance by avoiding certain recursive calls. Empirically, we have observed that Algorithm 5 can be twice as fast as Algorithm 4 (computing prime implicants first, then conjoining with the given instance to obtain PI-explanations). It can also generate ODDs that are an order of magnitude smaller. The following table highlights this difference in size and running time, per instance, between Algorithms 4 (cover) and 5 (inst). Relative improvements are denoted by impr; n denotes the number of features. We report averages over instances.

input: OBDD Δ and variable ordering σ

output: ODD encoding the prime implicants of Δ

main:

1:  if σ is empty, return Δ
2:  X ← remove first variable from the order σ
3:  γ* ← pi-cover(Δ|X=0 ∧ Δ|X=1, σ)
4:  γ0 ← pi-cover(Δ|X=0, σ), keeping only implicants not covered by γ*
5:  γ1 ← pi-cover(Δ|X=1, σ), keeping only implicants not covered by γ*
6:  return ODD node over X with branches γ0 (X = 0), γ1 (X = 1) and γ* (X = *)
Algorithm 4 pi-cover(Δ, σ)

input: OBDD Δ, variable ordering σ, and instance e

output: ODD encoding the prime implicants of Δ compatible with e

main:

1:  if σ is empty, return Δ
2:  X ← remove first variable from the order σ
3:  γ* ← pi-inst(Δ|X=0 ∧ Δ|X=1, σ, e)
4:  if e sets X to 1 then
5:     γ1 ← pi-inst(Δ|X=1, σ, e), keeping only implicants not covered by γ*; γ0 ← false
6:  else
7:     γ0 ← pi-inst(Δ|X=0, σ, e), keeping only implicants not covered by γ*; γ1 ← false
8:  return ODD node over X with branches γ0 (X = 0), γ1 (X = 1) and γ* (X = *)
Algorithm 5 pi-inst(Δ, σ, e)
                    time (s)               ODD size
dataset    n    cover   inst   impr    cover   inst    impr
           16    0.04   0.02   1.99    2,144    139   15.42
           22    0.06   0.02   2.27    3,130    437    7.14
           16    0.07   0.02   2.56    5,086    446   11.39
           15    0.03   0.02   1.39      432    111    3.89
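For concreteness, the following sketch states the recursion behind Algorithms 4 and 5 on plain Boolean functions rather than on OBDD nodes: a prime implicant either does not mention the current variable (then it is a prime implicant of the conjunction of the two cofactors) or it does (then it comes from one cofactor and must not already be an implicant of the other). The function-based representation and helper names are assumptions of this sketch; the paper's algorithms operate on the OBDD/ODD, return the result as an ODD with a don't-care edge, and Algorithm 5 additionally explores only the cofactor consistent with the given instance.

```python
from itertools import product

def prime_implicants(f, names):
    """All prime implicants of f, where f maps a tuple of 0/1 values for
    `names` to 0/1. Implicants are returned as dicts {name: 0/1}."""
    if not names:
        return [{}] if f(()) else []
    x, rest = names[0], names[1:]
    f0 = lambda t: f((0,) + t)          # cofactor on x = 0
    f1 = lambda t: f((1,) + t)          # cofactor on x = 1
    both = lambda t: f0(t) and f1(t)

    def implicant_of(term, g):
        """Does fixing `term` force g to 1 on every completion over `rest`?"""
        free = [y for y in rest if y not in term]
        for vals in product((0, 1), repeat=len(free)):
            point = {**term, **dict(zip(free, vals))}
            if not g(tuple(point[y] for y in rest)):
                return False
        return True

    result = prime_implicants(both, rest)                         # drop x
    result += [{x: 0, **p} for p in prime_implicants(f0, rest)
               if not implicant_of(p, f1)]                        # keep literal x = 0
    result += [{x: 1, **p} for p in prime_implicants(f1, rest)
               if not implicant_of(p, f0)]                        # keep literal x = 1
    return result

# Example: x0 OR x1 has prime implicants {x1: 1} and {x0: 1}.
print(prime_implicants(lambda t: t[0] or t[1], ("x0", "x1")))
```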

4.2 Case Study: Votes Classifier

Consider again the voting record of the Republican Congressman that we considered earlier in Section 3.2:

(0 1 0 1 1 1 0 0 0 0 0 0 1 1 0 1)

There are 30 PI-explanations of this decision. There are 2 shortest explanations, each of 9 features:

(0 0 0 1 1 0 0 0 0 0 0 0 1 1 0 0)

(0 0 0 1 1 1 0 0 0 0 0 0 1 1 0 0)

The first corresponds to yes votes on four of the issues and no votes on five others. These 9 votes necessitate the classification as a Republican; no other vote can change this decision. Finally, there are 506 PI-explanations in total for all decisions made by this classifier:

length of explanation        9     10     11     12     13    total
number of explanations      35    308    143     19      1      506

5 More On Monotone Classifiers

We now discuss a specific relationship between MC and PI explanations for monotone classifiers.

An MC-explanation sets all features, while a PI-explanation sets only a subset of the features. For a positive instance, we will say that an MC-explanation e* and a PI-explanation t match iff e* can be obtained from t by setting all missing features negatively. For a negative instance, an MC-explanation e* and a PI-explanation t match iff e* can be obtained from t by setting all missing features positively.

Theorem 4

For a decision f(e) of a monotone decision function f:

  1. Each MC-explanation matches some shortest PI-explanation.

  2. Each shortest PI-explanation matches some MC-explanation.

Hence, for monotone decision functions, MC-explanations coincide with shortest PI-explanations.

The admissions classifier we considered earlier is monotone, which can be verified by inspecting its decision function (in contrast, the votes classifier is not monotone). Here, all MC-explanations matched PI-explanations. For example, the MC-explanation (+ - - +) for instance (+ + - +) matches the PI-explanation (+ * * +). However, the PI-explanation (+ + + *) for instance (+ + + +) does not match the single MC-explanation (+ - - +). One can verify though, by examining Tables 1 and 2, that shortest PI-explanations coincide with MC-explanations.

MC-explanations are no longer than PI-explanations and their count is no larger than the count of PI-explanations. Moreover, MC-explanations can be computed in linear time, given that the decision function is represented as an OBDD. This is not guaranteed for PI-explanations.

PI-explanations can be directly extended to classifiers with multi-valued features. They are also meaningful for arbitrary classifiers, not just monotone ones. While our definition of MC-explanations was directed towards monotone classifiers with binary features, it can be generalized so it remains useful for arbitrary classifiers with multi-valued features. In particular, let us partition the values of each feature into two sets: on-values and off-values. Let us also partition the set of features into two sets X and Y. Consider now the following question about a decision f(x, y), where x and y instantiate X and Y, respectively. Keeping y fixed, find a minimal culprit of on-features in x that maintains the current decision. Definition 3 is a special case of this more general definition, and Algorithm 3 can be easily extended to compute these more general MC-explanations with the same complexity (that is, linear in the size of the ODD for the decision function).
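A brute-force sketch of this generalization follows: features in Y stay fixed, and among the features in X we keep a smallest set of on-valued features at their observed values while pushing the rest to a designated off value. The single off value per feature, and the function and argument names, are illustrative simplifications of this sketch rather than the paper's exact formulation.

```python
from itertools import combinations

def general_mc_explanations(f, instance, fixed, on_values, off_value):
    """f maps a dict {feature: value} to a class label. `fixed` is the set Y of
    features kept at their observed values; `on_values[x]` is the set of
    on-values of feature x; `off_value[x]` is the off value to push x to."""
    label = f(instance)
    culprits = [x for x in instance
                if x not in fixed and instance[x] in on_values[x]]
    for k in range(len(culprits) + 1):          # fewest kept on-features first
        found = []
        for keep in combinations(culprits, k):
            e = dict(instance)
            for x in culprits:
                if x not in keep:
                    e[x] = off_value[x]
            if f(e) == label:
                found.append(e)
        if found:
            return found
    return []
```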

6 Related Work

There has been significant interest recently in providing explanations for classifiers; see, e.g., [Ribeiro et al., 2016a; Ribeiro et al., 2016b; Ribeiro et al., 2018; Lundberg and Lee, 2017; Elenberg et al., 2017]. In particular, model-agnostic explainers have been sought [Ribeiro et al., 2016b], which can explain the behavior of (almost) any classifier by treating it as a black box. Take, for example, LIME, which locally explains the classification of a given instance. Roughly, LIME samples new instances that are "close" to a given instance, and then learns a simpler, interpretable model from the sampled data. For example, suppose a classifier rejects a loan application; one could learn a decision tree from other instances similar to the applicant, to understand why the original decision was made.

More related to our work is the notion of an "anchor" introduced in [Ribeiro et al., 2016a; Ribeiro et al., 2018]. An anchor for an instance is a subset of the instance that is highly likely to be classified with the same label, no matter how the missing features are filled in (according to some distribution). An anchor can be viewed as a probabilistic extension of a PI-explanation. Anchors can also be understood using the Same-Decision Probability (SDP) [Choi et al., 2012; Chen et al., 2014; Choi et al., 2017], proposed in [Darwiche and Choi, 2010]. In this context, the SDP asks: "Given that I have already observed these features, what is the probability that I will make the same classification once I observe the remaining features?" We expect an anchor to have a high SDP, whereas a PI-explanation always has an SDP of 1.0.
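A brute-force sketch of the SDP follows, under the assumption of binary features valued + or - and a callable `joint` giving the classifier's joint distribution; both, and the function name, are illustrative choices for this sketch.

```python
from itertools import product

def sdp(joint, features, y, threshold):
    """Same-decision probability of the threshold-based decision made on the
    partial observation y. `joint(c, e)` returns Pr(class=c, features=e) for a
    full instance e given as a dict."""
    free = [x for x in features if x not in y]
    completions = [{**y, **dict(zip(free, vals))}
                   for vals in product("+-", repeat=len(free))]

    # Decision made on y alone: compare Pr(class=+ | y) with the threshold.
    p_pos_y = sum(joint("+", e) for e in completions)
    p_neg_y = sum(joint("-", e) for e in completions)
    decision_y = "+" if p_pos_y / (p_pos_y + p_neg_y) >= threshold else "-"

    # Probability, given y, that observing the remaining features leaves the
    # decision unchanged.
    same = 0.0
    for e in completions:
        p_pos, p_neg = joint("+", e), joint("-", e)
        if ("+" if p_pos / (p_pos + p_neg) >= threshold else "-") == decision_y:
            same += p_pos + p_neg
    return same / (p_pos_y + p_neg_y)
```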

7 Conclusion

We proposed an algorithm for compiling latent-tree Bayesian network classifiers into decision functions in the form of ODDs. We also proposed two approaches for explaining the decision that a Bayesian network classifier makes on a given instance, which apply more generally to any decision function in symbolic form. One approach is based on MC-explanations, which minimize the number of positive features in an instance, while maintaining its classification. The other approach is based on PI-explanations, which identify a smallest set of features in an instance that renders the remaining features irrelevant to a classification. We proposed algorithms for computing these explanations when the decision function has a symbolic and tractable form. We also discussed monotone classifiers and showed that MC-explanations and PI-explanations coincide for this class of classifiers.

Acknowledgments

This work has been partially supported by NSF grant #IIS-1514253, ONR grant #N00014-15-1-2339 and DARPA XAI grant #N66001-17-2-4032.

Appendix A Proofs

  • Our proof is based on analyzing Algorithm 2 on an arbitrary latent-tree classifier with variables and values, and bounding the size of the decision graph after each call to expand-then-merge. For any iteration of the while-loop, let be the initial decision graph and let be the decision graph generated after the expanding phase of . Furthermore, let denote the number of leaf nodes of (similarly for ). We will show the following loop invariant: . For any iteration, is bounded by , where denotes the depth of . There are variables remaining, and the choice of in the algorithm guarantees that the number of variables under is at most . Thus, is bounded by . If , then . Otherwise if then . Thus, after every call to expand-then-merge, the decision graph has at most leaf nodes and the merging phase cannot increase the number of nodes, giving us a total size bound of . To obtain the size bound of , observe that is at least half of the number of newly expanded nodes for each call, and at most one such call can have nodes. Finally, merging a node in takes time logarithmic in the size of , so the time complexity is .  

  • Our proof is based on [?], which showed that computing the same-decision probability (SDP) is NP-hard in naive Bayes networks. Say we have an instance of the number partitioning problem, where we have positive integers a1, …, an and we ask if there exists a set S ⊆ {1, …, n} such that the integers indexed by S sum to the same total as the remaining integers. Suppose we have a naive Bayes classifier with features X1, …, Xn where:

    and where we have a uniform prior . Let be the instance where is set to true if and is set to false if . Consider the log-odds :

    If is a number partitioning solution, then Otherwise where . Hence, if there is no solution , then half of the instances have log-odds strictly greater than zero, and the other half have log-odds strictly less than zero. Thus, there exists a solution iff the number of positive instances in the decision function of is strictly less than given a (strict) threshold of . Finally, if we can compile the decision function of to an OBDD in polytime, then we can perform model counting in time linear in the size of the OBDD, and hence solve number partitioning, which is NP-complete. Thus, compiling the decision function is NP-hard.  

  • An OBDD can be complemented by simply switching its 1-sink and 0-sink. Since t is a conjunction of literals, we can conjoin Δ with t by manipulating the OBDD structure directly: if a variable appears in t positively (negatively), we redirect the 0-edge (1-edge) of each OBDD node labeled by that variable to the 0-sink. Clearly, this operation takes time linear in the size of Δ. The operation of cardinality minimization can also be performed in time linear in the size of Δ using the technique given in [Darwiche, 2001] for DNNFs. The minimization procedure performs two passes. The first pass performs an addition or minimization at each node. The second pass redirects some edges depending on simple tests.

  • Suppose, without loss of generality, that we are explaining a positive instance e of a monotone decision function (the negative case is symmetric). The proof uses the following observation: a shortest PI-explanation must have all of its features set positively (otherwise, due to monotonicity, we can drop its negative features to obtain a shorter PI-explanation).

    1. Suppose that e* is an MC-explanation. Let t be the portion of e* containing all features that are set positively. Due to monotonicity, we can toggle the features of e* that are outside t without changing the decision. Moreover, no subset of t will have this property; otherwise, e* could not be an MC-explanation. Hence, t is a PI-explanation that matches e*. Suppose now that t is not a shortest PI-explanation and let t' be a shortest PI-explanation. Then we can augment t' by setting all missing features negatively, giving us a positive instance with a 1-cardinality less than that of e*. Hence, e* could not be an MC-explanation.

    2. Suppose that t is a shortest PI-explanation. Then all features in t must be set positively. Now let e* be the result of augmenting t by setting all missing features negatively. Then e* is a positive instance since t is a PI-explanation. Suppose now that e* is not an MC-explanation, and let e' be an MC-explanation. Then let t' be the portion of e' containing all features that are set positively. By monotonicity, t cannot be a shortest PI-explanation since t' is shorter than t yet all of its completions would be positive instances.

References

  • [Bache and Lichman, 2013] K. Bache and M. Lichman. UCI machine learning repository, 2013.
  • [Bryant, 1986] R. E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, C-35:677–691, 1986.
  • [Chan and Darwiche, 2003] Hei Chan and Adnan Darwiche. Reasoning about Bayesian network classifiers. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 107–115, 2003.
  • [Chen et al., 2014] Suming Chen, Arthur Choi, and Adnan Darwiche. Algorithms and applications for the same-decision probability. Journal of Artificial Intelligence Research, 49:601–633, 2014.
  • [Choi et al., 2012] Arthur Choi, Yexiang Xue, and Adnan Darwiche. Same-decision probability: A confidence measure for threshold-based decisions. International Journal of Approximate Reasoning (IJAR), 53(9):1415–1428, 2012.
  • [Choi et al., 2013] Arthur Choi, Doga Kisa, and Adnan Darwiche. Compiling probabilistic graphical models using sentential decision diagrams. In Proceedings of the 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU), pages 121–132, 2013.
  • [Choi et al., 2017] YooJung Choi, Adnan Darwiche, and Guy Van den Broeck. Optimal feature selection for decision robustness in Bayesian networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), August 2017.
  • [Coudert and Madre, 1993] Olivier Coudert and Jean Christophe Madre. Fault tree analysis: prime implicants and beyond. In Proc. of the Annual Reliability and Maintainability Symposium, 1993.
  • [Coudert et al., 1993] Olivier Coudert, Jean Christophe Madre, Henri Fraisse, and Herve Touati. Implicit prime cover computation: An overview. In Proceedings of the 4th SASIMI Workshop, 1993.
  • [Darwiche and Choi, 2010] Adnan Darwiche and Arthur Choi. Same-decision probability: A confidence measure for threshold-based decisions under noisy sensors. In Proceedings of the Fifth European Workshop on Probabilistic Graphical Models (PGM), pages 113–120, 2010.
  • [Darwiche and Marquis, 2002] Adnan Darwiche and Pierre Marquis. A knowledge compilation map. JAIR, 17:229–264, 2002.
  • [Darwiche, 2001] Adnan Darwiche. Decomposable negation normal form. Journal of the ACM, 48(4):608–647, 2001.
  • [Elenberg et al., 2017] Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, and Amin Karbasi. Streaming weak submodularity: Interpreting neural networks on the fly. In Advances in Neural Information Processing Systems 30 (NIPS), pages 4047–4057, 2017.
  • [Horiyama and Ibaraki, 2002] Takashi Horiyama and Toshihide Ibaraki. Ordered binary decision diagrams as knowledge-bases. Artificial Intelligence (AIJ), 136(2):189–213, 2002.
  • [Lundberg and Lee, 2017] Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30 (NIPS), pages 4768–4777, 2017.
  • [Meinel and Theobald, 1998] Christoph Meinel and Thorsten Theobald. Algorithms and Data Structures in VLSI Design: OBDD — Foundations and Applications. Springer, 1998.
  • [Minato, 1993] Shin-ichi Minato. Fast generation of prime-irredundant covers from binary decision diagrams. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 76(6):967–973, 1993.
  • [Ribeiro et al., 2016a] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Nothing else matters: Model-agnostic explanations by identifying prediction invariance. In NIPS Workshop on Interpretable Machine Learning in Complex Systems, 2016.
  • [Ribeiro et al., 2016b] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Knowledge Discovery and Data Mining (KDD), 2016.
  • [Ribeiro et al., 2018] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018.
  • [van der Gaag et al., 2004] Linda C. van der Gaag, Hans L. Bodlaender, and A. J. Feelders. Monotonicity in Bayesian networks. In Proceedings of the 20th Conference in Uncertainty in Artificial Intelligence (UAI), pages 569–576, 2004.
  • [Wegener, 2000] Ingo Wegener. Branching Programs and Binary Decision Diagrams. SIAM, 2000.