The Emerging Landscape of Explainable Automated Planning & Decision Making

Abstract

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) that have emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that the survey will provide guidance to new researchers in automated planning on the role of explanations in the effective design of human-in-the-loop systems, as well as provide established researchers with some perspective on the evolution of the exciting world of explainable planning.

1 Introduction

As AI techniques mature, issues of interfacing with users have emerged as one of the primary challenges facing the AI community. Chief among these is the need for AI-based systems to be able to explain their reasoning to humans in the loop [25]. This is necessary both for collaborative interactions where humans and AI systems solve problems together, and for establishing trust with end users in general. Among the work in this direction in the broader AI community, we focus in this survey on how the automated planning community in particular has responded to this challenge.

One of the recent developments towards this end is the establishment of the Explainable AI Planning (XAIP) Workshop¹ at the International Conference on Automated Planning and Scheduling (ICAPS), the premier conference in the field. The agenda of the workshop states:

While XAI at large is primarily concerned with black-box learning-based approaches, model-based approaches are well suited – arguably better suited – for an explanation, and Explainable AI Planning (XAIP) can play an important role in helping users interface with AI technologies in complex decision-making procedures.

In general, this is true for sequential decision making tasks for a variety of reasons. The complexity of automated planning and decision making, and consequently the role of explainability in it, raises many more challenges than the function approximation tasks (e.g. classification) originally focused on [26] by the XAI Program from DARPA. These range from dealing with complex constraints over problems intractable to the human's inferential capabilities and with differences in human expectations and mental models, to proving the provenance of various artifacts of a system's decision making process over long-term interactions even as the world evolves around it. Furthermore, these are typically reasoning tasks where we tend to seek explanations anyway in human-human interactions, as opposed to perception tasks.

Thus, the original DARPA XAI program [26], which served as a great catalyst towards advancing research in explainable AI, has also seen an evolution [25] of its core focus from machine learning to the broader sense of artificial intelligence, particularly decision making tasks. Recent surveys on the topic [1] also recognize this lacuna. As the issue of explainability becomes front and center in AI, the importance of long-term decision making cannot be avoided [63]. This is highlighted by the emergence of XAI subcommunities within the planning, multi-agent, and other communities at premier AI conferences: the Explainable AI (XAI) Workshop² at the International Joint Conference on Artificial Intelligence (IJCAI) and the Explainable Transparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS) Workshop³ at the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), in addition to the XAIP Workshop mentioned above, have captured the imagination of this emerging field of inquiry.

Survey Scope and Outline In this survey, we highlight the role of explanations in the many unique dimensions of a decision making problem, particularly automated planning, and provide a comprehensive survey of recent work in this direction. In particular, we focus on automated planning as a subfield of decision making problems in order to adhere to the limitations of a six-page survey, but we point to work in the broader area wherever necessary to highlight themes of explainable planning in general. To this end, we start with a brief overview of the different kinds of users associated with an automated decision making task and the considerations for an explanation in each case. We then introduce various aspects of a planning task formally and delve into a survey of existing works that tackle the explanation problem in one or more of these dimensions, while comparing and contrasting the properties of such explanations. Finally, we conclude with a summary of emerging trends in XAIP research.

In the survey, we focus exclusively on explanations of a plan as a solution of a given planning problem. We will not cover meta planning problems such as goal reasoning [52, 13, 50], or open world considerations in the explanation of plans that fail [27]. We will also not cover novel behaviors in pursuit of explainability: e.g. the generation of explicable plans [66] that conform to user expectations and are thus not required to be explained, or the design of environments to facilitate the same [37]. For a detailed treatise of the same, we refer the reader to [9]. Other topics excluded are execution time considerations, such as in [39].

2 The Many Faces of XAIP

The primary consideration in the design of explainable systems is the persona of the explainee. This is true for explainable AI in general [67] and is acknowledged to be crucial to the XAIP scene as well [40].

  • End user: This is the person who interacts with the system as a user. For a planning system, this may be the human teammate in a human-robot team [11] who is impacted by, or is a direct stakeholder in, the plans of the robot, or a user collaborating with an automated planner in a decision support setting [24].

  • Domain Designer: This is the person involved in the acquisition of the model that the system works with: e.g. the designer of goal-oriented conversation systems [55].

  • Algorithm Designer: The final persona is that of the developer of the algorithms themselves: e.g. in the context of automated planning systems, this could be someone working on informed search.

Though [40] does not make an explicit distinction, for most real-world applications the domain designer is distinct from the algorithm designer and may not even have any overlap in expertise with them (e.g. [55]). As we go into the details of different forms of XAIP techniques, we will see how they cater to the needs of one or more of these personas (cf. Table 1).

3 The Decision Making Problem

A sequential decision making or planning problem $\Pi$ is defined in terms of a transition function $\delta_\Pi : \mathbb{A} \times \mathbb{S} \rightarrow \mathbb{S} \times \mathbb{R}$, where $\mathbb{A}$ is the set of capabilities available to the agent, $\mathbb{S}$ is the set of states it can be in, and the real number denotes the cost of making the transition. The planning algorithm $\mathcal{A}$ solves $\Pi$ subject to a desired property $\tau$ to produce a plan or policy $\pi$, i.e. $\mathcal{A} : \Pi \xrightarrow{\tau} \pi$. Here, $\tau$ may represent different properties such as soundness, optimality, and so on.

  • A plan $\pi = \langle a_1, a_2, \ldots \rangle$ transforms the current state $\mathcal{I}$ of the agent to its goal $\mathcal{G}$, i.e. $\delta_\Pi(\mathcal{I}, \pi) = \langle \mathcal{G}, c(\pi) \rangle$. The second term in the output denotes the plan cost $c(\pi) = \sum_{a \in \pi} c(a)$. The optimal plan is $\pi^* = \operatorname{argmin}_{\pi} c(\pi)$.

  • A policy $\pi : \mathbb{S} \rightarrow \mathbb{A}$ provides a mapping from any state of the agent to the desired action to be taken in that state. The optimal policy $\pi^*$ is the one that minimizes the expected cumulative cost of reaching the goal (or, equivalently, maximizes the expected cumulative reward).

While specific decision making tasks have more nuanced definitions characterizing what forms states and actions can take, how the transition function is defined, and so on, for the purposes of this survey this abstraction should be enough for the general audience to grasp the salient features of a decision making task and the relevant XAIP concepts.
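To make this abstraction concrete, the following minimal sketch (in Python, with names such as PlanningTask and evaluate_plan being purely illustrative, not taken from the survey) encodes a deterministic planning task as an initial state, a goal test, and a transition function returning the successor state along with the transition cost; checking a plan then amounts to rolling it through the transition function and accumulating cost.

```python
from dataclasses import dataclass
from typing import Callable, Hashable, Sequence, Tuple

State = Hashable
Action = Hashable

@dataclass
class PlanningTask:
    initial_state: State
    goal_test: Callable[[State], bool]
    # delta(s, a) -> (next state, transition cost), a deterministic simplification
    delta: Callable[[State, Action], Tuple[State, float]]

def evaluate_plan(task: PlanningTask, plan: Sequence[Action]) -> Tuple[bool, float]:
    """Return whether the plan reaches a goal state from the initial state, and its cost."""
    state, cost = task.initial_state, 0.0
    for action in plan:
        state, step_cost = task.delta(state, action)
        cost += step_cost
    return task.goal_test(state), cost
```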

Table 1: The many faces of XAIP, i.e. which types of explanations (algorithm-based explanations; model-based explanations, via inference reconciliation or model reconciliation) cater to which personas (end user, domain designer, algorithm designer).

3.1 The Explanation Process

The explanation process for a planning problem proceeds as follows, with a question from the explainee about the current solution $\pi$ of a given planning problem $\Pi$, and the explainer (the XAIP system) coming up with an explanation $\mathcal{E}$ for it:

  • “Why $\pi$?” or “Why not $\pi'$?”

  • Here, $\pi'$ is a foil [45] and may be either stated explicitly, implicitly, or even partially (leading to a set of foils) in the question. Examples of foils would be:

    • “Why not action $a$?” is a partial foil, where all plans $\pi'$ with action $a$ in them are the foils.

    • The original question “Why $\pi$?”, where the implicit foil is “as opposed to all other plans $\pi' \neq \pi$”.

  • An explanation $\mathcal{E}$ such that the explainee can compute and verify that either

    • $\delta_\Pi(\mathcal{I}, \pi') \neq \langle \mathcal{G}, \cdot \rangle$, i.e. the foil is not a valid solution; or

    • $\delta_\Pi(\mathcal{I}, \pi') = \langle \mathcal{G}, c(\pi') \rangle$ but $c(\pi') > c(\pi)$ or $\pi$ is otherwise preferred over $\pi'$ (the criterion for comparison may be cost, preferences, etc.).

The point of an explanation is thus to establish the property $\tau$ of the solution $\pi$ given the planning problem $\Pi$. The Q&A continues until the explainee is satisfied. The content of the explanation can vary greatly depending on the needs of the explainee (cf. Table 1). This is the topic of discussion next.
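The Q&A loop described above can be pictured with the following hedged sketch; the explainee and explainer objects and their ask / explain / receive methods are hypothetical stand-ins for whatever interface a concrete XAIP system exposes.

```python
def explanation_dialogue(problem, plan, explainee, explainer, max_rounds=10):
    """Iterate the Q&A loop until the explainee is satisfied or the rounds run out."""
    for _ in range(max_rounds):
        question = explainee.ask(problem, plan)   # may carry an explicit, implicit, or partial foil
        if question is None:                      # no more questions: explainee is satisfied
            return True
        explanation = explainer.explain(problem, plan, question)
        explainee.receive(explanation)            # explainee recomputes / verifies with the explanation
    return False
```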

3.2 Explanation Artifacts: Algorithm/Model/Plan

Clearly, from the definition of the decision making task above, there are many components at play which can contribute to an explanation of a plan. The system can explain the steps made by the algorithm $\mathcal{A}$ while solving a problem to the debugger / algorithm designer. It can also explain artifacts of the problem description $\Pi$ that led to the decision: these are model-based, algorithm-agnostic explanations, which are more useful to end users. The system can also communicate characteristics of the plan $\pi$ itself as an explanation.

It is interesting to note that this sort of distinction can be seen in the literature on explainable machine learning as well. For example, LIME [47] interfaces with the explainee at the level of outputs only, i.e. the classification choices made (corresponding to plans computed in our setting) – it is also algorithm dependent, since it reveals (albeit simplified) details of the learned model to the user. Approaches like [49], on the other hand, are purely algorithm dependent, requiring the explainee to visualize the internal representations learned by the algorithm at hand. Other works such as [14] provide algorithm-independent explanations in terms of the input data and black box learners, similar to model-based explanations in our case that use the input problem definition as the basis of an explanation and not the inference engine.

3.3 Properties of Explanations

Existing literature on explainable artificial intelligence, as well as studies of explanations in human-human interactions, surfaces recurring themes used to characterize explanations.

Social, Selective, and Contrastive Looking at how humans explain their decisions to each other can provide great insight into the desired properties of an explanation. Miller in [45] provides an insightful survey of lessons learned from the social sciences and how they can impact the informed design of explainable AI systems. He outlines three key properties for consideration: social, in being able to model the expectations of the explainee; selective, in being able to select explanations among several competing hypotheses; and contrastive, in being able to differentiate the properties of two competing hypotheses. The contrastive property in particular has received a lot of attention [29, 46] in the XAIP community.

Local versus Global Explanations Another consideration is whether an explanation is geared towards a particular decision (local), e.g. LIME [47], or towards the entire model (global), e.g. TCAV [33]. For a planning problem this distinction can manifest in many ways, most directly in whether the explanation is for a given plan or for the model in general.

Abstractions One final approach we want to highlight is the use of abstractions: this is especially useful if the model of decision making is too complex for the explainee and a simplified model can provide more useful feedback [47].

4 Algorithm-based Explanations

We first look at attempts to explain the underlying planning algorithm. This is quite useful for debugging: e.g. [42] provides an interactive visualization of the search tree for a given problem. Another case is where explanation methods are tailored to specific algorithms. Such explanatory methods have become quite common in explaining decisions generated by deep reinforcement learning. For example, the authors in [22] look at the possibility of generating perturbation-based saliency maps for explaining a policy learned by the Asynchronous Advantage Actor-Critic algorithm, while the authors in [35] look at learning finite-state representations (Moore machines) that can represent RL policies learned by RNNs.
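As a rough illustration of the perturbation-based idea in [22] (which the original work implements with blur-based perturbations of Atari frames), the sketch below occludes one patch of the observation at a time and scores it by how much the policy's output changes; the policy callable, the zero-masking, and the patch size are simplifying assumptions, not the original method.

```python
import numpy as np

def saliency_map(policy, observation, patch=4):
    """Score each patch of the observation by how much occluding it changes the policy output."""
    base = np.asarray(policy(observation))             # e.g. action probabilities
    h, w = observation.shape[:2]                       # assumes h and w are divisible by `patch`
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = observation.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0  # crude occlusion instead of the blur in [22]
            saliency[i // patch, j // patch] = np.abs(np.asarray(policy(perturbed)) - base).sum()
    return saliency
```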

5 Model-based Explanations

The majority of works in XAIP look at algorithm-agnostic methods for generating explanations, since properties of a solution can be evaluated independently of the method used to come up with it, given the model of the decision making task. As opposed to debugging settings where the algorithm has to be investigated in more detail, end users typically care more about model-based, algorithm-agnostic explanations, so that services [5] can be built around them. Approaches in this category deal with two considerations: 1) the inferential capability of the user; and/or 2) the mental model of the user. When both of these are aligned with the planner, there is no need to explain.

5.1 Inference Reconciliation

Users have considerably less computational ability (let's say $\mathcal{A}_H$) than the planner $\mathcal{A}$. In this situation:

  • $\mathcal{A} : \Pi \xrightarrow{\tau} \pi$ but $\mathcal{A}_H : \Pi \not\xrightarrow{\tau} \pi$

An explanation here is supposed to reconcile the inferential power of the user and the planner, i.e. produce $\mathcal{E}$ such that $\mathcal{A}_H : \Pi + \mathcal{E} \xrightarrow{\tau} \pi$.

In order to help the inference process of the user, there are usually two broad approaches (not necessarily exclusive): (a) allow the user to raise specific questions about a plan and engage in explanatory dialogue; and (b) leverage abstraction techniques to allow the user to better understand the plan.

Investigatory Dialogue With a few exceptions, most of the methods that engage in explanatory dialogue look at queries contrasting the given plan with a foil (implicit or explicit).

  • “Why is this action $a$ in this plan, i.e. why $a \in \pi$?”

One of the most well-known approaches [51, 2] for answering this uses a chain of causal links originating at $a$ that can be traced to the goal. The explanation is thus a subset of the model that effectively explains the role of the action by pointing out the preconditions of successive actions that are being supported along this chain. While the original paper [51] does not specifically talk about any selection criterion for the explanation content, recent work [6] has shown how this information can be minimized.
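A toy rendition of such a causal-link chain is sketched below; it assumes that plan steps expose name, preconditions, and effects as sets of facts, which is a simplification of the formalism used in [51, 2].

```python
def causal_chain(plan, index, goal_facts):
    """Trace which later preconditions (and goal facts) are supported, starting from plan[index]."""
    supported = set(plan[index].effects)        # facts traceable back to the queried action
    chain = []
    for later in plan[index + 1:]:
        used = supported & set(later.preconditions)
        if used:                                # `later` consumes a fact on the chain
            chain.append((later.name, used))
            supported |= set(later.effects)     # the chain propagates through `later`
    chain.append(("goal", supported & set(goal_facts)))
    return chain                                # the model subset surfaced as the explanation
```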

  • “Why not this other plan $\pi'$?”

This is the case where a contrastive foil is explicitly considered. The authors in [5, 36] assume that the foils specified by the user can be best understood as constraints on the plans they are expecting: e.g. a certain action or action sequence to be included or excluded. The explanation is then to identify an exemplary plan that satisfies those constraints, thus demonstrating how the computed plan is better. The authors in [16], on the other hand, expect the user queries to be expressed in terms of plan properties, i.e. user-defined binary properties that apply to all valid plans for the decision problem. The explanation then takes the form of other plan properties that are entailed by the ones in the query. This is computed using oversubscription planning with plan properties cast as goals.
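The recipe of [5, 36] can be caricatured as follows, under the assumption that the foil arrives as a set of constraints and that an off-the-shelf planner (the solve callable) and a cost function are available; with_constraints is a hypothetical compilation step, not an API from those works.

```python
def contrastive_explanation(problem, current_plan, foil_constraints, solve, cost):
    """Plan under the foil's constraints and contrast the result with the current plan."""
    constrained = problem.with_constraints(foil_constraints)   # hypothetical compilation step
    foil_plan = solve(constrained)
    if foil_plan is None:
        return "No valid plan satisfies the foil."
    return {"foil_plan": foil_plan,                            # the exemplary constrained plan
            "cost_gap": cost(foil_plan) - cost(current_plan)}  # >= 0 if the computed plan is better
```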

  • “Why is this policy optimal, i.e. why $\pi = \pi^*$?”

Such questions are pursued particularly in the context of MDPs: the authors in [32] phrase explanations in terms of the frequency with which the current action would lead the agent to high-value states, while the authors in [15] looked at such questions in a specific application context with explanations that show how the action allows for the execution of more desirable actions later. The latter additionally employs a case-based explanation technique to provide historical precedents about the results of the actions. The authors in [30] answer questions about an action $a$ being preferred over an action $b$ by illustrating how the two actions affect the total value in terms of various human-understandable components of the reward function.

Among these works, [32, 15, 30, 16] aim for minimal explanations as a means of selection.
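For instance, the reward-decomposition style of answer in [30] can be sketched as a per-component comparison of Q-values between the chosen action and the foil; the q_components mapping (component name to a Q-table) is an illustrative assumption rather than the original implementation.

```python
def why_a_not_b(q_components, state, action_a, action_b):
    """Per-component Q-value differences between the preferred action and the foil."""
    return {component: q[state][action_a] - q[state][action_b]
            for component, q in q_components.items()}   # positive entries favour action_a
```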

  • “Why is $\Pi$ not solvable?”

There are several ways to surface to the user the constraints in the problem that are leading to unsolvability.

Excuse One approach would be to transform the given problem into a new one so that the updated problem is solvable, and provide the model fix as an explanation of why the original problem was unsolvable.

  • Find $\hat{\Pi}$ so that $\mathcal{A} : \hat{\Pi} \xrightarrow{\tau} \pi$ for some plan $\pi$.

These are called excuses [20]: here the authors identify a set of static initial facts to update, by framing the search for them as a planning problem. It is possible to impose selection strategies in this framework by associating costs with the various excuses.
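A brute-force sketch of this idea is shown below; the original work compiles the search for excuses into a planning problem, whereas here candidate fixes to static initial facts are enumerated directly, and with_initial_facts is a hypothetical model-update operation.

```python
from itertools import combinations

def find_excuse(task, candidate_facts, is_solvable, cost, max_size=2):
    """Return the cheapest small change to static initial facts that makes the task solvable."""
    best = None
    for k in range(1, max_size + 1):
        for fix in combinations(candidate_facts, k):
            patched = task.with_initial_facts(fix)      # hypothetical model-update operation
            if is_solvable(patched) and (best is None or cost(fix) < cost(best)):
                best = fix
    return best                                         # the excuse, or None if no small fix exists
```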

Abstraction An alternative transformation would be to find a simpler version of the given problem which is still unsolvable and highlight the issues there.

  • Find a simpler (abstract) version $\Pi'$ of $\Pi$ so that $\mathcal{A} : \Pi' \not\xrightarrow{\tau} \pi$ for any $\pi$.

These are called model abstractions and have been used in [58, 60] to reduce the computational burden on the user. The approach in [60] also leverages temporal abstractions in the form of intermediate subgoals to illustrate why possible foils fail. The use of abstractions is, of course, not confined to explanations of unsolvability: recent work [41] used abstract models defined over simpler user-defined features to generate explanations for reinforcement learning problems in terms of action influence. The methods discussed in [51] also allow for the generation of causal link explanations for planning settings that involve abstract tasks, such as in HTN planning [19]. The use of plan properties in [16] and of subsets of state factors in [32] are more examples of abstraction schemes that simplify the explanation process.
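A simple greedy sketch of abstraction-based explanations of unsolvability follows; project and is_solvable are placeholders for a projection operator and a planner call, and the greedy loop is an illustration rather than the actual procedure of [58, 60].

```python
def minimal_unsolvable_abstraction(task, variables, project, is_solvable):
    """Greedily project away state variables while the abstract task stays unsolvable."""
    keep = set(variables)
    for var in variables:
        candidate = project(task, keep - {var})   # abstraction over the remaining variables
        if not is_solvable(candidate):            # the failure is still visible without `var`
            keep.discard(var)
    return project(task, keep)                    # a much smaller model exhibiting the failure
```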

Certificates Finally, the authors in [17] look at a different way to approach the unsolvability issue by creating inductive certificates for the initial states that capture all reachable states. They have also investigated axiomatic systems that can generate proofs of task unsolvability [18]. Such certificates (represented, for example, as a binary decision diagram) can be quite complicated and are not meant to be consumed by end users, but they provide useful debugging information to domain designers, algorithm designers, and AI assistants.

5.2 Model Reconciliation

One of the recurring themes in human-machine interaction is the "mental model" of the user [4]: users of software systems often come with their own preconceived notions and expectations of the system that may or may not be borne out by the ground truth. For a planning system, this means that even if it is making the best plans it can, the human in the loop is evaluating those plans with a different model, i.e. their mental model of the problem, and may not agree about their quality. Differences in models between the user and the machine appear in many settings, such as in drifting world models over long-term interactions [3], in search and rescue settings where internal and external agents have different views into the world [11], in intelligent tutoring systems between the student and the instructor [23], in smart rooms with distributed sensors [6], and so on. This model difference, along with the inferential limitations of the human, is thus the root cause of the need for explanations for the end user persona.

In [12], the seminal work on this topic, the authors posit that explanations can no longer be a "soliloquy" in the agent's own model but must instead consider and explain in terms of these model differences. The process of explanation is then one of reconciliation of the system's model and the human's mental model so that both can agree on the property of the decision being made. Thus, if $\mathcal{M}^R$ is the model of the planner and $\mathcal{M}^H$ is the mental model of the user, the model reconciliation process requires that:

  • Given: a plan $\pi$ such that $\mathcal{A} : \mathcal{M}^R \xrightarrow{\tau} \pi$;

  • Find: an explanation $\mathcal{E}$ such that $\mathcal{A} : \mathcal{M}^H + \mathcal{E} \xrightarrow{\tau} \pi$.

In the original work, the mental model was assumed to be known, and reconciliation was achieved through a search in the space of models induced by the difference between the system model and the mental model, until a model is found where $\tau$ (e.g. optimality of the given plan) holds. The difference between this intermediate model and the mental model is provided as the explanation.
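The sketch below captures the spirit of that search in a brute-force way: enumerate subsets of model differences in increasing size and return the first one under which the desired property holds in the updated mental model. The diff, apply_updates, and holds callables are hypothetical placeholders; the original work [12] uses heuristic search in model space rather than plain enumeration.

```python
from itertools import combinations

def model_reconciliation(agent_model, mental_model, plan, diff, apply_updates, holds):
    """Find a smallest set of model updates under which the plan satisfies the desired property."""
    deltas = diff(agent_model, mental_model)     # candidate model differences
    for k in range(len(deltas) + 1):             # smallest explanations first
        for subset in combinations(deltas, k):
            candidate = apply_updates(mental_model, subset)
            if holds(candidate, plan):           # e.g. the plan is optimal in the updated model
                return subset                    # the explanation E
    return None
```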

Figure 1: Recent trends in XAIP illustrate a burst in model reconciliation approaches, acknowledging the need to account for the mental model of the user in the explanation process. Also noticeable is an encouraging uptick in the willingness of the community to engage in user studies. The size of a circle is proportional to the number of papers in a year, the smallest being 1. Note that 2020 is still in progress.
Explanation Type | Social | Contrastive | Selective | Local | Global | Abstraction | User Study
Algorithm-based explanations | [30] [30, 22, 35] [30, 22, 42, 35] [35] [22, 42]
Model-based explanations: Inference Reconciliation | [58, 60, 16, 55, 41, 30] | [51, 2, 58, 60, 16, 55, 20, 32, 15, 17, 41, 30, 5] | [22, 51, 2, 58, 60, 16, 55, 20, 32, 15, 41, 30] | [51, 22, 2, 58, 60, 16, 55, 20, 32, 15, 17, 41, 30, 5] | [60, 16, 55, 20] | [58, 60, 16, 55, 32, 41] | [2, 22, 60, 32, 15, 41]
Model-based explanations: Model Reconciliation | [12, 58, 60, 55, 57, 56, 11, 54, 6, 62, 8] | [12, 58, 60, 55, 57, 56, 11, 54, 6, 62, 8] | [12, 58, 60, 55, 57, 56, 11, 54, 6, 62, 8] | [12, 58, 57, 56, 11, 54, 6] | [12, 60, 55] | [58, 60, 55] | [10, 60, 56, 11, 8]
Plan-based explanations | [38, 6, 48] [64, 35, 59, 61, 28, 38, 34, 6, 48] [64, 35, 59, 61, 28, 38, 34, 6, 48, 31] [64, 35, 59, 61, 28, 48] [59, 38]
Table 2: Summary of results across the properties of explanations discussed in Section 3.3.

Social Such explanations are inherently social in being able to explicitly capture the effect of expectations in the explanation process. In user studies conducted in [10], it was shown that participants were indeed able to identify the correct plan based on an explanation. Note that, in the model reconciliation framework, the mental model is just a version of the decision making problem at hand which the agent believes the user is operating under. This may be a graph, a planning problem, or even a logic program [62]. The notion of model reconciliation is agnostic to the actual representation.

Contrastive The contrastive nature of these explanations comes from how the model update preserves the property $\tau$ of the given plan as opposed to the foil, which may be implicitly [12] or explicitly [58] provided. This is also closely tied to the selection process for those model updates.

Selective In [12], the explanation content was selected based on the minimality of the model update, i.e. the smallest $\mathcal{E}$ such that $\mathcal{A} : \mathcal{M}^H + \mathcal{E} \xrightarrow{\tau} \pi$. The minimal explanation is not unique, and it was shown in [65] how users attribute different value to theoretically equivalent model updates, thereby motivating further research on how to select among several competing explanations for the user.

Model Reconciliation Expansion Pack

The last couple of years have seen extensive work on this topic, primarily focused on relaxing the assumptions made about the mental model in the original model reconciliation work and on expanding the scope of problems addressed by it. We expound on a few of these directions below.

Model Uncertainty One of the primary directions of work has been considering uncertainty about the mental model. In [57], the authors show how to reconcile with a set of possible mental models and also demonstrate how the same framework can be used to explain to multiple users in the loop. In [58], on the other hand, the authors estimate the mental model from the provided foil.

Inference Reconciliation The original work on model reconciliation assumed a user with inferential capability identical (optimal or sound, as the case may be) to the planner's. However, as we saw previously, much of XAIP has been about dealing with the computational limits of users. Model reconciliation approaches have started adapting to this [58, 60] by identifying, from the given foil, the simplest abstraction of their model to explain in. [60] provides further inferential assistance in the form of unmet subgoals.

Unsolvability An important aspect of human-planner interaction, where inferential limitations play an outsized part, is the case of unsolvability. An interesting instance of this was recently explored in [55], where the domain acquisition problem is cast into the model reconciliation framework, reusing [60] to help the domain designer persona when they cannot figure out why their domain has no solutions or why the solutions do not match their expectations.

Model-free Model Reconciliation So far, model reconciliation has considered the mental model explicitly. This may not be necessary: at the end of the day, the explanation includes information regarding the agent model and what it does and does not include, and the mental model only helps the system filter what new information is relevant to the user. An alternative is thus to predict how model information can affect the expectation of the user [56] by learning a labeling model that takes a state-action-state tuple and a subset of information about the system's model, and predicts whether the user, after receiving that information, would find the tuple explicable. The learned model then drives the search to determine what information should be exposed to the user.
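A very rough sketch of this selection loop is given below; predicts_explicable stands in for the learned labeling model, and the greedy reveal order is a simplification of the search described in [56].

```python
def select_information(plan_steps, model_facts, predicts_explicable):
    """Greedily reveal model information until every plan step is predicted to be explicable."""
    revealed = []
    for fact in model_facts:
        if all(predicts_explicable(step, revealed) for step in plan_steps):
            break                      # nothing left to justify
        revealed.append(fact)
    return revealed                    # the information exposed to the user
```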

Lies and Deception A consequence of going "model-free" is that the explanations provided may no longer be true but rather whatever users find satisfying. In the original work on model reconciliation, the explanation $\mathcal{E}$ was always constrained to be consistent with the ground truth model $\mathcal{M}^R$. The authors in [8, 7] have shown how this constraint can be relaxed to hijack the model reconciliation process into producing false explanations, opening up intriguing avenues of further research into the ethics of mental modeling in planning.

6 Plan-based Explanations

Finally, we look at the role of plans in explanatory dialogue. Works like [53, 43] have explored explanans in the form of a plan that explains a set of observations. Beyond human-AI interaction, the qualitative structure of plans has also been used for problems like plan-reuse and validation [31].

Plan / Policy Summarization With regards to the role of plans in explanatory dialogue, one area we want to highlight in greater detail is that of plan or policy summarization. When the system is generating solutions over long time horizons and large state spaces, presenting the plan or policy to the user becomes difficult. One way to approach this issue is through verbalization of plans: e.g. narrating the paths taken by a robot [48] along different dimensions of interest such as level of abstraction, specificity, and locality. Recent work has also attempted domain-independent methods for plan summarization [6] by using the model reconciliation process with an empty mental model to compute the minimal subset of causal links required to justify each action in a plan.

Another possibility is to use abstraction schemes to simplify the decision structure and allow the user to drill down as required. [61] looks at the possibility of employing state abstractions that project out low-importance features. [59], on the other hand, generates temporal abstractions for a given policy by automatically extracting subgoals. [64] takes advantage of both schemes by mapping policies learned through deep Q-learning to a policy for a semi-aggregated MDP that employs both user-specified state aggregation features and temporally extended actions, in the form of skills automatically extracted from the learned policy.
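As a toy illustration of summarization via state aggregation (in the spirit of the feature-based schemes above), the sketch below groups states by user-chosen features and reports the policy's most frequent action per abstract state; the features callable and the majority-vote summary are illustrative assumptions rather than the method of [61] or [64].

```python
from collections import Counter, defaultdict

def summarize_policy(policy, states, features):
    """Report the most frequent action of the policy within each user-defined abstract state."""
    buckets = defaultdict(Counter)
    for s in states:
        buckets[features(s)][policy(s)] += 1   # features(s) must return something hashable
    return {abstract: counts.most_common(1)[0][0]
            for abstract, counts in buckets.items()}
```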

Another possibility is to allow the user to ask questions about generated policies: e.g. “Under what conditions is action $a$ performed?” This was investigated in [28], where both queries and answers were expressed in terms of user-specified features. [34] looked at cases where the user is not just interested in learning details of the model underlying the current decisions but rather in how it differs from possible alternatives, by using LTL formulas that are true in a target set of plan traces but are not satisfied by a specified alternate set.

A different approach is taken by [38], where the authors propose to present users with partial plans whose completions they can figure out based on their knowledge of the task, using various psychologically feasible computational models of people (e.g. models inspired by inverse reinforcement learning and imitation learning).

7 Emerging Landscape

This survey provides an overview of the many flavors of explainable planning and decision making and of current trends in the field. While the works explored here mostly concern after-the-fact explanations, i.e. explanations generated after a plan has been computed (or no plan has been found) for a given planning problem, there is recent work [11] demonstrating how the possibility of having to explain its decisions can be folded into an agent's reasoning stage itself. This is a well-known phenomenon in human behavior: we are known to make better decisions when we are asked to explain them [44]. By adopting a similar philosophy, we can potentially achieve better, more human-aware behavior in XAIP-enabled agents as well.

Early attempts at this, employing search in the space of models [11], proved computationally prohibitive. However, recent work [54] has shown that achieving such behavior is computationally no harder than its classical planning counterpart! Furthermore, recognizing that plans are not made in a vacuum but often in the context of interactions with end users can lead to a more efficient planning process with explainable components than without, for example in collaborative planning scenarios [24] or in anytime planners that can preserve high-level constraints in partial plans as they plan along [21]. As the XAIP community comes to terms with its own accuracy versus efficiency trade-offs, parallel to similar arguments in the XAI community at large, a whole new world of possibilities opens up in imbuing established planning approaches with the latest and best XAIP components.

Footnotes

  1. https://kcl-planning.github.io/XAIP-Workshops/
  2. https://sites.google.com/view/xai2019
  3. https://extraamas.ehealth.hevs.ch

References

  1. S. Anjomshoae, A. Najjar, D. Calvaresi and K. Främling (2019) Explainable Agents and Robots: Results from a Systematic Literature Review. In AAMAS, Cited by: §1.
  2. P. Bercher, S. Biundo, T. Geier, T. Hoernle, F. Nothdurft, F. Richter and B. Schattenberg (2014) Plan, Repair, Execute, Explain – How Planning Helps to Assemble Your Home Theater. In ICAPS, Cited by: §5.1, Table 2.
  3. D. Bryce, J. Benton and M. W. Boldt (2016) Maintaining Evolving Domain Models. In IJCAI, Cited by: §5.2.
  4. J. M. Carroll and J. R. Olson (1988) Mental models in human-computer interaction. Handbook of Human-Computer Interaction. Cited by: §5.2.
  5. M. Cashmore, A. Collins, B. Krarup, S. Krivic, D. Magazzeni and D. Smith (2019) Towards Explainable AI Planning as a Service. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §5.1, Table 2, §5.
  6. T. Chakraborti, K. P. Fadnis, K. Talamadupula, M. Dholakia, B. Srivastava, J. O. Kephart and R. K. E. Bellamy (2019) Planning and Visualization for a Smart Meeting Room Assistant – A Case Study in the Cognitive Environments Laboratory at IBM T.J. Watson Research Center, Yorktown. AI Communication. Cited by: §5.1, §5.2, Table 2, §6.
  7. T. Chakraborti and S. Kambhampati (2019) (How) Can AI Bots Lie?. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §5.2.1.
  8. T. Chakraborti and S. Kambhampati (2019) (When) Can AI Bots Lie?. In AIES/AAAI, Cited by: §5.2.1, Table 2.
  9. T. Chakraborti, A. Kulkarni, S. Sreedharan, D. E. Smith and S. Kambhampati (2019) Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior. In ICAPS, Cited by: §1.
  10. T. Chakraborti, S. Sreedharan, S. Grover and S. Kambhampati (2019) Plan Explanations as Model Reconciliation – An Empirical Study. In HRI, Cited by: §5.2, Table 2.
  11. T. Chakraborti, S. Sreedharan and S. Kambhampati (2019) Balancing Explanations and Explicability in Human-Aware Planning. In IJCAI, Cited by: item -, §5.2, Table 2, §7, §7.
  12. T. Chakraborti, S. Sreedharan, Y. Zhang and S. Kambhampati (2017) Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. In IJCAI, Cited by: §5.2, §5.2, §5.2, Table 2.
  13. D. Dannenhauer, M. W. Floyd, D. Magazzeni and D. W. Aha (2018) Explaining Rebel Behavior in Goal Reasoning Agents. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §1.
  14. A. Datta, S. Sen and Y. Zick (2016) Algorithmic transparency via quantitative input influence: theory and experiments with learning systems. In IEEE Symposium on Security and Privacy (SP), Cited by: §3.2.
  15. T. Dodson, N. Mattei, J. T. Guerin and J. Goldsmith (2013) An English-Language Argumentation Interface for Explanation Generation with Markov Decision Processes in the Domain of Academic Advising. TiiS. Cited by: §5.1, §5.1, Table 2.
  16. R. Eifler, M. Cashmore, J. Hoffmann, D. Magazzeni and M. Steinmetz (2020) A New Approach to Plan-Space Explanation: Analyzing Plan-Property Dependencies in Oversubscription Planning. In AAAI, Cited by: §5.1, §5.1, §5.1, Table 2.
  17. S. Eriksson, G. Röger and M. Helmert (2017) Unsolvability Certificates for Classical Planning. In ICAPS, Cited by: §5.1, Table 2.
  18. S. Eriksson, G. Röger and M. Helmert (2018) A Proof System for Unsolvable Planning Tasks. In ICAPS, Cited by: §5.1.
  19. K. Erol, J. Hendler and D. S. Nau (1994) HTN planning: Complexity and Expressivity. In AAAI, Cited by: §5.1.
  20. M. Göbelbecker, T. Keller, P. Eyerich, M. Brenner and B. Nebel (2010) Coming Up with Good Excuses: What to Do When No Plan Can be Found. In ICAPS, Cited by: §5.1, Table 2.
  21. A. Grea, L. Matignon and S. Aknine (2018) How Explainable Plans Can Make Planning Faster. In IJCAI Workshop on Explainable AI (XAI), Cited by: §7.
  22. S. Greydanus, A. Koul, J. Dodge and A. Fern (2018) Visualizing and Understanding Atari Agents. In ICML, Cited by: §4, Table 2.
  23. S. Grover, T. Chakraborti and S. Kambhampati (2018) What Can Automated Planning do for Intelligent Tutoring Systems?. In ICAPS Scheduling and Planning Applications Workshop (SPARK), Cited by: §5.2.
  24. S. Grover, S. Sengupta, T. Chakraborti, A. P. Mishra and S. Kambhampati (2020) RADAR: Automated Task Planning for Proactive Decision Support. HCI Journal. Cited by: item -, §7.
  25. D. Gunning and D. W. Aha (2019) DARPA’s Explainable Artificial Intelligence Program. AI Magazine. Cited by: §1, §1.
  26. D. Gunning (2017) Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA). Cited by: §1, §1.
  27. M. Hanheide, M. Göbelbecker, G. S. Horn, A. Pronobis, K. Sjöö, A. Aydemir, P. Jensfelt, C. Gretton, R. Dearden and M. Janicek (2017) Robot Task Planning and Explanation in Open and Uncertain Worlds. Artificial Intelligence. Cited by: §1.
  28. B. Hayes and J. A. Shah (2017) Improving Robot Controller Transparency Through Autonomous Policy Explanation. In HRI, Cited by: Table 2, §6.
  29. J. Hoffmann and D. Magazzeni (2019) Explainable AI Planning (XAIP): Overview and the Case of Contrastive Explanation. In Reasoning Web. Explainable Artificial Intelligence, Note: Extended Abstract Cited by: §3.3.
  30. Z. Juozapaitis, A. Koul, A. Fern, M. Erwig and F. Doshi-Velez (2019) Explainable Reinforcement Learning via Reward Decomposition. In IJCAI Workshop on Explainable AI (XAI), Cited by: §5.1, §5.1, Table 2.
  31. S. Kambhampati (1990) A Classification of Plan Modification Strategies Based on Coverage and Information Requirements. In AAAI Spring Symposium on Case Based Reasoning, Cited by: Table 2, §6.
  32. O. Z. Khan, P. Poupart and J. P. Black (2009) Minimal Sufficient Explanations for Factored Markov Decision Processes. In ICAPS, Cited by: §5.1, §5.1, §5.1, Table 2.
  33. B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas and R. Sayres (2018) Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In ICML, Cited by: §3.3.
  34. J. Kim, C. Muise, A. Shah, S. Agarwal and J. Shah (2019) Bayesian Inference of Linear Temporal Logic Specifications for Contrastive Explanations. In IJCAI, Cited by: Table 2, §6.
  35. A. Koul, S. Greydanus and A. Fern (2018) Learning Finite State Representations of Recurrent Policy Networks. In ICLR, Cited by: §4, Table 2.
  36. B. Krarup, M. Cashmore, D. Magazzeni and T. Miller (2019) Model-Based Contrastive Explanations for Explainable Planning. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §5.1.
  37. A. Kulkarni, S. Sreedharan, S. Keren, T. Chakraborti, D. E. Smith and S. Kambhampati (2019) Design for Interpretability. ICAPS Workshop on Explainable AI Planning (XAIP). Cited by: §1.
  38. I. Lage, D. Lifschitz, F. Doshi-Velez and O. Amir (2019) Exploring computational user models for agent policy summarization. In IJCAI, Cited by: Table 2, §6.
  39. P. Langley, B. Meadows, M. Sridharan and D. Choi (2017) Explainable Agency for Intelligent Autonomous Systems. In IAAI/AAAI, Cited by: §1.
  40. P. Langley (2019) Varieties of explainable agency. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §2, §2.
  41. P. Madumal, T. Miller, L. Sonenberg and F. Vetere (2020) Explainable Reinforcement Learning Through a Causal Lens. In AAAI, Cited by: §5.1, Table 2.
  42. M. C. Magnaguagno, R. F. Pereira, M. D. Móre and F. Meneguzzi (2017) Web Planner: A Tool to Develop Classical Planning Domains and Visualize Heuristic State-Space Search. In ICAPS Workshop on User Interfaces in Scheduling and Planning (UISP), Cited by: §4, Table 2.
  43. B. L. Meadows, P. Langley and M. J. Emery (2013) Seeing Beyond Shadows: Incremental Abductive Reasoning for Plan Understanding. In AAAI Workshop on Plan, Activity, and Intent Recognition (PAIR), Cited by: §6.
  44. H. Mercier and D. Sperber (2011) Why do Humans Reason? Arguments for an Argumentative Theory. Behavioral and Brain Sciences. Cited by: §7.
  45. T. Miller (2018) Contrastive Explanation: A Structural-Model Approach. arXiv:1811.03163. Cited by: §3.1, §3.3.
  46. T. Miller (2019) Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence. Cited by: §3.3.
  47. M. T. Ribeiro, S. Singh and C. Guestrin (2016) “Why Should I Trust You?” Explaining the Predictions of Any Classifier. In KDD, Cited by: §3.2, §3.3, §3.3.
  48. S. Rosenthal, S. P. Selvaraj and M. M. Veloso (2016) Verbalization: Narration of Autonomous Robot Experience. In IJCAI, Cited by: Table 2, §6.
  49. W. Samek, T. Wiegand and K. Müller (2017) Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. arXiv:1708.08296. Cited by: §3.2.
  50. C. Sammut (2018) What was I planning to do?. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §1.
  51. B. Seegebarth, F. Müller, B. Schattenberg and S. Biundo (2012) Making Hybrid Plans More Clear to Human Users – A Formal Approach for Generating Sound Explanations. In ICAPS, Cited by: §5.1, §5.1, Table 2.
  52. D. E. Smith (2004) Choosing Objectives in Over-Subscription Planning. In ICAPS, Cited by: §1.
  53. S. Sohrabi, J. A. Baier and S. A. McIlraith (2011) Preferred Explanations: Theory and Generation via Planning. In AAAI, Cited by: §6.
  54. S. Sreedharan, T. Chakraborti, C. Muise and S. Kambhampati (2020) Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning. In AAAI, Cited by: Table 2, §7.
  55. S. Sreedharan, T. Chakraborti, C. Muise, Y. Khazaeni and S. Kambhampati (2020) D3WA+: A Case Study of XAIP in a Model Acquisition Task. In ICAPS, Cited by: item -, §2, §5.2.1, Table 2.
  56. S. Sreedharan, A. O. Hernandez, A. P. Mishra and S. Kambhampati (2019) Model-Free Model Reconciliation. In IJCAI, Cited by: §5.2.1, Table 2.
  57. S. Sreedharan and S. Kambhampati (2018) Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation. In ICAPS, Cited by: §5.2.1, Table 2.
  58. S. Sreedharan, S. Srivastava and S. Kambhampati (2018) Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations. In IJCAI, Cited by: §5.1, §5.2, §5.2.1, §5.2.1, Table 2.
  59. S. Sreedharan, S. Srivastava and S. Kambhampati (2020) TLdR: Policy Summarization for Factored SSP Problems Using Temporal Abstractions. In ICAPS, Cited by: Table 2, §6.
  60. S. Sreedharan, S. Srivastava, D. Smith and S. Kambhampati (2019) Why Can’t You Do That HAL? Explaining Unsolvability of Planning Tasks. In IJCAI, Cited by: §5.1, §5.2.1, §5.2.1, Table 2.
  61. N. Topin and M. Veloso (2019) Generation of Policy-Level Explanations for Reinforcement Learning. In AAAI, Cited by: Table 2, §6.
  62. S. Vasileiou, W. Yeoh and T. C. Son (2019) A Preliminary Logic-based Approach for Explanation Generation. In ICAPS Workshop on Explainable AI Planning (XAIP), Cited by: §5.2, Table 2.
  63. S. Wachter, B. Mittelstadt and C. Russell (2017) Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology. Cited by: §1.
  64. T. Zahavy, N. Ben-Zrihem and S. Mannor (2016) Graying the Black Box: Understanding DQNs. In ICML, Cited by: Table 2, §6.
  65. Z. Zahedi, A. Olmo, T. Chakraborti, S. Sreedharan and S. Kambhampati (2019) Towards Understanding User Preferences for Explanation Types in Explanation as Model Reconciliation. In HRI, Note: Late Breaking Report Cited by: §5.2.
  66. Y. Zhang, S. Sreedharan, A. Kulkarni, T. Chakraborti, H. H. Zhuo and S. Kambhampati (2017) Plan Explicability and Predictability for Robot Task Planning. In ICRA, Cited by: §1.
  67. Y. Zhou and D. Danks (2020) Different “Intelligibility” for Different Folks. In AIES/AAAI, Cited by: §2.