Partial-Order, Partially-Seen Observations of Fluents or Actions for Plan Recognition as Planning


Jennifer M. Nelson1    Rogelio E. Cardona-Rivera1, 2
1School of Computing, 2Entertainment Arts and Engineering Program
University of Utah
Salt Lake City, UT 84112 USA
jennifer.m.nelson@utah.edu, rogelio@cs.utah.edu
Abstract

This work aims to make plan recognition as planning more ready for real-world scenarios by adapting previous compilations to work with partial-order, partially-seen observations of both fluents and actions. We first redefine what observations can be and what it means to satisfy each kind. We then provide a compilation from plan recognition problem to classical planning problem, similar to the original work by Ramírez and Geffner (2009), but accommodating these more complex observation types. This compilation can be adapted to other planning-based plan recognition techniques. Lastly, we evaluate this method against an “ignore complexity” strategy that uses the original method by Ramírez and Geffner (2009). Our experimental results suggest that, while slower, our method is equally or more accurate than the baseline; our technique sometimes significantly reduces the size of the solution to the plan recognition problem, i.e., the size of the optimal goal set. We discuss these findings in the context of plan recognition problem difficulty and present an avenue for future work.

1 Introduction

Plan recognition is the problem of identifying the plans and goals of an agent, given some observations of their behavior(s) [sukthankar2014plan]. Plan recognition has applications wherever it’s useful for a system to anticipate an agent’s actions or desires. This variety of applications includes robot-human coordination [talamadupula2014coordination], human-computer collaboration [lesh1999using], assisted cognition [pentney2006sensor], network monitoring [sohrabi2013hypothesis], interactive narratives [cardona2015symbolic], and language recognition [carberry1990plan, zukerman2001natural].

Ramírez and Geffner (2009) realized that plan recognition problems were very similar to classical planning problems, and created a formulation to compile recognition problems into planning problems ready for off-the-shelf planning algorithms. Previously, plan recognition relied on specialized algorithms and handcrafted plan libraries. Rather than rely on a library of possible plan-goal pairs, their formulation relies on a set of possible goals and a domain theory describing possible actions. It assumes that any plan which reaches a possible goal at optimal cost while also “explaining” all observations (in order) is part of the optimal solution set to a recognition problem.

In addition to defining an optimal solution set, Ramírez and Geffner (2009) also relaxed their own optimality assumption to allow suboptimal approximate solutions computed with faster algorithms. This also allowed solutions to “skip” some observations if necessary. Ramírez and Geffner (2010) also relaxed the optimality assumption, such that goals whose optimal plans differed significantly from the observations were considered less likely. Sohrabi, Riabov, and Udrea (2016) further relaxed the optimality assumption, admitting that observation sequences may be non-optimal, noisy, or missing segments; that work assumed observations of single fluents.

The methods above all assume total-order, fully-specified observations, though real-world applications may be more complex. One might observe artifacts of past actions, but not know the order in which those artifacts appeared. One might see the actor pick something up, but not know whether it was the key or the coin (a partially-specified observation). One might later observe that the key is missing from that spot (a fluent observation). This is important information if the agent’s goal is behind a locked door, but current methods cannot use it. Our work provides methods to utilize this information. In this paper we modify the original compilation [ramirezGeffner09], but our definitions for observations and our use of ordering fluents can be adapted to any of the methods mentioned above. We focus only on the “optimal” set of answers for complex observations, leaving relaxations to future work.

2 Motivating Example

Figure 1: DetectiveBot’s observations, and unconstrained plans for the possible motivations

We illustrate our method with the following scenario: DetectiveBot is trying to solve a breaking-and-entering case at a museum. Cameras record the culprit breaking into the museum office, rifling through the manager’s top drawer, pocketing something, then sprinting into an unfilmed backroom. DetectiveBot inspects the backroom: it contains a single window (opened), a chest (unlocked and emptied), and a stairwell towards an exit. DetectiveBot wants to figure out the culprit’s motives. Were they stealing cash from the manager’s drawer? Were they stealing the contents of the chest? Or were they destroying the contents of the chest?

DetectiveBot models this situation. It knows the culprit took either cash or a key to the chest from the drawer, then entered the back room. DetectiveBot does not know what order things happened in the backroom, but it knows that by the end the window was opened, the chest was unlocked and emptied, and the culprit was gone. First, DetectiveBot computes plans for what the culprit would have done for each of their three possible motives, unconstrained by DetectiveBot’s observations (Figure 1). Then it computes plans for each of the three possible motives, such that each plan also “sees” the observations. DetectiveBot compares the unconstrained plans to their constrained counterparts, and discovers that only one pair has identical costs: destroying the contents.

3 The Goal Recognition Problem with Complex Observation Constraints

Planning Background

In this paper, we rely on the formulation of plan recognition as classical planning. Classical planning is a model of problem solving wherein agent actions are fully observable and deterministic. Classical problems are typically represented in the STRIPS formalism [fikes1971strips]; a STRIPS planning problem is a tuple $P = \langle F, I, G, A \rangle$ where $F$ is the set of fluents, $I \subseteq F$ is the initial state, $G \subseteq F$ is the set of goal conditions, and $A$ is a set of actions. Each action $a \in A$ is a triple $a = \langle \mathit{pre}(a), \mathit{add}(a), \mathit{del}(a) \rangle$ that represents the precondition, add, and delete lists respectively, all subsets of $F$. A state $s$ is a set of conjuncted fluents, and an action $a$ is applicable in a state $s$ if $\mathit{pre}(a) \subseteq s$; applying said applicable action in the state results in a new state $s' = (s \setminus \mathit{del}(a)) \cup \mathit{add}(a)$ and incurs a non-negative cost determined by the function $c(a)$.

The solution to a planning problem is a plan $\pi = a_1, \ldots, a_n$, a sequence of actions that transforms the problem’s initial state into a state that satisfies the goal; i.e., $G \subseteq s_n$. The cost $c$ of a plan is $c(\pi) = \sum_{i=1}^{n} c(a_i)$ for all actions $a_i \in \pi$. A plan segment is a contiguous segment of a plan, denoted $\pi_{i..j} = a_i, \ldots, a_j$.

The execution trace of plan $\pi$ from initial state $I$ is defined as the alternating sequence of states and actions $I = s_0, a_1, s_1, a_2, \ldots, a_n, s_n$, starting with $I$, such that $s_i$ results from applying $a_i$ to state $s_{i-1}$.
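As a concrete reference point, the STRIPS semantics above can be sketched in a few lines of Python. The class and function names here are our own, not the paper's; states are frozensets of fluent strings.

```python
from typing import FrozenSet, List, NamedTuple

class Action(NamedTuple):
    name: str
    pre: FrozenSet[str]   # precondition list
    add: FrozenSet[str]   # add list
    dele: FrozenSet[str]  # delete list
    cost: int = 1

def applicable(state: FrozenSet[str], a: Action) -> bool:
    # a is applicable in state s iff pre(a) is a subset of s
    return a.pre <= state

def apply_action(state: FrozenSet[str], a: Action) -> FrozenSet[str]:
    # successor state: (s \ del(a)) ∪ add(a)
    return (state - a.dele) | a.add

def execution_trace(init: FrozenSet[str], plan: List[Action]) -> List[FrozenSet[str]]:
    """Return the state sequence s0, s1, ..., sn visited by the plan."""
    states = [init]
    for a in plan:
        assert applicable(states[-1], a), f"{a.name} not applicable"
        states.append(apply_action(states[-1], a))
    return states
```

The execution trace returned here is the state half of the alternating state/action sequence defined above; the definitions of observation satisfaction that follow only ever inspect these states and the action names.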

Handling Complex Observations

Our formulation is based on the formulation by \citeauthorramirezGeffner09, but we relax the assumption that all observations are totally ordered and grounded actions. Instead, we allow observations to be either an observed action or a set of observed fluents. Further, we allow partial orderings in the observations as well as partially-specified observations via sets of possible observations.

Fundamental to this formulation are observation groups, which impose constraints on the observations they contain. We describe two types of observations, three types of groups, and what it means for a plan to satisfy each. Because groups can nest within each other, we describe satisfaction of an outer group in terms of the satisfaction of its nested member(s) by a plan through a plan segment. Because a member might be a simple observation, we also describe the satisfaction of a simple observation in terms of a plan through a plan segment.

Definition 1.

An action observation $o$ paired with action $a \in A$ is satisfied by the plan $\pi$ through segment $\pi_{i..j}$ iff $a_k = a$ for some $k \in [i, j]$.

A fluent observation is a set of fluents, and is satisfied by plan segments that mark out a time period in which those fluents are simultaneously true in some state. This definition relies on the initial state $I$, which is later fixed in the definition of a plan recognition problem.

Definition 2.

A fluent observation $o$ paired with fluents $O_f \subseteq F$ is satisfied by the plan $\pi$ with initial state $I$ through segment $\pi_{i..j}$ iff $O_f \subseteq s_k$ for some $s_k$, $i - 1 \leq k \leq j$, in the execution trace of $\pi$.

Note that the actions in the plan segment do not need to contribute to the observed fluents for this notion of satisfaction. The plan segment merely marks a time period in which the fluents were observed. It may be that $O_f$ was true since the initial state, but was not observable until much later. Our intent is to rule out goal-plan pairs where the plan never co-occurs with the fluents in $O_f$ being true.

Now we define ordered groups, which impose ordering constraints on their members. A member can be either another group or a single observation. An ordered group can only be satisfied by a plan segment if that segment can be split into chunks that satisfy each member in order. (These chunks are the reason we define satisfaction in terms of plan segments.)

Definition 3.

An ordered observation group $G_o = (m_1, \ldots, m_k)$ is a totally ordered sequence of observation groups and/or single observations. A plan $\pi$ satisfies $G_o$ through the plan segment $\pi_{i..j}$ iff there exists a monotonically increasing function of the form $f : \{1, \ldots, k\} \to \{i, \ldots, j\}$, $f(1) = i$, which maps members of $G_o$ to segments of $\pi$ such that $\pi_{f(l)..f(l+1)-1}$ (with $f(k+1) - 1$ taken to be $j$) satisfies $m_l$.

The function $f$ above is used to ensure ordering. It maps subsequent group members to subsequent plan segments. $f(l)$ marks the beginning of $m_l$’s plan segment. We allow no gaps in plan segments, so $m_l$’s segment ends right before $m_{l+1}$’s segment begins, and $m_k$’s segment ends where the whole plan segment ends.

Next we define unordered groups, which impose no ordering constraints on their members, but are only satisfied when all members are. When embedded in an ordered group, these form the “partial” part of partial order.

Definition 4.

An unordered group $G_u$ is a set of observation groups and/or single observations that have no ordering constraints with respect to each other. A plan $\pi$ satisfies $G_u$ through the segment $\pi_{i..j}$ iff $\pi_{i..j}$ satisfies all members.

Lastly, we define option groups. Unlike the other groups, this group may contain only single observations, not nested groups, and is intended to describe a set of mutually exclusive possible observations. This is how we support partially-seen observations: each is transformed into an option group of all its possible interpretations. This group is satisfied if at least one of its members is satisfied.

Definition 5.

An option group $G_?$ is a set of single observations where it is uncertain which of them was the true observation. A plan $\pi$ satisfies $G_?$ through the segment $\pi_{i..j}$ iff $\pi_{i..j}$ satisfies at least one member. A plan which satisfies more than one member is not considered more likely than a plan which satisfies only one member.
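The satisfaction semantics of Definitions 1–5 can be prototyped as a recursive check. The following Python sketch uses illustrative class names of our own; a segment is an inclusive index range over the plan, and an ordered group is checked by searching for a gap-free split of the segment into chunks, one per member.

```python
from dataclasses import dataclass
from typing import List, Set, Union

@dataclass
class ActionObs:
    name: str                # observed grounded action (Definition 1)

@dataclass
class FluentObs:
    fluents: Set[str]        # fluents seen true together (Definition 2)

@dataclass
class Ordered:
    members: List["Obs"]     # satisfied in sequence (Definition 3)

@dataclass
class Unordered:
    members: List["Obs"]     # satisfied in any order (Definition 4)

@dataclass
class Option:
    members: List[Union[ActionObs, FluentObs]]  # one of these was seen (Definition 5)

Obs = Union[ActionObs, FluentObs, Ordered, Unordered, Option]

def satisfies(obs: Obs, plan: List[str], states, i: int, j: int) -> bool:
    """Does plan segment plan[i..j] (inclusive) satisfy obs?
    `plan` is a list of action names; `states` is the execution trace
    s0..sn, so segment plan[i..j] touches states[i..j+1]."""
    if isinstance(obs, ActionObs):
        return obs.name in plan[i:j + 1]
    if isinstance(obs, FluentObs):
        return any(obs.fluents <= s for s in states[i:j + 2])
    if isinstance(obs, Ordered):
        return _split(obs.members, plan, states, i, j)
    if isinstance(obs, Unordered):
        return all(satisfies(m, plan, states, i, j) for m in obs.members)
    if isinstance(obs, Option):
        return any(satisfies(m, plan, states, i, j) for m in obs.members)
    raise TypeError(obs)

def _split(members, plan, states, i, j):
    # Chunk the segment so members are satisfied in order, with no gaps
    # (the monotonically increasing f of Definition 3).
    if len(members) == 1:
        return satisfies(members[0], plan, states, i, j)
    head, rest = members[0], members[1:]
    return any(satisfies(head, plan, states, i, k - 1)
               and _split(rest, plan, states, k, j)
               for k in range(i, j + 1))
```

This naive splitter is exponential in the nesting depth and only meant to make the definitions concrete; the compilation in the next section delegates the search to a planner instead.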

With the above definitions, we mark out a modified version of the plan recognition problem. This is largely the same as previous work, but replaces a total-order constraint on observations with partial-order constraints and option groups.

Definition 6.

A plan recognition problem over a domain theory is the tuple $T = \langle P, \mathcal{G}, O \rangle$ where $P = \langle F, I, A \rangle$ is a planning domain, $\mathcal{G}$ is the set of possible goals $G \subseteq F$, and $O$ is an observation group as defined above. A solution to $T$ is one of the goals $G \in \mathcal{G}$ which has an optimal plan that also satisfies $O$.

4 Compilation to Planning Problem

We compile a goal recognition problem $T$ into a planning problem $P'$ such that a solution to $P'$ “explains” the observations nested in $O$, while respecting $O$’s ordering constraints and not double-explaining observations in an option group. If an optimal solution to $P'[G + O]$ has the same cost as an optimal solution to $P[G]$, then $G$ and the plan solving $P'[G + O]$ are considered a solution to the plan recognition problem.

To ensure a solution to $P'$ respects $O$’s constraints, we use ordering fluents to ensure an explanation may only happen after its predecessors, and that only one explanation is allowed per observation, or per option group. Let $\mathit{Obs}(O)$ denote the set of all observations nested within $O$ or its subgroups.

Definition 7.

For the goal recognition problem $T = \langle P, \mathcal{G}, O \rangle$ where $P = \langle F, I, A \rangle$, the transformed planning problem for each $G \in \mathcal{G}$ is defined as $P'[G + O] = \langle F', I, A', G' \rangle$ such that:

  • $F' = F \cup F_O$, where $F_O = \{ o_{obs} \mid o \in \mathit{Obs}(O) \}$ is the set of ordering fluents,

  • $A' = A \cup A_O$, where $A_O$ is the set of explanation actions (Definitions 8 and 9), and

  • $G' = G \cup F_O$.

We further define $P[G] = \langle F, I, A, G \rangle$, the planning problem unconstrained by observations, and later show that a solution to $P'[G + O]$ satisfies $O$.

Definition 8.

The explanation action $b_o$ for the fluent observation $o$ corresponding to fluents $O_f$ is a dummy action that marks that $o$ is observed, defined as:

  • $\mathit{pre}(b_o) = O_f \cup \{ p_{obs} \mid p \in \mathit{pred}(o) \} \cup \{ \lnot o_{obs} \}$, where $\mathit{pred}(o)$ is the set of all observations nested in any group immediately preceding a group that $o$ is nested within,

  • $\mathit{add}(b_o) = \{ o_{obs} \} \cup \{ o'_{obs} \mid o'$ in the same option group as $o \}$, and

  • $\mathit{del}(b_o) = \emptyset$, with $c(b_o) = 0$.

This definition is based on those of Sohrabi, Riabov, and Udrea (2016), except that multiple fluents can be included in the same observation. A metric planner is needed to work with this zero-cost action, or the cost of these actions can be subtracted post-planning.

Definition 9.

The explanation action $b_o$ for the action observation $o$ corresponding to action $a$ is an action identical to $a$ but with additional ordering fluents:

  • $\mathit{pre}(b_o) = \mathit{pre}(a) \cup \{ p_{obs} \mid p \in \mathit{pred}(o) \} \cup \{ \lnot o_{obs} \}$, where $\mathit{pred}(o)$ is the set of all observations nested in any group immediately preceding a group that $o$ is nested within, and

  • $\mathit{add}(b_o) = \mathit{add}(a) \cup \{ o_{obs} \} \cup \{ o'_{obs} \mid o'$ in the same option group as $o \}$, with $\mathit{del}(b_o) = \mathit{del}(a)$ and $c(b_o) = c(a)$.

Note that explanation actions have the precondition $\lnot o_{obs}$, but add $o_{obs}$ as an effect. As no action removes $o_{obs}$, this means an explanation action cannot be used twice. Additionally, explaining an observation in an option group prevents all other explanations from that option group from being used.
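The bookkeeping in Definitions 8 and 9 can be illustrated with a small builder for explanation actions. The encoding below is a sketch under our own naming assumptions: `obs(id)` strings stand in for the ordering fluents $o_{obs}$, and negative preconditions are kept in a separate set, since STRIPS proper lacks them.

```python
def explanation_action(obs_id, preds, siblings, base=None, obs_fluents=()):
    """Build the explanation action for one observation.

    obs_id      -- unique id; `obs(obs_id)` is its ordering fluent
    preds       -- ids of observations in groups immediately preceding
                   a group this observation is nested in
    siblings    -- ids of the other members of its option group, if any
    base        -- (pre, add, delete, cost) of the observed action, or
                   None for a fluent observation
    obs_fluents -- the observed fluents, for a fluent observation
    """
    o = f"obs({obs_id})"
    pre, add, dele, cost = base or (set(), set(), set(), 0)
    return {
        # the observed fluents (or the base action's precondition) must
        # hold, and every predecessor must already be explained
        "pre": set(pre) | set(obs_fluents) | {f"obs({p})" for p in preds},
        # this observation must not be explained yet
        "neg_pre": {o},
        # mark this observation explained; marking its option-group
        # siblings too blocks their explanation actions
        "add": set(add) | {o} | {f"obs({s})" for s in siblings},
        "del": set(dele),
        "cost": cost,   # zero for fluent observations
    }
```

Marking the siblings' ordering fluents in the add list is one way to realize the exclusion described above: once any member of an option group is explained, every other member's explanation action fails its $\lnot o_{obs}$ precondition.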

Definition 10.

The solution to $T$ is the set $\mathcal{G}^* = \{ G \in \mathcal{G} \mid \exists \pi$ satisfying $O$ and optimally solving $P[G] \}$.

In the next section, we prove that our compilation indicates members of $\mathcal{G}^*$: $G$ is in $\mathcal{G}^*$ when the optimal plan for $P'[G + O]$ costs the same as an optimal plan for $P[G]$. To find all members of $\mathcal{G}^*$, one would optimally solve $P[G]$ and $P'[G + O]$ for all $G$ in $\mathcal{G}$, and compare costs.
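The resulting decision procedure is a per-goal cost comparison. The sketch below assumes a hypothetical `cost_of` callback that invokes an optimal planner on the compiled problem ($P'[G + O]$) or the unconstrained one ($P[G]$):

```python
def optimal_goal_set(goals, cost_of):
    """G is in the solution set iff an observation-satisfying optimal
    plan costs no more than an unconstrained optimal plan for G.

    cost_of(goal, with_obs) -- optimal plan cost for the compiled
    problem when with_obs is True, else for the unconstrained problem;
    in practice, a call out to an optimal planner.
    """
    solution = set()
    for g in goals:
        baseline = cost_of(g, with_obs=False)   # optimal cost of P[G]
        explained = cost_of(g, with_obs=True)   # optimal cost of P'[G+O]
        if explained == baseline:               # observations cost nothing extra
            solution.add(g)
    return solution
```

In the motivating example, only the goal whose constrained and unconstrained plans cost the same (destroying the contents) would survive this filter.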

5 Proofs

In this section we present two main proofs. The first is a proof that our compilation indicates whether a goal is in the solution to a goal recognition problem, i.e., whether the goal has an observation-satisfying plan that optimally reaches the goal. The second is a proof that solving the compiled problem will never yield an optimal goal set of size greater than the optimal goal set size achieved by solving the problem compiled as in Ramírez and Geffner (2009) while ignoring complex observations.

5.1 Goal Recognition Problem is Solved

We prove that our compilation produces a planning problem that solves the goal recognition problem in two steps. We first prove that any plan $\pi'$ for the compiled problem $P'[G + O]$ has a corresponding plan $\pi$ of equivalent cost that solves $P[G]$. We then prove that if $\pi'$ is an optimal plan for $P'[G + O]$, then $\pi$ satisfies $O$ and (by the first proof) solves $P[G]$ with the same cost as $\pi'$. If this cost is the same as that of an optimal plan for just $P[G]$, then $G$ is a solution to $T$.

Theorem 1.

A plan $\pi'$ for $P'[G + O]$ has a corresponding plan $\pi$, solving $P[G]$, such that $c(\pi) = c(\pi')$.

Proof.

For $\pi'$, let $\pi$ be the same sequence of actions, but with fluent observation explanations removed, and action observation explanations replaced with their corresponding actions in $A$. Because fluent explanations have no cost, and action explanations have cost identical to their corresponding action, $c(\pi) = c(\pi')$. Fluent explanations have no effects save for ordering fluents, and action explanations are identical to their corresponding actions, save for ordering fluents. Since $G$ does not include any ordering fluents, $\pi$ still achieves $G$. ∎

Theorem 2.

If plan segment $\pi'_{i..j}$ achieves all $o_{obs}$ for $o \in \mathit{Obs}(O)$, then the corresponding plan $\pi$ satisfies $O$.

Proof.

We prove this theorem through a series of Lemmas showing that such a plan segment will satisfy every observation and observation group; each Lemma corresponds to a different complex observation type.

Lemma 2.1 (Individual Observations).

If $\pi'_{i..j}$ achieves $o_{obs}$, then $\pi_{i..j}$ satisfies $o$.

Proof.

The only way for $\pi'$ to achieve $o_{obs}$ is through the explanation action $b_o$. If $o$ is an observation of action $a$, then $b_o$ is translated to $a$ in $\pi$, satisfying $o$. If $o$ is an observation of fluents $O_f$, then $b_o$ has $O_f$ as a precondition, so a state containing $O_f$ must exist in the execution trace for $\pi'$, and thus in the trace for $\pi$. In either case, $\pi_{i..j}$ satisfies $o$. ∎

Lemma 2.2 (Option Group).

If $\pi'_{i..j}$ achieves any $o_{obs}$ for $o$ in the option group $G_?$, then $\pi_{i..j}$ satisfies $G_?$.

Proof.

If $\pi'_{i..j}$ achieves a particular $o_{obs}$ for $o \in G_?$, then by Lemma 2.1, $\pi_{i..j}$ satisfies $o$. By satisfying a member of $G_?$, $\pi_{i..j}$ satisfies $G_?$. ∎

Lemma 2.3 (Unordered Group).

If $\pi'_{i..j}$ achieves all $o_{obs}$ for $o \in \mathit{Obs}(G_u)$, then $\pi_{i..j}$ satisfies $G_u$.

Proof.

$\pi_{i..j}$ satisfies every simple observation contained directly in $G_u$, per Lemma 2.1. $\pi_{i..j}$ also satisfies any contained option groups, per Lemma 2.2. If $G_u$ contains unordered groups, this is equivalent to $G_u$ containing the unordered group’s members directly. Any contained ordered groups are also satisfied by $\pi_{i..j}$, via Lemma 2.4. ∎

Lemma 2.4 (Ordered Group).

If $\pi'_{i..j}$ achieves all $o_{obs}$ for $o$ nested in the ordered group $G_o = (m_1, \ldots, m_k)$, then $\pi_{i..j}$ satisfies $G_o$.

Proof.

Let $f'$ be a function where $f'(1) = i$ and $f'(l)$, for $l > 1$, is the index of the first explanation action in $\pi'$ for any $o \in \mathit{Obs}(m_l)$. Segment $\pi'_{f'(l)..f'(l+1)-1}$ then achieves all $o_{obs}$ for $o \in \mathit{Obs}(m_l)$, since the explanation action at $f'(l+1)$ has $\{ p_{obs} \mid p \in \mathit{Obs}(m_l) \}$ as a precondition. Via the other Lemmas, $\pi'_{f'(l)..f'(l+1)-1}$ satisfies $m_l$.

Let $f$ be a similar function of the form $f : \{1, \ldots, k\} \to \{i, \ldots, j\}$, where $f(1) = i$ and $f(l)$ maps to where the action at $f'(l)$ would be if the transformation from $\pi'$ to $\pi$ did not remove or transform it. This way, $f$ creates plan segments in $\pi$ corresponding to the plan segments $f'$ creates in $\pi'$, such that $\pi_{f(l)..f(l+1)-1}$ corresponds to $\pi'_{f'(l)..f'(l+1)-1}$.

Since (as mentioned above) $\pi'_{f'(l)..f'(l+1)-1}$ satisfies $m_l$, so too does $\pi_{f(l)..f(l+1)-1}$. This makes $f$ a monotonically increasing function which separates $\pi_{i..j}$ into sections which satisfy each member of $G_o$, and with it, $\pi_{i..j}$ satisfies $G_o$. ∎

Lemmas 2.3 and 2.4 recurse into each other if an unordered group contains an ordered group (or vice versa), but are grounded by the base case where a group contains only simple observations or option groups.
With Lemmas 2.1–2.4, we prove that a plan segment achieving all $o_{obs}$ for $o \in \mathit{Obs}(O)$ has a corresponding plan that satisfies $O$. ∎

An optimal solution to $P'[G + O]$ necessarily achieves all $o_{obs}$, and so by Theorem 2 has a corresponding plan that satisfies $O$. With Theorem 1, we prove that this corresponding plan also solves $P[G]$ at the same cost. If this cost is the same as the cost of an optimal plan for just $P[G]$ (not constrained by $O$), then a plan exists that satisfies $O$ and optimally solves $P[G]$. By Definition 10, $G$ is a solution to $T$.

5.2 No Worse than Ignoring Complexity

We begin by defining an “ignore complexity” strategy for simplifying observation groups to a total-ordered, fully-specified form that the compilation in Ramírez and Geffner (2009) can handle. This strategy removes fluent observations and option groups, reduces unordered groups to a single member, then collapses empty or single-member groups until just an ordered group is left. We choose this strategy over strategies that try different orderings/option-group members because those strategies would take exponentially longer to solve, requiring as many tries as there are orderings of unordered groups and combinations of option-group choices. We sketch a proof that using complex observations will always be at least as accurate as ignoring them. (Accuracy is measured by the number of goals indicated: fewer false positives is more accurate.)
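For concreteness, the “ignore complexity” reduction can be sketched recursively over a tagged-tuple encoding of observation groups (an illustrative encoding of our own, not the paper's):

```python
def ignore_complexity(obs):
    """Reduce a complex observation group to a flat, total-order list
    of action observations: drop fluent observations and option groups,
    keep a single member of each unordered group, and splice ordered
    groups in place.

    Observations are tagged tuples, e.g. ("action", "unlock-chest"),
    ("fluent", {"window-open"}), ("ordered", [...]),
    ("unordered", [...]), ("option", [...]).
    """
    tag, body = obs
    if tag == "action":
        return [obs]
    if tag in ("fluent", "option"):          # removed outright
        return []
    if tag == "unordered":                   # keep one member only
        for member in body:
            kept = ignore_complexity(member)
            if kept:
                return kept                  # first non-empty member
        return []
    if tag == "ordered":                     # keep members, in order
        return [o for member in body for o in ignore_complexity(member)]
    raise ValueError(tag)
```

The result is exactly the kind of totally ordered action sequence the original compilation expects; Theorem 3 below bounds how much accuracy this reduction can cost.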

Theorem 3.

Given $O$ and $O^-$, where $O^-$ removes some number of observations from $O$ without altering ordering constraints, $\mathcal{G}^* \subseteq \mathcal{G}^{*-}$, where $\mathcal{G}^*$ is the solution set to $T = \langle P, \mathcal{G}, O \rangle$ and $\mathcal{G}^{*-}$ is the solution set to $T^- = \langle P, \mathcal{G}, O^- \rangle$.

Proof Sketch.

Assume $\mathcal{G}^* \not\subseteq \mathcal{G}^{*-}$. Then there exists some $G$ such that $G \in \mathcal{G}^*$ but $G \notin \mathcal{G}^{*-}$. This means an explanation action for some observation in $O^-$ created a larger cost for the optimal plan for $P'[G + O^-]$, making $c(P'[G + O^-]) > c(P[G])$ and eliminating $G$ from $\mathcal{G}^{*-}$. Because the observations in $O^-$ are a subset of those in $O$, that explanation action will also incur that cost for $P'[G + O]$, eliminating $G$ from $\mathcal{G}^*$. This contradicts the premise, so $\mathcal{G}^* \subseteq \mathcal{G}^{*-}$. ∎

6 Experimental Evaluation

We evaluate the proposed formulation against the “ignore complexity” strategy for accommodating complex observations using only the formulation in Ramírez and Geffner (2009). We use the same domains and plan recognition problems from that work, but generate new complex observations according to two parameters. The metric we are concerned with is the number of incorrect goals in the optimal goal set $\mathcal{G}^*$. By this metric we often perform better, and never perform worse. In some domains the “ignore complexity” strategy often found no incorrect goals, leaving our formulation no room for improvement. We report how often this occurred, and focus on cases where we could improve. In general, our method is slower, taking longer to generate plans.

6.1 Method

Hypotheses  We hypothesize that the size of our optimal goal set will often be smaller than the size of the optimal goal set computed using simplified observations. We also measure the time it takes to compute the optimal goal set. This will be domain dependent, but we expect to see a general trend favoring one method or the other.

Apparatus  We developed our software (available at https://github.com/qed-lab/Complex-Observation-Compiler) by extending the original plan recognition as planning code developed by Ramírez and Geffner (2009). Our software ran atop CentOS 7.2 Linux with the 3.10 kernel, deployed on hardware equipped with an Intel Core i7-4790 processor, overclocked DDR3 RAM, and an Intel 540 solid-state drive. Optimal plans were generated using A* search with the admissible $h_{max}$ heuristic, but pruning paths when the estimated cost-to-goal reached a given threshold. When computing plans for the compiled $P'[G + O]$, we set this threshold to the optimal cost of $P[G]$, which was pre-computed and not counted towards measures of time. This reduced the time it took to eliminate incorrect goals from inclusion in $\mathcal{G}^*$. For each $P'[G + O]$, we allotted ten times the time taken to compute $P[G]$, with a minimum of 20 seconds. (In the Logistics domain, this was sometimes still not enough time.)

Procedure  For every plan recognition problem in Ramírez and Geffner (2009), consisting of a domain and possible goals of which one is correct, we found an optimal plan for the correct goal, computed the trace of that plan, and generated complex observations from both the plan (A) and the trace (A+F). For each plan/trace, we generated 3 observation sets per setting, varying each setting by the amount of observations of unknown order (U%) and the amount of ‘debound’ observations missing a parameter (D%). To generate observations, we randomly removed half the optimal plan (or trace), and removed 90% of fluents from states in each trace. With the remaining plan (or trace) we placed U% of observations into unordered groups of fixed small size. (So, U% = 100% means every observation was placed in a small unordered group, not that all observations were completely unordered.) Lastly, we ‘debind’ D% of action observations by removing one parameter and replacing the observation with an option group of all matching observations (e.g., (eat H) becomes (eat ?), an option group of (eat A), …, (eat Z)). D% was only applied to action observations with at least one parameter. Altogether, this forms the plan recognition problem $T$.
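The debinding step above can be illustrated as follows; the encoding of grounded actions as (name, args) tuples and the `universe` of candidate groundings are our own assumptions:

```python
import random

def debind(action, universe):
    """Replace one parameter of a grounded action observation with an
    option group of all matching grounded actions, e.g. ("eat", ("H",))
    debinds to the option group of ("eat", x) for every matching x in
    `universe`. A sketch of the D% obscuration step.
    """
    name, args = action
    if not args:
        return action                       # needs at least one parameter
    drop = random.randrange(len(args))      # parameter to hide
    pattern = args[:drop] + ("?",) + args[drop + 1:]
    matches = [(n, a) for (n, a) in universe
               if n == name and len(a) == len(args)
               and all(p == "?" or p == v for p, v in zip(pattern, a))]
    return ("option", matches)
```

Note that the true observation always matches its own pattern, so it is guaranteed to be among the option group's members, which is what makes the obscured observation still satisfiable by the true plan.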

We varied U and D over five settings:

  • No Obscuration: U=0% D=0%

  • Vary Debind: U=0% D=25%

  • Vary Unorderedness: U=25% D=0%

  • Vary Unorderedness: U=50% D=0%

  • Vary Both: U=50% D=25%

For each problem $T$, and each $G \in \mathcal{G}$, we compiled two planning problems: $P'[G + O]$ (this work’s compilation) and $P'[G + O^-]$ (using the “ignore complexity” strategy and compiling as in [ramirezGeffner09]). We compare the optimal costs of both to the optimal cost for $P[G]$ (precomputed). If $c(P'[G + O]) = c(P[G])$, we place $G$ in $\mathcal{G}^*$. If $c(P'[G + O^-]) = c(P[G])$, we place $G$ in $\mathcal{G}^{*-}$.

6.2 Analysis

We conducted tests over four domains: Block-Words, Ipc-Grid, Grid-Navigation, and Logistics. For the latter three, most problems were perfectly solved with the “ignore complexity” strategy, leaving no room for improvement. We removed instances where the “ignore complexity” strategy resulted in an empty observation set. (This occurred 38 times, mostly in the [A+F U:50% D:25%] setting.) Table 2 reports the number of problems perfectly solved by the “ignore complexity” strategy (Opt) and the number of problems with room for improvement (Imp), per setting, per domain. Table 2 also reports the average observation-set size per method over each sample group ($|O|$ Opt and $|O|$ Imp), the average size of the solution set per method over the improvable samples ($|\mathcal{G}^*|$ Imp), and the average time to compute per method over all samples (time All). Error bounds indicate a 95% confidence interval. When computing the observation set size, we considered an option group to have size 1.

We conducted an independent t-test, not assuming equal variance, comparing the sizes of the solution sets when improvable. It found a statistically significant difference between $|\mathcal{G}^{*-}|$ (M = 3.91, SD = 2.99) and $|\mathcal{G}^*|$ (M = 2.51, SD = 1.73) (t(2372.97) = 15.620, p < 0.01, ΔM = 1.40). T-tests also found statistically significant differences (p < 0.01) for each domain, with Block-Words having the largest t-value (t(1703.41) = 14.15) and difference (ΔM = 1.64).

Notice that the [A U:0% D:0%] setting produces identical solution set sizes. This is because without complex observations, our compilation is equivalent to that of Ramírez and Geffner (2009). This changes for the A+F mode, where the “ignore complexity” strategy removes fluent observations.

Figures 2 and 3 show the results from Block-Words in more detail. Notches represent a 95% confidence interval around the median value, and dashed lines represent the mean. These are divided into action observations only (A) and mixed action/fluent observations (A+F). Figure 2 compares the sizes of $\mathcal{G}^*$ and $\mathcal{G}^{*-}$ for different settings. These only consider instances where the “ignore complexity” strategy was not already optimal, leaving room for improvement. We report the number of these instances as n. Figure 3 looks at the difference between the time to compute $\mathcal{G}^{*-}$ and the time to compute $\mathcal{G}^*$. This is divided by setting and by result of computation (already optimal, not optimal but not improved upon, and improved upon). The n for each category is reported. Note that the differences are always negative, meaning our method was always slower for the Block-Words domain.

U% D% #Opt #Imp — then, each as mean ± 95% CI: $|O^-|$ (Opt), $|O^-|$ (Imp), $|O|$ (Opt), $|O|$ (Imp), $|\mathcal{G}^{*-}|$ (Imp), $|\mathcal{G}^*|$ (Imp), time (All) per method

A: Action Observations Only

Block-Words

0% 0% 97 86 4.45 0.24 4.08 0.27 4.45 0.24 4.08 0.27 3.03 0.33 3.03 0.33 59.98 3.25 77.11 3.82
0% 25% 57 126 3.18 0.19 2.75 0.16 4.67 0.29 4.10 0.22 4.27 0.48 3.17 0.36 46.57 3.08 93.38 5.00
25% 0% 97 86 3.98 0.15 3.78 0.21 4.42 0.23 4.12 0.29 3.27 0.33 3.02 0.32 54.83 2.97 83.55 4.60
50% 0% 65 118 2.97 0.14 2.81 0.12 4.45 0.28 4.19 0.23 3.81 0.40 2.86 0.33 47.92 2.98 99.81 5.98
50% 25% 46 137 2.61 0.23 2.14 0.13 4.59 0.33 4.18 0.21 5.01 0.60 3.42 0.45 41.12 3.04 115.78 6.75

Ipc-Grid

0% 0% 76 14 6.92 0.53 7.21 1.24 6.92 0.53 7.21 1.24 2.00 0.00 2.00 0.00 4.20 0.84 8.53 1.75
0% 25% 76 14 4.82 0.41 5.14 0.87 6.89 0.53 7.36 1.19 2.29 0.42 1.93 0.42 3.62 0.74 8.01 1.62
25% 0% 74 16 5.78 0.41 6.25 0.79 6.84 0.54 7.56 1.03 2.12 0.27 2.06 0.31 3.63 0.72 8.41 1.74
50% 0% 73 17 4.38 0.35 4.65 0.85 6.89 0.54 7.29 1.13 2.41 0.66 1.94 0.34 3.22 0.65 10.88 2.58
50% 25% 65 25 3.42 0.34 3.16 0.60 6.91 0.59 7.12 0.85 2.40 0.55 1.64 0.29 2.79 0.54 11.70 2.78

Navigation

0% 0% 58 5 9.31 1.42 5.40 0.68 9.31 1.42 5.40 0.68 3.60 2.72 3.60 2.72 0.20 0.04 0.21 0.02
0% 25% 52 11 6.17 1.13 6.55 2.56 8.90 1.50 9.45 3.36 2.73 0.85 2.09 0.63 0.21 0.07 0.19 0.03
25% 0% 56 7 7.36 1.11 7.57 5.36 8.96 1.37 9.29 6.51 2.71 1.38 2.57 1.50 0.19 0.05 0.17 0.02
50% 0% 56 7 5.80 0.91 6.29 4.33 8.91 1.37 9.71 6.31 2.43 0.73 2.00 0.53 0.19 0.03 0.19 0.02
50% 25% 52 11 4.58 0.83 4.64 2.00 8.94 1.44 9.27 4.08 3.00 1.08 1.91 0.97 0.19 0.04 0.20 0.03

Logistics

0% 0% 54 6 9.83 0.10 10.00 0.00 9.83 0.10 10.00 0.00 2.00 0.00 2.00 0.00 892.68 22.73 903.62 22.72
0% 25% 52 8 6.83 0.11 7.00 0.00 9.83 0.11 10.00 0.00 2.12 0.30 2.00 0.45 902.31 22.47 900.95 22.82
25% 0% 45 15 7.84 0.11 7.87 0.19 9.84 0.11 9.87 0.19 2.13 0.19 1.73 0.33 884.55 24.70 903.10 24.98
50% 0% 49 11 6.86 0.10 6.82 0.27 9.86 0.10 9.82 0.27 2.18 0.27 1.55 0.35 893.55 24.43 917.98 22.67
50% 25% 47 13 5.28 0.24 5.08 0.39 9.83 0.11 9.92 0.17 2.46 0.58 1.31 0.29 888.69 27.90 923.47 20.64

A+F : Action and Fluent Observations

Block-Words

0% 0% 102 81 4.65 0.25 4.51 0.41 8.69 0.43 8.40 0.61 3.41 0.53 2.57 0.34 61.95 3.71 112.73 4.74
0% 25% 44 132 3.16 0.41 1.98 0.17 9.00 0.71 8.45 0.42 6.36 0.80 2.36 0.30 38.17 3.33 144.43 6.44
25% 0% 99 84 4.44 0.22 4.06 0.30 8.73 0.44 8.36 0.59 3.50 0.46 2.77 0.34 59.14 3.44 140.76 6.74
50% 0% 89 94 4.10 0.24 3.27 0.26 9.15 0.45 8.00 0.54 3.70 0.47 2.49 0.29 53.20 3.09 165.64 7.57
50% 25% 41 124 2.51 0.29 2.02 0.17 8.63 0.59 8.81 0.46 6.30 0.75 2.35 0.33 36.13 3.03 190.01 8.16

Ipc-Grid

0% 0% 79 11 6.76 0.57 7.18 1.34 13.25 1.01 14.73 2.44 2.00 0.00 2.00 0.00 4.16 0.83 1.23 0.24
0% 25% 62 27 3.71 0.44 2.67 0.56 13.53 1.19 13.33 1.53 3.26 0.97 1.37 0.19 2.76 0.58 1.60 0.42
25% 0% 73 17 6.03 0.52 6.82 1.18 12.86 1.01 15.88 2.15 2.00 0.00 1.76 0.22 3.98 0.80 1.81 0.40
50% 0% 71 19 5.52 0.50 5.84 1.21 13.13 1.06 14.58 2.00 2.26 0.39 1.79 0.20 3.76 0.75 2.34 0.65
50% 25% 51 31 3.18 0.38 3.06 0.54 13.61 1.27 14.52 1.30 3.13 0.82 1.58 0.18 3.06 0.64 2.59 0.61

Navigation

0% 0% 54 9 9.17 1.49 8.33 4.25 17.41 2.79 17.56 9.35 2.56 1.02 2.44 1.09 0.18 0.02 0.23 0.03
0% 25% 48 14 4.56 0.95 4.07 1.61 16.73 2.91 20.14 6.89 3.07 0.86 1.79 0.79 0.18 0.02 0.20 0.04
25% 0% 56 7 8.02 1.31 9.29 5.92 17.27 2.71 18.71 12.51 3.14 1.35 2.57 1.68 0.17 0.01 0.20 0.03
50% 0% 57 6 7.60 1.14 8.17 6.28 17.16 2.67 20.00 15.12 2.50 0.88 1.67 0.54 0.17 0.01 0.20 0.03
50% 25% 39 21 3.95 0.81 4.52 1.12 16.21 3.14 20.62 5.36 2.86 0.52 1.19 0.18 0.16 0.01 0.19 0.01

Logistics

0% 0% 55 5 10.13 0.43 19.25 0.20 8.60 0.68 19.80 0.56 2.00 0.00 1.80 0.56 895.50 22.44 913.19 21.46
0% 25% 35 25 5.71 0.54 19.23 0.24 4.68 0.70 19.40 0.32 2.60 0.46 1.32 0.26 862.35 27.86 901.43 26.05
25% 0% 52 8 9.10 0.41 19.25 0.20 10.25 1.72 19.62 0.62 2.00 0.00 1.75 0.39 891.18 22.76 910.72 22.88
50% 0% 47 13 7.94 0.37 19.26 0.22 8.00 0.55 19.46 0.31 2.08 0.17 1.54 0.31 890.45 23.33 933.17 23.85
50% 25% 37 23 4.89 0.37 19.30 0.23 4.26 0.68 19.30 0.33 2.48 0.45 1.22 0.29 866.39 32.33 930.81 22.57
Table 2: U% is the percent of observations placed in an unordered set. D% is the percent of ‘debound’ observations. We distinguish between samples perfectly solved by the ignore strategy (Opt) and samples with room for improvement (Imp). $|O|$ Opt/Imp is the observation set size for the specified method and sample group. $|\mathcal{G}^*|$ is the size of the solution set, per method, over the improvable (Imp) samples. time is the time to compute, per method, over all samples.
Table 1: Empirical Evaluation Results Per Domain and Setting
Figure 2: Comparison of solution set sizes $|\mathcal{G}^*|$ and $|\mathcal{G}^{*-}|$, from samples where improvement was possible. A solution set size of 1 is optimal.
Figure 3: Difference between the time to compute via the Ramírez and Geffner (2009) method and the time to compute via this work’s method.

6.3 Discussion

Figure 2 shows that complex observations can be a crucial factor in eliminating false hypotheses. Particularly for scenarios with multiple types of complexity, such as the (A+F U:50% D:25%) setting, ignoring complexity can cost three or four false positives. In no case were we less accurate, empirically confirming Theorem 3.

That said, our method is consistently slower across domains, regardless of improvement. We hypothesize that this is due to a larger search space: utilizing more observations means including more actions in the compiled planning domain, which take longer to consider. This time is highly domain-dependent; for instance, Logistics takes hundreds of seconds while Ipc-Grid takes under a second.

For all domains except Block-Words, the number of instances where we could improve (i.e., where the “ignore complexity” strategy was not already optimal) was too small to draw significant conclusions. This raises the concept of plan recognition difficulty. What makes Block-Words more difficult than the other domains? The other domains have, on average, larger observation sets to work with, derived from longer plans. Is it the number of observations available? Are the possible goals in its goal set more similar to each other? If so, what makes them similar? Plan recognition difficulty is not necessarily tied to planning difficulty: the Logistics domain took extraordinarily long compared to the Ipc-Grid and Navigation domains, yet all three found the optimal solution set most of the time.

In applications with plentiful information or a low ratio of complex to non-complex observations, ignoring complexity may be preferred for faster results with little loss of answer quality. However, in areas with sparse information, higher ratios of complex to non-complex observations, or domains known to be difficult, using complex information is vital, even if it takes longer to compute.

The new definitions for complex observation types can be used for any plan recognition approach, and our compilation can be adapted for other planning-based approaches. In particular, we are interested in adapting this compilation for probabilistic plan recognition and multi-agent plan recognition.

This work was limited by time constraints on how comprehensive an evaluation we could run. We selected representative settings, but wish to reevaluate with more coverage over more settings, to pinpoint those settings where a domain becomes ‘easy’, as measured by how often the optimal solution set is found.

7 Conclusion

For plan recognition to be used broadly, it needs to be capable of handling all types of information handed to it. From obstructed vision in robots to ambiguous word meanings in natural language, complex observations arise in many real-world scenarios, and this method lays the groundwork for leveraging them. We provide crisp definitions for partial-order, optional observations of either fluents or actions and what it means to satisfy each, then prove that our compilation will produce satisfactory plans. While this work deals only with optimal solutions, the definitions provided can be extended to work with non-optimal, probabilistic plan recognition.

References
