Landmark-Based Plan Recognition

(This document is the full version of a work published as a short paper in the 22nd European Conference on Artificial Intelligence (ECAI), 2016.)

Ramon Fraga Pereira and Felipe Meneguzzi, Pontifical Catholic University of Rio Grande do Sul (PUCRS), Brazil.

Recognition of goals and plans using incomplete evidence from action execution can be done efficiently by using planning techniques. In many applications it is important to recognize goals and plans not only accurately, but also quickly. In this paper, we develop a heuristic approach for recognizing plans based on planning techniques that rely on ordering constraints to filter candidate goals from observations. These ordering constraints are called landmarks in the planning literature: facts or actions that every plan must contain in order to achieve a goal. We show the applicability of planning landmarks in two settings: first, we use them directly to develop a heuristic-based plan recognition approach; second, we refine an existing planning-based plan recognition approach by pre-filtering its candidate goals. Our empirical evaluation shows that our approach is not only substantially more accurate than the state of the art across all available datasets, but also an order of magnitude faster.

1 Introduction

As more computer systems require reasoning about what agents (both human and artificial) other than themselves are doing, the ability to accurately and efficiently recognize goals and plans from agent behavior becomes increasingly important. Plan recognition is the task of recognizing goals and plans based on often incomplete observations that include actions executed by agents and properties of agent behavior in an environment [18]. Accurate plan recognition is important for monitoring and anticipating agent behavior, for example in crime detection and prevention, activity monitoring, and elderly care. Most plan recognition approaches [7, 1] employ plan libraries (i.e., libraries containing all plans for achieving a set of goals) to represent agent behavior, resulting in recognition approaches that are analogous to language parsing. Recent work [16, 15, 13, 5] uses planning domain definitions (domain theories) to represent potential agent behavior, bringing plan recognition closer to planning algorithms. These approaches allow techniques used in planning algorithms to be employed for recognizing goals and plans.

In this paper, we develop a plan recognition approach that relies on planning landmarks [14, 9] to filter candidate goals and plans from the observations. Landmarks are properties (or actions) that every plan must satisfy (or execute) at some point in every plan execution to achieve a goal. Whereas in planning algorithms these landmarks are used to focus search, in our approach, they allow plan recognition algorithms to rule out candidate goals whose landmarks are missing from observations. Thus, based on planning landmarks, we develop an algorithm to filter candidate goals by estimating how many landmarks required by every goal in the set of candidate goals have been achieved within the observed actions. Since computing a subset of landmarks for a set of goals can be done very quickly, our approach can provide substantial runtime gains. In this way, we use this filtering algorithm in two settings. First, we build a landmark-based plan recognition heuristic that analyzes the amount of achieved landmarks to estimate the percentage of completion of each filtered candidate goal. Second, we show that the filter we develop can also be applied to other planning-based plan recognition approaches, such as the approach from Ramírez and Geffner [16].

We empirically evaluate our plan recognition approach against the current state of the art [16] using openly available plan recognition datasets developed by Ramírez and Geffner [16, 15], which have also been used to evaluate recent approaches to plan recognition [5]. These datasets provide several domains and problems in which it is not trivial to recognize the intended goal from a set of candidate goals and observations. Using these datasets, we show that our approach has at least three advantages over existing approaches. First, by relaxing the filter with a small threshold, our landmark-based plan recognition approach is more accurate than the current state of the art [16]. Second, our approach provides substantially faster recognition time on its own. Finally, we show that our filtering algorithm provides substantial improvements in recognition time when used to improve existing plan recognition approaches.

This paper is organized as follows. Section 2 provides background on planning and plan recognition. In Section 3, we describe how we extract useful information from the planning domain definition. Sections 4, 5, and 6 develop the key parts of our approach for plan recognition. We empirically evaluate our approach in Section 7, which shows the results of the experiments. In Section 8, we survey related work and compare the state of the art with our approach. Finally, in Section 9, we conclude this paper by discussing limitations, advantages, and future directions of our approach.

2 Background

In this section, we provide essential background on planning terminology, and how we define plan recognition problems over planning domain definitions.

2.1 Planning

Planning is the problem of finding a sequence of actions (i.e., a plan) that achieves a particular goal from an initial state. In this work, we adopt the terminology from Ghallab et al. [8] to represent planning domains and problems. First, we define a state of the environment in Definition 1.

Definition 1 (Predicates and State).

A predicate is denoted by an n-ary predicate symbol p applied to a sequence of zero or more terms (τ₁, τ₂, …, τₙ), where terms are either constants or variables. A state is a finite set of grounded predicates (facts) that represent logical values according to some interpretation. Facts are divided into two types, positive and negated facts, and we use the constants ⊤ and ⊥ to denote truth and falsehood, respectively.

Definition 2 (Operator).

An operator a is represented by a triple ⟨name(a), pre(a), eff(a)⟩: name(a) represents the description or signature of a; pre(a) describes the preconditions of a, a set of predicates that must hold in the current state for a to be executed; and eff(a) represents the effects of a. These effects are divided into eff(a)⁺ (i.e., an add list of positive predicates) and eff(a)⁻ (i.e., a delete list of negated predicates).

Operator definitions are used in the construction of a planning domain, which represents the environment dynamics that guide an agent’s search for plans to achieve its goals. An agent can modify the current state by executing actions according to Definition 3.

Definition 3 (Action).

An action is a ground operator instantiated over its free variables. Thus, if all free variables of an operator are substituted by objects when the operator is instantiated, we have an action.

Definition 4 (Planning Domain).

A planning domain definition Ξ is represented by a pair ⟨Σ, A⟩, which specifies the knowledge of the domain and consists of a finite set of facts Σ and a finite set of actions A.

A planning instance comprises both a planning domain and the elements of a planning problem, describing the initial state of the environment and the goal that the agent wishes to achieve, as formalized in Definition 5.

Definition 5 (Planning Instance).

A planning instance is represented by a triple Π = ⟨Ξ, I, G⟩, in which Ξ = ⟨Σ, A⟩ is the domain definition; I ⊆ Σ is the initial state specification, which is defined by specifying the value of every fact in the initial state; and G ⊆ Σ is the goal state specification, which represents a desired state to be reached.

Classical planning representations often separate the definition of I and G as part of a planning problem (to be used together with a domain Ξ). Finally, a plan is the solution to a planning problem, as formalized in Definition 6.

Definition 6 (Plan).

A plan π for a planning instance Π = ⟨Ξ, I, G⟩ is a sequence of actions a₁, a₂, …, aₙ that modifies the initial state I into one in which the goal state G holds by the successive execution of the actions in π. A plan π with length |π| = n is optimal if there exists no other plan π′ for Π such that |π′| < |π|.
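To make these definitions concrete, the following sketch (ours, not the paper's code; names such as apply_action are illustrative) models a state as a set of facts, an action as a STRIPS-style triple of precondition, add, and delete lists, and validates a plan by successive action application:

```python
# A minimal STRIPS-style sketch: states are frozensets of facts; a plan is
# valid if successive application of its actions reaches the goal state.
from typing import NamedTuple, FrozenSet, Sequence

class Action(NamedTuple):
    name: str
    pre: FrozenSet[str]     # pre(a): facts required to execute a
    add: FrozenSet[str]     # eff(a)+: facts added by a
    delete: FrozenSet[str]  # eff(a)-: facts removed by a

def apply_action(state: FrozenSet[str], a: Action) -> FrozenSet[str]:
    assert a.pre <= state, f"{a.name} is not applicable"
    return (state - a.delete) | a.add

def is_valid_plan(initial: FrozenSet[str], goal: FrozenSet[str],
                  plan: Sequence[Action]) -> bool:
    state = initial
    for a in plan:
        if not a.pre <= state:
            return False          # a precondition is violated
        state = apply_action(state, a)
    return goal <= state          # all goal facts hold in the final state

# Example: pick-up then stack in a two-block world.
pick_up_e = Action("pick-up E",
                   frozenset({"clear E", "ontable E", "handempty"}),
                   frozenset({"holding E"}),
                   frozenset({"clear E", "ontable E", "handempty"}))
stack_e_d = Action("stack E D",
                   frozenset({"holding E", "clear D"}),
                   frozenset({"on E D", "clear E", "handempty"}),
                   frozenset({"holding E", "clear D"}))
s0 = frozenset({"clear E", "ontable E", "clear D", "ontable D", "handempty"})
print(is_valid_plan(s0, frozenset({"on E D"}), [pick_up_e, stack_e_d]))  # True
```

Note how the delete list of stack removes (holding E) and (clear D), exactly as in Definition 2.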

2.2 Plan Recognition

Plan recognition is the task of recognizing how agents achieve their goals by observing their interactions in an environment [18]. In plan recognition, such observed interactions are defined as available evidence that can be used to recognize plans. Most plan recognition approaches require knowledge of an agent’s possible plans to represent its typical behavior; in other words, this knowledge provides the recipes (i.e., know-how) for achieving goals. These recipes are often called plan libraries and are used as input by many plan recognition approaches [7, 1]. In this work, however, we use as input a planning domain definition; more specifically, we use the STRIPS [6] fragment of PDDL [12]. We follow Ramírez and Geffner [16, 15] to formally define a plan recognition problem over a planning domain definition as follows.

Definition 7 (Plan Recognition Problem).

A plan recognition problem is a quadruple P = ⟨Ξ, I, 𝒢, O⟩, in which Ξ = ⟨Σ, A⟩ is the domain definition, consisting of a finite set of facts Σ and a finite set of actions A; I represents the initial state; 𝒢 is the set of possible goals, which includes a hidden goal G, such that G ∈ 𝒢; and O = o₁, o₂, …, oₙ is an observation sequence of a plan execution, with each observation oᵢ being an action in the finite set of actions A from the domain definition Ξ. This observation sequence can be full or partial: for a full observation sequence, we observe all actions executed during the agent’s plan execution, whereas for a partial observation sequence, only a sub-sequence of those actions is observed. The solution to this problem is to find the hidden goal G ∈ 𝒢 that the observed plan execution achieves.

3 Extracting Recognition Information from the Planning Domain Definition

In this section, we describe the process through which we extract useful information for plan recognition from a planning domain. First, we describe landmark extraction algorithms from the literature and how we use these algorithms in our approach. Second, we show how we classify facts into partitions from planning action descriptions.

3.1 Extracting Landmarks

In the planning literature, landmarks [9] are defined as necessary features that must be true at some point in every valid plan to achieve a particular goal. Landmarks are often partially ordered according to the sequence in which they must be achieved. Hoffmann et al. [9] define landmarks as follows.

Definition 8 (Landmark).

Given a planning instance Π = ⟨Ξ, I, G⟩, a formula L is a landmark in Π iff L is true at some point along all valid plans that achieve G from I. In other words, a landmark is a formula (e.g., a conjunctive or disjunctive formula) over a set of facts that must be satisfied at some point along every valid plan execution.

For plan recognition problems, landmarks allow us to infer whether a sequence of observations cannot possibly lead to a certain goal. To extract the landmarks of a planning problem, we use two landmark extraction algorithms from the literature: 1) Hoffmann et al. [9] to extract conjunctive landmarks; and 2) Porteous and Cresswell [14] to extract conjunctive and disjunctive landmarks. To represent landmarks and their ordering, these algorithms use a tree in which nodes represent landmarks and edges represent necessary prerequisites between landmarks. Each node in the tree represents a conjunction of facts that must be true simultaneously at some point during plan execution, and the root node is a landmark representing the goal state. These algorithms use a Relaxed Planning Graph (RPG) [3], a leveled graph that ignores the delete-list effects of all actions; consequently, there are no mutex relations in this graph. Once the RPG is built, the algorithm extracts landmark candidates by back-chaining from the RPG level at which all facts of the goal state G are possible and, for each fact in G, checking which facts must be true down to the first level of the RPG. For example, if fact B is a landmark and all actions that achieve B share fact A as a precondition, then A is a landmark candidate. To confirm that a landmark candidate is indeed a landmark, the algorithm builds a new RPG structure by removing the actions that achieve this landmark candidate and checks the solvability of the modified problem (deciding the solvability of a relaxed planning problem using an RPG structure can be done in polynomial time [2]). If the modified problem is unsolvable, then the landmark candidate is a necessary landmark, meaning that the actions that achieve the landmark candidate are necessary to solve the original planning problem.

Hoffmann et al. [9] prove that the process of generating exactly all landmarks and deciding their ordering is PSPACE-complete, which is exactly the complexity of deciding plan existence [4]. Nevertheless, most landmark extraction algorithms extract only a subset of landmarks for a given planning instance in order to do so efficiently. We can therefore monitor landmarks during plan execution to determine which goals a plan is going to achieve and discard candidate goals whose landmarks are not achievable or do not appear as preconditions or effects of actions in the observations.
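The landmark test sketched above can be illustrated in code. The sketch below is our own illustration (helper names are hypothetical) and works entirely under the delete relaxation: a candidate fact is confirmed as a landmark if the goal becomes unreachable once every action achieving the candidate is removed.

```python
# Actions are (pre, add) pairs of fact sets -- delete lists are ignored,
# exactly as in the RPG relaxation described in the text.
def relaxed_reachable(facts, actions, goal):
    """Fixed-point reachability in the delete-free relaxation."""
    reached = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            if pre <= reached and not add <= reached:
                reached |= add
                changed = True
    return goal <= reached

def is_landmark(candidate, initial, actions, goal):
    if candidate in initial:
        return True  # trivially true at time zero
    # Remove every action that achieves the candidate, then re-check.
    pruned = [(pre, add) for (pre, add) in actions if candidate not in add]
    return not relaxed_reachable(initial, pruned, goal)

# Toy chain a -> b -> c: fact "b" is unavoidable on any path to "c".
acts = [(frozenset({"a"}), frozenset({"b"})),
        (frozenset({"b"}), frozenset({"c"}))]
print(is_landmark("b", {"a"}, acts, {"c"}))  # True
```

Because unsolvability in the relaxation implies unsolvability of the original problem, this test is sound: it never labels a non-landmark as a landmark, though it may miss some landmarks.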

3.2 Fact Partitioning

Pattison and Long [13] classify facts into mutually exclusive partitions in order to infer whether certain observations are likely to be goals for goal recognition. Their classification relies on the fact that, in some planning domains, predicates may provide additional information that can be extracted by analyzing preconditions and effects in operator definitions. We use this classification to infer whether certain observations are consistent with a particular goal and, if not, to eliminate that candidate goal. We formally define fact partitions in what follows.

Definition 9 (Strictly Activating).

A fact f is strictly activating if f ∈ I and, for all a ∈ A, f ∉ eff(a)⁺ ∪ eff(a)⁻. Furthermore, there exists at least one a ∈ A such that f ∈ pre(a).

Definition 10 (Unstable Activating).

A fact f is unstable activating if f ∈ I and, for all a ∈ A, f ∉ eff(a)⁺, and there exist a, a′ ∈ A such that f ∈ pre(a) and f ∈ eff(a′)⁻.

Definition 11 (Strictly Terminal).

A fact f is strictly terminal if there exists a ∈ A such that f ∈ eff(a)⁺ and, for all a ∈ A, f ∉ pre(a) and f ∉ eff(a)⁻.

A Strictly Activating fact (Definition 9) appears as a precondition but not as an add or delete effect in any operator definition. This means that, unless defined in the initial state, this fact can never be added or deleted by an operator. An Unstable Activating fact (Definition 10) appears as both a precondition and a delete effect in two operator definitions, so once deleted, this fact cannot be re-achieved. The deletion of an unstable activating fact may prevent a plan execution from achieving a goal. A Strictly Terminal fact (Definition 11) does not appear as a precondition of any operator definition and, once added, cannot be deleted. In some planning domains, this kind of fact is the most likely to appear in the set of goal facts, because once added to the current state, it cannot be deleted and remains true until the final state.

The fact partitions that we can extract depend on the planning domain definition. For example, from the Blocks-World domain (a classical planning domain in which a set of stackable blocks must be re-assembled on a table [8]), it is not possible to extract any fact partitions. However, it is possible to extract fact partitions, such as Strictly Activating and Unstable Activating facts, from the Easy-IPC-Grid domain (in which an agent moves in a grid using keys to open locked locations). Here, we use fact partitions to obtain additional information on fact landmarks. For example, consider an Unstable Activating fact landmark f: if f is deleted from the current state, it cannot be re-achieved. We can then trivially determine that every goal for which f is a landmark is unreachable, because no available action achieves f again.
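The three partitions of Definitions 9–11 can be computed in a single pass over the operator descriptions. The following sketch is our own illustration (not the paper's code); operators are modeled as (pre, add, delete) triples of fact sets:

```python
# Classify facts into the partitions of Definitions 9-11 by scanning where
# each fact occurs across all operator descriptions.
def partition_facts(initial, operators, all_facts):
    sa, ua, st = set(), set(), set()  # Strictly/Unstable Activating, Strictly Terminal
    for f in all_facts:
        in_pre = any(f in pre for pre, _, _ in operators)
        in_add = any(f in add for _, add, _ in operators)
        in_del = any(f in dele for _, _, dele in operators)
        if f in initial and in_pre and not in_add and not in_del:
            sa.add(f)   # usable only if given in the initial state
        if f in initial and in_pre and in_del and not in_add:
            ua.add(f)   # once deleted, never re-achievable
        if in_add and not in_pre and not in_del:
            st.add(f)   # once added, holds until the final state
    return sa, ua, st

# Toy grid-like operator: unlocking a door consumes the key.
ops = [(frozenset({"has-key"}), frozenset({"open"}), frozenset({"has-key"}))]
print(partition_facts({"has-key"}, ops, {"has-key", "open"}))
```

Here (has-key) is Unstable Activating (it is a precondition that gets deleted and is never re-added), while (open) is Strictly Terminal, matching the intuition that such facts often appear in goal specifications.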

4 Filtering Candidate Goals from Landmarks in Observations

Key to our approach to plan recognition is the ability to filter candidate goals based on the evidence of fact landmarks and partitioned facts in preconditions and effects of observed actions in a plan execution. We now present a filtering process that analyzes fact landmarks in preconditions and effects of observed actions, and selects goals, from a set of candidate goals, that have achieved most of their associated landmarks.

This filtering process is detailed in function FilterCandidateGoals of Algorithm 1, which takes as input a plan recognition problem composed of a planning domain definition Ξ, an initial state I, a set of candidate goals 𝒢, a set of observed actions O, and a filtering threshold θ. Our algorithm iterates over the set of candidate goals 𝒢 and, for each goal G in 𝒢, extracts and classifies fact landmarks and fact partitions for G from the initial state I (Lines 4 and 5). We then check whether the observed actions contain fact landmarks or partitioned facts in either their preconditions or effects. At this point, if any Strictly Activating facts for the candidate goal G are not in the initial state I, then the candidate goal is no longer achievable and we discard it (Line 6). Subsequently, we check for Unstable Activating and Strictly Terminal facts of goal G in the preconditions and effects of the observed actions in O, and if we find any, we discard the candidate goal (Line 11). If we observe no facts from these partitions as evidence in the actions in O, we move on to checking landmarks of G within the actions in O. If we observe any landmarks in the preconditions and positive effects of the observed actions (Line 15), we compute the percentage of achieved landmarks for goal G. Since we deal with partial observations of a plan execution, some executed actions may be missing from the observation; thus, whenever we identify a fact landmark, we also infer that its predecessors have been achieved. For example, suppose the set of fact landmarks to achieve a goal from a state is represented by the ordered facts (at A), (at B), (at C), (at D), and we observe just one action during the plan execution, whose effects contain the fact landmark (at C). Based on this partially observed action, we can infer that the predecessors of (at C) were achieved before the observation, and thus we also count them as achieved landmarks.
Given the number of achieved fact landmarks of G, we estimate the percentage of fact landmarks that the observed actions have achieved as the ratio between the number of achieved fact landmarks and the total number of landmarks (Line 21). Finally, after analyzing all candidate goals in 𝒢, we return the goals with the highest percentage of achieved landmarks within our filtering threshold θ (Line 24). Note that, if threshold θ = 0, the filter returns only the goals with maximum completion given the observations. The threshold gives us flexibility when dealing with incomplete observations and sub-optimal plans which, when θ = 0, may cause some potential goals to be filtered out before we get additional observations.

Input: Ξ, planning domain; I, initial state; 𝒢, set of candidate goals; O, observations; and θ, threshold.
Output: A set of filtered candidate goals with the highest percentage of achieved landmarks.

1:function FilterCandidateGoals(Ξ, I, 𝒢, O, θ)
2:      Λ𝒢 := ⟨⟩ ▷ Map goals to % of landmarks achieved.
3:     for each goal G in 𝒢 do
4:          L_G := ExtractLandmarks(Ξ, I, G)
5:          ⟨F_SA, F_UA, F_ST⟩ := PartitionFacts(L_G) ▷ F_SA: Strictly Activating, F_UA: Unstable Activating, F_ST: Strictly Terminal.
6:         if F_SA ⊄ I then
7:              continue ▷ Goal G is no longer possible.
8:         end if
9:          A_G := ∅; discard := false ▷ Achieved landmarks for G.
10:         for each observed action o in O do
11:              if (F_UA ∪ F_ST) ∩ (pre(o) ∪ eff(o)⁻) ≠ ∅ then
12:                   discard := true
13:                  break
14:              else
15:                   L := select all fact landmarks l in L_G such that l ∈ pre(o) ∪ eff(o)⁺
16:                   A_G := A_G ∪ L ∪ {predecessors of every l ∈ L}
17:              end if
18:         end for
19:         if discard then continue ▷ Avoid computing achieved landmarks for G.
20:         end if
21:          Λ𝒢(G) := |A_G| / |L_G| ▷ Percentage of achieved landmarks for G.
22:     end for
23:     maxΛ := max over G′ ∈ 𝒢 of Λ𝒢(G′)
24:     return all G such that Λ𝒢(G) ≥ maxΛ − θ
25:end function
Algorithm 1 Filter candidate goals.
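A compact executable rendering of the core of Algorithm 1 might look as follows. This is our sketch, not the paper's implementation: landmark extraction and predecessor lookup are assumed given as functions, each observation is modeled as a (pre, add) pair of fact sets, and the fact-partition checks of Lines 6 and 11 are omitted for brevity.

```python
# Filter candidate goals by the ratio of achieved landmarks, keeping every
# goal within `theta` of the best ratio (Lines 15-24 of Algorithm 1).
def filter_candidate_goals(landmarks_of, predecessors_of,
                           goals, observations, theta):
    ratios = {}
    for G in goals:
        lms = landmarks_of(G)                       # fact landmarks of G
        achieved = set()
        for pre, add in observations:               # (pre(o), eff(o)+) pairs
            for lm in lms & (pre | add):
                achieved.add(lm)
                achieved |= predecessors_of(G, lm)  # infer earlier landmarks
        ratios[G] = len(achieved) / len(lms)
    best = max(ratios.values())
    return {G for G, r in ratios.items() if r >= best - theta}

# Toy example: goal g1 has landmarks a < b < c < d; observing b also
# implies its predecessor a, so g1 reaches 2/4 while g2 reaches 0/2.
lm_map = {"g1": {"a", "b", "c", "d"}, "g2": {"x", "y"}}
preds = lambda G, f: {"a"} if (G == "g1" and f == "b") else set()
print(filter_candidate_goals(lm_map.get, preds,
                             ["g1", "g2"], [({"b"}, set())], 0.0))  # {'g1'}
```

With theta = 0 only the maximally supported goal survives; raising theta to 0.5 in this example would keep both goals, mirroring the flexibility discussed in the text.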

As an example of how the algorithm filters a set of candidate goals, consider the Blocks-World example shown in Figure 1, which represents an initial configuration of stackable blocks, as well as a set of candidate goals. The candidate goals consist of the following stackable words: BED, DEB, EAR, and RED. Now consider that the following actions have been observed in the plan execution: (stack E D) and (pick-up S). After filtering the set of candidate goals with θ = 0, we have the following filtered goals: BED and RED. Function FilterCandidateGoals returns these goals because the observed action (stack E D) has among its preconditions the fact landmark (and (clear D) (holding E)), and its effects contain (on E D). Consequently, from these landmarks, it is possible to infer evidence for another fact landmark, namely (and (on E A) (clear E) (handempty)), which is inferred because it must be true before (clear D) and (holding E). The observed action (pick-up S) does not provide any evidence for filtering the set of candidate goals. Thus, the estimated percentage of achieved fact landmarks for the filtered candidate goals BED and RED is 75%. Both of these goals have 8 fact landmarks, and based on the evidence in the observed actions, we infer that 6 fact landmarks have been reached, including fact landmarks in the initial state, such as (clear B), (ontable D), and (and (on B C) (clear B) (handempty)) for BED; and (clear R), (ontable D), and (and (clear R) (ontable R) (handempty)) for RED. Regarding goals EAR and DEB, the observations allow us to conclude that, respectively, 3 out of 7 and 2 out of 9 fact landmarks were reached. Figures 2 and 3 show the ordered fact landmarks for the filtered candidate goals BED and RED. Boxes in dark gray show achieved fact landmarks for these goals, while boxes in light gray show inferred fact landmarks.

Figure 1: Blocks-World example.
Figure 2: Fact landmarks for the word BED.
Figure 3: Fact landmarks for the word RED.

5 Heuristic Plan Recognition using Landmarks

We now develop a landmark-based heuristic method that estimates the completion of every goal in the set of filtered goals by analyzing the number of achieved landmarks that the filtering process reports for each goal. Each candidate goal is composed of sub-goals: atomic facts that are part of a conjunction of facts. The estimate represents the percentage of sub-goals in a goal that have been accomplished, based on the evidence of achieved fact landmarks in the observations.

Our heuristic method estimates the percentage of completion towards a goal by using the set of achieved fact landmarks provided by the filtering process (Algorithm 1, Line 15). We aggregate the percentage of completion of each sub-goal into an overall percentage of completion over all facts of a candidate goal. This heuristic, denoted h_gc(G), is computed by the formula below, where AL_g is the number of landmarks achieved in the observations for the sub-goal g of candidate goal G, and L_g is the number of landmarks necessary to achieve that sub-goal:

    h_gc(G) = ( Σ_{g ∈ G} (AL_g / L_g) ) / |G|     (1)

Thus, the heuristic h_gc estimates the completion of a goal G as the ratio between the sum of the percentages of completion of every sub-goal g ∈ G, i.e., Σ_{g ∈ G} (AL_g / L_g), and the number of sub-goals |G|.

To exemplify how the heuristic estimates goal completion, recall the Blocks-World example from Figure 1. For the BED goal, the sub-goals (shown at the top of Figure 2) are: (clear B), (on B E), (on E D), and (ontable D). Based on the observed actions (stack E D) and (pick-up S), we conclude that sub-goals (clear B) and (ontable D) have already been achieved because they are in the initial state, and the observed actions do not delete either of those facts. Although fact (clear B) in the initial state does not correspond to the final configuration of goal BED, we account for this fact in the heuristic calculation, since we consider all observed evidence. At this point, our heuristic computes that 50% of goal BED has been accomplished. However, for this goal, there is more information to be considered when calculating the percentage of completion. The observed actions have achieved fact landmarks that correspond to the sub-goal (on E D), namely preconditions and effects of the observed action (stack E D): (and (clear D) (holding E)) and (on E D). Therefore, we infer that fact landmark (and (on E A) (clear E) (handempty)) has been achieved, because it must be true before fact landmark (and (clear D) (holding E)). For the sub-goal (on B E), the initial state provides the evidence of the following fact landmark: (and (on B C) (clear B) (handempty)). The observed action (pick-up S) does not provide any evidence for the goal BED. Thus, the heuristic estimates from the evidence of landmarks in the observed actions that the percentage of completion for the goal BED is 83.3%: the sub-goal completions are (clear B) = 100%, (on B E) ≈ 33.3%, (on E D) = 100%, and (ontable D) = 100%, and their average is approximately 83.3%. Note that, by varying the threshold θ in the filter of Algorithm 1, we increase the number of candidate goals for which we must compute the heuristic. However, since the heuristic is linear in the number of predicates in a goal, increasing the number of candidate goals has virtually no impact on computational complexity.
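The heuristic h_gc can be implemented directly from Equation 1. In the sketch below (ours; the landmark identifiers l1–l6 are placeholders, not the actual Blocks-World landmarks), each sub-goal maps to its set of required landmarks:

```python
# h_gc(G): mean, over the sub-goals g of G, of achieved landmarks AL_g
# divided by required landmarks L_g (Equation 1).
def h_gc(subgoal_landmarks, achieved):
    """subgoal_landmarks: dict mapping each sub-goal to its landmark set;
    achieved: set of landmarks observed (or inferred) so far."""
    ratios = [len(lms & achieved) / len(lms)
              for lms in subgoal_landmarks.values()]
    return sum(ratios) / len(ratios)

# Mirrors the BED example: three sub-goals fully supported, one sub-goal
# ((on B E)) with one of its three landmarks achieved -> about 83.3%.
bed = {"(clear B)":   {"l1"},
       "(on B E)":    {"l2", "l3", "l4"},
       "(on E D)":    {"l5"},
       "(ontable D)": {"l6"}}
print(round(h_gc(bed, {"l1", "l2", "l5", "l6"}), 3))  # 0.833
```

Because the estimate is a simple average of per-sub-goal ratios, its cost is linear in the number of sub-goal facts, which is why raising the filtering threshold adds negligible overhead.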

6 Landmark-based Plan Recognition

We now bring together the techniques from Sections 4 and 5 into our landmark-based plan recognition approach, which uses the presented filter and heuristic for recognizing goals and plans. Our plan recognition approach is detailed in Algorithm 2. This algorithm takes as input a plan recognition problem P = ⟨Ξ, I, 𝒢, O⟩ and works in two stages. In the first stage, it filters the candidate goals using Algorithm 1, which returns the candidate goals with the highest percentage of achieved landmarks within a given threshold θ. In the second stage, from the filtered candidates, it uses our landmark-based heuristic (Equation 1) to return the recognized goal(s) by estimating the percentage of completion using the set of achieved fact landmarks provided by the filter.

Input: Ξ, planning domain; I, initial state; 𝒢, set of candidate goals; O, observations; and θ, threshold.
Output: Recognized goal(s).

1:function Recognize(Ξ, I, 𝒢, O, θ)
2:      F𝒢 := FilterCandidateGoals(Ξ, I, 𝒢, O, θ) ▷ Filtered candidate goals.
3:     return argmax over G ∈ F𝒢 of h_gc(G) ▷ Goal(s) with highest estimated completion.
4:end function
Algorithm 2 Recognize goals and plans using the filtering method and the landmark-based heuristic.
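The two-stage structure of Algorithm 2 reduces to composing the filter with the heuristic. A minimal sketch (ours; the filter and heuristic are passed in as functions with assumed signatures, so this is wiring rather than a full implementation):

```python
# Stage 1: filter candidate goals; stage 2: rank the survivors by the
# goal-completion heuristic and return the top-scoring goal(s).
def recognize(filter_candidate_goals, h_gc, problem, theta):
    filtered = filter_candidate_goals(problem, theta)   # Algorithm 1
    scores = {G: h_gc(problem, G) for G in filtered}    # Equation 1
    top = max(scores.values())
    return {G for G, s in scores.items() if s == top}   # recognized goal(s)

# Toy wiring: two goals survive the filter; the heuristic prefers "A".
flt = lambda p, t: ["A", "B"]
h = lambda p, G: {"A": 0.8, "B": 0.5}[G]
print(recognize(flt, h, None, 0.1))  # {'A'}
```

Returning a set (rather than a single goal) reflects that several goals may tie for the highest estimated completion, in which case the recognizer reports all of them.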

7 Experiments and Evaluation

[Table 1 spans this page. For each of the six evaluation domains, it reports, per observability level (%Obs), the average number of candidate goals and observed actions, and the recognition time (in seconds) and accuracy of our landmark-based approach (under thresholds θ = 0 / 10 / 20 / 30%), of R&G, and of Filter + R&G. The per-domain row labels and column alignment of the original table did not survive extraction, so the raw figures are not reproduced here.]
Table 1: Comparison of our landmark-based approach against the approach of Ramírez and Geffner [16]. R&G denotes their plan recognition approach, and Filter + R&G denotes the same approach using our filtering algorithm. For the experiments with the Kitchen domain we use disjunctive landmarks.

In this section, we describe the experiments and evaluation we carried out on our landmark-based plan recognition approach against state-of-the-art techniques. For the experiments, we use six domains from datasets provided by Ramírez and Geffner [16, 15], comprising hundreds of problems. We summarize these domains as follows.

  • Blocks-world domain consists of a set of blocks, a table, and a robot hand. Blocks can be stacked on top of other blocks or on the table. A block that has nothing on it is clear. The robot hand can hold one block or be empty. The goal is to find a sequence of actions that achieves a final configuration of blocks;

  • Campus domain consists of finding what activity is being performed by a student from his observations on campus environment;

  • Easy-IPC-Grid domain consists of an agent that moves in a grid from connected cells to others by transporting keys in order to open locked locations;

  • Intrusion-Detection represents a domain where a hacker tries to access, vandalize, steal information, or perform a combination of these attacks on a set of servers;

  • Kitchen is a domain that consists of home-activities, in which the goals can be preparing dinner, breakfast, among others; and

  • Logistics is a domain which models cities, each of which contains locations, some of them airports. For transporting packages between locations, there are trucks and airplanes. Trucks can drive between cities, and airplanes can fly between airports. The goal is to pick up and transport packages from their origin locations to destination locations.

These domains contain hundreds of plan recognition problems, i.e., a domain description, an initial state, a set of candidate goals 𝒢, a hidden goal G in 𝒢, and an observation sequence O. An observation sequence contains actions representing an optimal or sub-optimal plan that achieves the hidden goal G, and this observation sequence can be full or partial. A full observation sequence represents the whole plan for the hidden goal G, i.e., 100% of the actions having been observed. A partial observation sequence represents a plan for the hidden goal G with 10%, 30%, 50%, or 70% of its actions having been observed. Our experiments use two metrics: the accuracy of the recognition and the time to recognize a goal. We compare our approach against two others: the approach of Ramírez and Geffner [16] (specifically, their fastest and most accurate variant), and a combination of their approach with our filter.

Figure 4: ROC curve for the Blocks-World domain.
Figure 5: ROC curve for the Campus domain.
Figure 6: ROC curve for the Easy-IPC-Grid domain.
Figure 7: ROC curve for the Intrusion-Detection domain.
Figure 8: ROC curve for the Kitchen domain.
Figure 9: ROC curve for the Logistics domain.

For evaluation, we use the accuracy metric (true positive rate), which represents how well a hidden goal is recognized from a set of possible goals for a given plan recognition problem, as well as recognition time (in seconds), which represents how long it takes for a hidden goal to be recognized given a plan recognition problem. In the Blocks-World domain, accuracy measures how well these approaches recognize, from observations, the word that is being assembled. In the Campus domain, how well they recognize the activity being performed by an observed student. In the Easy-IPC-Grid domain, how accurately they recognize the cell to which keys are being transported by the observed agent. In the Intrusion-Detection domain, how accurately they recognize, from observations, the type of attack and the servers being hacked. In the Kitchen domain, how well they recognize the meal being prepared. In the Logistics domain, how accurately they recognize the locations to which packages are being transported. Besides the accuracy metric, we use the Receiver Operating Characteristic (ROC) curve. A ROC curve graphically shows the performance of a binary classifier by plotting the true positive rate against the false positive rate at various threshold settings (in this paper, we evaluate plan recognition approaches in this way). More specifically, we use the ROC curve to compare not only true positive predictions (i.e., accuracy) but also the false positive ratios of the evaluated plan recognition approaches.
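To make the ROC evaluation concrete, a single point in ROC space can be computed from per-goal binary decisions, treating each candidate goal of each problem as one prediction (recognized as the hidden goal or not). This is a generic sketch of the standard TPR/FPR computation, not the paper's evaluation code:

```python
def roc_point(results):
    """Compute one ROC-space point (FPR, TPR).

    Each result is a pair (predicted, is_true_goal): whether the
    recognizer flagged the candidate goal, and whether it really
    was the hidden goal.
    """
    tp = sum(1 for pred, truth in results if pred and truth)
    fn = sum(1 for pred, truth in results if not pred and truth)
    fp = sum(1 for pred, truth in results if pred and not truth)
    tn = sum(1 for pred, truth in results if not pred and not truth)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return fpr, tpr

# A recognizer that returns the true goal plus one false positive
# over four candidate goals:
point = roc_point([(True, True), (True, False), (False, False), (False, False)])
assert point == (1/3, 1.0)
```

A perfect recognizer lands at (0, 1), the upper left corner of ROC space; a random guess falls on the diagonal.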

Table 1 compares the results for the three plan recognition approaches, showing the total number of plan recognition problems used under each domain name. For each domain we show the number of candidate goals |G| and varying percentages of the plan that is actually observed, as well as the average number of observed actions per problem |O|. Note that, for partial observations, random observed actions are removed (up to the set percentage), but the order of the remaining actions is maintained. |L| denotes the average number of fact landmarks extracted for each domain. For each approach, we compute the time to recognize the hidden goal (in seconds), given the observations, and the accuracy with which the approaches correctly infer the goal. For our landmark-based plan recognition approach, we show the accuracy under different filtering thresholds (0%, 10%, 20%, and 30%). If the threshold is 0%, our approach allows no flexibility when filtering candidate goals, returning only the goals with the highest percentage of achieved landmarks. Each row of this table shows the observability (% Obs) and averages of the number of candidate goals |G|, the number of observed actions |O|, recognition time, and accuracy. From this table, it is possible to see that our landmark-based plan recognition approach is both faster and more accurate than that of Ramírez and Geffner [16], and, when we combine their algorithm with our filter, the resulting approach obtains a substantial speedup. Importantly, as we increase the threshold, our plan recognition approach quickly surpasses the state of the art in all tested domains. We note that the measured recognition time using our filter also includes the time to compute landmarks, since landmark computation is performed online (i.e., during the process of plan recognition). Thus, even though landmark extraction has a high worst-case complexity, in our experience computing landmarks (especially conjunctive ones) is very fast.
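The filtering step with threshold θ can be sketched as follows. This is our reading of the method with hypothetical names: each candidate goal is scored by the fraction of its landmarks achieved in the observations, and only goals within θ of the top score survive the filter (θ = 0 keeps just the top-scoring goals):

```python
def filter_candidate_goals(achieved_landmarks, total_landmarks, threshold=0.0):
    """Keep the candidate goals whose fraction of achieved landmarks
    is within `threshold` of the best-scoring candidate.

    achieved_landmarks maps each goal to the number of its landmarks
    achieved in the observations; total_landmarks maps each goal to
    the number of landmarks extracted for it.
    """
    scores = {g: achieved_landmarks[g] / total_landmarks[g]
              for g in total_landmarks}
    best = max(scores.values())
    return {g for g, s in scores.items() if s >= best - threshold}

achieved = {"g1": 4, "g2": 3, "g3": 1}
total = {"g1": 4, "g2": 3, "g3": 5}
assert filter_candidate_goals(achieved, total, 0.0) == {"g1", "g2"}
assert filter_candidate_goals(achieved, total, 0.2) == {"g1", "g2"}
```

Raising the threshold trades precision for recall: more candidate goals survive, which reduces the chance of filtering out the hidden goal at the cost of more false positives.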

Table 1 shows that both our landmark-based plan recognition approach and that of Ramírez and Geffner [16] yield near-perfect accuracy for recognizing goals and plans in all planning domains. However, by using ROC curves we highlight the trade-off between true positive and false positive results for these plan recognition approaches. Figures 4-9 show the ROC curves for the six planning domains we use. In a ROC curve, the diagonal line represents a random guess at recognizing a goal from observations. This diagonal line divides the ROC space: points above the diagonal represent good classification results (better than random), whereas points below the line represent poor results (worse than random). The best possible (perfect) prediction for recognizing goals is a point in the upper left corner of the ROC space (i.e., coordinate x = 0 and y = 100). Thus, the closer a plan recognition approach (point) gets to the upper left corner, the better it is at recognizing goals and plans. Blue, green, red, and yellow points represent our plan recognition approach with varying thresholds (0%, 10%, 20%, and 30%), and five different symbols represent the percentage of observability (10%, 30%, 50%, 70%, and 100%) of the observed plan. Black points represent Ramírez and Geffner's [16] approach (R&G). According to the ROC curves in Figures 5, 7, 8, and 9, all variations (using different thresholds) of our landmark-based plan recognition approach yield good (sometimes perfect) predictions for recognizing goals and plans, much like R&G, which is also near perfect in these four domains. Figure 4 shows that the results for the Blocks-World domain are quite scattered in the ROC space, so recognizing goals and plans in this domain is difficult.
Nevertheless, it is possible to see that our approach (using thresholds between 10% and 20%) is not only competitive with R&G, with superior accuracy, but also at least 8.75 times faster. Finally, Figure 6 shows that in the Easy-IPC-Grid domain our approach is very competitive with R&G, again with consistently higher accuracy, and is also near perfect with respect to false positives, surpassing R&G under the different thresholds.

8 Related Work

Ramírez and Geffner [16] propose planning-based approaches for plan recognition that, instead of using plan libraries, model the problem as a planning domain theory with respect to a known set of candidate goals. Their work uses a heuristic, an optimal planner, and a modified sub-optimal planner to determine the distance to every goal in a set of goals given an observation sequence. Follow-up work [15] proposes a probabilistic plan recognition approach using off-the-shelf planners. These approaches yield high accuracy in most domains; however, this accuracy is lower than that of our threshold-based approaches, and their recognition time ranges from twice as slow as ours to an order of magnitude slower. Pattison and Long [13] propose AUTOGRAPH (AUTOmatic Goal Recognition with A Planning Heuristic), a probabilistic heuristic-based goal recognition approach over planning domains. AUTOGRAPH uses heuristic estimation and domain analysis to determine which goals an agent is pursuing. Although we adapt their fact partitions, their problem definition differs formally from ours, preventing a direct comparison. Keren et al. [11] present an alternative view of the goal and plan recognition problem, using planning techniques to assist in the design of goal and plan recognition problems. Most recently, E.-Martín et al. [5] propose a planning-based plan recognition approach that propagates cost and interaction information in a plan graph and uses this information to estimate goal probabilities over the set of candidate goals. Although our landmark-based plan recognition approach has no probabilistic interpretation, its accuracy appears to be higher in the same domains.

9 Conclusion

We have developed an approach for plan recognition that relies on planning landmarks and a new heuristic based on these landmarks. Landmarks provide key information about what cannot be avoided to achieve a goal, and we show that they can be used efficiently for highly accurate plan recognition. We have shown empirically that our approach yields not only superior accuracy but also substantially faster recognition times than the state of the art [16] in all evaluated domains, at varying levels of observation completeness.

Our experiments show that in at least one domain, disjunctive landmarks have a positive effect on accuracy with minimal effect on recognition time, whereas in other domains these landmarks are either not present or yield almost no gain in accuracy at a substantial loss of speed. Knowledge of the domains leads us to believe that disjunctive landmarks are most useful in domains where observed plans may be sub-optimal, such as the Kitchen domain. Conversely, disjunctive landmarks slow down recognition in domains with multiple mutually exclusive plans towards the same goal, such as the Easy-IPC-Grid domain, in which the agent moves in a grid.

We intend to explore multiple avenues for future work. First, we aim to evaluate other planning techniques, such as heuristics and symmetries in classical planning [17]. Second, we intend to explore other landmark extraction algorithms to obtain additional information from planning domains, such as temporal landmarks [10]. Third, we aim to develop a probabilistic interpretation of the observed landmarks and compare the resulting probabilities (and probabilistic accuracy) to the recent work of E.-Martín et al. [5]. Finally, for domains whose goals share landmarks, we can use measures of information gain to weigh observations and help break ties when multiple goals remain after the filter. Given the computational complexity of landmark extraction in the general case, we aim to theoretically analyze the trade-off between landmark completeness and runtime efficiency.


  • [1] Dorit Avrahami-Zilberbrand and Gal A. Kaminka, ‘Fast and Complete Symbolic Plan Recognition’, in IJCAI 2005, pp. 653–658, (2005).
  • [2] Avrim L. Blum and Merrick L. Furst, ‘Fast Planning Through Planning Graph Analysis’, Artificial Intelligence, 90(1-2), 281–300, (February 1997).
  • [3] Daniel Bryce and Subbarao Kambhampati, ‘A Tutorial on Planning Graph Based Reachability Heuristics’, AI Magazine, 28(1), 47–83, (2007).
  • [4] Tom Bylander, ‘The Computational Complexity of Propositional STRIPS Planning’, Artificial Intelligence, 69(1-2), 165–204, (1994).
  • [5] Yolanda E.-Martín, María D. R.-Moreno, and David E. Smith, ‘A Fast Goal Recognition Technique Based on Interaction Estimates’, in IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pp. 761–768, (2015).
  • [6] Richard E. Fikes and Nils J. Nilsson, ‘STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving’, Artificial Intelligence, 2(3-4), 189–208, (1971).
  • [7] Christopher W. Geib and Robert P. Goldman, ‘Partial Observability and Probabilistic Plan/Goal Recognition’, in In Proceedings of the 2005 International Workshop on Modeling Others from Observations (MOO-2005), (2005).
  • [8] Malik Ghallab, Dana S. Nau, and Paolo Traverso, Automated Planning - Theory and Practice., Elsevier, 2004.
  • [9] Jörg Hoffmann, Julie Porteous, and Laura Sebastia, ‘Ordered Landmarks in Planning’, Journal of Artificial Intelligence Research (JAIR), 22(1), 215–278, (November 2004).
  • [10] Erez Karpas, David Wang, Brian C. Williams, and Patrik Haslum, ‘Temporal Landmarks: What Must Happen, and When’, in ICAPS 2015, Jerusalem, Israel, June 7-11, 2015, pp. 138–146, (2015).
  • [11] Sarah Keren, Avigdor Gal, and Erez Karpas, ‘Goal recognition design’, in Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling, ICAPS 2014, Portsmouth, New Hampshire, USA, June 21-26, 2014, (2014).
  • [12] Drew McDermott, Malik Ghallab, Adele Howe, Craig Knoblock, Ashwin Ram, Manuela Veloso, Daniel Weld, and David Wilkins, ‘PDDL The Planning Domain Definition Language’, (1998).
  • [13] David Pattison and Derek Long, ‘Domain Independent Goal Recognition.’, in STAIRS, ed., Thomas Ågotnes, volume 222 of Frontiers in Artificial Intelligence and Applications, pp. 238–250. IOS Press, (2010).
  • [14] J. Porteous and S. Cresswell, ‘Extending Landmarks Analysis to Reason about Resources and Repetition’, in Proceedings of the 21st Workshop of the UK Planning and Scheduling Special Interest Group (PLANSIG ’02), (2002).
  • [15] Miquel Ramírez and Hector Geffner, ‘Probabilistic Plan Recognition Using Off-the-Shelf Classical Planners’, in AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010, (2010).
  • [16] Miquel Ramírez and Hector Geffner, ‘Plan Recognition as Planning’, in IJCAI 2009, ed., Craig Boutilier, pp. 1778–1783, (2009).
  • [17] Alexander Shleyfman, Michael Katz, Malte Helmert, Silvan Sievers, and Martin Wehrle, ‘Heuristics and Symmetries in Classical Planning’, in AAAI 2015, January 25-30, 2015, Austin, Texas, USA, pp. 3371–3377, (2015).
  • [18] Gita Sukthankar, Robert P. Goldman, Christopher Geib, David V. Pynadath, and Hung Hai Bui, Plan, Activity, and Intent Recognition: Theory and Practice, Elsevier, 2014.