Discovering Underlying Plans Based on Shallow Models

Hankz Hankui Zhuo, Sun Yat-Sen University, zhuohank@mail.sysu.edu.cn
Yantian Zha, Arizona State University, yantian.zha@asu.edu
Subbarao Kambhampati, Arizona State University, rao@asu.edu
Abstract

Plan recognition aims to discover target plans (i.e., sequences of actions) behind observed actions, with historical plan libraries or domain models in hand. Previous approaches either discover plans by maximally “matching” observed actions to plan libraries, assuming target plans are from plan libraries, or infer plans by executing domain models to best explain the observed actions, assuming that complete domain models are available. In real-world applications, however, target plans are often not from plan libraries, and complete domain models are often not available, since building complete sets of plans and complete domain models is often difficult or expensive. In this paper we view plan libraries as corpora and learn vector representations of actions using the corpora; we then discover target plans based on the vector representations. Specifically, we propose two approaches, DUP and RNNPlanner, to discover target plans based on vector representations of actions. DUP explores an EM-style framework to capture local contexts of actions and discover target plans by optimizing the probability of target plans, while RNNPlanner aims to leverage long- and short-term contexts of actions based on a recurrent neural network (RNN) framework to help recognize target plans. In the experiments, we empirically show that our approaches are capable of discovering underlying plans that are not from plan libraries, without requiring domain models to be provided. We demonstrate the effectiveness of our approaches by comparing their performance to traditional plan recognition approaches in three planning domains. We also compare DUP and RNNPlanner to see their advantages and disadvantages.

Keywords:
Plan Recognition · Distributed Representation · Shallow Model · AI Planning · Action Model Learning

1 Introduction

As computer-aided cooperative work scenarios become increasingly popular, human-in-the-loop planning and decision support has become a critical planning challenge (c.f. cacm-sketch-plan (); woogle (); ai-mix ()). An important aspect of such support aaai-hilp-tutorial () is recognizing what plans the human in the loop is making, and providing appropriate suggestions about their next actions DBLP:conf/uai/AlbrechtR15 (). Although there is a lot of work on plan recognition, much of it has traditionally depended on the availability of a complete domain model geffner-ramirez (); conf/nips/hankz12 (); DBLP:journals/ai/ZhuoK17 (). As has been argued elsewhere aaai-hilp-tutorial (), such models are hard to get in human-in-the-loop planning scenarios. Here, the decision support systems have to make themselves useful without insisting on complete action models of the domain. The situation here is akin to that faced by search engines and other tools for computer-supported cooperative work, and is thus a significant departure from the “planning as pure inference” mindset of the automated planning community. As such, the problem calls for plan recognition with “shallow” models of the domain (c.f. rao-model-lite ()), that can be easily learned automatically. Compared to learning action models (“complex” models, correspondingly) of the domain from limited training data, learning shallow models can avoid the overfitting issue. One key difference between “shallow” and “complex” models is the number of parameters, which is analogous to model complexity in the machine learning community: complex models with many parameters require much more training data to learn parameter values than “shallow” models do.

There has been very little work on learning such shallow models to support human-in-the-loop planning. Some examples include the work on Woogle system woogle () that aimed to provide support to humans in web-service composition. That work however relied on very primitive understanding of the actions (web services in their case) that consisted merely of learning the input/output types of individual services. In this paper, we focus on learning more informative models that can help recognize the plans under construction by the humans, and provide active support by suggesting relevant actions. To drive this process, we propose two approaches to learning informative models, namely DUP, standing for Discovering Underlying Plans based on action-vector representations, and RNNPlanner, standing for Recurrent Neural Network based Planner. The framework of DUP and RNNPlanner is shown in Figure 1, where we take as input a set of plans (or a plan library) and learn the distributed representations of actions (namely action vectors). After that, our DUP approach exploits an EM-Style framework to discover underlying plans based on the learnt action vectors, while our RNNPlanner approach exploits an RNN-Style framework to generate plans to best explain observations (i.e., discover underlying plans behind the observed actions) based on the learnt action vectors. In DUP we consider local contexts (with a limited window size) of actions being recognized, while in RNNPlanner we explore the potential influence from long and short-term actions, which can be modelled by RNN, to help recognize unknown actions.

Figure 1: The framework of our shallow models DUP and RNNPlanner

In summary, the contributions of the paper are shown below.

  1. In DBLP:conf/atal/TianZK16 (), we presented a version of DUP. In this paper we extend DBLP:conf/atal/TianZK16 () with more details to elaborate the approach.

  2. We propose a novel model RNNPlanner based on RNN to explore the influence of actions from long and short-term contexts.

  3. We compare RNNPlanner to DUP to exhibit the advantage and disadvantage of leveraging information from long and short-term contexts.

In the sequel, we first formulate our plan recognition problem, and then address the details of our approaches DUP and RNNPlanner. After that, we empirically demonstrate that they do capture a surprising amount of structure in the observed plan sequences, leading to effective plan recognition. We further compare their performance to traditional plan recognition techniques, including one that uses the same plan traces to learn STRIPS-style action models and uses the learned models to support plan recognition. We also compare RNNPlanner with DUP to see the advantages and disadvantages of leveraging long- and short-term contexts of actions in different scenarios. We finally review previous approaches related to our work and conclude the paper with further work.

2 Problem Formulation

A plan library, denoted by $\mathcal{L}$, is composed of a set of plans $\{p\}$, where $p$ is a sequence of actions, i.e., $p = \langle a_1, a_2, \ldots, a_n \rangle$, where $a_i$, $1 \le i \le n$, is an action name (without any parameter) represented by a string. For example, the string unstack-A-B is an action meaning that a robot unstacks block A from block B. We denote the set of all possible actions by $\bar{A}$, which is assumed to be known beforehand. For ease of presentation, we assume that there is an empty action, $\phi$, indicating an unknown or unobserved action, i.e., $\phi \notin \bar{A}$. An observation of an unknown plan $\tilde{p}$ is denoted by $O = \langle o_1, o_2, \ldots, o_M \rangle$, where $o_i$, $1 \le i \le M$, is either an action in $\bar{A}$ or the empty action $\phi$ indicating that the corresponding action is missing or not observed. Note that $\tilde{p}$ is not necessarily in the plan library $\mathcal{L}$, which makes the plan recognition problem more challenging, since matching the observation to the plan library will not work any more.

We assume that the human is making a plan of at most length $M$. We also assume that, at any given point, the planner is able to observe a subset of these $M$ actions. The unobserved actions might either be in the suffix of the plan, or in the middle. Our aim is to suggest, for each of the unobserved actions, a set of $K$ possible choices from which the user can select the action. (Note that we would like to keep $K$ small, ideally close to 1, so as not to overwhelm users.) Accordingly, we will evaluate the effectiveness of the decision support in terms of whether or not the user’s best/intended action is within the $K$ suggested actions.

Specifically, our recognition problem can be represented by a triple $\Re = \langle \mathcal{L}, O, \bar{A} \rangle$. The solution to $\Re$ is to discover the unknown plan $\tilde{p}$, i.e., the plan underlying the partially observed actions, that best explains $O$ given $\mathcal{L}$ and $\bar{A}$. We make the following assumptions A1-A3:

  • The length of the underlying plan to be discovered is known, which frees us from searching over plans of unbounded length.

  • The positions of missing actions in the underlying plan are known in advance, which frees us from searching for missing actions in between observed actions.

  • All actions observed are assumed to be correct, which indicates there is no need to criticize or rectify the observed actions.

An example of our plan recognition problem in the blocks domain (http://www.cs.toronto.edu/aips2000/) is shown below.

Example: A plan library in the blocks domain is assumed to have four plans as shown below:

plan 1: pick-up-B stack-B-A pick-up-D stack-D-C
plan 2: unstack-B-A put-down-B unstack-D-C put-down-D
plan 3: pick-up-B stack-B-A pick-up-C stack-C-B pick-up-D stack-D-C
plan 4: unstack-D-C put-down-D unstack-C-B put-down-C unstack-B-A put-down-B

An observation of action sequence is shown below:

observation: pick-up-B φ unstack-D-C put-down-D φ stack-C-B φ φ

Given the above input, our DUP algorithm outputs a plan as follows:

pick-up-B stack-B-A unstack-D-C put-down-D pick-up-C stack-C-B pick-up-D stack-D-C
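
For concreteness, the example above can be encoded as plain Python data. The following is only an illustrative sketch of the inputs (the plan library, the action vocabulary, and an observation with unobserved positions), not part of the algorithms themselves; `None` stands in for the empty action φ.

```python
# A minimal, illustrative encoding of the plan recognition input.
# Each plan is a list of action-name strings; None marks an unobserved action (the empty action).
plan_library = [
    ["pick-up-B", "stack-B-A", "pick-up-D", "stack-D-C"],
    ["unstack-B-A", "put-down-B", "unstack-D-C", "put-down-D"],
    ["pick-up-B", "stack-B-A", "pick-up-C", "stack-C-B", "pick-up-D", "stack-D-C"],
    ["unstack-D-C", "put-down-D", "unstack-C-B", "put-down-C", "unstack-B-A", "put-down-B"],
]

# The action vocabulary is the set of all actions appearing in the library.
vocabulary = sorted({a for plan in plan_library for a in plan})

# The observation: four actions observed, four unobserved (None).
observation = ["pick-up-B", None, "unstack-D-C", "put-down-D", None, "stack-C-B", None, None]
```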

Although the “plan completion” problem seems to differ superficially from the traditional “plan recognition” problem, we point out that many earlier works on plan recognition do in fact evaluate their recognition algorithms in terms of completion tasks, e.g., cof/ijcai/Ramirez09 (); cof/ijcai/hankz11 (); conf/nips/hankz12 (). While these earlier efforts use different problem settings, taking either a plan library or action models as input, they share one common characteristic: they all aim to look for a plan that can best explain (or complete) the observed actions, which is exactly the problem we aim to solve.

3 Learning the distributed representations of actions

Since actions are denoted by a name string, actions can be viewed as words, and a plan can be viewed as a sentence. Furthermore, the plan library can be seen as a corpus, and the set of all possible actions is the vocabulary. Given a plan corpus, we can exploit off-the-shelf approaches, e.g., the Skip-gram model word2vec (), for learning vector representations for actions.

The objective of the Skip-gram model is to learn vector representations for predicting the surrounding words in a sentence or document. Given a corpus $C$, composed of a sequence of training words $\langle w_1, w_2, \ldots, w_T \rangle$, where $T$ is the size of the corpus, the Skip-gram model maximizes the average log probability

$$\frac{1}{T}\sum_{t=1}^{T}\ \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t) \qquad (1)$$

where $c$ is the size of the training window or context.

The basic probability $p(w_{t+j} \mid w_t)$ is defined by the hierarchical softmax, which uses a binary tree representation of the output layer with the words as its leaves and, for each node, explicitly represents the relative probabilities of its child nodes word2vec (). For each leaf node, there is a unique path from the root to the node, and this path is used to estimate the probability of the word represented by the leaf node. There are no explicit output vector representations for words. Instead, each inner node $n$ has an output vector $v'_n$, and the probability of a word $w$ being the output word is defined by

$$p(w \mid w_I) = \prod_{i=1}^{L(w)-1} \sigma\Big( \mathbb{I}\big(n(w,i+1) = \mathrm{child}(n(w,i))\big) \cdot {v'_{n(w,i)}}^{\top} v_{w_I} \Big) \qquad (2)$$

where $\sigma(x) = 1/(1+\exp(-x))$, $L(w)$ is the length of the path from the root to the word $w$ in the binary tree, e.g., $L(w) = 4$ if there are four nodes from the root to $w$. $n(w,i)$ is the $i$th node on the path from the root to $w$, e.g., $n(w,1) = \mathrm{root}$ and $n(w,L(w)) = w$. $\mathrm{child}(n)$ is a fixed child (e.g., the left child) of node $n$. $v'_n$ is the vector representation of the inner node $n$. $v_{w_I}$ is the input vector representation of word $w_I$. The indicator function $\mathbb{I}(x)$ is 1 if $x$ is true; otherwise it is -1.

We can thus build vector representations of actions by maximizing Equation (1) with corpora, or plan libraries, as input. We will exploit the vector representations to discover the unknown plan $\tilde{p}$ in the following sections.
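
As a concrete illustration, the Skip-gram training described above can be reproduced with an off-the-shelf word2vec implementation. The sketch below uses gensim (assumed to be version 4.x), with hierarchical softmax enabled to match Equation (2); the hyperparameter values are placeholders rather than the settings used in our experiments.

```python
from gensim.models import Word2Vec

# plan_library: a list of plans, each plan a list of action-name strings (see Section 2).
plan_library = [
    ["pick-up-B", "stack-B-A", "pick-up-D", "stack-D-C"],
    ["unstack-B-A", "put-down-B", "unstack-D-C", "put-down-D"],
]

# Train Skip-gram (sg=1) with hierarchical softmax (hs=1) on the plan corpus;
# window=3 mirrors the context size used later in the experiments.
model = Word2Vec(
    sentences=plan_library,
    vector_size=50,   # dimensionality of the action vectors (placeholder value)
    window=3,
    sg=1,
    hs=1,
    negative=0,       # disable negative sampling since hierarchical softmax is used
    min_count=1,
    epochs=100,
)

vec = model.wv["stack-B-A"]                            # learnt action vector
similar = model.wv.most_similar("stack-B-A", topn=5)   # nearest actions in vector space
```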

4 Our DUP Algorithm

Our DUP approach to the recognition problem functions in two phases. We first learn vector representations of actions using the plan library $\mathcal{L}$. We then iteratively sample actions for unobserved actions by maximizing the probability of the unknown plan via an EM framework. We present DUP in detail in the following subsections.

4.1 Maximizing Probability of Unknown Plans

With the vector representations learnt in the last section, a straightforward way to discover the unknown plan $\tilde{p}$ is to explore all possible actions in $\bar{A}$ such that $\tilde{p}$ has the highest probability, which can be defined similarly to Equation (1), i.e.,

$$F(\tilde{p}) = \frac{1}{M}\sum_{i=1}^{M}\ \sum_{-c \le j \le c,\, j \ne 0} \log p(a_{i+j} \mid a_i) \qquad (3)$$

where $a_i$ denotes the $i$th action of $\tilde{p}$ and $M$ is the length of $\tilde{p}$. As we can see, this approach is exponentially hard with respect to the size of $\bar{A}$ and the number of unobserved actions. We thus design an approximate approach in the Expectation-Maximization framework to estimate an unknown plan that best explains the observation $O$.

To do this, we introduce new parameters to capture “weights” of values for each unobserved action. Specifically speaking, assuming there are $X$ unobserved actions in $O$, i.e., the number of $\phi$s in $O$ is $X$, we denote these unobserved actions by $\bar{a}_1, \ldots, \bar{a}_X$, where the indices indicate the order in which they appear in $O$. Note that each $\bar{a}_x$ can be any action in $\bar{A}$. We associate each possible value of $\bar{a}_x$ with a weight, denoted by $\Gamma_{x,a}$ for $a \in \bar{A}$. $\Gamma$ is an $X \times |\bar{A}|$ matrix, satisfying

$$\sum_{a \in \bar{A}} \Gamma_{x,a} = 1,$$

where $\Gamma_{x,a} \ge 0$ for each $x$. For ease of specification, we extend $\Gamma$ to a bigger matrix with a size of $M \times |\bar{A}|$, denoted by $\hat{\Gamma}$, such that $\hat{\Gamma}_{i,a} = \Gamma_{x,a}$ for all $a \in \bar{A}$ if $i$ is the index of the $x$th unobserved action in $O$; otherwise, $\hat{\Gamma}_{i,o_i} = 1$ and $\hat{\Gamma}_{i,a} = 0$ for all $a \ne o_i$. Our intuition is to estimate the unknown plan by selecting actions with the highest weights. We thus introduce the weights into Equation (3), as shown below,

$$F(\tilde{p}) = \frac{1}{M}\sum_{i=1}^{M}\ \sum_{-c \le j \le c,\, j \ne 0} \hat{\Gamma}_{i,a_i}\,\hat{\Gamma}_{i+j,a_{i+j}} \log p(a_{i+j} \mid a_i) \qquad (4)$$

where $a_i = o_i$ if $o_i \ne \phi$, and $a_i \in \bar{A}$ otherwise. We can see that the impact of $a_i$ and $a_{i+j}$ is penalized by the weights $\hat{\Gamma}_{i,a_i}$ and $\hat{\Gamma}_{i+j,a_{i+j}}$ if they are unobserved actions, and stays unchanged otherwise (since both $\hat{\Gamma}_{i,a_i}$ and $\hat{\Gamma}_{i+j,a_{i+j}}$ equal 1 if they are observed actions).

We assume , i.e., , where and .

We redefine the objective function as shown below,

(5)

where $F(\tilde{p})$ is defined by Equation (4). The gradient of Equation (5) is shown below,

(6)

The only parameters that need to be updated are $\hat{\Gamma}$, which can easily be done by gradient descent, as shown below,

(7)

if $i$ is the index of an unobserved action in $O$; otherwise, $\hat{\Gamma}_{i,a}$ stays unchanged. Note that $\delta$ is a learning constant.

With Equation (7), we can design an EM algorithm by repeatedly sampling an unknown plan according to $\hat{\Gamma}$ and updating $\hat{\Gamma}$ based on Equation (7) until convergence is reached (e.g., a constant number of repetitions is reached).

4.2 Overview of our DUP approach

An overview of our DUP algorithm is shown in Algorithm 1. In Step 2 of Algorithm 1, we initialize $\hat{\Gamma}_{i,a}$ uniformly over all $a \in \bar{A}$ if $i$ is an index of an unobserved action in $O$; otherwise, $\hat{\Gamma}_{i,o_i} = 1$ and $\hat{\Gamma}_{i,a} = 0$ for all $a \ne o_i$. In Step 4, we view $\hat{\Gamma}_i$ as a probability distribution, and sample an action from $\bar{A}$ based on $\hat{\Gamma}_i$ if $i$ is an unobserved action index in $O$. In Step 5, we only update $\hat{\Gamma}_i$ where $i$ is an unobserved action index. In Step 6, we linearly project all elements of the updated $\hat{\Gamma}$ to between 0 and 1, such that we can do sampling directly based on $\hat{\Gamma}$ in Step 4. In Step 8, we simply select the action with the largest weight, i.e., $\arg\max_{a \in \bar{A}} \hat{\Gamma}_{i,a}$, for each unobserved action index $i$.

Input: plan library $\mathcal{L}$, observed actions $O$
Output: plan $\tilde{p}$

1:  learn vector representations of actions
2:  initialize $\hat{\Gamma}_{i,a}$ uniformly for all $a \in \bar{A}$, when $i$ is an unobserved action index
3:  while the maximal number of repetitions is not reached do
4:     sample unobserved actions in $O$ based on $\hat{\Gamma}$
5:     update $\hat{\Gamma}$ based on Equation (7)
6:     project $\hat{\Gamma}$ to [0,1]
7:  end while
8:  select actions for unobserved actions with the largest weights in $\hat{\Gamma}$
9:  return $\tilde{p}$
Algorithm 1 Framework of our DUP algorithm
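
The following sketch mirrors the loop of Algorithm 1 in plain Python. It is a simplification under stated assumptions: the skip-gram probability of Equation (4) is stood in for by a log-softmax over dot products of the learnt action vectors, the gradient update of Equation (7) is replaced by a simple reward-weighted update on the sampled actions, and names such as `dup_sketch`, `action_vectors` and `delta` are illustrative only.

```python
import numpy as np

def dup_sketch(observation, vocabulary, action_vectors, window=3,
               iterations=1500, delta=0.1, seed=0):
    """observation: list of action names with None at unobserved positions.
    vocabulary: list of all action names.
    action_vectors: dict mapping action name -> numpy vector (learnt in Section 3)."""
    rng = np.random.default_rng(seed)
    M, A = len(observation), len(vocabulary)
    unobserved = [i for i, o in enumerate(observation) if o is None]

    # Step 2: uniform weights at unobserved positions, one-hot weights at observed ones.
    gamma = np.zeros((M, A))
    for i, o in enumerate(observation):
        if o is None:
            gamma[i, :] = 1.0 / A
        else:
            gamma[i, vocabulary.index(o)] = 1.0

    def log_prob(a, b):
        # Stand-in for the skip-gram probability p(b | a): log-softmax over dot products.
        scores = np.array([action_vectors[a] @ action_vectors[w] for w in vocabulary])
        m = scores.max()
        log_z = m + np.log(np.exp(scores - m).sum())
        return float(action_vectors[a] @ action_vectors[b] - log_z)

    def plan_score(plan):
        # Average log probability of context pairs within the window (cf. Equation (3)).
        total, count = 0.0, 0
        for i in range(M):
            for j in range(max(0, i - window), min(M, i + window + 1)):
                if j != i:
                    total += log_prob(plan[i], plan[j])
                    count += 1
        return total / max(count, 1)

    for _ in range(iterations):
        # Step 4: sample an action for every unobserved position from gamma.
        plan = list(observation)
        for i in unobserved:
            p = gamma[i] / gamma[i].sum()
            plan[i] = vocabulary[rng.choice(A, p=p)]
        # Steps 5-6: reward-weighted update of the sampled entries (a simplification of
        # Equation (7)); weights stay non-negative and are normalised at sampling time.
        reward = np.exp(plan_score(plan))
        for i in unobserved:
            gamma[i, vocabulary.index(plan[i])] += delta * reward

    # Step 8: pick the highest-weight action at each unobserved position.
    return [o if o is not None else vocabulary[int(np.argmax(gamma[i]))]
            for i, o in enumerate(observation)]
```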

Our DUP algorithm framework belongs to a family of policy gradient algorithms, which have been successfully applied to complex problems, e.g., robot control cof/nips/ng03 (), natural language processing cof/acl/Branavan12 (). Our formulation is unique in how it recognizes plans, in comparison to the existing methods in the planning community.

Note that our current study shows that even direct application of word vector learning methods provide competitive performance for plan completion tasks. We believe we can further improve the performance by using the planning specific structural information in the EM phase. In other words, if we are provided with additional planning structural information as input, we can exploit the structural information to filter candidate plans to be recognized in the EM procedure.

5 Our RNNPlanner approach

Instead of using the EM-style framework, in this section we present another approach based on Recurrent Neural Networks (RNNs), specifically Long Short-term Memory networks (LSTMs), together with the distributed representations of actions introduced in Section 3. LSTM is a specific kind of RNN that works by leveraging long- and short-term contexts. As we model our plan recognition problem as an action-sequence generation problem, our aim in exploring the RNN-LSTM architecture is to leverage longer horizons of action context to help improve the accuracy of generating new actions based on previously observed or generated actions. We will first introduce the RNN-LSTM architecture, and then introduce our RNNPlanner model.

5.1 The RNN Model

Specifically, the RNN architecture can be defined in the following way. Given an input action $x_t$ at step $t$, the RNN accepts it with weighted connections to a stack of hidden layers. From the hidden layer stack, there is a connection to the output layer $y_t$, as well as a cyclic weighted connection going back into the hidden layer stack. If we unroll this RNN cell along $T$ steps, it can accept an action input sequence $\langle x_1, x_2, \ldots, x_T \rangle$ and compute a sequence of hidden states $\langle h_1, h_2, \ldots, h_T \rangle$. Each of these hidden states $h_t$ contributes to predicting the next-step output $y_t$, and thus the RNN computes an output vector sequence $\langle y_1, y_2, \ldots, y_T \rangle$ by concatenating the outputs from all steps together.

Given an input sequence $\langle x_1, \ldots, x_T \rangle$, an RNN model can predict an output sequence $\langle y_1, \ldots, y_T \rangle$, in which the output at each step depends on the input at that step and the hidden state at the previous step. The RNN can also be utilized to directly generate, in principle, infinitely long future outputs (actions), given a single input $x_1$. The sequence of future actions can be generated by directly feeding the output $y_t$ at step $t$ to the input $x_{t+1}$ at the next step $t+1$. This way, the RNN “assumes” that what it predicts will happen at the next step is reliable. To train the RNN as a sequence generation model, we can utilize $y_t$ to parameterize a predictive distribution $p(x_{t+1} \mid y_t)$ over all possible next inputs $x_{t+1}$, and thus minimize the loss:

$$L(x) = -\sum_{t=1}^{T} \log p(x_{t+1} \mid y_t) \qquad (8)$$

where $T$ is the number of steps of an observed plan trace, $x_t$ is the observed action at step $t$, and $y_t$ is the output at step $t$ as well as the prediction of what will happen at step $t+1$. To estimate $y_t$ based on $x_t$, we exploit the Long Short-term Memory (LSTM) model, which has been demonstrated to be effective at generating sequences DBLP:journals/corr/Graves13 (); DBLP:conf/nips/ShiCWYWW15 (), to leverage long-term information prior to $x_t$ and predict $y_t$ based on the current input $x_t$. We can thus rewrite Equation (8) as:

$$L(x) = -\sum_{t=1}^{T} \log p\big(x_{t+1} \mid \mathrm{LSTM}(x_1, \ldots, x_t)\big) \qquad (9)$$

where $\mathrm{LSTM}(x_1, \ldots, x_t)$ indicates that the LSTM model estimates $y_t$ based on the current input $x_t$ and memories of the inputs prior to $x_t$. The framework of LSTM DBLP:journals/corr/Graves13 (); DBLP:conf/nips/ShiCWYWW15 () is shown in Figure 2, where $x_t$ is the $t$th input and $h_t$ is the $t$th hidden state. $i_t$, $f_t$, $o_t$ and $c_t$ are the $t$th input gate, forget gate, output gate and cell activation vectors, respectively, whose dimensions are the same as that of the hidden vector $h_t$.

Figure 2: Long Short-term Memory (LSTM) cell

LSTM is implemented by the following functions:

$$i_t = \sigma(W_{xi}x_t + W_{hi}h_{t-1} + W_{ci}c_{t-1} + b_i) \qquad (10)$$
$$f_t = \sigma(W_{xf}x_t + W_{hf}h_{t-1} + W_{cf}c_{t-1} + b_f) \qquad (11)$$
$$c_t = f_t \circ c_{t-1} + i_t \circ \tanh(W_{xc}x_t + W_{hc}h_{t-1} + b_c) \qquad (12)$$
$$o_t = \sigma(W_{xo}x_t + W_{ho}h_{t-1} + W_{co}c_t + b_o) \qquad (13)$$
$$h_t = o_t \circ \tanh(c_t) \qquad (14)$$

where $\circ$ indicates the Hadamard product, $\sigma$ is the logistic sigmoid function, $W_{xi}$ is an input-input gate matrix, $W_{hi}$ is a hidden-input gate matrix, $W_{ci}$ is a cell-input gate matrix, $W_{xf}$ is an input-forget gate matrix, $W_{hf}$ is a hidden-forget gate matrix, $W_{cf}$ is a cell-forget gate matrix, $W_{xc}$ is an input-cell gate matrix, $W_{hc}$ is a hidden-cell gate matrix, $W_{xo}$ is an input-output gate matrix, $W_{ho}$ is a hidden-output gate matrix, and $W_{co}$ is a cell-output gate matrix. $b_i$, $b_f$, $b_c$ and $b_o$ are bias terms. Note that the matrices from the cell to the gate vectors (i.e., $W_{ci}$, $W_{cf}$ and $W_{co}$) are diagonal, such that each element of each gate vector only receives input from the corresponding element of the cell vector. The major innovation of LSTM is its memory cell $c_t$, which essentially acts as an accumulator of the state information. $c_t$ is accessed, written and cleared by self-parameterized controlling gates, i.e., the input, forget and output gates. Each time a new input comes, its information is accumulated into the memory cell if the input gate $i_t$ is activated. The past cell status $c_{t-1}$ can be forgotten in this process if the forget gate $f_t$ is activated. Whether the latest cell output $c_t$ is propagated to the final state $h_t$ is further controlled by the output gate $o_t$. The benefit of using the memory cell and gates to control information flow is that the gradient is trapped in the cell and prevented from vanishing too quickly.
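
To make Equations (10)-(14) concrete, the sketch below implements a single forward step of this peephole-style LSTM cell in NumPy. The weight shapes and random initialisation are illustrative only; the cell-to-gate weights ($W_{ci}$, $W_{cf}$, $W_{co}$) are kept diagonal and therefore stored as vectors, as noted above.

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, params):
    """One forward step of the LSTM cell of Equations (10)-(14).
    x_t: input vector; h_prev, c_prev: previous hidden and cell state vectors."""
    p = params
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))

    # (10) input gate and (11) forget gate, both peeking at the previous cell state
    i_t = sigma(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["w_ci"] * c_prev + p["b_i"])
    f_t = sigma(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["w_cf"] * c_prev + p["b_f"])
    # (12) cell state: forget part of the old memory, accumulate part of the new input
    c_t = f_t * c_prev + i_t * np.tanh(p["W_xc"] @ x_t + p["W_hc"] @ h_prev + p["b_c"])
    # (13) output gate peeks at the *new* cell state
    o_t = sigma(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["w_co"] * c_t + p["b_o"])
    # (14) hidden state
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Illustrative parameter shapes: input dimension d_x, hidden/cell dimension d_h.
d_x, d_h = 8, 16
rng = np.random.default_rng(0)
params = {}
for gate in ("i", "f", "c", "o"):
    params[f"W_x{gate}"] = rng.normal(scale=0.1, size=(d_h, d_x))
    params[f"W_h{gate}"] = rng.normal(scale=0.1, size=(d_h, d_h))
    params[f"b_{gate}"] = np.zeros(d_h)
for gate in ("i", "f", "o"):
    params[f"w_c{gate}"] = np.zeros(d_h)  # diagonal cell-to-gate weights stored as vectors

h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_x), h, c, params)
```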

5.2 Discovering Underlying Plans with the RNN Model

With the distributed representations of actions addressed in Section 3, we can view each plan in the plan library as a sequence of actions, and the plan library as a set of action sequences which can be utilized to train the RNN model. The framework of RNN with sequences of actions can be seen in Figure 3. The bottom row in Figure 3 is an example action sequence “pick-up-B, stack-B-A, pick-up-C, stack-C-B, pick-up-D, …”, which corresponds to an input sequence $\langle x_1, x_2, x_3, x_4, x_5, \ldots \rangle$. Once an action in the bottom row is fed into the RNN, that action is assigned an index, and an embedding layer is trained to find a vector representation based on that index. The action vector from the embedding layer is the feature used by the LSTM cell. How the LSTM cell works has been explained in Equations (10)-(14). Similar to a classic RNN cell, the LSTM cell feeds its output both to itself as a hidden state and to the softmax layer, which produces a probability distribution over all actions in the action vocabulary $\bar{A}$. From the perspective of the LSTM cell at the next step $t+1$, it receives a hidden state $h_t$ from the previous step $t$ and an action vector at the current step $t+1$. To obtain the index of the most probable action, our model samples from the action distribution output by the softmax layer. The retrieved index can then be mapped back to an action in the vocabulary $\bar{A}$.

Figure 3: The framework of our RNNPlanner approach (with one hidden layer)

The top row in Figure 3 is the output sequence, denoted by “OUT1, OUT2, OUT3, OUT4, OUT5, …”, which corresponds to the estimated sequence $\langle y_1, y_2, y_3, y_4, y_5, \ldots \rangle$ in Equation (9).

Note that the dotted arrow in Figure 3 carries two meanings. When training the RNN model, the element at the head of the dotted arrow (the embedding of the input) is used to compute the cross-entropy error with the output at the tail (the output of the LSTM cell at the previous step), and the next-step observation is used as the input at the head, to train the model. When using the trained RNN model to discover unknown actions, the model “imagines” that what it predicts is the real next input, and takes it to continue its prediction. Thus the element pointed to by the head is copied from, and identical to, the one denoted by the tail. For example, the embedding of “stack-B-A” is copied from the prediction vector of “OUT1” if the input “stack-B-A” was unknown. In addition, the arrows between each pair of LSTM cells show the unrolling of an LSTM cell. The horizontal dashed line suggests that we obtain the action output at each step by sampling from the probability distribution provided by the softmax layer.
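
A minimal PyTorch sketch of the architecture in Figure 3 is given below: an embedding layer, a single LSTM layer, and a softmax output over the action vocabulary. The layer sizes, the class name `ActionLSTM`, and the cross-entropy training step are illustrative assumptions, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class ActionLSTM(nn.Module):
    """Embedding -> LSTM -> linear/softmax over the action vocabulary (cf. Figure 3)."""
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, action_ids, state=None):
        # action_ids: LongTensor of shape (batch, steps) holding action indices.
        emb = self.embed(action_ids)
        hidden, state = self.lstm(emb, state)
        logits = self.out(hidden)          # shape (batch, steps, vocab_size)
        return logits, state

# One illustrative training step: predict action t+1 from actions 1..t (Equation (9)).
vocab_size = 1250                          # e.g., the blocks domain vocabulary (Table 1)
model = ActionLSTM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, vocab_size, (32, 12))      # placeholder batch of indexed plans
inputs, targets = batch[:, :-1], batch[:, 1:]
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```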

With the trained RNN model, we can discover underlying actions by simply exploiting the RNN model to generate unknown actions based on observed or already discovered actions. For example, given the observation:

pick-up-B φ unstack-D-C put-down-D φ stack-C-B φ φ

we can generate the first φ based on pick-up-B, the second φ based on the actions from pick-up-B to put-down-D, the third φ based on the actions from pick-up-B to stack-C-B, and the last φ based on all previous actions (including the actions generated at positions where there was a φ).
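
Continuing the PyTorch sketch above, plan completion with the trained model amounts to feeding the observation left to right and, at every unobserved position, feeding back the model's own prediction. The function below is an illustrative variant that takes the arg-max of the softmax distribution rather than sampling, and assumes the first action is observed; names such as `complete_plan` and the index mappings are assumptions.

```python
import torch

def complete_plan(model, observation, action_to_id, id_to_action):
    """observation: list of action names with None at unobserved positions.
    At observed steps feed the observation; at unobserved steps feed back the prediction."""
    model.eval()
    state, prev_logits, result = None, None, []
    with torch.no_grad():
        for obs in observation:               # assumes observation[0] is not None
            if obs is not None:
                action = obs                  # trust the observed action
            else:
                # Unobserved: take the most probable action predicted from the prefix so far.
                action = id_to_action[int(prev_logits.argmax())]
            result.append(action)
            step = torch.tensor([[action_to_id[action]]])     # shape (1, 1)
            logits, state = model(step, state)
            prev_logits = logits[0, -1]       # distribution over the next action
    return result

# Example (assuming `model` from the previous sketch and index mappings for the vocabulary):
# completed = complete_plan(model,
#                           ["pick-up-B", None, "unstack-D-C", "put-down-D",
#                            None, "stack-C-B", None, None],
#                           action_to_id, id_to_action)
```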

6 Experiments

In this section, we evaluate our DUP and RNNPlanner algorithms in three planning domains from the International Planning Competition, i.e., blocks, depots (http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume20/long03a-html/JAIRIPC.html), and driverlog. To generate training and testing data, we randomly created 5000 planning problems for each domain, and solved these planning problems with a planning solver, such as FF (https://fai.cs.uni-saarland.de/hoffmann/ff.html), to produce 5000 plans.

We define the accuracy of our DUP and RNNPlanner algorithms as follows. For each unobserved action, DUP and RNNPlanner suggest a set $R$ of $K$ possible actions which have the highest weights (for DUP) or probabilities (for RNNPlanner) among all actions in $\bar{A}$. If $R$ covers the ground-truth action $a$, i.e., $a \in R$, we increase the number of correct suggestions by 1. We thus define the accuracy $acc$ as shown below:

$$acc = \frac{1}{|\mathcal{T}|}\sum_{t=1}^{|\mathcal{T}|} \frac{c_t}{m_t},$$

where $|\mathcal{T}|$ is the size of the testing set, $c_t$ is the number of correct suggestions for the $t$th testing plan, and $m_t$ is the number of unobserved actions in the $t$th testing plan. We can see that the accuracy may be influenced by $K$. We will test different sizes of $K$ in the experiments.
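
The accuracy measure can be computed as in the short sketch below, which follows the per-plan average defined above; `suggestions_per_plan` is an assumed data structure that maps each unobserved position of a testing plan to its recommended action set of size $K$.

```python
def accuracy(test_plans, observations, suggestions_per_plan):
    """test_plans: ground-truth plans; observations: the same plans with None at unobserved
    positions; suggestions_per_plan: for each plan, a dict position -> set of K recommended actions."""
    per_plan = []
    for truth, obs, suggestions in zip(test_plans, observations, suggestions_per_plan):
        missing = [i for i, o in enumerate(obs) if o is None]
        correct = sum(1 for i in missing if truth[i] in suggestions[i])
        per_plan.append(correct / len(missing))
    return sum(per_plan) / len(per_plan)
```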

domain #plan #word #vocabulary
blocks 5000 292250 1250
depots 5000 209711 2273
driverlog 5000 179621 1441
Table 1: Features of datasets

State-of-the-art plan recognition approaches with plan libraries as input aim at finding a plan from the plan library that best explains the observed actions DBLP:conf/ijcai/GeibS07 (); we denote this baseline by MatchPlan. We develop a MatchPlan system based on the idea of DBLP:conf/ijcai/GeibS07 () and compare our DUP algorithm to MatchPlan with respect to different percentages of unobserved actions and different sizes of the suggestion or recommendation set. Another baseline is the action-model based plan recognition approach cof/ijcai/Ramirez09 () (denoted by PRP, short for Plan Recognition as Planning). Since we do not have action models as input in our DUP algorithm, we exploited the action model learning system ARMS journal/aij/Yang07 () to learn action models from the plan library and fed the action models to the PRP approach. We call this hybrid plan recognition approach ARMS+PRP. To learn action models, ARMS requires state information of plans as input. We thus added extra information, i.e., the initial state and goal of each plan in the plan library, to ARMS+PRP. In addition, PRP requires as input a set of candidate goals for each plan to be recognized in the testing set, which was also generated and fed to PRP when testing. In summary, the hybrid plan recognition approach ARMS+PRP has more input information, i.e., initial states and goals in the plan library and candidate goals for each testing example, than our DUP approach.

To evaluate DUP, we compared it with the baselines elaborated above, i.e., MatchPlan and ARMS+PRP. We randomly divided the plans into ten folds, with 500 plans in each fold. We ran our DUP algorithm ten times to calculate an average of accuracies, each time with one fold for testing and the rest for training. In the testing data, we randomly removed actions from each testing plan (i.e., replaced them with $\phi$) according to a specific percentage of the plan length. Features of the datasets are shown in Table 1, where the second column is the number of plans generated, the third column is the total number of words (or actions) in all plans, and the last column is the size of the vocabulary used in all plans. To evaluate our RNNPlanner algorithm, we directly compared RNNPlanner to DUP.

Figure 4: Accuracies of DUP and ARMS+PRP with respect to different percentage of unobserved actions

Figure 5: Accuracies of DUP and ARMS+PRP with respect to different size of recommendations

6.1 Comparison between DUP and ARMS+PRP

We first compare our DUP algorithm to ARMS+PRP to see the advantage of DUP. We varied the percentage of unobserved actions and the size of recommended actions to see the change of accuracies, respectively. The results are shown below.

6.1.1 Varying Percentage of Unobserved Actions

In this experiment we would like to see the change in accuracy of both our DUP algorithm and ARMS+PRP with respect to the percentage of unobserved actions in $O$. We set the window of the training context in Equation (1) to be three, the number of iterations in Algorithm 1 to be 1500, the size of recommendations $K$ to be ten, and the learning constant $\delta$ in Equation (7) to be 0.1. For ARMS+PRP, we generated 20 candidate goals for each testing example, including the ground-truth goal which corresponds to the ground-truth plan to be recognized. The results are shown in Figure 4.

From Figure 4, we can see that in all three domains the accuracy of our DUP algorithm is generally higher than that of ARMS+PRP, which verifies that our DUP algorithm can indeed capture relations among actions better than the model-based approach ARMS+PRP. The rationale is that we explore global plan information from the plan library to learn a “shallow” model (distributed representations of actions) and use this model with global information to best explain the observed actions. While ARMS+PRP also tries to leverage global plan information from the plan library to learn action models and uses the models to recognize observed actions, it forces itself to extract “exact” models represented as planning models, which are often noisy. When those noisy models are fed to PRP, the recognition accuracy is lower than DUP’s, since PRP uses planning techniques that are very sensitive to noise in the planning models, even though ARMS+PRP has more input information (i.e., initial states and candidate goals) than our DUP algorithm.

Looking at the changes in accuracy with respect to the percentage of unobserved actions, we can see that our DUP algorithm performs fairly well even when the percentage of unobserved actions reaches 25%. In contrast, ARMS+PRP is sensitive to the percentage of unobserved actions, i.e., the accuracy goes down when more actions are unobserved. This is because the noise in the planning models induces more uncertainty, which harms the recognition accuracy when the percentage of unobserved actions becomes larger. Comparing the accuracies across domains, we can see that our DUP algorithm performs better in the blocks domain than in the other two domains. This is because the ratio of #word to #vocabulary in the blocks domain is much larger than in the other two domains, as shown in Table 1. We conjecture that increasing this ratio could improve the accuracy of DUP. From Figure 4 (as well as Figure 6), it appears that the accuracy of DUP is not affected by increasing percentages of unobserved actions. The rationale is that (1) the percentage of unobserved actions is low, less than 25%, i.e., there is at most one unobserved action out of every four consecutive actions; and (2) the window size of the context in DUP is set to be 3, which ensures that DUP generally has “stable” context information for estimating the unobserved action when the percentage of unobserved actions is less than 25%, resulting in the stable accuracy in Figure 4 (and likewise in Figure 6).

6.1.2 Varying Size of Recommendation Set

We next evaluate the performance of our DUP algorithm with respect to the size of the recommendation set $K$. We evaluate the influence of the recommendation set by varying its size from 1 to 10. The size of the recommendation set is much smaller than the complete action set. For example, the size of the complete set in the blocks domain is 1250 (shown in Table 1); even recommending 10 actions for each unobserved action covers less than 1% of it. We set the context window used in Equation (1) to be three, the percentage of unobserved actions to be 0.25, and the learning constant $\delta$ in Equation (7) to be 0.1. For ARMS+PRP, the number of candidate goals for each testing example is set to 20. ARMS+PRP aims to recognize plans that are optimal with respect to the cost of actions; we relax ARMS+PRP to output multiple plans, some of which might be suboptimal. The results are shown in Figure 5.

From Figure 5, we find that the accuracies of the approaches generally become larger as the size of the recommended set increases, in all three domains. This is consistent with our intuition, since the larger the recommended set, the higher the possibility that the ground-truth action is in the recommended set. We can also see that the accuracy of our DUP algorithm is generally larger than that of ARMS+PRP in all three domains, which verifies that our DUP algorithm can indeed better capture relations among actions and thus recognize unobserved actions better than the model-learning based approach ARMS+PRP. The reason is similar to the one given for Figure 4 in the previous section: the “shallow” model learnt by our DUP algorithm is better suited to recognizing plans than the “exact” planning model learnt by ARMS and used with planning techniques. Furthermore, the advantage of DUP becomes even larger when the size of the recommended action set increases, which suggests our vector-representation based learning approach can better capture action relations when the size of the recommended action set is larger. The probability of actions being correctly recognized by DUP becomes much larger than with ARMS+PRP when the size of recommendations increases.

Figure 6: Accuracies of DUP and MatchPlan with respect to different percentage of unobserved actions

Figure 7: Accuracies of DUP and MatchPlan with respect to different size of recommendations

Figure 8: Accuracies of DUP and MatchPlan with respect to different size of training set

6.2 Comparison between DUP and MatchPlan

In this experiment we compare DUP to MatchPlan which is built based on the idea of DBLP:conf/ijcai/GeibS07 (). Likewise we varied the percentage of unobserved actions and the size of recommended actions to see the change of accuracies of both algorithms. The results are exhibited below.

6.2.1 Varying Percentage of Unobserved Actions

To compare our DUP algorithm with MatchPlan with respect to different percentages of unobserved actions, we set the window of the training context in Equation (1) of DUP to be three, the number of iterations in Algorithm 1 to be 1500, the size of recommendations $K$ to be ten, and the learning constant $\delta$ in Equation (7) to be 0.1. To make the comparison with MatchPlan fair, we set the matching window of MatchPlan to be three, the same as the training context of DUP, when searching plans in the plan library $\mathcal{L}$. In other words, to estimate an unobserved action in $O$, MatchPlan matches the three actions before and the three actions after that unobserved action to plans in $\mathcal{L}$, and recommends the ten actions with the maximal number of matched actions, where a match is counted between an observed action in the context of the unobserved action and an action in a plan of $\mathcal{L}$. The results are shown in Figure 6.
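
The sketch below shows the local matching step just described, in simplified form: for an unobserved position, count how often each candidate action's library context agrees with the observed context, and recommend the top-$K$ candidates. It is an illustration of the idea behind our MatchPlan baseline, not the exact implementation; function and variable names are ours.

```python
from collections import Counter

def matchplan_recommend(observation, position, plan_library, window=3, k=10):
    """Recommend k actions for the unobserved `position` by matching the observed
    actions within `window` steps of `position` against every plan in the library."""
    scores = Counter()
    for plan in plan_library:
        for j, candidate in enumerate(plan):
            # Count observed actions around `position` that also occur around `candidate`.
            matched = 0
            for offset in range(-window, window + 1):
                if offset == 0:
                    continue
                i = position + offset
                if 0 <= i < len(observation) and observation[i] is not None:
                    if 0 <= j + offset < len(plan) and plan[j + offset] == observation[i]:
                        matched += 1
            scores[candidate] += matched
    return [action for action, _ in scores.most_common(k)]
```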

From Figure 6, we find that the accuracy of DUP is much better than that of MatchPlan, which indicates that our DUP algorithm can better learn knowledge from plan libraries than the local matching approach MatchPlan. This is because we take advantage of global plan information from the plan library when learning the “shallow” model, i.e., distributed representations of actions, and the model with global information can best explain the observed actions. In contrast, MatchPlan just utilizes local plan information when matching the observed actions to the plan library, which results in lower accuracies. Looking at all three domains, we can see that both algorithms perform best in the blocks domain. The reason is similar to the one provided in the last subsection (for Figure 4), i.e., the ratio of #word to #vocabulary in the blocks domain is relatively larger than in the other two domains, which hints that it is possible to improve accuracy by increasing the ratio of the number of words over the size of the vocabulary.

6.2.2 Varying Size of Recommendation Set

Likewise, we also evaluate the change of accuracies when increasing the size of recommended actions. We use the same experimental setting as in the previous subsection. That is, we set the window of the training context of DUP to be three, the learning constant $\delta$ to be 0.1, the number of iterations in Algorithm 1 to be 1500, and the matching window of MatchPlan to be three. In addition, we fix the percentage of unobserved actions to be 0.25. The results are shown in Figure 7.

We can observe that the accuracy of our DUP algorithm is generally larger than that of MatchPlan in all three domains in Figure 7, which suggests that our DUP algorithm can indeed better capture relations among actions and thus recognize unobserved actions better than the matching approach MatchPlan. The reason behind this is similar to the previous experiments, i.e., the global information captured from plan libraries by DUP improves accuracy more than the local information exploited by MatchPlan. In addition, looking at the trends of the curves of both DUP and MatchPlan, we can see that the performance of DUP becomes much better than MatchPlan’s when the size of recommendations increases. This indicates that the influence of global information becomes larger as the size of recommendations increases. In other words, a larger recommendation set provides a better chance for the “shallow” models learnt by DUP to perform well.

6.2.3 Varying Size of Training Set

To see the effect of the size of the training set, we ran both DUP and MatchPlan with different sizes of training sets. We used the same setting as in the last subsection, except for fixing the size of recommendations to be 10, when running both algorithms. We varied the size of the training set from 2500 to 4500. The results are shown in Figure 8.

We observed that the accuracies of both DUP and MatchPlan generally become higher when the size of the training set increases. This is consistent with our intuition, since the larger the training set, the richer the information available for improving the accuracies. Comparing the curves of DUP and MatchPlan, we can see that DUP performs much better than MatchPlan. This further verifies the benefit of exploiting global information from plan libraries when learning shallow models, as done by DUP.

Figure 9: Accuracy with respect to different number of iterations in the blocks domain

6.3 Accuracy w.r.t. Iterations

In the previous experiments, we set the number of iterations in Algorithm 1 to be 1500. In this experiment, we would like to see the influence of iterations of our DUP algorithm when running the EM-style procedure. We changed the number of iterations from 300 to 3000 to see the trend of accuracy. We exhibit the experimental results in the blocks domain (the results of the other two domains are similar) in Figure 9.

From Figure 9, we can see that the accuracy becomes higher at the beginning and stays flat after the number of iterations reaches 1500. This exhibits that the EM procedure converges and has stable accuracy after 1500 iterations. Similar results can also be found in the other two domains.

6.4 Comparison between RNNPlanner and DUP

In this section we compare RNNPlanner with DUP to see the change of performance with respect to different distributions of missing actions in the underlying plans to be discovered. In this experiment, we are interested in evaluating the performance on consecutive missing actions in the underlying plans since these scenarios often exist in many applications such as surveillance surveillanceExample (). We first test the performance of both RNNPlanner and DUP in discovering underlying plans with only consecutive missing actions in the “middle” of the plans, i.e., actions are not missing at the end or in the front, which indicates missing actions can be inferred from both previously and subsequently observed actions. Then we evaluate both RNNPlanner and DUP in discovering underlying plans with only consecutive missing actions at the end of the plans, which indicates missing actions can only be inferred from previously observed actions. After that, we also evaluate the performance of our RNNPlanner and DUP approaches with respect to the size of recommendation set. In the following subsections, we present the experimental results regarding those three aspects.

6.4.1 Performance with missing actions in the middle

To see the performance of RNNPlanner and DUP in cases when actions are missing in the middle of the underlying plan to be discovered, we vary the number of consecutive missing actions from 1 to 10, to see the change of accuracies of both RNNPlanner and DUP. We set the window size to be 1 and the recommendation size to be 10. The results are shown in Figure 10.

Figure 10: Accuracy with respect to missing actions in the middle

From Figure 10, we can see that the accuracies of both RNNPlanner and DUP generally become lower as the number of consecutive unobserved actions increases. This is consistent with our intuition, since the more actions are missing, the less information can be used to help infer the unobserved actions, which results in lower accuracy. Comparing the curves of RNNPlanner and DUP, we can see that the accuracy of DUP is higher than that of RNNPlanner at the beginning. This is because DUP exploits information from observed actions both before and after the missing actions to infer them, while RNNPlanner only exploits observed actions before the missing actions. When the number of missing actions is larger than 3, the accuracies of DUP and RNNPlanner are both low (i.e., lower than 0.2). This is because the window size of DUP and RNNPlanner is set to be 1, which means we exploit one action before the missing actions and one action after the missing actions to estimate them. When there is more than one consecutive missing action, there may not be sufficient context information for inferring the missing actions, resulting in low accuracies.

6.4.2 Performance with missing actions at the end

We also would like to see the performance of RNNPlanner and DUP in discovering missing actions at the end, which is prevalent in application domains that aim at discovering/predicting future actions. Similar to previous experiments, we vary the number of consecutive unobserved or missing actions to see the change of accuracies of both RNNPlanner and DUP. We set the window size to be 1 and the recommendation size to be 10 as well. The experimental results are shown in Figure 11.

Figure 11: Accuracy with respect to missing actions in the end.

From Figure 11, we can observe that the accuracies of both RNNPlanner and DUP generally decrease as the number of consecutive missing actions increases. This is similar to the previous experimental results: the more actions are missing, the less information is available for estimating the missing actions, which results in lower accuracy. In addition, we can also see that RNNPlanner generally performs better than DUP, which indicates that the RNN-based approach, i.e., RNNPlanner, can indeed better exploit observed actions to predict future missing actions, since RNNs are capable of flexibly leveraging long- or short-term information to help predict missing actions.

6.4.3 Performance with respect to different recommendation size

To see the change with respect to different recommendation size, we vary the size of recommendation sets from 1 to 10 and calculate their corresponding accuracies. We test our approaches with four cases: A. there are five actions missing at the end; B. there are five actions missing in the middle; C. there is one action missing at the end; D. there is one action missing in the middle. The results are shown in Figures 12-15 corresponding to cases A-D, respectively.

Figure 12: Case A: accuracy with respect to different size of recommendations
Case A:

As shown in Figure 12, RNNPlanner performs better than DUP in most cases, except when the recommendation set size is larger than or equal to eight in the blocks domain. This is because RNNPlanner, which contains LSTM cells, is able to actively remember or forget past observations (inputs) and computations (hidden states). For example, suppose that in a set of sequences a pattern recurs in which an action $b$ follows three steps after an action $a$ (i.e., $a\ ?\ ?\ ?\ b$, where each $?$ could be any action from the vocabulary other than $a$ and $b$). If the window size of DUP is smaller than three, then DUP is not able to utilize this pattern to predict $b$ based mainly on $a$: when predicting $b$, DUP with context size one works by searching for the action most similar to the immediately preceding one. One might argue that we could set a larger window size for DUP. However, a larger window size does not necessarily lead to higher accuracy, since it also adds more noise to the training of DUP. Remember that the word2vec model treats all possible word-pair samples within its context window equally.

In addition, observing the accuracies (in terms of the recommendation set size $K$) in all three domains, we can see that only in the blocks domain does DUP outperform RNNPlanner, when $K$ is larger than eight. Also, DUP achieves its best performance in the blocks domain, compared to how it functions in the other two domains. This is because plans from the blocks domain have an overall higher ratio of #word to #vocabulary, which increases the possibility that a word pattern outside a context window reappears inside the window, and consequently helps DUP recognize actions at the missing positions. Coming back to the example with a plan like $a\ ?\ ?\ ?\ b$, in the blocks domain it is more likely that the action $a$ appears again in one of the three intermediate positions.

Figure 13: Case B: accuracy with respect to different size of recommendations
Case B:

What we can observe in Figure 13 is similar to our observations in case A: RNNPlanner generally performs better than DUP, except when the size of the recommendation set is larger than or equal to nine in the blocks domain. It can also be observed that both RNNPlanner and DUP achieve their best accuracy in the blocks domain.

By comparing Figure 13 in case B (five removed actions in the middle) with Figure 12 in case A (five removed actions at the end), we can see that the accuracy difference between DUP and RNNPlanner at each recommendation set size along the x-axis is smaller in case B. This is because RNNPlanner only leverages the observed actions before a missing position, whereas DUP has the advantage of additionally using the observations after a missing position.

Figure 14: Case C: accuracy with respect to different size of recommendations
Case C:

From Figure 14, we can see that both RNNPlanner and DUP can outperform each other in certain domains and for certain recommendation set sizes $K$. In the blocks domain, DUP is better when $K$ is larger than five. In the depots domain, RNNPlanner is overwhelmingly better than DUP. In the driverlog domain, DUP performs better overall, except that when there is only one recommendation DUP is only as good as RNNPlanner. To explain this observation: if the number of consecutively removed actions is less than or equal to the context window size (e.g., a window size of one and one missing action, as in case C), then the fixed, short context window of DUP is competitive enough.

Figure 15: Case D: accuracy with respect to different size of recommendations
Case D:

From the results in Figure 15, we can see that DUP functions better than RNNPlanner over all three domains, whereas DUP is worse in cases A and B, and only occasionally better than RNNPlanner in case C. This makes sense in that, on the one hand, within the fixed and short context window, if there are very few positions with removed actions, DUP delivers improved performance. On the other hand, RNNPlanner is not able to leverage the information from both sides of a position with a missing action. Therefore, in case D, DUP benefits from both assumptions: there is only one missing action, and the position of that action is randomly chosen in the middle of a plan.

7 Related work

Our work is related to planning with incomplete domain models (or model-lite planning conf/aaai/rao07 (); DBLP:conf/aaai/ZhuoNK13 ()). Figure 16 shows the schematic view of incomplete models and their relationships in the spectrum of incompleteness. In a full model, we know exactly the dynamics of the model (i.e., state transitions). Approximate models are the closest to full models and their representations are similar except that there can be incomplete knowledge of action descriptions. To enable approximate planners to perform more (e.g., providing robust plans), planners are assumed to have access to additional knowledge circumscribing the incompleteness tuan-robust (). Partial models are one level further down the line in terms of the degree of incompleteness. While approximate models can encode incompleteness in the precondition/effect descriptions of the individual actions, partial models can completely abstract portions of a plan without providing details for them. In such cases, even though providing complete plans is infeasible, partial models can provide “planning guidance” for agents DBLP:conf/atal/ZhangSK15 (). Shallow models are essentially just a step above having no planning model. They provide interesting contrasts to the standard precondition and effect based action models used in automated planning community. Our work in this paper belongs to the class of shallow models. In developing shallow models, we are interested in planning technology that helps humans develop plans, even in the absence of any structured models or plan traces. In such cases, the best that we can hope for is to learn local structures of the planning model to provide planning support, similar to providing spell-check in writing. While some work in web-service composition (c.f. DBLP:conf/vldb/DongHMNZ04 ()) did focus on this type of planning support, they were hobbled by being limited to simple input/output type comparison. In contrast, we expect shallow models to be useful in “critiquing” the plans being generated by the humans (e.g. detecting that an action introduced by the human is not consistent with the model), and “explaining/justifying” the suggestions generated by humans.

Figure 16: Schematic view of incomplete models and their relationships in the spectrum of incompleteness

Our work is also related to plan recognition. Kautz and Allen proposed an approach to recognizing plans based on parsing observed actions as sequences of subactions and essentially modeled this knowledge as a context-free rule in an “action grammar” cof/aaai/kautz86 (). All actions and plans are uniformly referred to as goals, and a recognizer’s knowledge is represented by a set of first-order statements called an event hierarchy encoded in first-order logic, which defines abstraction, decomposition and functional relationships between types of events. Lesh and Etzioni further presented methods to scale up goal recognition computationally DBLP:conf/ijcai/LeshE95 (). They automatically constructed plan libraries from domain primitives, which differed from cof/aaai/kautz86 (), where the plan library was explicitly represented. In these approaches, the combinatorial explosion of plan execution models impedes application to real-world domains. Kabanza and Filion DBLP:conf/ijcai/KabanzaFBI13 () proposed an anytime plan recognition algorithm to reduce the number of generated plan execution models based on weighted model counting. These approaches, however, have difficulty representing uncertainty: they offer no mechanism for preferring one consistent plan to another and are incapable of deciding whether one particular plan is more likely than another, as long as both of them are consistent enough to explain the actions observed. Although we exploit a library of plans in our DUP and RNNPlanner approaches, we aim to learn shallow models and utilize them to recognize plans that are not necessarily in the plan library, which is different from previous approaches that assume the plans to be recognized are from the plan library.

Instead of using a library of plans, Ramirez and Geffner cof/ijcai/Ramirez09 () proposed an approach to solving the plan recognition problem using slightly modified planning algorithms, assuming the action models are given as input (note that action models can be created by experts or learnt by previous systems journal/aij/Yang07 (); journal/aij/zhuo10 ()). Besides the previous work cof/aaai/kautz86 (); cof/ijcai/Bui03 (); journal/aij/Geib09 (); cof/ijcai/Ramirez09 () on the plan recognition problem presented in the introduction, Saria and Mahadevan presented hierarchical multi-agent Markov processes as a framework for hierarchical probabilistic plan recognition in cooperative multi-agent systems conf/ICAPS/Saria04 (). Amir and Gal presented a plan recognition approach to recognizing student behaviors in virtual science laboratories cof/ijcai/Amir11 (). Ramirez and Geffner exploited off-the-shelf classical planners to recognize probabilistic plans cof/aaai/Ramirez10 (). Different from those approaches, we do not require any domain model knowledge as input. Instead, we automatically learn shallow domain models from previous plan cases for recognizing unknown plans that may not be identical to previous cases.

8 Conclusion and Discussion

In this paper we present two novel plan recognition approaches, DUP and RNNPlanner, based on vector representations of actions. For DUP, we first learn the vector representations of actions from plan libraries using the Skip-gram model, which has been demonstrated to be effective. We then discover unobserved actions with the vector representations by repeatedly sampling actions and optimizing the probability of potential plans to be recognized. For RNNPlanner, we let the neural network itself learn the action embeddings, which are then utilized by higher LSTM layers. We also empirically exhibit the effectiveness of our approaches.

While we focused on a one-shot recognition task in this paper, in practice, human-in-the-loop planning will consist of multiple iterations, with DUP recognizing the plan and suggesting action addition alternatives; the human making a selection and revising the plan. The aim is to provide a form of flexible plan completion tool, akin to auto-completers for search engine queries. To do this efficiently, we need to make the DUP recognition algorithm “incremental.”

The word-vector based domain model we developed in this paper provides interesting contrasts to the standard precondition-and-effect based action models used in the automated planning community. One of our future aims is to provide a more systematic comparison of the tradeoffs offered by these models. Until now we have focused on the “plan recognition” aspects of this model, and assumed that “planning support” will be limited to suggesting potential actions to the humans. In the future, we will also consider “critiquing” the plans being generated by the humans (e.g., detecting that an action introduced by the human is not consistent with the model learned by DUP), and “explaining/justifying” the suggestions generated by humans. Here, we cannot expect causal explanations of the sort that can be generated with the help of complete action models (e.g., petrie ()), and will have to develop justifications analogous to those used in recommendation systems.

Another potential application for this type of distributed action representations proposed in this paper is social media analysis. In particular, work such as kiciman2015 () shows that identification of action-outcome relationships can significantly improve the analysis of social media threads. The challenge of course is that such action-outcome models have to be learned from raw and noisy social media text containing mere fragments of plans. We believe that action vector models of the type we proposed in this paper provide a promising way of handling this challenge.

Acknowledgements.
Zhuo thanks the National Key Research and Development Program of China (2016YFB0201900), the National Natural Science Foundation of China (U1611262), the Guangdong Natural Science Funds for Distinguished Young Scholars (2017A030306028), the Pearl River Science and Technology New Star of Guangzhou, and the Guangdong Province Key Laboratory of Big Data Analysis and Processing for the support of this research. Kambhampati’s research is supported in part by the ARO grant W911NF-13-1-0023, the ONR grants N00014-13-1-0176, N00014-09-1-0017 and N00014-07-1-1049, and the NSF grant IIS201330813.

References

  • (1) Abidi, B.R., Aragam, N.R., Yao, Y., Abidi, M.A.: Survey and analysis of multimodal sensor planning and integration for wide area surveillance. ACM Comput. Surv. 41(1), 7:1–7:36 (2009)
  • (2) Albrecht, S.V., Ramamoorthy, S.: Are you doing what I think you are doing? Criticising uncertain agent models. In: Proceedings of UAI, pp. 52–61 (2015)
  • (3) Amir, O., Gal, Y.K.: Plan recognition in virtual laboratories. In: Proceedings of IJCAI, pp. 2392–2397 (2011)
  • (4) Branavan, S., Kushman, N., Lei, T., Barzilay, R.: Learning high-level planning from text. In: Proceedings of ACL (2012)
  • (5) Bui, H.H.: A general model for online probabilistic plan recognition. In: Proceedings of IJCAI, pp. 1309–1318 (2003)
  • (6) Cohen, P.R., Kaiser, E.C., Buchanan, M.C., Lind, S., Corrigan, M.J., Wesson, R.M.: Sketch-thru-plan: a multimodal interface for command and control. Commun. ACM 58(4), 56–65 (2015)
  • (7) Dong, X., Halevy, A.Y., Madhavan, J., Nemes, E., Zhang, J.: Similarity search for web services. In: Proceedings of VLDB, pp. 372–383 (2004)
  • (8) Geib, C.W., Goldman, R.P.: A probabilistic plan recognition algorithm based on plan tree grammars. Artificial Intelligence 173(11), 1101–1132 (2009)
  • (9) Geib, C.W., Steedman, M.: On natural language processing and plan recognition. In: Proceedings of IJCAI, pp. 1612–1617 (2007)
  • (10) Graves, A.: Generating sequences with recurrent neural networks. CoRR abs/1308.0850 (2013). URL http://arxiv.org/abs/1308.0850
  • (11) Kabanza, F., Filion, J., Benaskeur, A.R., Irandoust, H.: Controlling the hypothesis space in probabilistic plan recognition. In: Proceedings of IJCAI (2013)
  • (12) Kambhampati, S.: Model-lite planning for the web age masses: The challenges of planning with incomplete and evolving domain theories. In: Proceedings of AAAI, pp. 1601–1605 (2007)
  • (13) Kambhampati, S., Talamadupula, K.: Human-in-the-loop planning and decision support (2015). Rakaposhi.eas.asu.edu/hilp-tutorial
  • (14) Kautz, H.A., Allen, J.F.: Generalized plan recognition. In: Proceedings of AAAI, pp. 32–37 (1986)
  • (15) Kıcıman, E., Richardson, M.: Towards decision support and goal achievement: Identifying action-outcome relationships from social media. In: Proceedings of KDD, pp. 547–556 (2015)
  • (16) Lesh, N., Etzioni, O.: A sound and fast goal recognizer. In: Proceedings of IJCAI, pp. 1704–1710 (1995)
  • (17) Manikonda, L., Chakraborti, T., De, S., Talamadupula, K., Kambhampati, S.: AI-MIX: using automated planning to steer human workers towards better crowdsourced plans. In: Proceedings of AAAI, pp. 3004–3009 (2014)
  • (18) Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Proceedings of NIPS, pp. 3111–3119 (2013)
  • (19) Ng, A.Y., Kim, H.J., Jordan, M.I., Sastry, S.: Autonomous helicopter flight via reinforcement learning. In: Proceedings of NIPS (2003)
  • (20) Nguyen, T.A., Kambhampati, S., Do, M.: Synthesizing robust plans under incomplete domain models. In: Proceedings of the AAAI Workshop on Generalized Planning (2011)
  • (21) Petrie, C.J.: Constrained decision revision. In: Proceedings of AAAI, pp. 393–400 (1992)
  • (22) Ramírez, M., Geffner, H.: Plan recognition as planning. In: Proceedings of IJCAI, pp. 1778–1783 (2009)
  • (23) Ramírez, M., Geffner, H.: Probabilistic plan recognition using off-the-shelf classical planners. In: Proceedings of AAAI, pp. 1121–1126 (2010)
  • (24) Saria, S., Mahadevan, S.: Probabilistic plan recognition in multiagent systems. In: Proceedings of AAAI (2004)
  • (25) Shi, X., Chen, Z., Wang, H., Yeung, D., Wong, W., Woo, W.: Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In: Proceedings of NIPS, pp. 802–810 (2015)
  • (26) Tian, X., Zhuo, H.H., Kambhampati, S.: Discovering underlying plans based on distributed representations of actions. In: Proceedings of AAMAS, pp. 1135–1143 (2016)
  • (27) Yang, Q., Wu, K., Jiang, Y.: Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence 171, 107–143 (2007)
  • (28) Zhang, Y., Sreedharan, S., Kambhampati, S.: Capability models and their applications in planning. In: Proceedings of AAMAS, pp. 1151–1159 (2015)
  • (29) Zhuo, H.H., Kambhampati, S.: Model-lite planning: Case-based vs. model-based approaches. Artificial Intelligence 246, 1–21 (2017)
  • (30) Zhuo, H.H., Li, L.: Multi-agent plan recognition with partial team traces and plan libraries. In: Proceedings of IJCAI, pp. 484–489 (2011)
  • (31) Zhuo, H.H., Nguyen, T.A., Kambhampati, S.: Model-lite case-based planning. In: Proceedings of AAAI (2013)
  • (32) Zhuo, H.H., Yang, Q., Hu, D.H., Li, L.: Learning complex action models with quantifiers and implications. Artificial Intelligence 174(18), 1540–1569 (2010)
  • (33) Zhuo, H.H., Yang, Q., Kambhampati, S.: Action-model based multi-agent plan recognition. In: Proceedings of NIPS, pp. 377–385 (2012)