Addendum to “HTN Acting: A Formalism and an Algorithm”
Abstract
Hierarchical Task Network (HTN) planning is a practical and efficient approach to planning when the ‘standard operating procedures’ for a domain are available. Like Belief-Desire-Intention (BDI) agent reasoning, HTN planning performs hierarchical and context-based refinement of goals into subgoals and basic actions. However, while HTN planners ‘lookahead’ over the consequences of choosing one refinement over another, BDI agents interleave refinement with acting. There has been renewed interest in making HTN planners behave more like BDI agent systems, e.g. to have a unified representation for acting and planning. However, past work on the subject has remained informal or implementation-focused. This paper is a formal account of ‘HTN acting’, which supports interleaved deliberation, acting, and failure recovery. We use the syntax of the most general HTN planning formalism and build on its core semantics, and we provide an algorithm which combines our new formalism with the processing of exogenous events. We also study the properties of HTN acting and its relation to HTN planning.
1 Introduction
Hierarchical Task Network (HTN) planning [11, 20, 21, 15] is a practical and efficient approach to planning when the ‘standard operating procedures’ for a domain are available. HTN planning is similar to Belief-Desire-Intention (BDI) [23, 22, 16, 30] agent reasoning in that both approaches perform hierarchical and context-based refinement of goals into subgoals and basic actions [24, 25]. However, while HTN planners ‘lookahead’ over the consequences of choosing one refinement over another before suggesting an action, BDI agents interleave refinement with acting in the environment. Thus, while the former approach can guarantee goal achievability (if there is no action failure or environmental interference), the latter approach is able to quickly respond to environmental changes and exogenous events, and recover from failure. This paper presents a formal semantics that builds on the core HTN semantics in order to enable such response and recovery.
One motivation for our work is a recent drive toward adapting the languages and algorithms used in Automated Planning to build a framework for ‘refinement acting’ [14], i.e., deciding how to carry out a chosen recipe of action to achieve some objective, while dealing with environmental changes, events, and failures. To this end, [14] proposes the Refinement Acting Engine (RAE), an HTN-like framework with continual online processing and recipe repair in the case of runtime failure. A key consideration in the RAE is a unified hierarchical representation and a core semantics that suits the needs of both acting and lookahead. We are also motivated by recent work [5] which suggests that a fragment of the recipe language of HTN planning does not have a direct (nor known) translation to the recipe languages of typical BDI agent programming languages such as AgentSpeak [22] and CAN [30]. For example, HTNs allow a flexible specification of how steps in a recipe should be interleaved, whereas steps in CAN recipes must be sequential or interleaved in a ‘series-parallel’ [28] manner.
There have already been some efforts toward adapting HTN planning systems to make them behave more like BDI agent systems. Perhaps the first of these efforts was the RETSINA architecture [27], which used an HTN language and semantics for representing recipes and refining tasks, but also interleaved task refinement with acting in the environment. RETSINA is an implemented architecture which has been used in a range of real-world applications. In [7], the JSHOP [20] HTN planner is modified in two ways: (i) to execute a solution (comprising a sequence of actions) found via lookahead, and then replan if the solution is no longer viable in the real world (due to a change in the environment), and (ii) to immediately execute the chosen refinement for a task, instead of first performing lookahead to check whether the refinement will accomplish the task. The latter modification made JSHOP as effective as the industry-strength JACK BDI agent framework [29], in terms of responsiveness to environmental changes.
However, both RETSINA and the JSHOP variant lack a formalism, making it difficult to study the properties (e.g. correctness) of their semantics, and to compare them to other similar systems. The same applies to the algorithms and abstract syntax of the RAE framework, which are presented only in pseudocode.
There is also some work on making BDI-like agent systems behave more like HTN planning systems. In particular, both the REAP algorithm in [14] and the CANPlan [24, 25] BDI agent programming language (and its extensions such as [3, 6]) can make informed decisions about refinement choices by using a lookahead capability. Similarly, there are agent programming languages and systems that support some form of planning (though not HTN-style planning) [19], such as the PRS-based [13] Propice-Plan [8] system and the situation-calculus-based IndiGolog [4] system. Finally, there are also some interesting extensions to HTN and HTN-like planning [12, 1, 17, 31, 26, 2], e.g. approaches that combine classical and HTN planning. In contrast, our work is not concerned with lookahead or planning, but with adapting the HTN planning semantics to enable BDI-style behaviour.
Thus, our contribution is a formal account of HTN acting, which supports interleaved deliberation, acting, and recovery from failure, e.g. due to environmental changes. To this end, we use the syntax of the most general HTN planning formalism [11, 9], and we build on its core semantics by developing three main definitions: execution via reduction, action, and replacement. We then provide an algorithm for HTN acting which combines our new formalism with the processing of exogenous events. We also study the properties of HTN acting, particularly in relation to HTN planning.
2 Background: HTN Planning
In this section we provide the necessary background material on HTN planning. Some definitions are given only informally; we refer the reader to [11, 9] for the formal definitions.
An HTN planning problem is a tuple P = (d, I, D) comprising a task network d, an initial state I, which is a set of ground atoms, and a domain D = (Me, Op), where Me is a set of reduction methods and Op is a set of STRIPS-like operators. HTN planning involves iteratively decomposing/reducing the ‘abstract tasks’ occurring in d and the resulting task networks by using methods in Me, until only STRIPS-like actions remain that can be ordered and executed from I relative to Op.
A task network is a couple d = (S, φ), where φ is a constraint formula and S is a non-empty set of labelled tasks, i.e., constructs of the form (n : α); element n is a task label, which is a 0-ary task-label symbol (in FOL) that is unique in d and Me, and α is a non-primitive or primitive task, which is an n-ary task symbol whose arguments are function-free terms. The constraint formula φ is a Boolean formula built from negation, disjunction, conjunction, and constraints, each of which is either: an ordering constraint of the form (n ≺ n′), which requires the task (corresponding to label) n to precede task n′; a before (resp. an after) state-constraint of the form (l, n) (resp. (n, l)), which requires literal l to hold in the state just before (resp. after) doing n; a between state-constraint of the form (n, l, n′), which requires l to hold in all states between doing n and n′; or a variable binding constraint of the form (v = v′), which requires v and v′ to be equal, each of which is a variable or constant. We ignore variable binding constraints as they can be specified as state-constraints, using the binary logical symbol ‘=’.
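To make the structure concrete, the following is a minimal data-structure sketch of labelled tasks, constraints, and task networks as described above. All class and field names here are illustrative choices of ours, not notation from the paper.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class LabelledTask:
    label: str                  # unique task label, e.g. "n1"
    symbol: str                 # task symbol, e.g. "navigate"
    args: Tuple[str, ...] = ()  # function-free terms
    primitive: bool = False

@dataclass(frozen=True)
class Constraint:
    kind: str                   # "order", "before", "after", or "between"
    args: Tuple                 # the task labels and/or literals involved
    negated: bool = False

@dataclass
class TaskNetwork:
    tasks: dict                 # task label -> LabelledTask
    constraints: list           # a conjunction (list) of Constraint objects
```

For instance, a network with a navigation task ordered before an upload action would pair two `LabelledTask` entries with a single `Constraint("order", ("n1", "n2"))`.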
Instead of specifying a task label, a constraint may also refer, using expression first[N] or last[N], to the action that is eventually ordered to occur first or last (respectively) among those that are yielded by the set of task labels N. While these expressions can be ‘inserted’ into a constraint when a task is reduced, we assume that they do not occur in methods.
A primitive task, or action, α, has exactly one relevant operator in Op, i.e., one operator associated with a primitive task that has the same task symbol and arity as α; any variable appearing in the operator also appears in α and its precondition. Given a primitive task α, we denote its precondition, add-list and delete-list relative to Op as pre(α), add(α) and del(α), respectively. A non-primitive task can have one or more relevant methods in Me. A method is a couple (α, d), where α is a non-primitive task, the arguments of α are distinct variables (while [11] does allow this vector to contain constants, we instead specify such binding requirements in the constraint formula), and d is a task network.
Given an HTN planning problem P = (d, I, D), the core planning steps involve selecting a relevant method for some non-primitive task and then reducing the task to yield a ‘less abstract’ task network. Reducing (n : α) with a method (α′, d′) involves replacing (n : α) with the tasks in d′ and updating the constraint formula, e.g. to include the constraints in d′; formal definitions for method relevance and reduction are given in Section 3. The set of reductions of d is denoted red(d, I, D).
If all non-primitive tasks in the initial and subsequent task networks have been reduced, a completion is obtained from the resulting ‘primitive’ task network. Informally, σ is a completion of a primitive task network d = (S, φ) at a state I, denoted σ ∈ comp(d, I, D), if σ is a total ordering of a ground instance of the actions in S that satisfies φ; if d mentions a non-primitive task, then comp(d, I, D) = ∅.
Finally, the set of all HTN solutions is defined as sol(d, I, D) = ⋃_{n < ω} sol_n(d, I, D), where sol_n(d, I, D) is defined inductively as sol_1(d, I, D) = comp(d, I, D) and sol_{n+1}(d, I, D) = sol_n(d, I, D) ∪ ⋃_{d′ ∈ red(d, I, D)} sol_n(d′, I, D).
In words, the set of HTN solutions for a given planning problem is the set of all completions of all primitive task networks that can be obtained via zero or more reductions of the initial task network.
A Running Example
Let us consider the example of a rover agent exploring the surface of Mars. A part of the rover’s HTN domain is illustrated in Figure 1 (with braces omitted in first and last expressions). The top-level non-primitive task is to transfer, to the lander, previously gathered soil analysis data from a location, and if possible to also deliver the soil sample for further analysis inside the lander.
The top-level task is achieved using one of two methods, both of which require the data and sample from the location to be available. If the rover is low on battery charge (lowBat), the first method is used. This transmits the soil data but it does not deliver the soil sample, which may result in losing it if it is later discarded to make room for other samples. The first method prescribes establishing radio communication with the lander, sending it the data by first including metadata, and then breaking the connection, while checking continuously that the connection is not lost between the first and last tasks. If the rover is not low on battery charge, the second method is used to achieve the top-level task; it prescribes navigating to a lander and then uploading and depositing the soil data and sample, respectively.
Navigation is performed using one of two further methods. The first prescribes calibrating the onboard instruments, moving the cameras to point straight (which asserts camMoved), and moving to the lander; while the first two actions can happen in any order, the third must happen last. The method requires that the instruments are not currently calibrated and that the battery charge is not low. The second method is similar except that it is used only if the instruments are already calibrated, for example due to a recent calibration to achieve another task.
Action mv requires camMoved to hold, and it consumes a significant amount of charge, i.e., it asserts lowBat. (For simplicity, we assume ‘low charge’ is less than or equal to 50% of the maximum charge, and an action requiring a ‘significant’ amount of charge consumes 50%. We also consider it unsafe for the charge to reach 0%.) Action procImg (not shown) requires raw to hold and asserts lowBat; the action processes and compresses new raw images (if any exist, i.e., raw holds) of the Martian surface that were taken by the cameras. Doing procImg infrequently may result in losing older images, if they are overwritten to make space on the storage device. (We assume that delivering a soil sample to the lander and processing images before they are overwritten have equal importance.) The other actions consume a negligible amount of charge, and action charge (not shown) makes the battery fully charged.
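The rover’s actions can be pictured as STRIPS-like operators. The sketch below encodes a few of them as precondition/add-list/delete-list triples; the exact contents of these sets are simplified assumptions based on the description above (e.g. mv’s precondition and whether procImg deletes raw), not the paper’s Figure 1.

```python
# Illustrative STRIPS-like encodings of some rover actions (assumptions only).
OPERATORS = {
    "mv":      {"pre": {"camMoved"}, "add": {"lowBat"}, "del": set()},
    "charge":  {"pre": set(),        "add": set(),      "del": {"lowBat"}},
    "procImg": {"pre": {"raw"},      "add": {"lowBat"}, "del": {"raw"}},
}

def apply_action(name, state):
    """Apply a ground action to a state (a set of atoms), STRIPS-style:
    check the precondition, then remove the delete-list and add the add-list."""
    op = OPERATORS[name]
    if not op["pre"] <= state:
        raise ValueError(f"{name} is not applicable in {state}")
    return (state - op["del"]) | op["add"]
```

For example, applying mv in a state where camMoved holds yields a state that also contains lowBat, after which charge removes lowBat again.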
3 Preliminaries and Assumptions
In this section we formally define the notion of reduction, and we state the remaining assumptions.
First, we separate the notion of method relevance from the notion of reduction in [11]. In what follows, we use the standard notion of substitution [18], and of applying a substitution θ to an expression e, which we denote by eθ.
Definition 1 (Relevant Method).
Let D be a domain, α a non-primitive task, and (α′, d) ∈ Me a method. If α′θ = α for some substitution θ, then dθ is a relevant method-body for α relative to D. (All variables and task labels in dθ must be renamed with variables and task labels that do not appear anywhere else.) The set of all such method-bodies is denoted by rel(α, D).
In the definition of reduction below, and in the rest of the paper, we denote by lab(S) the set of all task labels appearing in a given set of labelled tasks S.
Definition 2 (Reduction (adapted from [11])).
Let d = (S, φ), with (n : α) ∈ S, be a task network and α a non-primitive task, and let d′ = (S′, φ′) ∈ rel(α, D). The reduction of (n : α) in d with d′, denoted reduce(d, n, d′), is the task network ((S \ {(n : α)}) ∪ S′, φ″), where φ″ is obtained from φ ∧ φ′ with the following modifications:

- replace each (n ≺ n″) with (last[lab(S′)] ≺ n″), as n″ must come after every task in n’s decomposition;

- replace each (n″ ≺ n) with (n″ ≺ first[lab(S′)]);

- replace each (l, n) with (l, first[lab(S′)]), as l must be true immediately before the first task in n’s decomposition;

- replace each (n, l) with (last[lab(S′)], l), as l must be true immediately after the last task in n’s decomposition;

- replace each (n, l, n″) with (last[lab(S′)], l, n″);

- replace each (n″, l, n) with (n″, l, first[lab(S′)]); and

- everywhere that n appears in φ″ in a first or a last expression, replace it with the labels in lab(S′).
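The rewriting of ordering constraints in the definition above can be sketched as a single pass over a constraint list. In this sketch, ordering constraints are encoded as pairs and first/last expressions as tagged tuples; the encodings and the function name are illustrative, not the paper’s notation.

```python
def reduce_orderings(constraints, n, new_labels):
    """Rewrite ordering constraints mentioning the reduced label n:
    (n < n2) becomes (last[new_labels] < n2), and (n1 < n) becomes
    (n1 < first[new_labels]), mirroring the first two items above."""
    first = ("first", frozenset(new_labels))
    last = ("last", frozenset(new_labels))
    out = []
    for (a, b) in constraints:   # each constraint is a pair meaning a < b
        if a == n:
            out.append((last, b))
        elif b == n:
            out.append((a, first))
        else:
            out.append((a, b))
    return out
```

State-constraints are handled analogously, substituting first[lab(S′)] or last[lab(S′)] for n in the appropriate argument position.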
For example, consider a task network containing just the labelled top-level task from Figure 1 and its associated constraint formula. The first method in Figure 1 is relevant for the task, so the task can be reduced with the method’s body: the resulting task network contains the method’s (relabelled) subtasks in place of the top-level task, and its constraint formula is the conjunction of the original formula and the method’s formula, updated to account for the reduction as in Definition 2.
In the rest of the paper, we ignore the charge task, and when we need to refer to a labelled task (n : α) we simply use its task label n if the corresponding task is obvious; e.g. we would represent a labelled task from the example above simply by its label.
The remaining assumptions that we make are the following. First, without loss of generality [5], we assume that HTN domains are conjunctive, i.e., they do not mention constraint formulas that specify a disjunction of elements. Thus, we sometimes treat a constraint formula as a set (of possibly negated constraints).
Definition 3 (Conjunctive HTNs [5]).
A task network is conjunctive if its constraint formula is a conjunction of possibly negated constraints. A domain is conjunctive if the task network in every method is conjunctive.
Second, to distinguish between reductions that are being pursued at different levels of abstraction, we assume a reduction produces at least two tasks, i.e., any method (α, (S, φ)) is such that |S| ≥ 2. This can be achieved using ‘noop’ actions, denoted nop, if necessary, which have ‘empty’ preconditions and effects.
Third, for any method (α, (S, φ)), there exists a (possibly ‘noop’) task (n : α′) ∈ S that is ordered to occur last, i.e., (n″ ≺ n) ∈ φ for any n″ ∈ lab(S) \ {n}, and (n ≺ n″) ∉ φ for any n″. This will ensure that all the after state-constraints in φ are evaluated by our semantics.
Finally, we assume that the user does not specify inconsistent ordering constraints in a method’s constraint formula, e.g. the constraints (n ≺ n″) and (n″ ≺ n). Formally, let φ⁺ denote the transitive closure of a constraint formula φ, i.e., the one that is obtained from φ by adding the constraint (n ≺ n″) whenever (n ≺ n′) and (n′ ≺ n″) hold for some n′. Then, for any method (α, (S, φ)), there does not exist a pair (n ≺ n″), (n″ ≺ n) ∈ φ⁺ nor a constraint (n ≺ n) ∈ φ⁺.
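This consistency assumption can be checked mechanically: compute the transitive closure of the ordering pairs and reject any formula whose closure contains a reversed pair or a self-loop. The following is a simple sketch (a naive fixpoint computation, adequate for small method bodies; names are ours).

```python
def transitive_closure(orderings):
    """Fixpoint computation of the transitive closure of a set of
    ordering pairs (a, b), each meaning 'a precedes b'."""
    closure = set(orderings)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def consistent(orderings):
    """No pair (a, b) with (b, a) also in the closure, and no (a, a)."""
    cl = transitive_closure(orderings)
    return all((b, a) not in cl and a != b for (a, b) in cl)
```

A cycle such as n1 ≺ n2 ≺ n3 ≺ n1 is detected because the closure then contains a self-loop.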
4 A Formalism for HTN Acting
We now develop a formalism for HTN acting by defining, in particular, three notions of execution: via reduction, action, and replacement. The first notion is based on task reduction; the second notion defines what it means to execute an action in the HTN setting, in particular, the gathering and evaluating of constraints relevant to the action; and the third notion represents failure handling, i.e., the replacement of ‘blocked’ tasks by alternative ones.
We only allow a task occurring in a task network to be executed via action or reduction if it is a primary task in the network, i.e., there are no other tasks that must precede it. Formally, given a task network d = (S, φ), we first define two sets of task labels: the labels n″ such that a (non-negated) ordering constraint (n ≺ n″) occurs in φ, and the labels n such that a negated ordering constraint ¬(n ≺ first[N]) occurs in φ. That is, both sets contain the tasks that cannot be primary ones; an action labelled by n in a negated ordering constraint ¬(n ≺ first[N]) cannot be a primary task because one or more of the tasks represented by N must precede it. (This is provided none of the actions associated with N have already been executed. As we show later, in our semantics, such an execution will result in the then ‘realised’ constraint being removed.) Then, we define the set of primary tasks of task network d, denoted P(d), as the labelled tasks in S whose labels appear in neither set. For example, in the navigation method of Figure 1 that calibrates the instruments, the calibration and camera-movement tasks are primary, while the move task is not.
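The primary-task computation can be sketched directly from the two label sets above. Here, negated ordering constraints are encoded as pairs (n, N) meaning ‘it is not the case that n precedes first[N]’; this encoding is an assumption of ours for illustration.

```python
def primary_tasks(labels, orderings, neg_orderings=()):
    """Return the labels with no task required to precede them.
    orderings: set of (a, b) pairs meaning a precedes b.
    neg_orderings: set of (n, ns) pairs encoding 'not (n before first[ns])',
    which forces some task in ns to precede n."""
    preceded = {b for (_, b) in orderings}
    preceded |= {n for (n, ns) in neg_orderings if ns}
    return set(labels) - preceded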
We can now define our first notion, an execution via reduction of a task network, as the reduction of an arbitrary primary non-primitive task via a relevant method. To enable trying alternative reductions for the task if the one that was selected fails or is not applicable, we maintain the set of all relevant methods for the task, and update the set as alternative methods are tried. We use the term reduction couple to refer to a couple comprising two sets: (i) the set representing the reductions being pursued for a task (and its subtasks), and (ii) the set of current alternative method-bodies for the task. We use R to denote the set of reduction couples corresponding to the tasks reduced so far, where each couple is of the form (T, X), with T being a set of labelled tasks, and X a set of task networks. While the initial value of R and how it can ‘evolve’ will be made concrete via formal definitions, we shall for now illustrate these with an example.
Let us consider a task network whose set of labelled tasks contains only the top-level task of Figure 1; an initial state in which the data and sample are available and the battery charge is not low; the ‘initial’ set of reduction couples; and the domain depicted in Figure 1. An execution via reduction of the task network from the state relative to the couples and the domain is a tuple whose set of tasks contains the chosen method’s subtasks, whose formula is the method’s formula from Figure 1 with its location variable substituted accordingly, and whose resulting set of reduction couples contains a couple pairing the new subtasks with the alternative method-body for the top-level task. Moreover, an execution via reduction of this tuple reduces the navigation task similarly, with the resulting formula being the conjunction of the two methods’ formulas, updated to account for the reduction.
We call a 4-tuple of the form (s, S, φ, R), as in the example above, a configuration; it comprises a state, a set of labelled tasks, a constraint formula, and a set of reduction couples. (For brevity, we omit the fifth element θ, representing the substitutions applied so far to variables appearing in the configuration.)
Definition 4 (Execution via Reduction).
Let D be a domain; s a state; d = (S, φ) a task network with a non-primitive task (n : α) ∈ P(d); R a set of reduction couples; d′ = (S′, φ′) ∈ rel(α, D) a method-body, with reduce(d, n, d′) = (S″, φ″); and couple r = (S′, rel(α, D) \ {d′}). An execution via reduction of d from s relative to R and D is the configuration (s, S″, φ″, R′), where R′ is R ∪ {r} with any occurrence of (n : α) replaced by the elements in set S′.
We now define the second kind of execution: performing an action. In order to execute a (primary) action, it must be applicable, i.e., its precondition and any constraints that are relevant to the action must hold in the current state. Such constraints could have been (directly) specified on the action, ‘inherited’ from one or more of the action’s ‘ancestors’, or ‘propagated’ from an earlier action. We first define the notion of a relevant constraint; we ignore negated between state-constraints for brevity. (To account for a negated between state-constraint ¬(n, l, n″), we check in every state between n and n″ whether l holds. If so, we remove the constraint from the formula. If the constraint still exists when the first action of n″ is executed, it is then relevant for it.)
Definition 5 (Relevant Constraint).
Let d = (S, φ) be a task network with an action (n : α) ∈ S, and c a between state-constraint or a possibly negated before or after state-constraint in φ. Let c′ be the non-negated constraint corresponding to c. Then, c is relevant for executing n relative to d if for some literal l:

- c′ = (l, n), or c′ = (l, first[N]) for some set of labels N with n ∈ N; or

- for some already-executed n′ and some N with n ∈ N, c′ = (n′, l, n), c′ = (n′, l, first[N]), or c′ = (n′, l).
The set of relevant constraints for executing n relative to d is denoted by rel(n, d). For example, if d is the task network resulting from the two reductions in our running example, the relevant constraints for the first primary action in Figure 1 include the two before state-constraints ‘inherited’ from the top-level task, together with the constraint specified on the action itself. In the above definition, n′ represents an action that was already executed, whose associated after or between state-constraints have been ‘propagated’ to n.
We next define what it means to ‘extract’ the literals from a given set C of state-constraints. Let us denote the subset of negated constraints as C⁻ = {c ∈ C : c is a negated constraint}, and the subset of positive ones as C⁺ = C \ C⁻. Then, the set of extracted literals is lit(C) = {¬l : literal l occurs in some c ∈ C⁻} ∪ {l : literal l occurs in some c ∈ C⁺}. We can now define what it means for an action to be applicable.
Definition 6 (Applicability).
Let D = (Me, Op) be a domain, s a state, and d = (S, φ) a task network with an action (n : α) ∈ S such that (n : α) ∈ P(d). Let Φ denote the precondition and extracted literals, i.e., the formula pre(α) ∧ ⋀ lit(rel(n, d)). Then, α is applicable in s relative to d and Op if s ⊨ Φ.
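The applicability check amounts to evaluating a conjunction of the precondition and the extracted (possibly negated) literals against the current state. The sketch below encodes each relevant state-constraint as a (literal, negated) pair; the encoding and function names are illustrative.

```python
def extracted_literals(constraints):
    """Split relevant state-constraints into the positive literals that
    must hold and the literals that must not hold. Each constraint is a
    (literal, negated_flag) pair; this encoding is illustrative."""
    pos = {lit for (lit, neg) in constraints if not neg}
    neg = {lit for (lit, neg) in constraints if neg}
    return pos, neg

def applicable(state, precondition, relevant):
    """An action is applicable if its precondition and all relevant
    state-constraint literals hold in the state (a set of atoms)."""
    pos, neg = extracted_literals(relevant)
    return precondition <= state and pos <= state and not (neg & state)
```

For instance, an action with precondition camMoved, a relevant positive constraint on calibrated, and a relevant negated constraint on lowBat is applicable exactly when the first two atoms hold and lowBat does not.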
Executing an applicable action results in changes to both the current state and the current task network: the action is removed from the network’s set of tasks, and the action’s ‘realised’ constraints, e.g. the relevant ones that do not need to be re-evaluated before executing other actions, are removed from the network’s constraint formula. The constraints that do need to be re-evaluated are the between state-constraints that require literals to hold from the end of an action that was executed earlier, up to an action that is yet to be executed. Formally, given a task network d = (S, φ) and an action (n : α) ∈ S, we denote the realised ordering constraints upon executing n (relative to φ) as the set of ordering constraints (n ≺ n′) ∈ φ, together with the negated ordering constraints ¬(n′ ≺ first[N]) ∈ φ with n ∈ N, where n′ represents an action (or actions) that is yet to be executed. Notice that a negated ordering constraint is realised only if one or more (or all) of the actions corresponding to N are executed after the first (or only) one corresponding to n′. Next, we denote the realised state-constraints upon executing n as the set obtained from rel(n, d) by removing any between state-constraint that requires a literal to hold up to an action that is yet to be executed. Then, we can define the set of realised constraints upon executing n relative to d as the union of these two sets, and the result of executing an action as follows.
Definition 7 (Action Result).
Let Op be a set of operators, s a state, d = (S, φ) a task network, R a set of reduction couples, (n : α) ∈ S an action, and θ a substitution such that αθ is ground. The result of executing n from s relative to d, R and Op is the tuple (s′, S′, φ′, Rθ), where

- s′ = (s \ del(αθ)) ∪ add(αθ);

- S′ = (S \ {(n : α)})θ; and

- φ′ is obtained from φ, with the realised constraints removed and θ applied, by removing all occurrences of n within first and last expressions. (We also remove from φ′ any remaining between state-constraint that holds trivially.)
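The three updates in the definition can be sketched together: progress the state, drop the executed action from the task set, and discard its realised ordering constraints. The constraint bookkeeping below is deliberately simplified (every ordering pair mentioning the executed label is dropped), so it is an illustration of the shape of Definition 7, not a full implementation.

```python
def execute_action(state, tasks, constraints, label, op):
    """Sketch of the action-result tuple: apply STRIPS effects, remove
    the executed task, and drop realised ordering constraints.
    tasks: dict label -> task symbol; constraints: list of (a, b) pairs;
    op: {'pre': set, 'add': set, 'del': set} (illustrative encoding)."""
    new_state = (state - op["del"]) | op["add"]
    new_tasks = {lbl: t for lbl, t in tasks.items() if lbl != label}
    new_constraints = [(a, b) for (a, b) in constraints
                       if a != label and b != label]
    return new_state, new_tasks, new_constraints
```

Executing a calibration action, for example, removes its label from the task set and discards the ordering constraint that placed it before the move action.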
Notice that the only possible update to the set of reduction couples is a substitution of one or more variables (we do not remove executed actions from reduction couples). Finally, we define an execution via action of a task network as the execution of (a ground instance of) an applicable primary action in it.
Definition 8 (Execution via Action).
Let D = (Me, Op) be a domain, s a state, R a set of reduction couples, and d = (S, φ) a task network such that αθ is applicable in s relative to d and Op, for some substitution θ and primary action (n : α) ∈ P(d), where the formula of Definition 6 (under θ) is ground. An execution via action of d from s relative to R and D is the configuration given by the result of executing n from s relative to d, R and Op.
Continuing with our running example, consider the configuration resulting from the two reductions from before. Then, an execution via action of its task network from its state relative to its couples and the domain is the configuration in which the executed action has been removed from the set of tasks; the formula is obtained from the previous one by removing all realised constraints and updating the remaining constraints to refer to the tasks that are still to be executed; and the set of couples is obtained from the previous one by applying the action’s substitution.
Observe that the applicability of a method (relative to the current state) is not checked at the point that it is chosen to reduce a task, but immediately before executing (for the first time) an associated primary action—which may be after performing further reductions and unordered actions. On the other hand, BDI agent programming languages such as AgentSpeak and CAN check the applicability of a relevant recipe at some point before (not necessarily just before) executing an associated primary action. Thus, in cases where the environment changes between checking the recipe’s applicability and executing an associated primary action (for the first time), and makes the recipe no longer applicable, the action will still be executed (provided, of course, the action itself is applicable). Such behaviour is not permitted by our semantics.
We now define the final notion of execution: execution via replacement, i.e., replacing the reductions being pursued for a task if they have become blocked. Intuitively, this happens when none of the primary actions in the pursued reductions are applicable, and none of the primary non-primitive tasks have a relevant method.
Formally, let D be a domain, s a state, d = (S, φ) a task network, and (T, X) a reduction couple with T ⊆ S. Then, set T is blocked in d from s relative to D if for all (n : α) ∈ T ∩ P(d), either α is an action that is not applicable in s relative to d and Op, or α is a non-primitive task and rel(α, D) = ∅. Recall that T represents the reductions that are being pursued for a particular task (and its subtasks).
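The blocked check can be sketched as a loop over the primary tasks of the pursued reductions: if any primary action is applicable, or any primary non-primitive task still has a relevant method, the set is not blocked. All parameter encodings below are illustrative.

```python
def blocked(primaries, state, operators, methods, applicable_fn):
    """A set of pursued reductions is blocked if every primary task is
    either an action that is not applicable, or a non-primitive task
    with no relevant method. operators maps action names to operator
    dicts; methods maps task names to lists of relevant method bodies."""
    for task in primaries:
        if task in operators:                    # primitive task (action)
            if applicable_fn(state, operators[task]["pre"]):
                return False
        elif methods.get(task):                  # non-primitive task
            return False
    return True
```

For example, a lone mv action whose precondition does not hold in the current state leaves its reduction couple blocked, triggering an execution via replacement.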
When such pursued reductions are blocked, they are replaced by an alternative relevant method-body for the task. In the definition below, we use the first and last constructs (if any) ‘inserted’ into the constraint formula by the first reduction of the task (Definition 2). Recall that these constructs represent the ‘inheritance’ of the task’s associated constraints by its descendant tasks.
Definition 9 (Replacement).
Let d = (S, φ) be a task network, (T, X) a reduction couple, and d′ = (S′, φ′) ∈ X. The replacement of (the elements of) T in d with d′ is the task network ((S \ T) ∪ S′, φ″), where φ″ is obtained from φ ∧ φ′ by (i) replacing any occurrence of (all) the task labels in lab(T), within a first or a last expression, with the labels in lab(S′), and then (ii) removing any element mentioning a task label in lab(T).
After a replacement, we need to update the set of reduction couples accordingly, by doing the same replacement in all relevant reduction couples. In the definition below, the set T and task network d′ are as above.
Definition 10 (Update).
Let R be a set of reduction couples with (T, X) ∈ R, and let d′ = (S′, φ′) ∈ X. The update of T in R with d′ is the set obtained from R by replacing couple (T, X) with (S′, X \ {d′}), and then removing any couple that still mentions an element in T.
Finally, we combine the two definitions above to define the configuration that results from an execution via replacement. While we provide a general definition, for replacing any task’s blocked (pursued) reductions, one might instead want to, as in depth-first search, first replace a least abstract task’s blocked reductions. That is, one might want to first consider the smallest replaceable reduction couples. Formally, given a set of reduction couples R, a couple (T, X) ∈ R is a smallest replaceable one in R if T is blocked and, for each couple (T′, X′) ∈ R whose set T′ is also blocked, either (a) T′ ∩ T = ∅; (b) T ⊂ T′; or (c) (T′, X′) = (T, X).
Definition 11 (Execution via Replacement).
Let D be a domain, s a state, d a task network, and R a set of reduction couples with a couple (T, X) ∈ R such that T is blocked in d from s relative to D. An execution via replacement of d from s relative to R and D is the configuration comprising s, the replacement of T in d with some d′ ∈ X, and the update of T in R with d′;
the replacement is complete if no action in T has yet been executed and partial otherwise, and a jump if (T, X) is not a smallest replaceable couple in R.
A complete-replacement represents the BDI-style searching of an achievement-goal’s (i.e., a task’s) set of relevant recipes in order to find one that is applicable, and a partial-replacement represents BDI-style recovery from the failure to execute (or successfully execute) an action, e.g. due to an environmental change. We illustrate these notions of replacement with the following examples.
Continuing with our running example, consider the configuration resulting from the two reductions from before. Suppose however that the rover’s instruments were not calibrated, i.e., the corresponding literal does not hold in the state. Then, the chosen navigation method’s first action is not applicable, and an execution via complete-replacement is performed on the tasks pursued for the navigation task, yielding a configuration in which those tasks are replaced by the subtasks of the alternative navigation method; the formula is the conjunction of the remaining formulas, updated by, e.g., removing the constraints that were copied from the replaced method-body; and the set of couples is updated accordingly.
Suppose we now perform two executions via action to obtain a configuration with the formula (resp. set of couples) updated to account for the executions. Finally, suppose that the battery level drops due to the execution of the top-level image processing action procImg, which makes the move action no longer applicable. (We will show later how procImg could instead be absent in the initial task network and arrive ‘dynamically’ from the environment.) Then, an execution via partial-replacement will be performed on the tasks pursued for the top-level task, yielding a configuration with the task network and the set of couples updated accordingly.
5 Properties of the Formalism
In this section, we discuss the properties of our formalism, and in particular how it relates to HTN planning.
The properties are based on the definition of an execution trace, which formalises the consecutive execution of a configuration, via reduction, replacement, or action, as in our running example. In what follows, we say that a configuration is an execution of a task network d from a state s relative to a set of reduction couples R and a domain D if it is an execution via reduction, action, or replacement of d from s relative to R and D.
Definition 12 (Execution Trace).
Let d be a task network, s a state, and D a domain. An execution trace of d from s relative to D is any sequence of configurations C₀, …, C_k, with each C_i = (s_i, S_i, φ_i, R_i), such that s₀ = s; (S₀, φ₀) = d; R₀ is the initial set of reduction couples; and C_{i+1} is an execution of (S_i, φ_i) from s_i relative to R_i and D for all 0 ≤ i < k.
We also need some auxiliary definitions related to execution traces. Consider the configuration C_k above. First, if S_k contains no tasks, then the trace is successful. Second, if every primary task in S_k can be executed via neither action nor reduction, and every blocked set of pursued reductions in R_k has no alternative method-bodies left, then the trace is blocked. The following proposition states that if a trace is successful or blocked as we have ‘syntactically’ defined, then there is no way to ‘extend’ the trace further, and vice versa.
Proposition 1.
Let C₀, …, C_k be an execution trace of a task network d from a state s relative to a domain D. There exists an execution trace C₀, …, C_k, C_{k+1} of d (from s relative to D) if and only if C₀, …, C_k is neither successful nor blocked.
Proof.
If there exists a trace C₀, …, C_k, C_{k+1}, then C₀, …, C_k cannot be successful, as its final task network would then not mention any tasks, and thus we could not ‘extend’ it to C_{k+1}. The fact that C₀, …, C_k cannot be blocked follows from the fact that an execution via replacement, action, or reduction of its final task network is possible. Conversely, if C₀, …, C_k is neither successful nor blocked, then the only reason it would not be possible to ‘extend’ it is if some set of pursued reductions could neither be executed nor counted as blocked. However, this is only possible if a method-body exists whose constraint formula contains inconsistent (possibly negated) ordering constraints. Such method-bodies are not allowed due to our assumption in Section 3. ∎
The next three properties rely on traces that are free from certain kinds of execution. A trace C₀, …, C_k is complete-replacement free if there does not exist an index i < k such that C_{i+1} is an execution via complete-replacement of the task network of C_i from its state relative to its set of reduction couples and D. We define partial-replacement free and jump free traces similarly.
Given any execution trace, the next theorem states that there is an equivalent one, in terms of actions performed, that is complete-replacement free. Intuitively, this is because, either with some ‘lookahead’ mechanism or ‘luck’, a complete-replacement can be avoided by choosing a different (or ‘correct’) relevant method-body for a task. We define the actions performed by a trace (or the pursued ‘solution’) as follows. Given an index i < k, if C_{i+1} is an execution via action of C_i’s task network, we take the action that was executed at that step; otherwise, we take nothing. The actions performed by the trace is then the resulting sequence of actions, with the substitution of the final configuration applied to the sequence.
Theorem 1.
Let C₀, …, C_k be an execution trace of a task network d from a state s relative to a domain D. There exists a complete-replacement free execution trace of d from s relative to D that performs the same actions as C₀, …, C_k and reaches the same final state.
Proof.
Without loss of generality, we use a slightly modified version of Definition 4 that also stores the unique task label that was reduced, i.e., the tuple added to the set of reduction couples additionally records the label of the reduced task. Then, given such a tuple occurring in the above execution trace, we say that a set of labelled tasks is an evolution of the tuple’s set of pursued reductions (relative to the trace and the recorded label) if the former can be obtained from the latter via the subsequent reductions and replacements in the trace.
Consider the smallest index i such that C_{i+1} (with each C_j a configuration of the trace) is an execution via complete-replacement of the task network of C_i from its state relative to its couples and D. If there is no such index then the theorem holds trivially; otherwise, the replaced set of tasks is an evolution of the set introduced by some earlier reduction.
Consider the prefix of the trace ending at the smallest index j at which the ‘incorrect’ reduction was performed, i.e., where C_{j+1} is an execution via reduction of the task network of C_j from its state relative to its couples and D, and the set replaced at step i is an evolution of the set introduced at step j, but not of any set introduced by reducing one of the task’s ‘ancestors’.
The reduction performed at step j is thus the ‘incorrect’ one. Suppose instead that the ‘correct’ one was performed, i.e., consider a tuple that is an execution via reduction of the same task network, from the same state, relative to the same couples and D, but using the alternative method-body from earlier. We now show that all executions performed from step j up to step i (which do not involve complete-replacements) can also be performed from this new tuple.
Suppose that there is at least one such execution, i.e., j + 1 < i. Then C_{j+2} is an execution via reduction, partial-replacement or action of the task network of C_{j+1} from its state relative to its couples and D. Let T be the set containing the task that was executed or reduced, or the tasks that were replaced; in the case of an execution via reduction or partial-replacement, let T′ be the set of new tasks. If the execution is ‘relevant’ and it is not a reduction of some ‘descendant’ of the task reduced at step j (the execution cannot be via complete- or partial-replacement of such a descendant either, as the first execution via complete-replacement happens at index i), we then show that there exists also a corresponding tuple that is an execution via reduction, partial-replacement or action of the new tuple’s task network from its state relative to its couples and D, involving the same sets T and T′.
There are two main cases to consider: an execution via action, and an execution via partial-replacement.
In the case of an execution via partial-replacement, observe from Definition 11 that all tasks in the replaced set are blocked in the task network of C_{j+1} from its state relative to D. The same applies to the corresponding set in the new tuple’s task network for the following two reasons. Consider any primitive task in the set (which is not applicable in the relevant state). First, observe from Definition 2 that the two task networks are identical except for the tasks and constraints that were introduced by the two different reductions performed at step j. Second, observe from Definition 5 that any constraint containing a first expression whose label set includes the task’s label is relevant to the task irrespective of the other task labels that occur in the expression. Similarly, any constraint containing a last expression is not relevant to the task irrespective of the other task labels that occur in the expression. The same applies when