Unsynthesizable Cores – Minimal Explanations for Unsynthesizable High-Level Robot Behaviors

Vasumathi Raman and Hadas Kress-Gazit

V. Raman is supported by STARnet, a Semiconductor Research Corporation program, sponsored by MARCO and DARPA. H. Kress-Gazit is supported by NSF CAREER CNS-0953365 and DARPA N66001-12-1-4250. V. Raman is with the Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125 (e-mail: vasu@caltech.edu). H. Kress-Gazit is with the Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA (e-mail: hadaskg at cornell.edu). Manuscript received August 21, 2014.

With the increasing ubiquity of multi-capable, general-purpose robots arises the need for enabling non-expert users to command these robots to perform complex high-level tasks. To this end, high-level robot control has seen the application of formal methods to automatically synthesize correct-by-construction controllers from user-defined specifications; synthesis fails if and only if there exists no controller that achieves the specified behavior. Recent work has also addressed the challenge of providing easy-to-understand feedback to users when a specification fails to yield a corresponding controller. Existing techniques provide feedback on portions of the specification that cause the failure, but do so at a coarse granularity. This work presents techniques for refining this feedback, extracting minimal explanations of unsynthesizability.

high-level behaviors, formal methods, temporal logic.

I Introduction

As robots become increasingly general-purpose and ubiquitous, there is a growing need for them to be easily controlled by a wide variety of users. The near future will likely see robots in homes and offices, performing everyday tasks such as fetching coffee and tidying rooms. The challenge of programming robots to perform these tasks has until recently been the domain of experts, requiring hard-coded implementations with the ad-hoc use of low-level techniques such as path-planning during execution.

Recent advances in the use of formal methods for robot control have enabled non-expert users to command robots to perform high-level tasks using a specification language instead of programming the robot controller (e.g., [Kloetzer2008a, Karaman09, Bhatia2010, LaValle11, KGF, Nok10]). There are several approaches in which correct-by-construction controllers are automatically synthesized from a description of the desired behavior and assumptions on the environment in which the robot operates [KGF, Nok10]. If a controller implementing the specification exists, one is returned. However, for specifications that have no implementation (i.e., unsynthesizable specifications), pinpointing the cause of the problem can be frustrating and time-consuming. This has motivated recent algorithms for explaining unsynthesizability of specifications [ICRA12, TRO12], revising specifications [Fainekos11], and adding environment assumptions that would make the specification synthesizable [Li11].

There are two ways in which a specification can be unsynthesizable – it is either unsatisfiable, in which case the specified robot behavior cannot be achieved in any environment, or it is unrealizable, in which case there exists an admissible environment (satisfying the specified assumptions) that prevents the robot from achieving its specified behavior. Feedback about the cause of unsynthesizability can be provided to the user in the form of a modified specification [Fainekos11, KimFainekos12a, KimFainekos12b], a highlighted fragment of the original specification [CAV11], or by allowing the user to interact with a simulated adversarial environment that prevents the robot from achieving the specified behavior [TRO12].

Previous approaches left open the challenge of refining feedback to the finest possible granularity, providing the user with a minimal cause of unsynthesizability. The main contribution of this paper is to identify unsynthesizable cores – minimal subsets of the desired robot behavior that cause it to be unsatisfiable or unrealizable. The analysis makes use of off-the-shelf Boolean satisfiability (SAT) solvers and existing synthesis tools to find this minimal cause of unsynthesizability, and therefore lends itself to generalization to any formal specification language for which the relevant tools exist. This paper subsumes the results presented in [RSS13, IROS13], and extends the core-finding capabilities to previously unaddressed cases, as described in Section VII. In particular, this is the first paper to present a sound and complete algorithm for finding unsynthesizable cores (as defined in Section III) for specifications in the GR(1) fragment of Linear Temporal Logic (LTL), and to discuss special considerations necessitated by specifications in the robotics problem domain.

The paper is structured as follows. Section II reviews terms and preliminaries. Section III describes types of unsynthesizability, and presents a formal definition of the problem of identifying unsynthesizable cores. Section IV describes related work on analyzing unsynthesizable specifications. Sections V and VI present techniques for using Boolean satisfiability to identify unsatisfiable and unrealizable cores respectively. Section VII describes an alternative method for identifying cores, based on iterated realizability checks. Section VIII demonstrates the effectiveness of the more fine-grained feedback on example specifications. The paper concludes with a description of future work in Section IX.

II Preliminaries

The high-level tasks considered in this work involve a robot operating in a known workspace. The robot reacts to events in the environment, which are captured by its sensors, in a manner compliant with the task specification, by choosing from a set of actions including moving between adjacent locations. The tasks may include infinitely repeated behaviors such as patrolling a set of locations. Examples of such high-level tasks include search and rescue missions and the DARPA Urban Challenge.

II-A Controller Synthesis

High-level control for robotics is inherently a hybrid domain, consisting of both discrete and continuous components. Automated correct-by-construction controller synthesis for this domain using formal methods requires a discrete abstraction and a description of the task in a formal specification language. The discrete abstraction of the high-level robot task consists of a set of sensor propositions $\mathcal{X}$, whose truth values are controlled by the environment and read by the robot's sensors, and a set of action and location propositions $\mathcal{Y}$ controlled by the robot; the set of all propositions is $\pi = \mathcal{X} \cup \mathcal{Y}$. The value of each proposition is the abstracted binary state of a low-level black-box component. More details on the discrete abstraction used in this work can be found in [KGF].

The formal language used for high-level specifications in this work is Linear Temporal Logic (LTL) [LTL], a modal logic that includes temporal operators, allowing formulas to specify the truth values of atomic propositions over time. LTL is appropriate for specifying robotic behaviors because it provides the ability to describe changes in the truth values of propositions over time. To allow users who may be unfamiliar with LTL to define specifications, some tools like LTLMoP [LTLMoP] include a parser that automatically translates English sentences belonging to a defined grammar [grammar] into LTL formulas, as well as some natural language capabilities, as described in [RSS13].

II-A1 Linear Temporal Logic (LTL)

LTL formulas are constructed from atomic propositions $p \in \pi$ according to the following recursive grammar:

$$\varphi ::= p \mid \neg\varphi \mid \varphi \vee \varphi \mid \bigcirc\varphi \mid \varphi\,\mathcal{U}\,\varphi$$

where $\neg$ is negation, $\vee$ is disjunction, $\bigcirc$ is "next", and $\mathcal{U}$ is a strong "until". Conjunction ($\wedge$), implication ($\Rightarrow$), equivalence ($\Leftrightarrow$), "eventually" ($\Diamond$) and "always" ($\Box$) are derived from these operators. The truth of an LTL formula is evaluated over infinite sequences of truth assignments to the propositions in $\pi$. In this paper, truth assignments are represented as subsets $x \subseteq \pi$, with $p \in x$ set to true and $p \notin x$ set to false. Informally, the formula $\bigcirc\varphi$ expresses that $\varphi$ is true in the next position in the sequence. Similarly, $\Box\varphi$ expresses that $\varphi$ is true at every position, and $\Diamond\varphi$ expresses that $\varphi$ is true at some position in the sequence. Therefore, the formula $\Box\Diamond\varphi$ is satisfied if $\varphi$ is true infinitely often. For a formal definition of the semantics of LTL, see [MCBk].
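For reference, the derived operators can be defined from the base grammar as follows; these are the standard LTL abbreviations, stated here for completeness:

```latex
\begin{align*}
\varphi \wedge \psi &\equiv \neg(\neg\varphi \vee \neg\psi), &
\varphi \Rightarrow \psi &\equiv \neg\varphi \vee \psi, \\
\Diamond\varphi &\equiv \mathit{True}\;\mathcal{U}\;\varphi, &
\Box\varphi &\equiv \neg\Diamond\neg\varphi.
\end{align*}
```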

The task specifications in this paper are expressed as LTL formulas of the form $\varphi = (\varphi^e \Rightarrow \varphi^s)$, where $\varphi^e$ encodes assumptions about the environment's behavior and $\varphi^s$ represents the desired robot behavior. $\varphi^e$ and $\varphi^s$ each have the structure $\varphi^\alpha = \varphi_i^\alpha \wedge \varphi_t^\alpha \wedge \varphi_g^\alpha$, where $\varphi_i^\alpha$, $\varphi_t^\alpha$ and $\varphi_g^\alpha$ for $\alpha \in \{e, s\}$ represent the initial conditions, transition relation and goals for the environment ($\alpha = e$) and the robot ($\alpha = s$) respectively. This fragment of LTL is called Generalized Reactivity (1) or GR(1) [Piterman06].

The subformulas $\varphi_t^e$ and $\varphi_t^s$ above are referred to as safety formulas, and encode assumptions on the environment and restrictions on the system transitions respectively. Each consists of a conjunction of formulas of the form $\Box B_i$, where each $B_i$ is a Boolean formula over $\mathcal{X} \cup \mathcal{Y} \cup \bigcirc\mathcal{X}$ (formulas over $\bigcirc\mathcal{Y}$ are also allowed in $\varphi_t^s$). On the other hand, $\varphi_g^e$ and $\varphi_g^s$ are referred to as liveness formulas, and consist of conjunctions of clauses of the form $\Box\Diamond B_i$. Each such $B_i$ is a Boolean formula over $\mathcal{X} \cup \mathcal{Y}$, and represents an event that should occur infinitely often when the robot controller is executed. The initial conditions $\varphi_i^e$ and $\varphi_i^s$ are non-temporal Boolean formulas over $\mathcal{X}$ and $\mathcal{Y}$ respectively.

An LTL formula $\varphi$ is realizable if, for every time step, given a truth assignment to the environment propositions for the next time step, there is an assignment of truth values to the robot propositions such that the resulting infinite sequence of truth assignments satisfies $\varphi$. The synthesis problem is to find an automaton that encodes these assignments, i.e., one whose executions satisfy $\varphi$. For a synthesizable specification $\varphi$, synthesis produces an implementing automaton, enabling the construction of a hybrid controller that produces the desired high-level, autonomous robot behavior. The reader is referred to [Piterman06] and [KGF] for details of the synthesis procedure, and to [KGF, LTLMoP] for a description of how the extracted discrete automaton is transformed into low-level robot control.

II-B Environment Counterstrategy

In the case of unsynthesizable specifications, the counterstrategy synthesis algorithm introduced in [Konig09] gives an automatic method of constructing a strategy for the environment, which provides sequences of environment actions that prevent the robot from achieving the specified behavior. The counterstrategy takes the form of a finite state machine:

Definition 1.

An environment counterstrategy for an LTL formula $\varphi$ is a tuple $\mathcal{C} = (Q, Q_0, \mathcal{X}, \mathcal{Y}, \theta, \delta, \lambda_{\mathcal{X}}, \lambda_{\mathcal{Y}}, \gamma)$ where

  • $Q$ is a set of states.

  • $Q_0 \subseteq Q$ is a set of initial states.

  • $\mathcal{X}$ is a set of inputs (sensor propositions).

  • $\mathcal{Y}$ is a set of outputs (location and action propositions).

  • $\theta: Q \to 2^{\mathcal{X}}$ is a deterministic input function, which provides the input propositions that are true in the next time step given the current state $q$, and satisfies the environment safety assumptions $\varphi_t^e$.

  • $\delta \subseteq Q \times Q$ is the (nondeterministic) transition relation. If $q$ has no outgoing transitions for some $q \in Q$, then there is no next-step assignment to the set of outputs that satisfies the robot's transition relation $\varphi_t^s$, given the next set of environment inputs $\theta(q)$ and the current state $q$.

  • $\lambda_{\mathcal{X}}: Q \to 2^{\mathcal{X}}$ is a transition labeling, which associates with each state the set of environment propositions that are true over incoming transitions for that state (note that this set is the same for all transitions into a given state). Moreover, if $(q, q') \in \delta$ then $\lambda_{\mathcal{X}}(q') = \theta(q)$.

  • $\lambda_{\mathcal{Y}}: Q \to 2^{\mathcal{Y}}$ is a state labeling, associating with each state the set of robot propositions true in that state.

  • $\gamma: Q \to \mathbb{N}$ labels each state with the index of a robot goal that is prevented by that state. During the counterstrategy extraction, every state in the counterstrategy is marked with some robot goal [Konig09].

The counterstrategy provides truth assignments to the input propositions (in the form of the input function $\theta$) that prevent the robot from fulfilling its specification. The inputs provided by $\theta$ in each state satisfy $\varphi_t^e$, meaning that for all $q \in Q$, the truth assignment over two consecutive time steps given by $\lambda_{\mathcal{X}}(q) \cup \lambda_{\mathcal{Y}}(q)$ and $\theta(q)$ satisfies $B_i$ for each conjunct $\Box B_i$ in $\varphi_t^e$ (note that $B_i$ is a formula over two consecutive time steps). In addition, all infinite executions of the counterstrategy satisfy $\varphi_g^e$.

III Problem Statement

A specification that does not yield an implementing automaton is called unsynthesizable. Unsynthesizable specifications are either unsatisfiable, in which case the robot cannot succeed no matter what happens in the environment (e.g., if the task requires patrolling a disconnected workspace), or unrealizable, in which case there exists at least one environment that can prevent the desired behavior (e.g., if in the above task, the environment can disconnect an otherwise connected workspace, such as by closing a door). More examples illustrating the two cases can be found in [TRO12].

In either case, the robot can fail in one of two ways: either it ends up in a state from which it has no moves that satisfy the specified safety requirements (this is termed deadlock), or the robot is able to change its state infinitely, but one of the goals in $\varphi_g^s$ is unreachable without violating $\varphi_t^s$ (termed livelock). In the context of unsatisfiability, an example of deadlock is when the system safety conditions contain an internal contradiction. Similarly, unrealizable deadlock occurs when the environment has at least one strategy for forcing the system into a deadlocked state. Livelock occurs when one or more goals cannot be reached without violating the given safety conditions.

Consider Specification 1, in which the robot is operating in the workspace depicted in Fig. 1. The robot starts at the left hand side of the hallway (1), and must visit the goal on the right (4). The safety requirements specify that the robot should not pass through region r5 if it senses a person (2). Additionally, the robot should always activate its camera (3).

  1. Robot starts in start with camera (part of $\varphi_i^s$)

  2. If you are sensing a person then do not r5 (part of $\varphi_t^s$)

  3. Always activate the camera (part of $\varphi_t^s$)

  4. Visit the goal (part of $\varphi_g^s$)

Specification 1 Unrealizable specification – livelock
Fig. 1: Map of robot workspace in Specification 1

It is clear that the environment can prevent the goal in (4) by always activating the "person" sensor, because of the initial condition in (1) and the safety requirement in (2). Note that Specification 1 is a case of livelock: the robot can satisfy the safety requirements indefinitely by moving between the first four rooms on the left, but is prevented from ever reaching the goal if it sees a person all the time – the environment is able to disconnect the topology using the person sensor.

Previous work produced explanations of unsynthesizability in terms of combinations of the specification components (i.e., initial conditions, safeties and goals) [TRO12]. However, in many cases, the true conflict lies in small subformulas of these components. For example, the safety requirement (3) in Specification 1 is irrelevant to its unsynthesizability, and should be excluded from any explanation of failure. The specification analysis algorithm presented in [TRO12] will narrow down the cause of unsynthesizability to the goal in (4), but will also highlight the entirety of $\varphi_t^s$, declaring that the environment can prevent the goal because of some subset of the safeties (without identifying the exact subset).

This motivates the identification of small, minimal, "core" explanations of the unsynthesizability. A first step is to define what is meant by an unsynthesizable core. This paper draws inspiration from the Boolean satisfiability (SAT) literature to define an unsynthesizable core of a GR(1) LTL formula. Given an unsatisfiable SAT formula in conjunctive normal form (CNF), an unsatisfiable core is traditionally defined as a subset of CNF clauses that is still unsatisfiable. A minimal unsatisfiable core is one such that every proper subset is satisfiable; a given SAT formula can have multiple minimal unsatisfiable cores of varying sizes. This definition should be distinguished from that of a minimum unsatisfiable core, which is an unsatisfiable subset containing the smallest possible number of the original clauses. While there are several practical techniques for computing minimal unsatisfiable cores, and many modern SAT solvers include this functionality, there are no known practical algorithms for computing minimum cores. This paper will therefore focus on leveraging existing tools for computing minimal unsatisfiable cores to compute minimal unsynthesizable cores.
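The distinction between checking satisfiability and shrinking to a minimal core can be illustrated with a small sketch: the standard deletion-based algorithm drops each clause in turn and keeps it only if dropping it restores satisfiability. The brute-force satisfiability check below is purely for illustration; production tools (e.g., PicoSAT, used later in this paper) replace it with an efficient solver, and all names here are illustrative.

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT check over DIMACS-style clauses:
    integer n means variable n, -n its negation."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimal_core(clauses):
    """Deletion-based minimal unsatisfiable core: try dropping each
    clause; keep it only if dropping it makes the rest satisfiable."""
    assert not satisfiable(clauses), "input must be unsatisfiable"
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if satisfiable(trial):
            i += 1        # clause i is needed for unsatisfiability
        else:
            core = trial  # clause i is redundant; drop it
    return core

# (x1) & (!x1 | x2) & (!x2) is unsatisfiable; (x3 | x1) is irrelevant.
cnf = [[1], [-1, 2], [-2], [3, 1]]
print(minimal_core(cnf))  # -> [[1], [-1, 2], [-2]]
```

The deletion loop performs one satisfiability check per clause, which is one reason minimal (rather than minimum) cores are the practical target: minimality needs only linearly many checks, whereas a minimum core would require search over exponentially many subsets.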

Let $\varphi' \subseteq \varphi$ ($\varphi' \subset \varphi$) denote that $\varphi'$ is a subformula (strict subformula) of $\varphi$.

Definition 2.

Given a specification $\varphi$, a minimal unsynthesizable core is a subformula $\varphi' \subseteq \varphi$ such that $\varphi'$ is unsynthesizable, and for all $\varphi'' \subset \varphi'$, $\varphi''$ is synthesizable.

Problem 1.

Given an unsynthesizable formula $\varphi$, return a minimal unsynthesizable core $\varphi' \subseteq \varphi$.

IV Related Work

Robotics researchers have recently considered the problem of revising specifications that cannot be satisfied by a given robot system [Fainekos11, KimFainekos12a, KimFainekos12b]. The work in [Fainekos11] addressed the problem of revising unsatisfiable LTL specifications. The author defines a partial order on LTL formulas, along with the notion of a valid relaxation of an LTL specification, which informally corresponds to the set of formulas "greater than" formulas in the specification according to this partial order. Formula relaxation for unreachable states is accomplished by recursively removing all positive occurrences of unreachable propositions. Specifications with logical inconsistencies are revised by augmenting the synchronous product of the robot and environment specifications with previously disallowed transitions as needed to achieve the goal state. In [KimFainekos12a, KimFainekos12b], the authors present exact and approximate algorithms for finding minimal revisions of specification automata, removing the minimum number of constraints from the unsatisfiable specification. They too encode the problem as an instance of Boolean satisfiability, and solve it using efficient SAT solvers. However, the work presented in this paper differs in its objective, which is to provide feedback on existing specifications, not to rewrite them. Moreover, unlike the above approaches, this work deals with reactive specifications.

Although explaining unachievable behaviors has only recently been studied in the context of robotics, there has been considerable prior work on unsatisfiability and unrealizability of LTL in the formal methods literature, and the problem of identifying small causes of failure has been studied from several perspectives. For unsatisfiable LTL formulas, the authors of [Schuppan09] suggest a number of notions of unsatisfiable cores, tied to the corresponding method of extraction. These include definitions based on the syntactic structure of the formula parse tree, subsets of conjuncts in various conjunctive normal form translations of the formula, resolution proofs from bounded model-checking (BMC), and tableaux constructions. The authors of [Beer12] employ a formal definition of causality to explain counterexamples provided by model-checkers on unsatisfiable LTL formulas; the advantage of this method is the flexibility of defining an appropriate causal model.

The technique of extracting an unsatisfiable core from a BMC resolution proof is well established in the Boolean satisfiability (SAT) and SAT Modulo Theories (SMT) literature (e.g., [SATCores, SMTCores]). A similar technique was used in [Shlyakhter03] for debugging declarative specifications. In that work, the abstract syntax tree (AST) of an inconsistent specification was translated to CNF, an unsatisfiable core was extracted from the CNF, and the result was mapped back to the relevant parts of the AST. The approach in [Shlyakhter03] only generalizes to specification languages that are reducible to SAT, a set which does not include LTL; this paper presents a similar approach, using SAT solvers to identify unsatisfiable cores for LTL.

The authors of [Cimatti07] also attempted to generalize the idea of unsatisfiable cores to the case of temporal logic using SAT-based bounded model checkers. Temporal atoms of the original LTL specification were associated with activation variables, which were then used to augment the formulas used by a SAT-based bounded model checker. The result, in the case of an unsatisfiable LTL formula, was a subset of the activation variables corresponding to the atoms that cannot be satisfied simultaneously. The approach presented here for unsatisfiability is very similar, in that the SAT formulas used to determine the core are exactly those that would be used for bounded model checking. However, a major difference is that this work does not use activation variables in order to identify conjuncts in the core, but maintains a mapping from the original formula to clauses in the SAT instance.

In the context of unrealizability, the authors of [Cimatti08] propose definitions for helpful assumptions and guarantees, and compute minimal explanations of unrealizability (i.e., unrealizable cores) by iteratively expelling unhelpful constraints. Their algorithm assumes an external realizability checker, which is treated as a black box, and performs iterated realizability tests. This work will draw on the same iterative realizability approach in Section VII. The authors of [Konighofer10] use model-based diagnosis to remove not only guarantees but also irrelevant output signals from the specification. These output signals are those that can be set arbitrarily without affecting the unrealizability of the specification. Model-based diagnoses provide more information than a single unrealizable core, but require the computation of many unrealizable cores. In [Konighofer10], this is accomplished using techniques similar to those in [Cimatti08], which in turn require many realizability checks. The main advantage of the work presented here is that it reduces the number of computationally expensive realizability checks required for most specifications, as detailed in Sections V and VI.

To identify and eliminate the source of unrealizability, some works like [Li11, Chatterjee08] provide a minimal set of additional environment assumptions that, if added, would make the specification realizable; this is accomplished in [Chatterjee08] using efficient analysis of turn-based probabilistic games, and in [Li11] by mining the environment counterstrategy. On the other hand, the work presented in this paper takes the environment assumptions as fixed, and the goal is to compute a minimal subset of the robot guarantees that is unrealizable. Seen from another perspective, this work presumes that the assumptions accurately capture the specification designer’s understanding of the robot’s environment, and provides the source of failure in the specified guarantees.

V Unsatisfiable Cores via SAT

This section describes how unsatisfiable components of the robot specification are further analyzed to narrow the cause of unsatisfiability, for both deadlock and livelock, using Boolean satisfiability testing. Extending these techniques to the environment assumptions is straightforward.

The Boolean satisfiability problem (SAT) is the problem of determining whether there exists a truth assignment to a set of propositions that satisfies a given Boolean formula. A Boolean formula in Conjunctive Normal Form (CNF) is one that has been rewritten as a conjunction of clauses, each of which is a disjunction of literals, where a literal is a Boolean proposition or its negation. For a Boolean formula in CNF, an unsatisfiable core is defined as a subset of CNF clauses whose conjunction is still unsatisfiable; a minimal unsatisfiable core is one such that removing any clause results in a satisfiable formula.

V-A Unsatisfiable Cores for Deadlock

Given a depth $k$ and an LTL safety formula $\bigwedge_i \Box B_i$ over propositions $\pi = \mathcal{X} \cup \mathcal{Y}$, the propositional formula $\Phi_k$ is constructed over the timed variables $\pi_t = \{p_t \mid p \in \pi\}$ for $0 \le t \le k$, where $p_t$ represents the value of $p$ at time step $t$, as:

$$\Phi_k = \bigwedge_{t=0}^{k-1} \bigwedge_i B_i[\pi \mapsto \pi_t,\; \bigcirc\pi \mapsto \pi_{t+1}],$$

where $B[\pi \mapsto \pi_t]$ represents $B$ with all occurrences of each subformula $p \in \pi$ replaced with $p_t$. This formula is called the depth-$k$ unrolling of the safety formula. Consider Specification 2: its initial condition and safety conjuncts unroll in exactly this manner, with $p_t$ a propositional variable representing the value of $p$ at time step $t$. Given the depth-$k$ unrolling of the robot safety formula $\varphi_t^s$, define $\Psi_k = \varphi_i^s[\pi \mapsto \pi_0] \wedge \Phi_k$.
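The depth-$k$ unrolling described above can be sketched by stamping out timed copies of each safety conjunct. The clause representation below, with propositions tagged by a 0/1 "next" offset, is illustrative and not the encoding used by any particular tool:

```python
def unroll(init_clauses, safety_clauses, k):
    """Depth-k unrolling sketch. A literal is (name, offset, polarity),
    where offset 0 refers to the current step and offset 1 to the
    "next" step. The timed variable (name, t) stands for the value of
    proposition `name` at step t, so a safety clause stamped out at
    step t refers to steps t and t+1."""
    timed = []
    # initial conditions constrain only the step-0 variables
    for clause in init_clauses:
        timed.append([((name, 0), pol) for (name, _off, pol) in clause])
    # each safety clause is stamped out once per step t in 0..k-1
    for t in range(k):
        for clause in safety_clauses:
            timed.append([((name, t + off), pol)
                          for (name, off, pol) in clause])
    return timed

# "Start in the kitchen" plus "always, next step not kitchen"
init = [[("kitchen", 0, True)]]
safety = [[("kitchen", 1, False)]]
print(unroll(init, safety, 2))
# -> [[(('kitchen', 0), True)], [(('kitchen', 1), False)], [(('kitchen', 2), False)]]
```

Each timed clause retains a pointer (here, its position) to the conjunct that generated it, which is what later allows a CNF-level core to be mapped back to specification sentences.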

In the case of deadlock, which can be identified as in [ICRA12, TRO12], a series of Boolean formulas is produced by incrementally unrolling the robot safety formula, and the satisfiability of the resulting formula is checked at each depth. To perform this check, the formula is first converted into CNF, so that it can be provided as input to an off-the-shelf SAT solver; this work uses PicoSAT [PicoSAT]. Converting a Boolean formula to CNF can, in the worst case, cause an exponential increase in the size of the formula. However, since the unrolling consists of conjunctions of simple Boolean formulas, the resulting CNFs are small in practice. If the formula is found unsatisfiable at depth $k$, there is no valid sequence of actions that follows the robot safety condition for $k$ time steps starting from the initial condition. In this case, the SAT solver returns a minimal unsatisfiable subformula, in the form of a subset of the CNF clauses.

When translating the Boolean formula to CNF, a mapping is maintained between the portions of the original specification and the clauses they generate. This enables the CNF minimal unsatisfiable core to be traced back to the corresponding safety conjuncts and initial conditions in the specification.

  1. Start in the kitchen (part of $\varphi_i^s$)

  2. Avoid the kitchen (parts of $\varphi_i^s$ and $\varphi_t^s$)

  3. Always activate your camera (part of $\varphi_t^s$)

Specification 2 Core-finding example – unsatisfiable deadlock
Fig. 2: Map of hospital workspace (“c” is the closet)

Specification 2 is a deadlocked specification, referring to a robot operating in the workspace depicted in Fig. 2. The described method begins at the initial state described by lines 1 and 2, and unrolls the safety formula as described above. Note that the unrolling is already unsatisfiable at depth 0, and the core is given by the clauses requiring the robot to be in the kitchen and not in the kitchen at time step 0, which in turn map back to lines 1 and 2. This is because the two statements combined require the robot to both start in the kitchen and not start in the kitchen. Section VIII contains another example demonstrating unsatisfiable core-finding for deadlock.

V-B Unsatisfiable Cores for Livelock

In the case of livelock, a similar unrolling procedure can be applied to determine the core set of clauses that prevent a goal from being fulfilled. A propositional formula is generated by unrolling the robot safety from the initial state for a pre-determined number of time steps, with an additional clause representing the unsatisfied liveness condition being required to hold at the final time step for that depth. Consider the livelocked Specification 3.

  1. Start in the kitchen (part of $\varphi_i^s$)

  2. Avoid hall_w (parts of $\varphi_i^s$ and $\varphi_t^s$)

  3. Always activate your camera (part of $\varphi_t^s$)

  4. Patrol r3 (part of $\varphi_g^s$)

Specification 3 Core-finding example – unsatisfiable livelock

Unrolling the robot safety to depth $k$, with the added clause $\mathsf{r3}_k$ for the liveness at depth $k$, results in a propositional formula conjoining the unrolled initial conditions, the topology constraints $\varphi^{trans}_t$ on the robot at each time $t$, the remaining unrolled safety conjuncts, and $\mathsf{r3}_k$. This formula is unsatisfiable for any $k$.

V-C Unroll Depth

In the case of deadlock, the propositional formula can be built for increasingly larger depths until it is found to be unsatisfiable for some depth $k$; by the definition of deadlock, there will always exist such a $k$. This gives us a sound and complete method for determining the depth to which the safety formula must be unrolled in order to identify an unsatisfiable core for deadlock. For livelock, on the other hand, determining the shortest depth that will produce a meaningful core is a much bigger challenge. Consider the above example. For unroll depths less than or equal to 3, the unsatisfiable core returned will include just the environment topology, since the robot cannot reach r3 from the kitchen in 3 steps or fewer, even if it is allowed into hall_w; however, this is not a meaningful core.

For larger depths, the core is given by the subformula comprising the unrolled initial condition, the hall_w avoidance constraints and the goal clause, which maps back to specification sentences (1), (2) and (4) in Specification 3. This is because the robot cannot reach r3 without passing through hall_w. Section VIII contains another example demonstrating unsatisfiable core-finding for livelock.

The depth required to produce a meaningful core for unsatisfiability is bounded above by the number of distinct states that the robot can be in, i.e., the number of possible truth assignments to all the input and output propositions, $2^{|\mathcal{X} \cup \mathcal{Y}|}$. However, efficiently determining the shortest depth that will produce a meaningful core remains a future research challenge; for the purpose of this work, a fixed depth of 15 time steps was used for the examples presented, unless otherwise indicated.

Algorithm 1 summarizes the core-finding procedure described in this section. The module SAT_SOLVER takes as input a Boolean formula in CNF form and returns a minimal unsatisfiable core (MUS). MAP_BACK maps the returned CNF clauses to portions of the original specification; in LTLMoP, this mapping returns sentences in structured English or natural language.

1:function UNSAT_BMC($\varphi$, reason)
2:     MUS $\leftarrow \emptyset$
3:     if reason == deadlock then
4:          for $k \leftarrow 0$ to max_depth do
5:               MUS $\leftarrow$ SAT_SOLVER($\Psi_k$)
6:               if MUS $\neq \emptyset$ then return MAP_BACK(MUS)
7:     else
8:          MUS $\leftarrow$ SAT_SOLVER($\Psi^g_{\text{max\_depth}}$)
9:     return MAP_BACK(MUS)
Algorithm 1 Unsatisfiable Cores via SAT solving

V-D Interactive Exploration of Unrealizable Tasks

If the specification is unrealizable rather than unsatisfiable, the above techniques do not apply directly to identify a core. This is because if the specification is satisfiable but unrealizable, there exist sequences of truth assignments to the input variables that allow the system requirements to be met. Therefore, in order to produce an unsatisfiable Boolean formula, all sequences of truth assignments to the input variables that satisfy the environment assumptions must be considered. This requires one depth-$k$ Boolean unrolling for each possible length-$k$ sequence of inputs, where each unrolling encodes a distinct sequence of inputs in the unrolled Boolean formula. In the worst case, the number of depth-$k$ Boolean formulas that must be generated before an unsatisfiable formula is found grows exponentially in $k$.

However, unsatisfiable cores do enable a useful enhancement to an interactive visualization of the environment counterstrategy. Since succinctly summarizing the cause of an unrealizable specification is often challenging even for humans, one approach to communicating this cause in a user-friendly manner is through an interactive game (shown in Fig. 3). The tool illustrates environment behaviors that will cause the robot to fail, by letting the user play as the robot against an adversarial environment. At each discrete time step, the user is presented with the current goal to pursue and the current state of the environment. They are then able to respond by changing the location of the robot and the status of its actuators. Examples of this tool in action are given in [ICRA12, TRO12].

An initial version of this tool simply prevented the user from making moves that were disallowed by the specification. However, by using the above core-finding technique, a specific explanation can be given about the part of the original specification that would be violated by the attempted invalid move. This is achieved by finding the unsatisfiable core of a single-step satisfiability problem constructed over the user’s current state, the desired next state, and all of the robot’s specified safety conditions.
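This single-step check can be sketched as follows: each safety conjunct is paired with the specification sentence it came from, and evaluating the conjuncts over the pair (current state, attempted next state) yields the sentences to display. The state and conjunct representations below are illustrative and not LTLMoP's actual interface:

```python
def explain_invalid_move(current, attempted, safety_conjuncts):
    """current/attempted: dicts mapping proposition names to booleans.
    safety_conjuncts: (sentence, predicate) pairs, where the predicate
    takes (current, next) assignments and reports whether the conjunct
    holds over that single step. Returns the violated sentences."""
    return [sentence for sentence, holds in safety_conjuncts
            if not holds(current, attempted)]

# Illustrative conjuncts in the spirit of Specification 4:
conjuncts = [
    ("Avoid the kitchen.", lambda cur, nxt: not nxt["kitchen"]),
    ("Always activate your camera.", lambda cur, nxt: nxt["camera"]),
]
cur = {"kitchen": False, "camera": True}
nxt = {"kitchen": True, "camera": True}  # user tries to enter the kitchen
print(explain_invalid_move(cur, nxt, conjuncts))  # -> ['Avoid the kitchen.']
```

In the actual tool the violated conjuncts come from the minimal core of the one-step SAT instance rather than from direct evaluation, so only a minimal explanation, not every violated constraint, is shown to the user.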

Consider Specification 4, which first appeared in [RSS13]. The robot is instructed to follow a human partner (Line 1) through the workspace depicted in Fig. 2. This means that the robot should always eventually be in any room that the human visits. Additionally, the robot has been banned from entering the kitchen in Line 2. We discover that the robot cannot achieve its goal of following the human if the human enters the kitchen. This conflict is presented to the user as depicted in Fig. 3: the environment sets its state to represent the target's being in the kitchen, and then, when the user attempts to enter the kitchen, the tool explains that this move is in conflict with Line 2.

  1. Follow me.

  2. Avoid the kitchen.

Specification 4 Example of unrealizability
Fig. 3: Screenshot of interactive visualization tool for Specification 4. The user is prevented from following the target into the kitchen in the next step (denoted by the blacked out region) due to the portion of the specification displayed.

Vi Unrealizable Cores via SAT

As described in Section V-D, the extension of the SAT-based core-finding techniques described in Section V to unrealizable specifications requires examining the exact environment input sequences that cause the failure. Considering all possible environment input sequences is not feasible; fortunately, the environment counterstrategy sometimes provides us with exactly those input sequences that cause unsynthesizability.

Consider a counterstrategy for the given specification. It admits the following characterizations of deadlock and livelock:

  • Deadlock There exists a state in the counterstrategy such that there is a truth assignment to the inputs for which no truth assignment to the outputs satisfies the robot transition relation. Formally,

  • Livelock There exists a set of states in the counterstrategy such that the robot is trapped in this set no matter what it does, and there is some robot liveness goal that is not satisfied by any state in the set. Formally,

    Here the notation indicates that the truth assignment given by a state does not satisfy the corresponding propositional formula. Note that, in the case of livelock, there is always such a set of states in the counterstrategy.
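The deadlock condition above can be checked directly on a counterstrategy by enumerating robot outputs for each input the environment can play. The sketch below is illustrative only; the state encoding and the `safety_rel` transition predicate are assumptions, not the paper's implementation:

```python
from itertools import product

# Illustrative sketch: a counterstrategy state is deadlocked if some
# environment input admits no robot output satisfying the safety
# transition relation.

def is_deadlocked(state, inputs, output_vars, safety_rel):
    """inputs: list of environment input assignments (dicts) the
    counterstrategy can play from `state`; safety_rel(state, inp, out)
    returns True iff the combined next step is allowed."""
    for inp in inputs:
        ok = any(
            safety_rel(state, inp, dict(zip(output_vars, vals)))
            for vals in product([False, True], repeat=len(output_vars))
        )
        if not ok:
            return True   # this input leaves the robot no valid move
    return False

# Hypothetical conflict in the style of Specification 5: when a person
# is sensed, the robot must both stay where it is and not be in r5.
def safety_rel(state, inp, out):
    if inp["person"]:
        return state["r5"] == out["r5"] and not out["r5"]
    return True

print(is_deadlocked({"r5": True}, [{"person": True}], ["r5"], safety_rel))
# -> True
```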

Vi-a Unrealizable Cores for Deadlock

Consider Specification 5 on the workspace in Fig. 1. The robot starts in with the camera on (1). The safety conditions specify that the robot should not pass through the region marked if it senses a person (2). In addition, the robot must stay in place if it senses a person (3). Finally, the robot should always activate its camera (4). Here, the environment can force the robot into deadlock by activating the “person” sensor () when the robot is in , because there is then no way the robot can fulfil both (2) and (3).

  1. Robot starts in r5 with camera ():

  2. If you are sensing person then do not r5 ():

  3. If you are sensing person then stay there ():

  4. Always activate your camera ():

Specification 5 Core-finding example – deadlock

The environment counterstrategy is as follows:

  • ,

  • , ,

The state is deadlocked because, given the input in the next time step, safety conditions 2 and 3 conflict and the robot has no valid move. Note that this means the environment strategy does not include any transition out of this state in which the environment does not activate the “person” sensor.

For a state q, its propositional representation is defined as:

In the example above, .

As before, let represent the value of at time step , and . For example, in Specification 5, and .

Given LTL specification , such that , construct a propositional formula over as follows:

Intuitively, this formula represents the satisfaction of the robot safety condition in the next step from state , with the additional restriction that the input variables be bound to the values provided by in the next time step.

In the above case,

where is a formula over representing the topological constraints on the robot motion at time (i.e. which rooms it can move to at time given where it is at time , and mutual exclusion between rooms).

Note that if q is a deadlocked state, then by definition this formula is unsatisfiable, since there is no valid assignment to the robot propositions in the next time step starting from q. A SAT solver can now be used to find a minimal unsatisfiable subformula, as in Section V.

In the above example, the SAT solver finds the core of as the subformula

This is because the two statements combined require the robot to both stay in r5 and not be in r5 in time step 1. This gives us a core explanation of the deadlock caused in state . Taking the union over the cores for all the deadlocked states provides a concise explanation of how the environment can force the robot into a (not necessarily unique) deadlock situation. Section VIII contains another, more complex example demonstrating unrealizable core-finding for deadlocked specifications.

Vi-B Unrealizable Cores for Livelock

Consider Specification 1 again. If the environment action is to always set , then the safety requirement in 2 enforces that the robot will never activate , because it is explicitly forbidden from doing so when sensing a person. This is livelock because the robot can continue to move between and . The environment counterstrategy is as follows:

  • Additionally, for all .

  • .

  • (since there is only one goal).

Vi-B1 Countertraces

One of the main sources of complexity in analyzing an environment counterstrategy is that it may depend on the behavior of the system. However, it is sometimes possible to extract a single trace of inputs such that no output trace fulfils the specification. Following [Konig09], we call such a trace a countertrace. A countertrace does not always exist, but when it does it simplifies our analysis. Computing countertraces is expensive [Konig09], but it is possible to identify whether a given counterstrategy is a countertrace by checking that all paths from an initial state follow the same sequence of environment inputs. For the analysis that follows, we assume that the counterstrategy is a countertrace; we later discuss how to analyze counterstrategies that are not. Note that the counterstrategy for Specification 1 above is a countertrace.
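One simple way to perform this check, sketched below under an assumed graph encoding of the counterstrategy, is to verify that all states reachable at the same depth from the initial state carry the same input valuation:

```python
# Illustrative sketch of the countertrace check: every path from the
# initial state must see the same input sequence, i.e. all states at
# the same depth must agree on their input labels.

def is_countertrace(init, successors, inputs_of, max_depth):
    """successors(q) -> iterable of next states; inputs_of(q) -> the
    (hashable) input valuation labelling state q."""
    frontier = {init}
    for _ in range(max_depth):
        if len({inputs_of(q) for q in frontier}) > 1:
            return False   # two depth-equal states disagree on inputs
        frontier = {s for q in frontier for s in successors(q)}
        if not frontier:
            break
    return True

# Hypothetical 2-state counterstrategy that always plays person=True:
graph = {"q0": ["q1"], "q1": ["q0"]}
labels = {"q0": ("person", True), "q1": ("person", True)}
print(is_countertrace("q0", graph.__getitem__, labels.__getitem__, 10))
# -> True
```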

In the case of livelock, we know there exists a set of states in the counterstrategy that traps the robot, locking it away from the goal. Without loss of generality, this set consists of (possibly overlapping) cycles of states. In specifications of the form considered in this work, robot goals are of the form for , where each is a propositional formula over . Suppose the algorithm in [TRO12] identifies goal as the one responsible for livelock. Let be the set of all states in that prevent goal , and let be the set of maximal -preventing cycles in , i.e. cycles that are not contained in any other cycle in (modulo state repetition). Let and be cycles, and define if and there is some offset index in such that all of is found in starting at , i.e. for all . This expresses that is a strict sub-cycle of . Formally,

In Specification 1, there is only one goal, . is a maximal -preventing cycle.

Given an initial state, a depth k, and an LTL safety formula, we construct a propositional formula as:

This formula is called the depth-k unrolling of the safety formula from the initial state, and represents the tree of length-k truth-assignment sequences that satisfy the formula, starting from that state. Note that a depth-k unrolling governs k+1 time steps, because each conjunct governs the current as well as the next time step. In the example,

Given a cycle of states and a depth k, construct a propositional formula in which each variable represents the value of each input in the corresponding state of the cycle, as:

This formula is called the depth-k environment-unrolling of the cycle, and represents the sequence of inputs seen when following the cycle for k time steps. In the example, the depth- environment unrolling of is .

Now, given an LTL safety specification, a goal, a maximal goal-preventing cycle, and an unrolling depth k, construct a propositional formula as:

Intuitively, this formula expresses the requirement that the goal be fulfilled after some depth-k unrolling of the safety formula starting from the given state, given the input sequence provided by the cycle (note that this input sequence extends to the final time step of the safety-formula unrolling). This is an unsatisfiable propositional formula, and can be used to determine the core set of clauses that prevent the goal from being fulfilled. Taking the union of cores over all maximal preventing cycles gives a concise explanation of the ways in which the environment can prevent the robot from fulfilling the goal. This step makes use of the fact that the counterstrategy is a countertrace, since otherwise the reason for unrealizability might involve an interplay between two input sequences.

In the above example,

In the case of livelock, the choice of unroll depth determines the quality of the core returned. Recall that for deadlock, the propositional formula is built over just one step, since that step is already known to conflict with the robot transition relation and to make the formula unsatisfiable. The unsatisfiable core of that formula is a meaningful unrealizable core because it provides the immediate reason for the deadlock. For livelock, on the other hand, the shortest depth to which a cycle must be unrolled to produce a meaningful core is not obvious.

In the above example, for unroll depths less than or equal to 8, the unsatisfiable core returned will include just the environment topology, since the robot cannot reach the goal from the start in 8 steps or fewer, even if it is allowed into ; however, this is not a meaningful core. Unrolling to depth 9 or greater returns the expected subformula that includes . Automatically determining the shortest depth that will produce a meaningful core remains a research challenge, but a good heuristic is to use the maximum distance between two states in the environment counterstrategy (i.e. the diameter of the graph representing the counterstrategy, or the sum of the diameters of its connected components).
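The diameter heuristic mentioned above can be computed with breadth-first search from every state; the adjacency-list encoding of the counterstrategy graph below is an assumption for illustration:

```python
from collections import deque

# Sketch of the suggested unroll-depth heuristic: the diameter of the
# counterstrategy graph, computed by BFS from every state.

def bfs_eccentricity(graph, src):
    """Longest shortest path from src to any reachable state."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

def diameter(graph):
    """Longest shortest path over all pairs of (reachable) states."""
    return max(bfs_eccentricity(graph, s) for s in graph)

# Hypothetical 4-state counterstrategy cycle: diameter 3.
cycle = {0: [1], 1: [2], 2: [3], 3: [0]}
print(diameter(cycle))   # -> 3
```

For a counterstrategy with several connected components, summing the per-component diameters as the text suggests gives a more conservative depth.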

Vi-B2 General Counterstrategies

It may be tempting to try to use a similar approach to find unrealizable cores from counterstrategies that are not countertraces. However, since the input sequences can now depend on the system behavior, it becomes necessary to encode all possible paths through the counterstrategy. Even so, because the counterstrategy only contains paths that are valid for the system specification, we found that the returned core often contained these added constraints on the system rather than the original specification. We therefore used the approach described in Section VII to extract a minimal core in the case of livelock when the counterstrategy was not a countertrace.

1:function UNREAL_BMC(, reason)
2:     MUS
4:     if reason==deadlock then
5:          for  such that  do
6:               MUS MUS SAT_SOLVER())           
7:     else
8:           livelocked goal
9:          if IS_COUNTERTRACE(, then
10:                FIND_PREVENTING_CYCLES(, )
11:               for  do
12:                    MUS MUS SAT_SOLVER()                
13:          else
14:                CHOOSE_ONE()
15:               MUS UNREAL_ITERATE()                return MAP_BACK(MUS)
Algorithm 2 Unrealizable Cores via SAT solving

Algorithm 2 summarizes the core-finding procedure described in this section. The module IS_COUNTERTRACE checks that all paths from a single initial state in the counterstrategy follow the same sequence of inputs. FIND_PREVENTING_CYCLES finds the cycles in the counterstrategy that prevent the identified goal. UNREAL_ITERATE is described in Section VII.

Note that, since unsatisfiability is a special case of unrealizability (in which not just some, but any environment can prevent the robot from fulfilling its specification), the above analysis also applies to unsatisfiable specifications. Moreover, a countertrace can always be extracted for an unsatisfiable specification, and so the approach in Section VI-B1 applies for livelock. However, the analysis presented in Section V is more efficient for unsatisfiability, as it does not require explicit-state extraction of the environment counterstrategy.

Vii Unsynthesizable Cores via Iterated Synthesis

As discussed in Sections V and VI, the SAT-based approach to identifying an unsynthesizable core for the case of livelock presents the challenge of determining a depth to which the LTL formula must be instantiated with propositions. This minimal depth is often tied to the number of regions in the robot workspace, and is usually easy to estimate. However, no efficient, sound method is known for determining this minimal unrolling depth. In addition, if the counterstrategy is not a countertrace, the techniques in Section VI do not readily apply to extracting an unrealizable core. In both these cases, the SAT-based analysis described in Section VI may return a core that does not capture the real cause of failure, causing confusion when presented to the user. Fortunately, alternative, more computationally expensive techniques can be used to return a minimal core in these cases.

This section presents one such alternative approach to determining the minimal subset of the robot safety conjuncts that conflicts with a specified goal. The approach is based on iterated realizability checks, removing conjuncts from the safety formula and testing realizability of the remaining specification. While this approach is guaranteed to yield a minimal unsynthesizable core, it requires repeated calls to a realizability oracle, which may be expensive for specifications with a large number of conjuncts.

Recall from Section II the syntactic form of the LTL specifications considered in this work. In particular, the formula is a conjunction , where each is a Boolean formula over , and represents an event that should occur infinitely often when the robot controller is executed. Similarly, represents the robot safety constraints; it is a conjunction where each is a Boolean formula over and .

In the case of livelock, the initial specification analysis presented in [TRO12] provides a specific liveness condition that causes the unsynthesizability (i.e. either unsatisfiability or unrealizability), and can also identify one of the initial states from which the robot cannot fulfil . However, the specific conjuncts of the safety formula that prevent this liveness are not identified. The key idea behind using realizability tests to determine an unrealizable or unsatisfiable core of safety formulas is as follows. If on removing a safety conjunct from the robot formula, the specification remains unsynthesizable, then there exists an unsynthesizable core that does not include that conjunct (since the remaining conjuncts are sufficient for unsynthesizability). Therefore, in order to identify an unsynthesizable core, it is sufficient to iterate through the conjuncts of , removing safety conditions one at a time and checking for realizability.

Algorithm 3 presents the formal procedure for performing these iterated tests, given the index of the liveness condition that causes the unsynthesizability. Denote by the formula for indices in a set . Let denote set at iteration . The set is initialized to the indices of all safety conjuncts (line 2). In each iteration of the loop in lines 3-7, the next conjunct is omitted from the robot transition relation, and realizability of the remaining specification is checked (line 4). If removing a conjunct causes an otherwise unsynthesizable specification to become synthesizable, the conjunct is retained for the next iteration (line 5); otherwise it is permanently deleted from the set of conjuncts (lines 6-7). After iterating through all the conjuncts, the final set determines a minimal unsynthesizable core that prevents the liveness condition. Note that the core is not unique: it depends both on the order of iteration over the safety conjuncts and on the initial state returned by the synthesis algorithm.
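The iteration can be sketched as follows, with the expensive synthesis step abstracted behind an assumed `realizable` oracle (the toy oracle here is purely illustrative, not a real synthesis check):

```python
# Sketch of the iterated-synthesis core extraction: iterate over the
# safety conjuncts, permanently dropping any conjunct whose removal
# leaves the specification unsynthesizable.

def iterated_core(conjuncts, realizable):
    """conjuncts: list of safety-conjunct labels;
    realizable(subset) -> bool is the (expensive) synthesis oracle."""
    kept = list(conjuncts)
    for c in list(kept):
        trial = [d for d in kept if d != c]
        if not realizable(trial):
            kept = trial   # still unsynthesizable without c: drop it
    return kept            # minimal unsynthesizable core

# Toy oracle for illustration: the spec is unrealizable exactly when
# both conflicting conjuncts "stay" and "avoid" are present.
def toy_oracle(subset):
    return not ({"stay", "avoid"} <= set(subset))

print(iterated_core(["stay", "avoid", "camera"], toy_oracle))
# -> ['stay', 'avoid']
```

Each iteration makes one oracle call, so the cost is linear in the number of safety conjuncts times the cost of a realizability check.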

Theorem VII.1.

Algorithm 3 yields a minimal unsynthesizable core of .

Proof: Each iteration of the loop in Algorithm 3, lines 3-7, maintains the invariant that is unsynthesizable; thus, is unsynthesizable when the loop is exited.

Moreover, removing any of the safety conjuncts in yields a synthesizable specification. To see this, assume for a contradiction that there exists such that is unsynthesizable. Clearly, , so by definition of , . Therefore, if is synthesizable, then must be synthesizable, since any implementation that satisfies also satisfies . Since was not removed from on the iteration, is synthesizable. It follows that must be synthesizable, a contradiction.

1:function UNREAL_ITERATE()
3:     for  to  do
4:          if