Counterstrategy-Guided Refinement of GR(1) Temporal Logic Specifications
Abstract
The reactive synthesis problem is to find a finite-state controller that satisfies a given temporal-logic specification regardless of how its environment behaves. Developing a formal specification is a challenging and tedious task, and initial specifications are often unrealizable. In many cases, the source of unrealizability is the lack of adequate assumptions on the environment of the system. In this paper, we consider the problem of automatically correcting an unrealizable specification given in the generalized reactivity (1) fragment of linear temporal logic by adding assumptions on the environment. When a temporal-logic specification is unrealizable, the synthesis algorithm computes a counterstrategy as a witness. Our algorithm then analyzes this counterstrategy and synthesizes a set of candidate environment assumptions that can be used to remove the counterstrategy from the environment’s possible behaviors. We demonstrate the applicability of our approach with several case studies.
I Introduction
Automatically synthesizing a system from a high-level specification is an ambitious goal in the design of reactive systems. The synthesis problem is to find a system that satisfies the specification regardless of how its environment behaves. Therefore, it can be seen as a two-player game between the environment and the system: the environment attempts to violate the specification while the system tries to satisfy it. A specification is unsatisfiable if there is no input and output trace that satisfies it. A specification is unrealizable if no system can implement it; that is, the environment can behave in such a way that, no matter how the system reacts, the specification is violated. In this paper we consider specifications which are satisfiable but unrealizable, and we address the problem of strengthening the constraints over the environment by adding assumptions in order to achieve realizability.
Writing a correct and complete formal specification which conforms to the (informal) design intent is a hard and tedious task [3, 4]. Initial specifications are often incomplete and unrealizable. Unrealizability of the specification is often due to inadequate environment assumptions; in other words, the assumptions about the environment are too weak, leading to an environment with so many behaviors that the system cannot satisfy the specification against all of them. Usually only a rough and incomplete model of the environment is available in the design phase; thus it is easy to miss assumptions on the environment side. We would like to automatically find such missing assumptions that can be added to the specification to make it realizable. Computed assumptions can be used to give the user insight into the specification. They also provide ways to correct the specification. In the context of compositional synthesis [5, 8], assumptions derived from the components' specifications can be used to construct interface rules between the components.
An unrealizable specification cannot be executed or simulated, which makes debugging it a challenging task. Counterstrategies are used to explain the reason for unrealizability of linear temporal logic (LTL) specifications [4]. Intuitively, a counterstrategy defines how the environment can react to the outputs of the system in order to force the system to violate the specification. Könighofer et al. [4] show how such a counterstrategy can be computed for an unrealizable LTL specification. The requirement analysis tool RATSY [1] implements their method for a fragment of LTL known as generalized reactivity (1) (GR(1)). We also consider GR(1) specifications in this paper because the realizability and synthesis problems for GR(1) specifications can be solved efficiently in polynomial time and GR(1) is expressive enough to be used for interesting real-world problems [2, 11].
Counterstrategies can still be difficult for the user to understand, especially for larger systems. We propose a debugging approach which uses counterstrategies to strengthen the assumptions on the environment in order to make the specification realizable. For a given unrealizable specification, our algorithm analyzes the counterstrategy and synthesizes a set of candidate assumptions in the GR(1) form (see Section II). Any of the computed candidate assumptions, if added to the specification, restricts the environment in such a way that it can no longer behave according to the counterstrategy without violating its assumptions. Thus we say the counterstrategy is ruled out from the environment’s possible behaviors by adding the candidate assumption to the specification.
The main flow for finding the missing environment assumptions is as follows. If the specification is unrealizable, a counterstrategy is computed for it. A set of patterns is then synthesized by processing an abstraction of the counterstrategy. Patterns are LTL formulas of a special form that define the structure of the candidate assumptions. We ask the user to specify a set of variables to be used for generating candidates for each pattern; she can specify the variables which she thinks contribute to unrealizability or are underspecified. The variables are used along with the patterns to generate the candidate assumptions. Any of the synthesized assumptions can be added to the specification to rule out the counterstrategy. The user can choose an assumption from the candidates interactively, or our algorithm can automatically search for one. The chosen assumption is then added to the specification and the process is repeated with the new specification.
The contributions of this paper are as follows: We propose algorithms to synthesize environment assumptions by directly processing the counterstrategies. We give a counterstrategy guided synthesis approach that finds the missing environment assumptions. The suggested refinement can be validated by the user to ensure compatibility with her design intent and can be added to the specification to make it realizable. We demonstrate our approach with examples and case studies.
The problem of correcting an unrealizable LTL specification by constructing an additional environment assumption is studied by Chatterjee et al. in [3]. They give an algorithm for computing an assumption which only constrains the environment and is as weak as possible. Their approach is more general than ours, as they consider general LTL specifications. However, the synthesized assumption is a Büchi automaton which might not be translatable to an LTL formula and can be difficult for the user to understand (for an example, see Fig. in [3]). Moreover, the resulting specification is not necessarily compatible with the design intent [6]. Our approach generates a set of assumptions in GR(1) form that can easily be validated by the user and used to make the specification realizable.
The closest work to ours is that of Li et al. [6], who propose a template-based specification mining approach to find additional assumptions on the environment that can be used to rule out the counterstrategy. A template is an LTL formula with at least one placeholder that can be instantiated by a Boolean variable or its negation. Templates are used to impose a particular structure on the form of generated candidates and are engineered by the user based on her knowledge of the environment. A set of candidate assumptions is generated by enumerating all possible instantiations of the defined templates. For a given counterstrategy, their method finds an assumption from the set of candidates which is satisfied by the counterstrategy. By adding the negation of such an assumption to the specification, they remove the behavior described by the counterstrategy from the environment. Similar to their work, we consider unrealizable GR(1) specifications and achieve realizability by adding environment assumptions to the specification. But, unlike them, we work directly on the counterstrategies to synthesize a set of candidate assumptions that can be used to rule out the counterstrategy. Similar to templates, patterns impose structure on the assumptions. However, our method synthesizes the patterns based on the counterstrategy, and the user does not need to manipulate them. We only require the user to specify a subset of variables to be used in the search for the missing assumptions; she can specify a subset that she thinks leads to the unrealizability. In our method, the maximum number of generated assumptions for a given counterstrategy is independent of which subset of variables is considered. In contrast, increasing the size of the chosen subset of variables in [6] results in exponential growth in the number of candidates, while only a small number of them might hold over all runs of the counterstrategy (unlike our method).
Moreover, we compute the weakest environment assumptions for the considered structure and the given subset of variables. Our work takes an initial step toward bridging the gap between [3] and [6]: our method synthesizes environment assumptions that are simple formulas, making them easy to understand and practical, while also constraining the environment as weakly as possible within their structure. We refer the reader to [6] for a survey of related work.
II Preliminaries
Linear temporal logic (LTL) is a formal specification language with two kinds of operators: logical connectives (negation (¬), disjunction (∨), conjunction (∧) and implication (→)) and temporal modal operators (next (◯), always (□), eventually (◇) and until (U)). Given a set AP of atomic propositions, an LTL formula is defined inductively as follows: (i) any atomic proposition p ∈ AP is an LTL formula; (ii) if φ and ψ are LTL formulas, then ¬φ, φ ∨ ψ, ◯φ and φ U ψ are also LTL formulas. Other operators can be defined using the following rules: φ ∧ ψ = ¬(¬φ ∨ ¬ψ), φ → ψ = ¬φ ∨ ψ, ◇φ = True U φ, and □φ = ¬◇¬φ. An LTL formula is interpreted over infinite words w ∈ (2^AP)^ω. For an LTL formula φ, we define its language L(φ) to be the set of infinite words that satisfy φ, i.e., L(φ) = {w ∈ (2^AP)^ω | w ⊨ φ}.
A finite transition system (FTS) is a tuple T = (Q, Q0, δ) where Q is a finite set of states, Q0 ⊆ Q is the set of initial states and δ ⊆ Q × Q is the transition relation. An execution or run of an FTS is an infinite sequence of states σ = q0 q1 q2 ⋯ where q0 ∈ Q0 and, for any i ≥ 0, qi ∈ Q and (qi, qi+1) ∈ δ. The language of an FTS T is defined as the set L(T) of (infinite) words generated by the runs of T. We often consider a finite transition system as a directed graph, with a natural bijection between the states and transitions of the FTS and the vertices and edges of the graph, respectively. Formally, for an FTS T, we define the graph G_T = (V, E) where each v ∈ V corresponds to a unique state q ∈ Q, and (v, v′) ∈ E if and only if (q, q′) ∈ δ.
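As a concrete illustration (the encoding below is our own, not the paper's), an FTS can be stored as a state set, an initial-state set, and a transition relation, and viewed as the directed graph G_T via an adjacency map:

```python
from collections import defaultdict

def fts_to_graph(states, init, trans):
    """View an FTS (Q, Q0, delta) as an adjacency-map digraph G_T."""
    assert init <= states and all(q in states and r in states for q, r in trans)
    adj = defaultdict(set)
    for q, r in trans:
        adj[q].add(r)
    return adj

def finite_prefixes_are_runs(adj, init, prefix):
    """Check that a finite state sequence is the prefix of some run:
    it starts in an initial state and follows transitions."""
    return prefix[0] in init and all(
        b in adj[a] for a, b in zip(prefix, prefix[1:]))

# A 3-state toy FTS: q0 -> q1 -> q2 -> q1 (every run is q0 q1 (q2 q1)^omega)
states, init = {"q0", "q1", "q2"}, {"q0"}
trans = {("q0", "q1"), ("q1", "q2"), ("q2", "q1")}
adj = fts_to_graph(states, init, trans)
```

In this toy FTS every run has the form q0 q1 (q2 q1)^ω, and `finite_prefixes_are_runs` accepts exactly the finite prefixes of such runs.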
Let AP be a set of atomic propositions, partitioned into input, I, and output, O, propositions. A Moore transducer is a tuple M = (S, s0, Σin, Σout, τ, γ), where S is the set of states, s0 ∈ S is the initial state, Σin = 2^I is the input alphabet, Σout = 2^O is the output alphabet, τ: S × Σin → S is the transition function and γ: S → Σout is the state output function. A Mealy transducer is similar, except that the output function is γ: S × Σin → Σout. For an infinite word w ∈ Σin^ω, a run of M is the infinite sequence σ ∈ S^ω such that σ0 = s0 and σi+1 = τ(σi, wi) for all i ≥ 0. The run σ on input word w produces an infinite word M(w) ∈ (2^AP)^ω such that M(w)i = γ(σi) ∪ wi for all i ≥ 0. The language of M is the set L(M) = {M(w) | w ∈ Σin^ω} of infinite words generated by runs of M.
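The run and output definitions can be sketched in a few lines of Python (a toy encoding of our own; note that for a counterstrategy the input letters range over the system variables and the outputs are environment valuations):

```python
def moore_run(s0, tau, gamma, inputs):
    """Run a Moore transducer on a finite input prefix.
    tau: (state, input-letter) -> state; gamma: state -> output-letter.
    Returns the visited states and the produced word gamma(s_i) | w_i."""
    states, word = [s0], []
    for x in inputs:
        word.append(gamma(states[-1]) | x)   # Moore: output depends on state only
        states.append(tau[(states[-1], frozenset(x))])
    return states, word

# Hypothetical 2-state transducer: input over a system variable "g",
# output over an environment variable "c"; seeing "g" toggles the state.
tau = {("A", frozenset({"g"})): "B", ("A", frozenset()): "A",
       ("B", frozenset({"g"})): "A", ("B", frozenset()): "B"}
gamma = lambda s: {"c"} if s == "A" else set()
states, word = moore_run("A", tau, gamma, [set(), {"g"}, set()])
```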
An LTL formula φ is satisfiable if there exists an infinite word w such that w ⊨ φ. A Moore (Mealy) transducer M satisfies an LTL formula φ, written M ⊨ φ, if L(M) ⊆ L(φ). An LTL formula φ is Moore (Mealy) realizable if there exists a Moore (Mealy, respectively) transducer M such that M ⊨ φ. The realizability problem asks whether there exists such a transducer for a given LTL specification φ.
A two-player deterministic game graph is a tuple G = (Q, Q0, E) where Q can be partitioned into two disjoint sets Q1 and Q2. Q1 and Q2 are the sets of states of players 1 and 2, respectively. Q0 ⊆ Q is the set of initial states. E ⊆ Q × Q is the set of directed edges. Players take turns to play the game. At each step, if the current state belongs to Q1, player 1 chooses the next state; otherwise player 2 makes a move. A play of the game graph is an infinite sequence of states σ = q0 q1 q2 ⋯ such that q0 ∈ Q0 and (qi, qi+1) ∈ E for all i ≥ 0. We denote the set of all plays by Π. A strategy for player ℓ ∈ {1, 2} is a function πℓ that chooses the next state given a finite sequence of states which ends at a player ℓ state. A strategy is memoryless if it is a function of the current state of the play, i.e., πℓ: Qℓ → Q. Given strategies π1 and π2 for players 1 and 2 and a state q ∈ Q, the outcome is the play starting at q and evolved according to π1 and π2. Formally, outcome(q, π1, π2) = q0 q1 q2 ⋯ where q0 = q, and for all i ≥ 0 we have qi+1 = π1(q0 ⋯ qi) if qi ∈ Q1 and qi+1 = π2(q0 ⋯ qi) if qi ∈ Q2. An objective Φ ⊆ Π for a player is a set of plays. A strategy π1 for player 1 is winning for some state q if for every strategy π2 of player 2, we have outcome(q, π1, π2) ∈ Φ.
Given an LTL formula φ over AP and a partitioning of AP into I and O, the synthesis problem is to find a Mealy transducer M with input alphabet 2^I and output alphabet 2^O that satisfies φ. This problem can be reduced to computing winning strategies in game graphs: a deterministic game graph G and an objective Φ can be constructed such that φ is realizable if and only if the system (one of the players) has a memoryless winning strategy from an initial state in G [10]. Every memoryless winning strategy of the system can be represented by a Mealy transducer that satisfies φ. If the specification is unrealizable, then the environment (the other player) has a winning strategy. A counterstrategy for the synthesis problem is a strategy for the environment that can falsify the specification, no matter how the system plays. Formally, a counterstrategy can be represented by a Moore transducer Mc that satisfies ¬φ, where 2^O and 2^I are the input and output alphabets of Mc, generated by the system and the environment, respectively.
In this paper, we consider specifications of the form

  φ = φ^e → φ^s   (1)

where φ^α for α ∈ {e, s} can be written as a conjunction φ^α = φ_i^α ∧ φ_t^α ∧ φ_g^α of the following parts:

φ_i^α: A Boolean formula over I if α = e and over I ∪ O otherwise, characterizing the initial state.

φ_t^α: An LTL formula of the form ⋀_k □ψ_k. Each subformula □ψ_k is either characterizing an invariant, in which case ψ_k is a Boolean formula over I ∪ O, or characterizing a transition relation, in which case ψ_k is a Boolean formula over expressions v and ◯v′ where v ∈ I ∪ O and v′ ∈ I if α = e and v′ ∈ I ∪ O if α = s.

φ_g^α: A formula of the form ⋀_k □◇ψ_k characterizing fairness/liveness, where each ψ_k is a Boolean formula over I ∪ O.
For specifications of the form (1), known as GR(1) formulas, Piterman et al. [9] show that the synthesis problem can be solved in polynomial time. Intuitively, in (1), φ^e characterizes the assumptions on the environment and φ^s characterizes the correct behavior (guarantees) of the system. Any correct implementation of the specification is guaranteed to satisfy φ^s, provided that the environment satisfies φ^e.
For a given unrealizable specification φ = φ^e → φ^s, we define a refinement ψ as a conjunction of a collection of environment assumptions in the GR(1) form such that (φ^e ∧ ψ) → φ^s is realizable. Intuitively, this means that adding the assumptions to the specification results in a new specification which is realizable. We say a refinement ψ is consistent with the specification if φ^e ∧ ψ is satisfiable. Note that if φ^e ∧ ψ is not satisfiable, i.e., φ^e ∧ ψ = False, the specification (φ^e ∧ ψ) → φ^s is trivially realizable [6], but obviously ψ is not an interesting refinement.
III Problem Statement and Overview
III-A Problem Statement
Given a specification φ = φ^e → φ^s in the GR(1) form which is satisfiable but unrealizable, find a refinement ψ as a conjunction of environment assumptions such that φ^e ∧ ψ is satisfiable and (φ^e ∧ ψ) → φ^s is realizable.
III-B Overview of the Method
We now give a high-level view of our method. Specification refinements are constructed in two phases. First, given a counterstrategy’s Moore machine Mc, we build an abstraction of it, a finite transition system T. The abstraction preserves the structure of the counterstrategy (its states and transitions) while removing the input and output details. The algorithm processes T and synthesizes a set of LTL formulas of special forms, called patterns, which hold over all runs of T. Our algorithm then uses these patterns, along with a subset of variables specified by the user, to generate a set of LTL formulas which hold over all runs of Mc. We ask the user to specify a subset of variables which she thinks contribute to the unrealizability of the specification. This set can also be used to guide the algorithm to generate formulas over variables which are underspecified. Using a smaller subset of variables leads to simpler formulas that are easier for the user to understand.
The complements of the generated formulas form the set of candidate assumptions that can be used to rule out the counterstrategy from the environment’s possible behaviors. We remove the candidates which are not consistent with the specification in order to avoid the trivial solution in which the conjunction of environment assumptions is unsatisfiable.
Any assumption from the set of generated candidates can be used to rule out the counterstrategy. Our approach does a breadth-first search over the candidates. If adding one of the candidates makes the specification realizable, the algorithm returns that candidate as a solution. Otherwise, at each iteration, the process is repeated for each of the new specifications resulting from adding a candidate. The depth of the search is controlled by the user. The search continues until either a consistent refinement is found or the algorithm cannot find one within the specified depth (hence the search algorithm is sound, but not complete).
Example 1.
Consider the following example, borrowed from [6], with environment variables I = {r, c} and system variables O = {g, v}. Here r, c, g and v stand for request, clear, grant and valid signals, respectively. We start with no assumption, that is, we only assume φ^e = True. Consider the following system guarantees: φ1 = □(r → ◯◇g), φ2 = □((c ∨ g) → ◯¬g), φ3 = □(c → ¬v), and φ4 = □◇(g ∧ v). Let φ^s be the conjunction of these formulas. φ1 requires that every request must eventually be granted, starting from the next step, by setting signal g to high. φ2 says that if the clear or grant signal is high, then grant must be low at the next step. φ3 says that if clear is high, then the valid signal must be low. Finally, φ4 says that the system must issue a valid grant infinitely often.
The specification is unrealizable: a simple counterstrategy is for the environment to keep r and c high at all times. Then, by φ2, g needs to be always low and thus φ4 cannot be satisfied by any system. RATSY produces this counterstrategy, which is then fed to our algorithm. An example candidate found by our algorithm to rule out this counterstrategy is the assumption □◇¬c. Adding it to the specification prevents the environment from always keeping c high, so the environment cannot use the counterstrategy anymore. However, the specification is still unrealizable. RATSY produces the counterstrategy shown in Figure 1(a) for the new specification. The new counterstrategy keeps r high at all times. The value of c is changed depending on the state of the counterstrategy, as shown in Figure 1(a). The top block in each state of Figure 1(a) is the name of the state. RATSY produces additional information, shown in the middle blocks, on how the counterstrategy forces the system to violate the specification. We do not use this information in the current version of the algorithm.
The following formulas are examples of consistent refinements produced by our algorithm for this specification:
The assumptions in the first two refinements imply □¬c, that is, adding them requires the environment to keep the clear signal always low. Although adding these assumptions makes the specification realizable, it may not conform to the design intent. The third refinement does not restrict c in this way: it only assumes that the environment sets the clear signal to low infinitely often and that, when the request signal is low, the clear signal is low at the same and the next step.
IV Specification Refinement
Algorithm 1 finds environment assumptions that can be added to the specification to make it realizable. It takes as input the initial unrealizable specification, the subsets of variables to be used in the generated assumptions, and the maximum depth of the search. It outputs a consistent refinement, if it can find one within the specified depth.
For an unrealizable specification, a counterstrategy is computed as a Moore transducer using the techniques in [4, 1]. The counterstrategy is then fed to the GeneratePatterns procedure, which constructs a set of patterns and is detailed in Section IV-C. Procedure GenerateCandidates, described in Section IV-A, produces a set of candidate assumptions in the form of GR(1) formulas using the patterns and the set of variables. Algorithm 1 runs a breadth-first search to find a consistent refinement. Each node of the search tree is a generated candidate assumption, while the root of the tree corresponds to the assumption True (i.e., no assumption). Each path of the search tree starting from the root corresponds to a candidate refinement: the conjunction of the candidate assumptions of the nodes visited along the path. When a node is visited during the search, its corresponding candidate refinement is added to the specification. If the new specification is consistent and realizable, the refinement is returned by the algorithm. Otherwise, if the depth of the current node is less than the maximum specified, a set of candidate assumptions is generated based on the counterstrategy for the new specification and the search tree expands.
In Algorithm 1, the queue CandidatesQ keeps the candidate refinements which are found during the search. At each iteration, a candidate refinement ψ is removed from the head of the queue. The procedure Consistent checks if ψ is consistent with the specification. If it is, the algorithm checks the realizability of the new specification using the procedure Realizable [2, 1]. If the new specification is realizable, ψ is returned as a suggested refinement. Otherwise, if the depth of the search for reaching the candidate refinement is less than the maximum, a new set of candidate assumptions is generated using the counterstrategy computed for the new specification. Algorithm 1 keeps track of the number of counterstrategies produced along the path to reach a candidate refinement in order to compute its depth (Depth(ψ)). Each new candidate assumption results in a new candidate refinement which is added to the end of the queue for future processing. The algorithm terminates when either a consistent refinement is found or there are no more candidates in the queue to be processed.
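This search loop can be sketched in Python as follows (a minimal rendering of our own; `consistent`, `realizable` and `gen_candidates` are stubs standing in for the real LTL procedures, which the paper delegates to tools such as RATSY, and the toy instance is hypothetical):

```python
from collections import deque

def refine(spec, consistent, realizable, gen_candidates, max_depth):
    """Breadth-first search for a consistent refinement (sketch of Algorithm 1).
    A refinement is a tuple of candidate assumptions; its depth is the number
    of counterstrategies analyzed along the path.  Returns a refinement or
    None if none is found within max_depth (sound, but not complete)."""
    queue = deque([()])                       # root: the empty refinement "True"
    while queue:
        psi = queue.popleft()
        if not consistent(spec, psi):
            continue                          # would trivialize the specification
        if realizable(spec, psi):
            return psi
        if len(psi) < max_depth:              # expand via the new counterstrategy
            for cand in gen_candidates(spec, psi):
                queue.append(psi + (cand,))
    return None

# Toy instance: the spec becomes realizable once both "a" and "b" are assumed,
# and the candidate "bad" is inconsistent with the specification.
consistent = lambda spec, psi: "bad" not in psi
realizable = lambda spec, psi: {"a", "b"} <= set(psi)
gen_candidates = lambda spec, psi: ["a", "b", "bad"]
result = refine("phi", consistent, realizable, gen_candidates, max_depth=2)
```

On this toy instance the search returns the refinement `("a", "b")`.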
IV-A Generating Candidates
Consider the Moore transducer Mc = (S, s0, Σin, Σout, τ, γ) of a counterstrategy, where Σin = 2^O and Σout = 2^I, and O and I are the sets of the system and environment variables, respectively. Given Mc, we construct a finite transition system T = (Q, Q0, δ) which preserves the structure of Mc while removing all details about its input and output. More formally, for each state s ∈ S, T has a corresponding state q_s ∈ Q, and q_{s0} ∈ Q0 is the state corresponding to s0. There exists a transition (q_s, q_{s′}) ∈ δ if and only if there exists x ∈ Σin such that s′ = τ(s, x). It is easy to see that any run of Mc corresponds to a run of T and vice versa.
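The abstraction step admits a direct implementation; the sketch below uses our own encoding of the transducer's structure (states, initial state, input alphabet, and transition map):

```python
def abstract(S, s0, Sigma_in, tau):
    """Abstract a Moore transducer's structure into an FTS (Q, Q0, delta):
    keep states and transitions, drop all input/output labels."""
    Q = set(S)
    Q0 = {s0}
    delta = {(s, tau[(s, x)]) for s in S for x in Sigma_in if (s, x) in tau}
    return Q, Q0, delta

# Hypothetical two-state counterstrategy skeleton: s0 loops on input 0 and
# moves to s1 on input 1; s1 loops on either input.
S, s0 = {"s0", "s1"}, "s0"
Sigma_in = [0, 1]
tau = {("s0", 0): "s0", ("s0", 1): "s1", ("s1", 0): "s1", ("s1", 1): "s1"}
Q, Q0, delta = abstract(S, s0, Sigma_in, tau)
```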
By processing the abstract FTS T of the counterstrategy, we synthesize a set of patterns, which are LTL formulas of the form ◇□p, ◇p, and ◇(p ∧ ◯q) that hold over all runs of T. Each of p and q is a disjunction of a subset of the states of T, i.e., a formula ⋁_{q_i ∈ C} q_i where C ⊆ Q. The complements of these formulas, □◇¬p (liveness), □¬p (safety), and □(p → ◯¬q) (transition), respectively, are of the desired GR(1) form and provide the structure for the candidate assumptions that can be used to rule out the counterstrategy. Note that, similar to [6], we do not synthesize assumptions characterizing the initial state because they are easy to specify in practice; besides, it is simple to discover them from the counterstrategy. Patterns are generated using simple graph search algorithms explained in Section IV-C.
Example 2.
Figure 1(b) shows the abstract FTS T for the counterstrategy of Figure 1(a). For this FTS our algorithm produces a set of patterns of each of the three forms, and any run of T satisfies all of them. For example, one eventually pattern states that any run of T will eventually visit a particular state; one transition pattern states that any run of T will eventually visit a certain state and then another state at the next step; and one eventually-always pattern states that any run of T will eventually reach and stay in a certain set of states.
As we mentioned previously, each state q_s of the FTS T corresponds to a state s of the Moore transducer Mc of the counterstrategy. Also recall that each run of T corresponds to a run of Mc. Mc, at any state s, outputs the propositional formula γ(s), which is a valuation over all environment variables. Formally, for any state s of Mc, we have γ(s) = ⋀_i l_i where each l_i is a literal over an environment variable v_i ∈ I. We call γ(s) the state predicate of s and also of q_s. We replace the states in the patterns with their corresponding state predicates to get a set of formulas which hold over all runs of the counterstrategy.
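The substitution is purely syntactic. The sketch below (our own helper; the state names, variables and predicates are hypothetical) represents a state predicate as a variable-to-Boolean map, and also shows the projection onto a chosen variable subset that is used when the user restricts the variables:

```python
def state_predicate(literals):
    """A state predicate: a total valuation of the environment variables,
    given as a dict var -> bool (e.g. {"r": True, "c": False})."""
    return dict(literals)

def instantiate(pattern_states, gamma, keep=None):
    """Turn a disjunction of states into a disjunction of (optionally
    projected) state predicates.  keep: the variable subset to retain."""
    disjuncts = []
    for q in pattern_states:
        pred = {v: b for v, b in gamma[q].items() if keep is None or v in keep}
        disjuncts.append(pred)
    return disjuncts

gamma = {"s1": state_predicate({"r": True, "c": True}),
         "s2": state_predicate({"r": True, "c": False})}
# Pattern "eventually (s1 or s2)" becomes "eventually (r&c or r&~c)";
# projected onto {"r"}, both disjuncts collapse to "r".
full = instantiate(["s1", "s2"], gamma)
proj = instantiate(["s1", "s2"], gamma, keep={"r"})
```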
Example 3.
Consider the counterstrategy shown in Figure 1(a). The state predicates are and , where and are the states of . Using the patterns obtained in Example 2 and replacing the states with their corresponding state predicates, we obtain LTL formulas which hold over all runs of . For example, the pattern gives us the formula . Replacing with in the pattern leads to . Similarly, the pattern gives .
The structure of the state predicates and patterns is such that any subset of the environment variables can be used along with the patterns to generate candidates, and the resulting formulas still hold over all runs of the counterstrategy. Algorithm 1 takes as input a set of subsets of the environment variables, one for each pattern form, specifying which variables should be used when generating the candidate assumptions from the patterns of the form ◇□p, ◇p, and ◇(p ∧ ◯q).
Example 4.
Assume that the designer specifies , , and . Then the pattern results in . From we obtain , and leads to . Note that using a smaller subset of variables leads to simpler formulas (and sometimes trivial as in ). However, this simplicity may result in assumptions which put more constraints on the environment as we will show later.
The complements of the generated formulas form the set of candidate assumptions that can be used to rule out the counterstrategy; for instance, the formulas computed based on the user input in Example 4 yield candidate assumptions in this way. Note that there might be repeated formulas among the generated candidates; we remove the repetitions in order to prevent the process from checking the same assumption more than once. We also use some techniques to simplify the synthesized assumptions (see the Appendix).
IV-B Removing the Restrictive Formulas
Given two non-equivalent formulas φ1 and φ2, we say φ1 is stronger than φ2 if φ1 → φ2 holds. Assume φ1 and φ2 are two formulas that hold over all runs of the counterstrategy computed for the specification, and that φ1 → φ2. Note that ¬φ2 → ¬φ1 also holds, that is, ¬φ1 is a weaker assumption compared to ¬φ2. Adding either ¬φ1 or ¬φ2 to the environment assumptions rules out the counterstrategy. However, adding the stronger assumption ¬φ2 restricts the environment more than adding ¬φ1 does; that is, ¬φ2 puts more constraints on the environment compared to ¬φ1.
As an example, consider the counterstrategy shown in Figure 1(a). Both and hold over all runs of . Moreover, . Consider the corresponding assumptions and . Adding restricts the environment more than adding . requires the environment to keep the signal always low, whereas in case of , the environment is free to assign additional values to its variables. It only prevents the environment from setting to high and to low at the same time.
We construct patterns which are the strongest formulas of their specified form that hold over all runs of the counterstrategy. Therefore, the generated candidate assumptions are the weakest formulas that can be constructed for the given structure and the user-specified subset of variables.
IV-C Synthesizing Patterns
In this section we show how certain types of patterns can be synthesized using the abstract FTS of the counterstrategy. A pattern ψ is an LTL formula which holds over all runs of the FTS T, i.e., L(T) ⊆ L(ψ). We are interested in patterns of the form ◇□p, ◇p, and ◇(p ∧ ◯q). The complements of these patterns are of the GR(1) form and, after replacing states with their corresponding state predicates, yield candidate assumptions for removing the counterstrategy.
IV-C.1 Patterns of the Form ◇p
For an FTS T, we define a configuration C as a subset of the states of T. We say a configuration C is an eventually configuration if for any run σ of T there exist a state q ∈ C and a time step i ≥ 0 such that σi = q. That is, any run of T eventually visits a state from configuration C. It follows that if C is an eventually configuration for T, then T ⊨ ◇(⋁_{q ∈ C} q). We say an eventually configuration C is minimal if there exists no C′ ⊂ C such that C′ is an eventually configuration. Note that removing any state from a minimal eventually configuration leads to a configuration which is not an eventually configuration.
Algorithm 2 constructs eventually patterns which correspond to the minimal eventually configurations of T with size less than or equal to a given bound. Larger configurations lead to larger formulas which are hard for the user to parse. The user can specify the value of this bound; heuristics can also be used to set it automatically based on properties of T, e.g., the maximum outdegree of the vertices in the corresponding directed graph G_T, where the outdegree of a vertex is the number of its outgoing edges. In Algorithm 2, a set keeps the minimal eventually configurations discovered so far; it is initialized with the set of initial states Q0, and the set of patterns is initialized accordingly with ◇(⋁_{q ∈ Q0} q), which holds over all runs of T since every run starts in Q0. The algorithm then checks each possible configuration C with size up to the bound, in nondecreasing order of size, to find minimal eventually configurations. Without loss of generality we assume that all states in T have outgoing edges (a transition from any state with no outgoing transition can be added to a dummy state with a self loop; patterns which include the dummy state are removed). At each iteration, a configuration C is chosen. Algorithm 2 checks if there is an already discovered minimal eventually configuration C′ with C′ ⊆ C; if such a C′ exists, C is not minimal. Otherwise, the algorithm checks if C is an eventually configuration by first removing all the states in C, together with their incoming and outgoing transitions, from T to obtain another FTS T′. Now, if there is an infinite run from an initial state in T′, then there is a run in T that does not visit any state in C. Otherwise, C is a minimal eventually configuration and is added to the set. The corresponding formula ◇(⋁_{q ∈ C} q) is also added to the set of eventually patterns. Note that checking if there exists an infinite run in T′ can be done by considering T′ as a graph and checking if there is a cycle reachable from an initial state, which takes time linear in the number of states and transitions of T′; the overall cost of the algorithm is this linear-time check per candidate configuration.
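A brute-force Python sketch of this procedure (our own rendering, not the paper's pseudocode): a configuration C is an eventually configuration iff, after deleting C, no cycle is reachable from an initial state; minimality is enforced by enumerating configurations in order of increasing size and skipping supersets of configurations already found.

```python
from itertools import combinations

def is_eventually_config(states, init, trans, C):
    """C is an eventually configuration iff, after deleting C, no infinite
    run remains, i.e. no cycle is reachable from a surviving initial state."""
    keep = states - C
    adj = {q: {r for (p, r) in trans if p == q and r in keep} for q in keep}
    # States reachable from the surviving initial states.
    reach, stack = set(), [q for q in init if q in keep]
    while stack:
        q = stack.pop()
        if q not in reach:
            reach.add(q)
            stack.extend(adj[q])
    # Repeatedly prune sinks; a nonempty remainder means a reachable cycle.
    live, changed = set(reach), True
    while changed:
        changed = False
        for q in list(live):
            if not (adj[q] & live):
                live.discard(q)
                changed = True
    return not live

def minimal_eventually_configs(states, init, trans, size_bound):
    minimal = []
    for k in range(1, size_bound + 1):
        for C in map(frozenset, combinations(sorted(states), k)):
            if any(M <= C for M in minimal):
                continue                 # a subset already works: not minimal
            if is_eventually_config(states, init, trans, C):
                minimal.append(C)
    return minimal

# q0 -> q1 -> q2 -> q1: every run eventually visits q0, q1 and q2.
states, init = {"q0", "q1", "q2"}, {"q0"}
trans = {("q0", "q1"), ("q1", "q2"), ("q2", "q1")}
mins = minimal_eventually_configs(states, init, trans, size_bound=2)
```

On the toy FTS each singleton is a minimal eventually configuration, so all three are returned and every size-2 configuration is skipped as non-minimal.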
Example 5.
Consider the FTS shown in Figure 2. Algorithm 2 starts at the initial configuration and generates the corresponding formula. Several of the small configurations are not eventually configurations: for each of them there exists a run which never visits the configuration. Others are minimal eventually configurations: removing such a configuration leads to an FTS with no infinite run (no cycle is reachable from an initial state in the corresponding graph). One configuration is an eventually configuration but is not minimal. Thus Algorithm 2 returns the set of patterns corresponding to the minimal eventually configurations it discovers.
IV-C.2 Patterns of the Form ◇□p
To compute formulas of the form ◇□p which hold over all runs of the FTS T of the counterstrategy, we view T as a graph and separate its states into two groups: Qc, the set of states that are part of a cycle in T (including self-loops), and Q \ Qc. Without loss of generality we assume that any state is reachable from an initial state; therefore, any state q ∈ Qc belongs to a reachable strongly connected component of T. Also, for any strongly connected component S of T contained in Qc, there exists a run σ of T which reaches the states in S and keeps cycling there forever. Hence, the formula ◇□(⋁_{q ∈ S} q) holds over the run σ, and ⋁_{q ∈ S} q is the minimal formula of disjunctive form whose eventually-always holds over all runs that can reach the strongly connected component S: by removing any of the states from S, one can find a run which reaches S and visits the removed state infinitely often, falsifying the resulting formula. Therefore, eventually, for any execution of T, the state of the system will always be in one of the states of Qc, and the formula ◇□(⋁_{q ∈ Qc} q) is the minimal formula of the form eventually-always which holds over all runs of T.
To partition the states of the FTS into cycle states and the rest, we use Tarjan's algorithm for computing the strongly connected components of the graph. The algorithm is therefore of linear time complexity in the number of states and transitions of the FTS.
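As an illustration, the partition can be sketched with Tarjan's algorithm: a state lies on a cycle exactly when its strongly connected component has more than one state, or the state has a self-loop. The recursive sketch below assumes the same dictionary-based graph encoding as before and is not the paper's implementation.

```python
import sys

def cycle_states(states, edges):
    """Return the states on some cycle: members of a strongly connected
    component with more than one state, or states with a self-loop.
    Uses Tarjan's SCC algorithm (recursive, so suited to small graphs)."""
    sys.setrecursionlimit(10000)
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in edges.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of a strongly connected component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in states:
        if v not in index:
            strongconnect(v)
    return {v for comp in sccs for v in comp
            if len(comp) > 1 or v in edges.get(v, ())}
```

The union of the returned states gives the disjunction used in the generated eventually-always pattern.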
Example 6.
Consider the nondeterministic FTS shown in Figure 2. It has three strongly connected components, only two of which contain a cycle. Thus, the pattern whose disjunction ranges over the states of these two components is generated. The generated pattern holds over both possible runs of the system, and removing any of its states would result in a formula which no longer holds over all runs.
IV-C3 Patterns of the Transition Form
To generate candidates of the transition form, first note that such a formula holds only if its eventually counterpart holds. Therefore, a set of eventually patterns is first computed using Algorithm 2. Then, for each eventually pattern, a transition pattern is generated whose right-hand side is the set of states that can be reached in one step from the configuration specified by the pattern. The most expensive part of this procedure is computing the eventually patterns; hence its complexity is the same as that of Algorithm 2. Due to lack of space, the algorithms for computing the eventually-always and transition patterns are given in the Appendix.
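The right-hand side of each transition pattern is simply the set of one-step successors of an eventually configuration; a minimal sketch follows (same assumed graph encoding as before, with hypothetical function names).

```python
def one_step_successors(config, edges):
    """States reachable in exactly one transition from any state in config."""
    return {v for u in config for v in edges.get(u, ())}

def transition_patterns(eventually_configs, edges):
    """Pair each eventually configuration with its one-step successor set;
    each pair corresponds to one generated transition pattern."""
    return [(set(c), one_step_successors(c, edges))
            for c in eventually_configs]
```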
The procedures described above for producing patterns lead to assumptions which include only environment variables; these suffice for resolving unrealizability in our case studies. In general, however, GR(1) assumptions can also include system variables. The procedures can easily be extended to this general case (see the Appendix).
The following theorem states that the procedures described in this section generate the strongest patterns of the specified forms; its proof can be found in the Appendix. Removing the weaker patterns leads to shorter formulas which are easier for the user to understand, and it decreases the number of candidates generated at each step. More importantly, it leads to weaker assumptions on the environment that can be used to rule out the counterstrategy. If the restriction imposed by any of these candidates is not enough to make the specification realizable, the method analyzes the counterstrategy computed for the new specification to find assumptions that restrict the environment further. In this way, the counterstrategies guide the method toward assumptions that achieve realizability.
Theorem 1.
For any formula of one of the forms considered above which holds over all runs of a given FTS, an equivalent or stronger formula of the same form is synthesized by the algorithms described in Section IV-C.
V Case studies
We now present two case studies. We use RATSY [1] to generate counterstrategies and the Cadence SMV model checker [7] to check the consistency of the generated candidates. In our experiments, we set the search depth in Algorithm 1 to two, and the configuration-size bound in Algorithm 2 to the maximum outdegree of the vertices of the counterstrategy's abstract directed graph. We slightly modify Algorithm 1 to find all possible refinements within the specified depth.
V-A Lift Controller
We borrow the lift controller example from [2]. Consider a lift controller serving three floors. The lift has three buttons, one per floor, represented by Boolean variables controlled by the environment, and its location is represented by three Boolean floor variables controlled by the system. The lift may be requested on each floor by pressing the corresponding button. We assume that once a request is made, it cannot be withdrawn; once a request is fulfilled, it is removed; and initially there are no requests. Formally, the specification of the environment encodes these assumptions as an initial condition together with a transition constraint for each floor.
The lift initially starts on the first floor. We expect the lift to be on exactly one of the floors at each step, and it can move at most one floor at each time step. We want the system to eventually fulfill all requests. Formally, the specification of the system consists of guarantees stating, respectively, that the lift starts on the first floor, that it is on exactly one floor at each step, that it moves at most one floor per step, that every request is eventually fulfilled, and that the lift moves up one floor only if some button is pressed.
The last requirement says that the lift moves up one floor only if some button is pressed. The specification is realizable. Now assume that the designer wants to ensure that all floors are visited infinitely often; thus she adds, for each floor, a guarantee that the corresponding floor variable holds infinitely often. The resulting specification is not realizable: a counterstrategy for the environment is to keep all buttons unpressed forever. We run our algorithms with the set of all environment variables for all assumption forms. The algorithm generates two refinements: a liveness refinement requiring that the environment press some button infinitely often, and a transition refinement requiring the environment to make a request after any inactive turn. The liveness refinement seems more reasonable, and the user can add it to the specification to make it realizable.
Only one counterstrategy is processed during the search for refinements, and three candidate assumptions are generated overall; one of the candidates is inconsistent with the specification and the other two are the refinements above. Thus the search terminates after checking the assumptions generated at the first level. Only a small fraction of the total computation time was spent on generating candidate assumptions from the counterstrategy. Note that to generate the liveness refinement using the template-based method of [6], the user needs to specify a template with three variables, which leads to a large number of candidate assumptions, although only one of them is satisfied by the counterstrategy.
V-B AMBA AHB
ARM’s Advanced Microcontroller Bus Architecture (AMBA) defines the Advanced High-Performance Bus (AHB), an on-chip communication protocol. Multiple masters and slaves can be connected to the bus. A master starts a communication (read or write) with a slave, and the slave responds to the request. Several masters can request the bus at the same time, but the bus can be accessed by only one master at a time. A bus access can be a single transfer or a burst, which consists of multiple transfers. A bus access can be locked, meaning it cannot be interrupted. Access to the bus is controlled by the arbiter. More details of the protocol can be found in [2]. We use the specification given in one of RATSY’s example files (amba02.rat). There are four environment signals:

HBUSREQ[i]: Master i requests access to the bus.

HLOCK[i]: Master i requests a locked access to the bus. This signal is raised in combination with HBUSREQ[i].

HBURST[i]: Type of transfer. Can be SINGLE (a single transfer), BURST4 (a four-transfer burst), or INCR (a burst of unspecified length).

HREADY: Raised if the slave has finished processing the data. The bus owner can change, and transfers can start, only when HREADY is high.
The first three signals are controlled by the masters and the last one by the slaves. The specification of amba02.rat consists of one master and two slaves. For our experiment, we remove the fairness assumption from the specification; the new specification is unrealizable. We run our algorithm with user-chosen sets of variables for the liveness assumptions, the safety assumptions, and the left- and right-hand sides of the transition assumptions, respectively. Our method generates several refinements. One of them, although consistent, includes another refinement as a subformula and is therefore more restrictive; another would force a signal to remain low from the second step on. Among the suggested refinements, the remaining one appears to be the best option. Our method processes only one counterstrategy, with five states, and generates five candidate assumptions to find the first refinement. To find all refinements within depth two, five counterstrategies are processed overall during the search, and fewer than nine candidate assumptions are generated for each. A small fraction of the total computation time was spent on generating candidate assumptions from the counterstrategies.
VI Conclusion and Future Work
We presented a counterstrategy-guided approach for adding environment assumptions to an unrealizable specification in order to achieve realizability. We gave algorithms for synthesizing the weakest assumptions of certain forms (based on “patterns”) that can be used to rule out the counterstrategy.
We chose to apply explicitstate graph search algorithms on the counterstrategy because the available tools for solving games output the counterstrategy as a graph in an explicit form. Symbolic analysis of the counterstrategy may be desirable for scalability, but the key challenge for this is to develop algorithms for solving games that can produce counterexamples in compact symbolic form. Synthesizing symbolic patterns is one of the future directions.
Counterstrategies provide useful information for explaining the reasons for unrealizability. However, there can be multiple ways to rule out a counterstrategy. We plan to investigate how the multiplicity of the candidates generated by our method can be used to synthesize better assumptions. Furthermore, our method asks the user for subsets of variables to be used in generating candidates, and the choice of subsets can significantly affect how quickly the algorithm finds a refinement. Automatically finding good subsets of variables that contribute to the unrealizability is another future direction. Synthesizing environment assumptions for more general settings, and using the method to synthesize interfaces between components in the context of compositional synthesis, are the subject of our current work.
References
 [1] R. Bloem, A. Cimatti, K. Greimel, G. Hofferek, R. Könighofer, M. Roveri, V. Schuppan, and R. Seeber. RATSY: a new requirements analysis tool with synthesis. In CAV 2010, pages 425–429. Springer, 2010.
 [2] R. Bloem, B. Jobstmann, N. Piterman, A. Pnueli, and Y. Sa’ar. Synthesis of reactive (1) designs. Journal of Computer and System Sciences, 78(3):911–938, 2012.
 [3] K. Chatterjee, T. Henzinger, and B. Jobstmann. Environment assumptions for synthesis. In CONCUR 2008, pages 147–161. Springer, 2008.
 [4] R. Könighofer, G. Hofferek, and R. Bloem. Debugging formal specifications using simple counterstrategies. In FMCAD 2009, pages 152–159, 2009.
 [5] O. Kupferman, N. Piterman, and M. Vardi. Safraless compositional synthesis. In CAV 2006, pages 31–44. Springer, 2006.
 [6] W. Li, L. Dworkin, and S. Seshia. Mining assumptions for synthesis. In MEMOCODE 2011, pages 43–50. IEEE, 2011.
 [7] K. McMillan. Cadence SMV. http://www.kenmcmil.com/smv.html.
 [8] N. Ozay, U. Topcu, and R. Murray. Distributed power allocation for vehicle management systems. In CDCECC 2011, pages 4841–4848. IEEE, 2011.
 [9] N. Piterman, A. Pnueli, and Y. Sa’ar. Synthesis of reactive (1) designs. In VMCAI 2006, pages 364–380. Springer, 2006.
 [10] A. Pnueli and R. Rosner. On the synthesis of a reactive module. In POPL 1989, pages 179–190. ACM, 1989.
 [11] T. Wongpiromsarn, U. Topcu, and R. M. Murray. Receding horizon temporal logic planning. IEEE Transactions on Automatic Control, 57(11):2817–2830, 2012.
Appendix A
A-A Simplifying the Generated Candidates
Some simple techniques are used to simplify the candidate assumptions generated for a given counterstrategy. We explain them for a synthesized liveness assumption; the other forms are simplified similarly. Note that the assumption is in conjunctive normal form, i.e., a conjunction of clauses, each of which is a disjunction of literals over Boolean variables; the clauses correspond to the complements of the state predicates in our method. First, if a literal over a Boolean variable occurs with the same polarity in all clauses, it is factored out of the clauses. Second, we scan the formula and remove repeated clauses. Finally, clauses containing a single literal may allow the formula to be simplified further. In the future we plan to find better simplification techniques for more general candidate assumptions. These simplifications are important because one of our goals is to generate formulas which are easy for the user to understand.
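The first two simplifications can be sketched as follows, representing each clause as a set of literal strings (a hypothetical encoding chosen for illustration). The returned pair reads "disjunction of the factored literals, or else the residual CNF", with an empty residual clause standing for false (its clause consisted only of the common literals).

```python
def simplify_cnf(clauses):
    """Deduplicate clauses, then factor out literals common to every clause.

    Returns (common, residual): the formula is equivalent to
    OR(common) v AND over the residual clauses, where an empty residual
    clause denotes false."""
    uniq = []
    for clause in clauses:
        if frozenset(clause) not in map(frozenset, uniq):  # drop repeated clauses
            uniq.append(set(clause))
    common = set.intersection(*uniq) if uniq else set()
    residual = [clause - common for clause in uniq]
    return common, residual
```

For example, (x or a) and (x or b) and (x or a) deduplicates to two clauses and factors into x or (a and b).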
A-B Algorithms
A-C Extending patterns to include system variables
To include system variables, we extend the finite state transition system with labels over transitions, where each label is a proposition over the system variables. Formally, an extended FTS is a tuple whose state components are as before, together with a labeling function which maps each transition to a proposition over system variables, given as a conjunction of literals. The generated patterns are then extended so that the disjunction over the states of a configuration is combined with the labels of the transitions going out of those states.
A-D Proof of Theorem 1
Note that if a configuration is an eventually configuration, then any superset of it is also an eventually configuration. Moreover, the eventually formula corresponding to the subset implies the one corresponding to the superset; that is, the formula corresponding to the smaller configuration is stronger.
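In symbols (using each state as an atomic proposition that holds exactly when the system is in that state, a notational assumption), the monotonicity observation reads:

```latex
C \subseteq C' \;\implies\;
\Big( \mathbf{F}\!\bigvee_{s \in C} s \;\Rightarrow\; \mathbf{F}\!\bigvee_{s \in C'} s \Big)
```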
We use the following lemma in the proof of Theorem 1. Intuitively, it says that any propositional formula over the states of the FTS which holds over some run can be written as a disjunction of a subset of the states.
Lemma 2.
Let