LTLf Synthesis with Fairness and Stability Assumptions
Abstract
In synthesis, assumptions are constraints on the environment that rule out certain environment behaviors. A key observation here is that even if we consider systems with LTLf goals on finite traces, environment assumptions need to be expressed over infinite traces, since accomplishing the agent goals may require an unbounded number of environment actions. To solve synthesis with respect to finite-trace LTLf goals under infinite-trace assumptions, we could reduce the problem to LTL synthesis. Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity (both 2EXPTIME-complete), the algorithms available for LTL synthesis are much more difficult in practice than those for LTLf synthesis. In this work we show that in interesting cases we can avoid such a detour to LTL synthesis and keep the simplicity of LTLf synthesis. Specifically, we develop a BDD-based fixpoint-based technique for handling basic forms of fairness and of stability assumptions. We show, empirically, that this technique performs much better than standard LTL synthesis.
Introduction
In many situations we are interested in expressing properties over an unbounded but finite sequence of successive states. Linear-time Temporal Logic over finite traces (LTLf) and its variants have been thoroughly investigated for doing so. There has been broad research on logical reasoning [13, 25], synthesis [15, 7], and planning [9, 14].
Recently, synthesis under assumptions in LTLf has attracted specific interest [14, 8]. First, planning for LTLf goals can be considered as a form of LTLf synthesis under assumptions, where the assumptions are the dynamics of the environment encoded in the planning domain [23, 8, 1, 2]. More generally, however, assumptions can be arbitrary constraints on the environment that can be exploited by the agent in devising a strategy to fulfill its goal.
Synthesis under assumptions has been extensively studied in LTL, where environment assumptions are expressed as LTL formulas [11, 10, 12, 3, 5]. In fact, LTL formulas asm can be used as assumptions as long as it is guaranteed that the environment is able to behave so as to keep the assumptions true, i.e., the assumptions are environment realizable. Under these circumstances, it is possible to reduce synthesis for an LTL goal φ under assumption asm to standard synthesis for asm → φ. Note that, because of the guarantee of being environment realizable, no agent strategy can win by falsifying asm. See [2] for a discussion.
When we turn to LTLf, a key observation is that even if we consider (finite-trace) LTLf goals for the agent, assumptions need to be expressed considering infinite traces, since accomplishing the agent goals may require an unbounded number of environment actions. So we have an assumption asm expressed in LTL and a goal φ expressed in LTLf. To solve synthesis under assumptions in LTLf, we could translate φ into LTL, getting φ', by applying the translation of LTLf into LTL in [13], and then do LTL synthesis for asm → φ', see e.g. [8].
Unfortunately, while synthesis in LTLf and in LTL have the same worst-case complexity, both being 2EXPTIME-complete [28, 15], the algorithms available for LTL synthesis are much harder in practice than those for LTLf synthesis. In particular, the lack of efficient algorithms for the crucial step of automata determinization is prohibitive for finding scalable implementations [19, 18]. In spite of recent advancements in synthesis, such as reductions to parity games [27], bounded synthesis based on solving iterated safety games [24, 17, 21], or recent techniques based on iterated FOND planning [7], LTL synthesis remains very challenging. In contrast, LTLf synthesis is based on a translation to a Deterministic Finite Automaton (DFA) [30], which can be seen as a game arena where environment and agent make their own moves. On this arena, the agent wins if a simple fixpoint condition (reachability of the DFA accepting states) is satisfied.
Hence, when we introduce assumptions, an important question arises: can we retain the simplicity of LTLf synthesis? In particular, we are thinking of algorithms based on devising some sort of arena and then extracting winning strategies by computing a small number of nested fixpoints (note that the reduction of LTL synthesis to parity games may generate exponentially many nested fixpoints [22]).
We consider here two different basic, but quite significant, forms of assumptions: a basic form of fairness (always eventually α) and a basic form of stability (eventually always α), where in both cases the truth value of α is under the control of the environment, and hence the assumptions are trivially realizable by the environment. Note that, since the goals are LTLf formulas, synthesis under both kinds of assumptions does not fall under known easy forms of synthesis, such as GR(1) formulas [4]. For these kinds of assumptions, we devise a specific algorithm based on using the DFA for the LTLf goal as the arena and then computing 2-nested fixpoint properties over this arena. It should be noted that the kind of nested fixpoint that we compute for fairness is similar to the one in [14], but the "fairness" stated there is different from what we consider in this paper: the "fairness" in [14] is interpreted as all effects happening fairly, so the assumption is hardcoded in the arena itself. Here, instead, we only require that a selected condition happens fairly, and our technique extends to deal with stability assumptions as well. We compare the new algorithm with standard LTL synthesis [27] and show empirically that our algorithm performs significantly better, in the sense of solving more cases in less time. Some proofs are omitted due to lack of space.
Preliminaries
Linear-time Temporal Logic over finite traces (LTLf) has the same syntax as LTL over infinite traces, introduced in [29]. Given a set of propositions P, the syntax of LTLf formulas is defined as φ ::= a | ¬φ | φ ∧ φ | Xφ | φ U φ, where a ∈ P. Every a ∈ P is an atom. A literal is an atom or the negation of an atom. X, for "Next", and U, for "Until", are temporal operators. We make use of the standard Boolean abbreviations, such as ∨ (or) and → (implies), as well as true and false. Additionally, we define the abbreviations "Weak Next" Nφ ≡ ¬X¬φ, "Eventually" Fφ ≡ true U φ, and "Always" Gφ ≡ false R φ, where R is for "Release".
A trace ρ = ρ[0], ρ[1], … is a sequence of propositional interpretations (sets), where ρ[i] ∈ 2^P (i ≥ 0) is the i-th interpretation of ρ, and |ρ| represents the length of ρ. Intuitively, ρ[i] is interpreted as the set of propositions that are true at instant i. Trace ρ is an infinite trace if |ρ| = ∞, which is formally denoted as ρ ∈ (2^P)^ω; otherwise ρ is a finite trace, denoted as ρ ∈ (2^P)^*. LTLf formulas are interpreted over finite, nonempty traces. Given a finite trace ρ and an LTLf formula φ, we inductively define when φ is true on ρ at step i (0 ≤ i < |ρ|), written ρ, i ⊨ φ, as follows:
ρ, i ⊨ a iff a ∈ ρ[i];
ρ, i ⊨ ¬φ iff ρ, i ⊭ φ;
ρ, i ⊨ φ1 ∧ φ2 iff ρ, i ⊨ φ1 and ρ, i ⊨ φ2;
ρ, i ⊨ Xφ iff i + 1 < |ρ| and ρ, i + 1 ⊨ φ;
ρ, i ⊨ φ1 U φ2 iff there exists j such that i ≤ j < |ρ| and ρ, j ⊨ φ2, and for all k, i ≤ k < j, we have ρ, k ⊨ φ1.
An LTLf formula φ is true on ρ, denoted by ρ ⊨ φ, if and only if ρ, 0 ⊨ φ.
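These semantic clauses translate directly into a recursive evaluator. The sketch below is illustrative only (the tuple-based formula encoding and the function names are ours, not from any particular library):

```python
# Minimal LTLf evaluator. Formulas are nested tuples, e.g.
# ("U", ("atom", "a"), ("atom", "b")) encodes "a Until b".
def holds(trace, i, f):
    """True iff trace, i |= f, following the inductive clauses above."""
    op = f[0]
    if op == "atom":                      # atom: a in trace[i]
        return f[1] in trace[i]
    if op == "not":
        return not holds(trace, i, f[1])
    if op == "and":
        return holds(trace, i, f[1]) and holds(trace, i, f[2])
    if op == "X":                         # strong Next: needs a successor instant
        return i + 1 < len(trace) and holds(trace, i + 1, f[1])
    if op == "U":                         # Until, bounded by the end of the trace
        return any(holds(trace, j, f[2]) and
                   all(holds(trace, k, f[1]) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError("unknown operator: " + op)

def models(trace, f):
    """trace |= f iff trace, 0 |= f (the trace must be nonempty)."""
    return holds(trace, 0, f)
```

For instance, "Eventually b" can be expressed as true U b, with true encoded as, e.g., ¬(p ∧ ¬p); note that on a one-instant trace X fails regardless of its argument, since there is no successor instant.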
LTLf synthesis can be viewed as a game of two players, the environment and the agent, contrasting each other. The aim is to synthesize a strategy for the agent such that no matter how the environment behaves, the combined behavior trace of both players satisfies the logical specification expressed in LTLf [15].
Fair and Stable LTLf Synthesis
In this paper, we focus on LTLf synthesis under assumptions of two different basic forms, fairness and stability, which we call in the following fair LTLf synthesis and stable LTLf synthesis, respectively. In such synthesis problems, both players (environment and agent) have Boolean variables under their respective control. Here, we use X to denote the set of environment variables, which are uncontrollable for the agent, and Y the set of agent variables, which are controllable for the agent. X and Y are disjoint.
In general, assumptions are specific forms of constraints.
Definition 1 (Environment Constraint).
An environment constraint α is a Boolean formula over X.
In particular, we define here two different basic, but common forms of assumptions.
Definition 2 (Fairness and Stability Assumptions).
An LTL formula is considered a fairness assumption if it is of the form GF α, and a stability assumption if it is of the form FG α, where in both cases α is an environment constraint.
A fair or stable trace can then be defined in terms of the corresponding assumption (fairness or stability).
Definition 3 (Fair and Stable Traces).
A trace λ is α-fair if λ ⊨ GF α, and α-stable if λ ⊨ FG α.
Intuitively, α holds infinitely often on an α-fair trace, while α eventually holds forever on an α-stable trace. Note that if trace λ is not α-fair, i.e., λ ⊨ FG ¬α, then λ is (¬α)-stable. Similarly, if λ is not α-stable, i.e., λ ⊨ GF ¬α, then λ is (¬α)-fair. Although there is a duality between fairness and stability, this duality breaks when applying these environment assumptions to the problem of LTLf synthesis. This is because, in addition to the assumptions, the synthesis problems also require the LTLf specification to be satisfied.
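On ultimately periodic traces, given as a finite prefix followed by a loop repeated forever, both assumption patterns can be checked by inspecting the loop alone: GF α holds iff α holds somewhere in the loop, and FG α holds iff α holds everywhere in the loop. A small sketch under this (our own, purely illustrative) lasso representation:

```python
def is_fair(prefix, loop, alpha):
    """prefix . loop^omega |= GF alpha iff alpha holds at some loop
    position, since exactly the loop positions recur infinitely often."""
    return any(alpha(x) for x in loop)

def is_stable(prefix, loop, alpha):
    """prefix . loop^omega |= FG alpha iff alpha holds at every loop
    position; the finite prefix is irrelevant for both patterns."""
    return all(alpha(x) for x in loop)
```

The duality noted above is visible here: a lasso trace that is not α-fair has no loop position satisfying α, hence every loop position satisfies ¬α, i.e., the trace is (¬α)-stable.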
We now define fair and stable LTLf synthesis by making use of fair and stable traces.
Definition 4 (Fair (Stable) LTLf Synthesis).
An LTLf formula φ, defined over X ∪ Y, is α-fair (resp., α-stable) realizable if there exists a strategy g : (2^X)^+ → 2^Y such that, for an arbitrary infinite environment trace λ = X_0, X_1, … ∈ (2^X)^ω, if λ is α-fair (resp., α-stable), then we can find k ≥ 0 such that φ is true on the finite trace ρ = (X_0 ∪ g(X_0)), (X_1 ∪ g(X_0, X_1)), …, (X_k ∪ g(X_0, …, X_k)).
A fair (resp., stable) LTLf synthesis problem, described as a tuple (X, Y, φ, α), consists in checking whether φ, defined over X ∪ Y, is α-fair (resp., α-stable) realizable. The synthesis procedure aims at computing a strategy g if the problem is realizable.
Intuitively speaking, φ describes the desired goal when the environment behavior satisfies the assumption. An agent strategy g for the fair (resp., stable) synthesis problem is winning if it guarantees the satisfaction of φ under the condition that the environment behaves in such a way that α holds infinitely often (resp., that α eventually holds forever). The realizability procedure answers the question of the existence of a winning strategy, and the synthesis procedure amounts to computing g if it exists. In fact, one can consider two variants of the synthesis problem, depending on which player moves first, in the sense of assigning values to the variables under its control first. Here we consider the environment as the first player (as in planning), but a version where the agent moves first can be obtained by a small modification.
Since every LTLf formula φ can be translated to a Deterministic Finite Automaton (DFA) that accepts exactly the same language as φ [13], we are able to reduce the problems of fair LTLf synthesis and stable LTLf synthesis to specific two-player DFA games, in particular, the fair DFA game and the stable DFA game, respectively. We start with introducing DFA games.
Games over DFAs
Two-player games on DFAs are games consisting of two players, the environment and the agent. X and Y are disjoint sets of environment Boolean variables and agent Boolean variables, respectively. The specification of the game arena is given by a DFA A = (2^{X ∪ Y}, S, s0, δ, Acc), where 2^{X ∪ Y} is the alphabet, S is a set of states, s0 ∈ S is the initial state, δ : S × 2^{X ∪ Y} → S is the transition function, and Acc ⊆ S is the set of accepting states.
A round of the game consists of both players setting the values of the variables under their respective control. A play over A records how the two players set the values at each round and how the DFA proceeds according to these values. Formally, a play from state s is an infinite trace π = (X_0 ∪ Y_0), (X_1 ∪ Y_1), … with X_i ∈ 2^X and Y_i ∈ 2^Y, inducing the run s_0 = s, s_{i+1} = δ(s_i, X_i ∪ Y_i). Moreover, we assign the environment as the first player, which sets its values first.
A play is considered a winning play if it satisfies a certain winning condition; different winning conditions lead to different games. In this paper, we consider two specific two-player games, the fair DFA game and the stable DFA game, both of which are described as (A, α), where A is the game arena and α is the environment constraint.
Fair DFA Game. Although the ultimate goal of solving a fair DFA game is to obtain winning plays for the agent, it is more straightforward to formulate the game considering the environment as the protagonist, so we first define the winning condition of the environment over a play. A play π over A is winning for the environment with respect to a fair DFA game (A, α) if the following two conditions hold:
Recurrence: π is α-fair (that is, α holds infinitely often along π);
Safety: s_i ∉ Acc for all i ≥ 0 (Acc is avoided).
Consequently, a play is winning for the agent if one of the following conditions holds:
Stability: π is not α-fair (that is, eventually α never holds again along π);
Reachability: s_i ∈ Acc for some i ≥ 0 (Acc is reached).
Stable DFA Game. As for a stable DFA game (A, α), a play π over A is winning for the environment if the following two conditions hold:
Stability: π is α-stable (that is, eventually α holds forever along π);
Safety: s_i ∉ Acc for all i ≥ 0 (Acc is avoided).
Consequently, a play is winning for the agent if one of the following conditions holds:
Recurrence: π is not α-stable (that is, ¬α holds infinitely often along π);
Reachability: s_i ∈ Acc for some i ≥ 0 (Acc is reached).
Since we consider here the environment as the first player, a strategy for the agent is a function g : (2^X)^+ → 2^Y, deciding the values of the controllable variables for every possible history of the uncontrollable variables. Respectively, an environment strategy is a function h : (2^Y)^* → 2^X. A play π follows an agent strategy g (resp., an environment strategy h) if Y_i = g(X_0, …, X_i) for all i ≥ 0 (resp., X_i = h(Y_0, …, Y_{i-1}) for all i ≥ 0).
We can now define winning states and winning strategies.
Definition 5 (Winning State and Winning Strategy).
In the games described above, s ∈ S is a winning state for the agent (resp., environment) if there exists an agent strategy g (resp., an environment strategy h) such that every play from s that follows g (resp., h) is an agent (resp., environment) winning play. Then g (resp., h) is a winning strategy for the agent (resp., environment) from s.
As shown in [26], both the fair DFA game and the stable DFA game described above are determined; that is, a state s is a winning state for the agent if and only if s is not a winning state for the environment. The realizability procedure of the game consists of checking whether there exists a winning strategy for the agent from the initial state s0. The synthesis procedure aims at computing such a strategy.
We then show how to reduce the problems of fair LTLf synthesis and stable LTLf synthesis to the fair DFA game and the stable DFA game, respectively. Hence we can solve the DFA game, thus settling the corresponding synthesis problem.
Solution to Fair LTLf Synthesis
In order to perform fair LTLf synthesis, given a problem (X, Y, φ, α), we first translate the LTLf specification φ into a DFA A. We then view (A, α) as a fair DFA game, keeping exactly the separation between environment and agent variables of the original synthesis problem: specifically, we assign X as the environment variables and Y as the agent variables. Finally, we solve the fair DFA game, thus settling the fair LTLf synthesis problem. The following theorem establishes the correctness of this technique.
Theorem 1.
The fair LTLf synthesis problem (X, Y, φ, α) is realizable iff the fair DFA game (A, α) is realizable.
Proof.
We prove the theorem in both directions.
(⇐) Since the fair DFA game (A, α) is realizable for the agent, the initial state s0 is an agent winning state with winning strategy g. Therefore, every play π over A from s0 following g is a winning play for the agent. Moreover, for every such play from s0, one of the following conditions holds:
The environment trace λ of π is not α-fair.
The environment trace λ of π is α-fair. Since π is winning for the agent, there exists k such that s_{k+1} ∈ Acc. This implies that ρ ⊨ φ holds, where ρ is the prefix of π up to round k.
Consequently, the strategy g assures that, for an arbitrary environment trace λ, if λ is α-fair, then there is k ≥ 0 such that φ is true on the finite trace ρ. We conclude that (X, Y, φ, α) is realizable.
(⇒) For this direction, we assume that (X, Y, φ, α) is realizable; then there exists a strategy g that realizes it. Consider an arbitrary environment trace λ: one of the following conditions holds:
λ is not α-fair; then the induced play over A from s0 that follows g is winning for the agent by default.
λ is α-fair; then on the induced play π over A from s0 there exists k such that φ is true on the prefix ρ of π up to round k, in which case s_{k+1} ∈ Acc. Therefore, π is winning for the agent.
Consequently, we conclude that the fair DFA game (A, α) is realizable for the agent. ∎
Fair DFA Game Solving
Winning a fair DFA game means that the agent can eventually reach an "agent wins" region, from which, if the constraint α holds infinitely often, it is possible to reach an accepting state. Given a fair DFA game (A, α), we proceed as follows: (1) compute the "agent wins" region of the fair DFA game (A, α); (2) check realizability; (3) return an agent winning strategy if realizable.
Since the environment winning condition is more intuitive, in order to show the solution to the fair DFA game, we start by solving the Recurrence-Safety game, which considers the environment as the protagonist. The idea for winning such a game is that the environment should remain in an "environment wins" region from which α holds infinitely often (the Recurrence condition), while the accepting states are forever avoidable (the Safety condition). Therefore, in order to have both Recurrence (α holds infinitely often) and Safety (the accepting states Acc are avoided), the "environment wins" region is computed as:
Win_env = νZ.μW. (¬Acc ∧ (Pre^α_env(Z) ∨ Pre_env(W))),
where Z and W range over subsets of S, Pre_env(T) = {s ∈ S | ∃X ∈ 2^X. ∀Y ∈ 2^Y. δ(s, X ∪ Y) ∈ T} is the one-round environment-controllable predecessor of T, and Pre^α_env(T) is defined analogously but additionally requires X ⊨ α.
The fixpoint stages for Z (note Z_{i+1} ⊆ Z_i, for i ≥ 0, by monotonicity) are:
Z_0 = S,
Z_{i+1} = μW. (¬Acc ∧ (Pre^α_env(Z_i) ∨ Pre_env(W))).
Eventually, Z_{k+1} = Z_k for some k, such that Win_env = Z_k.
The fixpoint stages for W with respect to Z_i (note W_j ⊆ W_{j+1}, for j ≥ 0, by monotonicity) are:
W_0 = ∅,
W_{j+1} = ¬Acc ∧ (Pre^α_env(Z_i) ∨ Pre_env(W_j)).
Finally, W_{j+1} = W_j for some j, such that Z_{i+1} = W_j.
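On an explicit-state arena, this nested fixpoint can be sketched as below. This is illustrative only: the actual implementation is symbolic (BDD-based), and the encoding (a transition map delta[s][(x, y)] with the environment choosing x first, and a predicate alpha on environment choices) is our own:

```python
def win_env_fair(states, acc, delta, xvals, yvals, alpha):
    """Environment winning region of the fair DFA game: states from
    which the environment can make alpha true infinitely often while
    never visiting an accepting state (nested fixpoint nu Z. mu W)."""
    def pre_env(target, restrict_alpha):
        # States from which the environment can force the successor into
        # `target` in one round (it picks x first, the agent answers y);
        # with restrict_alpha, the environment must also pick x |= alpha.
        return {s for s in states
                if any((not restrict_alpha or alpha(x)) and
                       all(delta[s][(x, y)] in target for y in yvals)
                       for x in xvals)}
    z = set(states)
    while True:                          # outer greatest fixpoint on Z
        w = set()
        while True:                      # inner least fixpoint on W
            new = {s for s in states if s not in acc and
                   (s in pre_env(z, True) or s in pre_env(w, False))}
            if new == w:
                break
            w = new
        if w == z:
            return z
        z = w
```

On a two-state arena where state 1 is accepting and state 0 only loops to itself, the environment wins from 0 (it can assert alpha forever without ever reaching the accepting state); if instead the agent can force the move from 0 into the accepting state, the environment winning region is empty.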
The following theorem assures that the nested fixpoint computation of Win_env collects exactly all environment winning states of the fair DFA game.
Theorem 2.
For a fair DFA game (A, α) and a state s ∈ S, we have s ∈ Win_env iff s is an environment winning state.
Proof.
We prove the two directions separately.
(⇒) We prove this direction by showing the contrapositive. If a state s ∉ Win_env, then s must be removed from some approximate at stage i, therefore s ∉ Z_{i+1}. That is, no matter what the environment strategy is, plays from s satisfy neither of the following conditions:
α holds and the play gets trapped in Win_env without visiting accepting states;
α eventually gets to hold and from there α can be true infinitely often without visiting accepting states.
Therefore, s is not an environment winning state. So if s is an environment winning state, then s ∈ Win_env holds.
(⇐) If a state s ∈ Win_env, then s ∈ Z_i for every stage i. That is, there is an environment strategy such that, no matter what the agent strategy is, plays from s satisfy either of the following conditions:
α holds and the play gets trapped in Win_env without visiting accepting states;
α eventually gets to hold and from there α is true infinitely often without visiting accepting states.
Thus s is a winning state for the environment. ∎
Due to the determinacy of the fair DFA game, the set of agent winning states can be computed by negating (dualizing) Win_env:
Win_agt = μZ.νW. (Acc ∨ (Pre^α_agt(Z) ∧ Pre_agt(W))),
where Pre_agt(T) = {s ∈ S | ∀X ∈ 2^X. ∃Y ∈ 2^Y. δ(s, X ∪ Y) ∈ T} and Pre^α_agt(T) requires the condition only for X ⊨ α.
Theorem 3.
A fair DFA game (A, α) has an agent winning strategy if and only if s0 ∈ Win_agt.
Strategy Extraction
Having completed the realizability-checking procedure, this section deals with generating the agent winning strategy when (A, α) is realizable. It is known that if some strategy realizing the game exists, then there also exists a finite-state strategy, generated by a finite-state transducer [6]. Formally, the agent winning strategy can be represented as a deterministic finite transducer based on the set Win_agt, described below.
Definition 6 (Deterministic Finite Transducer).
Given a fair DFA game (A, α), where A = (2^{X ∪ Y}, S, s0, δ, Acc), a deterministic finite transducer T = (2^X, 2^Y, Q, s0, ϱ, ω) of such a game is defined as follows:
Q = Win_agt is the set of agent winning states, with s0 ∈ Q;
ϱ : Q × 2^X → Q is the transition function such that ϱ(q, X) = δ(q, X ∪ ω(q, X));
ω : Q × 2^X → 2^Y is the output function such that, at an agent winning state q with assignment X, ω(q, X) returns an assignment Y leading to an agent winning play.
The transducer T generates the strategy g in the sense that, for every λ ∈ (2^X)^+ with λ = λ' · X, we have g(λ) = ω(ϱ(s0, λ'), X), with the usual extension of ϱ to words over 2^X from s0. Note that there are many possible choices for the output function ω. The transducer defines a winning strategy g by restricting ω to return only one possible setting of Y.
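Operationally, the transducer is a Mealy machine over the winning states: in each round it reads the environment assignment, emits the agent assignment chosen by the output function, and moves on. A minimal sketch (the dictionary-based encoding is ours):

```python
def run_transducer(s0, rho, omega, env_inputs):
    """Run a deterministic finite transducer: at state q on environment
    input x, output omega[(q, x)] and move to rho[(q, x)], i.e. to the
    DFA successor of q under x united with the chosen agent assignment.
    Returns the sequence of agent outputs along the play."""
    q, outputs = s0, []
    for x in env_inputs:
        outputs.append(omega[(q, x)])
        q = rho[(q, x)]
    return outputs
```

For example, with a single winning state that grants requests and waits otherwise, the output sequence mirrors the environment's request pattern.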
We extract the output function ω from the approximates Z_0 ⊆ Z_1 ⊆ ⋯ of the outer least fixpoint of Win_agt, from which, no matter what the environment strategy is, plays where α keeps holding have to make progress towards Acc. Thus, we consider Win_agt = μZ.νW. (Acc ∨ (Pre^α_agt(Z) ∧ Pre_agt(W))), with approximates defined as:
Z_0 = ∅,
Z_{i+1} = νW. (Acc ∨ (Pre^α_agt(Z_i) ∧ Pre_agt(W))).
Define an output function ω as follows: for q ∈ Z_{i+1} \ Z_i, for all possible values X ∈ 2^X, set ω(q, X) to be some Y such that δ(q, X ∪ Y) ∈ Z_i holds for X ⊨ α. Consider a deterministic finite transducer T with ω constructed as described above; the following theorem guarantees that T generates an agent winning strategy g.
Theorem 4.
The strategy g generated by T is a winning strategy for the agent.
Solution to Stable LTLf Synthesis
Solving the stable LTLf synthesis problem (X, Y, φ, α) relies on solving the stable DFA game (A, α), where A is the DFA corresponding to φ. The following theorem guarantees the correctness of this reduction.
Theorem 5.
The stable LTLf synthesis problem (X, Y, φ, α) is realizable iff the stable DFA game (A, α) is realizable.
Proof.
We prove the theorem in both directions.
(⇐) Since the stable DFA game (A, α) is realizable for the agent, the initial state s0 is an agent winning state with winning strategy g. Therefore, every play π over A from s0 following g is a winning play for the agent. Moreover, for every such play from s0, one of the following conditions holds:
The environment trace λ of π is not α-stable.
The environment trace λ of π is α-stable. Since π is winning for the agent, there exists k such that s_{k+1} ∈ Acc. Therefore, ρ ⊨ φ holds, where ρ is the prefix of π up to round k.
Consequently, the strategy g assures that, for an arbitrary environment trace λ, if λ is α-stable, then there exists k ≥ 0 such that φ is true on the finite trace ρ. Thus (X, Y, φ, α) is realizable.
(⇒) For this direction, we assume that (X, Y, φ, α) is realizable; then there exists a strategy g that realizes it. Consider an arbitrary environment trace λ: one of the following conditions holds:
λ is not α-stable; then the induced play over A from s0 that follows g is winning for the agent by default.
λ is α-stable; then on the induced play π over A from s0 there exists k such that φ is true on the prefix ρ of π up to round k, in which case s_{k+1} ∈ Acc. Therefore, π is winning for the agent.
Consequently, we conclude that the stable DFA game (A, α) is realizable for the agent. ∎
Stable DFA Game Solving
Despite the duality between fairness and stability, solving the stable DFA game cannot be obtained by directly dualizing the solution to the fair DFA game. This is because the computation here involves a Stability-Safety game, which is not dual to the Recurrence-Safety game arising in fair DFA game solving. In order to deal with the stable DFA game, we again first consider the environment as the protagonist. We compute the set of environment winning states as follows:
Win_env = μZ.νW. (¬Acc ∧ (Pre^α_env(W) ∨ Pre_env(Z))),
where Z and W range over subsets of S, Pre_env(T) = {s ∈ S | ∃X ∈ 2^X. ∀Y ∈ 2^Y. δ(s, X ∪ Y) ∈ T}, and Pre^α_env(T) additionally requires X ⊨ α.
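The stable-game fixpoint swaps the roles of the two nesting levels: the outer fixpoint is now a least one (the environment first reaches a region) and the inner one a greatest one (from which it keeps alpha true forever while avoiding accepting states). An explicit-state sketch, under the same illustrative encoding we used for the fair game (the real implementation is symbolic, BDD-based):

```python
def win_env_stable(states, acc, delta, xvals, yvals, alpha):
    """Environment winning region of the stable DFA game: states from
    which the environment can reach a trap where it keeps alpha true
    forever, never visiting an accepting state (mu Z. nu W)."""
    def pre_env(target, restrict_alpha):
        # One-round environment-controllable predecessor of `target`;
        # with restrict_alpha, the environment must pick x |= alpha.
        return {s for s in states
                if any((not restrict_alpha or alpha(x)) and
                       all(delta[s][(x, y)] in target for y in yvals)
                       for x in xvals)}
    z = set()
    while True:                          # outer least fixpoint on Z
        w = set(states)
        while True:                      # inner greatest fixpoint on W
            new = {s for s in states if s not in acc and
                   (s in pre_env(w, True) or s in pre_env(z, False))}
            if new == w:
                break
            w = new
        if w == z:
            return z
        z = w
```

On the same two-state arenas as before: with a pure self-loop at state 0 the environment wins from 0 (it holds alpha forever while avoiding the accepting state 1); if asserting alpha forces the move into the accepting state, the environment winning region is empty.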
The following theorem assures that the nested fixpoint computation of Win_env collects exactly all environment winning states of the stable DFA game.
Theorem 6.
For a stable DFA game (A, α) and a state s ∈ S, we have s ∈ Win_env iff s is an environment winning state.
Correspondingly, since the stable DFA game is determined, the set of agent winning states can be computed by dualization as follows:
Win_agt = νZ.μW. (Acc ∨ (Pre^α_agt(W) ∧ Pre_agt(Z))),
where Pre_agt(T) = {s ∈ S | ∀X ∈ 2^X. ∃Y ∈ 2^Y. δ(s, X ∪ Y) ∈ T} and Pre^α_agt(T) requires the condition only for X ⊨ α.
Theorem 7.
A stable DFA game (A, α) has an agent winning strategy if and only if s0 ∈ Win_agt.
Strategy Extraction
Here, the agent winning strategy can also be represented as a deterministic finite transducer over the set of agent winning states Win_agt such that s0 ∈ Win_agt.
We extract the output function ω from the approximates W_0 ⊆ W_1 ⊆ ⋯ of the inner least fixpoint over Win_agt, from which, no matter what the environment strategy is, plays cannot keep α holding forever without reaching Acc. Thus, we consider the fixpoint computation:
W_0 = ∅,
W_{j+1} = Acc ∨ (Pre^α_agt(W_j) ∧ Pre_agt(Win_agt)).
Define an output function ω such that, for q ∈ W_{j+1} \ W_j, for all possible values X ∈ 2^X, ω(q, X) is set to some Y such that δ(q, X ∪ Y) ∈ W_j holds for X ⊨ α. The following theorem guarantees that the resulting transducer T generates an agent winning strategy g.
Theorem 8.
The strategy g generated by T is a winning strategy for the agent.
Evaluation
We observe that a straightforward approach to LTLf synthesis under assumptions can be obtained by a reduction to standard LTL synthesis, which allows us to utilize tools for LTL synthesis to solve the fair (or stable) LTLf synthesis problem. In this section, we first revisit the reduction to standard LTL synthesis, and then show an experimental comparison with the approach proposed earlier in this paper.
Reduction to LTL Synthesis. The insight of reducing LTLf synthesis under assumptions to LTL synthesis comes from the reduction in [33] for general LTLf synthesis, and in [8] for constrained LTLf synthesis, where the constraint describes the desired environment behaviors, under which the goal is to satisfy the given LTLf specification. Both reductions adopt the translation rules in [13] to polynomially transform an LTLf formula φ into an LTL formula φ', retaining satisfiability equivalence, where a fresh proposition indicates the last instant of the finite trace. Such a translation bridges the gap between LTLf over finite traces and LTL over infinite traces. Based on the translation from LTLf to LTL, we then reduce the fair (resp., stable) LTLf synthesis problem (X, Y, φ, α) to the LTL synthesis problem with specification GF α → φ' (resp., FG α → φ').
Implementation. Based on the LTLf synthesis tool Syft, we implemented the fixpoint-based techniques described above in two tools, FSyft and StSyft, for fair and stable LTLf synthesis, respectively.
Experimental Methodology
Benchmarks.
We collected 1200 formulas consisting of two classes of benchmarks. (i) 1000 randomly conjuncted LTLf formulas over 100 basic cases, generated in the style described in [33]; the length of a formula, indicating the number of conjuncts, ranges from 1 to 5. The assumption (either fairness or stability) is over one variable randomly selected from all environment variables. (ii) 200 LTLf synthesis benchmarks with assumptions generated from a scalable counter game, described as follows:
There is an n-bit binary counter. At each round, the environment chooses whether or not to request an increment of the counter, and the agent can choose to grant the request or ignore it.
The goal is to get the counter to have all bits set to 1, so that the counter reaches its maximal value.
The fairness assumption is that the environment requests the counter to be incremented infinitely often.
The stability assumption is that the environment eventually keeps requesting the counter to be incremented forever.
We reduce solving the counter game above to solving LTLf synthesis with assumptions. First, we have n agent variables denoting the values of the counter bits. We also introduce another n agent variables representing the carry bits. In addition, we have an environment variable representing whether the environment makes an increment request, and granting the request is under the agent's control. The LTLf formula φ is then the conjunction of the formulas encoding the counter dynamics together with the goal of eventually having all bits set to 1, and the constraint α states that the environment requests an increment. Obviously, such a counter game only yields realizable cases, since a winning strategy for the agent is to grant all increment requests.
In order to obtain unrealizable cases, we can modify the counter game above. One possibility is to have the counter increment by 2 whenever the agent chooses to grant the request sent by the environment. This modification leaves no winning strategy for the agent: the maximal counter value, having each bit set to 1, is odd, whereas incrementing by 2 at each step never reaches an odd value. Therefore, for the bits other than the lowest one we keep the same formulation, while for the lowest bit we change the formulation accordingly.
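The unrealizability argument rests on a simple parity observation: the target value 2^n − 1 is odd, while a counter started at 0 and incremented by 2 (modulo 2^n) only ever takes even values. A quick illustrative check (the helper name is ours):

```python
def values_reached(n, step):
    """All values an n-bit counter takes when repeatedly incremented
    by `step` from 0, wrapping modulo 2**n."""
    seen, v = set(), 0
    while v not in seen:
        seen.add(v)
        v = (v + step) % 2 ** n
    return seen
```

For any n, the all-ones value 2**n - 1 is reached with step 1 but never with step 2, since only even values occur.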
Therefore, we have 200 counter-game benchmarks in total, with the number of counter bits n ranging from 1 to 100, and both a realizable and an unrealizable case for each n.
Experiment Setup. All tests were run on a computer cluster. Each test had exclusive access to a node with Intel(R) Xeon(R) CPU E5-2650 v2 processors running at 2.60 GHz. The timeout was set to 1000 seconds.
Correctness. Our implementation was verified by comparing the results returned by FSyft and StSyft with those from Strix. No inconsistency was encountered on the solved cases.
Experimental Results.
We evaluated the efficiency of FSyft and StSyft in terms of the number of solved cases and total time cost. We compared these two tools against Strix in an end-to-end comparison experiment. Therefore, both the DFA construction time and the fixpoint computation time were counted for FSyft and StSyft. For Strix, we counted the running time from feeding the corresponding LTL formula to Strix until receiving the result. The comparisons on both classes of benchmarks show the advantage of the fixpoint-based technique proposed in this paper as an effective method for both fair LTLf synthesis and stable LTLf synthesis.
Randomly Conjuncted Benchmarks. Figure 1 and Figure 2 show the number of solved cases as the given time increases for fair LTLf synthesis and stable LTLf synthesis, respectively. As shown in the figures, both FSyft and StSyft are able to handle almost all cases (1000 in total for each), while Strix only solves a small fraction of the cases that FSyft and StSyft can solve. Moreover, around 400 of the cases solved by FSyft and StSyft, roughly half, finish in less than 0.1 seconds, while Strix is unable to solve any case within that time limit.
Counter Game. Figure 3 and Figure 4 show the running time of all tools on the counter-game benchmarks. Since all tools failed on cases with larger numbers of counter bits, here we only show the realizable/unrealizable cases with small numbers of counter bits, giving 20 cases for each synthesis problem. The x-labels indicate the realizability and the number of counter bits of each case. Both FSyft and StSyft are able to deal with cases with many more counter bits, while Strix only solves cases with up to 7 counter bits, for either stable or fair LTLf synthesis. On the commonly solved cases, both FSyft and StSyft take much less time than Strix.
Conclusions
In this paper we presented a fixpoint-based technique for LTLf synthesis under assumptions, for basic forms of fairness and stability, which is quite effective, as our experiments show. Our technique can be summarized as follows: use the DFA for the LTLf formula as the arena of a game in which the environment's winning condition is to avoid reaching the accepting states while making the assumption true. Note that for a general LTL assumption (see [2]), we can transform the assumption into a parity automaton, take the Cartesian product with the DFA, and play a parity/reachability game over the resulting arena. Comparing this possible approach to the reduction to LTL synthesis is a subject for future work.
Acknowledgments. Work supported in part by the European Research Council under the European Union's Horizon 2020 Programme through the ERC Advanced Grant WhiteMech (No. 834228), NSF grants IIS-1527668, CCF-1704883, and IIS-1830549, and NSFC Projects No. 61572197, No. 61632005 and No. 61532019.
Appendix
Due to lack of space, we moved some proofs, and the details of the reduction from fair LTLf synthesis and stable LTLf synthesis to standard LTL synthesis, to this appendix.
For better readability, we restate the definition of two-player DFA games here. Two-player games on DFAs are games consisting of two players, the environment and the agent. X and Y are disjoint sets of environment Boolean variables and agent Boolean variables, respectively. The specification of the game arena is given by a DFA A = (2^{X ∪ Y}, S, s0, δ, Acc), where
2^{X ∪ Y} is the alphabet;
S is a set of states;
s0 ∈ S is the initial state;
δ : S × 2^{X ∪ Y} → S is the transition function;
Acc ⊆ S is the set of accepting states.
Here, we consider two specific two-player games, the fair DFA game and the stable DFA game, both of which are described as (A, α), where A is the game arena and α is the environment constraint, a Boolean formula over X.
Fair LTLf Synthesis
Due to the determinacy of the fair DFA game, the set of agent winning states can be computed by negating (dualizing) Win_env:
Win_agt = μZ.νW. (Acc ∨ (Pre^α_agt(Z) ∧ Pre_agt(W))).
The following theorem guarantees the correctness of the computation of the set of agent winning states Win_agt.
Theorem 11.
A fair DFA game (A, α) has an agent winning strategy if and only if s0 ∈ Win_agt.
Proof.
Since Win_agt is the dual formula of Win_env, for a state s ∈ S we have s ∈ Win_agt if and only if s ∉ Win_env, i.e., iff s is not a winning state for the environment, in which case, by determinacy, s is an agent winning state. Therefore, for a state s ∈ S, we have s ∈ Win_agt if and only if s is an agent winning state. Moreover, the fair DFA game is realizable if and only if the initial state s0 is an agent winning state. Consequently, we conclude that the fair DFA game (A, α) is realizable if and only if s0 ∈ Win_agt. ∎
Define an output function ω as follows: for q ∈ Z_{i+1} \ Z_i, for all possible values X ∈ 2^X, set ω(q, X) to be some Y such that δ(q, X ∪ Y) ∈ Z_i holds for X ⊨ α. The following theorem guarantees that the deterministic finite transducer T constructed with this ω generates a winning strategy g for the agent.
Theorem 12.
The strategy g generated by T is a winning strategy for the agent.
Proof.
Consider an arbitrary environment trace λ; the corresponding play π over A that follows g starts from s0. We now prove that π is a winning play for the agent. For every state q along π, the construction of ω ensures that, no matter how the environment sets X, ω(q, X) returns Y such that δ(q, X ∪ Y) moves to a strictly lower approximate whenever X ⊨ α. Thus either α eventually stops holding forever, or π visits Acc; meanwhile, π stays within Win_agt. The first possibility satisfies the stability condition and the second the reachability condition; in both cases π is a winning play. Therefore, g is a winning strategy for the agent. ∎
Stable LTLf Synthesis
In the stable DFA game (A, α), we compute the set of environment winning states as follows: