Two Variable vs. Linear Temporal Logic in Model Checking and Games


Michael Benedikt, Rastislav Lenhardt, and James Worrell
Department of Computer Science, University of Oxford, United Kingdom
{michael.benedikt, rastislav.lenhardt, jbw}@cs.ox.ac.uk
Abstract.

Model checking linear-time properties expressed in first-order logic has non-elementary complexity, and thus various restricted logical languages are employed. In this paper we consider two such restricted specification logics, linear temporal logic (LTL) and two-variable first-order logic (FO²). LTL is more expressive, but FO² can be more succinct, and hence it is not clear which should be easier to verify. We take a comprehensive look at the issue, giving a comparison of verification problems for FO², LTL, and various sub-logics thereof across a wide range of models. In particular, we look at unary temporal logic (UTL), a subset of LTL that is expressively equivalent to FO²; we also consider the stutter-free fragment of FO², obtained by omitting the successor relation, and the expressively equivalent fragment of UTL, obtained by omitting the next and previous connectives.

We give three logic-to-automata translations which can be used to give upper bounds for FO² and UTL and various sub-logics. We apply these to get new bounds for both non-deterministic systems (hierarchical and recursive state machines, games) and for probabilistic systems (Markov chains, recursive Markov chains, and Markov decision processes). We couple these with matching lower-bound arguments.

Next, we look at combining FO² verification techniques with those for LTL. We present a language that subsumes both FO² and LTL, and inherits the model checking properties of both languages. Our results give both a unified approach to understanding the behaviour of FO² and LTL, and a nearly comprehensive picture of the complexity of verification for these logics and their sublogics.

Key words and phrases:
Finite Model Theory, Verification, Automata

1. Introduction

The complexity of verification problems clearly depends on the specification language for describing properties. Arguably the most important such language is Linear Temporal Logic (LTL). LTL has a simple syntax, one can verify LTL properties over Kripke structures in polynomial space, and one can check satisfiability also in polynomial space. Moreover, Kamp [Kam68] has shown that LTL has the same expressiveness as first-order logic over words. For example, the first-order property “after we are born, we live until we die”:

∀x (born(x) → ∃y (x ≤ y ∧ die(y) ∧ ∀z (x ≤ z ∧ z < y → live(z))))

is expressed in LTL by the formula G(born → (live U die)).

In contrast with LTL, model checking first-order queries has non-elementary complexity [Sto74]—thus LTL could be thought of as a tractable syntactic fragment of FO. Another approach to obtaining tractability within first-order logic is to keep first-order syntax, but restrict to two-variable formulas. The resulting specification language, FO², has also been shown to have dramatically lower complexity than full first-order logic. In particular, Etessami, Vardi and Wilke [EVW02] showed that satisfiability for FO² is NEXPTIME-complete and that FO² is strictly less expressive than FO (and thus less expressive than LTL also). Indeed, [EVW02] shows that FO² has the same expressive power as Unary Temporal Logic (UTL): the fragment of LTL with only the unary operators “previous”, “next”, “sometime in the past”, and “sometime in the future”. Consider the example above. We have shown that it can be expressed in LTL, but it is easy to show that it cannot be expressed in UTL, and therefore cannot be expressed in FO².

Although FO² is less expressive than LTL, there are some properties that are significantly easier to express in FO² than in LTL. Consider the property that two n-bit identifiers agree:

∀x ∀y ((p₁(x) ↔ p₁(y)) ∧ ⋯ ∧ (pₙ(x) ↔ pₙ(y)))
It is easy to show that there is an exponential blow-up in transforming the above formula into an equivalent LTL formula. We thus have three languages FO², UTL, and LTL, with FO² and UTL equally expressive, LTL strictly more expressive, and with FO² incomparable in succinctness with LTL.

Are verification tasks easier to perform in LTL, or in FO²? This is the main question we address in this paper. There are well-known examples of problems that are easier in LTL than in FO²: in particular satisfiability, which is PSPACE-complete for LTL and NEXPTIME-complete for FO² [EVW02]. We will show that there are also tasks where FO² is more tractable than LTL.

Our main contribution is a uniform approach to the verification of FO² via automata. We show that translations to the appropriate automata can give optimal bounds for verification of FO² on both non-deterministic and probabilistic structures. We also show that such translations allow us to understand the verification of the fragment of FO² formed by removing the successor relation from the signature, denoted FO²[<]. It turns out, somewhat surprisingly, that for this fragment we can get the same complexity upper bounds for verification as for the simplest temporal logics. For our translations from FO²[<] to automata, we make use of a key result of Weis [Wei11], showing that models of FO²[<] formulas realise only a polynomial number of types. We extend this “few types” result from finite to infinite words and use it to characterise the structure of automata for FO²[<].

The outcome of our translations is a comprehensive analysis of the complexity of FO² and UTL verification problems, together with those for their respective stutter-free fragments. We begin with model checking problems for Kripke structures and for recursive state machines (RSMs), which we compare to known results for LTL on these models. We then turn to two-player games, considering the complexity of the problem of determining which player has a strategy to ensure that a given formula is satisfied. We then move from non-deterministic systems to probabilistic systems. We start with Markov chains and recursive Markov chains, the analogs of Kripke structures and RSMs in the probabilistic case. Finally we consider one-player stochastic games, looking at the question of whether the player can devise a strategy that is winning with a given probability.

Towards the end of the paper, we consider extensions of FO², and in particular how FO² verification techniques can be combined with those for Linear Temporal Logic (LTL). We present a language subsuming both FO² and LTL. We show that the complexity of verification problems for this combined logic can be attacked by our automata-theoretic methods, and indeed reduces to verification of FO² and LTL individually. As a result we show that the worst-case complexity of probabilistic verification, as well as non-deterministic verification, for the combined logic is (roughly speaking) the maximum of the complexity for FO² and LTL.

This paper expands on results presented in two conference papers, [BLW11, BLW12].

Organization: Section 2 contains preliminaries, while Section 3 gives fundamental results on the model theory of FO² and its relation to UTL that will be used in the remainder of the paper. Section 4 presents the logic-to-automata translations used in our upper bounds. The first is a translation of a given UTL formula to a large disjoint union of Büchi automata with certain structural restrictions. This can also be used to give a translation from a given FO² formula to a (still larger) union of Büchi automata. The second does something similar for formulas of the stutter-free fragment FO²[<]. The last translation maps FO² formulas to deterministic parity automata, which is useful for certain problems involving games.

Section 5 gives upper and lower bounds for non-deterministic systems, while Section 6 is concerned with probabilistic systems. In Section 7 we consider model checking of the combined logic, which subsumes both FO² and LTL, and finally in Section 8 we consider the impact of extending all the previous logics with let definitions.


2. Logic, Automata and Complexity Classes

We consider a first-order signature with a set P of unary predicates and two binary predicates, < (less than) and succ (successor). Fixing two distinct variables x and y, we denote by FO² the set of first-order formulas over the above signature involving only the variables x and y. We denote by FO²[<] the sublogic in which the binary predicate succ is not used. We write φ(x) for a formula in which only the variable x occurs free.

In this paper we are interested in interpretations of FO² on infinite words. An ω-word w = w₀w₁w₂⋯ over the powerset alphabet 2^P represents a first-order structure with domain ℕ, in which each unary predicate p ∈ P is interpreted by the set {i ∈ ℕ : p ∈ wᵢ} and the binary predicates < and succ have the obvious interpretations.

We also consider Linear Temporal Logic (LTL) on ω-words. The formulas of LTL are built from atomic propositions using Boolean connectives and the temporal operators X (next), X⁻¹ (previously), F (eventually), F⁻¹ (sometime in the past), U (until), and S (since). Formally, LTL is defined by the following grammar:

φ ::= p | φ ∧ φ | ¬φ | X φ | X⁻¹ φ | F φ | F⁻¹ φ | φ U φ | φ S φ

where p ranges over propositional variables. Unary temporal logic (UTL) denotes the subset without U and S, while the stutter-free fragment of UTL further omits X and X⁻¹. We use G φ as an abbreviation for ¬F¬φ.

For an ω-word w = w₀w₁w₂⋯ and a position i ∈ ℕ, we define the semantics of LTL inductively on the structure of formulas as follows:

  1. (w, i) ⊨ p iff the atomic proposition p holds at position i of w, i.e., p ∈ wᵢ

  2. (w, i) ⊨ φ ∧ ψ iff (w, i) ⊨ φ and (w, i) ⊨ ψ

  3. (w, i) ⊨ ¬φ iff it is not the case that (w, i) ⊨ φ

  4. (w, i) ⊨ X φ iff (w, i+1) ⊨ φ

  5. (w, i) ⊨ X⁻¹ φ iff i > 0 and (w, i−1) ⊨ φ

  6. (w, i) ⊨ φ U ψ iff there exists j ≥ i s.t. (w, j) ⊨ ψ and, for all k with i ≤ k < j, we have (w, k) ⊨ φ

  7. (w, i) ⊨ φ S ψ iff there exists j ≤ i s.t. (w, j) ⊨ ψ and, for all k with j < k ≤ i, we have (w, k) ⊨ φ

  8. (w, i) ⊨ F φ iff there exists j ≥ i s.t. (w, j) ⊨ φ

  9. (w, i) ⊨ F⁻¹ φ iff there exists j ≤ i s.t. (w, j) ⊨ φ
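The clauses above can be made concrete with a small evaluator. The sketch below interprets the operators over finite words only, so that all quantification over positions is bounded; the tuple-based formula encoding, the word representation and all names are our own illustrative choices, not the paper's notation.

# Minimal evaluator for the semantic clauses above, restricted to finite
# words.  Formulas are nested tuples, e.g. ('U', ('ap', 'live'), ('ap', 'die'));
# words are lists of sets of atomic propositions.  Illustrative sketch only.

def holds(w, i, f):
    op = f[0]
    if op == 'ap':                       # clause 1: atomic proposition
        return f[1] in w[i]
    if op == 'and':                      # clause 2: conjunction
        return holds(w, i, f[1]) and holds(w, i, f[2])
    if op == 'not':                      # clause 3: negation
        return not holds(w, i, f[1])
    if op == 'X':                        # clause 4: next
        return i + 1 < len(w) and holds(w, i + 1, f[1])
    if op == 'Y':                        # clause 5: previously (X^-1)
        return i > 0 and holds(w, i - 1, f[1])
    if op == 'U':                        # clause 6: until
        return any(holds(w, j, f[2]) and
                   all(holds(w, k, f[1]) for k in range(i, j))
                   for j in range(i, len(w)))
    if op == 'S':                        # clause 7: since
        return any(holds(w, j, f[2]) and
                   all(holds(w, k, f[1]) for k in range(j + 1, i + 1))
                   for j in range(i + 1))
    if op == 'F':                        # clause 8: eventually
        return any(holds(w, j, f[1]) for j in range(i, len(w)))
    if op == 'P':                        # clause 9: sometime in the past
        return any(holds(w, j, f[1]) for j in range(i + 1))
    raise ValueError('unknown operator ' + op)

# "After we are born, we live until we die", i.e. G(born -> live U die),
# with G f abbreviating not F not f, checked on a five-letter word.
G = lambda f: ('not', ('F', ('not', f)))
spec = G(('not', ('and', ('ap', 'born'),
                  ('not', ('U', ('ap', 'live'), ('ap', 'die'))))))
word = [{'born', 'live'}, {'live'}, {'live'}, {'die'}, set()]
print(holds(word, 0, spec))   # True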

It is well known that over ω-words LTL has the same expressiveness as first-order logic, and UTL has the same expressiveness as FO². Moreover, while FO² is less expressive than LTL, it can be exponentially more succinct [EVW02] – for concrete examples of these facts, see the introduction.



Figure 1. Expressiveness Diagram

We can combine the succinctness of FO² and the expressiveness of LTL by extending the former with the temporal operators U and S (applied to formulas with at most one free variable). The syntax of the resulting combined logic divides formulas into two syntactic classes: temporal formulas and first-order formulas. Temporal formulas are given by the grammar

θ ::= p | φ(x) | θ ∧ θ | ¬θ | θ U θ | θ S θ

where p is an atomic proposition and φ(x) is a first-order formula with one free variable. First-order formulas are given by the grammar

φ ::= θ(x) | x < y | succ(x, y) | φ ∧ φ | ¬φ | ∃x φ | ∃y φ

where θ is a temporal formula. Here the first-order formula θ(x) asserts that the temporal formula θ holds at position x. The temporal operators F, F⁻¹, X, and X⁻¹ can all be introduced as derived operators.
The relative expressiveness of the logics defined thus far is illustrated in Figure 1.

Finally, we consider an extension of FO² with let definitions. We inductively define the formulas and the unary predicate subformulas that occur free in such a formula. The atomic formulas are as in FO², with each atomic formula p(x) occurring freely in itself. The constructors include all those of FO², with the set of free subformula occurrences being preserved by all of these constructors.

There is one new formula constructor, of the form

let p(x) = ψ(x) in φ

where p is a unary predicate, ψ is a formula in which x is the only free variable and no occurrence of the predicate p is free, and φ is an arbitrary formula. A subformula occurs freely in let p(x) = ψ(x) in φ iff it occurs freely in ψ, or it occurs freely in φ and its predicate is not p.

The semantics of the let extension is defined via a translation function to FO², with the only non-trivial rule being that a formula let p(x) = ψ(x) in φ translates to the formula obtained from the translation of φ by substituting ψ(x) for every free occurrence of p(x) and ψ(y) for every free occurrence of p(y). Here ψ(y) denotes the formula obtained by substituting the variable y for all free occurrences of x in ψ. We call the resulting logic the let extension of FO², and we similarly define let extensions of LTL, UTL, and the other logics considered above.
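As a rough illustration of this translation, the sketch below expands let definitions by substitution over a tuple encoding of our own devising; for simplicity it assumes that predicate names are not shadowed by nested definitions and that each definition body uses x as its only variable name.

# Sketch of the let-expansion translation over a home-grown tuple encoding:
# ('let', 'p', psi, phi) binds the unary predicate 'p' to the body psi (whose
# free variable is assumed to be named 'x'); atoms are (predicate, variable)
# pairs such as ('p', 'x') or ('p', 'y').  Illustration only.

def expand_lets(f):
    if not isinstance(f, tuple):
        return f
    if f[0] == 'let':                         # ('let', p, psi, phi)
        _, p, psi, phi = f
        return substitute(expand_lets(phi), p, expand_lets(psi))
    return (f[0],) + tuple(expand_lets(g) for g in f[1:])

def substitute(phi, p, psi):
    # Replace every atom (p, v) in phi by psi with its variable renamed to v.
    if not isinstance(phi, tuple):
        return phi
    if len(phi) == 2 and phi[0] == p:         # an atom p(x) or p(y)
        return rename(psi, 'x', phi[1])
    return (phi[0],) + tuple(substitute(g, p, psi) for g in phi[1:])

def rename(f, old, new):
    if f == old:
        return new
    if not isinstance(f, tuple):
        return f
    return tuple(rename(g, old, new) for g in f)

Note that repeated expansion can blow up the size of the resulting formula, although, as Lemma 0.1 below shows, the set of distinct subformulas of the result remains linear in the size of the original formula.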

For a temporal logic formula φ, or an FO² formula φ(x) with one free variable, we denote by L(φ) the set of infinite words that satisfy φ at the initial position. The quantifier depth of an FO² formula and the operator depth of a UTL formula are defined as usual; in either case the length of the formula φ is denoted |φ|.

The notion of a subformula of an FO² formula is defined as usual. For a formula φ of the let extension, we consider the set of subformulas of the equivalent FO² formula obtained via the translation defined above.

Lemma 0.1.

Given a formula φ of the let extension of FO², the set of subformulas of its translation into FO² has size linear in |φ|.

Proof.

Notice that the substitution performed when expanding a let definition introduces no subformulas other than renamings (swapping the variables x and y) of subformulas of the definition body and of the formula into which it is substituted. By structural induction it follows that, for a formula φ of the let extension, the set of subformulas of its translation has size at most 2|φ|.

Büchi Automata. Our results will be obtained by transforming formulas to automata that accept ω-words. We will be most concerned with generalised Büchi automata (GBA). A GBA is a tuple (Σ, Q, Q₀, Δ, ℱ) with alphabet Σ, set of states Q, set of initial states Q₀, transition relation Δ, and a collection ℱ of sets of final states. The acceptance condition is that for each set F ∈ ℱ there is a state of F which is visited infinitely often. We can have labels either on states or on transitions, but both models are equivalent. For more details, see [VW86]. We will consider two important classes of Büchi automata: an automaton is said to be deterministic in the limit if all states reachable from accepting states are deterministic; it is unambiguous if, for each state s, each word is accepted along at most one run that starts at s.
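As a concrete reading of the first structural condition, the following sketch checks limit determinism for an automaton given as a set of transition triples; the representation and the function name are ours, and unambiguity (a semantic property of runs) is not checked here.

# Sketch: a structural check that a (generalised) Büchi automaton is
# deterministic in the limit, i.e. every state reachable from an accepting
# state has at most one successor per letter.  Transitions are assumed to be
# (state, letter, state) triples; 'accepting' is the union of the final sets.

from collections import defaultdict, deque

def deterministic_in_limit(transitions, accepting):
    succ = defaultdict(set)                 # (state, letter) -> successor states
    out = defaultdict(set)                  # state -> all successor states
    for q, a, r in transitions:
        succ[(q, a)].add(r)
        out[q].add(r)
    # Collect all states reachable from some accepting state.
    reach, queue = set(accepting), deque(accepting)
    while queue:
        q = queue.popleft()
        for r in out[q]:
            if r not in reach:
                reach.add(r)
                queue.append(r)
    return all(len(succ[(q, a)]) <= 1 for (q, a) in succ if q in reach)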

Deterministic Parity Automata. For some model checking problems we will need to work with deterministic automata; in particular, we will use deterministic parity automata. A deterministic parity automaton is a tuple (Σ, Q, q₀, δ, pri) with alphabet Σ, set of states Q, an initial state q₀, transition function δ, and a priority function pri mapping each state to a natural number. The transition function maps each state and symbol of the alphabet to exactly one new state. A run of such an automaton on an input ω-word induces an infinite sequence of priorities. The acceptance condition is that the highest priority occurring infinitely often in this sequence is even.
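The acceptance condition can be checked effectively on ultimately periodic inputs. The sketch below, in a dictionary-based representation of our own choosing, runs a deterministic parity automaton on a word of the form u·v^ω and tests whether the highest priority on the eventually repeating cycle is even.

# Sketch: does a deterministic parity automaton accept u·v^omega?
def dpa_accepts(init, delta, priority, u, v):
    q = init
    for a in u:                 # read the finite prefix u
        q = delta[(q, a)]
    # Iterate the loop v until the state at the start of an iteration repeats;
    # the priorities seen from the first repeat onward form the eventual cycle.
    seen, history = {}, []
    while q not in seen:
        seen[q] = len(history)
        prios = []
        for a in v:
            q = delta[(q, a)]
            prios.append(priority[q])
        history.append(prios)
    cycle = [p for prios in history[seen[q]:] for p in prios]
    return max(cycle) % 2 == 0  # highest infinitely recurring priority is even

# Example: "letter 'p' occurs infinitely often", priorities {after p: 2, else: 1}.
delta = {('s', 'p'): 'sp', ('s', 'q'): 'sq',
         ('sp', 'p'): 'sp', ('sp', 'q'): 'sq',
         ('sq', 'p'): 'sp', ('sq', 'q'): 'sq'}
prio = {'s': 1, 'sp': 2, 'sq': 1}
print(dpa_accepts('s', delta, prio, ['q'], ['q', 'p']))  # True
print(dpa_accepts('s', delta, prio, ['p'], ['q']))       # False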

Complexity Classes. Our complexity bounds involve counting classes. #P is the class of functions f for which there is a non-deterministic polynomial-time Turing machine M such that f(x) is the number of accepting computation paths of M on input x. A complete problem for #P is #SAT, the problem of counting the number of satisfying assignments of a given boolean formula. We will be considering computations of probabilities, not integers, so our problems will technically not be in #P; but some of them will have representations computable in a closely related counting class, and will be #P-hard. For brevity, we will sometimes abuse notation by saying that such probability computation problems are #P-complete. The class of functions #EXP is defined analogously to #P, except with a non-deterministic exponential-time machine. We will deal with a decision version of #EXP, PEXP, the set of problems solvable by a nondeterministic Turing machine in exponential time, where the acceptance condition is that more than half of the computation paths accept [BFT98].
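For concreteness, here is a brute-force #SAT counter over CNF formulas in the usual integer-literal encoding (an encoding we assume only for this illustration); it runs in exponential time, in line with the expectation that no polynomial-time counting algorithm exists.

# Brute-force #SAT: clauses are lists of non-zero integers; a negative
# integer denotes a negated variable.
from itertools import product

def count_sat(num_vars, clauses):
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2) has 2 satisfying assignments out of 4.
print(count_sat(2, [[1, 2], [-1, 2]]))   # 2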

Notation: In our complexity bounds, we will often write p(·) to denote a fixed but arbitrary polynomial.


3. FO² model theory and succinctness

We now discuss the model theory of FO², summarizing and slightly extending the material presented in Etessami, Vardi, and Wilke [EVW02] and in Weis and Immerman [WI09].

Recall that we will consider strings over the alphabet 2^P, where P is the set of unary predicates appearing in the input formula. We start by recalling the small-model property of FO² that underlies the NEXPTIME satisfiability result of Etessami, Vardi, and Wilke [EVW02]; it is also implicit in Theorem 6.2 of [WI09].

The domain of a word w is the set of positions in w. The range of w is the set of letters occurring in w. Write also Inf(w) for the set of letters that occur infinitely often in w.

Given a finite or infinite word w, a position i in w, and k ≥ 0, we define the k-type of w at position i to be the set of formulas

τ_k(w, i) = { φ(x) ∈ FO² : φ has quantifier depth at most k and w ⊨ φ(i) }.

Given words w, u and positions i, j, write (w, i) ≡_k (u, j) if and only if τ_k(w, i) = τ_k(u, j). Furthermore, we write w ≡_k u for two strings w, u if for all FO² sentences φ of quantifier depth at most k we have w ⊨ φ iff u ⊨ φ.

The small model property of [EVW02] can then be stated as follows:

Proposition 0.2 ([EVW02]).

Let Σ be a finite alphabet. Then (i) for any string u and positive integer k there exists a string u′ such that u ≡_k u′ and the length of u′ is at most exponential in k and |P|; (ii) for any infinite string w and positive integer k there are finite strings u and v, of length at most exponential in k and |P|, such that w ≡_k u v^ω.

For completeness, we give a constructive proof of Proposition 0.2, which will be used in one of our translations of FO² to automata. This is Lemma 0.10 at the end of this section. For this it is convenient to use the following inductive characterisation of types, which is proven in [EVW02] by a straightforward induction:

Proposition 0.3 ([EVW02]).

Let k ≥ 0, let w, u be strings and let i, j be positions of w, u respectively. Then (w, i) ≡_{k+1} (u, j) if and only if (i) wᵢ = uⱼ, (ii) {τ_k(w, i′) : i′ < i} = {τ_k(u, j′) : j′ < j}, and (iii) {τ_k(w, i′) : i′ > i} = {τ_k(u, j′) : j′ > j}.

The next proposition states that we can collapse any two positions in a string that have the same k-type without affecting the ≡_k-class of the string.

Proposition 0.4 ([EVW02]).

Let w be a string and let positions i < j be such that τ_k(w, i) = τ_k(w, j). Writing w′ = w₀ ⋯ wᵢ w_{j+1} w_{j+2} ⋯, we have w ≡_k w′.

From these two propositions it follows that every finite string is equivalent under ≡_k to a string of length exponential in k and |P|.

Proposition 0.5.

Given a nonnegative integer k, for all strings w there exists a string w′ such that w ≡_k w′ and the length of w′ is bounded by a function exponential in k and |P|.

Proof.

We prove by induction on k that the set of k-types occurring along w has size bounded by a function of k and |P| alone.

The base case is clear.

For the induction step, assume that the number of k-types occurring along w is at most some bound T. Define a boundary point in w to be the position of the first or last occurrence of a given k-type. Then there are at most 2T boundary points. But by Proposition 0.3 the (k+1)-type at a given position i in w is determined by the letter wᵢ, the set of boundary points strictly less than i, and the set of boundary points strictly greater than i. Thus the number of (k+1)-types along w is at most

(0.1)

By Proposition 0.4, given any string in which there are two distinct positions with the same k-type, there exists a shorter ≡_k-equivalent string. From the bound (0.1) on the number of boundary points, we conclude that there exists a string w′ such that w ≡_k w′ and the length of w′ is bounded as claimed.

The relation ≡_k is also easy to compute:

Proposition 0.6.

Given strings u and w of length at most n, we can compute whether u ≡_k w in time polynomial in n and in the number of k-types occurring along u and w.

Proof.

For each rank j = 1, …, k we successively pass along w labelling each position with its j-type. Each rank requires two passes: we pass leftward through w computing the set of (j−1)-types to the left of each position, then we pass rightward computing the set of (j−1)-types to the right of each position. This requires 2k passes, with each pass taking time linear in |w| and at most quadratic in the number of types that occur along w. The bound now follows using the estimate of the number of types given in Proposition 0.5.
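The passes just described, together with the collapsing step of Proposition 0.4, can be sketched as follows for finite words. Following Proposition 0.3, a k-type is represented here simply by the letter together with the sets of (k−1)-types occurring strictly to its left and right; the data structures and names are our own, and letters are assumed hashable (e.g. strings).

# Label every position of a finite word with its k-type (Proposition 0.3 view).
def k_types(word, k):
    types = list(word)                      # rank 0: the letter itself
    for _ in range(k):
        left, seen = [], set()
        for t in types:                     # pass computing types to the left
            left.append(frozenset(seen))
            seen.add(t)
        right, seen = [None] * len(types), set()
        for i in range(len(types) - 1, -1, -1):   # pass computing types to the right
            right[i] = frozenset(seen)
            seen.add(types[i])
        types = [(word[i], left[i], right[i]) for i in range(len(word))]
    return types

# One collapsing step in the spirit of Proposition 0.4: if two positions carry
# the same k-type, cut out the segment between them.
def collapse_once(word, k):
    index = {}
    for i, ty in enumerate(k_types(word, k)):
        if ty in index:
            j = index[ty]
            return word[:j] + word[i:]
        index[ty] = i
    return word                             # all k-types already distinct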

Combining Propositions 0.5 and 0.6 we get:

Corollary 0.7.

Given k there exists a set R of representative strings such that each u ∈ R has length bounded as in Proposition 0.5, and for each string w there exists a unique u ∈ R such that w ≡_k u. Moreover this representative can be computed from w in time polynomial in |w| and in the bound of Proposition 0.5.
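Combining the two sketches above, one can compute a small ≡_k-equivalent string for a given finite word by collapsing until all k-types are distinct; again this is only an illustration of the procedure implicit in the corollary, using the hypothetical helper functions introduced earlier.

def representative(word, k):
    # Collapse repeatedly until every position carries a distinct k-type.
    while True:
        shorter = collapse_once(word, k)
        if len(shorter) == len(word):
            return word
        word = shorter

# Example: over a one-letter alphabet the word shrinks to three positions,
# one for each distinct 1-type (first, middle, last).
print(representative(list('aaaaaaaa'), 1))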

The following result is classical, and can be proven using games.

Proposition 0.8.

Given k and finite strings u, u′, for all (finite or infinite) strings w, w′, if u ≡_k u′ and w ≡_k w′ then uw ≡_k u′w′.

From the above we infer that the ≡_k-equivalence class of an infinite string is determined by a prefix of the string and the set of letters appearing infinitely often within it.

Proposition 0.9.

Fix k. Given an infinite word w, there exists N ∈ ℕ such that for all n ≥ N and any infinite word w′ with w′₀ ⋯ w′ₙ = w₀ ⋯ wₙ and Inf(w′) = Inf(w), it holds that w ≡_k w′.

Proof.

Define a strictly increasing sequence of integers inductively as follows.

Let be such that for all letter occurs infinitely often in . For let be such that . Now define .

Let and let for some such that . We claim that for all :

  1. if then ;

  2. if then if .

This claim entails the proposition. We prove the claim by induction on . The base case is obvious.

The induction step for Clause 1 is as follows. Suppose that ; we must show that . Certainly since and agree in the first letters. Similarly for all we have by Parts 1 and 2 of the induction hypothesis. Now for all there exists such that and hence by Part 2 of the induction hypothesis . We conclude that by Proposition 0.3.

The induction step for Clause 2 is as follows. Suppose that and ; we must show that . We will again use Proposition 0.3. Certainly for all there exists such that and hence . Now let . If then , and hence . Otherwise suppose . By definition of there exists , such that . Then by Clause 2 of the induction hypothesis.        

Combining Proposition 0.8 and Proposition 0.9, we complete the proof of Proposition 0.2, giving a slight strengthening of the conclusion for infinite words.

Lemma 0.10.

For any infinite string w and positive integer k there exists w′ = u (a₁ ⋯ a_m)^ω with w ≡_k w′, such that u can be chosen among infinitely many prefixes of w, and a₁, …, a_m is a list of the letters occurring infinitely often in w.


3.1. FO² and temporal logic

We now examine the relationship between FO² and UTL. Again we will be summarizing previous results while adding some new ones about the complexity of translation.

As mentioned previously, Etessami, Vardi and Wilke [EVW02] have studied the expressiveness and complexity of FO² on words. They show that FO² has the same expressiveness as unary temporal logic UTL, giving a linear translation of UTL into FO², and an exponential translation in the reverse direction.

Lemma 0.11 ([EVW02]).

Every FO² formula φ can be converted to an equivalent UTL formula ψ whose operator depth is linear in the quantifier depth of φ and whose size is at most exponential in |φ|. The translation runs in time polynomial in the size of the output.

With regard to complexity, [EVW02] shows that satisfiability for FO² over finite words or ω-words is NEXP-complete. The NEXP upper bound follows immediately from their “small model” theorem (see Proposition 0.2 stated earlier). NEXP-hardness is by reduction from a tiling problem. This reduction requires either the use of the successor predicate, or consideration of models where an arbitrary Boolean combination of predicates can hold, that is, words over an alphabet of the form 2^P.

The NEXP-hardness result for FO² does not carry over from satisfiability to model checking, since the collection of alphabet symbols that can appear in a word generated by the system being checked is bounded by the size of the system. However, the complexity of model checking is polynomially related to the complexity of satisfiability when the latter is measured as a function of both formula size and alphabet size. Hence in the rest of the section we will deal with words over the alphabet P itself, i.e., words in which a unique proposition holds at each position. We call this the unary alphabet restriction.

One obvious approach to obtaining upper bounds for FO² model checking would be to give a polynomial translation to UTL, and use a logic-to-automata translation for UTL. Without the unary alphabet restriction, an exponential blow-up in translating from FO² to UTL was shown necessary by Etessami, Vardi, and Wilke:

Proposition 0.12 ([EVW02]).

There is a sequence of FO² sentences φ_n over n unary predicates, of size polynomial in n, such that the shortest temporal logic formula equivalent to φ_n has size exponential in n.

The sequence given in [EVW02] to prove the above theorem relies on models in which several predicates may hold at the same position; in particular, their proof does not apply under the unary alphabet restriction. However, below we show that the exponential blow-up is necessary even in this restricted setting. Our proof is indirect; it uses the following result about the extension of FO² with let definitions:

Lemma 0.13.

There is a sequence of sentences φ_n of the let extension of FO², each mentioning n predicates, such that the shortest model of φ_n under the unary alphabet restriction has size at least 2^n.

Proof.

We define φ_n using a nested sequence of let definitions introducing unary predicates whose truth values at a position form an n-bit vector.

The body of the nested sequence of let definitions states that for every position x and every i ≤ n there exists a position y such that the vector of defined predicates at y has the same truth values as the vector at x in all but position i. Hence the vector must take all 2^n possible truth values as x ranges over all positions in the word, i.e., any model of φ_n must have length at least 2^n.

We now claim that φ_n is satisfiable. To show this, recursively define a sequence of words over the alphabet {a₁, …, a_n} by u₁ = a₁ and u_{i+1} = u_i a_{i+1} u_i. Then a model of φ_n can be read off from u_n: the vector of truth values counts down in binary as one moves along the word.
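A short computation with this doubling recursion, under the reading u₁ = a₁ and u_{i+1} = u_i a_{i+1} u_i assumed above, shows where the exponential length comes from; the encoding of letters as strings is ours, for illustration only.

# One concrete realisation of the doubling recursion: |u(n)| = 2^n - 1,
# which is the source of the exponential lower bound on model size.
def u(n):
    w = ['a1']
    for i in range(2, n + 1):
        w = w + ['a' + str(i)] + w
    return w

for n in range(1, 6):
    print(n, len(u(n)))        # prints 1, 3, 7, 15, 31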

In contrast, we show that basic temporal logic enhanced with let definitions has the small model property:

Lemma 0.14.

There is a polynomial p such that every satisfiable formula φ of UTL extended with let definitions has a model of size p(|φ|).

Proof.

In [EVW02, Section 5], Etessami, Vardi, and Wilke prove a small model property for UTL, which follows the same lines as the one given for FO², but with polynomial rather than exponential bounds on sizes. Instead of using types based on quantifier rank, the notion of type is based on the nesting of modalities; they thus look at modal k-types, where k is the nesting depth of modalities in φ. It is shown how to collapse infinite ω-words in order to get “smaller” ω-words with essentially the same type structure. In particular, a lemma of [EVW02] shows that for each ω-word w there are finite strings u and v such that the type of w at each position is matched by the type of u v^ω at a corresponding position, and the length of both u and v is bounded by a polynomial in the number of types occurring along w (that is, a polynomial version of Proposition 0.2).

The type at a position is determined by the predicate holding there and by the combination of temporal subformulas of φ holding at that position. Each temporal subformula, i.e. a subformula which starts with F or F⁻¹, can change its truth value at most once along the infinite word. Therefore there are at most polynomially many (in |φ| and in the number of temporal subformulas of φ) different combinations, and so also polynomially many types along the word.

Lemma 0.1 tells us that the number of temporal subformulas of the translation of φ is linear in |φ|, and therefore the number of types occurring along any word is polynomial in |φ|. Thus, applying the above-mentioned type-collapsing argument of [EVW02], we conclude that there is a polynomial-size model of φ.

The small model property for the let extension of UTL will allow the lifting of NP model-checking results to this language. Most relevant to our discussion of succinctness, it can be combined with the previous result to show that FO² is succinct with respect to UTL:

Proposition 0.15.

Even assuming the unary alphabet restriction, there is no polynomial translation from FO² formulas to equivalent UTL formulas.

Proof.

Proof by contradiction. Assuming there were such a polynomial translation, we could apply it locally to the body of every let definition in a formula of the let extension of FO². This would allow us to translate such a formula into a polynomial-size formula of the let extension of UTL. It would then follow from Lemma 0.14 that every satisfiable formula of the let extension of FO² has a polynomial-sized model, which contradicts Lemma 0.13.

Proposition 0.15 shows that we cannot obtain better bounds for FO² merely by translation to UTL. Weis [Wei11] showed an NP bound on satisfiability of FO²[<] under the unary alphabet restriction (compared to NEXP-completeness of satisfiability in the general case). His approach is to show that models of FO²[<] formulas realise only polynomially many types. We will later show that the approach of Weis can be extended to obtain complexity bounds for model checking that are as low as one could hope, i.e., that match the complexity bounds for the simplest temporal logic. We do so by building sufficiently small unambiguous Büchi automata for FO²[<] formulas.


4. Translations

This section contains a key contribution of this paper—three logic-to-automata translations for UTL, FO²[<], and FO². We will later use these translations to obtain upper complexity bounds for model checking both non-deterministic and probabilistic systems. As we will show, for most of the problems it is sufficient to translate a given formula to an unambiguous Büchi automaton. Our first translation produces such an automaton from a given UTL formula. This is then lifted to full FO² via a standard syntactic transformation from FO² to UTL. Our second translation goes directly from the stutter-free fragment FO²[<] to unambiguous Büchi automata, and is used to obtain optimal bounds for this fragment. Our third translation constructs a deterministic parity automaton from an FO² formula. Having a deterministic automaton is necessary for solving two-player games and for quantitative model checking of Markov decision processes.


4.1. Translation I: From UTL to unambiguous Büchi automata

We begin with a translation that takes UTL formulas to Büchi automata. Combining this with the standard syntactic transformation of FO² to UTL, we obtain a translation from FO² to Büchi automata.

Recall from the preliminaries section that a Büchi automaton is said to be deterministic in the limit if all accepting states and their descendants are deterministic, and that it is unambiguous if each word has at most one accepting run.

We will aim at the following result:

Theorem 0.16.

Let φ be a UTL formula over a set of propositions P, with operator depth k with respect to X and X⁻¹. Given an alphabet Σ, there is a family A₁, …, A_m of at most exponentially many Büchi automata such that (i) L(φ) is the disjoint union of the languages L(A_i); (ii) each A_i has a number of states polynomial in |Σ|^k and in |φ|; (iii) each A_i is unambiguous and deterministic in the limit; (iv) there is a polynomial-time procedure that outputs A_i given input φ and index i.

We first outline the construction of the family of automata. Let φ be a formula of UTL over a set of atomic propositions P. Following Wolper’s construction [Wol01], define cl(φ), the closure of φ, to consist of all subformulas of φ (including φ itself) and their negations, where we identify ¬¬ψ with ψ. Furthermore, say that a set A ⊆ cl(φ) is a subformula type if (i) for each formula ψ ∈ cl(φ) precisely one of ψ and ¬ψ is a member of A; (ii) ψ ∈ A implies Fψ ∈ A and F⁻¹ψ ∈ A whenever these formulas lie in cl(φ); (iii) ψ₁ ∧ ψ₂ ∈ A iff ψ₁ ∈ A and ψ₂ ∈ A. Given subformula types A and B, write A ≈ B if A and B agree on all formulas whose outermost connective is a temporal operator, i.e., for all formulas ψ we have Fψ ∈ A iff Fψ ∈ B, and F⁻¹ψ ∈ A iff F⁻¹ψ ∈ B. Note that these types are different from the types based on modal depth considered before.

Fix an alphabet Σ ⊆ 2^P and write Q for the set of subformula types A with A ∩ P ∈ Σ. In subsequent applications Σ will arise as the set of propositional labels in a structure to be model checked. Following [Wol01] we define a generalised Büchi automaton A_φ with L(A_φ) = L(φ). The set of states is Q, with the set of initial states comprising those A ∈ Q such that (i) φ ∈ A and (ii) F⁻¹ψ ∈ A if and only if ψ ∈ A, for any formula F⁻¹ψ ∈ cl(φ). The state labelling function is defined by λ(A) = A ∩ P. The transition relation Δ consists of those pairs (A, B) such that

  1. Fψ ∈ A iff either ψ ∈ A or Fψ ∈ B;

  2. F⁻¹ψ ∈ B and ψ ∉ B implies F⁻¹ψ ∈ A;

  3. F⁻¹ψ ∈ A implies F⁻¹ψ ∈ B.

The collection of accepting sets is ℱ = { F_ψ : Fψ ∈ cl(φ) }, where F_ψ = { A ∈ Q : Fψ ∉ A or ψ ∈ A }.

A run of A_φ on a word w yields a function ρ mapping each position of w to a subformula type. Moreover it can be shown that if the run is accepting then for all formulas ψ ∈ cl(φ) and positions i, we have ψ ∈ ρ(i) if and only if ψ holds at position i of w [Wol01, Lemma 2]. But since every type contains each subformula or its negation, ρ is thereby uniquely determined by w. We conclude that A_φ is unambiguous and accepts the language L(φ).
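The construction can be prototyped directly. The sketch below builds the state set, initial states, transitions and accepting sets for formulas over F and F⁻¹ only (written 'P' in the code); the tuple encoding and the explicit pairing of types with letters are our own simplifications, not the notation of the paper.

# States are pairs (type, letter): the type is the set of closure formulas
# deemed true at the current position, and the letter (a frozenset or string)
# is the state label, i.e. the paper's lambda.

from itertools import chain, combinations

def closure(f):
    subs = {f}
    if f[0] in ('not', 'F', 'P'):
        subs |= closure(f[1])
    elif f[0] == 'and':
        subs |= closure(f[1]) | closure(f[2])
    return subs

def consistent(A, cl, letter):
    for g in cl:
        if g[0] == 'ap' and (g in A) != (g[1] in letter):
            return False
        if g[0] == 'not' and (g in A) == (g[1] in A):
            return False
        if g[0] == 'and' and (g in A) != (g[1] in A and g[2] in A):
            return False
        if g[0] in ('F', 'P') and g[1] in A and g not in A:
            return False                      # the operators are reflexive
    return True

def build_gba(phi, alphabet):
    cl = closure(phi)
    subsets = [frozenset(s) for s in
               chain.from_iterable(combinations(sorted(cl, key=repr), r)
                                   for r in range(len(cl) + 1))]
    states = [(A, a) for A in subsets for a in alphabet if consistent(A, cl, a)]
    # Initial states: phi holds and no past obligation is pending.
    initial = [(A, a) for (A, a) in states
               if phi in A and all((g in A) == (g[1] in A)
                                   for g in cl if g[0] == 'P')]
    def step(A, B):
        for g in cl:
            if g[0] == 'F' and (g in A) != (g[1] in A or g in B):
                return False                  # F psi now iff psi now or F psi next
            if g[0] == 'P' and (g in B) != (g[1] in B or g in A):
                return False                  # P psi next iff psi next or P psi now
        return True
    transitions = [((A, a), (B, b)) for (A, a) in states for (B, b) in states
                   if step(A, B)]
    # One accepting set per F-subformula: states where F psi is not pending.
    accepting = [{(A, a) for (A, a) in states if g not in A or g[1] in A}
                 for g in cl if g[0] == 'F']
    return states, initial, transitions, accepting

Under the unary alphabet restriction one would take Σ = P and keep only the pairs whose type agrees with a single proposition; the sub-automata of Theorem 0.16 are then obtained by restricting this automaton to paths of its strongly connected components.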

Lemma 0.17.

Consider the automaton A_φ as a directed graph with set of vertices Q and set of edges Δ. Then (i) states A and B are in the same strongly connected component iff A ≈ B; (ii) each strongly connected component has size at most |Σ|; (iii) the dag of strongly connected components has depth at most |cl(φ)| and outdegree at most the number of subformula types; (iv) A_φ is deterministic within each strongly connected component, i.e., given transitions (A, B) and (A, B′) with B and B′ in the same strongly connected component, we have B = B′ if and only if λ(B) = λ(B′).

Proof.

(i) If A ≈ B then by definition of the transition relation we have (A, B) ∈ Δ and (B, A) ∈ Δ. Thus A and B are in the same strongly connected component. Conversely, suppose that A and B are in the same strongly connected component. By clauses (1) and (3) in the definition of the transition relation we have that Fψ ∈ A iff Fψ ∈ B, and likewise F⁻¹ψ ∈ A iff F⁻¹ψ ∈ B. But for each formula, A contains either it or its negation, and similarly for B; it follows that A ≈ B.

(ii) If A ≈ B, then A = B if and only if λ(A) = λ(B). Thus the number of states in an SCC is at most the number of labels.

(iii) Suppose that (A, B) ∈ Δ is an edge connecting two distinct SCCs, i.e., A ≉ B. Then there is a subformula ψ such that either Fψ ∈ A \ B or F⁻¹ψ ∈ B \ A. Note that in either case this difference persists in all states reachable from B under Δ. Since there are at most |cl(φ)| such subformulas, we conclude that the depth of the DAG of SCCs is at most |cl(φ)|.

(iv) This follows immediately from (i).        

We proceed to the proof of Theorem 0.16.

Proof.

We first treat the case k = 0, i.e., φ does not mention X or X⁻¹.

Let A_φ be the automaton corresponding to φ, as defined above. For each path π of SCCs in the SCC dag of A_φ we define a sub-automaton A_π as follows. A_π has as its set of states Q_π the union of the SCCs along π; its set of initial states is the set of initial states of A_φ lying in the first SCC of π; its transition relation is the transition relation of A_φ restricted to Q_π; its collection of accepting sets is that of A_φ, with each set restricted to Q_π.

It follows from observations (ii) and (iii) in Lemma 0.17 that A_π has at most |cl(φ)| · |Σ| states, and from observation (iii) that there are at most exponentially many such automata. Since A_φ is unambiguous, each accepting run of A_φ yields an accepting run of A_π for a unique path π, and so the languages L(A_π) partition L(φ).

Finally, A_π is deterministic in the limit since all accepting states lie in the bottom strongly connected component of the path, and all states in such a component are deterministic by Lemma 0.17(iv). If we convert A_π from a generalised Büchi automaton to an equivalent Büchi automaton (using the construction from [Wol01]), then the resulting automaton remains unambiguous and deterministic in the limit. This transformation touches only the bottom strongly connected component of A_π, whose size becomes at most quadratic.

This completes the proof in the case k = 0. The general case can be handled by reduction to this case. A UTL formula φ can be transformed to a normal form φ′ in which all next-time and last-time operators are pushed inside the other Boolean and temporal operators. Now φ′ can be regarded as a formula without X and X⁻¹ over an extended set of propositions. Applying the case k = 0 to φ′, we obtain a family of automata over the extended alphabet such that their languages partition L(φ′), each automaton is unambiguous and deterministic in the limit, and each has a number of states polynomial in the extended alphabet size and in |φ|.

Now we can construct a deterministic transducer with states that transforms (in the natural way) an -word over alphabet into an <