The Wadge Hierarchy of Deterministic Tree Languages
Abstract
We provide a complete description of the Wadge hierarchy for deterministically recognisable sets of infinite trees. In particular we give an elementary procedure to decide if one deterministic tree language is continuously reducible to another. This extends Wagner’s results on the hierarchy of regular languages of words to the case of trees.
Logical Methods in Computer Science Vol. 4 (4:15) 2008, pp. 1–44. Submitted Nov. 2, 2006; published Dec. 23, 2008.
Filip Murlak
ACM Subject Classification: F.4.3, F.4.1, F.1.1, F.1.3.
*An extended abstract of this paper was presented at ICALP’06 in Venice, Italy.
1 Introduction
Two measures of complexity of recognisable languages of infinite words or trees have been considered in the literature: the index hierarchy, which reflects the combinatorial complexity of the recognising automaton and is closely related to the μ-calculus, and the Wadge hierarchy, which is the refinement of the Borel/projective hierarchy that gives the deepest insight into the topological complexity of languages. Klaus Wagner was the first to discover remarkable relations between the two hierarchies for finite-state recognisable (regular) sets of infinite words [14]. Subsequently, decision procedures determining a regular language’s position in both hierarchies were given [4, 7, 15].
For tree automata the index problem is only solved when the input is a deterministic automaton [9, 13]. As for the topological complexity of recognisable tree languages, it goes much higher than that of regular word languages, which are all Δ⁰₃. Indeed, co-Büchi automata over trees may recognise Π¹₁-complete languages [8], and Skurczyński [12] proved that there are even weakly recognisable tree languages in every finite level of the Borel hierarchy. This may suggest that in the tree case the topological and combinatorial complexities diverge. On the other hand, the investigations of the Borel/projective hierarchy of deterministic languages [5, 8] reveal some interesting connections with the index hierarchy.
Wagner’s results [14, 15], giving rise to what is now called the Wagner hierarchy (see [10]), inspire the search for a complete picture of the two hierarchies and the relations between them for recognisable tree languages. In this paper we concentrate on the Wadge hierarchy of deterministic tree languages: we give a full description of the Wadge equivalence classes forming the hierarchy, together with a procedure calculating the equivalence class of a given deterministic language. In particular, we show that the hierarchy has the height (ω^ω)^3 + 3, which should be compared with ω^ω for regular word languages [15], (ω^ω)^ω for deterministic context-free languages [1], (ω₁^CK)^ω for languages recognised by deterministic Turing machines [11], or an unknown ordinal for languages recognised by nondeterministic Turing machines, which is the same ordinal as for nondeterministic context-free languages [2].
The key notion of our argument is an adaptation of the Wadge game to tree languages, redefined entirely in terms of automata. Using this tool we construct a collection of canonical automata representing the Wadge degrees of all deterministic tree languages. Then we provide a procedure calculating the canonical form of a given deterministic automaton, which runs within the time of finding the productive states of the automaton (the exact complexity of this problem is unknown, but not worse than exponential).
2 Automata
We use the symbol ω to denote the set of natural numbers {0, 1, 2, …}. For an alphabet Σ, Σ* is the set of finite words over Σ and Σ^ω is the set of infinite words over Σ. The concatenation of words u and v will be denoted by uv, and the empty word by ε. Concatenation is naturally generalised to infinite sequences of finite words w₁w₂w₃…. The concatenation of sets A ⊆ Σ*, B ⊆ Σ* ∪ Σ^ω is AB = {uv : u ∈ A, v ∈ B}.
A tree is any subset of ω* closed under the prefix relation. An element of a tree is usually called a node. A leaf is any node of a tree which is not a (strict) prefix of some other node. A labelled tree (or a tree over Σ) is a function t : dom t → Σ such that dom t is a tree. For v ∈ dom t we define t.v, the subtree of t rooted in v, by dom(t.v) = {u : vu ∈ dom t} and t.v(u) = t(vu).
A full n-ary Σ-labelled tree is a function t : {0, 1, …, n−1}* → Σ. The symbol T_Σ will denote the set of full binary trees over Σ. From now on, if not stated otherwise, a “tree” will mean a full binary tree over some alphabet.
Out of a variety of acceptance conditions for automata on infinite structures, we choose the parity condition. A nondeterministic parity automaton on words can be presented as a tuple A = ⟨Σ, Q, δ, q₀, rank⟩, where Σ is a finite input alphabet, Q is a finite set of states, δ ⊆ Q × Σ × Q is the transition relation, and q₀ ∈ Q is the initial state. The meaning of the function rank : Q → ω will be explained later. Instead of (q, σ, q′) ∈ δ one usually writes q --σ--> q′. A run of an automaton A on a word w₁w₂… ∈ Σ^ω is a word ρ₀ρ₁… ∈ Q^ω such that ρ₀ = q₀ and ρᵢ --wᵢ₊₁--> ρᵢ₊₁ for all i ∈ ω. A run ρ is accepting if the highest rank repeating infinitely often in rank(ρ₀)rank(ρ₁)… is even; otherwise ρ is rejecting. A word is accepted by A if there exists an accepting run on it. The language recognised by A, denoted L(A), is the set of words accepted by A. An automaton is deterministic if its transition relation is a total function Q × Σ → Q. Note that a deterministic automaton has a unique run (accepting or not) on every word. We call a language deterministic if it is recognised by a deterministic automaton.
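Since the states occurring infinitely often in an ultimately periodic run are exactly the states on its repeated loop, the parity condition can be checked directly on the loop part. A minimal sketch (the function name and the list encoding of ranks are ours, not from the paper):

```python
def parity_accepts(loop_ranks):
    """Parity condition for a run whose sequence of ranks is ultimately
    periodic: the ranks seen infinitely often are exactly those on the
    repeated loop, so the run is accepting iff the highest rank on the
    loop is even."""
    return max(loop_ranks) % 2 == 0

# A run with ranks 1 2 (1 0)^omega repeats the loop {1, 0}; its highest
# rank 1 is odd, so the run is rejecting.
assert parity_accepts([1, 0]) is False
assert parity_accepts([2, 1]) is True
```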
A nondeterministic automaton on trees is a tuple A = ⟨Σ, Q, δ, q₀, rank⟩, the only difference being that δ ⊆ Q × Σ × Q × Q. Like before, q --σ--> q₁, q₂ means (q, σ, q₁, q₂) ∈ δ. We write q --σ,0--> q₁ if there exists a state q₂ such that q --σ--> q₁, q₂. Similarly for q --σ,1--> q₂. A run of A on a tree t ∈ T_Σ is a tree ρ : {0, 1}* → Q such that ρ(ε) = q₀ and whenever ρ(v) = q, t(v) = σ, and q --σ--> q₁, q₂, then ρ(v0) = q₁ and ρ(v1) = q₂. A path of the run ρ is accepting if the highest rank repeating infinitely often on it is even; otherwise it is rejecting. A run is called accepting if all its paths are accepting. If at least one of them is rejecting, so is the whole run. An automaton is called deterministic if its transition relation is a total function Q × Σ → Q × Q.
By A_q we denote the automaton A with the initial state set to q. A state q is all-accepting if A_q accepts all trees, and all-rejecting if A_q rejects all trees. A state (a transition) is called productive if it is used in some accepting run. Observe that being productive is more than just not being all-rejecting. A state q is productive if and only if it is not all-rejecting and there is a path q⁰ --σ₁,d₁--> q¹ --σ₂,d₂--> ⋯ --σₙ,dₙ--> qⁿ starting in the initial state q⁰ = q₀ and ending in qⁿ = q, such that for each of its transitions the state reached in the direction opposite to dᵢ is not all-rejecting.
Without loss of generality we may assume that all states in Q are productive save for one all-rejecting state ⊥, and that all transitions are either productive or of the form q --σ--> ⊥, ⊥. The reader should keep in mind that this assumption has influence on the complexity of our algorithms. Transforming a given automaton into such a form of course needs calculating the productive states, which is equivalent to deciding a language’s emptiness. The latter problem is known to be in NP ∩ co-NP and has no polynomial time solution yet. Therefore, we can only claim that our algorithms are polynomial for the automata that underwent the above preprocessing. We will try to mention this whenever it is particularly important.
The Mostowski–Rabin index of an automaton is the pair (min rank(Q), max rank(Q)).
An automaton with index (ι, κ) is often called a (ι, κ)-automaton. Scaling down the rank function if necessary, one may assume that ι is either 0 or 1. Thus, the indices are elements of {0, 1} × ω. For an index (ι, κ) we shall denote by (ι, κ)‾ the dual index, i. e., (0, κ)‾ = (1, κ + 1) and (1, κ)‾ = (0, κ − 1). Let us define an ordering of indices with the following formula: (ι, κ) < (ι′, κ′) if and only if κ − ι < κ′ − ι′.
In other words, one index is smaller than another if and only if it uses fewer ranks. This means that dual indices are not comparable. The Mostowski–Rabin index hierarchy for a certain class of automata consists of ascending sets (levels) of languages recognised by (ι, κ)-automata (see Fig. 1).
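Under the definitions above, comparing indices amounts to comparing the numbers of ranks they use; a small sketch (the function name, return convention, and tuple encoding are ours):

```python
def compare_indices(i1, i2):
    """Compare two Mostowski-Rabin indices (iota, kappa), iota in {0, 1}.
    An index is smaller iff it uses fewer ranks (kappa - iota + 1);
    indices with equally many ranks but different iota are dual and
    hence incomparable. Returns '<', '>', '=' or 'incomparable'."""
    (iota1, kappa1), (iota2, kappa2) = i1, i2
    size1, size2 = kappa1 - iota1 + 1, kappa2 - iota2 + 1
    if size1 < size2:
        return '<'
    if size1 > size2:
        return '>'
    return '=' if iota1 == iota2 else 'incomparable'

assert compare_indices((0, 1), (1, 3)) == '<'             # 2 ranks vs 3
assert compare_indices((0, 1), (1, 2)) == 'incomparable'  # dual pair
assert compare_indices((1, 2), (1, 2)) == '='
```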
The fundamental question about the hierarchy is its strictness, i. e., the existence, for each index, of languages recognised by a (ι, κ)-automaton but not by any automaton of a smaller index. The strictness of the hierarchy for deterministic tree automata follows easily from the strictness of the hierarchy for deterministic word automata [15]: if a word language L needs at least the index (ι, κ), so does the language of trees that have a word from L on the leftmost branch. The index hierarchy for nondeterministic automata is also strict [6]. In fact, the languages showing the strictness may be chosen deterministic: one example is the family of the languages of trees over the alphabet {ι, ι + 1, …, κ} satisfying the parity condition on each path.
The second important question one may ask about the index hierarchy is how to determine the exact position of a given language. This is known as the index problem.
Given a deterministic language, one may ask about its deterministic index, i. e., the exact position in the index hierarchy of deterministic automata (deterministic index hierarchy). This question can be answered effectively. Here we follow the method introduced by Niwiński and Walukiewicz [7].
A path in an automaton is a sequence of states and transitions
q⁰ --σ₁,d₁--> q¹ --σ₂,d₂--> ⋯ --σₙ,dₙ--> qⁿ.
A loop is a path starting and ending in the same state, q⁰ = qⁿ. A loop is called accepting if maxᵢ rank(qⁱ) is even. Otherwise it is rejecting. An m-loop is a loop with the highest rank on it equal to m. A sequence of loops λ_ι, λ_{ι+1}, …, λ_κ in an automaton is called an alternating chain if the highest rank appearing on λᵢ has the same parity as i and is higher than the highest rank on λ_{i−1}, for i = ι + 1, …, κ. A (ι, κ)-flower is an alternating chain λ_ι, …, λ_κ in which all loops have a common state (see Fig. 2).
Niwiński and Walukiewicz use flowers in their solution of the index problem for deterministic word automata.
[Niwiński, Walukiewicz [7]] A deterministic automaton on words is equivalent to a deterministic (ι, κ)-automaton iff it does not contain a (ι, κ)‾-flower. \qed
For a tree language L over Σ, let Paths(L) denote the language of generalised paths of L,
Paths(L) = {⟨σ₁, d₁⟩⟨σ₂, d₂⟩… ∈ (Σ × {0, 1})^ω : ∃t ∈ L ∀i t(d₁d₂…dᵢ₋₁) = σᵢ}.
A deterministic tree automaton A can be treated as a deterministic word automaton recognising Paths(L(A)): simply for q --σ--> q₁, q₂ take q --⟨σ,0⟩--> q₁ and q --⟨σ,1⟩--> q₂. Conversely, given a deterministic word automaton recognising Paths(L(A)), one may interpret it as a tree automaton, obtaining thus a deterministic automaton recognising L(A). Hence, applying Theorem 2 one gets the following result.
For a deterministic tree automaton A, the language Paths(L(A)) is recognised by a deterministic (ι, κ)-word automaton iff A does not contain a (ι, κ)‾-flower. \qed
In [5] it is shown how to compute the weak deterministic index of a given deterministic language. An automaton is called weak if the ranks may only decrease during the run, i. e., if q --σ--> q₁, q₂, then rank(q) ≥ rank(q₁) and rank(q) ≥ rank(q₂). The weak deterministic index problem is to compute a weak deterministic automaton with minimal index recognising a given language. The procedure in [5] is again based on the method of difficult patterns used in Theorem 2 and Proposition 2. We need the simplest pattern exceeding the capability of weak deterministic (ι, κ)-automata. Just like in the case of the deterministic index, it seems natural to look for a generic pattern capturing all the power of the index (ι, κ). Intuitively, we need to enforce the alternation of ranks provided by (ι, κ). Let a weak (ι, κ)-flower be a sequence of loops λ_ι, λ_{ι+1}, …, λ_κ such that λ_{i+1} is reachable from λᵢ, and λᵢ is accepting iff i is even (see Fig. 3).
[[5]] A deterministic automaton is equivalent to a weak deterministic (ι, κ)-automaton iff it does not contain a weak (ι, κ)‾-flower. \qed
For a deterministic language one may also want to calculate its nondeterministic index, i. e., its position in the hierarchy of nondeterministic automata. This may be lower than the deterministic index, due to the greater expressive power of nondeterministic automata. Consider for example the language of trees whose leftmost paths are in a regular word language M. It can be recognised by a nondeterministic (1, 2)-automaton, but its deterministic index is equal to the deterministic index of M, which can be arbitrarily high.
The problem turned out to be rather difficult, and has only recently been solved in [9]. Decidability of the general index problem for nondeterministic automata is one of the most important open questions in the field.
3 Topology
We start with a short recollection of elementary notions of descriptive set theory. For further information see [3].
Let 2^ω be the set of infinite binary sequences with the metric
d(u, v) = 2^{−min{i ∈ ω : uᵢ ≠ vᵢ}} for u ≠ v, and d(u, v) = 0 for u = v,
and let T_{{0,1}} be the set of infinite binary trees over {0, 1} with the metric
d(s, t) = 2^{−min{|v| : s(v) ≠ t(v)}} for s ≠ t, and d(s, t) = 0 for s = t.
Both 2^ω and T_{{0,1}}, with the topologies induced by the above metrics, are Polish spaces (complete metric spaces with countable dense subsets). In fact, both of them are homeomorphic to the Cantor discontinuum.
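On finite approximations the distance can be computed directly; a sketch assuming the standard convention d(u, v) = 2^{−n}, with n the first position where the sequences differ (the horizon of 64 positions is an arbitrary choice of ours):

```python
def word_distance(u, v):
    """Distance between two infinite binary sequences, given as
    functions from positions to letters: 2^(-n) where n is the first
    position at which they differ. Agreement everywhere cannot be
    detected from finite data, so we probe a fixed horizon and
    return 0.0 if no difference is found."""
    for n in range(64):
        if u(n) != v(n):
            return 2.0 ** (-n)
    return 0.0

u = lambda n: 0          # 000...
v = lambda n: n % 2      # 0101...
assert word_distance(u, v) == 0.5   # first difference at position 1
assert word_distance(u, u) == 0.0
```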
The class of Borel sets of a topological space X is the closure of the class of open sets of X under complementation and countable unions. Within this class one builds the so-called Borel hierarchy. The initial (finite) levels of the Borel hierarchy are defined as follows:

Σ⁰₁(X) – open subsets of X,

Π⁰ₖ(X) – complements of sets from Σ⁰ₖ(X),

Σ⁰ₖ₊₁(X) – countable unions of sets from Π⁰ₖ(X).

For example, Π⁰₁(X) are closed sets, Σ⁰₂(X) are Fσ sets, and Π⁰₂(X) are Gδ sets. By convention, Δ⁰ₖ(X) = Σ⁰ₖ(X) ∩ Π⁰ₖ(X).
Even more general classes of sets form the projective hierarchy. We will not go beyond its lowest level:

Σ¹₁(X) – analytic subsets of X, i. e., projections of Borel subsets of X × 2^ω with the product topology,

Π¹₁(X) – complements of sets from Σ¹₁(X).
Whenever the space X is determined by the context, we omit it in the notation above and write simply Σ⁰₁, Π⁰₁, and so on.
Let f : X → Y be a continuous map of topological spaces. One says that f is a reduction of A ⊆ X to B ⊆ Y if A = f⁻¹(B). Note that if B is in one of the classes of the above hierarchies, so is A. For any class C, a set B is C-hard if for any set A in C there exists a reduction of A to B. The topological hierarchy is strict for Polish spaces, so if a set is C-hard, it cannot be in any lower class. If a C-hard set B is also an element of C, then it is C-complete.
In 2002 Niwiński and Walukiewicz discovered a surprising dichotomy in the topological complexity of deterministic tree languages: a deterministic tree language has either a very low Borel rank or is not Borel at all (see Fig. 4). We say that an automaton admits a split if there are two loops q --σ,0--> q₁ --⋯--> q and q --σ,1--> q₂ --⋯--> q, branching at the same transition, such that the highest ranks occurring on them are of different parity and the higher one is odd.
[Niwiński, Walukiewicz [8]] For a deterministic automaton A, L(A) is on the level Π⁰₃ of the Borel hierarchy iff A does not admit a split; otherwise L(A) is Π¹₁-complete (hence non-Borel). \qed
An important tool used in the proof of the Gap Theorem is the technique of difficult patterns. In the topological setting the general recipe goes like this: for a given class identify a pattern that can be “unravelled” to a language complete for this class; if an automaton does not contain the pattern, then its language should be in the dual class. In the proof of the Gap Theorem, the split pattern is “unravelled” into the language of trees having only finitely many 1’s on each path. This language is Π¹₁-complete (via a reduction of the set of well-founded trees).
In [5] a similar characterisation was obtained for the remaining classes of the above hierarchy. Before we formulate these results, let us introduce one of the most important technical notions of this study. A state p is replicated by a loop q⁰ --σ₁,d₁--> q¹ --⋯--> qⁿ = q⁰ if there exists a path leading to p that starts with a transition branching off the loop, i. e., taking the direction opposite to the one the loop takes. We will say that a flower is replicated by a loop λ if it contains a state replicated by λ. The phenomenon of replication is the main difference between trees and words. We will use it constantly to construct hard languages that have no counterparts among word languages. Some of them occur in the proposition below.
[Murlak [5]] Let A be a deterministic automaton.

L(A) ∈ Σ⁰₁ iff A does not contain a weak (0, 1)-flower.

L(A) ∈ Π⁰₁ iff A does not contain a weak (1, 2)-flower.

L(A) ∈ Σ⁰₂ iff A does not contain a (1, 2)-flower nor a weak (1, 2)-flower replicated by an accepting loop.

L(A) ∈ Π⁰₂ iff A does not contain a (0, 1)-flower.

L(A) ∈ Σ⁰₃ iff A does not contain a (0, 1)-flower replicated by an accepting loop. \qed
4 The Main Result
The notion of continuous reduction defined in Sect. 3 yields a preordering on sets. Let X and Y be topological spaces, and let K ⊆ X, L ⊆ Y. We write K ≤_W L (to be read “K is Wadge reducible to L”) if there exists a continuous reduction of K to L, i. e., a continuous function f : X → Y such that K = f⁻¹(L). We say that K is Wadge equivalent to L, in symbols K ≡_W L, if K ≤_W L and L ≤_W K. Similarly we write K <_W L if K ≤_W L and L ≰_W K. The Wadge ordering is the ordering induced by ≤_W on the ≡_W-classes of subsets of Polish spaces. The Wadge ordering restricted to Borel sets is called the Wadge hierarchy.
In this study we only work with the spaces Σ^ω and T_Σ. Since we only consider finite alphabets, these spaces are homeomorphic with the Cantor discontinuum as long as |Σ| ≥ 2. In particular, all the languages we consider are Wadge equivalent to subsets of 2^ω. Note however that the homeomorphism need not preserve recognisability. In fact, no homeomorphism from T_Σ to Σ^ω does: the Borel hierarchy for regular tree languages is infinite, but for words it collapses on Δ⁰₃. In other words, there are regular tree languages (even weak, or deterministic) which are not Wadge equivalent to regular word languages. Conversely, each regular word language M is Wadge equivalent to the deterministic tree language consisting of trees which have a word from M on the leftmost branch. As a consequence, the height of the Wadge ordering of regular word languages gives us a lower bound for the case of deterministic tree languages, and this is essentially everything we can conclude from the word case.
The starting point of this study is the Wadge reducibility problem.
Problem: Wadge reducibility.
Input: deterministic tree automata A and B.
Question: does L(A) ≤_W L(B) hold?
An analogous problem for word automata can be solved fairly easily by constructing a tree automaton recognising Duplicator’s winning strategies (to be defined in the next section). This method however does not carry over to trees. One might still try to solve the Wadge reducibility problem directly by comparing carefully the structure of two given automata, but we have chosen a different approach. We will provide a family of canonical deterministic tree automata (C_α) such that

given α and β, it is decidable whether L(C_α) ≤_W L(C_β),

for each deterministic tree automaton A there exists exactly one α such that L(A) ≡_W L(C_α), and this α can be computed effectively for a given A.

The decidability of the Wadge reducibility problem follows easily from the existence of such a family: given two deterministic automata A and B, we compute α and β such that L(A) ≡_W L(C_α) and L(B) ≡_W L(C_β), and check whether L(C_α) ≤_W L(C_β).
More precisely, we prove the following theorem.

[Theorem] There exists a family of deterministic tree automata C_α, D_α, and E_α, indexed by ordinals presented in Cantor normal form, such that

for α < β, whenever the respective automata are defined, we have C_α, D_α, E_α < C_β, D_β, E_β, where A < B means L(A) <_W L(B), and D_α and E_α are incomparable,

for each deterministic tree automaton A there exists exactly one automaton A′ in the family such that L(A) ≡_W L(A′), and it is computable, i.e., there exists an algorithm computing for a given A a pair consisting of a symbol in {C, D, E} and an ordinal α that identifies A′.
The family satisfies the conditions postulated for the family of canonical automata: for ordinals presented as arithmetical expressions over ω in Cantor normal form the order is decidable, so we can take such expressions as the indexing set.
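For ordinals below ω^ω, the decision procedure mentioned above amounts to a lexicographic comparison of Cantor normal forms; a minimal sketch, restricted for simplicity to that range (the list-of-pairs representation is ours):

```python
def cnf_compare(a, b):
    """Compare two ordinals below omega^omega given in Cantor normal
    form as lists of (exponent, coefficient) pairs with strictly
    decreasing exponents, e.g. omega^2*3 + omega -> [(2, 3), (1, 1)].
    Comparison is lexicographic on the pairs; if one normal form is a
    prefix of the other, the longer one denotes the larger ordinal.
    Returns -1, 0 or 1."""
    for (e1, c1), (e2, c2) in zip(a, b):
        if (e1, c1) != (e2, c2):
            return -1 if (e1, c1) < (e2, c2) else 1
    return (len(a) > len(b)) - (len(a) < len(b))

# omega^2 > omega*5 + 3, and omega + 1 > omega
assert cnf_compare([(2, 1)], [(1, 5), (0, 3)]) == 1
assert cnf_compare([(1, 1), (0, 1)], [(1, 1)]) == 1
assert cnf_compare([(1, 2)], [(1, 2)]) == 0
```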
Observe that the pair computed for a given A can be seen as a name of the Wadge equivalence class of L(A). Hence, the set of such names together with the order defined in the statement of the theorem provides a complete effective description of the Wadge hierarchy restricted to deterministic tree languages. One thing that follows is that the height of the hierarchy is (ω^ω)^3 + 3.
The remaining part of the paper is in fact a single long proof. We start by reformulating the classical criterion of reducibility via Wadge games in terms of automata (Sect. 5). This will be the main tool of the whole argument. Then we define four ways of composing automata: sequential composition, replication, parallel composition, and alternative (Sect. 6). Using the first three operations we construct the canonical automata, all but the top three (Sect. 7). Next, to rehearse our proof method, we reformulate and prove Wagner’s results in terms of canonical automata (Sect. 8). Finally, after some preparatory remarks (Sect. 9), we prove the first part of Theorem 4, modulo the three missing canonical automata.
Next, we need to show that our family contains all deterministic tree automata up to Wadge equivalence of the recognised languages. Once again we turn to the methodology of patterns used in Sect. 2 and Sect. 3. We introduce a fundamental notion of admittance, which formalises what it means to contain an automaton as a pattern (Sect. 11). Then we generalise the replication operation in order to define the remaining three canonical automata, and rephrase the results on the Borel hierarchy and the Wagner hierarchy in terms of admittance of canonical automata (Sect. 12). Building on these results, we show that the family of canonical automata is closed under the composition operations (Sect. 13), and prove the Completeness Theorem asserting that (up to Wadge equivalence) each deterministic automaton may be obtained as an iterated composition of canonical automata (Sect. 14). As a consequence, each deterministic automaton is equivalent to a canonical one. From the proof of the Completeness Theorem we extract an algorithm calculating the equivalent canonical automata, which concludes the proof of Theorem 4.
5 Games and Automata
A classical criterion for reducibility is based on the notion of Wadge games. Let us introduce a tree version of Wadge games (see [10] for the word version). By the n-th level of a tree we understand the set of nodes {0, 1}ⁿ⁻¹. The 1st level consists of the root, the 2nd level consists of all the children of the root, etc. For any pair of tree languages K, L the game GW(K, L) is played by Spoiler and Duplicator. Each player builds a tree, t_S and t_D respectively. In every round, first Spoiler adds some levels to t_S and then Duplicator can either add some levels to t_D or skip a round (not forever). The result of the play is a pair of full binary trees. Duplicator wins the play if t_S ∈ K ⟺ t_D ∈ L. We say that Spoiler is in charge of K, and Duplicator is in charge of L.
Just like for the classical Wadge games, a winning strategy for Duplicator can be easily transformed into a continuous reduction, and vice versa.
Duplicator has a winning strategy in GW(K, L) iff K ≤_W L.
A strategy for Duplicator defines a reduction in an obvious way. Conversely, suppose there exists a reduction f. By continuity, there exists a sequence n₁, n₂, … (without loss of generality, strictly increasing) such that the k-th level of f(t) depends only on the first n_k levels of t. Then the strategy for Duplicator is the following: if the number of the round is n_k for some k, play the k-th level of f(t); otherwise skip. \qed
We would like to point out that Wadge games are much less interactive than classical games. The move made by one player has no influence on the possible moves of the other. Of course, if one wants to win, one has to react to the opponent’s actions, but the responses need not be immediate. As long as the player keeps putting some new letters, he may postpone the real reaction until he knows more about the opponent’s plans. Because of that, we will often speak about strategies for some language without considering the opponent and even without saying if the player in charge of the language is Spoiler or Duplicator.
Since we only want to work with deterministically recognisable languages, let us redefine the games in terms of automata. Let A, B be deterministic tree automata. The automata game G(A, B) starts with one token put in the initial state of each automaton. In every round players perform a finite number of the actions described below.

Fire a transition: for a token placed in a state q choose a transition q --σ--> q₁, q₂, take the old token away from q and put new tokens in q₁ and q₂.

Remove: remove a token placed in a state different from ⊥.
Spoiler plays on A and must perform one of these actions at least for all the tokens produced in the previous round. Duplicator plays on B and is allowed to postpone performing an action for a token, but not forever. Let us first consider plays in which the players never remove tokens. The paths visited by the tokens of each player define a run of the respective automaton. We say that Duplicator wins a play if both runs are accepting or both are rejecting. Now, removing a token from a state is interpreted as plugging an accepting subrun into the corresponding node of the constructed run. So, Duplicator wins if the runs obtained by plugging in an accepting subrun for every removed token are both accepting or both rejecting.
Observe that removing tokens in fact does not give any extra power to the players: instead of actually removing a token, a player may simply pick an accepting subrun and keep realising it level by level in the constructed run. The only reason for adding this feature to the game is that it simplifies the strategies. In a typical strategy, while some tokens have a significant role to play, most are just moved along a trivially accepting path. It is convenient to remove them right away and concentrate on the real actors of the play.
We will write A ≤ B if Duplicator has a winning strategy in G(A, B). Like for languages, define A ≡ B iff A ≤ B and B ≤ A. Finally, let A < B iff A ≤ B and B ≰ A.
For all deterministic tree automata A and B, A ≤ B iff L(A) ≤_W L(B).
First consider a modified Wadge game GW′(K, L), in which players are allowed to build their trees in an arbitrary way provided that the nodes played always form one connected tree, and in every round Spoiler must provide both children for all the nodes that were leaves in the previous round. It is very easy to see that Duplicator has a winning strategy in GW(K, L) iff he has a winning strategy in GW′(K, L).
Suppose that Duplicator has a winning strategy in G(A, B). We will show that Duplicator has a winning strategy in GW′(L(A), L(B)), and hence L(A) ≤_W L(B). What Duplicator should do is to simulate a play of G(A, B) in which an imaginary Spoiler keeps constructing the run of A on the tree constructed by the real Spoiler in GW′(L(A), L(B)), and the imaginary Duplicator replies according to his winning strategy, which exists by hypothesis. In GW′(L(A), L(B)) Duplicator should simply construct a tree such that B’s run on it is exactly the imaginary Duplicator’s tree from G(A, B).
Let us move to the converse implication. Now, Duplicator should simulate a play of the game GW′(L(A), L(B)) in which an imaginary Spoiler keeps constructing a tree such that A’s run on it is exactly the tree constructed by the real Spoiler in G(A, B), and the imaginary Duplicator replies according to his winning strategy. In G(A, B) Duplicator should keep constructing the run of B on the tree constructed by the imaginary Duplicator in the simulated play. \qed
As a corollary we have that all automata recognising a given language have the same “game power”.
For deterministic tree automata A and B, if L(A) = L(B), then A ≡ B. \qed
Classically, in automata theory we are interested in the language recognised by an automaton. One language may be recognised by many automata and we usually pick the automaton that fits best our purposes. Here, the approach is entirely different. We are not interested in the language itself, but in its Wadge equivalence class. This, as it turns out, is reflected in the general structure of the automaton. Hence, our main point of interest will be that structure.
We will frequently modify an automaton in a way that does change the recognised language, but not its class. One typical thing we need to do with an automaton is to treat it as an automaton over an extended alphabet in such a way that the newly recognised language is Wadge equivalent to the original one. This has to be done with some care, since the automaton is required to have transitions by each letter from every state. Suppose we want to extend the input alphabet Σ by a fresh letter c. Let us construct an automaton A^c. First, if A has the all-rejecting state ⊥, add a transition ⊥ --c--> ⊥, ⊥. Then add an all-accepting state ⊤ with transitions ⊤ --σ--> ⊤, ⊤ for each σ ∈ Σ ∪ {c} (if A already has an all-accepting state ⊤, just add a transition ⊤ --c--> ⊤, ⊤). Finally, for each remaining state q, add a transition q --c--> ⊤, ⊤.
For every deterministic tree automaton A over Σ and a letter c ∉ Σ, L(A^c) ≡_W L(A).
It is obvious that L(A) ≤_W L(A^c): since A^c contains all the transitions of A, a trivial winning strategy for Duplicator in G(A, A^c) is to copy Spoiler’s actions. Let us see that the new transitions do not give any real power. Consider G(A^c, A). While Spoiler uses old transitions, Duplicator may again copy his actions. The only difficulty lies in responding to a move that uses a new transition. Suppose Spoiler does use a new transition. If Spoiler fires a transition q --c--> ⊤, ⊤ for a token in a state q ≠ ⊥, Duplicator simply removes the corresponding token in A, and ignores the further behaviour of Spoiler’s token and all its descendants. The only other possibility is that Spoiler fires ⊥ --c--> ⊥, ⊥. Then for the corresponding token Duplicator should fire ⊥ --σ--> ⊥, ⊥ for some σ ∈ Σ. The described strategy is clearly winning for Duplicator. \qed
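The alphabet-extension construction above can be sketched on a transition-table representation of a deterministic tree automaton (the dictionary encoding and the state names 'TOP' and 'BOT' are ours):

```python
def extend_alphabet(delta, alphabet, c):
    """Extend a deterministic tree automaton, given as a transition
    table delta[(state, letter)] = (left, right), with a fresh letter c.
    'BOT' is the all-rejecting state, 'TOP' the all-accepting one.
    Every ordinary state moves by c to a pair of all-accepting states,
    so the new letter never helps to reject."""
    states = {q for (q, _) in delta}
    new = dict(delta)
    for a in list(alphabet) + [c]:
        new[('TOP', a)] = ('TOP', 'TOP')   # all-accepting sink
    if 'BOT' in states:
        new[('BOT', c)] = ('BOT', 'BOT')   # all-rejecting stays rejecting
    for q in states - {'BOT', 'TOP'}:
        new[(q, c)] = ('TOP', 'TOP')
    return new

delta = {('q0', 'a'): ('q0', 'BOT'), ('BOT', 'a'): ('BOT', 'BOT')}
ext = extend_alphabet(delta, ['a'], 'c')
assert ext[('q0', 'c')] == ('TOP', 'TOP')
assert ext[('BOT', 'c')] == ('BOT', 'BOT')
assert ext[('TOP', 'c')] == ('TOP', 'TOP')
```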
An automaton for us is not so much a recognising device as a device to carry out strategies. Therefore even two automata with substantially different structure may be equivalent, as long as they enable us to use the same set of strategies. A typical thing we will be doing is to replace a part of an automaton with a different part that gives the same strategic possibilities. Recall that by A_q we denote the automaton A with the initial state changed to q. For a state q of A and an automaton B, we may substitute B for q in A: take a copy of A and a copy of B, and replace each transition of A leading to q with the corresponding transition leading to the initial state of the copy of B. Note that substituting A_q for q yields an automaton equivalent to A.
[Substitution Lemma] Let A, B₁, B₂ be deterministic automata with pairwise disjoint sets of states, and let q be a state of A. If B₁ ≡ B₂, then the automaton obtained by substituting B₁ for q in A is equivalent to the one obtained by substituting B₂ for q.
Consider the game between the automaton obtained by substituting B₁ for q in A and the one obtained by substituting B₂, and the following strategy for Duplicator. Within the part corresponding to A, Duplicator copies Spoiler’s actions. If some Spoiler’s token enters the automaton B₁, Duplicator should put its counterpart in the initial state of B₂, and then this token and its descendants should use Duplicator’s winning strategy from G(B₁, B₂) against Spoiler’s token and its descendants.
Let us see that this strategy is winning. Suppose first that Spoiler’s run is rejecting. Then there is a rejecting path, say π. If on π the computation stays in A, the corresponding path in Duplicator’s run is also rejecting. Suppose the computation on π enters B₁, and let v be the first node of π in which this happens. The subrun of Spoiler’s run rooted in v is a rejecting run of B₁. Since Duplicator was applying a winning strategy from G(B₁, B₂), the subrun of Duplicator’s run rooted in v is also rejecting. In either case, Duplicator’s run is rejecting.
Now assume that Spoiler’s run is accepting, and let us see that so is Duplicator’s. All paths staying in A are accepting, because they are identical to the paths in Spoiler’s run. For every node v in which the computation enters B₂, the subrun rooted in v is accepting thanks to the winning strategy from G(B₁, B₂) used to construct it. \qed
6 Operations
In this section we introduce four operations that will be used to construct the canonical automata representing the Wadge degrees of deterministic tree languages.
The first operation yields an automaton that lets a player choose between A and B. For two deterministic tree automata A and B over Σ, the alternative A ∨ B (see Fig. 5) is an automaton with the input alphabet Σ ∪ {1, 2}, consisting of disjoint copies of A and B over the extended alphabet, and a fresh initial state with a transition by 1 leading to the initial state of the copy of A and a transition by 2 leading to the initial state of the copy of B
(for the remaining letters, put transitions to the all-rejecting state). By Lemma 5, ∨ is a congruence with respect to ≡. Furthermore, ∨ is associative and commutative up to ≡. Multiple alternatives are performed from left to right:
A₁ ∨ A₂ ∨ A₃ = (A₁ ∨ A₂) ∨ A₃.
The parallel composition A ∧ B is defined analogously, only now we extend the alphabet only by 1 and add a transition by 1 from the fresh initial state whose left child is the initial state of the copy of A and whose right child is the initial state of the copy of B
(for the remaining letters, put transitions to the all-rejecting state). Note that, while in A ∨ B the computation must choose between A and B, here it continues in both. Again, ∧ is a congruence with respect to ≡, and ∧ is associative and commutative up to ≡. Multiple parallel compositions are performed from left to right, and the parallel composition of n copies of A is abbreviated accordingly.
To obtain the replication of A, extend the alphabet again by a fresh letter c, add a fresh initial state q₀ with rank(q₀) = 0, and add a transition by c from q₀ whose left child is q₀ itself and whose right child is the initial state of the copy of A, so that each firing of this transition spawns a fresh copy of A.
Like for the two previous operations, replication is a congruence with respect to ≡.
The last operation we define produces out of A and B an automaton that behaves as A, but at at most one point (on the leftmost path) may switch to B. A state q is leftmost if no path connecting the initial state with q uses a right transition. In other words, leftmost states are those which can only occur on the leftmost path of a run. Note that an automaton may have no leftmost states. Furthermore, a leftmost state cannot be reachable from a non-leftmost state. In particular, if an automaton has any leftmost states at all, the initial state has to be leftmost. For deterministic tree automata A and B over Σ, the sequential composition A → B (see Fig. 6)
is an automaton with the input alphabet Σ ∪ {c}, where c is a fresh letter. It is constructed by taking copies of A and B over the extended alphabet and replacing the transition q --c--> ⊤, ⊤ with a transition by c from q leading to the initial state of the copy of B (with an all-accepting right child), for each leftmost state q of A. Like for ∨ and ∧, we perform multiple sequential compositions from left to right. For the sequential composition of n copies of A we often use a power-like abbreviation. Observe that if B has a leftmost state, then a state in A → B is leftmost iff it is a leftmost state of A or a leftmost state of B. It follows that the ≡-class of a multiple sequential composition does not depend on the way we put parentheses. An analogous operation for word automata defines an operation on Wadge classes, but for tree automata this is no longer true. We will also see later that → is not commutative even up to ≡.
The operations above are assigned fixed priorities; nevertheless, we usually use parentheses to make the expressions easier to read.
Finally, let us define the basic building blocks to which we will apply the operations defined above. The canonical (ι, κ)-flower (see Fig. 7)
is an automaton with the input alphabet {a_ι, …, a_κ}, the states q_ι, …, q_κ, where the initial state is q_ι and rank(q_i) = i, and transitions
q_i --a_j--> q_j, ⊤
for i, j ∈ {ι, …, κ}. A flower is nontrivial if ι < κ.
In the definitions above we often use an all-accepting state ⊤. This is in fact a way of saying that a transition is of no importance when it comes to possible strategies: a token moved to ⊤ has no use later in the play. Therefore, we may assume that players remove their tokens instead of putting them into ⊤. In particular, when a transition is of the form q --σ--> q₁, ⊤, it is convenient to treat it as a “left only” transition in which no new token is created: the old token is simply moved from q to q₁. Consequently, when analysing games on automata, we will ignore the transitions to ⊤.
7 Canonical Automata
For convenience, in this section we put together the definitions of all the canonical automata (save for three, which will be defined much later) together with some very simple properties. More explanations and intuitions come along with the proofs in the next three sections.
For each α in the range specified below we define the canonical automaton C_α. The automata D_α and E_α will only be defined for certain α. All the defined automata have at least one leftmost state, so the operation of sequential composition is always nontrivial.
Let , , and . For define
For we only define . Let and for . For every from the considered range we have a unique presentation , with , and . For define
and for set
Now consider . For let , and . For every from the considered range we have a unique presentation with and . Let , with and . For and let
for