The Wadge Hierarchy of Deterministic Tree Languages


Filip Murlak Institute of Informatics, University of Warsaw, ul. Banacha 2, 02–097 Warszawa, Poland fmurlak@mimuw.edu.pl
Abstract.

We provide a complete description of the Wadge hierarchy for deterministically recognisable sets of infinite trees. In particular we give an elementary procedure to decide if one deterministic tree language is continuously reducible to another. This extends Wagner’s results on the hierarchy of -regular languages of words to the case of trees.

Key words and phrases:
Wadge hierarchy, deterministic automata, infinite trees, decidability
Supported by KBN Grant 4 T11C 042 25.

\section{Introduction}

Two measures of complexity of recognisable languages of infinite words or trees have been considered in the literature: the index hierarchy, which reflects the combinatorial complexity of the recognising automaton and is closely related to the $\mu$-calculus, and the Wadge hierarchy, which is the refinement of the Borel/projective hierarchy that gives the deepest insight into the topological complexity of languages. Klaus Wagner was the first to discover remarkable relations between the two hierarchies for finite-state recognisable ($\omega$-regular) sets of infinite words [wagner0]. Subsequently, decision procedures determining an $\omega$-regular language’s position in both hierarchies were given [kupferman, kwiatek, wagner].

For tree automata the index problem is only solved when the input is a deterministic automaton [hie, urban]. As for the topological complexity of recognisable tree languages, it goes much higher than that of $\omega$-regular languages, which are all $\Delta^0_3$. Indeed, co-Büchi automata over trees may recognise $\Pi^1_1$-complete languages [gap], and Skurczyński [skurcz] proved that there are even weakly recognisable tree languages in every finite level of the Borel hierarchy. This may suggest that in the tree case the topological and combinatorial complexities diverge. On the other hand, the investigations of the Borel/projective hierarchy of deterministic languages [split, gap] reveal some interesting connections with the index hierarchy.

Wagner’s results [wagner0, wagner], giving rise to what is now called the Wagner hierarchy (see [perrin]), inspire the search for a complete picture of the two hierarchies and the relations between them for recognisable tree languages. In this paper we concentrate on the Wadge hierarchy of deterministic tree languages: we give a full description of the Wadge-equivalence classes forming the hierarchy, together with a procedure calculating the equivalence class of a given deterministic language. In particular, we show that the hierarchy has the height $(\omega^\omega)^3 + 3$, which should be compared with $\omega^\omega$ for regular $\omega$-languages [wagner], $(\omega^\omega)^\omega$ for deterministic context-free $\omega$-languages [contextfree], $(\omega_1^{\mathrm{CK}})^\omega$ for $\omega$-languages recognised by deterministic Turing machines [selivanov], or an unknown ordinal for $\omega$-languages recognised by nondeterministic Turing machines, and the same unknown ordinal for nondeterministic context-free languages [finkel].

The key notion of our argument is an adaptation of the Wadge game to tree languages, redefined entirely in terms of automata. Using this tool we construct a collection of canonical automata representing the Wadge degrees of all deterministic tree languages. Then we provide a procedure calculating the canonical form of a given deterministic automaton, which runs within the time of finding the productive states of the automaton (the exact complexity of this problem is unknown, but not worse than exponential).


\section{Automata}

We use the symbol $\omega$ to denote the set of natural numbers $\{0, 1, 2, \ldots\}$. For an alphabet $\Sigma$, $\Sigma^*$ is the set of finite words over $\Sigma$ and $\Sigma^\omega$ is the set of infinite words over $\Sigma$. The concatenation of words $u$ and $v$ will be denoted by $uv$, and the empty word by $\varepsilon$. The concatenation is naturally generalised to infinite sequences of finite words $v_1 v_2 v_3 \ldots$. The concatenation of sets $A \subseteq \Sigma^*$, $B \subseteq \Sigma^* \cup \Sigma^\omega$ is $AB = \{uv : u \in A,\, v \in B\}$.

A tree is any subset of $\omega^*$ closed under the prefix relation. An element of a tree is usually called a node. A leaf is any node of a tree which is not a (strict) prefix of another node. A $\Sigma$-labelled tree (or a tree over $\Sigma$) is a function $t : \mathrm{dom}\,t \to \Sigma$ such that $\mathrm{dom}\,t$ is a tree. For $v \in \mathrm{dom}\,t$ we define $t.v$ as the subtree of $t$ rooted in $v$, i.\,e., $\mathrm{dom}\,(t.v) = \{u : vu \in \mathrm{dom}\,t\}$, $t.v(u) = t(vu)$.

A full $n$-ary $\Sigma$-labelled tree is a function $t : \{0, 1, \ldots, n-1\}^* \to \Sigma$. The symbol $T_\Sigma$ will denote the set of full binary trees over $\Sigma$. From now on, if not stated otherwise, a “tree” will mean a full binary tree over some alphabet.

Out of a variety of acceptance conditions for automata on infinite structures, we choose the parity condition. A nondeterministic parity automaton on words can be presented as a tuple $A = \langle \Sigma, Q, \delta, q_0, \mathrm{rank} \rangle$, where $\Sigma$ is a finite input alphabet, $Q$ is a finite set of states, $\delta \subseteq Q \times \Sigma \times Q$ is the transition relation, and $q_0 \in Q$ is the initial state. The meaning of the function $\mathrm{rank} : Q \to \omega$ will be explained later. Instead of $(q, \sigma, q') \in \delta$ one usually writes $q \xrightarrow{\sigma} q'$. A run of an automaton $A$ on a word $w = w_1 w_2 \ldots \in \Sigma^\omega$ is a word $\rho = \rho_0 \rho_1 \ldots \in Q^\omega$ such that $\rho_0 = q_0$ and $\rho_i \xrightarrow{w_{i+1}} \rho_{i+1}$ for all $i$. A run $\rho$ is accepting if the highest rank repeating infinitely often in $\mathrm{rank}(\rho_0)\,\mathrm{rank}(\rho_1)\ldots$ is even; otherwise $\rho$ is rejecting. A word is accepted by $A$ if there exists an accepting run on it. The language recognised by $A$, denoted $L(A)$, is the set of words accepted by $A$. An automaton is deterministic if its transition relation is a total function $Q \times \Sigma \to Q$. Note that a deterministic automaton has a unique run (accepting or not) on every word. We call a language deterministic if it is recognised by a deterministic automaton.
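To make the parity condition concrete, here is a minimal sketch (not taken from the paper; all names are ours) that checks acceptance of an ultimately periodic run, given as a finite prefix followed by a loop repeated forever: such a run is accepting iff the highest rank occurring on the loop, i.e., the highest rank repeating infinitely often, is even.

```python
# Sketch: parity acceptance for an ultimately periodic run rho = prefix loop^omega.
# "rank" is a hypothetical rank assignment; states are arbitrary hashable labels.

def parity_accepts(prefix, loop, rank):
    """Return True iff the highest rank occurring infinitely often is even."""
    assert loop, "the loop must be non-empty"
    # Only states on the loop repeat infinitely often.
    highest_recurring = max(rank[q] for q in loop)
    return highest_recurring % 2 == 0

rank = {"p": 1, "q": 2, "r": 3}
print(parity_accepts(["p"], ["q", "p"], rank))  # True: highest recurring rank is 2
print(parity_accepts([], ["p", "r"], rank))     # False: highest recurring rank is 3
```

The prefix is irrelevant for acceptance, which mirrors the fact that the parity condition only constrains the ranks seen infinitely often.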

A nondeterministic automaton on trees is a tuple $A = \langle \Sigma, Q, \delta, q_0, \mathrm{rank} \rangle$, the only difference being that $\delta \subseteq Q \times \Sigma \times Q \times Q$. Like before, $q \xrightarrow{\sigma} q_1, q_2$ means $(q, \sigma, q_1, q_2) \in \delta$. We write $q \xrightarrow{\sigma, 0} q_1$ if there exists a state $q_2$ such that $q \xrightarrow{\sigma} q_1, q_2$. Similarly for $q \xrightarrow{\sigma, 1} q_2$. A run of $A$ on a tree $t \in T_\Sigma$ is a tree $\rho \in T_Q$ such that $\rho(\varepsilon) = q_0$ and, for every node $v$, if $\rho(v) = q$ and $t(v) = \sigma$, then $q \xrightarrow{\sigma} \rho(v0), \rho(v1)$. A path of the run is accepting if the highest rank repeating infinitely often on it is even; otherwise it is rejecting. A run is called accepting if all its paths are accepting. If at least one of them is rejecting, so is the whole run. An automaton is called deterministic if its transition relation is a total function $Q \times \Sigma \to Q \times Q$.

By $A_q$ we denote the automaton $A$ with the initial state set to $q$. A state $q$ is all-accepting if $A_q$ accepts all trees, and all-rejecting if $A_q$ rejects all trees. A state (a transition) is called productive if it is used in some accepting run. Observe that being productive is more than just not being all-rejecting. A state $q$ is productive if and only if it is not all-rejecting and there is a path $q_0 \xrightarrow{\sigma_1, d_1} q_1 \xrightarrow{\sigma_2, d_2} \cdots \xrightarrow{\sigma_n, d_n} q_n = q$ from the initial state such that for each $i$ the state reached by taking the opposite direction $\bar{d}_i$ in $q_{i-1}$ is not all-rejecting.

Without loss of generality we may assume that all states in $A$ are productive save for one all-rejecting state $\bot$, and that all transitions are either productive or of the form $q \xrightarrow{\sigma} \bot, \bot$. The reader should keep in mind that this assumption has an influence on the complexity of our algorithms. Transforming a given automaton into such a form of course requires calculating the productive states, which is equivalent to deciding a language’s emptiness. The latter problem is known to be in $\mathrm{NP} \cap \mathrm{co}\text{-}\mathrm{NP}$ and has no known polynomial-time solution. Therefore, we can only claim that our algorithms are polynomial for automata that underwent the above preprocessing. We will try to mention this whenever it is particularly important.

The Mostowski–Rabin index of an automaton is the pair
$$(\min \mathrm{rank}(Q),\ \max \mathrm{rank}(Q))\,.$$

An automaton with index $(\iota, \kappa)$ is often called a $(\iota, \kappa)$-automaton. Scaling down the function $\mathrm{rank}$ if necessary, one may assume that $\min \mathrm{rank}(Q)$ is either 0 or 1. Thus, the indices are elements of $\{0, 1\} \times \omega$. For an index $(\iota, \kappa)$ we shall denote by $\overline{(\iota, \kappa)}$ the dual index, i.\,e., $\overline{(0, \kappa)} = (1, \kappa + 1)$, $\overline{(1, \kappa)} = (0, \kappa - 1)$. Let us define an ordering of indices with the following formula: $(\iota, \kappa) < (\iota', \kappa')$ iff $\kappa - \iota < \kappa' - \iota'$.
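As a small illustration (helper names are ours, not the paper's), duality and the ordering of indices can be computed directly from the definitions above:

```python
# Sketch: indices are pairs (iota, kappa) with iota in {0, 1}. The dual flips
# the parity of every rank; one index is below another iff it uses fewer
# ranks, i.e. iff kappa - iota is smaller.

def dual(index):
    iota, kappa = index
    return (1, kappa + 1) if iota == 0 else (0, kappa - 1)

def index_less(i1, i2):
    (iota1, kappa1), (iota2, kappa2) = i1, i2
    return kappa1 - iota1 < kappa2 - iota2

print(dual((0, 1)))               # (1, 2)
print(index_less((1, 2), (0, 2))) # True: span of 1 rank vs 2 ranks
print(index_less((0, 1), (1, 2))) # False: dual indices are incomparable
```

Note that an index and its dual always have the same rank span, so neither is below the other, matching the remark that dual indices are not comparable.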

In other words, one index is smaller than another if and only if it uses fewer ranks. This means that dual indices are not comparable. The Mostowski–Rabin index hierarchy for a certain class of automata consists of ascending sets (levels) of languages recognised by $(\iota, \kappa)$-automata (see Fig. 1).

Figure 1. The Mostowski–Rabin index hierarchy.

The fundamental question about the hierarchy is its strictness, i.\,e., the existence of languages recognised by a $(\iota, \kappa)$-automaton, but not by a $\overline{(\iota, \kappa)}$-automaton. The strictness of the hierarchy for deterministic automata follows easily from the strictness of the hierarchy for deterministic word automata [wagner]: if a word language needs at least a given index, so does the language of trees that have a word from it on the leftmost branch. The index hierarchy for nondeterministic automata is also strict [klony]. In fact, the languages showing the strictness may be chosen deterministic: one example is the family of languages of trees over the alphabet $\{\iota, \iota+1, \ldots, \kappa\}$ satisfying the parity condition on each path.

The second important question one may ask about the index hierarchy is how to determine the exact position of a given language. This is known as the index problem.

Given a deterministic language, one may ask about its deterministic index, i. e., the exact position in the index hierarchy of deterministic automata (deterministic index hierarchy). This question can be answered effectively. Here we follow the method introduced by Niwiński and Walukiewicz [kwiatek].

A path in an automaton is a sequence of states and transitions:
$$p_0 \xrightarrow{\sigma_1, d_1} p_1 \xrightarrow{\sigma_2, d_2} \cdots \xrightarrow{\sigma_n, d_n} p_n\,.$$

A loop is a path starting and ending in the same state, $p_0 = p_n$. A loop is called accepting if the highest rank on it is even. Otherwise it is rejecting. A $k$-loop is a loop with the highest rank on it equal to $k$. A sequence of loops $\lambda_\iota, \lambda_{\iota+1}, \ldots, \lambda_\kappa$ in an automaton is called an alternating chain if the highest rank appearing on $\lambda_i$ has the same parity as $i$ and it is higher than the highest rank on $\lambda_{i-1}$ for $i = \iota+1, \ldots, \kappa$. A $(\iota, \kappa)$-flower is an alternating chain $\lambda_\iota, \ldots, \lambda_\kappa$ such that all loops have a common state (see Fig. 2). (This is a slight modification of the original definition from [kwiatek].)

Figure 2. A $(\iota, \kappa)$-flower.

Niwiński and Walukiewicz use flowers in their solution of the index problem for deterministic word automata.

Theorem 0.1 (Niwiński, Walukiewicz [kwiatek]).

A deterministic automaton on words is equivalent to a deterministic $(\iota, \kappa)$-automaton iff it does not contain a $\overline{(\iota, \kappa)}$-flower.

For a tree language $L$ over $\Sigma$, let $\mathrm{Paths}(L)$ denote the language of generalised paths of $L$,
$$\mathrm{Paths}(L) = \{ (\sigma_1, d_1)(\sigma_2, d_2)\ldots \in (\Sigma \times \{0,1\})^\omega \,:\, \exists_{t \in L}\, \forall_i\ t(d_1 d_2 \ldots d_{i-1}) = \sigma_i \}\,.$$

A deterministic tree automaton $A$ can be treated as a deterministic word automaton recognising $\mathrm{Paths}(L(A))$: simply for $q \xrightarrow{\sigma} q_1, q_2$ take $q \xrightarrow{(\sigma, 0)} q_1$ and $q \xrightarrow{(\sigma, 1)} q_2$. Conversely, a deterministic word automaton recognising $\mathrm{Paths}(L(A))$ may be interpreted as a tree automaton, obtaining thus a deterministic automaton recognising $L(A)$. Hence, applying Theorem 0.1 one gets the following result.

Proposition 0.2.

For a deterministic tree automaton $A$, the language $\mathrm{Paths}(L(A))$ is recognised by a deterministic $(\iota, \kappa)$-automaton iff $A$ does not contain a $\overline{(\iota, \kappa)}$-flower.

In [split] it is shown how to compute the weak deterministic index of a given deterministic language. An automaton is called weak if the ranks may only decrease during the run, i.\,e., if $q \xrightarrow{\sigma} q_1, q_2$, then $\mathrm{rank}(q) \geq \mathrm{rank}(q_1)$ and $\mathrm{rank}(q) \geq \mathrm{rank}(q_2)$. The weak deterministic index problem is to compute a weak deterministic automaton with a minimal index recognising a given language. The procedure in [split] is again based on the method of difficult patterns used in Theorem 0.1 and Proposition 0.2. We need the simplest pattern exceeding the capability of weak deterministic $(\iota, \kappa)$-automata. Just like in the case of the deterministic index, it seems natural to look for a generic pattern capturing all the power of the dual index. Intuitively, we need to enforce the alternation of ranks provided by $\overline{(\iota, \kappa)}$. Let a weak $(\iota, \kappa)$-flower be a sequence of loops $\lambda_\iota, \lambda_{\iota+1}, \ldots, \lambda_\kappa$ such that $\lambda_{i+1}$ is reachable from $\lambda_i$, and $\lambda_i$ is accepting iff $i$ is even (see Fig. 3).

Figure 3. A weak $(\iota, \kappa)$-flower.
Proposition 0.3 ([split]).

A deterministic automaton $A$ is equivalent to a weak deterministic $(\iota, \kappa)$-automaton iff it does not contain a weak $\overline{(\iota, \kappa)}$-flower.
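One way to look for weak flowers can be sketched as follows, under simplifying assumptions that are ours rather than the paper's: suppose the loops of the automaton have already been extracted together with their accepting/rejecting statuses, and the reachability relation between distinct loops is acyclic (loops reaching each other would be merged). Then the longest chain of loops with alternating statuses bounds the weak flowers the automaton contains: a chain of $k$ alternating loops witnesses a weak flower with $k$ petals.

```python
# Sketch: longest alternating chain of loops in a DAG of loops.
# loops: dict name -> True (accepting) / False (rejecting)
# reach: set of pairs (a, b) meaning loop b is reachable from loop a
# (assumed acyclic between distinct loops).
from functools import lru_cache

def longest_alternation(loops, reach):
    @lru_cache(maxsize=None)
    def chain_from(a):
        best = 1  # the chain consisting of loop a alone
        for b in loops:
            if (a, b) in reach and loops[a] != loops[b]:
                best = max(best, 1 + chain_from(b))
        return best
    return max(chain_from(a) for a in loops)

loops = {"acc": True, "rej": False, "acc2": True}
reach = {("acc", "rej"), ("rej", "acc2")}
print(longest_alternation(loops, reach))  # 3: acc -> rej -> acc2 alternates
```

This is only an illustration of the combinatorial shape of the pattern; the actual procedure in [split] works directly on the automaton.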

For a deterministic language one may also want to calculate its nondeterministic index, i.\,e., the position in the hierarchy of nondeterministic automata. This may be lower than the deterministic index, due to the greater expressive power of nondeterministic automata. Consider for example the language $L_M$ of trees whose leftmost paths are in a regular word language $M$. It can be recognised by a nondeterministic $(1, 2)$-automaton, but its deterministic index is equal to the deterministic index of $M$, which can be arbitrarily high.

The problem turned out to be rather difficult and has only recently been solved in [hie]. Decidability of the general index problem for nondeterministic automata is one of the most important open questions in the field.


\section{Topology}

We start with a short recollection of elementary notions of descriptive set theory. For further information see [kechris].

Let $\{0,1\}^\omega$ be the set of infinite binary sequences with a metric given by the formula
$$d(u, v) = \begin{cases} 2^{-\min\{i \in \omega \,:\, u_i \neq v_i\}} & \text{if } u \neq v\,,\\ 0 & \text{if } u = v\,,\end{cases}$$
and let $T_{\{0,1\}}$ be the set of infinite binary trees over $\{0,1\}$ with a metric
$$d(s, t) = \begin{cases} 2^{-\min\{|v| \,:\, s(v) \neq t(v)\}} & \text{if } s \neq t\,,\\ 0 & \text{if } s = t\,.\end{cases}$$

Both $\{0,1\}^\omega$ and $T_{\{0,1\}}$, with the topologies induced by the above metrics, are Polish spaces (complete metric spaces with countable dense subsets). In fact, both of them are homeomorphic to the Cantor discontinuum.

The class of Borel sets of a topological space $X$ is the closure of the class of open sets of $X$ under complementation and countable unions. Within this class one builds the so-called Borel hierarchy. The initial (finite) levels of the Borel hierarchy are defined as follows:

  1. $\Sigma^0_1(X)$ – open subsets of $X$,

  2. $\Pi^0_k(X)$ – complements of sets from $\Sigma^0_k(X)$,

  3. $\Sigma^0_{k+1}(X)$ – countable unions of sets from $\Pi^0_k(X)$.

For example, $\Pi^0_1$ are closed sets, $\Sigma^0_2$ are $F_\sigma$ sets, and $\Pi^0_2$ are $G_\delta$ sets. By convention, $\Sigma^0_0(X) = \{\emptyset\}$ and $\Pi^0_0(X) = \{X\}$.

Even more general classes of sets form the projective hierarchy. We will not go beyond its lowest level:

  1. $\Sigma^1_1(X)$ – analytic subsets of $X$, i.\,e., projections of Borel subsets of $X \times \{0,1\}^\omega$ with the product topology,

  2. $\Pi^1_1(X)$ – complements of sets from $\Sigma^1_1(X)$.

Whenever the space $X$ is determined by the context, we omit it in the notation above and write simply $\Sigma^0_1$, $\Pi^0_1$, and so on.

Let $\varphi : X \to Y$ be a continuous map of topological spaces. One says that $\varphi$ is a reduction of $A \subseteq X$ to $B \subseteq Y$ if $A = \varphi^{-1}(B)$. Note that if $B$ is in a certain class of the above hierarchies, so is $A$. For any class $\mathcal{C}$, a set $B$ is $\mathcal{C}$-hard if for any set $A \in \mathcal{C}$ there exists a reduction of $A$ to $B$. The topological hierarchy is strict for Polish spaces, so if a set is $\mathcal{C}$-hard, it cannot be in any lower class. If a $\mathcal{C}$-hard set $B$ is also an element of $\mathcal{C}$, then it is $\mathcal{C}$-complete.

In 2002 Niwiński and Walukiewicz discovered a surprising dichotomy in the topological complexity of deterministic tree languages: a deterministic tree language has either a very low Borel rank or it is not Borel at all (see Fig. 4). We say that an automaton admits a split if there are two loops $q \xrightarrow{\sigma, 0} q_1 \to \cdots \to q$ and $q \xrightarrow{\sigma, 1} q_2 \to \cdots \to q$ such that the highest ranks occurring on them are of different parity and the higher one is odd.

Theorem 0.4 (Niwiński, Walukiewicz [gap]).

For a deterministic automaton $A$, $L(A)$ is on the level $\Pi^0_3$ of the Borel hierarchy iff $A$ does not admit a split; otherwise $L(A)$ is $\Pi^1_1$-complete (hence non-Borel).

Figure 4. The Borel hierarchy for deterministic tree languages.

An important tool used in the proof of the Gap Theorem is the technique of difficult patterns. In the topological setting the general recipe goes like this: for a given class identify a pattern that can be “unravelled” to a language complete for this class; if an automaton does not contain the pattern, then its language should be in the dual class. In the proof of the Gap Theorem, the split pattern is “unravelled” into the language of trees having only finitely many 1’s on each path. This language is $\Pi^1_1$-complete (via a reduction of the set of well-founded trees).

In [split] a similar characterisation was obtained for the remaining classes of the above hierarchy. Before we formulate these results, let us introduce one of the most important technical notions of this study. A state $q$ is replicated by a loop $\lambda$ if there exists a path from $\lambda$ to $q$ starting with a transition that takes the direction opposite to one of the transitions of $\lambda$. We will say that a flower is replicated by a loop $\lambda$ if it contains a state replicated by $\lambda$. The phenomenon of replication is the main difference between trees and words. We will use it constantly to construct hard languages that have no counterparts among word languages. Some of them occur in the theorem below.

Theorem 0.5 (Murlak [split]).

Let $A$ be a deterministic automaton.

  1. $L(A) \in \Sigma^0_1$ iff $A$ does not contain a weak $(0,1)$-flower.

  2. $L(A) \in \Pi^0_1$ iff $A$ does not contain a weak $(1,2)$-flower.

  3. $L(A) \in \Sigma^0_2$ iff $A$ does not contain a $(1,2)$-flower nor a weak $(1,2)$-flower replicated by an accepting loop.

  4. $L(A) \in \Pi^0_2$ iff $A$ does not contain a $(0,1)$-flower.

  5. $L(A) \in \Sigma^0_3$ iff $A$ does not contain a $(0,1)$-flower replicated by an accepting loop.


\section{The Main Result}

The notion of continuous reduction defined in the previous section yields a preordering on sets. Let $X$ and $Y$ be topological spaces, and let $L \subseteq X$, $M \subseteq Y$. We write $L \leq_W M$ (to be read “$L$ is Wadge reducible to $M$”) if there exists a continuous reduction of $L$ to $M$, i.\,e., a continuous function $\varphi : X \to Y$ such that $L = \varphi^{-1}(M)$. We say that $L$ is Wadge equivalent to $M$, in symbols $L \equiv_W M$, if $L \leq_W M$ and $M \leq_W L$. Similarly we write $L <_W M$ if $L \leq_W M$ and $M \not\leq_W L$. The Wadge ordering is the ordering induced by $\leq_W$ on the $\equiv_W$-classes of subsets of Polish spaces. The Wadge ordering restricted to Borel sets is called the Wadge hierarchy.

In this study we only work with the spaces $\Sigma^\omega$ and $T_\Sigma$. Since we only consider finite $\Sigma$, these spaces are homeomorphic with the Cantor discontinuum as long as $|\Sigma| \geq 2$. In particular, all the languages we consider are Wadge equivalent to subsets of $\{0,1\}^\omega$. Note however that the homeomorphism need not preserve recognisability. In fact, no homeomorphism from $T_\Sigma$ to $\Sigma^\omega$ does: the Borel hierarchy for regular tree languages is infinite, but for words it collapses to $\Delta^0_3$. In other words, there are regular tree languages (even weak, or deterministic) which are not Wadge equivalent to regular word languages. Conversely, each regular word language $M$ is Wadge equivalent to a deterministic tree language $L_M$ consisting of trees which have a word from $M$ on the leftmost branch. As a consequence, the height of the Wadge ordering of regular word languages gives us a lower bound for the case of deterministic tree languages, and this is essentially everything we can conclude from the word case.

The starting point of this study is the Wadge reducibility problem.

Problem: Wadge reducibility.
Input: deterministic tree automata $A$ and $B$.
Question: does $L(A) \leq_W L(B)$ hold?

An analogous problem for word automata can be solved fairly easily by constructing a tree automaton recognising Duplicator’s winning strategies (to be defined in the next section). This method however does not carry over to trees. One might still try to solve the Wadge reducibility problem directly by comparing carefully the structure of two given automata, but we have chosen a different approach. We will provide a family $\mathcal{C}$ of canonical deterministic tree automata such that

  1. given $C_1, C_2 \in \mathcal{C}$, it is decidable if $L(C_1) \leq_W L(C_2)$,

  2. for each deterministic tree automaton $A$ there exists exactly one $C \in \mathcal{C}$ such that $L(A) \equiv_W L(C)$, and this $C$ can be computed effectively for a given $A$.

The decidability of the Wadge reducibility problem follows easily from the existence of such a family: given two deterministic automata $A$ and $B$, we compute $C_A, C_B \in \mathcal{C}$ such that $L(A) \equiv_W L(C_A)$ and $L(B) \equiv_W L(C_B)$, and check if $L(C_A) \leq_W L(C_B)$.

More precisely, we prove the following theorem.

Theorem 0.6.

There exists a family of deterministic tree automata

with , such that

  1. for , whenever the respective automata are defined, we have

    where means , and and are incomparable,

  2. for each deterministic tree automaton $A$ there exists exactly one automaton $C$ in the family such that $L(A) \equiv_W L(C)$, and it is computable, i.e., there exists an algorithm computing for a given $A$ the pair indexing the equivalent canonical automaton.

The family satisfies the conditions postulated for the family of canonical automata $\mathcal{C}$: for ordinals presented as arithmetical expressions over $\omega$ in Cantor normal form the order is decidable, so we can use them as the indexing set of $\mathcal{C}$.
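The decidability of the ordinal order is elementary; here is a sketch under an assumed representation (ours, not the paper's): an ordinal below $\omega^\omega$ in Cantor normal form is a descending list of (exponent, coefficient) pairs, e.g. $\omega^2 \cdot 3 + \omega + 4$ becomes `[(2, 3), (1, 1), (0, 4)]`, and comparison is lexicographic on these lists.

```python
# Sketch: comparing ordinals below omega^omega given in Cantor normal form
# as descending lists of (exponent, coefficient) pairs.

def cnf_less(a, b):
    """Lexicographic comparison of descending (exponent, coefficient) lists."""
    for (e1, c1), (e2, c2) in zip(a, b):
        if (e1, c1) != (e2, c2):
            # A larger exponent dominates; for equal exponents the
            # coefficient decides.
            return (e1, c1) < (e2, c2)
    # One list is a prefix of the other: the longer one is bigger.
    return len(a) < len(b)

w2_plus_1 = [(2, 1), (0, 1)]   # omega^2 + 1
w2_times2 = [(2, 2)]           # omega^2 * 2
print(cnf_less(w2_plus_1, w2_times2))  # True
print(cnf_less(w2_times2, w2_plus_1))  # False
```

Larger indexing sets such as $(\omega^\omega)^3$ can be handled the same way by nesting the representation; only the decidability of the order matters here.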

Observe that the pair computed for a given $A$ can be seen as a name of the $\equiv_W$-class of $L(A)$. Hence, the set of these names together with the order defined in the statement of the theorem provides a complete effective description of the Wadge hierarchy restricted to deterministic tree languages. One thing that follows is that the height of the hierarchy is $(\omega^\omega)^3 + 3$.

The remaining part of the paper is in fact a single long proof. We start by reformulating the classical criterion of reducibility via Wadge games in terms of automata (Sect. 5); this will be the main tool of the whole argument. Then we define four ways of composing automata: sequential composition, replication, parallel composition, and alternative (Sect. LABEL:operations). Using the first three operations we construct the canonical automata, all but the top three (Sect. LABEL:canonicalautomata). Next, to rehearse our proof method, we reformulate and prove Wagner’s results in terms of canonical automata (Sect. LABEL:withoutbranching). Finally, after some preparatory remarks (Sect. LABEL:theuseofreplication), we prove the first part of Theorem 0.6, modulo the three missing canonical automata.

Next, we need to show that our family contains all deterministic tree automata up to Wadge equivalence of the recognised languages. Once again we turn to the methodology of patterns used in Sects. 2 and 3. We introduce a fundamental notion of admittance, which formalises what it means to contain an automaton as a pattern (Sect. LABEL:patternsinautomata). Then we generalise the replication operation in order to define the remaining three canonical automata, and rephrase the results on the Borel hierarchy and the Wagner hierarchy in terms of admittance of canonical automata (Sect. LABEL:wagnerhierarchyandbeyond). Based on these results, we show that the family of canonical automata is closed under the composition operations (Sect. LABEL:closureproperties), and prove the Completeness Theorem asserting that (up to Wadge equivalence) each deterministic automaton may be obtained as an iterated composition of certain simplest automata (Sect. LABEL:sect:completeness). As a consequence, each deterministic automaton is equivalent to a canonical one. From the proof of the Completeness Theorem we extract an algorithm calculating the equivalent canonical automaton, which concludes the proof of Theorem 0.6.


\section{Games and Automata}

A classical criterion for reducibility is based on the notion of Wadge games. Let us introduce a tree version of Wadge games (see [perrin] for the word version). By the $n$th level of a tree we understand the set of nodes $\{0,1\}^{n-1}$. The 1st level consists of the root, the 2nd level consists of all the children of the root, etc. For any pair of tree languages $L$, $M$ the game $GW(L, M)$ is played by Spoiler and Duplicator. Each player builds a tree, $t_S$ and $t_D$ respectively. In every round, first Spoiler adds some levels to $t_S$ and then Duplicator can either add some levels to $t_D$ or skip a round (but not forever). The result of the play is a pair of full binary trees. Duplicator wins the play if $t_S \in L \iff t_D \in M$. We say that Spoiler is in charge of $L$, and Duplicator is in charge of $M$.

Just like for the classical Wadge games, a winning strategy for Duplicator can be easily transformed into a continuous reduction, and vice versa.

Lemma 0.7.

Duplicator has a winning strategy in $GW(L, M)$ iff $L \leq_W M$.

Proof.

A strategy for Duplicator defines a reduction in an obvious way. Conversely, suppose there exists a reduction $\varphi$ of $L$ to $M$. By continuity, there exists a sequence $n_1 < n_2 < n_3 < \cdots$ (without loss of generality, strictly increasing) such that the $k$th level of $\varphi(t)$ depends only on the first $n_k$ levels of $t$. Then the strategy for Duplicator is the following: if the number of the round is $n_k$, play the $k$th level of $\varphi(t)$; otherwise skip.

We would like to point out that Wadge games are much less interactive than classical games. The move made by one player has no influence on the possible moves of the other. Of course, if one wants to win, one has to react to the opponent’s actions, but the responses need not be immediate. As long as the player keeps putting some new letters, he may postpone the real reaction until he knows more about the opponent’s plans. Because of that, we will often speak about strategies for some language without considering the opponent and even without saying if the player in charge of the language is Spoiler or Duplicator.

Since we only want to work with deterministically recognisable languages, let us redefine the games in terms of automata. Let $A$, $B$ be deterministic tree automata. The automata game $G(A, B)$ starts with one token put in the initial state of each automaton. In every round the players perform a finite number of the actions described below.

  1. Fire a transition: for a token placed in a state $q$ choose a transition $q \xrightarrow{\sigma} q_1, q_2$, take the old token away from $q$ and put new tokens in $q_1$ and $q_2$.

  2. Remove: remove a token placed in a state different from the all-rejecting state $\bot$.

Spoiler plays on $A$ and must perform one of these actions at least for all the tokens produced in the previous round. Duplicator plays on $B$ and is allowed to postpone performing an action for a token, but not forever. Let us first consider plays in which the players never remove tokens. The paths visited by the tokens of each player define a run of the respective automaton. We say that Duplicator wins a play if both runs are accepting or both are rejecting. Now, removing a token from a state is interpreted as plugging in an accepting subrun in the corresponding node of the constructed run. So, Duplicator wins if the runs obtained by plugging in an accepting subrun for every removed token are both accepting or both rejecting.
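The two actions amount to simple multiset bookkeeping, which can be sketched as follows (a toy illustration of ours; the state names and the marker "BOT" for the all-rejecting state are assumptions, not the paper's notation):

```python
# Sketch: token bookkeeping for the automata game. A configuration is a
# multiset of states holding tokens.
from collections import Counter

def fire(tokens, q, q1, q2):
    """Fire a transition q -> q1, q2: one token in q becomes tokens in q1, q2."""
    assert tokens[q] > 0, "no token in state q"
    tokens = tokens.copy()
    tokens[q] -= 1
    tokens[q1] += 1
    tokens[q2] += 1
    return tokens

def remove(tokens, q):
    """Remove a token, interpreted as plugging in an accepting subrun."""
    assert q != "BOT" and tokens[q] > 0
    tokens = tokens.copy()
    tokens[q] -= 1
    return tokens

tokens = Counter({"q0": 1})
tokens = fire(tokens, "q0", "p", "r")
tokens = remove(tokens, "r")
print(sorted(+tokens))  # states still holding tokens: ['p']
```

The restriction that removal is forbidden in $\bot$ matches the convention of the automata section: no accepting subrun can start in the all-rejecting state.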

Observe that removing tokens in fact does not give any extra power to the players: instead of actually removing a token, a player may easily pick an accepting subrun, and in future keep realising it level by level in the constructed run. The only reason for adding this feature in the game is that it simplifies the strategies. In a typical strategy, while some tokens have a significant role to play, most are just moved along a trivially accepting path. It is convenient to remove them right off and keep concentrated on the real actors of the play.

We will write $A \leq B$ if Duplicator has a winning strategy in $G(A, B)$. Like for languages, define $A \equiv B$ iff $A \leq B$ and $B \leq A$. Finally, let $A < B$ iff $A \leq B$ and $B \not\leq A$.

Lemma 0.8.

For all deterministic tree automata $A$ and $B$, $A \leq B$ iff $L(A) \leq_W L(B)$.
