The FO² alternation hierarchy is decidable

Abstract

We consider the two-variable fragment FO² of first-order logic over finite words. Numerous characterizations of this class are known. Thérien and Wilke have shown that it is decidable whether a given regular language is definable in FO². From a practical point of view, as shown by Weis, FO² is interesting since its satisfiability problem is in NP. Restricting the number of quantifier alternations yields an infinite hierarchy inside the class of FO²-definable languages. We show that each level of this hierarchy is decidable. For this purpose, we relate each level of the hierarchy with a decidable variety of finite monoids.

Our result implies that there are many different ways of climbing up the FO² quantifier alternation hierarchy: deterministic and co-deterministic products, Mal’cev products with definite and reverse definite semigroups, iterated block products with J-trivial monoids, and some inductively defined omega-term identities. A combinatorial tool in the process of ascension is that of condensed rankers, a refinement of the rankers of Weis and Immerman and of the turtle programs of Schwentick, Thérien, and Vollmer.

1 Introduction

The investigation of logical fragments has a long history. McNaughton and Papert [16] showed that a language over finite words is definable in first-order logic if and only if it is star-free. Combined with Schützenberger’s characterization of star-free languages in terms of finite aperiodic monoids [22], this leads to an algorithm to decide whether a given regular language is first-order definable. Many other characterizations of this class have been given over the past 50 years, see [3] for an overview. Moreover, mainly due to its relation to linear temporal logic [7], it became relevant to a large number of application fields, such as verification.

Very often one is interested in fragments of first-order logic. From a practical point of view, the reason is that smaller fragments often yield more efficient algorithms for computational problems such as satisfiability. For example, satisfiability for full first-order logic over words is non-elementary [25], whereas the satisfiability problem for first-order logic with only two variables is in NP, cf. [38]. On the theoretical side, fragments form the basis of a descriptive complexity theory inside the regular languages: the simpler a logical formula defining a language, the easier the language. Moreover, in contrast to classical complexity theory, in some cases one can actually decide whether a given language has a particular property. From both the practical and the theoretical point of view, several natural hierarchies have been considered in the literature: the quantifier alternation hierarchy inside first-order logic with the order predicate, which coincides with the Straubing-Thérien hierarchy [26, 31]; the quantifier alternation hierarchy inside first-order logic with an additional successor predicate, which coincides with the dot-depth hierarchy [2, 35]; the until hierarchy of temporal logic [33]; and the until-since hierarchy [34]. Decidability is known for all levels of the until and the until-since hierarchies, but only for the very first levels of the alternation hierarchies, see e.g. [4, 20].

Fragments are usually defined by restricting resources in a formula. Such resources can be the predicates which are allowed, the quantifier depth, the number of quantifier alternations, or the number of variables. When the quantifier depth is restricted, only finitely many languages are definable over a fixed alphabet: decidability of the membership problem is not an issue in this case. When restricting the number of variables which can be used (and reused), first-order logic with three variables already has the full expressive power of first-order logic, see [6, 7]. On the other hand, first-order logic FO² with only two variables defines a proper subclass. The languages definable in FO² have a huge number of different characterizations, see e.g. [4, 29, 30]. For example, FO² has the same expressive power as Σ₂ ∩ Π₂; the latter is a fragment of first-order logic with two blocks of quantifiers [32].

Turtle programs are one of these numerous descriptions of FO²-definable languages [23]. They are sequences of instructions of the form “go to the next a-position” and “go to the previous a-position”. Using the term ranker for this concept and putting a stronger focus on the order of the positions defined by such sequences, Weis and Immerman [39] were able to give a combinatorial characterization of the alternation hierarchy inside FO². Straubing [27] gave an algebraic characterization of this hierarchy. But neither result yields the decidability of FO²_m-definability for arbitrary m. In some sense, this is the opposite of a previous result of the authors [14, Thm. 6.1], which gives necessary and sufficient conditions that helped to decide the FO²_m-hierarchy with an error of at most one. In this paper we give a new algebraic characterization of each level FO²_m, and this characterization immediately yields decidability.

The algebraic approach to the membership problem of logical fragments has several advantages. In favorable cases, it opens the road to decision procedures. Moreover, it allows a more semantic comparison of fragments; for example, the equality FO² = Σ₂ ∩ Π₂ was obtained by showing that both classes correspond to the same variety of finite monoids, namely DA [21, 32].

Building on previous detailed knowledge of the lattice of band varieties (varieties of idempotent monoids), Trotter and Weil defined a sub-lattice of the lattice of subvarieties of DA [36], which we call the R_m-L_m hierarchy. These varieties have many interesting properties; in particular, each R_m (resp. L_m) is efficiently decidable (by a combination of results of Trotter and Weil [36], Kufleitner and Weil [10], and Straubing and Weil [28]; see Section 3 for more details). Moreover, one can climb up the R_m-L_m hierarchy algebraically, using Mal’cev products, see [10] and Section 2 below; language-theoretically, in terms of alternated closures under deterministic and co-deterministic products [18, 14]; and combinatorially, using condensed rankers, see [13, 15] and Section 2.

We relate the quantifier alternation hierarchy with the R_m-L_m hierarchy. More precisely, the main result of this paper is that a language is definable in FO²_m if and only if it is recognized by a monoid in R_{m+1} ∩ L_{m+1}, thus establishing the decidability of each level FO²_m. This result was first conjectured in [13], where one inclusion was established. Our proof combines a technique introduced by Klíma [8] and a substitution idea [11] with algebraic and combinatorial tools inspired by [14]. The proof is by induction and the base case is Simon’s Theorem on piecewise testable languages [24].

2 Preliminaries

Let A be a finite alphabet and let A* be the set of all finite words over A. The length of a word u = a_1 a_2 ⋯ a_n (with a_i ∈ A) is |u| = n, and its alphabet is alph(u) = {a_1, …, a_n}. A position i of u is an a-position if a_i = a. A factorization u = u'au'' is the a-left factorization of u if a ∉ alph(u'), and it is the a-right factorization of u if a ∉ alph(u''), i.e., we factor at the first or at the last a-position.
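Both factorizations can be computed by a single scan of the word. The following Python sketch is our own illustration (function names are ours, not notation from the paper), using 0-based string indices; it returns the triple (u', a, u'') or None when the letter does not occur.

```python
def a_left_factorization(u, a):
    """Split u as (u1, a, u2) at the first a-position, so that a does not occur in u1.
    Returns None if a does not occur in u at all."""
    i = u.find(a)
    return None if i < 0 else (u[:i], a, u[i + 1:])


def a_right_factorization(u, a):
    """Split u as (u1, a, u2) at the last a-position, so that a does not occur in u2."""
    i = u.rfind(a)
    return None if i < 0 else (u[:i], a, u[i + 1:])


# Factoring "abcab" at its first and at its last occurrence of "b".
assert a_left_factorization("abcab", "b") == ("a", "b", "cab")
assert a_right_factorization("abcab", "b") == ("abca", "b", "")
```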

2.1 Rankers

A ranker is a nonempty word over the alphabet {X_a, Y_a : a ∈ A}. It is interpreted as a sequence of instructions of the form “go to the next a-position” and “go to the previous a-position”. More formally, for a ∈ A and a position x of u (including the virtual positions 0 and |u|+1), we let

X_a(u, x) = min { y : y > x and y is an a-position of u },
Y_a(u, x) = max { y : y < x and y is an a-position of u }.

Here, both the minimum and the maximum of the empty set are undefined. The modality X_a is for “neXt-a” and Y_a is for “Yesterday-a”. For a ranker r and a modality Z_a with Z ∈ {X, Y}, we set

(r Z_a)(u) = Z_a(u, r(u)),   X_a(u) = X_a(u, 0),   Y_a(u) = Y_a(u, |u|+1).

In particular, rankers are executed (as a sequence of instructions) from left to right. Every ranker r either defines a unique position in a word u, or it is undefined on u. For example, X_a(bab) = 2 and (X_a Y_b)(bab) = 1, whereas X_c(bab) and (X_a X_a)(bab) are undefined. A ranker r is condensed on u if it is defined on u and, during the execution of r, no previously visited position is overrun [14]. One can think of condensed rankers as zooming in on the position they define, see Figure 1.

Figure 1: The positions defined by the successive prefixes of a ranker r on u, when r is condensed on u
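Evaluating a ranker is easily mechanized. In the following sketch (our own illustration, not code from the paper), a ranker is a list of pairs ('X', a) or ('Y', a), positions are numbered from 1 to |u|, and an X-ranker starts from the virtual position 0 while a Y-ranker starts from |u| + 1.

```python
def eval_ranker(ranker, u):
    """Evaluate a ranker on the word u.

    ranker: list of (direction, letter) with direction 'X' (next) or 'Y' (previous).
    Returns the 1-based position defined by the ranker, or None if it is undefined.
    """
    n = len(u)
    # X-rankers start left of the word, Y-rankers right of it.
    pos = 0 if ranker[0][0] == 'X' else n + 1
    for direction, letter in ranker:
        if direction == 'X':   # next occurrence of `letter` strictly to the right
            candidates = [j for j in range(pos + 1, n + 1) if u[j - 1] == letter]
            pos = min(candidates) if candidates else None
        else:                  # previous occurrence strictly to the left
            candidates = [j for j in range(1, pos) if u[j - 1] == letter]
            pos = max(candidates) if candidates else None
        if pos is None:
            return None
    return pos


# X_a Y_b on "bab": go to the first a (position 2), then back to the previous b (position 1).
assert eval_ranker([('X', 'a'), ('Y', 'b')], "bab") == 1
assert eval_ranker([('X', 'c')], "bab") is None
```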

More formally, r = Z_{a_1} Z_{a_2} ⋯ Z_{a_n} (with each Z ∈ {X, Y}) is condensed on u if there exists a chain of open intervals

(0; |u|+1) = (q_0; p_0) ⊇ (q_1; p_1) ⊇ ⋯ ⊇ (q_n; p_n)

such that, for each ℓ, the instruction Z_{a_ℓ} finds its target a_ℓ-position inside the interval (q_{ℓ-1}; p_{ℓ-1}), and this position becomes one of the endpoints q_ℓ or p_ℓ of the next interval; the precise case distinctions, which depend on whether the ℓ-th and the following instruction are X- or Y-instructions, are spelled out in [14].

For example, X_a X_b Y_c is condensed on acb (the visited positions are 1, 3 and then 2) but not on cab: there, the last instruction Y_c jumps from position 3 back to position 1 and overruns the previously visited position 2.
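The condition “no previously visited position is overrun” can be checked on the fly while executing the ranker. The sketch below is our direct reading of this informal description (the formal interval-based conditions are those of [14] referred to above); it records all visited positions and rejects as soon as an instruction jumps over one of them.

```python
def is_condensed(ranker, u):
    """Check whether the ranker is defined and condensed on u, i.e. no instruction
    jumps over a previously visited position.  (Our reading of the informal
    definition; see [14] for the formal interval-based conditions.)"""
    n = len(u)
    pos = 0 if ranker[0][0] == 'X' else n + 1
    visited = []
    for direction, letter in ranker:
        if direction == 'X':
            candidates = [j for j in range(pos + 1, n + 1) if u[j - 1] == letter]
            new = min(candidates) if candidates else None
        else:
            candidates = [j for j in range(1, pos) if u[j - 1] == letter]
            new = max(candidates) if candidates else None
        if new is None:
            return False  # undefined rankers are not condensed
        lo, hi = min(pos, new), max(pos, new)
        if any(lo < v < hi for v in visited):
            return False  # a previously visited position was overrun
        visited.append(new)
        pos = new
    return True


# X_a X_b Y_c is condensed on "acb" but not on "cab".
assert is_condensed([('X', 'a'), ('X', 'b'), ('Y', 'c')], "acb")
assert not is_condensed([('X', 'a'), ('X', 'b'), ('Y', 'c')], "cab")
```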

The depth of a ranker is its length as a word. A block of a ranker is a maximal factor of the form X_{a_1} ⋯ X_{a_j} or of the form Y_{a_1} ⋯ Y_{a_j}. A ranker with k blocks changes direction k − 1 times. By R_{n,k} we denote the class of all rankers with depth at most n and with up to k blocks. We write R^X_{n,k} for the set of all rankers in R_{n,k} which start with an X-modality, and R^Y_{n,k} for the set of all rankers in R_{n,k} which start with a Y-modality.
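Depth and number of blocks are immediate to compute from the list representation of rankers used in the sketches above; the following lines (ours) do so.

```python
def depth(ranker):
    """The depth of a ranker is its length as a word."""
    return len(ranker)


def blocks(ranker):
    """Number of maximal runs of consecutive X-instructions or Y-instructions."""
    dirs = [d for d, _ in ranker]
    return sum(1 for i, d in enumerate(dirs) if i == 0 or d != dirs[i - 1])


# X_a X_b Y_c has depth 3 and 2 blocks, hence it changes direction once.
r = [('X', 'a'), ('X', 'b'), ('Y', 'c')]
assert (depth(r), blocks(r)) == (3, 2)
```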

We define u ≡^X_{n,k} v if the same rankers in R^X_{n,k} ∪ R^Y_{n,k−1} are condensed on u and v. Similarly, u ≡^Y_{n,k} v if the same rankers in R^Y_{n,k} ∪ R^X_{n,k−1} are condensed on u and v. The relations ≡^X_{n,k} and ≡^Y_{n,k} are congruences of finite index [14, Lem. 3.13].

The order type ord(x, y) of two positions x and y is one of <, =, >, depending on whether x < y, x = y, or x > y, respectively. We define u ≡_{n,k} v if

  • the same rankers in R_{n,k} are defined on u and v (a brute-force sketch of this condition is given after this list), and

  • for the pairs of rankers r, r' specified in [39, Thm. 4.5] (rankers of R^X_{n,k} or R^Y_{n,k} compared with rankers of smaller depth or with fewer blocks), the order types ord(r(u), r'(u)) and ord(r(v), r'(v)) coincide.
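The first condition above, agreement on defined rankers, can be checked by brute force for small parameters. The sketch below is ours; for brevity it ignores the block bound and the order-type conditions, which can be added along the same lines.

```python
from itertools import product


def eval_positions(ranker, u):
    """Positions successively visited by a ranker on u; None if it becomes undefined."""
    n, pos, visited = len(u), (0 if ranker[0][0] == 'X' else len(u) + 1), []
    for d, a in ranker:
        occ = [j for j in range(1, n + 1)
               if u[j - 1] == a and (j > pos if d == 'X' else j < pos)]
        if not occ:
            return None
        pos = min(occ) if d == 'X' else max(occ)
        visited.append(pos)
    return visited


def same_defined_rankers(u, v, alphabet, depth):
    """Do u and v agree on which rankers of depth <= `depth` are defined?
    Brute force over all rankers; illustrates the first condition of the
    relation defined above (block bound and order types omitted)."""
    for d in range(1, depth + 1):
        for r in product([(z, a) for z in 'XY' for a in alphabet], repeat=d):
            if (eval_positions(list(r), u) is None) != (eval_positions(list(r), v) is None):
                return False
    return True


# "abab" and "ab" agree on all rankers of depth 1 but not of depth 2 (e.g. X_a X_a).
assert same_defined_rankers("abab", "ab", "ab", 1)
assert not same_defined_rankers("abab", "ab", "ab", 2)
```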

Remark 1.

For k = 1, each of the families of relations ≡^X_{n,1}, ≡^Y_{n,1}, and ≡_{n,1} (for n ≥ 1) defines the class of piecewise testable languages, see e.g. [8, 24]. Recall that a language is piecewise testable if it is a Boolean combination of languages of the form A*a_1A*a_2A* ⋯ A*a_ℓA* (ℓ ≥ 0, a_1, …, a_ℓ ∈ A).
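Membership in one building block A*a_1A* ⋯ A*a_ℓA* is a scattered-subword test, decided by a greedy left-to-right scan; the following sketch (ours) implements it.

```python
def has_subword(u, pattern):
    """Does u belong to A* a_1 A* a_2 ... A* a_l A*, i.e. does u contain
    the letters of `pattern` as a (scattered) subword, in order?"""
    it = iter(u)
    return all(a in it for a in pattern)   # greedy scan: each `in` consumes the iterator


assert has_subword("abcabc", "aac")        # a..a..c occurs as a subword
assert not has_subword("abcabc", "cba")    # but c..b..a does not
```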

2.2 First-order Logic

We denote by FO the first-order logic over words, interpreted as labeled linear orders. The atomic formulas are ⊤ (for true), ⊥ (for false), the unary predicates λ(x) = a (one for each letter a ∈ A), and the binary predicate x < y for variables x and y. Variables range over the linearly ordered positions of a word, and λ(x) = a means that x is an a-position. Apart from the Boolean connectives, we allow composition of formulas using existential quantification ∃x φ and universal quantification ∀x φ for a variable x. The semantics is as usual. A sentence in FO is a formula without free variables. For a sentence φ, the language defined by φ, denoted by L(φ), is the set of all words which model φ.

The fragment FO² of first-order logic consists of all formulas which use at most two different names for the variables. This is a natural restriction, since FO with three variables already has the full expressive power of FO. A formula is in FO²_m if, on every path of its parse tree, it has at most m blocks of alternating quantifiers.
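Counting quantifier blocks along the paths of a parse tree is straightforward. The sketch below is our own, with a simplified formula representation as nested tuples; negation and negation normal form are deliberately ignored.

```python
def alternation_blocks(phi, last=None, acc=0):
    """Maximal number of blocks of alternating quantifiers ('E'/'A')
    along a root-to-leaf path of the parse tree.
    A formula is ('E', x, psi), ('A', x, psi), ('not', psi), ('and', psi1, psi2), ...
    or an atomic formula given as a string."""
    if isinstance(phi, str):                      # atomic formula: no quantifier below
        return acc
    head, *args = phi
    if head in ('E', 'A'):                        # quantifier node
        new_acc = acc + (head != last)            # new block iff the quantifier alternates
        return alternation_blocks(args[1], head, new_acc)
    # Boolean connective: blocks are counted separately on each branch.
    # (Negation would flip E/A after pushing to negation normal form; ignored here.)
    return max(alternation_blocks(psi, last, acc) for psi in args)


# "exists x forall y exists x ..." has three alternating blocks, "exists x exists y ..." one.
assert alternation_blocks(('E', 'x', ('A', 'y', ('E', 'x', 'lam(x)=a')))) == 3
assert alternation_blocks(('E', 'x', ('E', 'y', 'x<y'))) == 1
```

A formula then lies in FO²_m, as defined above, when this count is at most m and only two variable names occur; the latter condition is not checked by this sketch.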

Note that the FO²_1-definable languages are exactly the piecewise testable languages, cf. [27]. For the higher levels of the hierarchy, we rely on the following important result, due to Weis and Immerman [39, Thm. 4.5].

Theorem 2.

A language L is definable in FO²_m if and only if there exists an integer n such that L is a union of ≡_{n,m}-classes.

Remark 3.

The definition of ≡_{n,m} above is formally different from the conditions in Weis and Immerman’s paper [39, Thm. 4.5]. A careful but elementary examination reveals that they are actually equivalent.

2.3 Algebra

A monoid M recognizes a language L ⊆ A* if there exists a morphism h: A* → M such that L = h⁻¹(h(L)). If h: A* → M is a morphism, then we set u ≡_h v if h(u) = h(v). The join of two congruences is the least congruence containing both of them. An element e ∈ M is idempotent if e² = e; the set of all idempotents of a monoid M is denoted by E(M). For every finite monoid M there exists an integer ω_M ≥ 1 such that x^{ω_M} is idempotent for all x ∈ M; as usual, we simply write x^ω for this idempotent power. Green’s relations R, L, and J are an important concept to describe the structural properties of a monoid M: we set x ≤_R y (resp. x ≤_L y, x ≤_J y) if x = yz (resp. x = zy, x = zyz') for some z, z' ∈ M. We also define x R y (resp. x L y, x J y) if x ≤_R y and y ≤_R x (resp. x ≤_L y and y ≤_L x, x ≤_J y and y ≤_J x). A monoid M is R-trivial (resp. L-trivial, J-trivial) if R (resp. L, J) is the identity relation on M.

We also use three relations ∼_K, ∼_D, and ∼_{K∩D} on M, where K and D denote the pseudovarieties of reverse definite and of definite semigroups mentioned in the abstract. Each of these relations is defined by a condition of the form “for all (pairs of) idempotents of E(M) satisfying a suitable side condition, the corresponding products with x and with y coincide”; we refer to [9] for the precise definitions. The relations ∼_K, ∼_D, and ∼_{K∩D} are congruences [9]. If V is a class of finite monoids, we say that a monoid M is in K ⓜ V (resp. D ⓜ V, (K∩D) ⓜ V) if M/∼_K ∈ V (resp. M/∼_D ∈ V, M/∼_{K∩D} ∈ V). The classes K ⓜ V, D ⓜ V, and (K∩D) ⓜ V are called Mal’cev products and they are usually defined in terms of relational morphisms. In the present context however, the definition above will be sufficient [9], see [5]. We will need the following classes of finite monoids:

  • J₁ consists of all finite commutative monoids satisfying x² = x (that is, of all finite semilattices).

  • J (resp. R, L) consists of all finite J-trivial (resp. R-trivial, L-trivial) monoids.

  • A consists of all finite monoids satisfying x^ω x = x^ω. Monoids in A are called aperiodic.

  • DA consists of all finite monoids satisfying (xyz)^ω y (xyz)^ω = (xyz)^ω.

  • R_1 = L_1 = J,  R_{m+1} = K ⓜ L_m,  and L_{m+1} = D ⓜ R_m for m ≥ 1.

It is well known that J = R ∩ L and that R ∨ L ⊆ DA ⊆ A, see e.g. [19]. The R_m-L_m hierarchy is depicted in Figure 2.
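The triviality and aperiodicity conditions above are effectively checkable from a multiplication table. The brute-force sketch below is our illustration (quadratic to cubic in |M|, not the Logspace procedure of Corollary 10 below); it computes the preorders ≤_R, ≤_L, ≤_J and the aperiodicity identity.

```python
from itertools import product


def green_tests(table):
    """table[x][y] is the product x*y in a finite monoid M = {0, ..., n-1}.
    Returns brute-force membership tests for J, R, L (triviality) and A (aperiodicity)."""
    n = len(table)
    M = range(n)
    # x <=_R y iff x = y*z for some z; similarly for <=_L and <=_J.
    leq_R = {(x, y) for x in M for y in M if any(table[y][z] == x for z in M)}
    leq_L = {(x, y) for x in M for y in M if any(table[z][y] == x for z in M)}
    leq_J = {(x, y) for x in M for y in M
             if any(table[table[z][y]][t] == x for z, t in product(M, M))}

    def trivial(leq):
        # The associated equivalence (mutual <=) is the identity relation.
        return all(x == y for (x, y) in leq if (y, x) in leq)

    def omega(x):
        # The unique idempotent power x^w of x in a finite monoid.
        y = x
        while table[y][y] != y:
            y = table[y][x]
        return y

    return {
        'J-trivial': trivial(leq_J),
        'R-trivial': trivial(leq_R),
        'L-trivial': trivial(leq_L),
        'aperiodic': all(table[omega(x)][x] == omega(x) for x in M),  # x^w * x = x^w
    }


# The syntactic monoid of A*aA* (words containing an 'a') is {1, a} with a*a = a.
U1 = [[0, 1],
      [1, 1]]
assert green_tests(U1) == {'J-trivial': True, 'R-trivial': True,
                           'L-trivial': True, 'aperiodic': True}
```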

 

 

Figure 2: The R_m-L_m hierarchy

2.4 The variety approach to the decidability of FO²_m

Classes of finite monoids that are closed under taking submonoids, homomorphic images, and finite direct products are called pseudovarieties. The classes of finite monoids J, R, L, A, DA, R_m, and L_m introduced above are all pseudovarieties.

If V is a pseudovariety of monoids, the class of languages recognized by a monoid in V is called a variety of languages. Eilenberg’s variety theorem (see e.g. [17, Annex B]) shows that varieties of languages are characterized by natural closure properties, and that the correspondence between pseudovarieties and varieties of languages is onto. Elementary automata theory shows in addition that a language L is recognized by a monoid in a pseudovariety V if and only if the syntactic monoid of L is in V. It follows that if V has a decidable membership problem, then so does the corresponding variety of languages.
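Concretely, the syntactic monoid can be extracted from the minimal automaton as its transition monoid, a standard construction. The sketch below (ours) closes the letter-induced transformations of the state set under composition and produces a multiplication table in the format expected by the previous sketch.

```python
def transition_monoid(n_states, delta):
    """Transition monoid of a complete deterministic automaton with states 0..n_states-1.

    delta: dict mapping (state, letter) -> state.
    For the minimal DFA of a language L, this monoid is isomorphic to the syntactic
    monoid of L, so its table can be fed to green_tests from the earlier sketch.
    Returns (elements, table): elements[i] is a state transformation (a tuple), and
    table[i][j] is the index of 'apply elements[i], then elements[j]'.
    """
    letters = {}
    for (q, a), r in delta.items():
        letters.setdefault(a, [None] * n_states)[q] = r
    gens = [tuple(f) for f in letters.values()]
    identity = tuple(range(n_states))

    def compose(f, g):                    # first f, then g
        return tuple(g[f[q]] for q in range(n_states))

    elements, queue = [identity], [identity]
    while queue:                          # close under right multiplication by the generators
        f = queue.pop()
        for g in gens:
            fg = compose(f, g)
            if fg not in elements:
                elements.append(fg)
                queue.append(fg)
    index = {f: i for i, f in enumerate(elements)}
    table = [[index[compose(f, g)] for g in elements] for f in elements]
    return elements, table


# Minimal DFA of A*aA* over A = {a, b}: state 1 is reached once an 'a' has been read.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}
elements, table = transition_monoid(2, delta)
assert len(elements) == 2 and table == [[0, 1], [1, 1]]
```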

Simon’s Theorem on piecewise testable languages [8, 24] is an important instance of this Eilenberg correspondence: a language L is recognizable by a monoid in J if and only if L is piecewise testable (and hence, as we already observed, if and only if L is definable in FO²_1). Simon’s result implies the decidability of piecewise testability.

It immediately follows from the definition that membership in R_m and in L_m is decidable for all m, since membership in J is decidable and the quotients by the congruences ∼_K and ∼_D can be computed effectively (see Corollary 10 for a more precise statement). Many additional properties of the pseudovarieties R_m and L_m, and of the corresponding varieties of languages, were established by the authors [10, 14, 36]. We will use in particular the following results, respectively [14, Cor. 3.15] and [10, Thms. 2.1 and 3.5].

Proposition 4.

An A-generated monoid M is in R_m (resp. L_m) if and only if there exists an integer n such that M is a quotient of the finite monoid A*/≡^X_{n,m} (resp. A*/≡^Y_{n,m}).

Let x_1, x_2, … be a sequence of variables. For each word u over these variables, we denote by ū the mirror image of u, that is, the word obtained by reading u from right to left. From these data one defines inductively a sequence of words over the variables, together with a substitution assigning an ω-term to each variable; we refer to [10] for the precise construction, which is used in the next statement.

Proposition 5.

R_m (resp. L_m) is the class of finite monoids satisfying the two ω-term identities obtained from this construction (resp. their mirror images); the explicit identities can be found in [10, Thms. 2.1 and 3.5].
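Once a pseudovariety is described by finitely many ω-term identities, membership reduces to checking those identities over all valuations, with x^ω computed as the unique idempotent power. The sketch below (ours) shows the pattern; since the identities of Proposition 5 are not reproduced here, we use the aperiodicity identity x^ω x = x^ω as a stand-in.

```python
from itertools import product


def check_identity(table, lhs, rhs, arity):
    """Check an omega-term identity in a finite monoid given by its multiplication table.
    lhs/rhs: functions taking (mul, omega, values) and returning an element of the monoid."""
    M = range(len(table))
    mul = lambda x, y: table[x][y]

    def omega(x):                         # the unique idempotent power of x
        y = x
        while mul(y, y) != y:
            y = mul(y, x)
        return y

    return all(lhs(mul, omega, v) == rhs(mul, omega, v) for v in product(M, repeat=arity))


def aperiodic(table):
    # Stand-in identity:  x^w * x = x^w  (writing w for omega).
    return check_identity(table,
                          lambda mul, w, v: mul(w(v[0]), v[0]),
                          lambda mul, w, v: w(v[0]),
                          arity=1)


U1 = [[0, 1], [1, 1]]                     # {1, a} with a*a = a: aperiodic
Z2 = [[0, 1], [1, 0]]                     # the two-element group: not aperiodic
assert aperiodic(U1) and not aperiodic(Z2)
```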

Straubing [27] and Kufleitner and Lauser [12, Cor. 3.4] established, by different means, that for each m the class of FO²_m-definable languages forms a variety of languages; we denote by V_m the corresponding pseudovariety. In particular, V_1 = J. Our strategy to establish the decidability of FO²_m-definability is to establish the decidability of membership in V_m.

It is to be noted that neither Straubing’s result nor Kufleitner and Lauser’s result implies the decidability of V_m. Straubing’s result is the following [27, Thm. 4].

Theorem 6.

For m ≥ 1, V_{m+1} is obtained from V_m via a two-sided wreath product with the pseudovariety J; see [27, Thm. 4] for the precise formulation, in which □ denotes the two-sided wreath product.

We refer the reader to [27] for the definition of the two-sided wreath product, which is also called the block product in the literature. As discussed by Straubing, this exact algebraic characterization does not by itself yield the decidability of every level of the hierarchy. Straubing however conjectured that the following holds [27, Conj. 10].

Conjecture 7 (Straubing).

For each m, a monoid is in V_m if and only if it satisfies a pair of ω-term identities which are defined inductively in [27, Conj. 10].

If established, this conjecture would prove the decidability of each V_m. The authors, on the other hand, proved the following [14, Thm. 5.1].

Theorem 8.

If a language L is recognized by a monoid in the join R_m ∨ L_m, then L is definable in FO²_m; and if L is definable in FO²_m, then L is recognized by a monoid in R_{m+1} ∩ L_{m+1}.

3 The alternation hierarchy is decidable

We tighten the connection between the alternation hierarchy within FO² and the R_m-L_m hierarchy, and we prove the following result.

Theorem 9.

A language is definable in FO²_m if and only if it is recognizable by a monoid in R_{m+1} ∩ L_{m+1}.

Theorem 9 immediately yields a decidability result.

Corollary 10.

For each m, it is decidable whether a given regular language L is FO²_m-definable. This decision can be achieved in Logspace on input the multiplication table of the syntactic monoid of L, and in Pspace on input its minimal automaton.

Moreover, given an FO²-definable language L, one can compute the least integer m such that L is FO²_m-definable.

Proof.

We already observed that the pseudovarieties R_m and L_m are decidable, and that each of them is defined by two ω-term identities (Proposition 5). The decidability statement follows immediately. The complexity statement is a consequence of Straubing and Weil’s results [28, Thm. 2.19]. The computability statement follows immediately. ∎

We now turn to the proof of Theorem 9. One implication was established in Theorem 8. To prove the reverse implication, we prove Proposition 11 below, which establishes that every language recognized by a monoid M in R_{m+1} ∩ L_{m+1} is a union of ≡_{n,m}-classes for some integer n depending on M. Theorem 9 then follows, in view of Theorem 2.

Proposition 11.

For every m ≥ 1 and every morphism h: A* → M with M ∈ R_{m+1} ∩ L_{m+1}, there exists an integer n such that ≡_{n,m} is contained in ≡_h.

Before we embark on the proof of Proposition 11, we record several algebraic and combinatorial lemmas.

3.1 A collection of technical lemmas

Lemma 12.

Let be a finite monoid. If and , then . If and , then .

Proof.

Let such that . We have . Now, implies . Thus . The second statement is left-right symmetric. ∎

The following lemma illustrates an important structural property of monoids in .

Lemma 13.

Let , with and let such that and . Then .

Proof.

The map alph: A* → 2^A can be seen as a morphism, where the product on 2^A is the union operation. Since , we have ; let be the projection morphism. It is easily verified that there exists a morphism such that , see Figure 3.

Figure 3:

By assumption, for some , and hence . Since , we have . Applying the definition of with , it follows that and we now have

Therefore , which concludes the proof. ∎

A proof of the following lemma can be found in [14, Prop. 3.6 and Lem. 3.7].

Lemma 14.

Let , , .

  1. If and and are -left factorizations, then and .

  2. If and and are -right factorizations, then and .

Dual statements hold for .

Lemma 15.

Let and let and be -left factorizations. If , then and . A dual statement holds for the factors of the -right factorizations of and .

Proof.

We first show . Consider a ranker , supposing first that . Then is defined on if and only if is defined on and is for every nonempty prefix of . By definition of , this is equivalent to being defined on . If instead , then is defined on if and only if is defined on and is for every nonempty prefix of . Again, this is equivalent to being defined on since . Thus, the same rankers in are defined on and .

Now consider rankers and , which we can assume to be defined on both and . Then the order types induced by and on and are equal, since and .

The same reasoning applies if and (resp. if and , if and ) since in that case, (resp. , ). Therefore, .

We now verify that . The proof is very similar to the first part and deviates only in technical details. Consider a ranker , say, in . Then is defined on if and only if is defined on and is  for every nonempty prefix of . Again, this is equivalent to being defined on since . If instead , then is defined on if and only if is defined on and is  for every nonempty prefix of , which is equivalent to being defined on . Thus, the same rankers in are defined on and .

Now consider rankers and , both defined on and . Then the order types induced by and on and are equal, since and .

Again, a similar verification guarantees that the order types induced by and on and are equal also if and , or if and , or if and . This shows which completes the proof. ∎

Lemma 16.

Let and let and describe -left and -right factorizations (that is, and ). If , then .

Proof.

A ranker is defined on if and only if is defined on and