Mean-Payoff Automaton Expressions
Quantitative languages are an extension of boolean languages that assign to each word a real number. Mean-payoff automata are finite automata with numerical weights on transitions that assign to each infinite path the long-run average of the transition weights. Whether the mode of branching of the automaton is deterministic, nondeterministic, or alternating, the corresponding class of quantitative languages is not robust, as it is not closed under the pointwise operations of max, min, sum, and numerical complement. Nondeterministic and alternating mean-payoff automata are not decidable either: the quantitative generalizations of the universality and language-inclusion problems are undecidable.
We introduce a new class of quantitative languages, defined by mean-payoff automaton expressions, which is both robust and decidable: it is closed under the four pointwise operations, and we show that all decision problems are decidable for this class. Mean-payoff automaton expressions subsume deterministic mean-payoff automata, and we show that their expressive power is incomparable to that of nondeterministic and alternating mean-payoff automata. We also present, for the first time, an algorithm to compute the distance between two quantitative languages, in our case given as mean-payoff automaton expressions.
Quantitative languages are a natural generalization of boolean languages that assign to every word a real number instead of a boolean value. For instance, the value of a word (or behavior) can be interpreted as the amount of some resource (e.g., memory or power consumption) needed to produce it, or as a bound on the long-run average use of the resource. Thus quantitative languages can specify properties related to resource-constrained programs, and an implementation $L_A$ satisfies (or refines) a specification $L_B$ if $L_A(w) \le L_B(w)$ for all words $w$. This notion of refinement is a quantitative generalization of language inclusion, and it can be used to check, for example, whether for each behavior the long-run average response time of the system lies below the specified average response requirement. Hence it is crucial to identify some relevant class of quantitative languages for which this question is decidable. The other classical decision questions, such as emptiness, universality, and language equivalence, also have a natural quantitative extension. For example, the quantitative emptiness problem asks, given a quantitative language $L$ and a threshold $\nu \in \mathbb{Q}$, whether there exists some word $w$ such that $L(w) \ge \nu$, and the quantitative universality problem asks whether $L(w) \ge \nu$ for all words $w$. Note that universality is a special case of language inclusion (where the left-hand language is constant).
Weighted mean-payoff automata present a nice framework to express such quantitative properties [CDH08]. A weighted mean-payoff automaton is a finite automaton with numerical weights on transitions. The value of a word $w$ is the maximal value of all runs over $w$ (if the automaton is nondeterministic, then there may be many runs over $w$), and the value of a run $r$ is the long-run average of the weights that appear along $r$. A mean-payoff extension of alternating automata has been studied in [CDH-FCT09].
Deterministic, nondeterministic, and alternating mean-payoff automata are three classes of mean-payoff automata with increasing expressive power. However, none of these classes is closed under the four pointwise operations of max, min (which generalize union and intersection, respectively), sum, and numerical complement [CDH09b]. Moreover, while deterministic mean-payoff automata enjoy decidability of all quantitative decision problems [CDH08], the quantitative language-inclusion problem is undecidable for nondeterministic and alternating mean-payoff automata [DDGRT10], and thus all decision problems are undecidable for alternating mean-payoff automata. Hence, although mean-payoff automata provide a nice framework to express quantitative properties, no known class is both robust and decidable (see Table 1).
In this paper, we introduce a new class of quantitative languages that are defined by mean-payoff automaton expressions. An expression is either a deterministic mean-payoff automaton, or it is the max, min, or sum of two mean-payoff automaton expressions. Since deterministic mean-payoff automata are closed under complement, mean-payoff automaton expressions form a robust class that is closed under max, min, sum, and complement. We show that (a) all decision problems (quantitative emptiness, universality, inclusion, and equivalence) are decidable for mean-payoff automaton expressions; (b) mean-payoff automaton expressions are incomparable in expressive power with both nondeterministic and alternating mean-payoff automata (i.e., there are quantitative languages expressible by mean-payoff automaton expressions that are not expressible by alternating mean-payoff automata, and there are quantitative languages expressible by nondeterministic mean-payoff automata that are not expressible by mean-payoff automaton expressions); and (c) the properties of cut-point languages (i.e., the sets of words with value above a certain threshold) for deterministic automata carry over to mean-payoff automaton expressions; in particular, the cut-point language is $\omega$-regular when the threshold is isolated (i.e., some neighborhood around the threshold contains no word value). Moreover, mean-payoff automaton expressions can express all examples in the literature of quantitative properties using the mean-payoff measure [AlurDMW09, CDH09b, CGHIKPS08]. Along with the quantitative generalization of the classical decision problems, we also consider the notion of distance between two quantitative languages $L_A$ and $L_B$, defined as $\sup_{w} |L_A(w) - L_B(w)|$. When quantitative language inclusion does not hold between an implementation $L_A$ and a specification $L_B$, the distance provides relevant information about how close they are, as we may accept implementations that overspend the resource but we would prefer the least expensive ones.
We present the first algorithm to compute the distance between two quantitative languages: we show that the distance can be computed for mean-payoff automaton expressions.
Table 1: Closure properties and decision problems for the classes of mean-payoff automata.
Our approach to showing decidability of mean-payoff automaton expressions relies on the characterization and algorithmic computation of the value set of an expression $E$, i.e., the set of all values of words according to $L_E$. The value set can be viewed as an abstract representation of the quantitative language $L_E$, and we show that all decision problems, as well as cut-point-language and distance computations, can be solved efficiently once we have this set.
First, we present a precise characterization of the value set for quantitative languages defined by mean-payoff automaton expressions. In particular, we show that it is not sufficient to construct the convex hull of the set of vector values of simple cycles in the mean-payoff automata occurring in $E$; we essentially need to apply an operator $F_{\min}$ which, given a set $S$, computes the set of points that can be obtained by taking the pointwise minimum, in each coordinate, of points of $S$. We show that while we need to compute the set $F_{\min}(\mathrm{conv}(S))$ to obtain the value set, and while this set is always convex, it is not always the case that $F_{\min}(\mathrm{conv}(S)) = \mathrm{conv}(F_{\min}(S))$ (which would immediately give an algorithm to compute it). This may appear counter-intuitive, because the equality holds in $\mathbb{R}^2$, but we show that it fails in $\mathbb{R}^3$ (Example 2).
Second, we provide algorithmic solutions to compute $F_{\min}(\mathrm{conv}(S))$ for a finite set $S$. We first present a constructive procedure that, given $S$, constructs a finite set of points $S'$ such that $\mathrm{conv}(S') = F_{\min}(\mathrm{conv}(S))$. The explicit construction exhibits interesting properties of the set $F_{\min}(\mathrm{conv}(S))$; however, the procedure itself is computationally expensive. We then present an elegant and geometric construction of $F_{\min}(\mathrm{conv}(S))$ as a set of linear constraints. The computation of $F_{\min}(\mathrm{conv}(S))$ is a new problem in computational geometry, and the solutions we present could be of independent interest. Using the algorithm to compute $F_{\min}(\mathrm{conv}(S))$, we show that all decision problems for mean-payoff automaton expressions are decidable. Due to lack of space, most proofs are given in the appendix.
Related works. Quantitative languages were first studied over finite words in the context of probabilistic automata [Rabin63] and weighted automata [Wautomata]. Several works have generalized the theory of weighted automata to infinite words (see [DrosteK03, DrosteGastin07, LatticeAutomata07, Bojanczyk10] and [HandbookWA] for a survey), but none of them considered mean-payoff conditions. Examples where the mean-payoff measure has been used to specify long-run behaviours of systems can be found in game theory [EM79, ZwickP96] and in Markov decision processes [Alfaro98]. Mean-payoff automata as a specification language were first investigated in [CDH08, CDH09b, CDH-FCT09], and extended in [AlurDMW09] to construct a new class of (non-quantitative) languages of infinite words (the multi-threshold mean-payoff languages), obtained by applying a query to a mean-payoff language, and for which emptiness is decidable. It turns out that a richer language of queries can be expressed using mean-payoff automaton expressions (together with decidability of the emptiness problem). A detailed comparison with the results of [AlurDMW09] is given in Section 5. Moreover, we provide algorithmic solutions to the quantitative language inclusion and equivalence problems and to distance computation, which have no counterpart for non-quantitative languages. Related notions of metrics have been addressed in stochastic games [AlfaroMRS07] and probabilistic processes [DesharnaisGJP99, VidalTHCC05].
2 Mean-Payoff Automaton Expressions
Quantitative languages. A quantitative language $L$ over a finite alphabet $\Sigma$ is a function $L : \Sigma^\omega \to \mathbb{R}$. Given two quantitative languages $L_1$ and $L_2$ over $\Sigma$, we denote by $\max(L_1, L_2)$ (resp., $\min(L_1, L_2)$, $\mathrm{sum}(L_1, L_2)$, and $-L_1$) the quantitative language that assigns $\max(L_1(w), L_2(w))$ (resp., $\min(L_1(w), L_2(w))$, $L_1(w) + L_2(w)$, and $-L_1(w)$) to each word $w \in \Sigma^\omega$. The quantitative language $-L$ is called the complement of $L$. The $\max$ and $\min$ operators for quantitative languages correspond respectively to the least upper bound and greatest lower bound for the pointwise order $\le$ such that $L_1 \le L_2$ if $L_1(w) \le L_2(w)$ for all $w \in \Sigma^\omega$. Thus, they generalize respectively the union and intersection operators for classical boolean languages.
Weighted automata. A $\mathbb{Q}$-weighted automaton is a tuple $A = \langle Q, q_I, \Sigma, \delta, \mathrm{wt} \rangle$, where
$Q$ is a finite set of states, $q_I \in Q$ is the initial state, and $\Sigma$ is a finite alphabet;
$\delta \subseteq Q \times \Sigma \times Q$ is a finite set of labelled transitions. We assume that $\delta$ is total, i.e., for all $q \in Q$ and $\sigma \in \Sigma$, there exists $q'$ such that $(q, \sigma, q') \in \delta$;
$\mathrm{wt} : \delta \to \mathbb{Q}$ is a weight function, where $\mathbb{Q}$ is the set of rational numbers. We assume that rational numbers are encoded as pairs of integers in binary.
We say that $A$ is deterministic if for all $q \in Q$ and $\sigma \in \Sigma$, there exists $(q, \sigma, q') \in \delta$ for exactly one $q' \in Q$. We sometimes call automata nondeterministic to emphasize that they are not necessarily deterministic.
Words and runs. A word $w \in \Sigma^\omega$ is an infinite sequence of letters from $\Sigma$. A lasso-word in $\Sigma^\omega$ is an ultimately periodic word of the form $w = u \cdot v^\omega$, where $u \in \Sigma^*$ is a finite prefix and $v \in \Sigma^+$ is nonempty. A run of $A$ over an infinite word $w = \sigma_1 \sigma_2 \ldots$ is an infinite sequence $r = q_0 \sigma_1 q_1 \sigma_2 q_2 \ldots$ of states and letters such that (i) $q_0 = q_I$, and (ii) $(q_i, \sigma_{i+1}, q_{i+1}) \in \delta$ for all $i \ge 0$. We denote by $\gamma(r) = v_0 v_1 \ldots$ the sequence of weights that occur in $r$, where $v_i = \mathrm{wt}(q_i, \sigma_{i+1}, q_{i+1})$ for all $i \ge 0$.
Quantitative language of mean-payoff automata. The mean-payoff value (or limit-average) of a sequence $\bar v = v_0 v_1 \ldots$ of real numbers is either

$\mathrm{LimInfAvg}(\bar v) = \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} v_i$, or $\mathrm{LimSupAvg}(\bar v) = \limsup_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} v_i$.
Note that if we delete or insert finitely many values in an infinite sequence of numbers, its limit-average does not change; and if the sequence is ultimately periodic, then the $\mathrm{LimInfAvg}$ and $\mathrm{LimSupAvg}$ values coincide (and correspond to the mean of the weights on the periodic part of the sequence).
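The note above can be checked directly for ultimately periodic weight sequences, where the limit-average is simply the mean of the periodic part; a minimal sketch (the function name and representation are ours, not from the paper):

```python
from fractions import Fraction

def limit_average(prefix, period):
    """Limit-average of the ultimately periodic weight sequence
    prefix . period^omega: the finite prefix is irrelevant, and LimInfAvg
    and LimSupAvg coincide with the mean of the periodic part."""
    assert period, "the periodic part must be nonempty"
    return Fraction(sum(Fraction(x) for x in period), len(period))

# The prefix does not affect the value:
print(limit_average([100, -7], [1, 2, 3]))  # 2
print(limit_average([], [1, 2, 3]))         # 2
```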
For $\mathrm{Val} \in \{\mathrm{LimInfAvg}, \mathrm{LimSupAvg}\}$, the quantitative language $L_A$ of $A$ is defined by $L_A(w) = \sup \{ \mathrm{Val}(\gamma(r)) \mid r \text{ is a run of } A \text{ over } w \}$ for all $w \in \Sigma^\omega$. Accordingly, the automaton $A$ and its quantitative language $L_A$ are called LimInfAvg or LimSupAvg. Note that for deterministic automata, we have $L_A(w) = \mathrm{Val}(\gamma(r))$, where $r$ is the unique run of $A$ over $w$.
We omit the weight function when it is clear from the context, and we write the mean-payoff value without specifying LimInfAvg or LimSupAvg when the two coincide (e.g., for runs with a lasso shape).
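For a deterministic automaton and a lasso word $u \cdot v^\omega$, the unique run eventually cycles: simulating $v$ repeatedly must revisit a state at a $v$-boundary, and the value is the mean weight of the cycle so traversed. A minimal sketch (the dictionary encoding of $\delta$ and $\mathrm{wt}$ is ours):

```python
from fractions import Fraction

def lasso_value(delta, wt, q0, u, v):
    """Value of the unique run of a deterministic automaton over u.v^omega.
    delta: dict (state, letter) -> state (assumed total on the reachable part);
    wt:    dict (state, letter) -> weight.
    Returns the mean weight of the cycle the run eventually traverses."""
    q = q0
    for a in u:                      # consume the finite prefix
        q = delta[(q, a)]
    seen = {}                        # state at each v-boundary -> index
    weights = []                     # total weight of each traversed copy of v
    while q not in seen:
        seen[q] = len(weights)
        total = Fraction(0)
        for a in v:
            total += Fraction(wt[(q, a)])
            q = delta[(q, a)]
        weights.append(total)
    cycle = weights[seen[q]:]        # the run cycles over these copies of v
    return sum(cycle) / (len(cycle) * len(v))

# A toy deterministic automaton: weight 1 on 'a', weight 0 on 'b'.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
wt    = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
print(lasso_value(delta, wt, 0, "b", "ab"))  # 1/2
```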
Decision problems and distance. We consider the following classical decision problems for quantitative languages, assuming an effective presentation of quantitative languages (such as mean-payoff automata, or automaton expressions defined later). Given a quantitative language $L$ and a threshold $\nu \in \mathbb{Q}$, the quantitative emptiness problem asks whether there exists a word $w \in \Sigma^\omega$ such that $L(w) \ge \nu$, and the quantitative universality problem asks whether $L(w) \ge \nu$ for all words $w \in \Sigma^\omega$.
Given two quantitative languages $L_1$ and $L_2$, the quantitative language-inclusion problem asks whether $L_1(w) \le L_2(w)$ for all words $w \in \Sigma^\omega$, and the quantitative language-equivalence problem asks whether $L_1(w) = L_2(w)$ for all words $w \in \Sigma^\omega$. Note that universality is a special case of language inclusion where $L_1$ is constant. Finally, the distance between $L_1$ and $L_2$ is $\sup_{w \in \Sigma^\omega} |L_1(w) - L_2(w)|$. It measures how close an implementation $L_1$ is to a specification $L_2$.
It is known that quantitative emptiness is decidable for nondeterministic mean-payoff automata [CDH08], while decidability was open for alternating mean-payoff automata, as well as for the quantitative language-inclusion problem for nondeterministic mean-payoff automata. Recent undecidability results on games with imperfect information and mean-payoff objectives [DDGRT10] entail that these problems are undecidable (see Theorem 5.2).
Robust quantitative languages. A class of quantitative languages is robust if the class is closed under the max, min, sum, and complement operations. These closure properties allow quantitative languages from a robust class to be described compositionally. While nondeterministic LimInfAvg- and LimSupAvg-automata are closed under the max operation, they are not closed under min and complement [CDH09b]. Alternating LimInfAvg- and LimSupAvg-automata are closed under max and min, but are not closed under complement [CDH09b].
Mean-payoff automaton expressions. A mean-payoff automaton expression $E$ is obtained by the following grammar rule:

$E ::= A \mid \max(E, E) \mid \min(E, E) \mid \mathrm{sum}(E, E)$

where $A$ is a deterministic LimInfAvg- or LimSupAvg-automaton. The quantitative language $L_E$ of a mean-payoff automaton expression $E$ is $L_A$ if $E = A$ is a deterministic automaton, and $\mathrm{op}(L_{E_1}, L_{E_2})$ if $E = \mathrm{op}(E_1, E_2)$ for $\mathrm{op} \in \{\max, \min, \mathrm{sum}\}$. By definition, the class of mean-payoff automaton expressions is closed under max, min, and sum. Closure under complement follows from the fact that the complement of $\max(E_1, E_2)$ is $\min(-E_1, -E_2)$, the complement of $\min(E_1, E_2)$ is $\max(-E_1, -E_2)$, the complement of $\mathrm{sum}(E_1, E_2)$ is $\mathrm{sum}(-E_1, -E_2)$, and the complement of a deterministic LimInfAvg-automaton can be defined by the same automaton with opposite weights interpreted as a LimSupAvg-automaton, and vice versa, since $-\limsup(\bar v) = \liminf(-\bar v)$. Note that arbitrary linear combinations of deterministic mean-payoff automaton expressions (expressions such as $c_1 E_1 + c_2 E_2$, where $c_1, c_2$ are rational constants) can be obtained for free, since scaling the weights of a mean-payoff automaton by a positive rational factor results in a quantitative language scaled by the same factor.
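On a fixed word, an expression can be evaluated by structural recursion over this grammar, with the complement handled by negating the operand's value; a small sketch over plain numbers (the tuple encoding is ours, and each leaf value is assumed to have been computed already, e.g. by simulating the deterministic automaton on the word):

```python
from fractions import Fraction

# An expression is a leaf value (the automaton's value on the word at hand),
# or ("max"|"min"|"sum", e1, e2), or ("neg", e) for the numerical complement.
def eval_expr(e):
    if not isinstance(e, tuple):
        return e
    if e[0] == "neg":
        return -eval_expr(e[1])
    l, r = eval_expr(e[1]), eval_expr(e[2])
    return {"max": max, "min": min, "sum": lambda x, y: x + y}[e[0]](l, r)

# sum(min(A1, A2), -A3) with leaf values 1/2, 1/3, 1/4:
e = ("sum", ("min", Fraction(1, 2), Fraction(1, 3)), ("neg", Fraction(1, 4)))
print(eval_expr(e))  # 1/12
```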
3 The Vector Set of Mean-Payoff Automaton Expressions
Given a mean-payoff automaton expression $E$, let $A_1, \ldots, A_n$ be the deterministic weighted automata occurring in $E$. The vector set of $E$ is the set $V_E = \{ \langle L_{A_1}(w), \ldots, L_{A_n}(w) \rangle \mid w \in \Sigma^\omega \}$ of tuples of values of words according to each automaton $A_i$. In this section, we characterize the vector set of mean-payoff automaton expressions, and in Section 4 we give an algorithmic procedure to compute this set. This will be useful to establish the decidability of all decision problems, and to compute the distance between mean-payoff automaton expressions. Given a vector $v \in \mathbb{R}^n$, we denote by $\|v\|$ the norm of $v$.
The synchronized product of $A_1, \ldots, A_n$, where $A_i = \langle Q_i, q_I^i, \Sigma, \delta_i, \mathrm{wt}_i \rangle$, is the $\mathbb{Q}^n$-weighted automaton $A_E = \langle Q_1 \times \cdots \times Q_n, (q_I^1, \ldots, q_I^n), \Sigma, \delta, \mathrm{wt} \rangle$ such that $t = ((q_1, \ldots, q_n), \sigma, (q'_1, \ldots, q'_n)) \in \delta$ if $t_i = (q_i, \sigma, q'_i) \in \delta_i$ for all $1 \le i \le n$, and $\mathrm{wt}(t) = (\mathrm{wt}_1(t_1), \ldots, \mathrm{wt}_n(t_n))$. In the sequel, we assume that all $A_i$'s are deterministic LimInfAvg-automata (hence, $A_E$ is deterministic) and that the underlying graph of the automaton $A_E$ has only one strongly connected component (scc). We show later how to obtain the vector set without these restrictions.
For each simple cycle $\rho$ in $A_E$, let the vector value of $\rho$ be the mean of the tuples labelling the edges of $\rho$. To each simple cycle $\rho$ in $A_E$ corresponds a (not necessarily simple) cycle in each $A_i$, and the vector value of $\rho$ contains the mean value of its weights in each $A_i$. We denote by $S_E$ the (finite) set of vector values of simple cycles in $A_E$. Let $\mathrm{conv}(S_E)$ be the convex hull of $S_E$.
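On a small product graph, the vector values of simple cycles can be enumerated directly by a DFS that never revisits a state on the current path; a sketch (the edge-list encoding and canonicalization by minimal start state are ours):

```python
from fractions import Fraction

def simple_cycle_values(edges):
    """edges: list of (source, target, weight_vector).
    Returns the set of vector values (componentwise mean of the edge
    weight vectors) of all simple cycles in the graph."""
    values = set()
    def dfs(start, node, path_weights, on_path):
        for (u, v, w) in edges:
            if u != node:
                continue
            if v == start:                        # closed a simple cycle
                cyc = path_weights + [w]
                d = len(cyc)
                values.add(tuple(Fraction(sum(col), d) for col in zip(*cyc)))
            elif v not in on_path and v > start:  # count each cycle once,
                dfs(start, v, path_weights + [w], on_path | {v})  # from its min state
    for s in {u for (u, _, _) in edges}:
        dfs(s, s, [], {s})
    return values

# Two self-loops on one state, as in a product over a two-letter alphabet:
vals = simple_cycle_values([(0, 0, (1, 0)), (0, 0, (0, 1))])
print(vals)  # the two vector values (1, 0) and (0, 1)
```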
Lemma 1. Let $E$ be a mean-payoff automaton expression. The set $\mathrm{conv}(S_E)$ is the closure of the set of vector values of the lasso-words.
The vector set of $E$ may contain more values than the convex hull $\mathrm{conv}(S_E)$, as shown by the following example.
Consider the expression $E = \min(A_1, A_2)$, where $A_1$ and $A_2$ are deterministic LimInfAvg-automata (see Figure 1). The product $A_E = A_1 \times A_2$ has two simple cycles, with respective vector values $(1, 0)$ (on letter 'a') and $(0, 1)$ (on letter 'b'). The set $\mathrm{conv}(S_E)$ is the solid segment on Figure 1 and contains the vector values of all lasso-words. However, other vector values can be obtained: consider the word $w = a^{n_1} b^{n_2} a^{n_3} b^{n_4} \ldots$, where the block lengths $n_i$ grow fast enough that each block dominates the whole prefix before it. It is easy to see that the value of $w$ according to $A_1$ is $0$, because the average number of 'a's in the prefixes ending with a large $b$-block tends to $0$. Since $A_1$ is a LimInfAvg-automaton, the value of $w$ in $A_1$ is $0$, and by a symmetric argument the value of $w$ in $A_2$ is also $0$. Therefore the vector $(0, 0)$ is in the vector set of $E$. Note that $(0, 0)$ is the pointwise minimum of $(1, 0)$ and $(0, 1)$, i.e., $(0, 0) = f_{\min}((1, 0), (0, 1))$, where each coordinate of $f_{\min}(x, y)$ is the minimum of the corresponding coordinates of $x$ and $y$. In fact, the vector set is the whole triangular region in Figure 1, i.e., $V_E = F_{\min}(\mathrm{conv}(S_E))$.
We generalize $f_{\min}$ to finite sets of points $P = \{p_1, \ldots, p_k\}$ in $\mathbb{R}^n$ as follows: $f_{\min}(P)$ is the point whose $i$-th coordinate is the minimum of the $i$-th coordinates of the points in $P$, for $1 \le i \le n$. For an arbitrary set $S \subseteq \mathbb{R}^n$, define $F_{\min}(S) = \{ f_{\min}(P) \mid P \subseteq S \text{ finite and nonempty} \}$. As illustrated in Example 1, the next lemma shows that the vector set is equal to $F_{\min}(\mathrm{conv}(S_E))$.
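For a finite set $S$, the pointwise minima of all nonempty subsets of $S$ form a finite set that can be enumerated directly; a minimal sketch of $f_{\min}$ and of $F_{\min}$ restricted to a finite input:

```python
from itertools import combinations

def fmin(points):
    """Pointwise minimum of a nonempty collection of d-dimensional points."""
    return tuple(min(col) for col in zip(*points))

def subset_minima(S):
    """All pointwise minima fmin(P) over nonempty subsets P of the finite set S."""
    S = list(S)
    return {fmin(P) for r in range(1, len(S) + 1)
                    for P in combinations(S, r)}

print(sorted(subset_minima([(0, 2), (2, 0)])))  # [(0, 0), (0, 2), (2, 0)]
```

Note that for an infinite set such as $\mathrm{conv}(S)$ this enumeration is not available, which is exactly why Section 4 develops the explicit and linear-constraint constructions.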
Lemma 2. Let $E$ be a mean-payoff automaton expression built from deterministic LimInfAvg-automata $A_1, \ldots, A_n$, and such that $A_E$ has only one strongly connected component. Then, the vector set of $E$ is $V_E = F_{\min}(\mathrm{conv}(S_E))$.
For a general mean-payoff automaton expression $E$ (with both deterministic LimInfAvg- and LimSupAvg-automata, and with a multi-scc underlying graph), we can use the result of Lemma 2 as follows. We replace each LimSupAvg-automaton $A_i$ occurring in $E$ by the LimInfAvg-automaton obtained from $A_i$ by replacing every weight by its opposite. The duality of $\liminf$ and $\limsup$ yields $L_{A_i} = -L_{\bar A_i}$. In each strongly connected component $C$ of the underlying graph of $A_E$, we compute $F_{\min}(\mathrm{conv}(S_C))$ (where $S_C$ is the set of vector values of the simple cycles in $C$) and apply the transformation $x \mapsto -x$ on every coordinate where the automaton was originally a LimSupAvg-automaton. The union of the sets so obtained, where $C$ ranges over the strongly connected components of $A_E$, gives the vector set of $E$.
Theorem 3.1. Let $E$ be a mean-payoff automaton expression built from deterministic LimInfAvg-automata, and let $\mathcal{C}$ be the set of strongly connected components in $A_E$. For a strongly connected component $C \in \mathcal{C}$, let $S_C$ denote the set of vector values of the simple cycles in $C$. The vector set of $E$ is $V_E = \bigcup_{C \in \mathcal{C}} F_{\min}(\mathrm{conv}(S_C))$.
4 Computation of $F_{\min}(\mathrm{conv}(S))$ for a Finite Set $S$
It follows from Theorem 3.1 that the vector set of a mean-payoff automaton expression can be obtained as a union of sets $F_{\min}(\mathrm{conv}(S))$, where $S$ is a finite set of points. However, the set $\mathrm{conv}(S)$ being in general infinite, it is not immediate that $F_{\min}(\mathrm{conv}(S))$ is computable. In this section we consider the problem of computing $F_{\min}(\mathrm{conv}(S))$ for a finite set $S$. In Subsection 4.1 we present an explicit construction, and in Subsection 4.2 we give a geometric construction of the set as a set of linear constraints. We first present some properties of the set $F_{\min}(\mathrm{conv}(S))$.
Lemma 3. If $S$ is a convex set, then $F_{\min}(S)$ is convex.
By Lemma 3, the set $F_{\min}(\mathrm{conv}(S))$ is convex, and since $F_{\min}$ is a monotone operator and $S \subseteq \mathrm{conv}(S)$, we have $F_{\min}(S) \subseteq F_{\min}(\mathrm{conv}(S))$, and thus $\mathrm{conv}(F_{\min}(S)) \subseteq F_{\min}(\mathrm{conv}(S))$. The following proposition states that in two dimensions the above sets coincide.
Let $S \subseteq \mathbb{R}^2$ be a finite set. Then, $\mathrm{conv}(F_{\min}(S)) = F_{\min}(\mathrm{conv}(S))$.
We show in the following example that the above proposition does not hold in three dimensions, i.e., we exhibit a finite set $S \subseteq \mathbb{R}^3$ such that $\mathrm{conv}(F_{\min}(S)) \ne F_{\min}(\mathrm{conv}(S))$.
We show that in three dimensions there is a finite set such that . Let with , , and . Then , , and . Therefore . Consider . We have and . Hence . We now show that does not belong to . Consider such that in . Since the third coordinate is non-negative for , and , it follows that if or , then the third coordinate of is positive. If and , then we have two cases: (a) if , then the first coordinate of is negative; and (b) if , then the second coordinate of is 1. It follows that is not in . ∎
4.1 Explicit construction
Example 2 shows that in general $\mathrm{conv}(F_{\min}(S)) \ne F_{\min}(\mathrm{conv}(S))$. In this section we present an explicit construction that, given a finite set $S$, constructs a finite set $S'$ such that (a) $S \subseteq S'$ and (b) $\mathrm{conv}(F_{\min}(S')) = F_{\min}(\mathrm{conv}(S'))$. It would follow that $F_{\min}(\mathrm{conv}(S)) = \mathrm{conv}(F_{\min}(S'))$. Since the convex hull of a finite set is computable and $F_{\min}(S')$ is finite, this gives us an algorithm to compute $F_{\min}(\mathrm{conv}(S))$. For simplicity, for the rest of the section we write $F$ for $F_{\min}$ and $f$ for $f_{\min}$ (i.e., we drop the min from the subscript). Recall that and let . We consider .
Let . Then, and .
Iteration of a construction. We will present a construction whose input is a finite set of points, and whose output satisfies the following conditions:
(Condition C1). The output is finite and a subset of .
(Condition C2). .
Before presenting the construction, we first show how to iterate it to obtain the following result: given a finite set of points, we construct a finite set of points such that .
Iterating . Consider a finite set of points , and let and . Then we have
and hence ; and
By iteration we obtain that for and as above we have
Thus for we have
By (2) above and Lemma 4, we obtain
By (1) above we have and hence . Thus we have
where the last equality follows since by Lemma 3 we have is convex. Since we have
Hence by (A) and (B) above we have . Thus, given the finite set , we obtain the finite set such that (a) and (b) . We now present the construction to complete the result.
The construction. Given a finite set of points, the output is obtained by adding points to the input in the following way:
For all , we consider all -dimensional coordinate planes supported by a point in ;
We intersect each coordinate plane with and the result is a convex polytope ;
We add the corners (or extreme points) of each polytope to the set.
The proof that the above construction satisfies condition C1 and C2 is given in the appendix, and thus we have the following result.
Given a finite set such that , the following assertion holds: a finite set with can be computed in time such that (a) and (b) .
4.2 Linear constraint construction
In the previous subsection we presented an explicit construction of a finite set of points whose convex hull gives us $F(\mathrm{conv}(S))$. The explicit construction illuminates properties of the set $F(\mathrm{conv}(S))$; however, it is computationally expensive. In this subsection we present an efficient geometric construction for the computation of $F(\mathrm{conv}(S))$ for a finite set $S$. Instead of constructing a finite set $S'$ such that $\mathrm{conv}(S') = F(\mathrm{conv}(S))$, we represent $F(\mathrm{conv}(S))$ as a finite set of linear constraints.
Consider the positive orthant anchored at the origin in $\mathbb{R}^n$, that is, the set of points with non-negative coordinates: $O^+ = \{ x \in \mathbb{R}^n \mid x_i \ge 0 \text{ for all } 1 \le i \le n \}$. Similarly, the negative orthant $O^-$ is the set of points with non-positive coordinates. Using vector addition, we write $x + O^+$ for the positive orthant anchored at $x$. Similarly, we write $x + O^-$ for the negative orthant anchored at $x$. The positive and negative orthants satisfy the following simple duality relation: $y \in x + O^+$ iff $x \in y + O^-$.
Note that $x + O^+$ is an $n$-dimensional convex polyhedron. For each $1 \le i \le n$, we consider the $(n-1)$-dimensional face spanned by the coordinate axes except the $i$-th one, that is, $F_i(x) = \{ y \in x + O^+ \mid y_i = x_i \}$.
We say that $x + O^+$ is supported by a set $Y$ if $F_i(x) \cap Y \ne \emptyset$ for every $1 \le i \le n$. Assuming $x + O^+$ is supported by $Y$, we can construct a set $P$ by collecting one point per $(n-1)$-dimensional face of the orthant and get $x = f(P)$. It is also allowed that two faces contribute the same point to $P$. Similarly, if $x = f(P)$ for a finite subset $P \subseteq Y$, then the positive orthant anchored at $x$ is supported by $Y$. Hence, we get the following lemma.
Lemma 5 (Orthant Lemma)
$x \in F(Y)$ iff $x + O^+$ is supported by $Y$.
We use the Orthant Lemma to construct $F(Y)$. We begin by describing the set of points $x$ for which the $i$-th face of the positive orthant anchored at $x$ has a non-empty intersection with $Y$. Define $Q_i$ as the set of points of the form $y + z$, where $y \in Y$ and $z \in O^-$ with $z_i = 0$.
Lemma 6 (Face Lemma)
$F_i(x) \cap Y \ne \emptyset$ iff $x \in Q_i$.
Let $y$ be a point in the intersection, that is, $y \in F_i(x) \cap Y$. Using the duality relation for the $(n-1)$-dimensional orthant, we get $x = y + z$ with $z \in O^-$ and $z_i = 0$. By definition, $x$ is then a member of $Q_i$, and hence the claim follows. ∎
It is now easy to describe the set $F(Y)$ defined in our problem statement.
Lemma 7 (Characterization)
$F(Y) = \bigcap_{i=1}^{n} Q_i$.
By the Orthant Lemma, $x \in F(Y)$ iff $x + O^+$ is supported by $Y$. Equivalently, $F_i(x) \cap Y \ne \emptyset$ for all $1 \le i \le n$. By the Face Lemma, this is equivalent to $x$ belonging to the common intersection of the sets $Q_i$. ∎
Algorithm for the computation of $F(\mathrm{conv}(S))$. Following the construction, we get an algorithm that computes $F(Y)$ for the convex hull $Y = \mathrm{conv}(S)$ of a finite set $S$ of points in $\mathbb{R}^n$. We first represent $Y$ as an intersection of half-spaces: we require at most half-spaces (linear constraints). It follows that each $Q_i$ can be expressed as linear constraints, and hence $F(Y)$ can be expressed as linear constraints. This gives us the following result.
Given a finite set of points in , we can construct in time linear constraints that represent .
5 Mean-Payoff Automaton Expressions are Decidable
Several problems on quantitative languages can be solved for the class of mean-payoff automaton expressions using the vector set: the decision problems of quantitative emptiness and universality, and quantitative language inclusion and equivalence, are all decidable, as are questions related to cut-point languages and computing the distance between mean-payoff languages.
Decision problems and distance.
From the vector set $V_E$, we can compute the value set $\{ L_E(w) \mid w \in \Sigma^\omega \}$ of values of words according to the quantitative language of $E$ as follows. The set is obtained by successive applications of min-, max- and sum-projections, defined by
and analogously for the max- and sum-projections. For example, applying the min-projection to the first two coordinates of $V_E$ gives the set of word values of the mean-payoff automaton expression $\min(A_1, A_2)$.
Assuming a representation of the polytopes of as a boolean combination of linear constraints, the projection is represented by the formula
where is a substitution that replaces every occurrence of by the expression . Since linear constraints over the reals admit effective elimination of existential quantification, the formula can be transformed into an equivalent boolean combination of linear constraints without existential quantification. The same applies to max- and sum-projections.
Successive applications of min-, max- and sum-projections (following the structure of the mean-payoff automaton expression $E$) give the value set as a boolean combination of linear constraints; hence it is a finite union of intervals. From this set, it is easy to decide the quantitative emptiness problem and the quantitative universality problem: there exists a word $w$ such that $L_E(w) \ge \nu$ if and only if the value set contains a value at least $\nu$, and $L_E(w) \ge \nu$ holds for all words $w$ if and only if every value in the value set is at least $\nu$.
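Once the value set is in hand as a finite union of closed intervals, both decision problems reduce to simple comparisons; a minimal sketch (the interval-list representation is ours):

```python
def is_nonempty(intervals, nu):
    """Quantitative emptiness: is there a word w with L(w) >= nu?
    intervals: list of (lo, hi) closed intervals forming the value set."""
    return any(hi >= nu for (lo, hi) in intervals)

def is_universal(intervals, nu):
    """Quantitative universality: is L(w) >= nu for all words w?"""
    return all(lo >= nu for (lo, hi) in intervals)

value_set = [(0, 1), (2, 3)]        # value set of some expression E
print(is_nonempty(value_set, 2.5))  # True
print(is_universal(value_set, 0))   # True
print(is_universal(value_set, 1))   # False
```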
In the same way, we can decide the quantitative language-inclusion problem "is $L_{E_1}(w) \le L_{E_2}(w)$ for all words $w$?" by a reduction to the universality problem for the expression $\mathrm{sum}(E_2, -E_1)$ and threshold $0$, since mean-payoff automaton expressions are closed under sum and complement. The quantitative language-equivalence problem is then obviously also decidable.
Finally, the distance between the quantitative languages of $E_1$ and $E_2$ can be computed as the largest number (in absolute value) in the value set of $\mathrm{sum}(E_1, -E_2)$. As a corollary, this distance is always a rational number.
Comparison with [AlurDMW09].
The work in [AlurDMW09] considers deterministic mean-payoff automata with multiple payoffs. The weight function in such an automaton is of the form $\mathrm{wt} : \delta \to \mathbb{Q}^d$. The value of a finite sequence of transitions is the mean of the corresponding $d$-dimensional weight tuples, that is, a $d$-dimensional vector. The "value" associated to an infinite run (and thus also to the corresponding word, since the automaton is deterministic) is the set of accumulation points of the sequence of values of its finite prefixes.
In [AlurDMW09], a query language on the set of accumulation points is used to define multi-threshold mean-payoff languages. For $1 \le i \le d$, let $p_i$ be the usual projection along the $i$-th coordinate. A query is a boolean combination of atomic threshold conditions on the minimum and maximum of the $i$-th projection of the set of accumulation points, compared with a rational threshold. A word is accepted if the set of accumulation points of its (unique) run satisfies the query.
Emptiness is decidable for such multi-threshold mean-payoff languages, by an argument based on the computation
of the convex hull of the vector values of the simple cycles in the automaton [AlurDMW09] (see also Lemma 1).
We have shown that this convex hull