Probabilistic Weighted Automata
Abstract
Nondeterministic weighted automata are finite automata with numerical weights on transitions. They define quantitative languages $L$ that assign to each word $w$ a real number $L(w)$. The value of an infinite word $w$ is computed as the maximal value of all runs over $w$, and the value of a run as the maximum, limsup, liminf, limit average, or discounted sum of the transition weights. We introduce probabilistic weighted automata, in which the transitions are chosen in a randomized (rather than nondeterministic) fashion. Under the almost-sure semantics (resp. positive semantics), the value of a word $w$ is the largest real $v$ such that the runs over $w$ have value at least $v$ with probability 1 (resp. positive probability).
We study the classical questions of automata theory for probabilistic weighted automata: emptiness and universality, expressiveness, and closure under various operations on languages. For quantitative languages, emptiness and universality are defined as whether the value of some (resp. every) word exceeds a given threshold. We prove some of these questions to be decidable, and others undecidable. Regarding expressive power, we show that probabilities allow us to define a wide variety of new classes of quantitative languages, except for discounted-sum automata, where probabilistic choice is no more expressive than nondeterminism. Finally, we give an almost complete picture of the closure of various classes of probabilistic weighted automata for the following pointwise operations on quantitative languages: max, min, sum, and numerical complement.
1 Introduction
In formal design, specifications describe the set of correct behaviours of a system. An implementation satisfies a specification if all its behaviours are correct. If we view a behaviour as a word, then a specification is a language, i.e., a set of words. Languages can be specified using finite automata, for which a large number of results and techniques are known; see [automata, Vardi95]. We call them boolean languages because a given behaviour is either good or bad according to the specification. Boolean languages are useful to specify functional requirements.
In a generalization of this approach, we consider quantitative languages, where each word is assigned a real number. The value of a word can be interpreted as the amount of some resource (e.g., memory or power) needed to produce it, or as a quality measurement for the corresponding behaviour [CAHS03, CAFH+06]. Therefore, quantitative languages are useful to specify nonfunctional requirements such as resource constraints, reliability properties, or levels of quality (such as quality of service).
Quantitative languages can be defined using (nondeterministic) weighted automata, i.e., finite automata with numerical weights on transitions [CulikK94, EsikK04]. In [CDH08a], we studied quantitative languages of infinite words and defined the value of an infinite word as the maximal value of all runs of an automaton over (if the automaton is nondeterministic, then there may be many runs over ). The value of a run is a function of the infinite sequence of weights that appear along . There are several natural functions to consider, such as , , , limit average, and discounted sum of weights. For example, peak power consumption can be modeled as the maximum of a sequence of weights representing power usage; energy use, as a discounted sum; average response time, as a limit average [CCHK+05, CAHS03].
In this paper, we consider probabilistic weighted automata as generators of quantitative languages. In such automata, nondeterministic choice is replaced by probability distributions on successor states. The value of an infinite word $w$ is defined to be the maximal value $v$ such that the set of runs over $w$ with value at least $v$ has either positive probability (positive semantics), or probability 1 (almost-sure semantics). This simple definition combines in a general model the natural quantitative extensions of logics and automata [AlfaroFS04, DrosteK06, CDH08a], and the probabilistic models of automata for which boolean properties have been well studied [Rabin63, BlondelC03, BG05]. Note that the probabilistic Büchi and co-Büchi automata of [BG05] are a special case of probabilistic weighted automata with weights 0 and 1 only (the value of an infinite run being computed as the limsup or liminf of the weights, respectively). While quantitative objectives are standard in the branching-time context of stochastic games [Sha53, EM79, FV97, CAHS03, CJH04, Gimbert06], we are not aware of any model combining probabilities and weights in the linear-time context of words and languages, though such a model is very natural for the specification of quantitative properties. Consider the specification of two types of communication channels given in Figure 1. One has a low cost per send operation and low reliability (a failure occurs in 10% of the cases and entails an increased cost for the operation), while the second is expensive, but its reliability is high (though the cost of a failure is prohibitive). In the figure, some self-loops (over ack and send) are omitted. Natural questions can be formulated in this framework, such as whether the average cost of every word is really smaller in the low-cost channel, or how to construct a probabilistic weighted automaton that assigns to each word the minimum of the average costs of the two types of channels.
In this paper, we attempt a comprehensive study of these fundamental questions about the expressive power, closure properties, and decision problems for probabilistic weighted automata.
First, we compare the expressiveness of the various classes of probabilistic and nondeterministic weighted automata over infinite words. For limsup, liminf, and limit average, we show that a wide variety of new classes of quantitative languages can be defined using probabilities, which are not expressible using nondeterminism. Our results rely on reachability properties of closed recurrent sets in Markov chains. For discounted sum, we show that probabilistic weighted automata under the positive semantics have the same expressive power as nondeterministic weighted automata, while under the almost-sure semantics, they have the same expressive power as weighted automata with universal branching, where the value of a word is the minimal (instead of maximal) value of all runs. The question of whether the positive semantics of weighted limit-average automata is more expressive than nondeterminism remains open.
Second, we give an almost complete picture of the closure of probabilistic weighted automata under the pointwise operations of maximum, minimum, and sum for quantitative languages. We also define the complement $L^c$ of a quantitative language $L$ by $L^c(w) = 1 - L(w)$ for all words $w$.^1 Note that maximum and minimum are in fact the operations of least upper bound and greatest lower bound for the pointwise natural order on quantitative languages (where $L_1 \leq L_2$ if and only if $L_1(w) \leq L_2(w)$ for all words $w$). Therefore, they also provide natural generalizations of the classical union and intersection operations on boolean languages.

^1 One can define $L^c(w) = k - L(w)$ for any constant $k$ without changing the results of this paper.
Note that closure under max trivially holds for the positive semantics, and closure under min for the almost-sure semantics. Only limsup automata under the positive semantics and liminf automata under the almost-sure semantics are closed under all four operations; these results extend corresponding results for the boolean (i.e., non-quantitative) case [BG08]. To establish the closure properties of limit-average automata, we characterize the expected limit-average reward of Markov chains. Our characterization answers all closure questions except for the sum of languages in the case of the positive semantics, which we leave open. Note that expressiveness results and closure properties are tightly connected. For instance, because the target classes are closed under max, liminf automata with the positive semantics can be reduced both to liminf automata with the almost-sure semantics and to limsup automata with the positive semantics; and because they are not closed under complement, limsup automata with the almost-sure semantics and liminf automata with the positive semantics have incomparable expressive powers.
Third, we investigate the emptiness and universality problems for probabilistic weighted automata, which ask to decide whether some (resp. every) word has a value above a given threshold. Using our expressiveness results, as well as [BG08, CDH08b], we establish some decidability and undecidability results for sup, limsup, and liminf automata; in particular, emptiness and universality are undecidable for limsup automata with the positive semantics and for liminf automata with the almost-sure semantics, while the question is open for the emptiness of liminf automata with the positive semantics and for the universality of limsup automata with the almost-sure semantics. We also prove the decidability of emptiness for probabilistic discounted-sum automata with the positive semantics, while the universality problem is as hard as for nondeterministic discounted-sum automata, for which no decidability result is known. We leave open the case of limit average.
2 Definitions
A quantitative language $L$ over a finite alphabet $\Sigma$ is a function $L : \Sigma^\omega \to \mathbb{R}$. A boolean language (or a set of infinite words) is the special case where $L(w) \in \{0, 1\}$ for all words $w$. Nondeterministic weighted automata define the value of a word as the maximal value of a run [CDH08a]. In this paper, we study probabilistic weighted automata as generators of quantitative languages.
Value functions. We consider the following value functions $\mathrm{Val}$ to define quantitative languages. Given an infinite sequence $v = v_0 v_1 v_2 \ldots$ of rational numbers, define:

$\mathrm{Sup}(v) = \sup \{ v_n \mid n \geq 0 \}$;

$\mathrm{LimSup}(v) = \limsup_{n \to \infty} v_n$;

$\mathrm{LimInf}(v) = \liminf_{n \to \infty} v_n$;

$\mathrm{LimAvg}(v) = \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} v_i$;

For $0 < \lambda < 1$, $\mathrm{Disc}_\lambda(v) = \sum_{i \geq 0} \lambda^i \cdot v_i$.
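These value functions can be made concrete on finite prefixes of a weight sequence (for the limit average and the discounted sum, the prefix values converge to the value of the infinite sequence). A minimal Python sketch, with illustrative inputs of our choosing:

```python
def sup_prefix(v):
    """Sup of a finite prefix (a lower bound on Sup of the infinite sequence)."""
    return max(v)

def avg_prefix(v):
    """Average of the first n weights; LimAvg is the liminf of these averages."""
    return sum(v) / len(v)

def disc_prefix(v, lam):
    """Discounted sum of a finite prefix: sum of lam**i * v[i]."""
    return sum(lam ** i * x for i, x in enumerate(v))

# Prefix of the periodic weight sequence (1, 0, 1, 0, ...).
prefix = [1, 0] * 50
print(sup_prefix(prefix))                   # 1
print(avg_prefix(prefix))                   # 0.5
print(round(disc_prefix(prefix, 0.5), 6))   # close to 4/3, the infinite discounted sum
```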
Given a finite set $S$, a probability distribution over $S$ is a function $f : S \to [0, 1]$ such that $\sum_{s \in S} f(s) = 1$. We denote by $\mathcal{D}(S)$ the set of all probability distributions over $S$.
Probabilistic weighted automata. A probabilistic weighted automaton is a tuple $A = \langle Q, \mu_0, \Sigma, \delta, \mathrm{wt} \rangle$ where:

$Q$ is a finite set of states;

$\mu_0 \in \mathcal{D}(Q)$ is the initial distribution;

$\Sigma$ is a finite alphabet;

$\delta : Q \times \Sigma \to \mathcal{D}(Q)$ is a probabilistic transition function;

$\mathrm{wt} : Q \times \Sigma \times Q \to \mathbb{Q}$ is a weight function.
We can define a non-probabilistic automaton from $A$ by ignoring the probability values, saying that $q$ is initial if $\mu_0(q) > 0$, and that $(q, a, q')$ is an edge of $A$ if $\delta(q, a)(q') > 0$. The automaton $A$ is deterministic if $\mu_0(q) = 1$ for some $q \in Q$, and for all $q \in Q$ and $a \in \Sigma$, there exists $q' \in Q$ such that $\delta(q, a)(q') = 1$.
A run of $A$ over a finite (resp. infinite) word $w = a_1 a_2 \ldots$ is a finite (resp. infinite) sequence $r = q_0 a_1 q_1 a_2 q_2 \ldots$ of states and letters such that (i) $\mu_0(q_0) > 0$, and (ii) $\delta(q_{i-1}, a_i)(q_i) > 0$ for all $i \geq 1$. We denote by $\mathrm{wt}(r) = v_0 v_1 \ldots$ the sequence of weights that occur in $r$, where $v_i = \mathrm{wt}(q_i, a_{i+1}, q_{i+1})$ for all $i \geq 0$.
The probability of a finite run $r = q_0 a_1 q_1 \ldots a_k q_k$ over a finite word $w = a_1 \ldots a_k$ is $P_A(r) = \mu_0(q_0) \cdot \prod_{i=1}^{k} \delta(q_{i-1}, a_i)(q_i)$. For each $w \in \Sigma^\omega$, the function $P_A$ defines a unique probability measure over Borel sets of (infinite) runs of $A$ over $w$.
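The probability of a finite run is thus just the product of the initial and transition probabilities along it. A short Python sketch on a hypothetical two-state automaton (the state and letter names are ours, not from the text):

```python
def run_probability(mu0, delta, run, word):
    """P(r) = mu0(q0) * product over i of delta(q_{i-1}, a_i)(q_i),
    for a run q0 a1 q1 ... ak qk over the word a1 ... ak."""
    p = mu0.get(run[0], 0.0)
    for i, a in enumerate(word):
        p *= delta[(run[i], a)].get(run[i + 1], 0.0)
    return p

# Hypothetical automaton: from q0 on 'a', move to q0 or q1 with probability 1/2 each.
mu0 = {'q0': 1.0}
delta = {('q0', 'a'): {'q0': 0.5, 'q1': 0.5},
         ('q1', 'a'): {'q1': 1.0}}
print(run_probability(mu0, delta, ['q0', 'q0', 'q1'], 'aa'))  # 0.25
```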
Given a value function $\mathrm{Val}$, we say that the probabilistic automaton $A$ generates two quantitative languages, defined for all words $w \in \Sigma^\omega$ as follows: under the almost-sure semantics, the value of $w$ is the supremum of the values $v$ such that the set of runs $r$ over $w$ with $\mathrm{Val}(\mathrm{wt}(r)) \geq v$ has probability 1; under the positive semantics, it is the supremum of the values $v$ such that this set has positive probability. For non-probabilistic automata, the value of a word $w$ is either the maximal value of the runs over $w$ (and the automaton is then called nondeterministic), or the minimal value of the runs (and the automaton is then called universal).
Note that the probabilistic Büchi and co-Büchi automata of [BG05] are special cases of LimSup- and LimInf-automata, respectively, where all weights are either 0 or 1.
Notations. The first letter in acronyms for classes of automata can be N(ondeterministic), D(eterministic), U(niversal), Pos for the language in the positive semantics, or As for the language in the almost-sure semantics; the acronym then names the value function (e.g., NLimAvg, PosDisc). We write D $\equiv$ N for the classes of automata whose deterministic version has the same expressiveness as their nondeterministic version. When the type of an automaton $A$ is clear from the context, we often denote its language simply by $L_A$, or even $A$.
Reducibility. A class $\mathcal{C}$ of weighted automata is reducible to a class $\mathcal{C}'$ of weighted automata if for every $A \in \mathcal{C}$ there exists $A' \in \mathcal{C}'$ such that $L_A = L_{A'}$, i.e., $L_A(w) = L_{A'}(w)$ for all words $w$. Reducibility relationships for (non)deterministic weighted automata are given in [CDH08a].
Composition. Given two quantitative languages $L_1$ and $L_2$, we denote by $\max(L_1, L_2)$ (resp. $\min(L_1, L_2)$ and $L_1 + L_2$) the quantitative language that assigns $\max(L_1(w), L_2(w))$ (resp. $\min(L_1(w), L_2(w))$ and $L_1(w) + L_2(w)$) to each word $w$. The language $1 - L$ is called the complement of $L$. The max, min, and complement operators for quantitative languages generalize respectively the union, intersection, and complement operators for boolean languages. The closure properties of (non)deterministic weighted automata are given in [CDH08b].
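These pointwise operations are straightforward to state operationally. In the following sketch, quantitative languages are represented simply as Python functions from (finite representations of) words to reals; the example language is a trivial one of our choosing:

```python
def lang_max(l1, l2):
    """Pointwise max, the quantitative analogue of union."""
    return lambda w: max(l1(w), l2(w))

def lang_min(l1, l2):
    """Pointwise min, the quantitative analogue of intersection."""
    return lambda w: min(l1(w), l2(w))

def lang_sum(l1, l2):
    """Pointwise sum."""
    return lambda w: l1(w) + l2(w)

def lang_complement(l):
    """Complement: w -> 1 - L(w)."""
    return lambda w: 1 - l(w)

# Example: the boolean language "w starts with 'a'" viewed as quantitative.
starts_a = lambda w: 1.0 if w[:1] == 'a' else 0.0
print(lang_complement(starts_a)('abb'))                       # 0.0
print(lang_max(starts_a, lang_complement(starts_a))('bbb'))   # 1.0
```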
Remark. We sometimes use automata with weight functions that assign a weight to states instead of transitions. This is a convenient notation for weighted automata in which from each state, all outgoing transitions have the same weight. In pictorial descriptions of probabilistic weighted automata, the transitions are labeled with probabilities, and states with weights.
3 Expressive Power of Probabilistic Weighted Automata
We complete the picture given in [CDH08a] about reducibility for nondeterministic weighted automata, by adding the relations with probabilistic automata. The results for limsup, liminf, and limit-average automata are summarized in Figure 2, and those for sup and discounted-sum automata in Theorems 3.1 and 3.5.
3.1 Probabilistic sup automata
As for probabilistic automata over finite words, the quantitative languages definable by probabilistic and by (non)deterministic sup automata coincide.
Theorem 3.1
PosSup and AsSup are reducible to DSup.
Proof. It is easy to see that PosSupautomata define the same language when interpreted as NSupautomata, and the same holds for AsSup and USup. The result then follows from [CDH08a, Theorem 9].
3.2 Probabilistic limit-average automata
Many of our results consider Markov chains and closed recurrent sets of states in Markov chains. A Markov chain $M = (S, E, \delta)$ consists of a finite set $S$ of states, a set $E \subseteq S \times S$ of edges, and a probabilistic transition function $\delta : S \to \mathcal{D}(S)$. For all $s, s' \in S$, there is an edge $(s, s') \in E$ iff $\delta(s)(s') > 0$. A closed recurrent set of states in $M$ is a bottom strongly connected set of states in the graph $(S, E)$. We will use the following two key properties of closed recurrent sets.

Property 1. Given a Markov chain and a start state $s$, with probability 1, a closed recurrent set is reached from $s$ in finite time. Hence, for any $\varepsilon > 0$, there exists $k_0$ such that for all $k \geq k_0$ and for all starting states, a closed recurrent set is reached within $k$ steps with probability at least $1 - \varepsilon$.

Property 2. If a closed recurrent set $C$ is reached, and the limit of the expectation of the average weights in $C$ is $\beta$, then for all $\varepsilon > 0$, there exists $k_0$ such that for all $k \geq k_0$, the expectation of the average of the weights over $k$ steps in $C$ is at least $\beta - \varepsilon$.
The above properties are basic properties of finite-state Markov chains and closed recurrent sets [KemenyBook].
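The closed recurrent sets can be computed from the edge relation alone, as the bottom strongly connected components of the graph. A naive Python sketch (quadratic reachability, adequate for small chains; the example chain is ours):

```python
def closed_recurrent_sets(states, edges):
    """Bottom strongly connected components of the directed graph
    (states, edges): SCCs from which no edge leads outside."""
    succ = {q: {r for (p, r) in edges if p == q} for q in states}

    def reach(q):
        """All states reachable from q (including q itself)."""
        seen, stack = {q}, [q]
        while stack:
            for r in succ[stack.pop()]:
                if r not in seen:
                    seen.add(r)
                    stack.append(r)
        return seen

    reachable = {q: reach(q) for q in states}
    sccs = []
    for q in states:
        # p and q are in the same SCC iff each reaches the other.
        scc = frozenset(p for p in states if q in reachable[p] and p in reachable[q])
        if scc not in sccs:
            sccs.append(scc)
    # Bottom SCCs: every successor of every member stays inside the SCC.
    return [set(c) for c in sccs if all(succ[q] <= c for q in c)]

# Chain with transient state 0 and two absorbing states 1 and 2.
print(closed_recurrent_sets([0, 1, 2], {(0, 1), (1, 1), (0, 2), (2, 2)}))  # [{1}, {2}]
```

Property 1 then says that the chain enters one of the returned sets almost surely.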
Lemma 1
Let $A$ be a probabilistic weighted automaton with alphabet $\{a, b\}$. Consider the Markov chain arising from $A$ on the input $a^\omega$ (we refer to it as the $a$-Markov chain), and similarly the $b$-Markov chain. The following assertions hold:

If all closed recurrent sets in the $a$-Markov chain have (expected) limit-average value (in the probabilistic sense) at least 1, then there exists $k_0$ such that for all $k \geq k_0$, all closed recurrent sets arising from $A$ on the input $(a^k b)^\omega$ have positive expected limit-average reward.

If all closed recurrent sets in the $a$-Markov chain have (expected) limit-average value (in the probabilistic sense) at most 0, then there exists $k_0$ such that for all $k \geq k_0$, all closed recurrent sets arising from $A$ on the input $(a^k b)^\omega$ have expected limit-average reward strictly less than 1.

If all closed recurrent sets in the $a$-Markov chain and all closed recurrent sets in the $b$-Markov chain have (expected) limit-average value (in the probabilistic sense) at most 0, then there exists $k_0$ such that for all $k \geq k_0$, all closed recurrent sets arising from $A$ on the input $(a^k b^k)^\omega$ have expected limit-average reward strictly less than 1/2.
Proof. We present the proof in three parts.

Let $W$ be the maximum absolute value of the weights of $A$, and let $n$ be the number of states of $A$. From any state, there is a path of length at most $n$ to a closed recurrent set of the $a$-Markov chain. Hence, for $k \geq n$, any closed recurrent set of the Markov chain arising from $A$ on the input $(a^k b)^\omega$ contains closed recurrent sets of the $a$-Markov chain. Fix $0 < \varepsilon < 1$. By Property 1, there exists $k_1$ such that from any state, after $k_1$ input letters $a$, a closed recurrent set of the $a$-Markov chain is reached with probability at least $1 - \varepsilon$. If all closed recurrent sets of the $a$-Markov chain have expected limit-average value at least 1, then by Property 2 there exists $k_2$ such that from any state of such a closed recurrent set, over $k' \geq k_2$ further letters $a$, the expected average of the weights is at least $1 - \varepsilon$ (i.e., the expected sum of the weights is at least $(1 - \varepsilon) k'$). Now take $k = k_1' + k_2'$ with $k_1' = \max(n, k_1)$ and $k_2' \geq k_2$, and consider a closed recurrent set of the Markov chain on $(a^k b)^\omega$. Over one period of $k + 1$ letters: with probability at least $1 - \varepsilon$, a closed recurrent set of the $a$-Markov chain is reached within the first $k_1'$ steps, and the expected sum of the weights over the next $k_2'$ steps is then at least $(1 - \varepsilon) k_2'$; the first $k_1'$ steps and the letter $b$ contribute at least $-(k_1' + 1) W$; and the runs that miss the closed recurrent set contribute at least $-\varepsilon (k + 1) W$. Hence the expected sum of the weights over one period is at least
$(1 - \varepsilon)^2 k_2' - (k_1' + 1) W - \varepsilon (k + 1) W$,
which is positive for $\varepsilon$ small enough and $k_2'$ large enough with respect to $k_1' W$. Hence the expected limit-average reward is positive.

The proof is similar to the previous result.

The proof is also similar to the first part. The only difference is that we use a long enough sequence of $a$'s such that with high probability a closed recurrent set of the $a$-Markov chain is reached and the run then stays in it long enough for the expected average of the weights to approach 0, and we then present a long enough sequence of $b$'s such that with high probability a closed recurrent set of the $b$-Markov chain is reached and the run stays in it long enough for the expected average of the weights to approach 0. The calculation is similar to the first part of the proof.
Thus we obtain the desired result.
We consider the alphabet consisting of the letters $a$ and $b$, i.e., $\Sigma = \{a, b\}$. We define the language $L_F$ of finitely many $b$'s: for an infinite word $w$, if $w$ contains infinitely many $b$'s, then $L_F(w) = 0$, otherwise $L_F(w) = 1$. We also consider the language $L_I$ of words with infinitely many $b$'s, defined by $L_I(w) = 1 - L_F(w)$ (it is the complement of $L_F$).
Lemma 2
Consider the language $L_F$ of finitely many $b$'s. The following assertions hold.

The language $L_F$ can be expressed by an NLimAvg.

The language $L_F$ can be expressed by a PosLimAvg.

The language $L_F$ cannot be expressed by an AsLimAvg.
Proof. We present the three parts of the proof.

The result follows from [CDH08a, Theorem 12], where an explicit construction of an NLimAvg expressing $L_F$ is presented.

A PosLimAvg automaton $A$ that expresses $L_F$ is as follows (see Figure 3):

States and weight function. The set of states of the automaton is $\{q_0, q_1, q_2\}$, with $q_0$ as the starting state. The weight function is as follows: $\mathrm{wt}(q_1) = 1$ and $\mathrm{wt}(q_0) = \mathrm{wt}(q_2) = 0$.

Transition function. The probabilistic transition function is as follows:
(i) from $q_0$, given $a$ or $b$, the next states are $q_0$ or $q_1$, each with probability 1/2;
(ii) from $q_1$, given $a$, the next state is $q_1$ with probability 1, and from $q_1$, given $b$, the next state is $q_2$ with probability 1; and
(iii) from state $q_2$, the next state is $q_2$ with probability 1 on both $a$ and $b$ (it is an absorbing state).
Given the automaton $A$, consider any word $w$ with infinitely many $b$'s; then the automaton reaches the sink state $q_2$ in finite time with probability 1, and hence the value of $w$ in the positive semantics is 0. For a word $w$ with finitely many $b$'s, let $k$ be the last position at which a $b$ appears. Then with probability $2^{-(k+1)}$, the automaton stays in $q_0$ for the first $k$ steps and then moves to $q_1$, where it remains forever; hence the value of $w$ in the positive semantics is 1. Hence there is a PosLimAvg for $L_F$.
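The behaviour of such an automaton can be checked mechanically by propagating the exact distribution over states while reading a finite word. A Python sketch; the state names $q_0, q_1, q_2$ and the transition table below are our rendering of the construction (weight 1 on $q_1$, weight 0 elsewhere):

```python
# Transition table of the sketched automaton: q0 branches on every letter,
# q1 is the weight-1 state, q2 is the absorbing sink reached from q1 on 'b'.
DELTA = {
    ('q0', 'a'): {'q0': 0.5, 'q1': 0.5},
    ('q0', 'b'): {'q0': 0.5, 'q1': 0.5},
    ('q1', 'a'): {'q1': 1.0},
    ('q1', 'b'): {'q2': 1.0},
    ('q2', 'a'): {'q2': 1.0},
    ('q2', 'b'): {'q2': 1.0},
}

def distribution(word):
    """Exact distribution over states after reading a finite word from q0."""
    dist = {'q0': 1.0}
    for letter in word:
        nxt = {}
        for q, p in dist.items():
            for r, pr in DELTA[(q, letter)].items():
                nxt[r] = nxt.get(r, 0.0) + p * pr
        dist = nxt
    return dist

# Many b's drive almost all probability mass into the sink q2 (limit-average 0),
# while after the last b some mass stays trapped in q1 (limit-average 1).
print(round(distribution('b' * 50).get('q2', 0.0), 3))  # 1.0
```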


We show that $L_F$ cannot be expressed by an AsLimAvg. Consider an AsLimAvg automaton $A$, and consider the Markov chain that arises from $A$ if the input is only $a$'s (i.e., on $a^\omega$); we refer to it as the $a$-Markov chain. If there is a closed recurrent set $C$ of the $a$-Markov chain that can be reached from the starting state (by any finite sequence of $a$'s and $b$'s) and whose limit-average reward (in the probabilistic sense) is less than 1, then we can construct a finite word $w'$ that reaches $C$ with positive probability; for the word $w' \cdot a^\omega$, which contains finitely many $b$'s, the value in the almost-sure semantics is then less than 1, contradicting $L_F(w' \cdot a^\omega) = 1$. Hence every reachable closed recurrent set of the $a$-Markov chain has limit-average reward at least 1, and by Lemma 1 there exists $k$ such that the value of $(a^k b)^\omega$ in the almost-sure semantics is positive, whereas $L_F((a^k b)^\omega) = 0$. Hence it follows that AsLimAvg cannot express $L_F$.
Hence the result follows.
Lemma 3
Consider the language $L_I$ of infinitely many $b$'s. The following assertions hold.

The language $L_I$ cannot be expressed by an NLimAvg.

The language $L_I$ cannot be expressed by a PosLimAvg.

The language $L_I$ can be expressed by an AsLimAvg.
Proof. We present the three parts of the proof.

It was shown in the proof of [CDH08a, Theorem 13] that NLimAvg cannot express $L_I$.

We show that $L_I$ is not expressible by a PosLimAvg. Consider a PosLimAvg automaton $A$, and consider the Markov chain arising from $A$ under the input $a^\omega$ (the $a$-Markov chain). All closed recurrent sets reachable from the starting state must have limit-average value at most 0 (otherwise we could construct a word $w$ with finitely many $b$'s whose value in the positive semantics is positive, whereas $L_I(w) = 0$). Since all closed recurrent sets of the $a$-Markov chain have limit-average reward at most 0, using Lemma 1 we can construct a word $w = (a^k b)^\omega$, for large enough $k$, whose value in the positive semantics is strictly less than $1 = L_I(w)$. Hence the result follows.

We now show that $L_I$ is expressible by an AsLimAvg. The automaton $A$ is as follows (see Figure 4):

States and weight function. The set of states is $\{q_0, q_1\}$, with $q_0$ as the starting state. The weight function is as follows: $\mathrm{wt}(q_0) = 0$ and $\mathrm{wt}(q_1) = 1$.

Transition function. The probabilistic transition function is as follows:
(i) from $q_0$, given $a$, the next state is $q_0$ with probability 1;
(ii) from $q_0$, given $b$, the next states are $q_0$ and $q_1$, each with probability 1/2; (iii) the state $q_1$ is an absorbing state.
Consider a word $w$ with infinitely many $b$'s; then the probability of reaching the absorbing state $q_1$ is 1, and hence the value of $w$ in the almost-sure semantics is 1. Consider a word $w$ with finitely many $b$'s, and let $n$ be the number of $b$'s; then with probability $2^{-n}$ the automaton always stays in $q_0$, and hence the value of $w$ in the almost-sure semantics is 0.

Hence the result follows.
Lemma 4
There exists a language that can be expressed by PosLimAvg, PosLimSup and PosLimInf, but not by NLimAvg, NLimSup or NLimInf.
Proof. Consider an automaton $A$ as follows (see Figure 5):

States and weight function. The set of states is $\{q_0, q_1, q_2\}$, with $q_0$ as the starting state. The weight function is as follows: $\mathrm{wt}(q_0) = \mathrm{wt}(q_1) = 1$ and $\mathrm{wt}(q_2) = 0$.

Transition function. The probabilistic transition function is as follows:
(i) from $q_0$, if the input letter is $a$, then the next states are $q_0$ and $q_1$, each with probability 1/2;
(ii) from $q_0$, if the input letter is $b$, then the next state is $q_2$ with probability 1;
(iii) from $q_1$, if the input letter is $a$, then the next state is $q_1$ with probability 1;
(iv) from $q_1$, if the input letter is $b$, then the next state is $q_0$ with probability 1; and
(v) the state $q_2$ is an absorbing state.
If we consider the automaton $A$, and interpret it as a PosLimAvg, PosLimSup, or PosLimInf, then it accepts the following language $L$: for words of the form $w = a^{k_1} b a^{k_2} b a^{k_3} b \ldots$, we have $L(w) = 1$ if $\prod_{j \geq 1} (1 - 2^{-k_j}) > 0$, and $L(w) = 0$ if $\prod_{j \geq 1} (1 - 2^{-k_j}) = 0$; the above claim follows easily from the argument following Lemma 5 of [BG05]. We now show that $L$ cannot be expressed by an NLimAvg, NLimSup, or NLimInf. Consider a nondeterministic automaton $B$. Suppose there is a cycle $C$ in $B$ such that the average of the weights in $C$ is positive, and $C$ is formed by a word that contains a $b$. If no such cycle exists, then clearly $B$ cannot express $L$, as there exist words $w$ with $L(w) = 1$ that contain infinitely many $b$'s. So consider a cycle $C$ whose average weight is positive, formed by a finite word $w_C$ that contains at least one $b$. The word $w_C^\omega$ contains infinitely many $b$'s, but the lengths of its blocks of $a$'s are bounded, so that the above product is 0 and hence $L(w_C^\omega) = 0$. However, there exists a finite word $w_0$ (that reaches the cycle) such that $L_B(w_0 \cdot w_C^\omega) > 0$. This contradicts the assumption that $B$ expresses $L$. Simply exchanging the average reward of the cycle for the maximum reward (resp. minimum reward) shows that $L$ is not expressible by an NLimSup (resp. NLimInf).
The next theorem summarizes the results for limitaverage automata obtained in this section.
Theorem 3.2
AsLimAvg is incomparable in expressive power with PosLimAvg and NLimAvg, and NLimAvg cannot express all languages expressible by PosLimAvg.
Open question. Whether NLimAvg is reducible to PosLimAvg, or whether NLimAvg is incomparable with PosLimAvg (i.e., whether there is a language expressible by an NLimAvg but not by any PosLimAvg), remains open.
3.3 Probabilistic liminf automata
Lemma 5
NLimInf is reducible to both AsLimInf and PosLimInf.
Proof. It was shown in [CDH08a] that NLimInf is reducible to DLimInf. Since DLimInf automata are special cases of both AsLimInf and PosLimInf automata, the result follows.
Lemma 6
The language $L_I$ of infinitely many $b$'s is expressible by an AsLimInf, but cannot be expressed by an NLimInf or a PosLimInf.
Proof. It was shown in [CDH08a] that the language $L_I$ is not expressible by an NLimInf. If we consider the automaton of Lemma 3 and interpret it as an AsLimInf, then the automaton expresses the language $L_I$. The proof that PosLimInf cannot express $L_I$ is similar to the proof of Lemma 3 (part 2); instead of the average reward of the closed recurrent sets, we consider their minimum reward.
Lemma 7
PosLimInf is reducible to AsLimInf.
Proof. Let $A$ be a PosLimInf; we construct an AsLimInf $B$ such that $L_B = L_A$. Let $V$ be the set of weights that appear in $A$, and let $v_{\min}$ be the least value in $V$. For each weight $v \in V$, consider the PosCW $A_v$ that is obtained from $A$ by considering all states with weight at least $v$ as accepting states. It follows from the results of [BG08] that PosCW is reducible to AsCW (it was shown in [BG08] that AsBW is reducible to PosBW, and it follows easily that, dually, PosCW is reducible to AsCW). Let $B_v$ be an AsCW that is equivalent to $A_v$. We construct an AsLimInf $B_v'$ from $B_v$ by assigning weight $v$ to the accepting states of $B_v$ and the minimum weight $v_{\min}$ to all other states. Consider a word $w$ with $L_A(w) = v_i$, and consider the following cases.

If $v_j \leq v_i$, then with positive probability some run of $A$ over $w$ eventually visits only states with weight at least $v_j$ (i.e., the PosCW $A_{v_j}$ and the AsCW $B_{v_j}$ accept $w$), and hence $L_{B_{v_j}'}(w) = v_j$.

If $v_j > v_i$, then $A_{v_j}$ does not accept $w$, and hence $L_{B_{v_j}'}(w) = v_{\min}$.

It follows from the above that $\max_{v \in V} L_{B_v'}(w) = v_i = L_A(w)$. We show later that AsLimInf is closed under max (Section 4), and hence we can construct an AsLimInf $B$ such that $L_B = L_A$. Thus the result follows.
Theorem 3.3
We have the following strict inclusions:
NLimInf $\subsetneq$ PosLimInf $\subsetneq$ AsLimInf.
Proof. The fact that NLimInf is reducible to PosLimInf follows from Lemma 5, and the fact that PosLimInf is not reducible to NLimInf follows from Lemma 4. The fact that PosLimInf is reducible to AsLimInf follows from Lemma 7, and the fact that AsLimInf is not reducible to PosLimInf follows from Lemma 6.
3.4 Probabilistic limsup automata
Lemma 8
NLimSup and PosLimSup are not reducible to AsLimSup.
Proof. The language $L_F$ of finitely many $b$'s can be expressed by a nondeterministic Büchi automaton, and hence by an NLimSup. We show below (Lemma 10) that NLimSup is reducible to PosLimSup; it follows that $L_F$ is expressible by NLimSup and by PosLimSup. The proof that AsLimSup cannot express $L_F$ is similar to the proof of Lemma 2 (part 3); instead of the average reward of the closed recurrent sets, we consider their maximum reward.
Deterministic-in-the-limit NLimSup. Consider an NLimSup automaton $A$, and let $v_{\min}$ be the least weight that appears in $A$. We call the automaton deterministic in the limit if for all states $q$ with weight greater than $v_{\min}$, all states reachable from $q$ are deterministic.
Lemma 9
For every NLimSup , there exists a NLimSup that is deterministic in the limit and equivalent to .
Proof. From the results of [CY95] it follows that every NBW can be reduced to an equivalent NBW that is deterministic in the limit. Let $A$ be an NLimSup, and let $V = \{v_1, \ldots, v_m\}$ with $v_1 < v_2 < \cdots < v_m$ be the set of weights that appear in $A$. For each $v_i$, consider the NBW $A_i$ whose (boolean) language is the set of words $w$ such that $L_A(w) \geq v_i$, obtained by declaring to be accepting the states with weight at least $v_i$. Let $B_i$ be the deterministic-in-the-limit NBW that is equivalent to $A_i$, in which the accepting states are assigned weight $v_i$ and all other states weight $v_1$. The automaton that is deterministic in the limit and equivalent to $A$ is obtained as the automaton that, by initial nondeterminism, chooses between the $B_i$'s, for $1 \leq i \leq m$.
Lemma 10
NLimSup is reducible to PosLimSup.
Proof. Given an NLimSup $A$, consider the NLimSup $B$ that is deterministic in the limit and equivalent to $A$ (Lemma 9). By assigning equal probabilities to all outgoing transitions from each state, we obtain a PosLimSup that is equivalent to $B$ (and hence to $A$): since $B$ is deterministic in the limit, any run of $B$ with value $v$ has a finite prefix from which it continues deterministically, so the set of runs that follow it has positive probability and value $v$. The result follows.
Lemma 11
AsLimSup is reducible to PosLimSup.
Proof. Consider an AsLimSup $A$ and let the weights of $A$ be $v_1 < v_2 < \cdots < v_m$. For each $v_i$, consider the AsBW $A_i$ obtained from $A$ by taking the set of states with weight at least $v_i$ as the Büchi states. It follows from the results of [BG08] that AsBW is reducible to PosBW; let $B_i$ be a PosBW that is equivalent to $A_i$. Let $B_i'$ be the automaton $B_i$ in which all Büchi states are assigned the weight $v_i$ and all other states are assigned $v_1$. Consider the automaton $B$ that goes with equal probability to the starting states of the $B_i'$, for $1 \leq i \leq m$, and interpret $B$ as a PosLimSup. Consider a word $w$, and let $L_A(w) = v_i$ for some $i$; i.e., given $w$, the set of states with weight at least $v_i$ is visited infinitely often with probability 1 in $A$. Hence the PosBW $B_i$ accepts $w$ with positive probability, and since $B$ chooses $B_i'$ with positive probability, it follows that given $w$, the weight $v_i$ is visited infinitely often in $B$ with positive probability, i.e., $L_B(w) \geq v_i$. Moreover, given $w$, for all $v_j > v_i$, the set of states with weight at least $v_j$ is visited infinitely often with probability 0 in $A$. Hence for all $v_j > v_i$, the automaton $B_j$ accepts $w$ with probability 0. Thus $L_B(w) \leq v_i$. Hence $L_B = L_A$, and thus AsLimSup is reducible to PosLimSup.
Lemma 12
AsLimSup is not reducible to NLimSup.
Proof. It follows from [BG08] that for every $0 < \lambda < 1$, a language $L_\lambda$ can be expressed by an AsBW, and hence by an AsLimSup. It follows from an argument similar to Lemma 4 that there exists $\lambda$ such that $L_\lambda$ cannot be expressed by an NLimSup. Hence the result follows.
Theorem 3.4
AsLimSup and NLimSup are incomparable in expressive power, and PosLimSup is more expressive than AsLimSup and NLimSup.
Lemma 13
PosCW is reducible to PosBW.
Proof. Let $A$ be a PosCW with state space $Q$, transition function $\delta$, and set $F \subseteq Q$ of accepting states. We construct a PosBW $B$ as follows:

The set of states is $Q \cup \widehat{Q}$, where $\widehat{Q} = \{\widehat{q} \mid q \in Q\}$ is a copy of the states in $Q$;

the initial state of $B$ is the initial state of $A$;

the transition function $\delta'$ is as follows, for all letters $a$:

for all states $q, q' \in Q$, we have $\delta'(q, a)(q') = \delta'(q, a)(\widehat{q'}) = \frac{1}{2} \cdot \delta(q, a)(q')$, i.e., the state and its copy are reached with half of the original transition probability;

the states $\widehat{q}$ such that $q \notin F$ are absorbing (i.e., $\delta'(\widehat{q}, a)(\widehat{q}) = 1$);

for all states $\widehat{q}$ with $q \in F$ and all $r \in Q$, we have $\delta'(\widehat{q}, a)(\widehat{r}) = \delta(q, a)(r)$, i.e., the transition function in the copy follows that of $A$ for the copies of the accepting states.

The set of accepting (Büchi) states of $B$ is $\widehat{F} = \{\widehat{q} \mid q \in F\}$.
We now show that the languages of the PosCW $A$ and of the PosBW $B$ coincide. Consider a word $w$ accepted by $A$. Let $\eta$ be the probability that, given the word $w$, eventually only states in $F$ are visited in $A$; since $w$ is accepted, we have $\eta > 0$. In other words, as $k$ tends to $\infty$, the probability that after $k$ steps only states in $F$ are visited approaches $\eta$. Hence there exists $k$ such that the probability that after $k$ steps only states in $F$ are visited is at least $\eta / 2$. In the automaton $B$, each run of $A$ of length $k$ is followed within the original component and switches to the copy at step $k$ with probability $2^{-k}$ times the probability of the run in $A$. Hence with positive probability (at least $\frac{\eta}{2} \cdot 2^{-k}$) the automaton $B$ visits the states of $\widehat{F}$ infinitely often, and hence $B$ accepts $w$. Conversely, observe that since every state in $\widehat{Q} \setminus \widehat{F}$ is absorbing and non-accepting, an accepting run of $B$ must eventually always visit states in $\widehat{F}$ (i.e., the copies of the accepting states). Hence, for a given word $w$, if $B$ accepts $w$, then with positive probability eventually only states in $F$ are visited in $A$. Thus $A$ accepts $w$, and the result follows.
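The doubling construction in this proof is easy to state as a transformation of the transition table. A Python sketch (our own encoding: states are tagged 'orig' or 'hat' for the copy; the tiny input automaton is purely illustrative):

```python
def cobuchi_to_buchi(delta, accepting):
    """PosCW -> PosBW construction: duplicate every state q into a copy
    ('hat', q); split each transition's probability between the target and
    its copy; copies of non-accepting states become absorbing; the Buchi
    states are the copies of the co-Buchi accepting states."""
    new_delta = {}
    for (q, a), dist in delta.items():
        # From an original state: half probability to each target and its copy.
        d = {}
        for r, p in dist.items():
            d[('orig', r)] = d.get(('orig', r), 0.0) + p / 2
            d[('hat', r)] = d.get(('hat', r), 0.0) + p / 2
        new_delta[(('orig', q), a)] = d
        # From a copy: follow the original transitions inside the copy if q
        # is accepting, otherwise the copy is absorbing.
        if q in accepting:
            new_delta[(('hat', q), a)] = {('hat', r): p for r, p in dist.items()}
        else:
            new_delta[(('hat', q), a)] = {('hat', q): 1.0}
    return new_delta, {('hat', q) for q in accepting}

delta = {('q0', 'a'): {'q0': 0.5, 'q1': 0.5}, ('q1', 'a'): {'q1': 1.0}}
new_delta, buchi = cobuchi_to_buchi(delta, {'q1'})
print(buchi)  # {('hat', 'q1')}
```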
Lemma 14
PosLimInf is reducible to PosLimSup, and AsLimSup is reducible to AsLimInf.
Proof. We present the proof that PosLimInf is reducible to PosLimSup, the other proof being similar. Let $A$ be a PosLimInf, and let $V$ be the set of weights that appear in $A$. For each $v \in V$, it is easy to construct a PosCW $A_v$ whose (boolean) language is the set of words $w$ such that $L_A(w) \geq v$, by declaring to be accepting the states with weight at least $v$. We then construct for each $A_v$ a PosBW $B_v$ that accepts the language of $A_v$ (such a PosBW can be constructed by Lemma 13). Finally, assuming that $V = \{v_1, \ldots, v_m\}$ with $v_1 < v_2 < \cdots < v_m$, we construct for each $v \in V$ the PosLimSup $B_v'$, where $B_v'$ is obtained from $B_v$ by assigning weight $v$ to each accepting state, and $v_1$ to all the other states. The PosLimSup that expresses the language of $A$ is $\max_{v \in V} B_v'$, and since PosLimSup is closed under max (see Lemma 16), the result follows.
Lemma 15
AsLimInf and PosLimSup are reducible to each other; AsLimSup and PosLimInf have incomparable expressive power.
Proof. This result is an easy consequence of the fact that an automaton interpreted as an AsLimInf defines the complement of the language of the same automaton interpreted as a PosLimSup (and similarly for AsLimSup and PosLimInf), and of the fact that AsLimInf and PosLimSup are closed under complement, while AsLimSup and PosLimInf are not (shown in Section 4).
3.5 Probabilistic automata
For probabilistic discounted-sum automata, the following result establishes the equivalence of the nondeterministic and the positive semantics, and the equivalence of the universal and the almost-sure semantics.
Theorem 3.5
The following assertions hold: (a) NDisc and PosDisc are reducible to each other; (b) UDisc and AsDisc are reducible to each other.
Proof. (a) We first prove that NDisc is reducible to PosDisc. Let A be an NDisc with discount factor lambda, and let m and M be its minimal and maximal weights, respectively. Consider the PosDisc A' obtained from A by replacing each nondeterministic choice, in a state q over a letter sigma, with the uniform distribution over the set of states q' such that (q, sigma, q') is a transition of A. Let r be a run of A (over a word w) with value v. For all epsilon > 0, we show that L_{A'}(w) >= v - epsilon. Let i be such that lambda^i (M - m) / (1 - lambda) <= epsilon, and let r_i be the prefix of length i of r. The discounted sum of the weights in r_i is at least v - lambda^i M / (1 - lambda). The probability of the set of runs of A' over w that are continuations of r_i is positive, and the value of all these runs is at least v - lambda^i (M - m) / (1 - lambda), and therefore at least v - epsilon. This shows that L_{A'}(w) >= v - epsilon for all epsilon > 0, and thus L_{A'}(w) >= L_A(w). Note that L_{A'}(w) <= L_A(w) since there is no run in A' (nor in A) over w with value greater than L_A(w). Hence L_{A'} = L_A.
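The tail estimate used in part (a) can be made explicit. If two infinite runs agree on their first i transitions, and all weights lie between the minimal weight m and the maximal weight M, then their discounted sums (with discount factor lambda, 0 < lambda < 1) differ by at most the discounted weight of the diverging tails:

```latex
\Bigl|\sum_{j\ge 0}\lambda^j w_j \;-\; \sum_{j\ge 0}\lambda^j w'_j\Bigr|
\;\le\; \sum_{j\ge i}\lambda^j\,(M-m)
\;=\; \frac{\lambda^i\,(M-m)}{1-\lambda}.
```

Hence choosing i with lambda^i (M - m)/(1 - lambda) <= epsilon makes every continuation of the prefix epsilon-close in value to the original run, which is all the argument needs.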
Now, we prove that PosDisc is reducible to NDisc. Given a PosDisc A, we construct an NDisc A' where (q, sigma, q') is a transition of A' if and only if q' has positive probability in the successor distribution of q over sigma in A, for all states q, q' and all letters sigma. By arguments analogous to the first part of the proof, it is easy to see that L_{A'} = L_A.
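Both reductions of part (a) only manipulate the transition structure, one direction replacing sets of successors by uniform distributions and the other keeping the support of each distribution. A minimal sketch (the dictionary encoding of transitions is an assumption made for illustration):

```python
from fractions import Fraction

def ndisc_to_posdisc(delta):
    """Replace each nondeterministic choice delta[(q, sigma)] (a set of
    successor states) by the uniform distribution over that set."""
    return {qs: {q2: Fraction(1, len(succ)) for q2 in succ}
            for qs, succ in delta.items()}

def posdisc_to_ndisc(delta):
    """Keep exactly the successors that have positive probability."""
    return {qs: {q2 for q2, p in dist.items() if p > 0}
            for qs, dist in delta.items()}

delta = {("q0", "a"): {"q0", "q1"}, ("q1", "a"): {"q1"}}
prob = ndisc_to_posdisc(delta)
assert prob[("q0", "a")] == {"q0": Fraction(1, 2), "q1": Fraction(1, 2)}
assert posdisc_to_ndisc(prob) == delta  # the round-trip recovers the NDisc
```

The round-trip property mirrors the remark below: only the supports of the distributions matter for the quantitative language of a PosDisc.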
(b) It is easy to see that the complement of the quantitative language defined by a UDisc (resp. AsDisc) can be defined by an NDisc (resp. PosDisc). Then, the result follows from part (a): given a UDisc, we easily obtain an NDisc for the complement, then an equivalent PosDisc, and finally an AsDisc for the complement of the complement, i.e., for the original quantitative language.
Note that a byproduct of this proof is that the language of a PosDisc does not depend on the precise values of the probabilities, but only on whether they are positive or not.
4 Closure Properties of Probabilistic Weighted Automata
We consider the closure properties of the probabilistic weighted automata under the operations max, min, complement, and sum. The results are presented in Table 1.
Table 1: Closure of the probabilistic weighted automata under max, min, complement (comp.), and sum, and decidability of the emptiness and universality problems. The positive semantics covers PosSup, PosLimSup, PosLimInf, PosLimAvg, and PosDisc; the almost-sure semantics covers AsSup, AsLimSup, AsLimInf, AsLimAvg, and AsDisc. Several questions for PosLimAvg and AsLimAvg remain open (?), as do the questions marked (1) for PosDisc and AsDisc.
(1) The universality problem for NDisc can be reduced to this problem. It is not known whether this problem is decidable.
4.1 Closure under max and min
Lemma 16 (Closure by initial nondeterminism)
PosLimSup, PosLimInf, and PosLimAvg are closed under max; and AsLimSup, AsLimInf, and AsLimAvg are closed under min.
Proof. Given two automata A_1 and A_2, consider the automaton B obtained by an initial probabilistic branching between A_1 and A_2. Formally, let q_1 and q_2 be the initial states of A_1 and A_2, respectively; in B we add an initial state q_0, and the transitions from q_0 are as follows: for an input letter sigma, the successors of q_0 are the sigma-successors of q_1 and of q_2, each taken with half of its original probability. If A_1 and A_2 are PosLimSup (resp. PosLimInf, PosLimAvg), then B is a PosLimSup (resp. PosLimInf, PosLimAvg) such that L_B = max(L_{A_1}, L_{A_2}). Similarly, if A_1 and A_2 are AsLimSup (resp. AsLimInf, AsLimAvg), then B is an AsLimSup (resp. AsLimInf, AsLimAvg) such that L_B = min(L_{A_1}, L_{A_2}).
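A sketch of this initial-branching construction (the tagging of states and the dictionary encoding of transitions are illustrative assumptions, not the paper's notation):

```python
def initial_branching(delta1, q1, delta2, q2, alphabet):
    """Build the transition function of B: the states of A_1 and A_2 are
    tagged 1 and 2, and a fresh initial state 'q0' moves, on each letter,
    to a sigma-successor of q1 or of q2, each branch taken with its
    original probability halved."""
    delta = {((i, q), s): {(i, t): p for t, p in dist.items()}
             for i, d in ((1, delta1), (2, delta2))
             for (q, s), dist in d.items()}
    for s in alphabet:
        succ = {}
        for i, d, q in ((1, delta1, q1), (2, delta2, q2)):
            for t, p in d[(q, s)].items():
                succ[(i, t)] = p / 2
        delta[("q0", s)] = succ
    return delta

d1 = {("a", "x"): {"a": 1.0}}
d2 = {("b", "x"): {"b": 1.0}}
B = initial_branching(d1, "a", d2, "b", ["x"])
assert B[("q0", "x")] == {(1, "a"): 0.5, (2, "b"): 0.5}  # total mass 1
```

Under the positive semantics a value achieved with positive probability in either copy survives the halving, giving max; under the almost-sure semantics both copies must achieve the value, giving min.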
Lemma 17 (Closure by synchronized product)
AsLimSup is closed under max, and PosLimInf is closed under min.
Proof. We present the proof that AsLimSup is closed under max. Let A_1 and A_2 be two probabilistic weighted automata with weight functions phi_1 and phi_2, respectively. Let B be the usual synchronized product of A_1 and A_2, with weight function phi such that phi((q_1, q_2)) = max(phi_1(q_1), phi_2(q_2)). Given a path pi in B, we denote by pi_1 the path in A_1 that is the projection of pi on the first component, and we use similar notation for A_2. Consider a word w, and let v = max(L_{A_1}(w), L_{A_2}(w)). We consider the following two cases to show that L_B(w) = v.

W.l.o.g. let the maximum be achieved by A_1, i.e., v = L_{A_1}(w). Let U be the set of states q in A_1 such that the weight of q is at least v. Since L_{A_1}(w) = v, given the word w, the event that states in U are visited infinitely often holds with probability 1 in A_1. Consider the set of paths pi in B such that the projection pi_1 visits U infinitely often. Since, given w, the event holds with probability 1 in A_1, this set of paths has probability 1 in B. The weight function phi ensures that every such path visits weights of value at least v infinitely often. Hence L_B(w) >= v.

Consider a weight value v' > v. For i in {1, 2}, let U_i be the set of states in A_i such that the weight of the state is less than v'. Given the word w, since L_{A_i}(w) <= v < v', the probability of the event that eventually only states in U_i are visited in A_i, given the word w, is positive. Since, given w, the two components of B move independently, the probability of the event that eventually only states in U_1 x U_2 are visited is positive in B. It follows that L_B(w) < v'.
The result follows. If A_1 and A_2 are PosLimInf, and in B we assign weights such that every state has the minimum weight of its component states, and we consider B as a PosLimInf, then L_B = min(L_{A_1}, L_{A_2}). The proof is similar to the result for AsLimSup.
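A sketch of the synchronized product with max-weights used in this proof (the state and transition encodings are again illustrative assumptions):

```python
def product_max(delta1, w1, delta2, w2):
    """Synchronized product: on a common letter the two components move
    independently, so pair probabilities multiply; the weight of a pair
    state is the max of the component weights."""
    delta = {}
    for (q1, s1), d1 in delta1.items():
        for (q2, s2), d2 in delta2.items():
            if s1 == s2:
                delta[((q1, q2), s1)] = {(t1, t2): p1 * p2
                                         for t1, p1 in d1.items()
                                         for t2, p2 in d2.items()}

    def weight(pair):
        return max(w1[pair[0]], w2[pair[1]])

    return delta, weight

d1 = {("a", "x"): {"a": 0.4, "a2": 0.6}}
d2 = {("b", "x"): {"b": 0.5, "b2": 0.5}}
delta, weight = product_max(d1, {"a": 1, "a2": 2}, d2, {"b": 3, "b2": 0})
assert delta[(("a", "b"), "x")][("a2", "b2")] == 0.3  # 0.6 * 0.5
assert weight(("a2", "b2")) == 2  # max(2, 0)
```

The multiplication of probabilities reflects the independence of the two components given the input word, which is exactly what the second case of the proof exploits; for PosLimInf, the same product with min in place of max gives the dual closure.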