This paper is concerned with the computational complexity of equivalence and minimisation for automata with transition weights in the ring ℚ of rational numbers. We use polynomial identity testing and the Isolation Lemma to obtain complexity bounds, focussing on the class NC of problems within P solvable in polylogarithmic parallel time. For finite ℚ-weighted automata, we give a randomised NC procedure that either outputs that two automata are equivalent or returns a word on which they differ. We also give an NC procedure for deciding whether a given automaton is minimal, as well as a randomised NC procedure that minimises an automaton. We consider probabilistic automata with rewards, similar to Markov Decision Processes. For these automata we consider two notions of equivalence: expectation equivalence and distribution equivalence. The former requires that two automata have the same expected reward on each input word, while the latter requires that each input word induce the same distribution on rewards in each automaton. For both notions we give algorithms for deciding equivalence by reduction to equivalence of ℚ-weighted automata. Finally we show that the equivalence problem for ℚ-weighted visibly pushdown automata is logspace equivalent to the polynomial identity testing problem.

weighted automata, equivalence checking, polynomial identity testing, minimisation

9(1:08)2013 1–22. Submitted Aug. 12, 2012; published Mar. 04, 2013.

On the Complexity of Equivalence and Minimisation for ℚ-Weighted Automata

Stefan Kiefer

Andrzej S. Murawski

Joël Ouaknine

Björn Wachter

James Worrell


[Theory of computation]: Design and analysis of algorithms—Approximation algorithms analysis—Numeric approximation algorithms; Semantics and reasoning—Program reasoning—Program verification


F.2.1, F.3.1. This is a full and improved version of the FoSSaCS’12 paper with the same title. An algorithm from the same authors’ CAV’11 paper [15] was incorporated in Section 3.1, and new algorithms for minimisation were added in Section 4. Section 5.1 is also new.

1 Introduction

Probabilistic and weighted automata were introduced in the 1960s, with many fundamental results established in the papers of Schützenberger [23] and Rabin [21]. Nowadays probabilistic automata are widely used in automated verification, natural-language processing, and machine learning. In this paper we consider weighted automata over the ring ℚ of rational numbers, which generalise probabilistic automata. Note that we restrict to rational transition weights to permit effective representation of automata.

Two ℚ-weighted automata are said to be equivalent if they assign the same weight to any given word. It has been shown by Schützenberger [23] and later by Tzeng [28] that equivalence for ℚ-weighted automata is decidable in polynomial time. By contrast, the natural analogue of language inclusion (that one automaton accepts each word with weight at least as great as another automaton) is undecidable [9]. Let us emphasise that we consider the standard ring structure on ℚ. For example, for weighted automata over the max-plus semiring, equivalence is undecidable [2, 18].

In this paper we show that the equivalence problem for ℚ-weighted automata, and various extensions thereof, can be efficiently solved by techniques rooted in polynomial identity testing. We focus on establishing bounds involving complexity classes within the class P of polynomial-time solvable problems. In particular, we consider the class NC of problems solvable in polylogarithmic parallel time with polynomially many processors [13] (see Section 2 for background on complexity theory).

It has long been known that equivalence for ℚ-weighted automata can be solved in polynomial time [23, 28]. There is moreover an NC algorithm for solving equivalence [29]. Our first contribution, in Section 3, is a randomised NC algorithm for deciding equivalence, based on polynomial identity testing. The advantage of using randomisation in this context is that our algorithm has much lower processor complexity than [29]. The latter performs quadratically more work than the classical sequential procedure. On the other hand, our randomised algorithm compared well with the classical sequential algorithm of [23, 28] on a collection of benchmarks [15].

We also show that our algorithm can be used not just to decide equivalence but also to generate counterexamples in case of inequivalence. However the counterexample generation is essentially sequential. We address this deficiency by giving a second randomised NC algorithm to decide equivalence of automata and output counterexamples in case of inequivalence. The algorithm is based on the Isolation Lemma, a classical technique in randomised algorithms that has previously been used, e.g., to derive randomised NC algorithms for matching in graphs [20]. Whether there is a deterministic NC algorithm that outputs counterexamples in case of inequivalence remains open.

A ℚ-weighted automaton is minimal if no equivalent automaton has fewer states. Minimal automata are unique up to change of basis. In Section 4 we give an NC procedure to decide if a given automaton is minimal. For the associated function problem, that of minimising a given automaton, we give a randomised NC procedure. Thus the situation for minimisation is similar to that for equivalence: the decision problem is in NC whereas the function problem can only be shown to be in RNC.

In Section 5 we consider probabilistic automata with rewards on transitions, which can be seen as partially observable Markov decision processes. Rewards (and costs, which can be considered as negative rewards) are omnipresent in probabilistic modelling for capturing quantitative effects of probabilistic computations, such as consumption of time, allocation of memory, energy usage, etc. For these automata we consider a notion of expectation equivalence, requiring that two automata have the same expected reward on each input word, and a stronger notion of distribution equivalence, requiring that each word induce the same distribution on rewards in both automata. In both cases we give decision procedures for equivalence by reduction to the case of ℚ-weighted automata, thus inheriting the complexity bounds established there.

We present a case study in which costs are used to model the computation time required by an RSA encryption algorithm, and show that the vulnerability of the algorithm to timing attacks depends on the equivalence of associated probabilistic reward automata. In [17] two possible defences against such timing leaks were suggested; we also analyse their effectiveness.

In Section 6 we consider pushdown automata. Probabilistic pushdown automata are a natural model of recursive probabilistic procedures, stochastic grammars and branching processes [12, 19]. The equivalence problem for deterministic pushdown automata has been extensively studied [26, 27]. We study the equivalence problem for ℚ-weighted visibly pushdown automata (VPA) [3]. In a visibly pushdown automaton the stack operation of a given transition (whether to pop or push) is determined by the input symbol being read.

We show that the equivalence problem for ℚ-weighted VPA is logspace equivalent to Arithmetic Circuit Identity Testing (ACIT), which is the problem of determining equivalence of polynomials presented via arithmetic circuits [1]. Several polynomial-time randomised algorithms are known for ACIT, but it is a major open problem whether it can be solved in polynomial time by a deterministic algorithm. A closely related result is that of Seidl [25], that equivalence of ℚ-weighted tree automata is decidable in randomised polynomial time. However [25] does not establish a connection with ACIT in either direction.

2 Preliminaries

2.1 Complexity Classes

Recall that NC is the subclass of P comprising those problems considered efficiently parallelisable. NC can be defined via parallel random-access machines (PRAMs), which consist of a set of processors communicating through a shared memory. A problem is in NC if it can be solved in polylogarithmic time on a PRAM with polynomially many processors. A more abstract definition of NC is as the class of languages which have L-uniform Boolean circuits of polylogarithmic depth and polynomial size. More specifically, denote by NC^k the class of languages which have circuits of depth O(log^k n). The complexity class RNC consists of those languages with randomised NC algorithms. We have the following chain of inclusions, none of which is known to be strict:

NC^1 ⊆ L ⊆ NL ⊆ NC^2 ⊆ ⋯ ⊆ NC ⊆ P.

We also have NC ⊆ DSPACE(log^{O(1)} n), that is, problems in NC are solvable in polylogarithmic space.

Problems in NC include reachability in directed graphs, computing the rank and determinant of an integer matrix, solving linear systems of equations, and the Tree Isomorphism problem. Problems that are P-hard under logspace reductions include Circuit Value and Max Flow. Such problems are not in NC unless P = NC. Problems in RNC include matching in graphs and max flow in networks with polynomially bounded capacities. In both cases these problems have resisted classification as either being in NC or P-hard. See [13] for more details about NC and RNC.

2.2 Linear Algebra

Given an m × n matrix A and an m′ × n′ matrix B, the Kronecker product A ⊗ B is the mm′ × nn′ matrix obtained by replacing each entry A_{i,j} of A with the m′ × n′ block A_{i,j} B.

The following is a key property of the Kronecker product:

Proposition 2.2.

(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) for matrices A, B, C, D of appropriate dimensions.

Given two m × n matrices A and B, the Hadamard product A ∘ B is the m × n matrix defined by (A ∘ B)_{i,j} := A_{i,j} B_{i,j}.
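The two products can be exercised in a few lines. The following is an illustrative pure-Python sketch (not from the paper), checking the mixed-product property of Proposition 2.2 and the Hadamard product on small integer matrices.

```python
# Sketch: the mixed-product property (A (x) B)(C (x) D) = (AC) (x) (BD)
# and the entrywise Hadamard product, on small integer matrices.
def mat_mul(A, B):
    """Ordinary matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Kronecker product: block (i, j) of the result is A[i][j] * B."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def hadamard(A, B):
    """Entrywise (Hadamard) product of two matrices of equal shape."""
    return [[A[i][j] * B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A, B = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
C, D = [[2, 0], [1, 1]], [[1, 1], [0, 1]]
# Mixed-product property of the Kronecker product
assert mat_mul(kron(A, B), kron(C, D)) == kron(mat_mul(A, C), mat_mul(B, D))
# The Hadamard product is commutative, unlike the ordinary product
assert hadamard(A, B) == hadamard(B, A)
```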

2.3 Laurent Polynomials

A Laurent polynomial in variables x₁, …, x_k with coefficients in ℚ is an expression of the form Σ_{α∈S} c_α x₁^{α₁} ⋯ x_k^{α_k}, where S ⊆ ℤ^k is a finite set and c_α ∈ ℚ for each α ∈ S. We say that the polynomial has degree bound d if |α_i| ≤ d for all α ∈ S and 1 ≤ i ≤ k. We write ℚ[x₁^{±1}, …, x_k^{±1}] for the ring of such polynomials, with the usual addition and multiplication operations; we furthermore write ℚ(x₁, …, x_k) for the corresponding field of fractions, whose elements are quotients of Laurent polynomials.

The following proposition immediately follows from the cofactor formula for matrix inversion.

Proposition 2.3.

Let M be an n × n matrix with entries in ℚ[x₁^{±1}, …, x_k^{±1}] of degree bound d. If det(M) ≠ 0, then M is invertible over ℚ(x₁, …, x_k), and each entry of M⁻¹ can be represented as the quotient of Laurent polynomials, each of degree bound at most nd.

In the situation of Proposition 2.3 we denote this inverse by M⁻¹.

3 Equivalence of ℚ-Weighted Automata

Given a field F, an F-weighted automaton A = (n, Σ, M, α, η) consists of a positive integer n representing the number of states, a finite alphabet Σ, a map M : Σ → F^{n×n} assigning a transition matrix to each alphabet symbol, an initial (row) vector α ∈ F^{1×n}, and a final (column) vector η ∈ F^{n×1}. We extend M to Σ* as the matrix product M(σ₁ σ₂ ⋯ σ_k) := M(σ₁) M(σ₂) ⋯ M(σ_k). The automaton A assigns to each word w ∈ Σ* a weight A(w) ∈ F, where A(w) := α M(w) η. An automaton A is said to be zero if A(w) = 0 for all w ∈ Σ*. Two automata A and B over the same alphabet Σ are said to be equivalent if A(w) = B(w) for all w ∈ Σ*.

Given two automata A and B that are to be checked for equivalence, one can compute an automaton D with D(w) = A(w) − B(w) for all w ∈ Σ*. Then D is zero if and only if A and B are equivalent. Given A = (n₁, Σ, M₁, α₁, η₁) and B = (n₂, Σ, M₂, α₂, η₂), set D := (n₁ + n₂, Σ, M, α, η) with α := (α₁, −α₂) and

M(σ) := ( M₁(σ)  0 ; 0  M₂(σ) ),   η := ( η₁ ; η₂ ).

This reduction allows us to focus on zeroness, i.e., the problem of determining whether a given ℚ-weighted automaton is zero. (Since transition weights can be negative, zeroness is not the same as emptiness of the underlying unweighted automaton.) Note that a witness word against zeroness of D is also a witness against the equivalence of A and B.
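The reduction can be sketched as follows. This is a pure-Python illustration; the representation of automata as (M, α, η) triples and all function names are ours.

```python
# Sketch: the difference automaton D with D(w) = A(w) - B(w), reducing
# equivalence of A and B to zeroness of D.
from fractions import Fraction as F

def weight(aut, word):
    """alpha * M(w) * eta for an automaton (M, alpha, eta)."""
    M, alpha, eta = aut
    v = list(alpha)
    for s in word:
        v = [sum(v[i] * M[s][i][j] for i in range(len(v)))
             for j in range(len(M[s][0]))]
    return sum(v[i] * eta[i] for i in range(len(v)))

def difference(autA, autB):
    """Block-diagonal transitions, initial vector (alpha_A, -alpha_B)."""
    MA, aA, eA = autA
    MB, aB, eB = autB
    nA, nB = len(aA), len(aB)
    M = {s: [[MA[s][i][j] if max(i, j) < nA else F(0) for j in range(nA)] +
             [MB[s][i - nA][j] if i >= nA else F(0) for j in range(nB)]
             for i in range(nA + nB)] for s in MA}
    return M, list(aA) + [-x for x in aB], list(eA) + list(eB)

# A: one state, weight 2^|w|; B: an equivalent two-state automaton
A = ({'a': [[F(2)]]}, [F(1)], [F(1)])
B = ({'a': [[F(1), F(1)], [F(1), F(1)]]}, [F(1), F(0)], [F(1), F(1)])
D = difference(A, B)
for w in ['', 'a', 'aa', 'aaa']:
    assert weight(D, w) == weight(A, w) - weight(B, w) == 0
```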

In the remainder of this section we present two randomised algorithms for deciding equivalence of ℚ-weighted automata. The following result from [28] immediately implies decidability of testing zeroness, and hence equivalence, of ℚ-weighted automata.

Proposition 3.

Let F be any field and A = (n, Σ, M, α, η) an F-weighted automaton. Then: (i) span{α M(w) : w ∈ Σ*} = span{α M(w) : |w| < n}; (ii) if A is not equal to the zero automaton then there exists a word w of length at most n − 1 such that A(w) ≠ 0.

3.1 Algorithm Based on the Schwartz-Zippel Lemma

By Proposition 3 a ℚ-weighted automaton A with n states is zero if and only if its (n−1)-bounded language is zero, that is, it assigns weight zero to all words of length at most n − 1. Inspired by the work of Blum, Chandra and Wegman on free Boolean graphs [5], we represent the (n−1)-bounded language of an automaton by a polynomial in which each monomial represents a word and the coefficient of the monomial represents the weight of the word. We thereby reduce the zeroness problem to polynomial identity testing, for which there are a number of efficient randomised procedures.

Let A = (n, Σ, M, α, η) be a ℚ-weighted automaton. We introduce a family of variables x_{σ,i} (σ ∈ Σ, 1 ≤ i ≤ n − 1) and associate the monomial x_{σ₁,1} x_{σ₂,2} ⋯ x_{σ_k,k} with a word σ₁ σ₂ ⋯ σ_k of length k. Then we define the polynomial P_A by

P_A := Σ_{k=0}^{n−1} Σ_{σ₁⋯σ_k ∈ Σ^k} A(σ₁ ⋯ σ_k) · x_{σ₁,1} ⋯ x_{σ_k,k} .    (1)

It is immediate from Proposition 3 that P_A = 0 if and only if A is zero.

To test whether P_A = 0 we select a value for each variable independently and uniformly at random from a set of integers of size cn, for some constant c > 1. Clearly if P_A = 0 then this yields the value 0. On the other hand, if P_A ≠ 0 then P_A will evaluate to a nonzero value with probability at least 1 − 1/c by the following result of DeMillo and Lipton [11], Schwartz [24] and Zippel [30] and the fact that P_A has degree n − 1.

Theorem 3.1 ([11, 24, 30]).

Let F be a field and P ∈ F[x₁, …, x_k] a non-zero multivariate polynomial of total degree at most d. Fix a finite set S ⊆ F, and let a₁, …, a_k be chosen independently and uniformly at random from S. Then Pr[P(a₁, …, a_k) = 0] ≤ d/|S|.

While the number of monomials in P_A is proportional to |Σ|^{n−1}, i.e., exponential in n, writing

P_A = Σ_{k=0}^{n−1} α ( ∏_{i=1}^{k} Σ_{σ∈Σ} x_{σ,i} M(σ) ) η    (2)

it is clear that P_A can be evaluated on a particular set of numerical arguments in time polynomial in n. The formula (2) can be evaluated in a forward direction, starting with the initial state vector and post-multiplying by the transition matrices, or in a backward direction, starting with the final state vector and pre-multiplying by the transition matrices. In either case we get a polynomial-time Monte-Carlo algorithm for testing zeroness of ℚ-weighted automata. The backward variant is shown in Figure 1.

Algorithm
Input: Automaton A = (n, Σ, M, α, η)
if α η ≠ 0 then
    return "A(ε) ≠ 0"
v₀ := η
for j from 1 to n − 1 do
    choose a random vector u_j ∈ {1, …, cn}^Σ
    v_j := (Σ_{σ∈Σ} (u_j)_σ M(σ)) v_{j−1}
    if α v_j ≠ 0 then
        return "there is a word w with |w| = j such that A(w) ≠ 0"
return "A is zero with probability at least 1 − 1/c"
Figure 1: Algorithm for testing zeroness
Algorithm
Input: Automaton A = (n, Σ, M, α, η)
if α η ≠ 0 then
    return "A(ε) ≠ 0"
v₀ := η
for j from 1 to n − 1 do
    choose a random vector u_j ∈ {1, …, cn}^Σ
    v_j := (Σ_{σ∈Σ} (u_j)_σ M(σ)) v_{j−1}
    if α v_j ≠ 0 then
        b := α
        w := ε
        for i from j downto 1 do
            choose σ ∈ Σ with b M(σ) v_{i−1} ≠ 0
            w := wσ
            b := b M(σ)
        return "A(w) ≠ 0"
return "A is zero with probability at least 1 − 1/c"
Figure 2: Algorithm for testing zeroness, with counterexamples

The algorithm runs in time O(n(m + n)), where m is the number of nonzero entries in all the matrices M(σ), provided that sparse-matrix representations are used. In a set of case studies this randomised algorithm outperformed deterministic algorithms [15].
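The backward variant admits a compact sequential sketch (our own Python rendering, not the paper's implementation). On the nonnegative example below detection is certain, because positive random combinations of nonnegative weights cannot cancel.

```python
# Sequential sketch of the backward randomised zeroness test: starting
# from eta, repeatedly apply a random linear combination of the
# transition matrices and test whether alpha * v_j is nonzero.
import random
from fractions import Fraction as F

def random_zeroness(M, alpha, eta, rng=random):
    n = len(alpha)
    if sum(a * e for a, e in zip(alpha, eta)) != 0:
        return ('nonzero', 0)            # the empty word has nonzero weight
    v = list(eta)
    for j in range(1, n):
        u = {s: rng.randint(1, 100) for s in M}          # random vector u_j
        v = [sum(u[s] * sum(M[s][i][k] * v[k] for k in range(n)) for s in M)
             for i in range(n)]
        if sum(a * x for a, x in zip(alpha, v)) != 0:
            return ('nonzero', j)        # some word of length j is a witness
    return ('probably zero', None)

# Nonzero automaton with nonnegative weights: no cancellation can occur
# for positive u_j, so detection is certain on this instance.
M = {'a': [[F(0), F(1)], [F(0), F(0)]], 'b': [[F(0), F(0)], [F(0), F(1)]]}
alpha, eta = [F(1), F(0)], [F(0), F(1)]
assert random_zeroness(M, alpha, eta)[0] == 'nonzero'
# An automaton with a zero initial vector is trivially zero.
assert random_zeroness(M, [F(0), F(0)], eta) == ('probably zero', None)
```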

We can obtain counterexamples from the randomised algorithm by exploiting the self-reducible structure of the equivalence problem. We generate counterexamples incrementally, starting with the empty string and using the randomised algorithm as an oracle to know at each stage what to choose as the next letter in our counterexample. For efficiency reasons it is important to avoid repeatedly running the randomised algorithm. In fact, as shown in Figure 2, this can all be made to work with some post-processing following a single run of the randomised procedure.

To evaluate the polynomial P_A we substitute a set of randomly chosen rational values r = (r_{σ,i}) into Equation (2). Here we generalise this to a notion of partial evaluation of the polynomial P_A with respect to the values r and a word w = σ₁ ⋯ σ_j, 0 ≤ j ≤ n − 1. We define

P_A(r, w) := Σ_{l=j}^{n−1} α M(w) ( ∏_{i=j+1}^{l} Σ_{σ∈Σ} r_{σ,i} M(σ) ) η .

Notice that P_A(r, ε), where ε is the empty word, is the full evaluation of P_A at r, and, at the other extreme, P_A(r, w) = A(w) for any word w of length n − 1.

Proposition 3.1.

Suppose that P_A(r, w) ≠ 0, where w has length j. If j < n − 1 then either A(w) ≠ 0 or P_A(r, wσ) ≠ 0 for some σ ∈ Σ.

We prove the contrapositive: if A(w) = 0 and P_A(r, wσ) = 0 for each σ ∈ Σ, then P_A(r, w) = 0. This immediately follows from the equation

P_A(r, w) = A(w) + Σ_{σ∈Σ} r_{σ,j+1} · P_A(r, wσ) .

This equation is established from the definition of P_A(r, w) as follows:

P_A(r, w) = α M(w) η + Σ_{l=j+1}^{n−1} α M(w) ( Σ_{σ∈Σ} r_{σ,j+1} M(σ) ) ( ∏_{i=j+2}^{l} Σ_{σ∈Σ} r_{σ,i} M(σ) ) η
          = A(w) + Σ_{σ∈Σ} r_{σ,j+1} Σ_{l=j+1}^{n−1} α M(wσ) ( ∏_{i=j+2}^{l} Σ_{σ∈Σ} r_{σ,i} M(σ) ) η
          = A(w) + Σ_{σ∈Σ} r_{σ,j+1} · P_A(r, wσ) .

From Proposition 3.1 it is clear that, given values r such that P_A(r) ≠ 0, the algorithm in Figure 2 generates a counterexample trace w with A(w) ≠ 0.

The algorithm in Figure 1 can be parallelised, yielding an RNC algorithm, as iterated products of matrices can be computed in NC. On the other hand, the algorithm in Figure 2 yields a counterexample, but apparently cannot be parallelised efficiently because the counterexample is produced incrementally.
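The single-run post-processing can be sketched as follows (our own Python rendering; all names are ours). The greedy letter choice maintains the invariant that the row vector b satisfies b · v_i ≠ 0, so the final word is a genuine witness.

```python
# Sketch of counterexample extraction: after the backward pass finds
# alpha * v_j != 0, a witness word is recovered letter by letter.
import random
from fractions import Fraction as F

def row_times_mat(b, M):
    return [sum(b[i] * M[i][j] for i in range(len(b))) for j in range(len(M[0]))]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def zeroness_with_witness(M, alpha, eta, rng=random):
    n = len(alpha)
    if dot(alpha, eta) != 0:
        return ''                         # the empty word is a witness
    vs = [list(eta)]                      # vs[i] holds v_i
    for j in range(1, n):
        u = {s: rng.randint(1, 100) for s in M}
        vs.append([sum(u[s] * dot(M[s][i], vs[-1]) for s in M) for i in range(n)])
        if dot(alpha, vs[j]) != 0:
            b, word = list(alpha), ''
            for i in range(j, 0, -1):     # post-processing: pick letters greedily
                s = next(t for t in M if dot(row_times_mat(b, M[t]), vs[i - 1]) != 0)
                b = row_times_mat(b, M[s])
                word += s
            return word
    return None                           # probably zero

def weight(M, alpha, eta, w):
    b = list(alpha)
    for s in w:
        b = row_times_mat(b, M[s])
    return dot(b, eta)

# Nonzero automaton with nonnegative weights, so detection is certain here
M = {'a': [[F(0), F(1)], [F(0), F(0)]], 'b': [[F(0), F(0)], [F(0), F(1)]]}
alpha, eta = [F(1), F(0)], [F(0), F(1)]
w = zeroness_with_witness(M, alpha, eta)
assert w is not None and weight(M, alpha, eta, w) != 0
```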

3.2 Algorithm Based on the Isolating Lemma

We now develop a randomised procedure that can produce a counterexample in case of inequivalence. To this end we employ the Isolating Lemma of Mulmuley, Vazirani and Vazirani [20]. We use this lemma in a very similar way to [20], who are concerned with computing maximum matchings in graphs in RNC.

Lemma (Isolating Lemma [20]).

Let F be a family of subsets of a set {x₁, …, x_N}. Suppose that each element x_i is assigned a weight w_i chosen independently and uniformly at random from {1, …, 2N}. Define the weight of S ∈ F to be Σ_{x_i ∈ S} w_i. Then the probability that there is a unique minimum-weight set in F is at least 1/2.

We will apply the Isolating Lemma in conjunction with Proposition 3 to decide zeroness of a ℚ-weighted automaton A. Suppose A has n states and alphabet Σ. Given σ ∈ Σ and 1 ≤ i ≤ n − 1, choose a weight w_{σ,i} independently and uniformly at random from the set {1, …, 2(n−1)|Σ|}. Define the weight of a word w = σ₁ ⋯ σ_k, k ≤ n − 1, to be Σ_{i=1}^{k} w_{σ_i,i}. (The reader should not confuse this with the weight A(w) assigned to w by the automaton A.) Then we obtain a univariate polynomial q_A(y) from automaton A as follows:

q_A(y) := Σ_{k=0}^{n−1} α ( ∏_{i=1}^{k} Σ_{σ∈Σ} y^{w_{σ,i}} M(σ) ) η .

If A is equivalent to the zero automaton then clearly q_A = 0. On the other hand, if A is non-zero, then by Proposition 3 the set W := {w ∈ Σ* : |w| ≤ n − 1, A(w) ≠ 0} is non-empty. Thus there is a unique minimum-weight word w₀ ∈ W with probability at least 1/2 by the Isolating Lemma. In this case q_A contains the monomial y^{weight(w₀)} with coefficient A(w₀) as its smallest-degree monomial. Thus q_A ≠ 0 with probability at least 1/2.

It remains to observe that from the formula

q_A(y) = Σ_{k=0}^{n−1} α ( ∏_{i=1}^{k} Σ_{σ∈Σ} y^{w_{σ,i}} M(σ) ) η

and the fact that iterated products of matrices of univariate polynomials can be computed in NC [10] we obtain an RNC algorithm for determining zeroness of ℚ-weighted automata.

It is straightforward to extend the above algorithm to obtain an RNC procedure that not only decides zeroness of A but also outputs a word w such that A(w) ≠ 0 in case A is non-zero. Assume that A is non-zero and that the random choice of weights has isolated a unique minimum-weight word w₀ such that A(w₀) ≠ 0. To determine whether σ is the i-th letter of w₀ we can increase the weight w_{σ,i} by 1 while leaving all other weights unchanged and recompute the polynomial q_A. Then σ is the i-th letter of w₀ if and only if the minimum-degree monomial in q_A changes. All of these tests can be done independently, yielding an RNC procedure.

Theorem 3.2.

Given two ℚ-weighted automata A and B, there is an RNC procedure that determines whether or not A and B are equivalent and that outputs a word w with A(w) ≠ B(w) in case A and B are inequivalent.

From a practical perspective, the algorithm is less efficient than those from the previous subsection, as it requires computations on univariate polynomials rather than on mere numbers.
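To make the univariate construction concrete, here is a small sketch with dictionary-based polynomial arithmetic. All names are ours, and the weight range below is a fixed small value rather than the 2(n−1)|Σ| of the analysis; the example automaton has nonnegative weights, so q_A is nonzero for every choice of word weights.

```python
# Sketch: the univariate polynomial q_A(y) of the Isolating Lemma
# approach, with polynomials represented as {degree: coefficient} dicts.
import random
from fractions import Fraction as F

def padd(p, q):
    r = dict(p)
    for d, c in q.items():
        r[d] = r.get(d, 0) + c
        if r[d] == 0:
            del r[d]
    return r

def pmul(p, q):
    r = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            r[d1 + d2] = r.get(d1 + d2, 0) + c1 * c2
    return {d: c for d, c in r.items() if c != 0}

def poly_q(M, alpha, eta, wt, n):
    """Sum over words w with |w| < n of A(w) * y^(weight of w)."""
    total = {}
    row = [{0: a} if a != 0 else {} for a in alpha]   # row vector of polynomials
    for k in range(n):
        for i in range(len(eta)):                     # contribution of length-k words
            if eta[i] != 0:
                total = padd(total, pmul(row[i], {0: eta[i]}))
        if k < n - 1:                                 # multiply by sum_s y^wt[s,k+1] M(s)
            new = [{} for _ in alpha]
            for s in M:
                for i in range(len(alpha)):
                    for j in range(len(alpha)):
                        if M[s][i][j] != 0:
                            new[j] = padd(new[j],
                                          pmul(row[i], {wt[(s, k + 1)]: M[s][i][j]}))
            row = new
    return total

M = {'a': [[F(0), F(1)], [F(0), F(0)]], 'b': [[F(0), F(0)], [F(0), F(1)]]}
alpha, eta = [F(1), F(0)], [F(0), F(1)]
rng = random.Random(0)
wt = {(s, i): rng.randint(1, 8) for s in M for i in range(1, 2)}
q = poly_q(M, alpha, eta, wt, 2)
# A is nonzero (A('a') = 1) and its weights are nonnegative, so q != 0;
# the minimum degree is >= 1 since the empty word has weight A(eps) = 0.
assert q != {} and min(q) >= 1
```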

4 Minimisation of ℚ-Weighted Automata

A ℚ-weighted automaton is minimal if there is no equivalent automaton with strictly fewer states. It is known that minimal automata are unique up to a change of basis [7]. In this section we give an NC algorithm to decide whether a given ℚ-weighted automaton is minimal. We also give an RNC algorithm that computes a minimal automaton equivalent to a given ℚ-weighted automaton A.

4.1 Deciding Minimality

Let A = (n, Σ, M, α, η) be an automaton. Define the (infinite) matrix F to have rows indexed by Σ* and columns indexed by {1, …, n}, with the row indexed by w being the vector α M(w). The forward space of A is defined to be the row space of F. Similarly define the matrix B to have rows indexed by {1, …, n} and columns indexed by Σ*, with the column indexed by w being the vector M(w) η. The backward space is defined to be the column space of B. The product H := F B is called the Hankel matrix; it has rows and columns indexed by Σ* with H_{u,v} = A(uv). By linear algebra we have rank(H) ≤ min(rank(F), rank(B)) ≤ n. A fundamental result [7] is that the above inequalities are tight precisely when A is minimal:

Proposition (Carlyle and Paz).

An automaton A with n states is minimal if and only if the Hankel matrix H has rank n.

Using this result we show

Theorem.

Deciding whether a ℚ-weighted automaton is minimal is in NC.


To check that a given automaton A = (n, Σ, M, α, η) is minimal it suffices to verify that the associated Hankel matrix H has rank n. Since H = F B, this holds if and only if the matrices F and B both have rank n. We show how to check that F has rank n; the procedure for B is entirely analogous.

Let F′ be the sub-matrix of F obtained by retaining only those rows indexed by words in Σ^{<n} := {w ∈ Σ* : |w| < n}. By Proposition 3(i) we have rank(F′) = rank(F). Thus

rank(F) = n ⟺ rank(F′) = n ⟺ rank((F′)ᵀ F′) = n ⟺ det((F′)ᵀ F′) ≠ 0 .

The middle equivalence holds because for any vector x, (F′)ᵀ F′ x = 0 implies xᵀ (F′)ᵀ F′ x = 0, which in turn implies that F′ x = 0.

Since determinants can be computed in NC it only remains to show that we can compute each entry of the matrix (F′)ᵀ F′ in NC. Let e_i be the column vector with 1 in the i-th position and 0 in all other positions. Given 1 ≤ i, j ≤ n we have

((F′)ᵀ F′)_{i,j} = Σ_{w ∈ Σ^{<n}} (α M(w) e_i)(α M(w) e_j) = Σ_{k=0}^{n−1} (α ⊗ α) ( Σ_{σ∈Σ} M(σ) ⊗ M(σ) )^k (e_i ⊗ e_j) .

But this last expression can be computed in NC since sums and matrix powers can be computed in NC [10].
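The rank criterion itself is easy to test on small examples. The sketch below (all names ours) computes the two ranks by a sequential worklist closure rather than the parallel sum-of-matrix-powers formula that gives membership in NC.

```python
# Sequential sketch of the minimality criterion: A is minimal iff the
# forward matrix and the backward matrix both have rank n.
from fractions import Fraction as F

def rank(rows):
    rows = [list(v) for v in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def forward_rank(M, alpha):
    """Dimension of span{alpha * M(w)} via a worklist closure."""
    basis, frontier = [], [list(alpha)]
    while frontier:
        v = frontier.pop()
        if rank(basis + [v]) > len(basis):
            basis.append(v)
            frontier += [[sum(v[i] * M[s][i][j] for i in range(len(v)))
                          for j in range(len(v))] for s in M]
    return len(basis)

def backward_rank(M, eta):
    Mt = {s: [list(col) for col in zip(*M[s])] for s in M}
    return forward_rank(Mt, eta)

def is_minimal(M, alpha, eta):
    return forward_rank(M, alpha) == len(alpha) == backward_rank(M, eta)

# One state computing 2^|w| is minimal; a two-state automaton computing
# the same weights is not (its backward space has dimension 1).
assert is_minimal({'a': [[F(2)]]}, [F(1)], [F(1)])
assert not is_minimal({'a': [[F(1), F(1)], [F(1), F(1)]]}, [F(1), F(0)], [F(1), F(1)])
```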

4.2 Minimising an Automaton

Next we give an RNC algorithm to minimise a given automaton. The key idea is that we can compute a basis of the forward space by generating random vectors in the space. We show that a randomly generated set of such vectors, of cardinality equal to the dimension of the forward space, is likely to be a basis of it. We can likewise compute a basis of the backward space. We give the construction for the forward space; the proof for the backward space is similar.

The construction involves an application of polynomial identity testing in a similar manner to Section 3.1. Consider again a family of variables x_{σ,i} and associate the monomial x_{σ₁,1} ⋯ x_{σ_k,k} with a word σ₁ ⋯ σ_k. Then we define the row vector v(x) of polynomials by

v(x) := Σ_{k=0}^{n−1} α ∏_{i=1}^{k} Σ_{σ∈Σ} x_{σ,i} M(σ) .

Note that evaluating v at a vector r of rationals yields a vector in the forward space.

Proposition 4.2.

Let V be a proper subspace of the forward space and let c be a positive integer. Then for r chosen uniformly at random from {1, …, cn}^{Σ × {1,…,n−1}} we have Pr[v(r) ∈ V] ≤ 1/c.

Pick a non-zero vector u in the forward space that is orthogonal to V. Notice that the polynomial v(x) uᵀ is non-zero, since the coefficient of the monomial corresponding to a word w is α M(w) uᵀ, and this is clearly non-zero for at least one w (otherwise u would be orthogonal to the whole forward space, and hence to itself). Now v(r) ∈ V only if v(r) uᵀ = 0. Since v(x) uᵀ has degree at most n − 1, it follows from Theorem 3.1 that Pr[v(r) ∈ V] is at most (n−1)/(cn) ≤ 1/c.

The procedure to generate a basis for the forward space is shown in Figure 3.

Algorithm Forward-Basis
Input: Automaton A = (n, Σ, M, α, η) and error parameter c
for j from 1 to n do
    choose a random vector r_j ∈ {1, …, cn}^{Σ × {1,…,n−1}}
    f_j := v(r_j)
let m be maximum such that f₁, …, f_m is linearly independent
return "f₁, …, f_m is a basis of the forward space"
Figure 3: Algorithm for generating a basis of the forward space

The algorithm Forward-Basis necessarily returns a linearly independent set of vectors in the forward space. It only fails to output a basis if f_j lies in the span of f₁, …, f_{j−1} for some j for which that span is a proper subspace of the forward space. By Proposition 4.2 this happens with probability at most 1/c for any given j, so the total probability that Forward-Basis does not give a correct output is at most n/c. Thus, e.g., choosing c = 2n we have an error probability of at most 1/2.

It remains to observe that Forward-Basis can be made to run in polylogarithmic parallel time. We perform the assignments f_j := v(r_j) for 1 ≤ j ≤ n in parallel. As observed in Section 3.1, the computation of v(r_j) involves an iterated matrix product, which can be done in NC. We also check linear independence of f₁, …, f_j for 1 ≤ j ≤ n in parallel. Each check involves computing the rank of a j × n matrix, which can again be done in NC [14].
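A sequential sketch of Forward-Basis follows (our Python rendering; the evaluation range is a fixed multiple of n standing in for the parameter c).

```python
# Sketch of Forward-Basis: random evaluations of the vector of
# polynomials v(x) yield, with high probability, a basis of the
# forward space.
import random
from fractions import Fraction as F

def random_forward_vector(M, alpha, r):
    """Evaluate v at r: sum over k < n of alpha * prod_i sum_s r[s,i] M(s)."""
    n = len(alpha)
    total, row = [F(0)] * n, list(alpha)
    for k in range(n):
        total = [t + x for t, x in zip(total, row)]
        if k < n - 1:
            row = [sum(r[(s, k + 1)] * sum(row[i] * M[s][i][j] for i in range(n))
                       for s in M) for j in range(n)]
    return total

def rank(rows):
    rows = [list(v) for v in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def forward_basis(M, alpha, rng=random):
    n = len(alpha)
    vecs = []
    for _ in range(n):
        r = {(s, i): F(rng.randint(1, 100 * n)) for s in M for i in range(1, n)}
        vecs.append(random_forward_vector(M, alpha, r))
    m = max(k for k in range(n + 1) if rank(vecs[:k]) == k)
    return vecs[:m]

# Duplicate-state automaton: its forward space has dimension 1, and every
# evaluation of v is a positive multiple of (1, 1).
basis = forward_basis({'a': [[F(1), F(0)], [F(0), F(1)]]}, [F(1), F(1)])
assert len(basis) == 1 and basis[0][0] == basis[0][1] != 0
```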

Given bases of and , minimisation proceeds via a classical construction of Schützenberger [23]. We briefly recall this construction and show that it can be implemented in NC by making one call to algorithm Forward-Basis and one call to the corresponding backward version of this algorithm.

Let m and F ∈ ℚ^{m×n} be such that the rows of F form a basis of the forward space of A, with the first row of F being α. Similarly, let p and B ∈ ℚ^{n×p} be such that the columns of B form a basis of the backward space of A, with the first column of B being η. Since the forward space is invariant under right multiplication by each M(σ), and the backward space is invariant under left multiplication by each M(σ), there exist maps M̃ : Σ → ℚ^{m×m} and M̂ : Σ → ℚ^{p×p} such that

M̃(σ) F = F M(σ)   and   M(σ) B = B M̂(σ)   for all σ ∈ Σ.    (5)

Call à := (m, Σ, M̃, e₁, F η) a forward reduction of A with base F and similarly  := (p, Σ, M̂, α B, e₁) a backward reduction of A with base B, where e₁ denotes the first coordinate vector of the appropriate dimension.

Proposition ([23]).

Let A be an automaton. Then the backward reduction of a forward reduction of A is minimal and equivalent to A.

Theorem.

There is an RNC algorithm that transforms a given automaton into an equivalent minimal automaton.


Let A be an automaton. We have already shown that we can compute in randomised NC a matrix F whose rows form a basis of the forward space of A. Given F we can compute the forward reduction à in NC, since each transition matrix M̃(σ) is uniquely defined as the solution to the linear system of equations (5). Using the same reasoning we can compute a backward reduction of à in randomised NC. This is the minimal automaton that we seek.
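The whole pipeline (a forward reduction followed by a backward reduction) can be sketched sequentially as follows. All names are ours, and the bases here are computed by a deterministic closure, whereas the paper uses the randomised Forward-Basis algorithm in order to stay within NC.

```python
# Sketch of the two-step reduction: forward-reduce, then backward-reduce.
from fractions import Fraction as F

def vec_mat(v, A):
    return [sum(v[i] * A[i][j] for i in range(len(v))) for j in range(len(A[0]))]

def solve_in_rowspace(rows, t):
    """Return c with c * rows == t, assuming t lies in the row space."""
    m, n = len(rows), len(rows[0])
    aug = [[rows[i][j] for i in range(m)] + [t[j]] for j in range(n)]
    r, pivots = 0, []
    for col in range(m):
        piv = next((i for i in range(r, n) if aug[i][col] != 0), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        for i in range(n):
            if i != r and aug[i][col] != 0:
                f = aug[i][col] / aug[r][col]
                aug[i] = [a - f * b for a, b in zip(aug[i], aug[r])]
        pivots.append(col)
        r += 1
    sol = [F(0)] * m
    for i, col in enumerate(pivots):
        sol[col] = aug[i][m] / aug[i][col]
    return sol

def in_span(basis, v):
    if not basis:
        return not any(v)
    c = solve_in_rowspace(basis, v)
    recon = [sum(c[i] * basis[i][j] for i in range(len(basis))) for j in range(len(v))]
    return recon == v

def forward_reduce(M, alpha, eta):
    basis, frontier = [], [list(alpha)]
    while frontier:                       # closure of {alpha} under the M(s)
        v = frontier.pop()
        if not in_span(basis, v):
            basis.append(v)
            frontier += [vec_mat(v, M[s]) for s in M]
    Mred = {s: [solve_in_rowspace(basis, vec_mat(row, M[s])) for row in basis]
            for s in M}
    return (Mred, solve_in_rowspace(basis, list(alpha)),
            [sum(row[j] * eta[j] for j in range(len(eta))) for row in basis])

def transpose(M, alpha, eta):
    Mt = {s: [list(col) for col in zip(*M[s])] for s in M}
    return Mt, list(eta), list(alpha)

def minimise(M, alpha, eta):
    fwd = forward_reduce(M, alpha, eta)
    return transpose(*forward_reduce(*transpose(*fwd)))

def weight(M, alpha, eta, w):
    v = list(alpha)
    for s in w:
        v = vec_mat(v, M[s])
    return sum(v[i] * eta[i] for i in range(len(v)))

# Redundant two-state automaton computing 2 * 2^|w|
M = {'a': [[F(2), F(0)], [F(0), F(2)]]}
alpha, eta = [F(1), F(1)], [F(1), F(1)]
Mm, am, em = minimise(M, alpha, eta)
assert len(am) == 1                      # a single state suffices
for k in range(4):
    assert weight(Mm, am, em, 'a' * k) == weight(M, alpha, eta, 'a' * k)
```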

5 Probabilistic Reward Automata

In this section we consider probabilistic reward automata, which extend Rabin’s probabilistic automata [21] with rewards on transitions. The resulting notion can be seen as a type of partially observable Markov Decision Process [4]. A similar model has been investigated from the point of view of language theory in [8]. Rewards are allowed to be negative, in which case they can be seen as costs. In Example 5.2 we use costs to record the passage of time in an encryption protocol.

A probabilistic reward automaton is a tuple A = (n, k, Σ, M, R, α, η), where n is the number of states; k is the number of types of reward; Σ is a finite alphabet; M(σ) is an n × n rational sub-stochastic matrix for each σ ∈ Σ; R(σ) is an n × n matrix with entries in ℚ^k for each σ ∈ Σ; α is an n-dimensional rational stochastic row vector; and η is a rational n-dimensional column vector with all entries lying in the interval [0, 1]. We think of M as the transition function, R as the reward function, α as the initial-state vector, and η as the final-state vector.

The total reward of a run is the sum of the rewards along all its transitions. The expected reward of a word is the sum of the rewards of all runs over that word, weighted by their respective probabilities. Formally, given a word w = σ₁ ⋯ σ_m and a path of states π = q₀ q₁ ⋯ q_m, the probability and total reward of the path are respectively defined by

Prob(π) := α_{q₀} ( ∏_{i=1}^{m} M(σ_i)_{q_{i−1},q_i} ) η_{q_m}   and   Rew(π) := Σ_{i=1}^{m} R(σ_i)_{q_{i−1},q_i} .

The value of the word is the expected reward over all runs:

A(w) := Σ_{π} Prob(π) · Rew(π) .    (6)


5.1 Expectation Equivalence

Two probabilistic reward automata A₁ and A₂ over the same alphabet are defined to be equivalent in expectation if A₁(w) = A₂(w) for all words w. In this section we give a simple reduction of the equivalence problem for probabilistic reward automata to the equivalence problem for ℚ-weighted automata. The idea is to combine transition probabilities and rewards in a single matrix. Without loss of generality we consider automata with a single type of reward; the general problem can be reduced to this by considering each component separately.

Let A = (n, 1, Σ, M, R, α, η) be a probabilistic reward automaton. We define a ℚ-weighted automaton B such that B(w) = A(w) for each word w. First we introduce the following matrices:

E := ( 0 1 ; 0 0 )   and   N(σ) := I ⊗ M(σ) + E ⊗ (R(σ) ∘ M(σ))   for each σ ∈ Σ.

We also write I for the 2 × 2 identity matrix. Now we define

B := (2n, Σ, N, (1, 0) ⊗ α, (0, 1)ᵀ ⊗ η),

where ⊗ denotes Kronecker product and ∘ denotes Hadamard product (cf. Section 2.2).

Proposition 5.1.

B(w) = A(w) for all words w ∈ Σ*.


We show by induction that for all words w = σ₁ ⋯ σ_m we have

N(w) = I ⊗ M(w) + E ⊗ Σ_{i=1}^{m} M(σ₁ ⋯ σ_{i−1}) (R(σ_i) ∘ M(σ_i)) M(σ_{i+1} ⋯ σ_m) .    (7)

The base case, w = ε, is clear. For the induction step we have

N(wσ) = ( I ⊗ M(w) + E ⊗ Σ_{i=1}^{m} M(σ₁ ⋯ σ_{i−1}) (R(σ_i) ∘ M(σ_i)) M(σ_{i+1} ⋯ σ_m) ) ( I ⊗ M(σ) + E ⊗ (R(σ) ∘ M(σ)) ) .

But using Proposition 2.2 and the identity E E = 0, the above expression simplifies to

I ⊗ M(wσ) + E ⊗ ( Σ_{i=1}^{m} M(σ₁ ⋯ σ_{i−1}) (R(σ_i) ∘ M(σ_i)) M(σ_{i+1} ⋯ σ_m) M(σ) + M(w) (R(σ) ∘ M(σ)) ) .

This completes the induction step.

Using Proposition 2.2 and the facts that (1, 0) I (0, 1)ᵀ = 0 and (1, 0) E (0, 1)ᵀ = 1, it follows from (7) that

B(w) = Σ_{i=1}^{m} α M(σ₁ ⋯ σ_{i−1}) (R(σ_i) ∘ M(σ_i)) M(σ_{i+1} ⋯ σ_m) η .

But the equivalence of the above expression and (6) follows from distributivity of multiplication over addition.

Corollary.

Expectation equivalence of probabilistic reward automata can be decided in NC. Moreover there is an RNC procedure that determines whether or not two automata are equivalent and outputs a word on which they differ in case they are inequivalent.


The first part follows by combining Proposition 5.1 with the NC algorithm for ℚ-weighted automaton equivalence in [29]. The second part follows by combining Proposition 5.1 with Theorem 3.2.
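For a single reward type the combined matrix can be written out in block form: a matrix of the shape I ⊗ M(σ) + E ⊗ (R(σ) ∘ M(σ)) is the 2 × 2-block upper-triangular matrix with M(σ) on the diagonal and R(σ) ∘ M(σ) in the top-right block. The sketch below (our own notation, a sanity check rather than the paper's code) verifies this against a brute-force sum over paths.

```python
# Sketch: a 2n-state weighted automaton whose weights are expected
# rewards, checked against brute-force path enumeration.
from fractions import Fraction as F
from itertools import product

def vec_mat(v, A):
    return [sum(v[i] * A[i][j] for i in range(len(v))) for j in range(len(A[0]))]

def build_weighted(M, R, alpha, eta):
    """2n-state weighted automaton computing expected rewards."""
    n = len(alpha)
    N = {}
    for s in M:
        N[s] = [[F(0)] * (2 * n) for _ in range(2 * n)]
        for i in range(n):
            for j in range(n):
                N[s][i][j] = M[s][i][j]                    # I (x) M block
                N[s][i][n + j] = R[s][i][j] * M[s][i][j]   # E (x) (R o M) block
                N[s][n + i][n + j] = M[s][i][j]
    return N, list(alpha) + [F(0)] * n, [F(0)] * n + list(eta)

def expected_reward(M, R, alpha, eta, w):
    """Brute force: sum over paths of probability times total reward."""
    n = len(alpha)
    total = F(0)
    for path in product(range(n), repeat=len(w) + 1):
        prob, rew = alpha[path[0]], F(0)
        for i, s in enumerate(w):
            prob *= M[s][path[i]][path[i + 1]]
            rew += R[s][path[i]][path[i + 1]]
        total += prob * eta[path[-1]] * rew
    return total

# Toy automaton: on 'a', stay with prob 1/2 (reward 1) or move with
# prob 1/2 (reward 3); the second state loops with reward 2.
M = {'a': [[F(1, 2), F(1, 2)], [F(0), F(1)]]}
R = {'a': [[F(1), F(3)], [F(0), F(2)]]}
alpha, eta = [F(1), F(0)], [F(1), F(1)]
N, beta, zeta = build_weighted(M, R, alpha, eta)
for k in range(4):
    v = list(beta)
    for s in 'a' * k:
        v = vec_mat(v, N[s])
    assert sum(x * y for x, y in zip(v, zeta)) == expected_reward(M, R, alpha, eta, 'a' * k)
```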

5.2 Distribution Equivalence

Two probabilistic reward automata are called distribution equivalent if they induce identical distributions on rewards for each input word. We formalise this notion by translating probabilistic reward automata into weighted automata over the field of rational Laurent functions, as defined in Section 2.3. We consider ε-transitions in this section because they are convenient for applications (cf. Example 5.2) and because we cannot rely on existing ε-elimination results in the presence of rewards.

Let A = (n, k, Σ ∪ {ε}, M, R, α, η) be a probabilistic reward automaton, where ε ∉ Σ. To make ε-elimination more straightforward, we assume that the transition matrix M(ε) has no recurrent states, i.e., that its spectral radius is strictly less than one. We now define an automaton B = (n, Σ, N, α, η) weighted over the field of rational Laurent functions as follows. For σ ∈ Σ ∪ {ε}, let Q(σ) be the n × n matrix with entries Q(σ)_{s,t} := M(σ)_{s,t} · y₁^{r₁} ⋯ y_k^{r_k}, where y₁, …, y_k are formal variables and (r₁, …, r_k) = R(σ)_{s,t}. For σ ∈ Σ we set N(σ) := (I − Q(ε))⁻¹ Q(σ), and we extend N to a map on words by defining

N(σ₁ ⋯ σ_m) := N(σ₁) ⋯ N(σ_m) (I − Q(ε))⁻¹

for a word σ₁ ⋯ σ_m ∈ Σ*. Our convention on ε-transitions implies that det(I − Q(ε)) is a non-zero Laurent polynomial (its value at y₁ = ⋯ = y_k = 1 is det(I − M(ε)) ≠ 0) and therefore, by Proposition 2.3, that (I − Q(ε))⁻¹ is well-defined and has entries whose numerators and denominators are Laurent polynomials with degree bound nd, where d is a degree bound on the entries of Q(ε). It follows that the entries of N(w) have degree bound polynomial in n, d and |w|.

Two probabilistic reward automata over the same alphabet Σ and with the same number of reward types are said to be equivalent if the corresponding automata B₁ and B₂ are equivalent, i.e., B₁(w) = B₂(w) for all words w ∈ Σ*. Now Proposition 3 implies that equivalence of such automata is decidable, but the algorithms of Schützenberger [23] and Tzeng [28] do not yield polynomial-time procedures in our case, because the complexity of solving systems of linear equations over the field of rational Laurent functions is not polynomial in n (indeed the solutions may have length exponential in n). However, it is not difficult to give a randomised polynomial-time algorithm to decide equivalence of probabilistic reward automata.

Let B be the automaton corresponding to a probabilistic reward automaton A with n states. For each word w of length at most n − 1, B(w) is a rational function whose numerator and denominator are polynomials of degree at most D, where D is the degree bound observed above. Now consider the set S := {1, …, 2D}. Suppose that we pick a ∈ S^k uniformly at random. Denote by B(w)(a) the result of substituting a_i for the formal variable y_i in the rational function B(w). Clearly if B is a zero automaton then B(w)(a) = 0 for all w. On the other hand, if B is non-zero then by Proposition 3 there exists a word w of length at most n − 1 such that B(w) ≠ 0. Since the degree of the rational expression B(w) is at most D, it follows from the Schwartz–Zippel theorem [11, 24, 30] that the probability that B(w)(a) = 0 is at most 1/2.

Thus our randomised procedure is to pick a uniformly at random and to check whether B(w)(a) ≠ 0 for some word w. To perform this final check we show that there is a ℚ-weighted automaton C such that C(w) = B(w)(a) for all w ∈ Σ*. We can then check C for zeroness using, e.g., Tzeng's algorithm [28]. The automaton C has the form C = (n, Σ, P, α, η′), where P(σ) := (I − Q(ε)(a))⁻¹ Q(σ)(a), η′ := (I − Q(ε)(a))⁻¹ η, and Q(σ)(a) denotes the result of substituting a into Q(σ).

Theorem.

There is an RNC procedure that determines whether or not two probabilistic reward automata are distribution equivalent, and which outputs a word on which they differ in case they are inequivalent.
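A much-simplified sketch of the random-substitution idea follows (no ε-transitions, a single reward type, and a brute-force comparison on short words in place of Tzeng's algorithm; all names are ours).

```python
# Sketch of the random-substitution check for distribution equivalence:
# substitute a random value a for the formal variable y, so each
# transition weight becomes M[s][i][j] * a**R[s][i][j], then compare the
# two resulting weighted automata on all words of length < n1 + n2.
import random
from fractions import Fraction as F
from itertools import product

def substituted_weight(M, R, alpha, eta, a, w):
    v = list(alpha)
    for s in w:
        v = [sum(v[i] * M[s][i][j] * a ** R[s][i][j] for i in range(len(v)))
             for j in range(len(M[s][0]))]
    return sum(x * y for x, y in zip(v, eta))

def distribution_equiv(aut1, aut2, rng=random):
    a = F(rng.randint(2, 10 ** 6))        # random substitution point
    n = len(aut1[2]) + len(aut2[2])       # total number of states
    for k in range(n):                    # words of length < n suffice
        for w in product(list(aut1[0]), repeat=k):
            if substituted_weight(*aut1, a, w) != substituted_weight(*aut2, a, w):
                return False
    return True

# A2 and A3 induce the same reward distribution (each step pays 0 or 2
# with probability 1/2); A1 pays exactly 1 per step.  All three have the
# same expected reward, but A1 is not distribution equivalent to A2.
A1 = ({'a': [[F(1)]]}, {'a': [[1]]}, [F(1)], [F(1)])
A2 = ({'a': [[F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]},
      {'a': [[0, 2], [0, 2]]}, [F(1), F(0)], [F(1), F(1)])
A3 = ({'a': [[F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]},
      {'a': [[2, 0], [2, 0]]}, [F(1), F(0)], [F(1), F(1)])
assert distribution_equiv(A2, A3)
assert not distribution_equiv(A1, A2)
```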

Example 5.2.

We consider probabilistic programs that randomly increase and decrease a single counter (initialised with 0) so that upon termination the counter has a random value N. The programs should be such that N is a random variable with N = G₁ − G₂, where G₁ and G₂ are independent random variables with geometric distributions with parameters p₁ = 1/2 and p₂ = 1/3, respectively. (By that we mean that Pr[G₁ = i] = (1 − p₁)^i p₁ for i ≥ 0, and similarly for G₂.) Figure 4 shows code in the syntax of the apex tool [16].

inc:com, dec:com |-
  var%2 flip;
  flip := 0;
  while (flip = 0) do {
    flip := coin[0:1/2,1:1/2];
    if (flip = 0) then { inc }
  };
  flip := 0;
  while (flip = 0) do {
    flip := coin[0:2/3,1:1/3];
    if (flip = 0) then { dec }
  }

inc:com, dec:com |-
  var%2 flip;
  flip := coin[0:1/2,1:1/2];
  if (flip = 0) then {
    while (flip = 0) do {
      flip := coin[0:1/2,1:1/2];
      if (flip = 0) then { inc }
    }
  } else {
    flip := 0;
    while (flip = 0) do {
      flip := coin[0:2/3,1:1/3];
      if (flip = 0) then { dec }
    }
  }
Figure 4: Two apex programs for producing a counter that is distributed as the difference between two geometrically distributed random variables.

The program on the left consecutively runs two while loops: it first increments the counter according to a geometric distribution with parameter 1/2 and then decrements the counter according to a geometric distribution with parameter 1/3, so that the final counter value is distributed as desired. The program on the right is more efficient in that it runs only one of two while loops, depending on a single coin flip at the beginning. It may not be obvious, though, that the final counter value follows the same distribution as in the left program. We used the apex tool to translate the programs to the two probabilistic reward automata shown in Figure 5. Here each counter increment corresponds to a reward of +1 and each counter decrement to a reward of −1.

[Figure 5: the probabilistic reward automata obtained from the two apex programs of Figure 4, with transitions labelled inc and dec.]