Bounded Model Checking for
Probabilistic Programs†

† This work has been partly funded by the awards AFRL # FA94531510317, ARO # W911NF1510592 and ONR # N0001415IP00052 and is supported by the Excellence Initiative of the German federal and state governments.
Abstract
In this paper we investigate the applicability of standard model checking approaches to verifying properties of probabilistic programs. As the operational model of a probabilistic program is a potentially infinite parametric Markov decision process, no direct adaptation of existing techniques is possible. Therefore, we propose an on-the-fly approach in which the operational model is successively created and verified via a step-wise execution of the program. This approach enables us to take key features of many probabilistic programs into account: nondeterminism and conditioning. We discuss the restrictions and demonstrate the scalability of our approach on several benchmarks.
1 Introduction
Probabilistic programs are imperative programs, written in languages like C, Scala, Prolog, or ML, with two added constructs: (1) the ability to draw values at random from probability distributions, and (2) the ability to condition values of variables in a program through observations. In recent years, such programming languages have become very popular due to their wide applicability in several different research areas [1]: Probabilistic programming is at the heart of machine learning for describing distribution functions; Bayesian inference is pivotal in their analysis. Probabilistic programs are central in security for describing cryptographic constructions (such as randomized encryption) and security experiments. In addition, they are an active research topic in quantitative information flow. Moreover, quantum programs are inherently probabilistic due to the random outcomes of quantum measurements. All in all, the simple and intuitive syntax of probabilistic programs makes these different research areas accessible to a broad audience.
However, although these programs typically consist of only a few lines of code, they are often hard to understand and analyze; bugs, for instance non-termination of a program, can easily occur. It seems of utmost importance to be able to automatically prove properties like “Is the probability of termination of the program at least 90%?” or “Is the expected value of a certain program variable at least 5 after successful termination?”. Approaches based on simulating a program to show properties or to infer probabilities have been proposed in the past [2, 3]. However, to the best of our knowledge there is no work which exploits well-established model checking algorithms for probabilistic systems such as Markov decision processes (MDPs) or Markov chains (MCs), although this has already been argued to be an interesting avenue for the future in [1].
As the operational semantics of a probabilistic program can be expressed as a (possibly infinite) MDP [4], it seems worthwhile to investigate the opportunities there. However, probabilistic model checkers like PRISM [5], IscasMC [6], or MRMC [7] offer efficient methods only for finite models.
We make use of the simple fact that for a finite unrolling of a program the corresponding operational MDP is also finite. Starting from a profound understanding of the (intricate) probabilistic program semantics—including features such as observations, unbounded (and hence possibly diverging) loops, and nondeterminism—we show that with each unrolling of the program both conditional reachability probabilities and conditional expected values of program variables increase monotonically. This gives rise to a bounded model checking approach for verifying probabilistic programs. It enables a user to write a program and automatically verify it against a desired property without further knowledge of the program's semantics.
We extend this methodology to the even more complicated case of parametric probabilistic programs, where probabilities are given by functions over parameters. At each iteration of the bounded model checking procedure, parameter valuations violating certain properties are guaranteed to induce violation at each further iteration.
We demonstrate the applicability of our approach using five well-known benchmarks from the literature. Using efficient model building and verification methods, our prototype is able to prove properties where the state space of the operational model is either infinite or consists of millions of states.
Related Work.
Besides the tools employing probabilistic model checking as listed above, one should mention the approach in [8], where finite abstractions of the operational semantics of a program were verified. However, this was defined for programs without parametric probabilities or observe statements. In [9], verification on partial operational semantics is theoretically discussed for termination probabilities.
The paper is organized as follows: In Section 2, we introduce the probabilistic models we use, the probabilistic programming language, and the structured operational semantics (SOS) rules to construct an operational (parametric) MDP. Section 3 first introduces the formal concepts needed for finite unrollings of a program, then shows how expectations and probabilities grow monotonically, and finally explains how this is utilized for bounded model checking. In Section 4, an extensive description of the benchmarks, properties, and experiments is given, before the paper concludes with Section 5.
2 Preliminaries
2.1 Distributions and Polynomials
A probability distribution over a finite or countably infinite set X is a function μ : X → [0, 1] with Σ_{x∈X} μ(x) = 1. The set of all distributions on X is denoted by Distr(X). Let V be a finite set of parameters over ℝ. A valuation for V is a function u : V → ℝ. Let ℚ[V] denote the set of multivariate polynomials with rational coefficients and ℚ(V) the set of rational functions (fractions of polynomials) over V. For g ∈ ℚ[V] or g ∈ ℚ(V), let g[u] denote the evaluation of g at u. We write g = 0 if g can be reduced to 0, and g ≠ 0 otherwise.
2.2 Probabilistic Models
First, we introduce parametric probabilistic models, which can be seen as transition systems whose transitions are labelled with polynomials over a set of parameters V.
Definition 1 (pMDP and pMC)
A parametric Markov decision process (pMDP) is a tuple M = (S, s_I, Act, 𝒫) with a countable set of states S, an initial state s_I ∈ S, a finite set of actions Act, and a transition function 𝒫 : S × Act × S → ℚ[V] satisfying Act(s) ≠ ∅ for all s ∈ S, where V is a finite set of parameters over ℝ and Act(s) = {α ∈ Act | ∃s′ ∈ S. 𝒫(s, α, s′) ≠ 0}. If |Act(s)| = 1 for all s ∈ S, M is called a parametric discrete-time Markov chain (pMC), denoted by D.
At each state, an action is chosen nondeterministically, then the successor states are determined probabilistically as defined by the transition function. Act(s) is the set of enabled actions at state s. As Act(s) is non-empty for all s ∈ S, there are no deadlock states. For pMCs there is only one single action per state and we write the transition probability function as 𝒫 : S × S → ℚ[V], omitting that action. Rewards are defined using a reward function rew : S → ℝ≥0 which assigns rewards to states of the model. Intuitively, the reward rew(s) is earned upon leaving the state s.
Schedulers.
The nondeterministic choices of actions in pMDPs can be resolved using schedulers¹. In our setting it suffices to consider memoryless deterministic schedulers [10]. For more general definitions we refer to [11].

¹ Also referred to as adversaries, strategies, or policies.
Definition 2
(Scheduler) A scheduler for a pMDP M is a function σ : S → Act with σ(s) ∈ Act(s) for all s ∈ S.
Let Sched^M denote the set of all schedulers for M. Applying a scheduler σ to a pMDP M yields an induced parametric Markov chain, as all nondeterminism is resolved, i.e., the transition probabilities are obtained w.r.t. the choice of actions.
Definition 3
(Induced pMC) Given a pMDP M = (S, s_I, Act, 𝒫), the pMC induced by a scheduler σ ∈ Sched^M is given by M^σ = (S, s_I, Act, 𝒫^σ), where 𝒫^σ(s, s′) = 𝒫(s, σ(s), s′) for all s, s′ ∈ S.
Valuations.
Applying a valuation u to a pMDP M, denoted M[u], replaces each polynomial g in M by g[u]. We call M[u] the instantiation of M at u. A valuation u is well-defined for M if the replacement yields probability distributions at all states; the resulting model M[u] is a Markov decision process (MDP) or, in absence of nondeterminism, a Markov chain (MC).
Properties.
For our purpose we consider conditional reachability properties and conditional expected reward properties in MCs. For more detailed definitions we refer to [11, Ch. 10]. Given an MC D with state space S and initial state s_I, let Pr^D(¬◇U) denote the probability not to reach a set of undesired states U ⊆ S from the initial state within D. Furthermore, let Pr^D(◇T | ¬◇U) denote the conditional probability to reach a set of target states T ⊆ S from the initial state within D, given that no state in the set U is reached. We use the standard probability measure on infinite paths through an MC. For a threshold λ ∈ [0, 1], the reachability property, asserting that a target state is to be reached with conditional probability at most λ, is denoted φ = P≤λ(◇T | ¬◇U). The property is satisfied by D, written D ⊨ φ, iff Pr^D(◇T | ¬◇U) ≤ λ. This is analogous for comparisons like <, >, and ≥.
The reward of a path through an MC until T is the sum of the rewards of the states visited along the path before reaching T. The expected reward of a finite path is given by its probability times its reward. The conditional expected reward of reaching T ⊆ S, given that no state in the set U ⊆ S is reached, denoted ExpRew^D(◇T | ¬◇U), is the expected reward of all paths accumulated until hitting T while not visiting a state in U in between, divided by the probability of not reaching a state in U (i.e., divided by Pr^D(¬◇U)). An expected reward property is given by ψ = E≤κ(◇T | ¬◇U) with threshold κ ∈ ℝ≥0. The property is satisfied by D, written D ⊨ ψ, iff ExpRew^D(◇T | ¬◇U) ≤ κ. Again, this is analogous for comparisons like <, >, and ≥. For details about conditional probabilities and expected rewards see [12].
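To make these conditional quantities concrete, the following sketch (our own illustration, not from the paper; the chain and its probabilities are invented) evaluates Pr(◇T | ¬◇U) on a tiny absorbing MC with exact arithmetic:

```python
from fractions import Fraction as F

# Hypothetical three-state absorbing MC: from s0 we reach the target t
# with probability 1/3, the undesired state u with probability 1/6, and
# stay in s0 with probability 1/2.  Since the self-loop is left
# eventually, Pr(<>t) is the geometric sum (1/3)/(1 - 1/2).
p_t, p_u, p_loop = F(1, 3), F(1, 6), F(1, 2)

pr_reach_t = p_t / (1 - p_loop)   # Pr(<>t)      = 2/3
pr_reach_u = p_u / (1 - p_loop)   # Pr(<>u)      = 1/3
pr_avoid_u = 1 - pr_reach_u       # Pr(not <>u)  = 2/3

# Conditional reachability: Pr(<>t | not <>u) = Pr(<>t and not <>u) / Pr(not <>u).
# Every path reaching t avoids u here, so the numerator is just Pr(<>t).
pr_cond = pr_reach_t / pr_avoid_u
print(pr_reach_t, pr_avoid_u, pr_cond)
```

Conditioning rescales the probability mass of the surviving paths, which is precisely the effect the observe statement of Section 2.3 has on program runs.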
Reachability probabilities and expected rewards for MDPs are defined on induced MCs for specific schedulers. We take here the conservative view that a property for an MDP has to hold for all possible schedulers.
Parameter Synthesis.
For pMCs, one is interested in synthesizing well-defined valuations that induce satisfaction or violation of the given specifications [13]. In detail, for a pMC D, a rational function f ∈ ℚ(V) is computed which—when instantiated by a well-defined valuation u for D—evaluates to the actual reachability probability or expected reward for D[u]. For pMDPs, schedulers inducing maximal or minimal probability or expected reward have to be considered [14].
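As a toy illustration of such a rational function (our own invented example, not one from the paper), consider a two-parameter pMC whose loop can be eliminated by hand; the resulting function is then instantiated at a well-defined valuation:

```python
from fractions import Fraction as F

def reach_function(p, q):
    """Hand-derived reachability function of a hypothetical pMC:
    s0 --p--> s1, s0 --(1-p)--> fail, s1 --q--> s0, s1 --(1-q)--> target.
    Eliminating the loop s0 -> s1 -> s0 yields
        Pr(<>target) = p * (1 - q) / (1 - p*q)."""
    return p * (1 - q) / (1 - p * q)

# Instantiating at the well-defined valuation u = {p: 1/2, q: 1/3}
# evaluates the function to the reachability probability of the
# instantiated (parameter-free) MC.
val = reach_function(F(1, 2), F(1, 3))
print(val)
```

Tools like PROPhESY compute such functions automatically via state elimination; the hand-derived function here merely illustrates the shape of the result.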
2.3 Conditional Probabilistic Guarded Command Language
We first present a programming language which is an extension of Dijkstra's guarded command language [15] with a binary probabilistic choice operator, yielding the probabilistic guarded command language (pGCL) [16]. In [17], pGCL was endowed with observe statements, giving rise to conditioning. The syntax of this conditional probabilistic guarded command language (cpGCL) is given by

P ::= skip | abort | x := E | P; P | if (G) {P} else {P}
    | {P} [p] {P} | {P} □ {P} | while (G) {P} | observe (G)
Here, x belongs to the set of program variables 𝒱; E is an arithmetical expression over 𝒱; G is a Boolean expression over arithmetical expressions over 𝒱. The probability p is given by a polynomial in ℚ[V]. Most of the cpGCL instructions are self-explanatory; we elaborate only on the following: For cpGCL-programs P₁ and P₂, {P₁} [p] {P₂} is a probabilistic choice where P₁ is executed with probability p and P₂ with probability 1 − p; analogously, {P₁} □ {P₂} is a nondeterministic choice between P₁ and P₂; abort is syntactic sugar for the diverging program while (true) {skip}. The statement observe (G) for the Boolean expression G blocks all program executions violating G and induces a rescaling of the probability of the remaining execution traces so that they sum up to one. For a cpGCL-program P, the set of program states is given by 𝕊 = {η | η : 𝒱 → ℚ}, i.e., the set of all variable valuations. We assume all variables to be assigned zero prior to execution or at the start of the program. This initial variable valuation η_I with η_I(x) = 0 for all x ∈ 𝒱 is called the initial state of the program.
Example 1
Consider the following cpGCL-program with variables c and x:

while (c = 0) {
    { x := x + 1 } [1/2] { c := 1 }
};
observe (“x is odd”)
While c is 0, the loop body is iterated: With probability 1/2 either x is incremented by one or c is set to one. After leaving the loop, the event that the valuation of x is odd is observed, which means that all program executions where x is even are blocked. Properties of interest for this program would, e.g., concern the termination probability, or the expected value of x after termination.
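Under one plausible reading of this example (a fair coin, i.e., probability 1/2, and all variables initialized to zero; these specifics are our assumption), the conditional expected value of x after termination can be estimated by Monte Carlo simulation, with the observe statement realized as rejection sampling:

```python
import random

def run_once(rng):
    """One execution of the (reconstructed) example program: while c = 0,
    a fair coin either increments x or sets c to one, ending the loop."""
    c, x = 0, 0                      # all variables start at zero
    while c == 0:
        if rng.random() < 0.5:
            x += 1                   # left branch of the probabilistic choice
        else:
            c = 1                    # right branch: exit the loop
    return x

def estimate_conditional_expectation(samples=100_000, seed=7):
    """observe("x is odd") as rejection sampling: runs with even x are
    discarded and the surviving mass is implicitly rescaled."""
    rng = random.Random(seed)
    kept = [x for x in (run_once(rng) for _ in range(samples)) if x % 2 == 1]
    return sum(kept) / len(kept)

print(estimate_conditional_expectation())
```

Under this reading, P(x = k) = (1/2)^(k+1), so the conditional expectation E[x | x odd] = (5/9)/(1/3) = 5/3 ≈ 1.67, which the estimate approaches.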
2.4 Operational Semantics for Probabilistic Programs
We now introduce an operational semantics for cpGCL-programs, which is given by an MDP as in Definition 1. The structure of such an operational MDP is schematically depicted below.
Squiggly arrows indicate reaching certain states via possibly multiple paths and states; the clouds indicate that there might be several states of the particular kind. ⟨P, η_I⟩ marks the initial state of the program P. In general, the states of the operational MDP are of the form ⟨Q, η⟩, where Q is the program that is left to be executed and η is the current variable valuation.
All runs of the program (paths through the MDP) either terminate and eventually end up in the ⟨sink⟩ state, or diverge (thus they never reach ⟨sink⟩). Diverging runs occur due to non-terminating computations. A terminating run has either terminated successfully, i.e., it passes a ⟨↓⟩-state, or it has terminated due to a violation of an observation, i.e., it passes the ⟨↯⟩-state. The sets of runs that eventually reach ⟨↓⟩, or ⟨↯⟩, or diverge are pairwise disjoint.
The ⟨↓⟩-labelled states are the only ones with positive reward, which is due to the fact that we want to capture probabilities of events (respectively expected values of random variables) occurring at successful termination of the program.
The random variables of interest are mappings f : 𝕊 → ℝ≥0 assigning non-negative reals to program states. Such random variables are referred to as post-expectations [16]. Formally, we have:
Definition 4 (Operational Semantics of Programs)
The operational semantics of a cpGCL-program P with respect to a post-expectation f is the MDP M^f⟦P⟧ = (S, s_I, Act, 𝒫), together with a reward function rew : S → ℝ≥0, where

– S is the countable set of states,

– s_I = ⟨P, η_I⟩ is the initial state,

– Act is the set of actions, and

– 𝒫 is the smallest relation defined by the SOS rules given in Figure 1.

The reward function is rew(s) = f(η) if s = ⟨↓, η⟩, and rew(s) = 0 otherwise.
A state of the form ⟨↓, η⟩ indicates successful termination, i.e., no commands are left to be executed. These terminal states and the ⟨↯⟩-state go to the ⟨sink⟩ state. skip without context terminates successfully. abort self-loops, i.e., diverges. An assignment x := E alters the variable valuation according to the assignment and then terminates successfully. For the concatenation P₁; P₂, successful termination of the first program P₁ means that the execution continues with P₂. If the execution of P₁ leads to ⟨↯⟩, so does the execution of P₁; P₂. Otherwise, the execution of P₁ is lifted such that P₂ is concatenated to the programs in the support of the resulting distribution. For more details on the operational semantics we refer to [4].
If for the conditional choice if (G) {P₁} else {P₂} the guard holds in the current valuation, P₁ is executed, otherwise P₂. The case for while (G) {P} is similar. For the probabilistic choice, a distribution over the two branches is created according to probability p. For {P₁} □ {P₂}, we call P₁ the left choice and P₂ the right choice, corresponding to the respective actions. For the observe statement, if the guard G holds then observe (G) acts like skip. Otherwise, the execution leads directly to ⟨↯⟩, indicating a violation of the observe statement.
Example 2
Reconsider Example 1. A part of the operational MDP for an arbitrary initial variable valuation η and the post-expectation x is depicted in Figure 2.² Note that this MDP is an MC, as the program contains no nondeterministic choices. The MDP has been unrolled until the second loop iteration, i.e., the unrolling could be continued at the frontier state. As our post-expectation is the value of the variable x, we assign this value to terminating states, i.e., the reward of a ⟨↓⟩-state with valuation η is η(x). At the state where the loop condition is violated, so is the subsequent observation, because x has been assigned an even number.

² We have tacitly overloaded the variable name x to an expectation here for readability. More formally, by the “expectation x” we actually mean the expectation η ↦ η(x).
3 Bounded Model Checking for Probabilistic Programs
In this section we describe our approach to model checking probabilistic programs. The key idea is that satisfaction or violation of certain properties for a program can be shown by means of a finite unrolling of the program. Therefore, we introduce the notion of a partial operational semantics of a program, which we exploit to apply standard model checking to prove or disprove properties.
First, we state the correspondence between the satisfaction of a property for a cpGCL-program P and for its operational semantics, the MDP M⟦P⟧. Intuitively, a program satisfies a property if and only if the property is satisfied on the operational semantics of the program.
Definition 5 (Satisfaction of Properties)
Given a cpGCL-program P and a (conditional) reachability or expected reward property φ, we define P ⊨ φ iff M⟦P⟧ ⊨ φ.
This correspondence on the level of a denotational semantics for cpGCL has been discussed extensively in [17]. Note that there, only schedulers which minimize expected rewards were considered. Here, we also need maximizing schedulers, as we consider both upper and lower bounds on expected rewards and probabilities. Note that satisfaction of properties is solely based on the operational semantics and the induced maximal or minimal probabilities or expected rewards.
We now introduce the notion of a partial operational MDP for a cpGCL-program P, which is a finite approximation of the full operational MDP of P. Intuitively, it results from a successive application of the SOS rules given in Figure 1, where not all applicable rules have been applied yet.
Definition 6 (Partial Operational Semantics)
A partial operational semantics for a cpGCL-program P is a sub-MDP M_P = (S′, s_I, Act, 𝒫′) of the operational semantics M⟦P⟧ = (S, s_I, Act, 𝒫) for P with S′ ⊆ S. Let E ⊆ S′ be the set of expandable states. Then the transition probability function 𝒫′ coincides with 𝒫 for all s ∈ S′ \ E, and for s ∈ E it is given by 𝒫′(s, α, s′) = 1 if s = s′, and 0 otherwise.
Intuitively, the set of non-terminating expandable states describes the states at which some SOS rule is still applicable. By this definition, the only transitions leaving expandable states are self-loops, which yields a well-defined probability measure on partial operational semantics. We will use this for our method, which is based on the fact that both (conditional) reachability probabilities and expected rewards for certain properties monotonically increase under further unrollings of a program and the respective partial operational semantics. This is discussed in what follows.
3.1 Growing Expectations
As mentioned before, we are interested in the probability of termination or the expected values of expectations (i.e., random variables ranging over program states) after successful termination of the program. This is measured on the operational MDP by the set of paths reaching ⟨sink⟩ from the initial state, conditioned on not reaching ⟨↯⟩ [17]. In detail, we have to compute the conditional expected value of the post-expectation f after successful termination of the program P, given that no observation was violated along the computation. For nondeterministic programs, we have to compute this value either under a minimizing or maximizing scheduler (depending on the given property). We focus our presentation on expected rewards and minimizing schedulers, but all concepts are analogous for the other cases. For M = M^f⟦P⟧ we have to compute

    min_{σ ∈ Sched^M} ExpRew^{M^σ}(◇⟨sink⟩ | ¬◇⟨↯⟩).
Recall that M^σ is the MC induced by scheduler σ as in Definition 3. Recall also that all paths not eventually reaching ⟨↯⟩ either diverge (collecting reward 0) or pass by a ⟨↓⟩-state and reach ⟨sink⟩. More importantly, all paths that do eventually reach ⟨↯⟩ also collect reward 0. Thus:

    min_σ ExpRew^{M^σ}(◇⟨sink⟩ | ¬◇⟨↯⟩) = min_σ ExpRew^{M^σ}(◇⟨sink⟩) / Pr^{M^σ}(¬◇⟨↯⟩).
Finally, observe that the probability of not reaching ⟨↯⟩ is one minus the probability of reaching ⟨↯⟩, which gives us:

    min_σ ExpRew^{M^σ}(◇⟨sink⟩) / (1 − Pr^{M^σ}(◇⟨↯⟩)).    (★)
Regarding the quotient minimization, we treat the undefined case 0/0 as less favorable than any defined value. For programs without nondeterminism this view agrees with a weakest-precondition-style semantics for probabilistic programs with conditioning [17].
It was shown in [18] that all strict lower bounds for quantities such as (★) are in principle computably enumerable in a monotonically non-decreasing fashion. One way to do so is to allow the program to be executed for an increasing number of steps n, and to collect the expected rewards of all execution traces that have led to termination within n computation steps. This corresponds naturally to constructing a partial operational semantics M_P as in Definition 6 and computing minimal expected rewards on M_P.
Analogously, it is of course also possible to monotonically enumerate all strict lower bounds of Pr(◇⟨↯⟩), since—again—we just need to collect the probability mass of all traces that have led to ⟨↯⟩ within n computation steps. Since probabilities are quantities bounded between 0 and 1, a lower bound for Pr(◇⟨↯⟩) yields an upper bound for Pr(¬◇⟨↯⟩) = 1 − Pr(◇⟨↯⟩).
Put together, a lower bound for the expected reward ExpRew^{M^σ}(◇⟨sink⟩) and a lower bound for Pr^{M^σ}(◇⟨↯⟩) yield a lower bound for the quotient (★). We are thus able to enumerate all lower bounds of (★) by inspection of a finite sub-MDP of M⟦P⟧. Formally, we have:
Theorem 3.1
For a cpGCL-program P, a post-expectation f, and a partial operational MDP M_P it holds that

    min_σ ExpRew^{M_P^σ}(◇⟨sink⟩ | ¬◇⟨↯⟩)  ≤  min_σ ExpRew^{(M^f⟦P⟧)^σ}(◇⟨sink⟩ | ¬◇⟨↯⟩).
3.2 Model Checking
Using Theorem 3.1, we transfer satisfaction or violation of certain properties from a partial operational semantics to the full semantics of the program. For an upper-bounded conditional expected reward property ψ = E≤κ(◇T | ¬◇U) we exploit that

    M_P ⊭ ψ  implies  M⟦P⟧ ⊭ ψ.    (1)
That means, if we can prove the violation of on the MDP induced by a finite unrolling of the program, it will hold for all further unrollings, too. This is because all rewards and probabilities are positive and thus further unrolling can only increase the accumulated reward and/or probability mass.
Dually, for a lower-bounded conditional expected reward property ψ′ = E≥κ(◇T | ¬◇U) we use the following property:

    M_P ⊨ ψ′  implies  M⟦P⟧ ⊨ ψ′.    (2)
The preconditions of Implications (1) and (2) can be checked by probabilistic model checkers like PRISM [5]; this is analogous for conditional reachability properties. Let us illustrate this by means of an example.
Example 3
As mentioned in Example 1, we are interested in the probability of termination. As outlined in Section 2.4, this probability can be measured by Pr(◇⟨sink⟩ | ¬◇⟨↯⟩).
We want this probability to be at least a given threshold λ, i.e., P ⊨ P≥λ(◇⟨sink⟩ | ¬◇⟨↯⟩). Since for further unrollings of our partially unrolled MDP this probability never decreases, the property can already be verified on the partial MDP by checking M_P ⊨ P≥λ(◇⟨sink⟩ | ¬◇⟨↯⟩),
where M_P is the sub-MDP from Figure 2. This finite sub-MDP is therefore a witness for the satisfaction of the property.
Algorithmically, this technique relies on suitable heuristics regarding the size of the considered partial MDPs. Basically, in each step a fixed number of states is expanded and the corresponding MDP is model checked, until either the property can be shown to be satisfied or violated, or no more states are expandable. In addition, heuristics based on shortest-path search algorithms can be employed to favor expandable states that so far induce high probabilities.
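The loop just described can be sketched as follows (our own minimal mock-up, not the prototype's code: the model is given by a hypothetical successor function, nondeterminism is omitted, and lower bounds are obtained by value iteration in which unexplored states contribute probability 0):

```python
from collections import deque

def successors(state):
    """Hypothetical example model: a chain 0 -> 1 -> ... -> 5 advanced
    with probability 0.9 and reset to 0 with probability 0.1; state 5
    is the absorbing target.  Returns (probability, successor) pairs."""
    if state == 5:
        return []
    return [(0.9, state + 1), (0.1, 0)]

def lower_bound(explored, init, is_target, sweeps=2000):
    """Value iteration for Pr(<>target) on the partial model: transitions
    into unexplored (expandable) states contribute 0, so the result is a
    lower bound that can only grow under further expansion."""
    x = {s: 0.0 for s in explored}
    for _ in range(sweeps):
        for s, succ in explored.items():
            x[s] = 1.0 if is_target(s) else sum(p * x.get(t, 0.0) for p, t in succ)
    return x[init]

def bounded_model_check(init, is_target, threshold, k=2, max_rounds=100):
    """Expand up to k states per round, then recompute the lower bound;
    stop as soon as the bound exceeds the threshold (by monotonicity the
    property then also holds on the full model)."""
    explored, frontier = {}, deque([init])
    bound = 0.0
    for _ in range(max_rounds):
        for _ in range(k):
            if not frontier:
                break
            s = frontier.popleft()
            if s in explored:
                continue
            explored[s] = successors(s)
            frontier.extend(t for _, t in explored[s] if t not in explored)
        bound = lower_bound(explored, init, is_target)
        if bound > threshold:
            return True, bound       # certified on the partial model
        if not frontier:
            return False, bound      # fully expanded; bound is exact
    return False, bound

ok, p = bounded_model_check(0, lambda s: s == 5, threshold=0.5)
print(ok, p)
```

In this toy model the target is reached with probability 1 on the full chain, so after three expansion rounds the partial model already certifies the threshold 0.5.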
Note that this method is a semi-algorithm when the model checking problems stated in Implications (1) and (2) concern strict bounds, i.e., < and >. It is then guaranteed that the given bounds are eventually exceeded.
Consider now the case where we want to show satisfaction of an upper-bounded property. As the conditional expected reward monotonically increases as long as the partial MDP is expandable, the corresponding implication only holds if there are no more expandable states, i.e., the model is fully expanded. This is analogous for the violation of lower-bounded properties. Note that many practical examples actually induce finite operational MDPs, which enables building the full model and performing model checking.
It remains to discuss how this approach can be utilized for parameter synthesis as explained in Section 2.2. For a partial operational pMDP M_P and a property φ, we use tools like PROPhESY [13] to determine for which parameter valuations φ is violated. For each well-defined valuation u with M_P[u] ⊭ φ it holds that M⟦P⟧[u] ⊭ φ; each parameter valuation violating a property on a partial pMDP also violates it on the fully expanded MDP.
4 Evaluation
Experimental Setup.
We implemented and evaluated the bounded model checking method in C++. For the model checking functionality, we use the stochastic model checker Storm, developed at RWTH Aachen University, and PROPhESY [19] for parameter synthesis.
We consider five different, well-known benchmark programs, three of which are based on models from the PRISM benchmark suite [5]; the others are taken from the literature (see Appendix A for some examples). We give the running times of our prototype on several instances of these models. Since there is, to the best of our knowledge, no other tool that can analyze cpGCL programs in a purely automated fashion, we cannot meaningfully compare these figures to other tools. As our technique is restricted to establishing that lower bounds on reachability probabilities and on the expectations of program variables, respectively, exceed a threshold, we need to fix this threshold for each experiment. For all our experiments, we chose the threshold to be 90% of the actual value for the corresponding query and expand a fixed number of states of the partial operational semantics of the program between model checking runs.
We ran the experiments on an HP BL685C G7 machine with 48 cores clocked at 2.0 GHz each and 192 GB of RAM; each experiment runs in a single thread with a time-out of one hour. We ran the following benchmarks³:

³ All input programs and log files of the experiments can be downloaded at moves.rwthaachen.de/wpcontent/uploads/conference_material/pgcl_atva16.tar.gz
Crowds Protocol [21].
This protocol aims at anonymizing the sender of messages by routing them probabilistically through a crowd of hosts. Some of these hosts, however, are corrupt and try to determine the real sender by observing the host that most recently forwarded a message. For this model, we are interested in (i) the probability that the real sender is observed more than a given number of times, and (ii) the expected number of times that the real sender is observed.
We also consider a variant (crowdsobs) of the model in which an observe statement ensures that after all messages have been delivered, hosts different from the real sender have been observed a certain minimal number of times. Unlike the model from the PRISM website, our model abstracts from the concrete identity of hosts different from the sender, since they are irrelevant for the properties of interest.
Herman Protocol.
In this protocol [22], hosts form a token-passing ring and try to steer the system into a stable state. We consider the probability that the system eventually reaches such a state in two variants of this model, in which the initial state is chosen either probabilistically or nondeterministically.
Robot.
The robot case study is loosely based on a similar model from the PRISM benchmark suite. It models a robot that navigates through a bounded area of an unbounded grid. While doing so, the robot can be blocked by a janitor that moves probabilistically across the whole grid. The property of interest is the probability that the robot eventually reaches its final destination.
Predator.
This model is due to Lotka and Volterra [23, p. 127]. A predator and a prey population evolve with mutual dependency on each other's numbers. Following basic biological principles, both populations undergo periodic fluctuations. We are interested in (i) the probability of one of the species going extinct, and (ii) the expected size of the prey population after one species has gone extinct.
Coupon Collector.
This is a famous example⁴ from textbooks on randomized algorithms [24]. A collector's goal is to collect all of N distinct coupons. In every round, the collector draws three new coupons chosen uniformly at random out of the N coupons. We consider as properties of interest (i) the probability that the collector possesses all coupons after a given number of rounds, and (ii) the expected number of rounds the collector needs until he has all coupons. Furthermore, we consider two slight variants: in the first one (couponobs), an observe statement ensures that the three drawn coupons are all different, and in the second one (couponclassic), the collector may only draw one coupon in each round.

⁴ https://en.wikipedia.org/wiki/Coupon_collector%27s_problem
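For intuition, the expected number of rounds can be estimated by a direct simulation of such a program (our own sketch; we assume the three coupons per round are drawn with replacement, which is one reading of the model):

```python
import random

def rounds_to_complete(n, draws_per_round, rng):
    """Simulate the collector: each round draws `draws_per_round` coupons
    uniformly (with replacement) from n kinds; returns the number of
    rounds until every kind has been seen at least once."""
    owned, rounds = set(), 0
    while len(owned) < n:
        rounds += 1
        for _ in range(draws_per_round):
            owned.add(rng.randrange(n))
    return rounds

def expected_rounds(n, draws_per_round, samples=50_000, seed=11):
    rng = random.Random(seed)
    return sum(rounds_to_complete(n, draws_per_round, rng)
               for _ in range(samples)) / samples

# Sanity check against the classic one-coupon-per-round variant, whose
# expected number of rounds is n * H_n (for n = 5: 5 * 137/60, about 11.42).
print(expected_rounds(5, 1))
print(expected_rounds(5, 3))
```

The couponobs variant could be mimicked by re-drawing until the three coupons of a round are pairwise different, mirroring the rejection induced by the observe statement.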
program  instance  #states  #trans.  full?  result  actual  time  
crowds  (100,60)  yes  
(100,80)  no  
(100,100)  no  
crowdsobs  (100,60)  yes  
(100,80)  no  
(100,100)  no  
herman  (17)  no  
(21)  no  
hermannd  (13)  yes  
(17)  no  TO  
robot    yes  
predator    no  
coupon  (5)  no  
(7)  no  
(10)  no  TO  
couponobs  (5)  no  
(7)  no  
(10)  no  TO  
couponclassic  (5)  no  3.4e3  3.8e3  3.8e3  
(7)  no  5.5e4  6.1e4  6.1e4  
(10)  no  3.3e5  3.6e5  TO  
Table 1 shows the results for the probability queries. For each model instance, we give the number of explored states and transitions and whether or not the model was fully expanded. Note that the state count is a multiple of the number of states expanded per step in case the model was not fully explored, because our prototype always expands that many states before the next model checking call. The next three columns show the probability bound, the result that the tool could achieve, as well as the actual answer to the query on the full (potentially infinite) model. Due to space constraints, we rounded these figures to two significant digits. We report the time in seconds that the prototype took to establish the result (TO = 3600 sec.).
We observe that for most examples few unfolding steps suffice to achieve more than 90% of the actual probability. For example, for the largest crowdsobs program, three unfolding steps were performed. Answering queries on programs including an observe statement can be costlier (crowds vs. crowdsobs), but does not need to be (coupon vs. couponobs). In the latter case, the observe statement prunes some unpromising paths early, whereas in the former case, the observation only happens at the very end, which intuitively makes it harder for the search to find target states. We are able to obtain non-trivial lower bounds for all but two case studies. For hermannd, not all of the (nondeterministically chosen) initial states were explored, because our exploration order currently does not favor states that influence the obtained result the most. Similarly, for the largest coupon collector examples, the time limit did not allow for finding a single target state. Again, an exploration heuristic that is more directed towards these states could potentially improve performance drastically.
program  instance  #states  #trans.  full?  result  actual  time 

crowds  (100,60)  yes  
(100,80)  no  
(100,100)  no  
crowdsobs  (100,60)  yes  
(100,80)  no  
(100,100)  no  
predator  no  ?  
coupon  (5)  no  
(7)  no  
(10)  no  TO  
couponobs  (5)  no  
(7)  no  
(10)  no  TO  
couponclassic  (5)  no  
(7)  no  
(10)  no  TO  
Table 2 shows the results for computing the expected value of program variables at terminating states. For technical reasons, our prototype currently cannot perform more than one unfolding step for this type of query. To achieve meaningful results, we therefore vary the number of explored states until 90% of the actual result is achieved. Note that for the predator program, the actual value for the query is not known to us, so we report the value at which the result only grows very slowly. The results are similar to the probability case in that most often a low number of states suffices to show meaningful lower bounds. Unfortunately, as before, we can only prove a trivial lower bound for the largest coupon collector examples.
Figure 3 illustrates how the obtained lower bounds approach the actual expected value with increasing number of explored states for two case studies. For example, in the left picture one can observe that exploring 60000 states is enough to obtain a very precise lower bound on the expected number of rounds the collector needs to gather all five coupons, as indicated by the dashed line.
Finally, we analyze a parametric version of the crowds model that uses two parameters to leave (i) the probability for a crowd member to be corrupt and (ii) the probability of forwarding (instead of delivering) a message unspecified. In each iteration of our algorithm, we obtain a rational function describing, for each parameter valuation, a lower bound on the actual probability of observing the real sender of the message more than once. Figure 4 shows the regions of the parameter space in which the protocol was determined to be unsafe (after two different numbers of iterations, respectively), in the sense that the probability to identify the real sender exceeds the given threshold. Since the results obtained over different iterations are monotonically increasing, we can conclude that all parameter valuations that were proved to be unsafe in some iteration are in fact unsafe in the full model. This in turn means that the blue area in Figure 4 grows in each iteration.
5 Conclusion and Future Work
We presented a direct verification method for probabilistic programs employing probabilistic model checking. We conjecture that the basic idea would smoothly translate to reasoning about recursive probabilistic programs [25]. In the future we are interested in how loop invariants [26] can be utilized to devise complete model checking procedures that prevent possibly infinite loop unrollings. This is especially interesting for reasoning about covariances [27], where a mixture of invariant reasoning and successively constructing the operational MC would yield sound over- and under-approximations of covariances. To increase the benefit for the user, we will combine this approach with methods for counterexamples [28], which can be given in terms of the programming language [29, 19]. Moreover, it seems promising to investigate how approaches to automatically repair a probabilistic model towards satisfaction of properties [30, 31] can be transferred to programs.
References
 [1] Gordon, A.D., Henzinger, T.A., Nori, A.V., Rajamani, S.K.: Probabilistic programming. In: FOSE, ACM Press (2014) 167–181
 [2] Sankaranarayanan, S., Chakarov, A., Gulwani, S.: Static analysis for probabilistic programs: inferring whole program properties from finitely many paths. In: PLDI, ACM (2013) 447–458
 [3] Claret, G., Rajamani, S.K., Nori, A.V., Gordon, A.D., Borgström, J.: Bayesian inference using data flow analysis. In: ESEC/SIGSOFT FSE, ACM Press (2013) 92–102
 [4] Gretz, F., Katoen, J.P., McIver, A.: Operational versus weakest preexpectation semantics for the probabilistic guarded command language. Perform. Eval. 73 (2014) 110–132
 [5] Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: Verification of probabilistic real-time systems. In: CAV. Volume 6806 of LNCS, Springer (2011) 585–591
 [6] Hahn, E.M., Li, Y., Schewe, S., Turrini, A., Zhang, L.: IscasMC: A web-based probabilistic model checker. In: FM. Volume 8442 of LNCS, Springer (2014) 312–317
 [7] Katoen, J.P., Zapreev, I.S., Hahn, E.M., Hermanns, H., Jansen, D.N.: The ins and outs of the probabilistic model checker MRMC. Performance Evaluation 68(2) (2011) 90–104
 [8] Kattenbelt, M.: Automated Quantitative Software Verification. PhD thesis, Oxford University (2011)
 [9] Sharir, M., Pnueli, A., Hart, S.: Verification of probabilistic programs. SIAM Journal on Computing 13(2) (1984) 292–314
 [10] Vardi, M.Y.: Automatic verification of probabilistic concurrent finite-state programs. In: FOCS, IEEE Computer Society (1985) 327–338
 [11] Baier, C., Katoen, J.P.: Principles of Model Checking. The MIT Press (2008)
 [12] Baier, C., Klein, J., Klüppelholz, S., Märcker, S.: Computing conditional probabilities in Markovian models efficiently. In: TACAS. Volume 8413 of LNCS, Springer (2014) 515–530
 [13] Dehnert, C., Junges, S., Jansen, N., Corzilius, F., Volk, M., Bruintjes, H., Katoen, J., Ábrahám, E.: Prophesy: A probabilistic parameter synthesis tool. In: CAV. Volume 9206 of LNCS, Springer (2015) 214–231
 [14] Quatmann, T., Dehnert, C., Jansen, N., Junges, S., Katoen, J.: Parameter synthesis for Markov models: Faster than ever. CoRR abs/1602.05113 (2016)
 [15] Dijkstra, E.W.: A Discipline of Programming. Prentice Hall (1976)
 [16] McIver, A., Morgan, C.: Abstraction, Refinement And Proof For Probabilistic Systems. Springer (2004)
 [17] Jansen, N., Kaminski, B.L., Katoen, J., Olmedo, F., Gretz, F., McIver, A.: Conditioning in probabilistic programming. Electr. Notes Theor. Comput. Sci. 319 (2015) 199–216
 [18] Kaminski, B.L., Katoen, J.P.: On the hardness of almost-sure termination. In: MFCS. Volume 9234 of LNCS, Springer (2015)
 [19] Dehnert, C., Jansen, N., Wimmer, R., Ábrahám, E., Katoen, J.: Fast debugging of PRISM models. In: ATVA. Volume 8837 of LNCS, Springer (2014) 146–162
 [20] Jansen, N., Dehnert, C., Kaminski, B.L., Katoen, J., Westhofen, L.: Bounded model checking for probabilistic programs. CoRR abs/1605.04477 (2016)
 [21] Reiter, M.K., Rubin, A.D.: Crowds: Anonymity for web transactions. ACM Trans. on Information and System Security 1(1) (1998) 66–92
 [22] Herman, T.: Probabilistic self-stabilization. Inf. Process. Lett. 35(2) (1990) 63–67
 [23] Brauer, F., Castillo-Chavez, C.: Mathematical Models in Population Biology and Epidemiology. Texts in Applied Mathematics. Springer New York (2001)
 [24] Erdős, P., Rényi, A.: On a classical problem of probability theory. Publ. Math. Inst. Hung. Acad. Sci., Ser. A 6 (1961) 215–220
 [25] Olmedo, F., Kaminski, B., Katoen, J.P., Matheja, C.: Reasoning about recursive probabilistic programs. In: LICS. (2016) [to appear].
 [26] Gretz, F., Katoen, J.P., McIver, A.: PRINSYS – on a quest for probabilistic loop invariants. In: QEST. Volume 8054 of LNCS, Springer (2013) 193–208
 [27] Kaminski, B., Katoen, J.P., Matheja, C.: Inferring covariances for probabilistic programs. In: QEST. Volume 9826 of LNCS, Springer (2016) [to appear].
 [28] Ábrahám, E., Becker, B., Dehnert, C., Jansen, N., Katoen, J., Wimmer, R.: Counterexample generation for discrete-time Markov models: An introductory survey. In: SFM. Volume 8483 of LNCS, Springer (2014) 65–121
 [29] Wimmer, R., Jansen, N., Ábrahám, E., Katoen, J.P.: High-level counterexamples for probabilistic automata. Logical Methods in Computer Science 11(1:15) (2015)
 [30] Bartocci, E., Grosu, R., Katsaros, P., Ramakrishnan, C.R., Smolka, S.A.: Model repair for probabilistic systems. In: TACAS. Volume 6605 of LNCS, Springer (2011) 326–340
 [31] Pathak, S., Ábrahám, E., Jansen, N., Tacchella, A., Katoen, J.P.: A greedy approach for the efficient repair of stochastic models. In: NFM. Volume 9058 of LNCS, Springer (2015) 295–309