Fluid Model Checking
In this paper we investigate a potential use of fluid approximation techniques in the context of stochastic model checking of CSL formulae. We focus on properties describing the behaviour of a single agent in a (large) population of agents, exploiting a limit result also known as fast simulation. In particular, we will approximate the behaviour of a single agent with a time-inhomogeneous CTMC, which depends on the environment and on the other agents only through the solution of the fluid differential equation, and model check this process. We will prove the asymptotic correctness of our approach in terms of satisfiability of CSL formulae. We will also present a procedure to model check time-inhomogeneous CTMCs against CSL formulae.
Keywords: Stochastic model checking; fluid approximation; mean field approximation; reachability probability; time-inhomogeneous Continuous Time Markov Chains
In recent years, there has been a growing interest in fluid approximation techniques in the formal methods community [1, 2, 3, 4, 5, 6]. These techniques, also known as mean field approximation, are useful for analysing quantitative models of systems based on continuous time Markov Chains (CTMC), possibly described in process algebraic terms. They work by approximating the discrete state space of the CTMC by a continuous one, and by approximating the stochastic dynamics of the process with a deterministic one, expressed by means of a set of differential equations. The asymptotic correctness of this approach is guaranteed by limit theorems [7, 8, 9], showing the convergence of the CTMC to the fluid ODE for systems of increasing sizes.
The notion of size can be different from domain to domain, yet in models of interacting agents, usually considered in computer science, the size has the standard meaning of population number. All these fluid approaches, in particular, require a shift from an agent-based description to a population-based one, in which the system is described by variables counting the number of agents in each possible state and so individual behaviours are abstracted. In fact, in large systems, the individual choices of single agents have a small impact, hence the whole system tends to evolve according to the average behaviour of agents. Therefore, the deterministic description of the fluid approximation is mainly related to the average behaviour of the model, and information about statistical properties is generally lost, although it can be partially recovered by introducing fluid equations of higher order moments of the stochastic process (moment closure techniques [10, 11, 12]).
In contrast to fluid approximation, the analysis of quantitative systems such as those described by process algebras can be carried out using quantitative model checking. These techniques have a long tradition in computer science and are powerful ways of querying a model and extracting information about its behaviour. As far as stochastic model checking is concerned, there are some consolidated approaches, based mainly on checking Continuous Stochastic Logic (CSL) formulae [13, 14, 15], which have led to widespread software tools. All these methods, however, suffer (to a greater or lesser extent) from the curse of state space explosion, which severely hampers their practical applicability. To mitigate these combinatorial barriers, many techniques have been developed, many of them based on some notion of abstraction or approximation of the original process [17, 18].
In this paper, we will precisely target this problem, trying to see to what extent fluid approximation techniques can be used to speed up the model checking of CTMC. We will not tackle this problem in general, but rather we will focus on a restricted subset of system properties: We will consider population models, in which many agents interact, and then focus on the behaviour of single agents. In fact, even if large systems behave almost deterministically, the evolution of a single agent in a large population is always stochastic. Single agent properties are interesting in many application domains. For instance, in performance models of computer networks, like client-server interaction, one is often interested in the behaviour and in quality-of-service metrics of a single client (or a single server), such as the waiting time of the client or the probability of a time-out.
Single agent properties may also be interesting in other contexts. For instance, in ecological models, one may be interested in the chances of survival or reproduction of an animal, or in its foraging patterns. In biochemistry, there is some interest in the stochastic properties of single molecules in a mixture (single molecule enzyme kinetics [20, 21]). Other examples may include the time to reach a certain location in a traffic model of a city, or the chances of escaping successfully from a building in a model of emergency egress.
The use of fluid approximation in this restricted context is made possible by a corollary of the fluid convergence theorems, known by the name of fast simulation [23, 9], which provides a characterization of the behaviour of a single agent in terms of the solution of the fluid equation: the agent senses the rest of the population only through its “average” evolution, as given by the fluid equation. This characterization can be proved to be asymptotically correct.
Our idea is simply to use the CTMC for a single agent obtained from the fluid approximation instead of the full model with interacting agents. In fact, extracting metrics from the description of the global system can be extremely expensive from a computational point of view. Fast simulation, instead, allows us to abstract the system and study the evolution of a single agent (or of a subset of agents) by decoupling its evolution from the evolution of its environment. This drastically reduces the dimensionality of the state space, often by several orders of magnitude.
Of course, in applying the mean field limit, we are introducing an error which is difficult to control (there are error bounds, but they depend on the final time and they are very loose). In practice, however, this error will not be too large, especially for systems with a large pool of agents. We stress that these are precisely the cases in which current tools suffer most severely from state space explosion, and that can benefit most from a fluid approximation. Nevertheless, we will see in the following that in many cases the quality of the approximation is good also for small populations.
In the rest of the paper, we will basically focus on how to analyse single agent properties of three kinds:
Next-state probabilities, i.e. the probability of jumping into a specific set of states, at a specific time.
Reachability properties, i.e. the probability of reaching a given set of goal states, while avoiding unsafe states.
Branching temporal logic properties, i.e. verifying CSL formulae.
A central feature of the abstraction based on fluid approximation is that the limit model of a single agent has rates depending on time, via the solution of the fluid ODE. Hence, the limit models are time-inhomogeneous CTMCs (ICTMCs). This introduces some additional complexity into the approach, as model checking of ICTMCs is far more difficult than the time-homogeneous case. To the best of the authors' knowledge, in fact, there is no known algorithm to solve this problem in general, although related work is presented in Section 2. We will discuss a general method in Sections 4, 5 and 6, based on the solution of variants of the Kolmogorov equations, which is expected to work for small state spaces and controlled dynamics of the fluid approximation. The main problem with ICTMC model checking is that the truth of a formula can depend on the time at which the formula is evaluated. Hence, we need to impose some regularity on the dependency of rates on time to control the complexity of time-dependent truth. We will see that the requirement we impose, piecewise analyticity of the rate functions, is intimately connected not only with the decidability of model checking for ICTMCs, but also with the lifting of convergence results from CTMCs to truth values of CSL formulae (Theorems 6.1 and 6.3).
The paper is organized as follows: in Section 2 we discuss work related to our approach. In Section 3, we introduce preliminary notions, fixing the class of models considered (Section 3.1) and presenting fluid limit and fast simulation theorems (Sections 3.2 and 3.3). In Section 4 we present the algorithms to compute next-state probability. In Section 5, instead, we consider the reachability problem, presenting a method to solve it for ICTMC. In both cases, we also discuss the convergence of next-state and reachability probabilities for increasing population sizes. In Section 6, instead, we focus on the CSL model checking problem for ICTMC, exploiting the routines for reachability developed before. We also consider the convergence of truth values for formulae about single agent properties. Finally, in Section 7, we discuss open issues and future work. All the proofs of propositions, lemmas, and theorems of the paper are presented in Appendix A. A preliminary version of this work has appeared in .
2 Related work
Model checking (time-homogeneous) Continuous Time Markov Chains (CTMC) against Continuous Stochastic Logic (CSL) specifications has a long tradition in computer science [13, 14, 15]. At its core, our approach to the study of time-bounded properties is similar to that developed in , because we consider a transient analysis of a Markov chain whose structure has been modified to reflect the formula under consideration. However, the technical details of the transient analysis, and even the structural modification, differ, to reflect the time-inhomogeneous nature of the process we are studying.
In contrast, the case of time-inhomogeneous CTMCs has received much less attention. To the best of the authors' knowledge, there has been no previous proposal of an algorithm to model check CSL formulae on an ICTMC. Nevertheless, model checking of ICTMCs has been considered with respect to other logics. Specifically, previous work includes model checking of HML and LTL on ICTMCs.
In , Katoen and Mereacre propose a model checking algorithm for Hennessy-Milner Logic on ICTMC. Their work is based on the assumption of piecewise constant rates (with a finite number of pieces) within the ICTMC. The model checking algorithm is based on the computation of integrals and the solution of algebraic equations with exponentials (for which a bound on the number of zeros can be found).
LTL model checking for ICTMC, instead, has been proposed by Chen et al. in . The approach works for time-unbounded formulae by constructing the product of the CTMC with a generalized Büchi automaton constructed from the LTL formula, and then reducing the model checking problem to computation of reachability of bottom strongly connected components in this larger (pseudo)-CTMC. The authors also propose an algorithm for solving time bounded reachability similar to the one considered in this paper (for time-constant sets).
Another approach related to the work we present is the verification of CTMC against deterministic timed automata (DTA) specifications , in which the verification works by taking the product of the CTMC with the DTA, which is then converted into a Piecewise Deterministic Markov Process (PDMP, ), and then solving a reachability problem for the PDMP so obtained. This extends earlier work by Baier et al.  and Donatelli et al. . These earlier approaches were limited to considering only a single clock, which means that they are able to avoid the consideration of ICTMCs, in the case of , through the use of supplementary variables and subordinate CTMCs.
In , Chen et al. consider the verification of time-homogeneous CTMC against formulae of the metric temporal logic (MTL). This entails finding the probability of the set of timed paths that satisfy the formula over a fixed, bounded time interval. The approach taken is one of approximation, based on an estimate of the maximal number of discrete jumps that will be needed in the CTMC, and on timed constraints over the residence time within the states of a path with up to that many steps. The probabilities are then determined by a multidimensional integral.
Our work is underpinned by the notion of fast simulation, which has previously been applied in a number of different contexts. One recent case is a study of policies to balance the load between servers in large-scale clusters of heterogeneous processors . A similar approach is adopted in , in the context of Markov games. These ideas also underlie the work of Hayden et al. in , where the authors extend the consideration of transient characteristics, as captured by the fluid approximation, to the approximation of first passage times, in the context of models generated from the stochastic process algebra PEPA. Their approach to passage times related to individual components is closely related to the fast simulation result and to the work presented in this paper.
In this section, we will introduce some background material needed in the rest of the paper. First of all, we introduce a suitable notation to describe the population models we are interested in. This is done in Section 3.1. In particular, models will depend parametrically on the (initial) population size, so that we are in fact defining a sequence of models. Then, in Section 3.2, we present the classic fluid limit theorem, which proves convergence of a sequence of stochastic models to the solution of a differential equation. In Section 3.3, instead, we describe fast simulation, a consequence of the fluid limit theorem which connects the system view of the fluid limit to the single agent view, providing a description of single agent behaviour in the limit. Finally, in Section 3.4, we recall the basics of Continuous Stochastic Logic (CSL) model checking.
3.1 Modelling Language
In the following, we will describe a basic language for CTMC, in order to fix the notation. We have in mind population models, where a population of agents, possibly of different kinds, interact together through a finite set of possible actions. To avoid notational overhead, we assume that the number of agents is constant during the simulation, and equal to N. Furthermore, we do not explicitly distinguish between different classes of agents in the notation.
In particular, let Y^(N)_i ∈ S denote the state of agent i, for i = 1, ..., N, where S = {1, ..., n} is the state space of each agent. Multiple classes of agents can be represented in this way by suitably partitioning S into subsets, and allowing state changes only within a single class. Notice that we made explicit the dependence on N, the total population size.
A configuration of the system is thus represented by the tuple (Y^(N)_1, ..., Y^(N)_N). When dealing with population models, it is customary to assume that single agents in the same internal state cannot be distinguished, hence we can move from the agent representation to the system representation by introducing variables counting how many agents are in each state. With this objective, define

X^(N)_j = Σ_{i=1}^{N} 1{Y^(N)_i = j},   j = 1, ..., n,

so that the system can be represented by the vector X^(N) = (X^(N)_1, ..., X^(N)_n), whose dimension n is independent of N. The domain of each variable X^(N)_j is obviously {0, ..., N}.
We will describe the evolution of the system by a set of transition rules at this global level. This simplifies the description of synchronous interactions between agents. The evolution from the perspective of a single agent will be reconstructed from the system-level dynamics. In particular, we assume that X^(N) is a CTMC (Continuous-Time Markov Chain), with dynamics described by a fixed number of transitions, collected in the set 𝒯^(N). Each transition η ∈ 𝒯^(N) is defined by a multi-set of update rules R_η and by a rate function r^(N)_η. The fact that R_η is a multi-set allows us to model events in which agents in the same state synchronise. R_η contains update rules of the form i → j, where i, j ∈ S. Each rule specifies that an agent changes state from i to j. Let m_η(i → j) denote the multiplicity of the rule i → j in R_η. We assume that R_η is independent of N, so that each transition involves a finite and fixed number of individuals. Given a multi-set of update rules R_η, we can define the update vector v_η in the following way:

v_η = Σ_{(i → j) ∈ R_η} m_η(i → j) (e_j − e_i),

where e_i is the vector equal to one in position i and zero elsewhere. Hence, each transition changes the state from x to x + v_η. The rate function r^(N)_η depends on the current state of the system, and specifies the speed of the corresponding transition. It is assumed to be equal to zero if there are not enough agents available to perform the transition. Furthermore, it is required to be Lipschitz continuous. We indicate such a model by 𝒳^(N) = (X^(N), 𝒯^(N), x^(N)_0), where x^(N)_0 is the initial state of the model.
Given a model 𝒳^(N), it is straightforward to construct the CTMC associated with it, exhibiting its infinitesimal generator matrix. First, its state space is 𝒮^(N) ⊆ {0, ..., N}^n. The infinitesimal generator matrix Q^(N), instead, is the |𝒮^(N)| × |𝒮^(N)| matrix defined, for x ≠ x', by

Q^(N)_{x,x'} = Σ { r^(N)_η(x) | η ∈ 𝒯^(N), x' = x + v_η }.

We will indicate the state of such a CTMC at time t by X^(N)(t).
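The global-transition format just described maps directly onto a small data structure. The following sketch (in Python, with names of our own choosing, not taken from the paper) stores each transition as an update vector paired with a rate function, and enumerates the jumps enabled in a state; the off-diagonal generator entries for a state are exactly the rates returned:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Transition:
    """A global transition: an update vector v_eta and a rate function r_eta(x)."""
    update: Tuple[int, ...]                    # one entry per agent state
    rate: Callable[[Tuple[int, ...]], float]   # zero when the transition is not enabled

def enabled_jumps(x: Tuple[int, ...], transitions: List[Transition]):
    """Enumerate (successor state, rate) pairs out of state x."""
    jumps = []
    for tr in transitions:
        r = tr.rate(x)
        if r > 0:
            succ = tuple(xi + vi for xi, vi in zip(x, tr.update))
            jumps.append((succ, r))
    return jumps

# Two-state toggle example (states: 0 = idle, 1 = busy), illustrative rates.
k_on, k_off = 2.0, 1.0
transitions = [
    Transition(update=(-1, +1), rate=lambda x: k_on * x[0]),   # idle -> busy
    Transition(update=(+1, -1), rate=lambda x: k_off * x[1]),  # busy -> idle
]
print(enabled_jumps((3, 1), transitions))   # [((2, 2), 6.0), ((4, 0), 1.0)]
```

Note that both update multisets leave the total population unchanged, consistent with the constant-population assumption above.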
We introduce now the main running example of the paper: we will consider a model of a simple client-server system, in which a pool of clients submits queries to a group of servers, waiting for a reply. In particular, the client asks for information from a server and waits for it to reply. It can time-out if too much time passes. The server, instead, after receiving a request does some processing and then returns the answer. It can time-out while processing and while it is ready to reply. After an action, it always logs data. The client and server agents are visually depicted in Figure 1. The global system is described by the following 8 variables:
4 variables for the client states: , , , and .
4 variables for the server states: , , , and .
Furthermore, there are 9 transitions in total, corresponding to all possible arrows of Figure 1. We list them below, stressing that synchronization between clients and servers has a rate computed using the minimum, in the PEPA style . With e_s we denote a vector of length 8 which is equal to 1 in the component corresponding to state s and zero elsewhere.
request: , ;
reply: , ;
timeout (client): , ;
recover: , ;
think: , ;
logging: , ;
process: , ;
timeout (server processing): , ;
timeout (server replying): , ;
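As an illustration, the nine transitions above can be encoded as rate functions over the 8 counting variables. The state names, the rate constants and the exact functional forms below are our own assumptions, made for the sake of a runnable sketch; the text only prescribes that the synchronising actions (request and reply) use the minimum of the populations involved, in the PEPA style:

```python
# State vector indices: clients (request, wait, recover, think),
# servers (idle, process, reply, log). All names are illustrative.
C_RQ, C_W, C_RC, C_T, S_I, S_P, S_R, S_L = range(8)

# Illustrative rate constants (not from the paper).
k_r, k_rep, k_to, k_rec, k_t, k_log, k_p, k_tp, k_tr = (
    1.0, 2.0, 0.2, 0.5, 0.8, 4.0, 1.5, 0.1, 0.1)

def rates(x):
    """Global transition rates in state x; the two synchronising actions
    take the minimum of the two populations involved (PEPA style)."""
    return {
        "request":        k_r  * min(x[C_RQ], x[S_I]),
        "reply":          k_rep * min(x[C_W], x[S_R]),
        "timeout_client": k_to * x[C_W],
        "recover":        k_rec * x[C_RC],
        "think":          k_t  * x[C_T],
        "logging":        k_log * x[S_L],
        "process":        k_p  * x[S_P],
        "timeout_proc":   k_tp * x[S_P],
        "timeout_reply":  k_tr * x[S_R],
    }

x0 = (10, 0, 0, 0, 5, 0, 0, 0)   # 10 requesting clients, 5 idle servers
print(rates(x0)["request"])       # 1.0 * min(10, 5) = 5.0
```

Each rate is zero whenever a required population is empty, matching the requirement that rates vanish when not enough agents are available.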
The system-level models we have defined depend on the total population N and on the ratio between servers and clients, which is specified by the initial conditions. Increasing the total population (keeping the client-server ratio fixed), we obtain a sequence of models, and we are interested in their limit behaviour, for N going to infinity.
In order to compare the models of such a sequence, we will normalize them to the same scale, dividing each variable by N and thus introducing the normalized variables X̂^(N) = X^(N)/N. In the case of a constant population, normalised variables are usually referred to as the occupancy measure, as they represent the fraction of agents in each state. Update vectors are scaled correspondingly, i.e. dividing them by N. Furthermore, we will also require a proper scaling (in the limit) of the rate functions of the normalized models. More precisely, let 𝒳^(N) be the N-th non-normalized model and 𝒳̂^(N) the corresponding normalized model. We require that:
initial conditions scale appropriately: x̂^(N)_0 = x^(N)_0 / N;
for each transition η of the non-normalized model, with rate function r^(N)_η(X), we let r̂^(N)_η(X̂) = r^(N)_η(N X̂) be the rate function expressed in the normalised variables (i.e. after the change of variables X̂ = X/N). The corresponding transition in the normalized model is η, with update vector equal to v_η / N. We assume that there exists a bounded and Lipschitz continuous function f_η on normalized variables, defined on a set E containing the domains of all the 𝒳̂^(N), and independent of N, such that (1/N) r̂^(N)_η(x̂) → f_η(x̂) uniformly on E.
In accordance with the previous subsection, we will denote the state of the CTMC of the N-th non-normalized (resp. normalized) model at time t as X^(N)(t) (resp. X̂^(N)(t)).
Consider again the running example. If we want to scale the model with respect to the scaling parameter N, we can increase the initial population of clients and servers by a factor N (hence keeping the client-server ratio constant), similarly to . The condition on rates, in this case, automatically holds due to their (piecewise) linear nature. For non-linear rate functions, the convergence of rates can usually be enforced by properly scaling the parameters with respect to the total population N.
3.2 Deterministic limit theorem
In order to present the "classic" deterministic limit theorem, we need to introduce a few more concepts needed to construct the limit ODE. Consider a sequence 𝒳̂^(N) of normalized models and let v_η be the (non-normalised) update vectors. The drift F^(N) of 𝒳̂^(N) is defined as

F^(N)(x̂) = Σ_η (1/N) v_η r̂^(N)_η(x̂).

Furthermore, let f_η, η ∈ 𝒯̂, be the limit rate functions of the transitions of 𝒳̂^(N). We define the limit drift of the model as

F(x̂) = Σ_η v_η f_η(x̂).

It is easily seen that F^(N)(x̂) → F(x̂) uniformly.
Theorem 3.1 (Deterministic approximation). Let the sequence X̂^(N)(t) of Markov processes and x(t) be defined as before, where x(t) is the solution of the fluid ODE dx/dt = F(x), and assume that there is some point x_0 such that X̂^(N)(0) → x_0 in probability. Then, for any finite time horizon T < ∞, it holds that:

P( sup_{0 ≤ t ≤ T} || X̂^(N)(t) − x(t) || > ε ) → 0 as N → ∞, for every ε > 0.
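Concretely, the limit drift defines an ODE dx/dt = F(x) that can be integrated by any standard scheme. The following sketch uses the two-state toggle model of our own invention (not the client-server example), with limit rates f_on(x̂) = k_on·x̂_0 and f_off(x̂) = k_off·x̂_1, and a fixed-step RK4 integrator:

```python
def fluid_ode_step(F, x, h):
    """One classical RK4 step for dx/dt = F(x)."""
    k1 = F(x)
    k2 = F([xi + h/2 * ki for xi, ki in zip(x, k1)])
    k3 = F([xi + h/2 * ki for xi, ki in zip(x, k2)])
    k4 = F([xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h/6 * (a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# Occupancy measure (x0, x1); drift F = v_on * f_on + v_off * f_off,
# with update vectors v_on = (-1, +1) and v_off = (+1, -1).
k_on, k_off = 2.0, 1.0
def F(x):
    f_on, f_off = k_on * x[0], k_off * x[1]
    return [-f_on + f_off, f_on - f_off]

x, h = [1.0, 0.0], 0.01
for _ in range(2000):          # integrate up to T = 20
    x = fluid_ode_step(F, x, h)
print(x)                       # approaches the equilibrium (1/3, 2/3)
```

The drift components sum to zero, so the total occupancy is conserved exactly by the integrator, mirroring the constant-population assumption.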
3.3 Fast simulation
We now turn our attention back to a single individual in the population. Even if the system-level dynamics, in the limit of a large population, becomes deterministic, the dynamics of a single agent remains a stochastic process. However, the fluid limit theorem implies that the dynamics of a single agent, in the limit, becomes essentially dependent on the other agents only through the global system state. This asymptotic decoupling allows us to find a simpler Markov chain for the evolution of the single agent. This result is often known in the literature under the name of fast simulation [23, 9].
To explain this point formally, let us focus on a single individual Y^(N)_h(t), which is a process on the state space S, conditional on the global state of the population X̂^(N)(t). Let Q^(N)(x̂) be the infinitesimal generator matrix of Y^(N)_h, described as a function of the normalized state x̂ of the population, i.e. Q^(N)_{i,j}(x̂) is the rate at which the tagged agent jumps from state i to state j, given that the population is in normalized state x̂.

We stress that this is the exact jump intensity for Y^(N)_h, conditional on X̂^(N)(t), and that this process is not independent of X̂^(N)(t). In fact, without conditioning on X̂^(N)(t), Y^(N)_h is not a Markov process. This means that in order to capture its evolution in a Markovian setting, one has to consider the joint Markov chain (Y^(N)_h(t), X̂^(N)(t)).
Consider the running example, and suppose we want to construct the CTMC for a single client. For this purpose, we have to extract from the specification of global transitions a set of local transitions for the client. The state space of a client will consist of the four client states shown in Figure 1.
Then, we need to define its rate matrix. In order to do this, we need to take into account all global transitions involving a client, and then extract the rate at which a specific client can perform such a transition. As a first example, consider the think transition, changing the state of a client from the thinking to the requesting state. Its global rate is proportional to X_t, the number of thinking clients. As we have X_t clients in the thinking state, the rate at which a specific one will perform a think transition is the global rate divided by X_t, i.e. the rate constant itself. Hence, we just need to divide the global rate of observing a think transition by the total number of clients in the thinking state. Notice that, as we are assuming that one specific client is in the thinking state, then X_t ≥ 1, hence we are not dividing by zero.
Consider now a reply transition. In this case, the transition involves a server in the replying state and a client in the waiting state. The global rate is k_rep · min(X_w, X_r), where X_w counts the waiting clients and X_r the replying servers (in the non-normalized model with total population N). Dividing this rate by X_w, we obtain k_rep · min(X_w, X_r)/X_w, which is defined for X_w > 0. If we switch to normalised variables, we obtain a similar expression, k_rep · min(x̂_w, x̂_r)/x̂_w, which is independent of N. However, in taking the limit we must be careful: even if in the non-normalized model X_w (and hence x̂_w) is always non-zero when a specific client is in the waiting state, this may not be true in the limit: if only one client is waiting, then the limit fraction of waiting clients is zero (just take the limit of 1/N). Hence, we need to take care of boundary conditions, guaranteeing that the single-agent rate is defined also in these circumstances. In this case, we can assume that the rate is zero if x̂_r is zero (whatever the value of x̂_w), and that the rate is k_rep if x̂_w is zero but x̂_r > 0.
In order to treat the previous set of cases in a homogeneous way, we make the following assumption about rates:
Let η be a transition such that its update rule set R_η contains the rule i → j, with multiplicity m_η(i → j). The rate r_η is single-agent-i-compatible if there exists a Lipschitz continuous function f^{i→j}_η on normalized variables such that the limit rate on normalized variables can be factorised as f_η(x̂) = x̂_i · f^{i→j}_η(x̂). A transition is single-agent compatible if and only if it is single-agent-i-compatible for any state i appearing in the left-hand side of an update rule.
Hence, the limit rate of observing a transition from i to j for a specific agent in state i is m_η(i → j) · f^{i→j}_η(x̂), where the factor m_η(i → j) comes from the fact that the agent is one out of the agents in state i, m_η(i → j) of which change state from i to j due to η. (The factor stems from the following simple probabilistic argument: if we choose at random m agents out of k, then the probability of selecting a specific agent is m/k.)
Then, assuming all transitions are single-agent compatible, we can define the limit single-agent rate matrix as

q_{i,j}(x̂) = Σ_η m_η(i → j) f^{i→j}_η(x̂), for i ≠ j.

It is then easy to check that Q^(N)_{i,j}(x̂) converges to q_{i,j}(x̂) as N → ∞.
In the following, we fix an integer k and let (Y^(N)_1(t), ..., Y^(N)_k(t)) be the CTMC tracking the state of k selected agents among the population, with state space S^k. Notice that k is fixed and independent of N, so that we will track k individuals embedded in a population that can be very large.
Let x(t) be the solution of the fluid ODE, and assume the hypotheses of Theorem 3.1 hold. Consider now z^(N)(t) and z(t), the time-inhomogeneous CTMCs on S^k defined by the following infinitesimal generators: each of the k agents jumps independently of the others, from state i to state j, with time-dependent rate Q^(N)_{i,j}(x(t)) for z^(N)(t), and q_{i,j}(x(t)) for z(t).
Notice that, while (Y^(N)_1(t), ..., Y^(N)_k(t)) describes exactly the evolution of the k agents, z^(N)(t) and z(t) do not. In fact, they are CTMCs in which the k agents evolve independently, each one with the same infinitesimal generator, depending on the global state of the system via the fluid limit.
However, the following theorem can be proved :
Theorem 3.2 (Fast simulation theorem).
For any fixed k and any finite time horizon T < ∞, the process (Y^(N)_1(t), ..., Y^(N)_k(t)) converges in distribution to z(t), uniformly on [0, T], as N → ∞.
This theorem states that, in the limit of an infinite population, each fixed set of k agents will behave independently, sensing only the mean state of the global system, described by the fluid limit x(t). Furthermore, those agents will evolve independently, as if there were no synchronisation between them. This asymptotic decoupling of the system, holding for any fixed set of agents, is also known in the literature under the name of propagation of chaos . In particular, this holds whether we define the rates of the limit CTMC by the single-agent rates for population N (giving z^(N)(t)) or by the limit rates (giving z(t)). Note that, when the CTMC has density dependent rates, then z^(N)(t) = z(t), as their infinitesimal generators coincide.
We stress once again that the process Y^(N)_h(t) is not a Markov process. It becomes Markov when considered together with X̂^(N)(t). This can be properly understood by observing that it is the projection of the full Markov process (Y^(N)_1(t), ..., Y^(N)_N(t)) on one coordinate, and recalling that a projection of a Markov process need not be Markov (intuitively, we can throw away some relevant information about the state of the process). However, being the projection of a Markov process, the probability distribution of Y^(N)_h(t) at each time t is perfectly defined. Nevertheless, its non-Markovian nature has consequences for what concerns reachability probabilities and the satisfiability of CSL formulae.
Consider again the client-server example, and focus on a single client. As said before, its state space consists of the four client states, and the non-null entries of the infinitesimal generator of the limit process z(t) are the single-agent rates derived above, evaluated along the fluid trajectory x(t) (with appropriate boundary conditions).
In Figure 2, we show a comparison of the transient probabilities for the approximating chain for a single client and the true transient probabilities, estimated by Monte Carlo sampling of the CTMC, for different population levels . As we can see, the approximation is quite precise already for .
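The transient probabilities of the approximating single-agent chain are obtained by integrating the forward Kolmogorov equations with the time-dependent generator Q(x(t)), alongside the fluid ODE itself. The sketch below does this for an illustrative SIS-style model of our own (names and rate constants are assumptions, chosen so that the tagged agent's rates genuinely depend on the fluid trajectory): a tagged susceptible agent is infected at rate k_i · x_I(t) and recovers at rate k_r.

```python
# Fast-simulation sketch: a tagged agent senses the population only
# through the fluid trajectory x_I(t). Illustrative SIS-style rates.
k_i, k_r = 3.0, 1.0           # infection / recovery rate constants
h, T = 1e-3, 10.0             # Euler step and time horizon

x_I = 0.2                     # fluid fraction of infected agents
p = [1.0, 0.0]                # tagged agent: P(susceptible), P(infected)
t = 0.0
while t < T:
    # Fluid ODE: dx_I/dt = k_i * (1 - x_I) * x_I - k_r * x_I
    x_I += h * (k_i * (1.0 - x_I) * x_I - k_r * x_I)
    # Forward Kolmogorov equation with time-dependent generator Q(x(t)):
    # the tagged susceptible is infected at rate k_i * x_I(t).
    dpS = -k_i * x_I * p[0] + k_r * p[1]
    p = [p[0] + h * dpS, p[1] - h * dpS]
    t += h

print(p)   # the two probabilities still sum to 1
```

At equilibrium the fluid fraction is 1 − k_r/k_i = 2/3, so the tagged agent's infection probability also settles near 2/3, illustrating how the single-agent chain inherits its rates from the fluid limit.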
Single-agent compatibility is not a very restrictive condition. However, there are cases in which it is not satisfied. One example is passive rates in PEPA . In this case, in fact, the rate of a synchronization between an active and a passive agent depends only on the number of active agents; in particular, it is independent of the exact number of passive agents. If we look at the rate for a single passive agent and normalise variables, we obtain a rate that approaches infinity as the fraction of passive agents goes to zero (for a fixed fraction of active agents). Hence, it cannot be extended to a Lipschitz continuous function. Indeed, if we look at a single passive agent while the fraction of passive agents vanishes but the fraction of active agents does not, the speed at which it changes state is in fact infinite: in the limit, its state becomes vanishing.
The hypothesis of constant population, i.e. the absence of birth and death, can be relaxed. The fluid approximation continues to work also in the presence of birth and death events, and so does the fast simulation theorem. In our framework, birth and death can be easily introduced by allowing rules of the form ∅ → j (for birth) and i → ∅ (for death). In terms of a single agent, death can be dealt with by adding a single absorbing state to its local state space S. Birth, instead, means that we can choose the time instant at which an agent enters the system (provided that there is a non-null rate for birth transitions at the chosen time).
Another solution would be to assume an infinite pool of agents, among which only finitely many can be alive, and the others are an infinite supply of "fresh souls". Even if this is plausible from the point of view of a global model, it creates problems from a single agent perspective (what is the rate of birth of a soul?). A solution can be to assume a large but finite pool of agents. But in this case birth becomes a passive action (and it introduces discontinuities in the model, even if in many cases one can guarantee to remain far away from the discontinuous boundary), hence we face the same issues discussed in Remark 3.1.
3.4 Continuous Stochastic Logic
In this section we consider labelled stochastic processes. A labelled stochastic process is a random process X(t), with state space S and a labelling function L : S → 2^P, associating with each state s ∈ S the subset L(s) of atomic propositions true in that state: each atomic proposition a ∈ P is true in s if and only if a ∈ L(s). We require that all subsets of paths considered are measurable. This condition will be satisfied by all subsets considered in the paper, as X(t) will always be either a CTMC, defined by an infinitesimal generator matrix (possibly depending on time), or the projection of a CTMC. From now on, we always assume we are working with labelled stochastic processes.
A path of X(t) is a sequence σ = s_0 t_0 s_1 t_1 s_2 ..., where t_i is the time spent in state s_i, such that the probability of going from s_i to s_{i+1} at the time of the jump is greater than zero. For CTMCs, this condition is equivalent to the corresponding rate being positive at the jump time. Denote with σ@t the state of σ at time t, with σ[i] the i-th state of σ, and with T(σ, i) the time of the i-th jump in σ.
A time-bounded CSL formula φ is defined by the following syntax:

φ ::= true | a | φ1 ∧ φ2 | ¬φ | P⋈p(X^[T0,T1] φ) | P⋈p(φ1 U^[T0,T1] φ2),

where a is an atomic proposition, ⋈ ∈ {<, ≤, >, ≥}, p ∈ [0, 1], and 0 ≤ T0 ≤ T1.
The satisfiability relation of φ with respect to a labelled stochastic process X(t), for a state s at time t, is given by the following rules:
s, t ⊨ a if and only if a ∈ L(s);
s, t ⊨ ¬φ if and only if s, t ⊭ φ;
s, t ⊨ φ1 ∧ φ2 if and only if s, t ⊨ φ1 and s, t ⊨ φ2;
s, t ⊨ P⋈p(X^[T0,T1] φ) if and only if P{σ | σ, t ⊨ X^[T0,T1] φ} ⋈ p;
s, t ⊨ P⋈p(φ1 U^[T0,T1] φ2) if and only if P{σ | σ, t ⊨ φ1 U^[T0,T1] φ2} ⋈ p;
σ, t ⊨ X^[T0,T1] φ if and only if the first jump of σ after t happens at a time t1 ∈ [t + T0, t + T1] and σ@t1, t1 ⊨ φ;
σ, t ⊨ φ1 U^[T0,T1] φ2 if and only if there exists t̄ ∈ [t + T0, t + T1] s.t. σ@t̄, t̄ ⊨ φ2 and, for all t' ∈ [t, t̄), σ@t', t' ⊨ φ1.
Notice that we are considering a fragment of CSL without the steady state operator and allowing only time-bounded properties. This last restriction is connected with the nature of convergence theorems 3.1 and 3.2, which hold only on finite time horizons. However, see Remark 6.3 for possible relaxations of this restriction.
Model checking of a next CSL formula P⋈p(X^[T0,T1] φ) is usually performed by computing the next-state probability via an integral, and then comparing the value so obtained with the threshold p. Model checking of an until CSL formula P⋈p(φ1 U^[T0,T1] φ2) in a time-homogeneous CTMC, instead, can be reduced to the computation of two reachability problems, which themselves can be solved by transient analysis . In particular, consider the sets of states U = {s | s ⊨ ¬φ1} and G = {s | s ⊨ φ2}, and compute the probability π1(s, s', T0) of going from state s to a state s' ∉ U in T0 time units, in the CTMC in which all U-states are made absorbing. Furthermore, consider the modified CTMC in which all U and G states are made absorbing, and denote by π2(s', G, T1 − T0) the probability of going from a state s' to a state in G in T1 − T0 units of time in such a CTMC. Then the probability of the until formula in state s can be computed as Σ_{s' ∉ U} π1(s, s', T0) · π2(s', G, T1 − T0). The probabilities π1 and π2 can be computed using standard methods for transient analysis (e.g. by uniformisation  or by solving the Kolmogorov equations ). Then, to determine the truth value of the formula in state s, one just has to solve the inequality Σ_{s' ∉ U} π1(s, s', T0) · π2(s', G, T1 − T0) ⋈ p. The truth value of a generic CSL formula can therefore be computed recursively on the structure of the formula.
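The reduction above can be sketched in a few lines for a small time-homogeneous chain. Here transient probabilities are computed by truncated uniformisation, and the goal states are made absorbing so that the transient probability of sitting in the goal set equals the reachability probability (the three-state chain, its rates and the time bound are our own toy example, not from the paper):

```python
import math

def transient(Q, p0, t, eps=1e-12):
    """Transient distribution p0 * exp(Q t) by uniformisation."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0
    # Uniformised DTMC: P = I + Q / lam
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    p, v = [0.0] * n, list(p0)
    k, poisson, total = 0, math.exp(-lam * t), 0.0
    while total < 1.0 - eps:          # accumulate Poisson-weighted DTMC steps
        p = [pi + poisson * vi for pi, vi in zip(p, v)]
        total += poisson
        k += 1
        poisson *= lam * t / k
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

def make_absorbing(Q, states):
    """Zero out the rows of the given states."""
    return [[0.0] * len(Q) if i in states else row[:] for i, row in enumerate(Q)]

# Toy 3-state chain: 0 --1.0--> 1 --2.0--> 2; state 2 is the goal set G,
# and there are no unsafe states, so a single transient analysis suffices.
Q = [[-1.0, 1.0, 0.0], [0.0, -2.0, 2.0], [0.0, 0.0, 0.0]]
QG = make_absorbing(Q, {2})
p = transient(QG, [1.0, 0.0, 0.0], t=3.0)
print(round(p[2], 4))   # P(reach G within 3 time units from state 0) ~ 0.9029
```

For an until formula with non-trivial unsafe states, the same `transient` routine would be called twice, once with the U-states absorbing and once with both U and G absorbing, and the results combined as described above.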
4 Next State Probability
In this section, we start the presentation of the algorithmic procedures that underlie the CSL model checking algorithm. We will focus here on next state probabilities for single agents (or a fixed set of agents), in a population of growing size. In particular, we will show how to compute the probability that the next state in which the agent jumps belongs to a given set of goal states $G$, constraining the jump to happen between time $t_0$ and $t_0 + T$, where $t_0$ is the current time. This is clearly at the basis of the computation of the probability of next path formulae. More specifically, we provide algorithms for ICTMC (hence for the limit model $z(t)$), focussing on two versions of the next state probability: the case in which the set $G$ is constant, and the case in which the set $G(t)$ depends on time (i.e. a state may belong to $G(t)$ or not, depending on the time $t$).
Let $Z(t)$ be a CTMC with state space $S$ and infinitesimal generator matrix $Q(t)$.
Let $G \subseteq S$. The constant-set next state probability $P_{\mathrm{next}}(Z, G, t_0, t_0+T)[s]$ is the probability of the set of trajectories of $Z$ jumping into a state in $G$, starting at time $t_0$ in state $s$, within time $t_0+T$. $P_{\mathrm{next}}(Z, G, t_0, t_0+T)$ is the next state probability vector on $S$.
Let $G: \mathbb{R}_{\ge 0} \to 2^S$ be a time-dependent set, identified with its indicator function $\mathbf{1}_G(s,t)$ (i.e. $G(t) = \{s \in S \mid \mathbf{1}_G(s,t) = 1\}$ is the goal set at time $t$).
The time-varying-set next state probability $P_{\mathrm{next}}(Z, G(\cdot), t_0, t_0+T)[s]$ is the probability of the set of trajectories of $Z$ jumping into a state in $G(t)$ at a time $t \in [t_0, t_0+T]$, starting at time $t_0$ in state $s$.
The interest in time-varying sets is intimately connected with CSL model checking. In fact, the truth value of a CSL formula $\varphi$ in a state $s$ for the (non-Markov) stochastic process $Z^{(N)}(t)$ depends on the initial time $t_0$ at which we start evaluating the formula. This is because $Z^{(N)}$ depends on time via the state of the full population process. Furthermore, time-dependence of truth values of CSL formulae manifests also for the limit process $z(t)$, which is a time-inhomogeneous Markov Process. Therefore, in the computation of the probability of the next operator, which can be tackled with the methods of this section, we need to consider time-varying sets. As a matter of fact, we will see that dealing with time-varying sets (like those obtained by solving the inequality $P_{\mathrm{next}}(s, t_0) \bowtie p$ for a formula $\mathcal{P}_{\bowtie p}(X^{[T_1,T_2]}\varphi)$) requires us to impose some additional regularity conditions on the rate functions of $z$ and on the time-dependence of goal sets.
In the following sections, we will first deal with the computation of next state probabilities for a generic ICTMC $Z(t)$ and then study the relationship between those probabilities for $Z^{(N)}$ and $z$.
4.1 Computing next-state probability
Consider a generic ICTMC $Z(t)$ with state space $S$ and generator matrix $Q(t)$, and focus for the moment on a constant set $G \subseteq S$. For any fixed $t_0$, the probability that $Z$'s next jump happens at a time $t \le t_0 + T$ and ends in a state of $G$, given that $Z$ is in state $s$ at time $t_0$, is given by the following integral [39, 25]
$$P_{\mathrm{next}}(Z, G, t_0, t_0+T)[s] = \int_{t_0}^{t_0+T} q_{s,G}(t)\, e^{-\Lambda(t_0,t)}\, dt, \qquad (4)$$
where $\Lambda(t_0,t) = \int_{t_0}^{t} \lambda_s(\tau)\, d\tau$, with $\lambda_s(t) = -q_{s,s}(t)$, is the cumulative exit rate of state $s$ from time $t_0$ to time $t$, and $q_{s,G}(t) = \sum_{s' \in G} q_{s,s'}(t)$ is the rate of jumping from $s$ to a state in $G$ at time $t$.
Equation (4) holds for the following reason. Let $E_t$ be the event that $Z$ remains in state $s$ throughout $[t_0, t]$ and then jumps into a state of $G$ in the infinitesimal interval $[t, t+dt]$. The probability of remaining in $s$ during $[t_0, t]$ is $e^{-\Lambda(t_0,t)}$, and the probability of then jumping into $G$ within $[t, t+dt]$ is $q_{s,G}(t)\,dt$. We are interested in the probability of the union of the (disjoint) events $E_t$ for $t \in [t_0, t_0+T]$, which is obtained by integrating $q_{s,G}(t)\, e^{-\Lambda(t_0,t)}$ over this interval, giving (4).
In order to compute $P_{\mathrm{next}}(Z, G, t_0, t_0+T)[s]$ for a given $t_0$, we can numerically compute the integral, or transform it into a differential equation, and integrate the so-obtained ODE with standard numerical methods. This simplifies the treatment of the nested integral involved in the computation of $\Lambda(t_0,t)$. More specifically, we can introduce two variables, $L(t)$ and $P(t)$, initialise $L(t_0) = 0$ and $P(t_0) = 0$, and then integrate the following two ODEs from time $t_0$ to time $t_0+T$:
$$\frac{dL}{dt} = \lambda_s(t), \qquad \frac{dP}{dt} = q_{s,G}(t)\, e^{-L(t)}, \qquad (5)$$
so that $P(t_0+T) = P_{\mathrm{next}}(Z, G, t_0, t_0+T)[s]$.
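As a sketch, the two ODEs can be integrated with a standard fourth-order Runge-Kutta scheme; the rate functions are assumed to be supplied as Python callables, and the function name and step count are our own choices.

```python
import math

def next_state_probability(lam, q_G, t0, T, steps=10_000):
    """P_next for a fixed initial time t0: integrate dL/dt = lam(t),
    dP/dt = q_G(t) * exp(-L) from t0 to t0+T with a fixed-step RK4 scheme.
    lam(t): exit rate of state s;  q_G(t): total rate from s into G."""
    def f(t, y):
        L, P = y
        return (lam(t), q_G(t) * math.exp(-L))
    h = T / steps
    y = (0.0, 0.0)                     # L(t0) = 0, P(t0) = 0
    t = t0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
        k3 = f(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
        k4 = f(t + h, (y[0] + h*k3[0], y[1] + h*k3[1]))
        y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
        t += h
    return y[1]                        # P(t0 + T)
```

With constant rates $\lambda_s \equiv \lambda$ and $q_{s,G} \equiv q$, the result can be checked against the closed form $(q/\lambda)(1 - e^{-\lambda T})$.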
However, for CSL model checking purposes, we need to compute the next-state probability as a function of the initial time $t_0$: $P_{\mathrm{next}}(s, t_0) = P_{\mathrm{next}}(Z, G, t_0, t_0+T)[s]$. One way of doing this is to compute the integral (4) for each $t_0$ of interest. A better approach is to use the differential formulation of the problem, and define a set of ODEs with the initial time $t_0$ as independent variable. First, observe that the derivative of $P_{\mathrm{next}}(s, t_0)$ with respect to $t_0$ is
$$\frac{dP_{\mathrm{next}}(s,t_0)}{dt_0} = q_{s,G}(t_0+T)\, e^{-\Lambda(t_0,t_0+T)} - q_{s,G}(t_0) + \lambda_s(t_0)\, P_{\mathrm{next}}(s,t_0).$$
Consequently, we can compute the next-state probability as a function of $t_0$ by solving the following set of ODEs, with $L(t_0) = \Lambda(t_0, t_0+T)$:
$$\frac{dP}{dt_0} = q_{s,G}(t_0+T)\, e^{-L(t_0)} - q_{s,G}(t_0) + \lambda_s(t_0)\, P(t_0), \qquad \frac{dL}{dt_0} = \lambda_s(t_0+T) - \lambda_s(t_0). \qquad (6)$$
The initial conditions $P(0)$ and $L(0)$ are obtained by a single integration of the fixed-initial-time system (5) from time $0$ to time $T$.
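A minimal sketch of the integration of this system in $t_0$, using a plain Euler scheme (in practice a higher-order stepper is preferable); the initial values $P(0)$ and $L(0)$ would come from one forward integration of the fixed-initial-time ODEs, and are here passed in as arguments. Function names and step count are our own choices.

```python
import math

def next_prob_vs_initial_time(lam, q_G, L0, P0, t0_max, T, steps=20_000):
    """Integrate the ODE system in the initial time t0:
        dP/dt0 = q_G(t0+T) e^{-L} - q_G(t0) + lam(t0) P
        dL/dt0 = lam(t0+T) - lam(t0)
    from t0 = 0 up to t0 = t0_max, starting from L(0) = Lambda(0, T)
    and P(0) = P_next at initial time 0. Returns P(t0_max)."""
    h = t0_max / steps
    L, P, t0 = L0, P0, 0.0
    for _ in range(steps):
        dP = q_G(t0 + T) * math.exp(-L) - q_G(t0) + lam(t0) * P
        dL = lam(t0 + T) - lam(t0)
        P += h * dP                    # one explicit Euler step
        L += h * dL
        t0 += h
    return P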
We turn now to discuss the case of a time-varying next-state set $G(t)$. In this case, the only difference with respect to the constant-set case is that the function $q_{s,G(t)}(t)$ is piecewise continuous, rather than continuous. In fact, each time a state gains or loses membership of $G(t)$, the range of the sum defining $q_{s,G(t)}(t)$ changes, and a discontinuity can be introduced. However, as long as these discontinuities constitute a set of measure zero (for instance, they are finite in number), this is not a problem: the integral (4) is defined and absolutely continuous, and so is the solution of the set of ODEs (6), even though the functions involved are discontinuous with respect to time. It follows that the method for computing the next-state probability for constant sets works also for time-varying sets.
Now, if we want to use this algorithm in a model checking routine, we need to be able also to solve the equation $P_{\mathrm{next}}(s, t_0) = p$, for $p \in [0,1]$ and each state $s$. In particular, for obvious computability reasons, we want the number of solutions of this equation to be finite. This is unfortunately not true in general, as even a smooth function can be equal to zero on an uncountable and nowhere dense set of Lebesgue measure 0 (for instance, on the Cantor set).
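When the number of crossings is indeed finite, they can be located numerically by a sign-change scan over a grid followed by bisection. A sketch follows; the grid size, tolerance and function name are our own arbitrary choices, and the scan can of course miss pairs of crossings that fall inside a single grid cell.

```python
def threshold_crossings(f, p, a, b, grid=1000, tol=1e-10):
    """Locate solutions of f(t) = p on [a, b], assuming finitely many
    crossings: scan a uniform grid for sign changes of f - p, then
    refine each bracketed crossing by bisection."""
    h = (b - a) / grid
    roots = []
    t_prev, v_prev = a, f(a) - p
    for i in range(1, grid + 1):
        t, v = a + i * h, f(a + i * h) - p
        if v_prev == 0.0:               # exact hit on a grid point
            roots.append(t_prev)
        elif v_prev * v < 0:            # sign change: bisect the bracket
            lo, hi, vlo = t_prev, t, v_prev
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                vm = f(mid) - p
                if vlo * vm <= 0:
                    hi = mid
                else:
                    lo, vlo = mid, vm
            roots.append(0.5 * (lo + hi))
        t_prev, v_prev = t, v
    return roots
```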
Consequently, we have to introduce some restrictions on the class of functions that we can use. In particular, we will require that the rate functions of $Z^{(N)}$ and of $z$ are piecewise real analytic functions.
4.1.1 Piecewise real analytic functions
A function $f: U \to \mathbb{R}$, with $U$ an open subset of $\mathbb{R}$, is said to be analytic in $U$ if and only if for each point $t_0$ of $U$ there is an open neighbourhood of $t_0$ in which $f$ coincides with its Taylor series expansion around $t_0$. Hence, $f$ is locally a power series. By a piecewise analytic function, we mean a function $f$ from an interval $I \subseteq \mathbb{R}$ to $\mathbb{R}$ such that there exist finitely many disjoint open intervals $I_1, \ldots, I_k$, with $\bar I = \bigcup_j \bar I_j$, such that $f$ is analytic in each $I_j$. A similar definition holds for functions from $\mathbb{R}^n$ to $\mathbb{R}$, considering their multi-dimensional Taylor expansion.
Analytic functions are a class of functions closed under addition, product, composition, division (by non-vanishing analytic functions), differentiation and integration. Piecewise analytic functions also satisfy these closure properties, by considering the intersections of their analytic sub-domains. Many common functions are analytic: polynomials, the exponential, the logarithm, sine, cosine. Using the previous closure properties, one can show that most of the functions we work with in practice are analytic.
Analytic functions have two additional properties that make them particularly suitable in this context:
The zeros of an analytic function $f$ in $U$, provided $f$ is not the constant function zero, are isolated. In particular, the number of zeros of $f$ in any compact subset of $U$ is finite. This is true also for the derivatives of any order of the function $f$.
If $f(x,t)$ is analytic in a set $U$, then the solution $x(t)$ of the Cauchy problem $\dot x = f(x,t)$, $x(t_0) = x_0$, in $U$ is also analytic (this is a consequence of the Cauchy-Kowalevski theorem).
This second property, in particular, guarantees that if the rate functions of $Z^{(N)}$ and $z$ are piecewise analytic, then all the probability functions computed by solving differential equations, like those introduced in the previous section, are also piecewise analytic.
Later on, we will need the following straightforward property of piecewise analytic functions:
Let $f: I \to \mathbb{R}$ be a piecewise analytic function, with $I$ a compact interval, and let $\mu$ be the Lebesgue measure. Given $p \in \mathbb{R}$, let $D_p \subseteq I$ be the set of all points $t \in I$ such that $f$ is not locally constantly equal to $p$ around $t$. Furthermore, let $T_p$ be the set of solutions of $f(t) = p$ and let $S_p = T_p \cap D_p$. Then $S_p$ is finite.