Towards Scalable Synthesis of Stochastic Control Systems
Abstract.
Formal control synthesis approaches over stochastic systems have received significant attention in the past few years, in view of their ability to provide provably correct controllers for complex logical specifications in an automated fashion. Examples of complex specifications of interest include properties expressed as formulae in linear temporal logic (LTL) or as automata on infinite strings. A general methodology to synthesize controllers for such properties resorts to symbolic abstractions of the given stochastic systems. Symbolic models are discrete abstractions of the given concrete systems with the property that a controller designed on the abstraction can be refined (or implemented) into a controller on the original system. Although the recent development of techniques for the construction of symbolic models has been quite encouraging, the general goal of formal synthesis over stochastic control systems is by no means solved. A fundamental issue with the existing techniques is the well-known “curse of dimensionality,” which is due to the need to discretize state and input sets and which results in exponential complexity in the number of state and input variables in the concrete system. In this work we propose a novel abstraction technique for incrementally stable stochastic control systems, which does not require state-space discretization but only input-set discretization, and which can potentially be more efficient (and thus more scalable) than existing approaches. We illustrate the effectiveness of the proposed approach by synthesizing a schedule for the coordination of two traffic lights under safety and fairness requirements for a road traffic model. Further, we argue that this 5-dimensional linear stochastic control system cannot be studied with existing approaches based on state-space discretization due to the very large number of generated discrete states.
1. Introduction
In the last decade, many techniques have been developed to provide controllers for control systems (both deterministic and, more recently, stochastic) in a formal and automated fashion against complex logical specifications. Examples of such specifications include properties expressed as formulae in linear temporal logic (LTL) or as automata on infinite strings [BK08]; as such, they are not tractable by classical techniques for control systems. A general scheme for providing such controllers leverages symbolic models of the original concrete systems. Symbolic models are discrete abstractions of the original systems in which each symbol represents an aggregate of continuous variables. When such symbolic models exist for the concrete systems, one can leverage the algorithmic machinery for automated synthesis of discrete models [dAHM01, MNA03] to automatically synthesize discrete controllers, which can then be refined to hybrid controllers for the original systems.
The construction of symbolic models for continuous-time non-probabilistic systems has been thoroughly investigated in the past few years. This includes results on the construction of approximately bisimilar symbolic models for incrementally stable control systems [MZ12, PGT08], switched systems [GPT09], and control systems with disturbances [PT09], non-uniform abstractions of nonlinear systems over a finite-time horizon [TI09], as well as the construction of sound abstractions based on the convexity of reachable sets [Rei11], feedback refinement relations [RWR15], robustness margins [LO14], and for unstable control systems [ZPJT12]. Recently, there have been some results on the construction of symbolic models for continuous-time stochastic systems, including the construction of finite Markov decision process approximations of linear stochastic control systems, albeit without providing a quantitative relationship between the abstract and concrete models [LAB09], approximately bisimilar symbolic models for incrementally stable stochastic control systems [ZEM14], stochastic switched systems [ZAG15], and randomly switched stochastic systems [ZA14], as well as sound abstractions for unstable stochastic control systems [ZEAL13].
Note that all the techniques provided in [MZ12, PGT08, GPT09, PT09, TI09, Rei11, RWR15, LO14, ZPJT12, LAB09, ZEM14, ZA14, ZEAL13] fundamentally rely on the discretization of continuous state sets. Therefore, they suffer severely from the curse of dimensionality induced by gridding those sets, which is especially problematic for models with high-dimensional state sets. In this work we propose a novel approach for the construction of approximately bisimilar symbolic models for incrementally stable stochastic control systems that requires no state-set discretization but only input-set discretization. Therefore, it can potentially be more efficient than the approaches proposed in [ZEM14] when dealing with higher-dimensional stochastic control systems. We provide a theoretical comparison with the approach in [ZEM14] and a simple criterion that helps to choose the more suitable of the two approaches (in terms of the sizes of the symbolic models) for a given stochastic control system. Another advantage of the technique proposed here is that it allows us to construct symbolic models with probabilistic output values, resulting in less conservative symbolic abstractions than those proposed in [ZEM14, ZA14, ZEAL13], which allow exclusively for non-probabilistic output values. We then explain how the proposed symbolic models with probabilistic output values can be used for synthesizing hybrid controllers enforcing logic specifications. The approaches proposed in [ZAG15] also provide symbolic models with probabilistic output values and without any state-set discretization. However, the results in [ZAG15] are for stochastic switched systems rather than the stochastic control systems considered in this work, and they do not provide any intuition behind the control synthesis over symbolic models with probabilistic output values.
The effectiveness of the proposed results is illustrated by synthesizing a schedule for the coordination of two traffic lights under some safety and fairness requirements for a model of road traffic, which is a 5-dimensional linear stochastic control system. We also show that this example cannot be handled by the approaches proposed in [ZEM14]. Although the main results proposed in this work are for incrementally stable stochastic control systems, similar results for incrementally stable non-probabilistic control systems can be recovered in the same framework by simply setting the diffusion term to zero.
Alongside the relationship with and extension of [ZAG15, ZEM14], this paper provides a detailed and extended elaboration of the results first announced in [ZTA14], including the proofs of the main results, a detailed discussion on how to deal with probabilistic output values together with a generalization of the corresponding result with no compactness requirement, and finally a new case study on road traffic control.
2. Stochastic Control Systems
2.1. Notation
The identity map on a set A is denoted by 1_A. The symbols ℕ, ℕ₀, ℤ, ℝ, ℝ⁺, and ℝ₀⁺ denote the set of natural, nonnegative integer, integer, real, positive, and nonnegative real numbers, respectively. The symbols I_n, 0_n, and 0_{n×m} denote the identity matrix, the zero vector, and the zero matrix in ℝ^{n×n}, ℝⁿ, and ℝ^{n×m}, respectively. Given a vector x ∈ ℝⁿ, we denote by x_i the i-th element of x, and by ‖x‖ the infinity norm of x, namely, ‖x‖ = max{|x₁|, …, |x_n|}, where |x_i| denotes the absolute value of x_i. Given a matrix M ∈ ℝ^{n×n}, we denote by Tr(M) the trace of M. We denote by λ_min(A) and λ_max(A) the minimum and maximum eigenvalues of a symmetric matrix A, respectively. The diagonal set Δ ⊆ ℝⁿ × ℝⁿ is defined as: Δ = {(x, x) : x ∈ ℝⁿ}.
The closed ball centered at with radius is defined by . A set is called a box if , where with for each . The span of a box is defined as . For a box and , define the approximation , where . Note that for any . Geometrically, for any with and , the collection of sets is a finite covering of , i.e. . We extend the notions of and approximation to finite unions of boxes as follows. Let , where each is a box. Define , and for any , define .
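The μ-grid construction above can be made concrete with a short numerical sketch. The helper below is illustrative only (its name, the clipping choice, and the placement of cell centres are our own assumptions, not the paper's exact definition): it places per-axis cell centres so that the closed μ/2-balls in the infinity norm around the returned points cover the box.

```python
import numpy as np

# A sketch of the mu-approximation of a box: per-axis cell centres whose
# mu/2-balls (in the infinity norm) cover the box. Illustrative helper.
def grid_box(lower, upper, mu):
    axes = []
    for lo, hi in zip(lower, upper):
        n = int(np.ceil((hi - lo) / mu))       # cells needed along this axis
        centres = lo + mu * (np.arange(n) + 0.5)
        axes.append(np.minimum(centres, hi))   # keep centres inside the box
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.stack([m.ravel() for m in mesh], axis=-1)

pts = grid_box([0.0, 0.0], [1.0, 2.0], 0.5)    # 2 x 4 = 8 grid points
```

Every point of the box is then within μ/2 (infinity norm) of some returned grid point, which is the finite-covering property used in the text.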
Given a measurable function f : ℝ₀⁺ → ℝⁿ, the (essential) supremum of f is denoted by ‖f‖_∞. A continuous function γ : ℝ₀⁺ → ℝ₀⁺ is said to belong to class K if it is strictly increasing and γ(0) = 0; γ is said to belong to class K∞ if γ ∈ K and γ(r) → ∞ as r → ∞. A continuous function β : ℝ₀⁺ × ℝ₀⁺ → ℝ₀⁺ is said to belong to class KL if, for each fixed s, the map β(r, s) belongs to class K with respect to r and, for each fixed nonzero r, the map β(r, s) is decreasing with respect to s and β(r, s) → 0 as s → ∞. We identify a relation R ⊆ A × B with the map R : A → 2^B defined by b ∈ R(a) iff (a, b) ∈ R. Given a relation R ⊆ A × B, R⁻¹ denotes the inverse relation defined by R⁻¹ = {(b, a) ∈ B × A : (a, b) ∈ R}. Given a finite sequence v, we denote by v^ω the infinite sequence generated by repeating v infinitely, i.e. v^ω = vvv⋯.
2.2. Stochastic control systems
Let be a probability space endowed with a filtration satisfying the usual conditions of completeness and right continuity [KS91, p. 48]. Let be a dimensional adapted Brownian motion.
Definition 2.1.
A stochastic control system is a tuple , where

is the state space;

is a bounded input set;

is a subset of the set of all measurable functions of time from to ;

satisfies the following Lipschitz assumption: there exist constants such that: for all and all ;

satisfies the following Lipschitz assumption: there exists a constant such that: for all .∎
A continuous-time stochastic process is said to be a solution process of if there exists satisfying the following stochastic differential equation (SDE) almost surely (a.s.)
(2.1)  dξ = f(ξ, υ) dt + σ(ξ) dW_t,
where f is known as the drift and σ as the diffusion. We also write to denote the value of the solution process at time under the input curve from the initial condition a.s., in which is a random variable that is measurable in . Let us emphasize that the solution process is unambiguously determined, since the assumptions on f and σ ensure its existence and uniqueness [Oks02, Theorem 5.2.1, p. 68].
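As a numerical complement to the formal statement, a sample path of an SDE of the above form can be approximated by the standard Euler–Maruyama scheme. The sketch below assumes a diagonal diffusion term; all names (`euler_maruyama` and the lambdas in the usage line) are illustrative and not part of the paper.

```python
import numpy as np

# Euler-Maruyama simulation of one sample path of an SDE of the form (2.1),
# with a diagonal diffusion term; a numerical sketch only.
def euler_maruyama(f, sigma, x0, u, t_grid, rng):
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + f(x, u(t0)) * dt + sigma(x) * dw
        path.append(x.copy())
    return np.array(path)

# Usage: a noise-free linear drift, where the scheme reduces to explicit Euler
path = euler_maruyama(lambda x, u: -x + u, lambda x: 0.0 * x,
                      [1.0], lambda t: 0.0,
                      np.linspace(0.0, 1.0, 1001), np.random.default_rng(1))
```

With the diffusion set to zero, the path approaches the ODE solution e^{-t}, consistent with the non-probabilistic special case discussed later in the paper.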
3. Incremental Stability
We recall a stability notion for stochastic control systems, introduced in [ZEM14], on which the main results presented in this work rely.
Definition 3.1.
A stochastic control system is incrementally input-to-state stable in the q-th moment (ISSM), where q ≥ 1, if there exist a KL function β and a K∞ function γ such that for any t ∈ ℝ₀⁺, any ℝⁿ-valued random variables a and a′ that are measurable in F₀, and any υ, υ′ ∈ U, the following condition is satisfied:
(3.1)  E[‖ξ_{aυ}(t) − ξ_{a′υ′}(t)‖^q] ≤ β(E[‖a − a′‖^q], t) + γ(‖υ − υ′‖_∞).
It can be easily verified that an ISSM stochastic control system is ISS [Ang02] in the absence of any noise, in the sense that
(3.2)  ‖ξ_{xυ}(t) − ξ_{x′υ′}(t)‖ ≤ β(‖x − x′‖, t) + γ(‖υ − υ′‖_∞),
for any t ∈ ℝ₀⁺, some β ∈ KL, and some γ ∈ K∞.
Similar to the characterization of ISS in terms of the existence of so-called ISS Lyapunov functions in [Ang02], one can describe ISSM in terms of the existence of so-called ISSM Lyapunov functions, as shown in [ZEM14] and defined next.
Definition 3.2.
Consider a stochastic control system and a continuous function that is twice continuously differentiable on . The function is called an ISSM Lyapunov function for , where q ≥ 1, if there exist functions , , , and a constant , such that

(resp. ) is a convex (resp. concave) function;

for any , ;

for any , , and for any ,
where is the infinitesimal generator associated to the process , and where and are solution processes of the SDE (2.1) [Oks02, Section 7.3]. The symbols and denote the first- and second-order partial derivatives with respect to and , respectively. ∎
Although condition in the above definition implies that the growth rates of the functions and are linear, this requirement does not restrict and to be linear on a compact subset of . Note that condition is not required in the context of non-probabilistic control systems for the corresponding ISS Lyapunov functions [Ang02]. The following theorem, borrowed from [ZEM14], characterizes ISSM in terms of the existence of ISSM Lyapunov functions.
Theorem 3.3.
A stochastic control system is ISSM if it admits an ISSM Lyapunov function. ∎
One can resort to available software tools, such as SOSTOOLS [PAV13], to search for appropriate ISSM Lyapunov functions for systems of polynomial type. We refer the interested readers to the results in [ZEM14] for a discussion of special instances where these functions can be easily computed, and limit ourselves to mentioning that, as an example, for linear stochastic control systems (with linear drift and diffusion terms), one can search for appropriate ISSM Lyapunov functions by solving a linear matrix inequality (LMI).
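For the linear case just mentioned, a quadratic Lyapunov candidate can be obtained by solving the Lyapunov-equation special case of the LMI. The sketch below, with hypothetical example data, solves AᵀM + MA = −I via a Kronecker-product linear system and checks that M is positive definite; accounting for the diffusion contribution and the general LMI would instead require an SDP solver, which we do not attempt here.

```python
import numpy as np

# Quadratic Lyapunov candidate for a linear drift: solve A^T M + M A = -I.
# Example data is hypothetical; this covers only the linear special case.
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])  # a Hurwitz drift matrix (illustrative)
n = A.shape[0]

# Row-major vectorization: vec(A^T M) = kron(A^T, I) vec(M),
#                          vec(M A)   = kron(I, A^T) vec(M).
K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
M = np.linalg.solve(K, (-np.eye(n)).ravel()).reshape(n, n)
M = (M + M.T) / 2                              # symmetrise against round-off

assert np.all(np.linalg.eigvalsh(M) > 0)       # M is positive definite
```

Uniqueness of the solution follows from A being Hurwitz, so M is automatically symmetric here.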
3.1. Noisy and noisefree trajectories
In order to introduce the symbolic models in Subsection 5.2 (Theorems 5.7 and 5.9) for a stochastic control system, we need the following technical result, borrowed from [ZEM14], which provides an upper bound on the distance (in the moment) between the solution process of and the solution of a derived non-probabilistic control system obtained by disregarding the diffusion term . From now on, we use the notation to denote the solution of (here, we have abused notation by identifying with the map ), starting from the non-probabilistic initial condition and under the input curve , which satisfies the ordinary differential equation (ODE) .
Lemma 3.4.
Consider a stochastic control system such that and . Suppose that and that there exists an ISSM Lyapunov function for such that its Hessian is a positive semidefinite matrix in and , for any , and some positive semidefinite matrix . Then for any and any , we have
(3.3) 
where
and where is the Lipschitz constant, introduced in Definition 2.1, and is the function appearing in (3.1).∎
It can be readily seen that the nonnegative-valued function tends to zero as , , or as , and is identically zero if the diffusion term is identically zero (i.e. , which is the case for ). The interested readers are referred to [ZEM14], which provides results in line with that of Lemma 3.4 for (linear) stochastic control systems admitting a specific type of ISSM Lyapunov functions.
4. Systems and Approximate Equivalence Relations
4.1. Systems
We employ the abstract and general notion of “system,” as introduced in [Tab09], to describe both stochastic control systems and their symbolic models.
Definition 4.1.
A system is a tuple where is a set of states (possibly infinite), is a set of initial states (possibly infinite), is a set of inputs (possibly infinite), is a transition relation, is a set of outputs, and is an output map.∎
A transition is also denoted by . For a transition , state is called a successor, or simply a successor, of state . We denote by the set of all successors of a state . For technical reasons, we assume that for any , there exists some successor of , for some — let us remark that this is always the case for the systems considered later in this paper.
A system is said to be

metric, if the output set is equipped with a metric ;

finite (or symbolic), if and are finite sets;

deterministic, if for any state and any input , .
For a system and given any initial state , a finite state run generated from is a finite sequence of transitions:
(4.1) 
such that for all . A finite state run can be directly extended to an infinite state run. A finite output run is a sequence such that there exists a finite state run of the form (4.1) with , for . A finite output run can likewise be extended to an infinite output run.
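Definition 4.1 and the notion of a run can be encoded directly. The toy system below is invented for illustration (its states, inputs, and outputs are our own): a deterministic two-state system whose output run alternates between two output symbols.

```python
# A direct encoding of Definition 4.1 on toy data (illustrative only).
system = {
    "X":  {"x0", "x1"},                            # states
    "X0": {"x0"},                                  # initial states
    "U":  {"u"},                                   # inputs
    "T":  {("x0", "u", "x1"), ("x1", "u", "x0")},  # transition relation
    "Y":  {"even", "odd"},                         # outputs
    "H":  {"x0": "even", "x1": "odd"},             # output map
}

def output_run(sys_, x, inputs):
    """Follow one transition per input; this toy system is deterministic,
    so each state has exactly one successor per input."""
    run = [sys_["H"][x]]
    for u in inputs:
        (x,) = {t[2] for t in sys_["T"] if t[:2] == (x, u)}
        run.append(sys_["H"][x])
    return run
```

Starting from the initial state "x0" and applying "u" three times yields the output run ["even", "odd", "even", "odd"].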
4.2. Relations among systems
We recall the notion of approximate (bi)simulation relation, introduced in [GP07], which is crucial when analyzing or synthesizing controllers for deterministic systems.
Definition 4.2.
Let and be metric systems with the same output sets and metric . For , a relation is said to be an approximate simulation relation from to if, for all , the following two conditions are satisfied:

;

in implies the existence of in satisfying .
A relation is said to be an approximate bisimulation relation between and if is an approximate simulation relation from to and is an approximate simulation relation from to .
System is approximately simulated by , or approximately simulates , denoted by , if there exists an approximate simulation relation from to such that:

for every , there exists with .
System is approximately bisimilar to , denoted by , if there exists an approximate bisimulation relation between and such that:

for every , there exists with ;

for every , there exists with .∎
5. Symbolic Models for Stochastic Control Systems
5.1. Describing stochastic control systems as metric systems
In order to show the main results of the paper, we use the notion of system introduced above to abstractly represent a stochastic control system. More precisely, given a stochastic control system , we define an associated metric system where:

is the set of all valued random variables defined on the probability space ;

is a subset of the set of valued random variables that are measurable over ;

;

if and are measurable in and , respectively, for some and , and there exists a solution process of satisfying and a.s.;

;

.
We assume that the output set is equipped with the metric d(y, y′) = (E[‖y − y′‖^q])^{1/q}, for any y, y′ and some q ≥ 1. Let us remark that the sets of states and inputs of are uncountable and that is a deterministic system in the sense of Definition 4.1, since (cf. Subsection 2.2) the solution process of is uniquely determined. Note that for the case of a non-probabilistic control system , one obtains , where , is a subset of , , iff for some , , , and the metric on the output set reduces to the natural Euclidean one: d(y, y′) = ‖y − y′‖, for any y, y′.
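The q-th moment distance between two random variables can be estimated empirically from paired Monte-Carlo samples. The sketch below is illustrative only (the pairing/coupling of the samples is an assumption of the estimator, not part of the paper's development).

```python
import numpy as np

def moment_metric(samples_x, samples_y, q=2):
    """Monte-Carlo estimate of (E ||X - Y||_inf^q)^(1/q) from paired samples
    of two random vectors X and Y; an illustrative sketch."""
    diff = np.abs(np.asarray(samples_x, float) - np.asarray(samples_y, float))
    norms = diff.max(axis=-1)   # infinity norm of each paired difference
    return float(np.mean(norms ** q) ** (1.0 / q))

# Two Dirac (degenerate) random variables at infinity-norm distance 1:
x = np.zeros((1000, 3))
y = np.ones((1000, 3))
```

For these degenerate samples the estimate equals 1 exactly, for any q, matching the intuition that Dirac random variables behave like non-probabilistic points under this metric.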
Notice that, since the concrete system is uncountably infinite, it does not allow for a straightforward discrete controller synthesis with the techniques in the literature [dAHM01, MNA03]. We are thus interested in finding a finite abstract system that is (bi)similar to the concrete system . In order to discuss approximate (bi)simulation relations between two metric systems, they have to share the output space (cf. Definition 4.2). System inherits a classical tracebased semantics (cf. definition of output run after (4.1)) [BK08], however the outputs of (and necessarily those of any approximately (bi)similar one) are random variables. This fact is especially important due to the metric that the output set is endowed with: for any nonprobabilistic point one can always find a nondegenerate random variable that is as close as desired to the original point in the metric .
To further elaborate the discussion in the previous paragraph, let us consider the following example. Let be a set (of non-probabilistic points). Consider a safety problem, formulated as the satisfaction of the LTL formula (we refer the interested readers to [BK08] for the formal semantics of the temporal formula expressing the safety property over the set ), where is a label (or proposition) characterising the set . Suppose that over the abstract system we are able to synthesize a control strategy that makes an output run of the abstraction satisfy . Although the run would in general consist of random variables , the fact that means that has a Dirac probability distribution centered at , that is, is a degenerate random variable that can be identified with a point in . Note that since any non-probabilistic point can be regarded as a random variable with a Dirac probability distribution centered at that point, can be embedded in , which we denote as with a slight abuse of notation. As a result, satisfying precisely means that the output run of the abstraction indeed stays in the set forever. On the other hand, suppose that the original system is approximately bisimilar to the abstraction. If we want to interpret the result obtained over the abstraction, we can guarantee that the corresponding output run of the original system satisfies , that is, any output of the run of the original system is within distance from the set : . Note that although the original set is a subset of , its inflation is not a subset of anymore and hence contains non-degenerate random variables. In particular, and is in fact bigger than the latter set of non-probabilistic points. As a result, although satisfying does not necessarily mean that a trajectory of always stays within some non-probabilistic set, it means that the associated random variables always belong to and, hence, are close to the non-probabilistic set with respect to the q-th moment metric.
We are now able to provide two versions of finite abstractions: one whose outputs are always non-probabilistic points, that is, degenerate random variables (elements of ), and one whose outputs can be non-degenerate random variables. Recall, however, that in both cases the output set is still the whole and the semantics is the same as for the original system .
5.2. Main results
This subsection contains the main contributions of the paper. We show that for any ISSM (resp. ISS) stochastic control system (resp. non-probabilistic control system ), and for any precision level , we can construct a finite system that is approximately bisimilar to (resp. ) without any state-set discretization. The results in this subsection rely on additional assumptions on the model that are described next. We restrict our attention to stochastic control systems with input sets that are assumed to be finite unions of boxes (cf. Subsection 2.1). We further restrict our attention to sampled-data stochastic control systems, where input curves belong to set , which contains exclusively curves that are constant over intervals of length , i.e.
Let us denote by a subsystem of obtained by selecting those transitions of corresponding to solution processes of duration and to control inputs in . This can be seen as the time discretization of . More precisely, given a stochastic control system and the corresponding metric system , we define a new associated metric system
where , , , , , and

if and are measurable, respectively, in and for some , and there exists a solution process of satisfying and a.s.
Similarly, one can define as the time discretization of . Notice that a finite state run
of , where and a.s. for , captures the solution process of at times , started from the initial condition and resulting from a control input obtained by the concatenation of the input curves (i.e. for any ), for .
Let us proceed by introducing two fully symbolic systems for the concrete model . Consider a stochastic control system and a tuple of parameters, where is the sampling time, is the input set quantization, is a temporal horizon, and is a source state. Given and , let us introduce the following two symbolic systems:
consisting of:

;

;

;

, where , if and only if ;

is the set of all valued random variables defined on the probability space ;

.
Note that the transition relation in admits a compact representation in the form of a shift operator. We have abused notation by identifying with the constant input curve with domain and value , and by identifying with the concatenation of control inputs (i.e. for any ) for . Notice that the proposed abstraction is a deterministic system in the sense of Definition 4.1. Note that and are mappings from a non-probabilistic point to the random variable and to the one with a Dirac probability distribution centered at , respectively. Finally, note that in the case of a non-probabilistic control system , one obtains the symbolic system , where , , , , and are the same as before, but where the output set reduces to .
Note that the idea behind the definitions of symbolic models and hinges on the ISSM property. Given an input , all solution processes of under the input forget the mismatch between their initial conditions and converge to each other with respect to the moment metric. Therefore, the longer the applied inputs are, the less relevant is the mismatch between initial conditions. Then, the fundamental idea of the introduced abstractions consists in taking the applied inputs as the state of the symbolic model.
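The state set and shift-operator transitions of such an abstraction can be enumerated combinatorially; the outputs (the associated solution processes) are omitted here. The sketch below uses two abstract input symbols and horizon three, which are our own illustrative parameter choices; they yield eight abstract states, the same count as in Example 5.1.

```python
from itertools import product

def build_abstraction(inputs, horizon):
    """States are length-`horizon` words over the quantized input set;
    the transition under input u shifts the word left and appends u.
    A combinatorial sketch of the shift-operator transition relation."""
    states = list(product(inputs, repeat=horizon))
    trans = {(s, u): s[1:] + (u,) for s in states for u in inputs}
    return states, trans

states, trans = build_abstraction(("a", "b"), 3)
# 2 input symbols, horizon 3  ->  2**3 = 8 abstract states
```

Note that the state count depends only on the number of quantized inputs and the horizon, never on the dimension of the continuous state set, which is exactly the source of the scalability claimed in the text.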
The control synthesis over (resp. ) is simple, as the outputs are non-probabilistic points, whereas for it is perhaps less intuitive. Hence, we discuss it in more detail later in Subsection 5.3.
Example 5.1.
An example of an abstraction with and is depicted in Figure 1, where the initial states are shown as targets of sourceless arrows. Note that, regardless of the size of the state set and of its dimension, only has eight possible states, namely:
In order to obtain some of the main results of this work, we impose the following assumption on the ISSM Lyapunov function we will work with:
(5.1) 
for any , and for some and some concave function . As long as one is interested in working within a compact subset of , the function in (5.1) can be readily computed. Indeed, for all , where is compact, one can readily apply the mean value theorem to the function to get
In particular, for the ISSM Lyapunov function , for some positive definite matrix and for all , one obtains [Tab09, Proposition 10.5], which satisfies (5.1) globally on . Note that for non-probabilistic control systems, the concavity assumption on is not required.
Before providing the main results of the paper, we need the following technical lemmas.
Lemma 5.2.
Consider a stochastic control system , admitting an ISSM Lyapunov function , and consider its corresponding symbolic model . We have that
(5.2) 
where
(5.3) 
The proof of Lemma 5.2 is provided in the Appendix. The next lemma provides a result similar to that of Lemma 5.2, but without explicitly using any Lyapunov function.
Lemma 5.3.
The proof of Lemma 5.3 is provided in the Appendix. The next two lemmas provide results similar to those of Lemmas 5.2 and 5.3, but using the symbolic model with probabilistic output values rather than the one with non-probabilistic output values.
Lemma 5.4.
Consider a stochastic control system , admitting an ISSM Lyapunov function , and consider its corresponding symbolic model . One has:
(5.5) 
where
(5.6) 
Proof.
Lemma 5.5.
Proof.
Remark 5.6.
It can be readily verified that by choosing sufficiently large, and can be made arbitrarily small. One can also try to reduce the upper bound for (in (5.2), for example) by selecting the initial point as follows:
(5.8) 
We can now present the first main result of the paper, which relates the existence of an ISSM Lyapunov function to the construction of an approximately bisimilar symbolic model.
Theorem 5.7.
Consider a stochastic control system with and , admitting a