Stochastic Weak Passivity Based Stabilization of Stochastic Systems with Nonvanishing Noise

This work was supported by the National Natural Science Foundation of China under Grant Nos. 11271326 and 61611130124, and the Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20130101110040.
Abstract
For stochastic systems with nonvanishing noise, i.e., systems whose noise port does not vanish at the desired state, it is impossible to achieve global stability of the desired state in the sense of probability. This defect also leads to the loss of stochastic passivity at the desired state if a radially unbounded Lyapunov function is expected to act as the storage function. To characterize a certain (globally) stable behavior for such a class of systems, this paper proposes the notion of stochastic asymptotic weak stability, which requires the transition measure of the state to be convergent and the state to be ergodic. By defining stochastic weak passivity, which admits stochastic passivity only outside a ball centered around the desired state rather than in the whole state space, we develop stochastic weak passivity theorems ensuring that stochastic systems with nonvanishing noise can be globally (locally) stabilized in the weak sense through negative feedback laws. Applications are given to stochastic linear systems and to a nonlinear process system, and simulations are further carried out for the latter.
Key words. Stochastic differential systems, transition measure, ergodicity, stochastic weak passivity, asymptotic weak stability, stabilization
AMS subject classifications. 60H10, 62E20, 70K20, 93C10, 93D15, 93E15
1 Introduction
Stochastic phenomena emerge universally in many physical systems due to noise, disturbance and uncertainty. Their unpredictability makes it a great challenge to stabilize a stochastic system. During the past decades, the stabilization of nonlinear stochastic systems has constituted one of the central problems in stochastic process control, both theoretically and practically. A great many methods have emerged as the times require, among which stochastic passivity based control is a popular one. Rooted in the passivity theory [17, 2] and the stochastic version of the Lyapunov theorem [6], the stochastic passivity theory [4] was developed for the stabilization and control of nonlinear stochastic systems. By means of state feedback laws, asymptotic stabilization in probability can be achieved for a stochastic affine system provided some rank conditions are fulfilled and the unforced stochastic affine system is Lyapunov stable in probability [4]. Following this study, Lin et al. [8] explored the relationship between a stochastic passive system and the corresponding zero-output system, and further established global stabilization results. In parallel with the theoretical development of stochastic passivity, Satoh et al. [12] applied this methodology to port-Hamiltonian systems, making solutions for the stabilization of a large class of nonlinear stochastic systems available. There are also reports applying stochastic passivity to the filtering problem [19] and to controlling stochastic mechanical systems [10].
Despite this large success, stochastic passivity based control seems to work only under the condition that the noise vanishes at the stationary solution (very often at the origin) if a radially unbounded Lyapunov function is expected to act as the storage function. This means that if a stochastic system has a nonzero noise port at the stationary solution, or has a persistent noise port, such a method may be out of action. One aim of this paper is to derive the necessary conditions for a stochastic system to be stochastically passive, and further to give sufficient conditions under which a stochastic system loses stochastic passivity. Equivalently, we prove that there does not exist a radially unbounded Lyapunov function rendering the stochastic system globally asymptotically stable in probability if the noise does not vanish at the desired state. The ubiquitousness of such systems in the mechanical [13, 14] and biological [3] fields motivates us to define a novel kind of stability, termed stochastic asymptotic weak stability, to characterize a certain (globally) stable behavior for them. Stochastic asymptotic weak stability requires the system state to be convergent in distribution and ergodic. The former means that the state evolves within a small region around the desired state with a large probability, while the latter ensures that the state evolution almost always takes place within this region.
On the face of it, stochastic asymptotic weak stability is somewhat similar to the concept of stochastic bounded stability proposed in [13, 14], in that a stochastic system with persistent noise is considered for the same purpose. That concept also means that the state will evolve within a bounded region around a desired state with a large probability that depends on the region radius; in particular, as the region radius goes to infinity, the probability tends to one. However, there is an evident difference between these two kinds of stability. Stochastic bounded stability cannot characterize the ergodicity of the state. Namely, once the trajectory of the state runs out of the bounded region with a small probability, the coming evolution will take place in a larger bounded region, reaching a "new" stochastic bounded stability with a larger probability. In addition, stochastic asymptotic weak stability is also different from stochastic noise-to-state [3, 1] and input-to-state stability [9]. These two kinds of stability likewise serve to characterize the stable behavior of stochastic systems with nonvanishing noise. They describe the convergence of the expectation of the state, for which the transition measure is controlled by defining a particular function. Comparatively speaking, they say nothing about the ergodicity of the state, nor do they imply that the state must evolve within a small region around the desired state. Therefore, stochastic asymptotic weak stability is able to provide more details in characterizing the "stable" evolution of the state.
In the concept of stochastic asymptotic weak stability, the convergence in distribution describes the evolution trend of the probability distribution of the stochastic system under consideration. As one may know, for a stochastic system the probability density function satisfies the Fokker–Planck equation [6]. Hence, a usual way to achieve convergence in distribution starts from analyzing the properties of the solutions of the Fokker–Planck equation, including their existence, uniqueness and convergence. Based on this equation, Zhu et al. [21, 20] studied the exact stationary solution of the distribution density function for stochastic Hamiltonian systems. Liberzon et al. [7] developed a feedback controller to stabilize in distribution a class of nonlinear stochastic systems for which the steady-state distribution density function can be solved from the Fokker–Planck equation. In addition, probability analysis is another way to achieve weak stability. Zakai [18] presented a Lyapunov criterion for the existence of a stationary probability distribution and the convergence of the transition probability measure for stochastic systems with globally Lipschitzian coefficients. Stettner [15] pointed out that strongly Feller and irreducible processes are stable in distribution. Khasminskii [5] constructed a Markov chain to analyze the convergence of the probability distribution, and further showed the Markov process to be convergent in distribution [6] if it "mixes sufficiently well" in an open domain and the recurrence time is finite. The conditions rendering the recurrence time finite greatly inspire our development of stabilizing methods, in the weak sense, for stochastic systems with nonvanishing noise.
In this paper, we will show that the recurrence property of a stochastic system is highly relevant to its stochastic passivity behavior. Based on this observation, we define stochastic passivity not in the whole state space but only outside a ball centered around the desired state, which is labeled stochastic weak passivity in what follows. Within the framework of stochastic weak passivity, we do not need to care whether the noise port of a stochastic system vanishes at the desired state or not. Therefore, it is suited to handle the stabilization issue of stochastic differential systems with nonvanishing noise. Further, we link stochastic weak passivity with stochastic asymptotic weak stability, and develop stabilizing controllers using stochastic weak passivity to achieve the asymptotic weak stability of stochastic systems. Sufficient conditions for global and local asymptotic stabilization in the weak sense are provided by means of negative feedback laws, respectively.
The rest of the paper is organized as follows. Section 2 presents some preliminaries on stochastic passivity. In Section 3, the loss of stochastic passivity is analyzed and the problem of interest is formulated. In Section 4, we propose the framework of the stochastic weak passivity theory and make a link between stochastic weak passivity and asymptotic weak stability. Some basic concepts and the main results (expressed as two stochastic weak passivity theorems and one refined version) for stabilizing stochastic systems in the weak sense are given in this section. Section 5 illustrates the efficiency of the stochastic weak passivity theory through two application examples. Finally, Section 6 concludes this paper and gives a prospect of future research.
2 Preliminaries of stochastic passivity
In this section, we give a bird's-eye view of the mathematical systems theory related to stochastic differential systems.
We begin with a stochastic differential equation written in the sense of Itô,
\[
dx(t) = f(x(t))\,dt + g(x(t))\,dw(t), \tag{2.1}
\]
where $x(t) \in \mathbb{R}^n$ is the state, $f: \mathbb{R}^n \to \mathbb{R}^n$ and $g: \mathbb{R}^n \to \mathbb{R}^{n \times m}$ are locally Lipschitz continuous functions, and $w(t)$ is a standard $m$-dimensional Wiener process defined on a complete probability space. Assume $x(t)$ to be the stochastic process solution and $\bar{x}$ to be the equilibrium solution (if it exists) of Eq. (2.1); then we have
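As a concrete numerical companion to Eq. (2.1), the following Python sketch (our own illustration, not part of the paper) simulates a scalar Itô equation with the standard Euler–Maruyama discretization; the Ornstein–Uhlenbeck drift and the constant diffusion are assumptions chosen precisely because the noise does not vanish at the drift's equilibrium.

```python
import numpy as np

def euler_maruyama(f, g, x0, T, dt, rng):
    """Simulate the scalar Ito equation dx = f(x) dt + g(x) dw with the
    Euler-Maruyama scheme; f and g play the roles of the drift and
    diffusion terms of Eq. (2.1)."""
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment over one step
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dw
    return x

# Ornstein-Uhlenbeck example with nonvanishing noise: f(x) = -x, g(x) = 0.5;
# the diffusion stays 0.5 even at the equilibrium x = 0 of the drift.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, lambda x: 0.5, x0=2.0, T=10.0, dt=1e-3, rng=rng)
```

Because the diffusion never vanishes, the simulated path keeps fluctuating around the origin instead of settling there, which is exactly the behavior this paper sets out to characterize.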
Definition 2.1 (Transition Measure [6])
The transition measure of $x(t)$, denoted by $P(t,x,\Gamma)$, is a function from $[0,\infty) \times \mathbb{R}^n \times \mathcal{B}(\mathbb{R}^n)$ to $[0,1]$ such that
\[
P(t,x,\Gamma) = \Pr\{x(t) \in \Gamma \mid x(0) = x\}, \tag{2.2}
\]
where $\mathcal{B}(\mathbb{R}^n)$ is the $\sigma$-algebra of Borel sets in $\mathbb{R}^n$, $\Gamma \in \mathcal{B}(\mathbb{R}^n)$ is a Borel subset, and $\Pr\{\cdot\}$ denotes the probability function.
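The transition measure can be estimated empirically. The sketch below (an illustration under our own assumed dynamics, not the paper's) approximates $P(t, x_0, \Gamma)$ for a scalar Itô equation by Monte Carlo over Euler–Maruyama paths, with an interval standing in for the Borel set $\Gamma$.

```python
import numpy as np

def transition_prob(f, g, x0, t, interval, n_paths=4000, dt=1e-2, seed=1):
    """Monte Carlo estimate of the transition measure P(t, x0, Gamma) of
    Eq. (2.2), i.e. Pr{x(t) in Gamma | x(0) = x0}, for a scalar Ito
    equation; `interval` = (a, b) stands in for the Borel set Gamma."""
    rng = np.random.default_rng(seed)
    a, b = interval
    x = np.full(n_paths, float(x0))
    for _ in range(int(round(t / dt))):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x += f(x) * dt + g(x) * dw
    return float(np.mean((x > a) & (x < b)))

# For dx = -x dt + 0.5 dw started at the origin, x(t) is Gaussian with
# stationary standard deviation 0.5/sqrt(2) ~ 0.35, so P(2, 0, (-1, 1))
# should be close to (but strictly below) 1.
p = transition_prob(lambda x: -x, lambda x: 0.5 * np.ones_like(x), 0.0, 2.0, (-1.0, 1.0))
```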
Definition 2.2 (Invariant Measure [6])
Let $\mu$ be a measure defined on the Borel space $(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n))$; then $\mu$ is an invariant probability measure for the stochastic system of Eq. (2.1) if $\mu(\mathbb{R}^n) = 1$ and
\[
\mu(\Gamma) = \int_{\mathbb{R}^n} P(t,x,\Gamma)\,\mu(dx), \quad \forall t \ge 0,\ \Gamma \in \mathcal{B}(\mathbb{R}^n). \tag{2.3}
\]
Definition 2.3 (Stable in Probability [6])
The equilibrium solution $\bar{x}$ of Eq. (2.1) is
stable in probability if, for any $\epsilon > 0$, $\lim_{x \to \bar{x}} \Pr\{\sup_{t \ge 0} \|x(t) - \bar{x}\| > \epsilon \mid x(0) = x\} = 0$;
locally asymptotically stable in probability if it is stable in probability and $\lim_{x \to \bar{x}} \Pr\{\lim_{t \to \infty} x(t) = \bar{x} \mid x(0) = x\} = 1$;
globally asymptotically stable in probability if it is stable in probability and $\Pr\{\lim_{t \to \infty} x(t) = \bar{x} \mid x(0) = x\} = 1$ for every $x \in \mathbb{R}^n$.
In order to analyze the stability of stochastic systems, the stochastic version of the second Lyapunov theorem and passivity theorem were proposed in succession.
Theorem 1 (Stochastic Lyapunov Theorem [6])
If there exists a positive definite function $V(x) \in C^2(U)$ with respect to $\bar{x}$ such that
\[
\mathcal{L}V(x) \le 0, \quad \forall x \in U \setminus \{\bar{x}\}, \tag{2.4}
\]
then the equilibrium solution $\bar{x}$ of Eq. (2.1) is stable in probability, where $U \subset \mathbb{R}^n$ is a bounded open neighborhood of $\bar{x}$ and $\mathcal{L}$ is the infinitesimal generator of the solution of Eq. (2.1), calculated through
\[
\mathcal{L}V(x) = \frac{\partial V}{\partial x^\top} f(x) + \frac{1}{2}\,\mathrm{tr}\!\left\{ g^\top(x)\,\frac{\partial^2 V}{\partial x^2}\,g(x) \right\}. \tag{2.5}
\]
If the equality in Eq. (2.4) holds if and only if $x = \bar{x}$, then $\bar{x}$ is locally asymptotically stable in probability.
Further, if $U = \mathbb{R}^n$, $\lim_{\|x\| \to \infty} V(x) = \infty$ (the Lyapunov function is then often said to be radially unbounded) and $\mathcal{L}V(x) < 0$ for all $x \ne \bar{x}$, then $\bar{x}$ is globally asymptotically stable in probability.
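To make the generator (2.5) concrete, the following sympy sketch (our illustration; the system and Lyapunov function are assumptions) computes $\mathcal{L}V$ for a scalar linear drift with constant noise, foreshadowing the obstruction analyzed in Section 3.

```python
import sympy as sp

x = sp.symbols('x', real=True)
sigma = sp.symbols('sigma', positive=True)

f = -x        # drift term
g = sigma     # constant, nonvanishing diffusion term
V = x**2      # radially unbounded candidate Lyapunov function

# One-dimensional instance of the generator (2.5):
#   LV = V'(x) f(x) + (1/2) g(x)^2 V''(x)
LV = sp.expand(sp.diff(V, x) * f + sp.Rational(1, 2) * g**2 * sp.diff(V, x, 2))
# LV = sigma**2 - 2*x**2: negative far from the origin, but LV(0) = sigma**2 > 0,
# so the condition LV <= 0 of (2.4) fails in a neighborhood of the drift's
# equilibrium whenever the noise does not vanish there.
```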
The stochastic passivity theorem is not stated directly in the literature, but it follows immediately from the definition of stochastic passivity.
Definition 2.4 (Stochastic Passivity [4])
An input-output stochastic differential system $\Sigma$ in the sense of Itô,
\[
\Sigma: \quad
dx(t) = \big(f(x(t)) + h(x(t))\,u(t)\big)\,dt + g(x(t))\,dw(t), \qquad
y(t) = s(x(t)), \tag{2.6}
\]
is said to be stochastically passive if there exists a positive semidefinite function $V(x) \in C^2(\mathbb{R}^n)$ such that
\[
\mathcal{L}V(x) \le y^\top u, \tag{2.7}
\]
where $x \in \mathbb{R}^n$ is the state, $u \in \mathbb{R}^p$ the input and $y \in \mathbb{R}^p$ the output; the drift term $f(x) + h(x)u$, the diffusion term $g(x)$ and the output map $s(x)$ all satisfy the condition of local Lipschitz continuity, and $w(t)$, $\mathcal{L}$ share the same meaning with those in Eqs. (2.1) and (2.5). The nonnegative real function $V$ is called the storage function, the state $\bar{x}$ at which $V(\bar{x}) = 0$ is the stochastic passive state, and the inner product $y^\top u$ is called the supply rate.
Result 1 (Stochastic Passivity Theorem)
The negative feedback connection of two stochastic passive systems is stochastically passive.
Proof. Let the subscripts $1$ and $2$ represent these two stochastic passive systems, respectively; then we have
\[
\mathcal{L}V_1(x_1) \le y_1^\top u_1, \qquad \mathcal{L}V_2(x_2) \le y_2^\top u_2.
\]
Define the storage function of their negative feedback connection by
\[
V(x_1, x_2) = V_1(x_1) + V_2(x_2),
\]
and note the fact that, in the negative feedback connection with external input $r$, $u_1 = r - y_2$ and $u_2 = y_1$, so
\[
y_1^\top u_1 + y_2^\top u_2 = y_1^\top (r - y_2) + y_2^\top y_1 = y_1^\top r;
\]
then we get
\[
\mathcal{L}V(x_1, x_2) \le y_1^\top u_1 + y_2^\top u_2 = y_1^\top r.
\]
Therefore, the result is true.
Result 2 (Stochastic Passivity and Stability in Probability)
A stochastic passive system with a positive definite storage function is stable in probability if a stochastic passive controller with a positive definite storage function is connected in negative feedback.
Proof. Based on Result 1, the whole negative feedback connection is stochastically passive. As long as the input of the stochastic passive system is manipulated by the stochastic passive controller through the negative feedback law with zero external input, the supply rate of the connection vanishes, which means $\mathcal{L}V \le 0$ for the connected system. The stability in probability of the connection follows immediately from Theorem 1, and so does that of the stochastic passive system.
Remark 1
Deterministic passive systems are special cases of stochastic passive systems. Therefore, the frequently used passive controllers [16], such as PID controllers, model predictive controllers, etc., can all serve for stabilizing stochastic passive systems in probability.
3 Loss of stochastic passivity and problem setting
This section elaborates that stochastic passivity vanishes either in some stochastic systems or when some control problems are addressed, and further formulates the problem of interest.
3.1 Loss of stochastic passivity
As can be seen from Definition 2.4, a key point in capturing stochastic passivity lies in finding a storage function. In the following we derive the necessary condition for stochastic passivity, and then obtain the sufficient condition for the loss of stochastic passivity. For this purpose, we go back to the stochastic differential equation of Eq. (2.1).
Theorem 2
If a stochastic differential equation given by Eq. (2.1) has a global solution, then it cannot be stable in probability at any state at which the diffusion term is nonzero.
Proof. Let the set of all states that result in a nonzero diffusion term be given by
For any and , it is expected that
cannot be true. To this end, we assume for simplicity, but without loss of generality, that (which means ), and further construct a real-valued function in the form of
Based on this function, a positive definite, twice continuously differentiable and bounded real function mapping to is defined by
Clearly, and, moreover, in the neighborhood of .
In order to finish the proof, we impose the infinitesimal generator on this function, and are only concerned with the result at the state under consideration. From Eq. (2.5), we have
On the other hand, from the definition of [6] we get
where appearing in indicates that the initial condition is . Since the stochastic differential equation (2.1) has a global solution, there exist a time and a constant so that . Also, since
and
together with the fact that
where is any positive number, we have
We set to be sufficiently small so that
where .
From the definition, the leftmost term in the above inequality can be calculated by
where
is a stopping time. Then there exists at least one point, denoted by , on the surface of the ball such that
Note that Eq. (2.1) is autonomous; therefore
Namely, for any there always exists a making the above inequality true. Clearly, the above inequality implies that Eq. (2.1) cannot be stable in probability at .
It is straightforward to write the contrapositive of Theorem 2 as a corollary.
Corollary 1
For a stochastic differential equation in the form of (2.1) with a global solution, if it is stable in probability at a desired state (which may not be the equilibrium $\bar{x}$), then that state must belong to the set
\[
\mathcal{Z} = \{x \in \mathbb{R}^n : g(x) = 0\}. \tag{3.1}
\]
Note that the above result depends on the condition that the stochastic differential equation (2.1) has a global solution. However, under the condition of local Lipschitz continuity, Eq. (2.1) has a unique solution only before the explosion time. Based on this result, we will reveal that there is no explosion for some stochastic passive systems, so they must have a global solution. To this end, attention is turned to the nonexplosion condition for stochastic differential equations proposed by Narita [11].
Lemma 1 (Nonexplosion Condition [11])
Given a stochastic differential equation represented by Eq. (2.1), if there exist two positive numbers $c$ and $\rho$, and a nonnegative scalar function $V(x) \in C^2(\mathbb{R}^n)$, such that
\[
\mathcal{L}V(x) \le c\,V(x) \tag{3.2}
\]
holds for all $t \ge 0$ and $\|x\| \ge \rho$, and moreover,
\[
\lim_{\|x\| \to \infty} V(x) = \infty, \tag{3.3}
\]
then the solutions of Eq. (2.1) are of nonexplosion, i.e., the explosion time beginning at any $t_0$ and $x_0$, denoted by $\tau_e(t_0, x_0)$, satisfies $\Pr\{\tau_e(t_0, x_0) = \infty\} = 1$.
In the following, applying this lemma to a stochastic passive system yields
Proposition 1
For a stochastic differential system $\Sigma$ governed by Eq. (2.6), if there exists a radially unbounded Lyapunov function so that $\Sigma$ is stochastically passive, then the unforced version of Eq. (2.6) has a global solution.
Proof. Assume $V$ to be the radially unbounded Lyapunov function that renders $\Sigma$ stochastically passive; then by designating the zero controller to $\Sigma$, i.e., $u \equiv 0$, we have $\mathcal{L}V \le 0$. It is natural to observe that $V$ satisfies the conditions of Lemma 1. Note that the state evolution of this unforced version of $\Sigma$ is just the same as Eq. (2.1). Hence, the solutions of $\Sigma$ are of nonexplosion based on Lemma 1. Namely, Eq. (2.6) has a global solution.
From Proposition 1, one knows that such stochastic passive systems must have a global solution when unforced. Combining this result with Corollary 1, we get the necessary condition for $\Sigma$ to be stochastically passive, expressed as follows.
Theorem 3 (Necessary Condition for Stochastic Passivity)
If there exists a radially unbounded Lyapunov function that can render a stochastic differential system described by Eq. (2.6) stochastically passive, then the unforced diffusion term must vanish at the stochastic passive state.
Proof. From Theorem 1, the stochastic Lyapunov theorem, $\Sigma$ is stable in probability at the stochastic passive state with the zero controller. This, together with Proposition 1 and Corollary 1, yields the result.
We further state the contrapositive of Theorem 3 to obtain the sufficient condition for the loss of stochastic passivity.
Corollary 2 (Sufficient Condition for Loss of Stochastic Passivity)
If the unforced diffusion term in a stochastic differential system in the form of (2.6) does not vanish at a given state, then there does not exist any radially unbounded Lyapunov function taking that state as the stochastic passive state that ensures $\Sigma$ to be stochastically passive.
Remark 2
Corollary 2 implies that stochastic passivity is lost when the diffusion term does not vanish at the desired state and the storage function is expected to be a radially unbounded Lyapunov function, so it is impossible to use the stochastic passivity theory, and further the stochastic Lyapunov theorem, to analyze the global asymptotic stability of $\Sigma$ at the desired state in the sense of probability.
3.2 Problem setting
The above analysis reveals that when the diffusion term does not vanish at the desired state, stochastic passivity fails to capture the global asymptotic stability (in the probability sense) of a stochastic differential system at that state, often set as the equilibrium state (if it exists) in many control problems. In fact, a nonzero diffusion term is frequently encountered in real stochastic systems, such as chemical reaction networks, tracking systems, etc. One case is that the noise is persistent, which means the diffusion term vanishes nowhere and an equilibrium does not exist at all; the other case is that some special control purpose is served, such as the desired state differing from the equilibrium, so that the diffusion term is nonzero at the desired state even if an equilibrium exists.
Apparently, the nonzero diffusion term in real stochastic systems greatly restricts the applications of the stochastic passivity theory, a powerful tool for stabilization. What is even worse, it may leave the system under consideration not stable in probability at the desired state at all, as stated in Theorem 2. These two awkward situations motivate us to find a new solution for stabilizing stochastic systems with a nonzero diffusion term at the desired state. On the one hand, it is impossible to stabilize some stochastic systems at any state in probability; on the other hand, the excellent performance of stochastic passivity is hoped to be retained. Thus, we pursue the next best approach to the current control problem: seeking convergence in distribution and ergodicity instead of convergence in probability, and requiring the stochastic passivity behavior only outside a certain neighborhood of the desired state instead of in the whole state domain.
4 Stochastic weak passivity theory
The objective in this section is to present the theory of stochastic weak passivity with which some stochastic systems with nonzero diffusion term can be analyzed concerning the convergence of the transition measure and ergodicity. This theoretical framework includes some basic concepts related to stochastic weak passivity, properties of invariant measure, and results for stabilization which are parallel to those appearing in the stochastic passivity theory.
4.1 Basic concepts
We first give the definitions of convergence in distribution and of ergodicity.
Definition 4.1 (Convergence in distribution and Ergodicity)
Assume a stochastic differential equation described by Eq. (2.1) to have an invariant measure $\mu$. If there exists a subset of $\mathbb{R}^n$, denoted by $D$, such that for any initial state $x \in D$ and any Borel subset $\Gamma$ with zero-measure boundary the equation
\[
\lim_{t \to \infty} P(t,x,\Gamma) = \mu(\Gamma) \tag{4.1}
\]
is true, then the stochastic process $x(t)$ is said to be locally convergent in distribution. If $D = \mathbb{R}^n$, then the convergence in distribution is global.
If for any Borel subset $\Gamma$ the state satisfies
\[
\lim_{T \to \infty} \frac{1}{T} \int_0^T \mathbf{1}_\Gamma(x(t))\,dt = \mu(\Gamma) \quad a.s., \quad \forall\, x(0) \in D, \tag{4.2}
\]
where "a.s." represents "almost surely" and $\mathbf{1}_\Gamma(\cdot)$ is the indicator function of $\Gamma$,
then the stochastic process $x(t)$ is said to be locally ergodic. Especially, when $D = \mathbb{R}^n$, the ergodicity is global.
Here, analogous to the definition of stability in probability, we also distinguish the local and global notions to emphasize the importance of the initial condition.
Remark 3
From the control viewpoint, the convergence of the transition measure and the ergodicity both describe certain senses of stable behavior for stochastic systems. The former means that the distribution of the state will converge to an invariant measure as time goes to infinity. Therefore, as long as the invariant measure is shaped to concentrate on a small region around the desired state, the state will evolve within this region with a large probability, i.e., will not deviate too far from the desired point with a large probability. The latter implies that the state evolution almost always takes place within the mentioned region. Even if the trajectory sometimes runs out of the region, it will come back into the region immediately.
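The two behaviors of Remark 3 can be observed numerically. In the sketch below (our illustrative example, not the paper's), one long trajectory of $dx = -x\,dt + \sigma\,dw$ is simulated; the fraction of time it spends in the ball $|x| < 1$ approaches the invariant Gaussian measure of that ball, even though the path starts far from the desired state $0$.

```python
import numpy as np
from math import erf, sqrt

# Long Euler-Maruyama trajectory of dx = -x dt + sigma dw, started at 3.0.
sigma, dt, n = 0.5, 1e-2, 200_000
rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, n - 1)
x = np.empty(n)
x[0] = 3.0
for k in range(n - 1):
    x[k + 1] = x[k] * (1.0 - dt) + sigma * sqrt(dt) * noise[k]

# Fraction of time the single trajectory spends in the ball |x| < 1 ...
time_fraction = float(np.mean(np.abs(x) < 1.0))

# ... compared with the invariant measure of (-1, 1): the stationary law
# of this process is N(0, sigma^2 / 2).
std_inf = sigma / sqrt(2.0)
mu_gamma = erf(1.0 / (std_inf * sqrt(2.0)))
```

The near agreement of `time_fraction` and `mu_gamma` is exactly the ergodic property (4.2): time averages along one path reproduce the invariant measure.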
Clearly, the convergence of the transition measure and ergodicity reveal that the state of a stochastic system almost always evolves near the desired state if the invariant measure is assigned properly. We define this behavior as stochastic asymptotic weak stability.
Definition 4.2 (Stochastic Asymptotic Weak Stability)
A stochastic differential equation of Eq. (2.1) possesses local (global) stochastic asymptotic weak stability if its distribution locally (globally) converges to an invariant measure and its process is locally (globally) ergodic.
Next, we define the stochastic weak passivity that serves for stabilizing a stochastic differential system in the weak sense. Note that the loss of stochastic passivity mainly originates from the nonzero diffusion term at the desired state, which further results in some unexpected behaviors appearing around it. Thus, a natural idea is to give up stochastic passivity near the desired state, and to require it only outside a neighborhood of the desired state.
Definition 4.3 (Stochastic Weak Passivity)
A stochastic differential system $\Sigma$, as described by Eq. (2.6), is said to be stochastically weakly passive if there exists a function $V(x) \in C^2(\mathbb{R}^n)$, i.e., the storage function, such that for all $x$ with $\|x - \bar{x}\| \ge r$ and all $u$ the following inequality holds
\[
\mathcal{L}V(x) \le y^\top u,
\]
where the state $\bar{x}$ is the sole minimum point of $V$ and $r > 0$ is called the stochastic passive radius.
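A small symbolic example may clarify the definition. The controlled system, output and storage function below are our own illustrative constructions: for them the supply-rate inequality $\mathcal{L}V \le y^\top u$ holds exactly outside a ball, so the system is stochastically weakly passive but not stochastically passive.

```python
import sympy as sp

x, u = sp.symbols('x u', real=True)
sigma = sp.symbols('sigma', positive=True)

# Illustrative controlled system (our construction, not the paper's):
#   dx = (-x + u) dt + sigma dw,   output y = 2x,   storage V = x^2.
V = x**2
drift = -x + u
y = 2 * x
LV = sp.expand(sp.diff(V, x) * drift + sp.Rational(1, 2) * sigma**2 * sp.diff(V, x, 2))

# LV - y*u = sigma**2 - 2*x**2, which is <= 0 exactly when |x| >= sigma/sqrt(2):
# the inequality LV <= y^T u holds outside the ball of radius sigma/sqrt(2)
# but fails inside it, so this system is stochastically weakly passive with
# passive radius sigma/sqrt(2), yet not stochastically passive.
gap = sp.simplify(LV - y * u)
```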
Similar to the concept of strict passivity, we may further define strict stochastic weak passivity.
Definition 4.4 (Strict Stochastic Weak Passivity)
Consider a stochastic weakly passive system with storage function $V$ and passive radius $r$. Suppose that there exists a positive constant $\epsilon$ such that for all $x$ with $\|x - \bar{x}\| \ge r$ and all $u$
\[
\mathcal{L}V(x) \le y^\top u - \epsilon\,\varphi(x,u,y).
\]
The system is
strictly state stochastic weak passive if $\varphi(x,u,y) = \|x - \bar{x}\|^2$;
strictly input stochastic weak passive if $\varphi(x,u,y) = \|u\|^2$;
strictly output stochastic weak passive if $\varphi(x,u,y) = \|y\|^2$.
4.2 Properties of invariant measure
Definition 4.2 reveals that stochastic asymptotic weak stability is concerned with the convergence in distribution of the state and its ergodic behavior. However, for a stochastic system, unlike its equilibrium, it is not at all obvious what can be said about its invariant measure, such as its existence, uniqueness, etc. This subsection analyzes the properties of the invariant measure of the stochastic differential equation under consideration.
In fact, analyzing the properties of the invariant measure of a stochastic system is not a new research issue [5, 15]. A sufficient condition for convergence in distribution was reported as follows.
Theorem 4 (cf. [15])
If a right Markov process on $\mathbb{R}^n$ is strongly Feller, i.e., the transition semigroup transforms bounded Borel functions into continuous ones, and moreover is irreducible, i.e., for any $x$ and any open set $\Gamma$ there is a $t$ with $P(t,x,\Gamma) > 0$, then any probability measure converges to the invariant measure (if it exists). Moreover, the invariant measure (if it exists) is equivalent to each transition measure $P(t,x,\cdot)$, $t > 0$, $x \in \mathbb{R}^n$.
This theorem provides a way to capture the convergence in distribution of a right Markov process. However, it is not easy to verify the conditions of "strongly Feller" and "irreducible" in practical applications. As an alternative, Khasminskii [6] proposed a more practical way to show that a stochastic system is convergent in distribution, which works if a Markov process "mixes sufficiently well" in an open domain and the recurrence time is finite (cf. the corresponding theorems and corollary in [6]). Here, we will combine this practical way with Zakai's work [18], and give a Lyapunov criterion for stochastic asymptotic weak stability. Unlike [18], however, the drift and diffusion terms of the stochastic system are only assumed to be locally Lipschitz continuous rather than globally Lipschitz continuous.
Lemma 2 (Finite Mean Recurrence Time [18])
For a stochastic differential equation (2.1) having a global solution $x(t)$, if there exist a nonnegative function $V(x) \in C^2(\mathbb{R}^n)$, a state $\bar{x}$, and two positive numbers $r$ and $\alpha$ such that
\[
\mathcal{L}V(x) \le -\alpha, \quad \forall x \notin B_r(\bar{x}), \tag{4.3}
\]
then for all $x(0) = x \notin B_r(\bar{x})$ the first passage time from $x$ to the sphere $\partial B_r(\bar{x})$, denoted by $\tau_r$, satisfies
\[
\mathbb{E}[\tau_r] \le \frac{V(x)}{\alpha}. \tag{4.4}
\]
Proof. Up to the time $\tau_r \wedge t$, by Dynkin's formula we have
\[
\mathbb{E}\big[V(x(\tau_r \wedge t))\big]
= V(x) + \mathbb{E}\left[\int_0^{\tau_r \wedge t} \mathcal{L}V(x(s))\,ds\right]
\le V(x) - \alpha\,\mathbb{E}[\tau_r \wedge t].
\]
Note that $V \ge 0$, so $\mathbb{E}[\tau_r \wedge t] \le V(x)/\alpha$. The inequality (4.4) naturally holds due to monotone convergence.
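The bound (4.4) can be checked by simulation. In the sketch below (all system and parameter choices are our own illustrations), the first passage time into the ball $|x| < r$ is estimated by Monte Carlo for $dx = -x\,dt + \sigma\,dw$ with $V(x) = x^2$; since $\mathcal{L}V(x) = \sigma^2 - 2x^2 \le -(2r^2 - \sigma^2)$ for $|x| \ge r$, we may take $\alpha = 2r^2 - \sigma^2$.

```python
import numpy as np

# Monte Carlo check of E[tau] <= V(x0)/alpha from (4.4), for the illustrative
# equation dx = -x dt + sigma dw with V(x) = x^2 and desired state 0.
sigma, r, x0, dt = 0.5, 1.0, 2.0, 1e-3
alpha = 2 * r**2 - sigma**2          # LV <= -alpha outside |x| >= r
rng = np.random.default_rng(3)
taus = []
for _ in range(300):
    x, t = x0, 0.0
    while abs(x) >= r:               # run until the ball is first hit
        x += -x * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    taus.append(t)
mean_tau = float(np.mean(taus))
bound = x0**2 / alpha                # Lyapunov bound V(x0)/alpha
```

The empirical mean passage time comes out well below the Lyapunov bound, as the lemma guarantees.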
Theorem 5
For a stochastic equation in the form of (2.1), if there exists a nonnegative function $V(x) \in C^2(\mathbb{R}^n)$ satisfying the following conditions:
$\lim_{\|x\| \to \infty} V(x) = \infty$;
$\mathcal{L}V(x) \le -\alpha$ for some $\alpha > 0$ and all $x \notin B_r(\bar{x})$;
$g(x)g^\top(x)$ is nonsingular for $x \in B_r(\bar{x})$;
then there is a unique finite invariant measure $\mu$ such that for any Borel subset $\Gamma$ with zero-measure boundary
\[
\lim_{t \to \infty} P(t,x,\Gamma) = \mu(\Gamma), \quad \forall x \in \mathbb{R}^n, \tag{4.5}
\]
and for any Borel subset $\Gamma$
\[
\lim_{T \to \infty} \frac{1}{T}\int_0^T \mathbf{1}_\Gamma(x(t))\,dt = \mu(\Gamma) \quad a.s., \tag{4.6}
\]
i.e., Eq. (2.1) is globally asymptotically stable in the weak sense.
Proof. According to Lemmas 1 and 2, the first two conditions imply that Eq. (2.1) has a unique global solution and that, for any initial state $x \notin B_r(\bar{x})$, the mean first passage time into the ball is finite. Hence, for any compact subset of initial states, the mean passage time is uniformly bounded. Further, based on the strong maximum principle for solutions of elliptic equations, the third condition implies that the system (2.1) is irreducible (cf. the corresponding lemma in [6]), which, combined with the above bound, suggests that an ergodic Markov chain can be induced for this stochastic process by constructing a cycle. The ergodic property of the Markov chain ensures that there exists a sole invariant measure to which the transition measure converges (cf. the corresponding theorems and corollary in [6]), and the ergodicity of the system under consideration holds (cf. [6]). Namely, Eqs. (4.5) and (4.6) hold.
This theorem provides a Lyapunov-function-based method to address the existence and uniqueness of the invariant measure, together with the convergence of the transition probability measure and the ergodicity, for a stochastic differential equation, so it can be conveniently connected with the Lyapunov stability theory.
Remark 4
There are two differences between the above theorem and the corresponding result in [18]. One is that the nonsingularity of $g(x)g^\top(x)$ is required not in the whole state space but only in the open ball $B_r(\bar{x})$; this weaker condition is believed to be achievable more easily in practice. The other is that the storage function must be radially unbounded here. In fact, this is not a necessary condition, and it can be removed if the drift and diffusion terms in the stochastic equation are assumed to be globally Lipschitz continuous.
For a stochastically asymptotically weakly stable system, to ensure that the state evolves within a small region around the desired point, the invariant measure needs to be assignable, or at least partially shaped by the control, to concentrate on this region. In the sequel, we will prove that the invariant measure can be shaped purposefully by controlling the change rate of the nonnegative function and the radius of the ball.
Lemma 3
For a stochastic differential equation (LABEL:StochasticEquation) admitting a global solution , if and such that
and
(4.7)
then for any we have
where satisfying , represents the first time at which the state hits the region after , is the first time at which the trajectory reaches the surface of after , and means the initial time.
Proof. (1) According to Dynkin’s formula, for any
Also, since
we have
Therefore, we get
(2) On the other side, for any we have
i.e.,
By the monotone convergence theorem, the inequality is true.
(3) Based on the results of (1) and (2), we have that for any
Besides, Eq. (4.7) and the result of (2) imply that there are almost surely infinitely many such times, so the notations "" and "" in the following are well defined. Applying Fatou's lemma yields
(4.8)
Let , utilizing which we have
Hence,