New Approach to General Nonlinear Discrete-Time Stochastic $H_{\infty}$ Control


Xiangyun Lin, Tianliang Zhang, Weihai Zhang, and Bor-Sen Chen

X. Lin is with the College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China. T. Zhang is with the School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China. W. Zhang is with the College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China. B.-S. Chen is with the Department of Electrical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan. Corresponding author: W. Zhang (email: w_hzhang@163.com).
Abstract

In this paper, a new approach based on convex analysis is introduced to solve the $H_{\infty}$ control problem for general discrete-time nonlinear stochastic systems. A stochastic version of the bounded real lemma is proved, and the state feedback $H_{\infty}$ control is studied. Two examples are presented to show the effectiveness of the developed theory.


Key words: $H_{\infty}$ control, bounded real lemma, convex analysis, internal stability, external stability.

1 Introduction

$H_{\infty}$ theory was initially formulated by Zames [1] in the early 1980s for linear time-invariant systems, where the $H_{\infty}$ norm, defined in the frequency domain for a stable transfer matrix, plays an important role in robust linear control design; see [2] and [3]. A breakthrough of the classical $H_{\infty}$ theory in [4] initiated the time-domain state-space approach in the $H_{\infty}$ study and turned the $H_{\infty}$ controller design into solving two algebraic Riccati equations (AREs). After the appearance of [4], $H_{\infty}$ control theory made great progress in the 1990s [5]. Up to now, $H_{\infty}$ control has been successfully applied to network control [6], synthetic biology design [7, 8], etc.

Instead of solving two Riccati equations or Riccati inequalities as in [4], Gahinet and Apkarian [9] introduced the linear matrix inequality (LMI) approach to the $H_{\infty}$ controller design, which is more convenient due to the availability of the LMI Toolbox. In the time-domain framework, $H_{\infty}$ control theory was first extended to nonlinear deterministic systems described by ordinary differential equations (ODEs). For example, based on the solutions of Hamilton-Jacobi equations or inequalities, the state feedback $H_{\infty}$ control [10] and the output feedback $H_{\infty}$ control [11], [12] were discussed, respectively. The reference [13] first systematically studied the stochastic $H_{\infty}$ control of linear Itô systems, where a stochastic bounded real lemma was obtained in terms of linear matrix inequalities (LMIs), and the dynamic output feedback problem was also discussed. At the same time, the state feedback $H_{\infty}$ control for linear time-invariant Itô systems with state-dependent noise was also discussed in [14] based on stochastic differential games. We refer the reader to the monograph [15] for the early development of the $H_{\infty}$ control theory of linear Itô systems. Besides $H_{\infty}$ estimation, the extended Kalman filtering for stochastic Itô systems was also discussed in [16]. By means of completing the squares and stochastic dynamic programming, the state-feedback $H_{\infty}$ control and robust $H_{\infty}$ filtering were extensively investigated in [17] and [18] for affine stochastic Itô systems. It can be found that, starting from 1998, stochastic $H_{\infty}$ control has become a popular research field [19], and it has been extended to other stochastic systems such as systems with Markovian jumps [20, 21, 22], Poisson jumps [23], and Lévy processes [24].

With the development of the $H_{\infty}$ control theory of continuous-time Itô systems, discrete-time $H_{\infty}$ control has also attracted considerable attention. For deterministic linear systems, Basar and Bernhard [2] developed the discrete-time counterpart of the continuous-time $H_{\infty}$ design. Based on the dissipation inequality, differential games, and LaSalle's invariance principle, Lin and Byrnes [25] developed the $H_{\infty}$ control theory for general nonlinear discrete-time deterministic systems. Bouhtouri, Hinrichsen and Pritchard [26] first studied the $H_{\infty}$-type control for discrete-time linear stochastic systems with multiplicative noise. The infinite horizon mixed $H_2/H_{\infty}$ control for discrete-time stochastic systems with state- and disturbance-dependent noise can be found in [27], where it turned out that the mixed $H_2/H_{\infty}$ controller design is associated with the solvability of four coupled matrix-valued equations. For the $H_{\infty}$ disturbance attenuation problem of linear discrete-time multiplicative noise systems with Markov jumps, we refer the reader to [28]. Berman and Shaked [29] first explored the general discrete-time stochastic $H_{\infty}$ control problem and presented a bounded real lemma in terms of a Hamilton-Jacobi inequality, where the Hamilton-Jacobi inequality contains the supremum of some conditional mathematical expectation. As an application, for a class of discrete-time time-varying nonlinear stochastic systems with multiplicative noise, a relatively easily testable criterion was derived by taking the Lyapunov function to be a quadratic form. In [30], we considered the finite horizon $H_{\infty}$ control for the following affine nonlinear system

(1)

The references [31] and [32] discussed the filtering design for some uncertain discrete-time affine nonlinear systems with time delays by means of Hamilton-Jacobi inequalities or matrix inequalities.

However, there are still some essential difficulties in nonlinear stochastic control design due to the following reasons:

Even for affine nonlinear discrete-time multiplicative-noise systems (a special class of nonlinear stochastic systems), in order to separate the control input $u_k$ from the unknown exogenous disturbance $v_k$, the Lyapunov candidate function has to be chosen as a quadratic function, which often leads to conservative results [19].

Because the Hamilton-Jacobi inequality depends on the supremum of a conditional mathematical expectation (see (8) of [29]) or on the mathematical expectation of the state trajectory (see (30) of [30]), the resulting controller is not easily constructed. So the general discrete-time nonlinear stochastic $H_{\infty}$ theory merits further study, and new methods should be introduced into this field.

Even for the affine nonlinear system (1), as pointed out in [19], the completing-the-squares technique is no longer applicable except for special quadratic Lyapunov functions. Different from the linear case, a nonlinear discrete-time system cannot be iterated explicitly. In addition, different from Itô systems, where an infinitesimal generator can be used, how to give practical criteria for general nonlinear discrete-time stochastic systems that do not depend on the mathematical expectation of the trajectory is a challenging problem.

This paper makes a contribution to the $H_{\infty}$ theory of general nonlinear discrete-time stochastic systems. It is well known that the bounded real lemma plays a key role in the study of $H_{\infty}$ control, so we first establish a bounded real lemma for the following discrete-time nonlinear stochastic state-disturbance system

$$x_{k+1}=f(x_k,v_k)+g(x_k,v_k)\,w_k,\qquad z_k=h(x_k,v_k),\qquad k\in\mathcal{N}, \qquad (2)$$

where $f$, $g$, and $h$ are measurable vector/matrix-valued functions, and $x_k$, $v_k$, and $z_k$ represent, respectively, the system state, the external disturbance, and the regulated output, all with appropriate dimensions. Throughout this paper, $\{w_k\}_{k\geq 0}$ is a sequence of independent $d$-dimensional random variables with an identical distribution defined on the complete probability space $(\Omega,\mathcal{F},P)$, and the corresponding filtration is $\{\mathcal{F}_k\}_{k\geq 0}$, where $\mathcal{F}_k$ is the $\sigma$-field generated by $\{w_0,\ldots,w_k\}$. Based on the obtained bounded real lemma, we turn our attention to the $H_{\infty}$ control of the following controlled system

$$x_{k+1}=\bar{f}(x_k,u_k,v_k)+\bar{g}(x_k,u_k,v_k)\,w_k,\qquad k\in\mathcal{N}, \qquad (3)$$

where $\bar{f}$ and $\bar{g}$ are measurable vector-valued functions and $u_k$ is the control input. The sequences $v=\{v_k\}$ and $u=\{u_k\}$ are adapted with respect to $\{\mathcal{F}_k\}$.
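To fix ideas, the following minimal Python sketch simulates one sample path of a scalar instance of system (2) in the reconstructed form $x_{k+1}=f(x_k,v_k)+g(x_k,v_k)w_k$, $z_k=h(x_k,v_k)$; the particular maps f, g, h and the disturbance sequence below are hypothetical illustrations, not taken from this paper.

```python
import numpy as np

# Minimal simulation of a scalar instance of system (2) in the reconstructed form
#   x_{k+1} = f(x_k, v_k) + g(x_k, v_k) w_k,   z_k = h(x_k, v_k),
# with i.i.d. standard normal noise {w_k}; f, g, h are hypothetical examples.
rng = np.random.default_rng(0)
f = lambda x, v: 0.5 * np.tanh(x) + 0.2 * v      # drift, contractive in x
g = lambda x, v: 0.1 * x                         # multiplicative noise gain
h = lambda x, v: x                               # regulated output

def simulate(x0, v_seq, rng):
    """One sample path of states and outputs for a given disturbance sequence."""
    x, xs, zs = x0, [], []
    for v in v_seq:
        xs.append(x)
        zs.append(h(x, v))
        x = f(x, v) + g(x, v) * rng.standard_normal()
    return np.array(xs), np.array(zs)

v_seq = 0.5 ** np.arange(50)                     # an l^2 disturbance sequence
xs, zs = simulate(1.0, v_seq, rng)
print(xs[-1], zs[:3])
```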

For affine systems with multiplicative noise, when the method of completing the squares is used as in [29], it is usually assumed that the Lyapunov function $V$ is quadratic or twice differentiable so that Taylor's expansion can be applied; see [17] and [31]. The main purpose of those assumptions is to separate the disturbance $v_k$ from the other variables (e.g., $x_k$ or $u_k$). The same difficulty, which is always the main one, also arises in solving the $H_{\infty}$ problems of the stochastic nonlinear systems (2) and (3). Concretely, for system (2), separating $v_k$ from $x_k$ is the key to obtaining important results such as the well-known bounded real lemma; for system (3), separating $u_k$ from $x_k$ and $v_k$ is likewise the key problem in designing an $H_{\infty}$ controller. In order to overcome these variable-separation difficulties, we find that the following property of a convex function $V$,

$$V(\lambda a+(1-\lambda)b)\leq\lambda V(a)+(1-\lambda)V(b),\qquad \lambda\in[0,1],$$

can be used in the analysis of $H_{\infty}$ control problems to separate $v_k$ from $x_k$ or $u_k$. Based on this idea, we introduce a convex analysis method to discuss the $H_{\infty}$ control problems of systems (2) and (3).
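As a quick numerical illustration of this separation idea, the following sketch verifies, for a hypothetical convex function V, the inequality $V(a+b)\le\lambda V(a/\lambda)+(1-\lambda)V(b/(1-\lambda))$ for $\lambda\in(0,1)$, which follows from convexity by writing $a+b=\lambda(a/\lambda)+(1-\lambda)(b/(1-\lambda))$ and which is the mechanism that separates one variable from another.

```python
import numpy as np

# Check of the separation inequality implied by convexity: writing
# a + b = lam*(a/lam) + (1-lam)*(b/(1-lam)) gives, for convex V and lam in (0,1),
#   V(a + b) <= lam*V(a/lam) + (1-lam)*V(b/(1-lam)).
# The convex positive function V below is a hypothetical example.
rng = np.random.default_rng(1)
V = lambda x: np.sum(x**4) + np.sum(np.abs(x))

for _ in range(1000):
    a, b = rng.standard_normal(3), rng.standard_normal(3)
    lam = rng.uniform(0.01, 0.99)
    lhs = V(a + b)
    rhs = lam * V(a / lam) + (1 - lam) * V(b / (1 - lam))
    assert lhs <= rhs + 1e-9, (lhs, rhs)
print("separation inequality verified on 1000 random samples")
```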

This paper is organized as follows: In Section 2, the stability theory for discrete-time nonlinear stochastic systems and some martingale properties, which will be used in the discussion of $H_{\infty}$ control, are reviewed. In Section 3, the internal stability and external stability of system (2) are discussed; based on the convexity of the auxiliary Lyapunov function, the bounded real lemma for system (2) is obtained. In Section 4, the state-feedback $H_{\infty}$ control is discussed via the convex analysis method, and a state-feedback $H_{\infty}$ controller is designed. In Section 5, numerical simulations are given to show the validity of the obtained results.

Throughout this paper, we adopt the following notations:

$\mathcal{R}$: the set of all real numbers; $\mathcal{R}^+$: the set of all positive real numbers including $0$; $\mathcal{R}^n$: the $n$-dimensional real vector space with the norm $\|x\|=\big(\sum_{i=1}^{n}x_i^2\big)^{1/2}$ for $x=(x_1,\ldots,x_n)^T$; $\mathcal{R}^{m\times n}$: the set of all $m\times n$ real matrices; $\mathcal{N}$: the set of all positive integers including $0$; $\dim(x)$: the dimension of the vector $x$; $\mathcal{S}_n$: the set of all $n\times n$ symmetric matrices; $\mathcal{S}_n^+$: the set of all real positive definite symmetric matrices; $\lambda_{\max}(A)$ ($\lambda_{\min}(A)$): the maximum (minimum) eigenvalue of $A\in\mathcal{S}_n$; $A\geq 0$ ($A>0$): the symmetric matrix $A$ is positive semi-definite (definite); $L^2(\Omega,\mathcal{F}_k,P)$: the space of $\mathcal{F}_k$-measurable second-order moment random variables with the norm $\|\xi\|_2=\big(E\|\xi\|^2\big)^{1/2}$; $l^2(\mathcal{N},\mathcal{R}^n)$: the space of stochastic sequences $v=\{v_k\}_{k\in\mathcal{N}}$ with the norm

$$\|v\|_{l^2}=\Big(\sum_{k=0}^{\infty}E\|v_k\|^2\Big)^{1/2}<\infty,$$

where $v_k\in L^2(\Omega,\mathcal{F}_k,P)$, $k\in\mathcal{N}$.
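As an aside, the $l^2$-norm above can be approximated by Monte Carlo simulation; the following sketch does so for a hypothetical adapted sequence $v_k=0.8^k w_k$ and compares the estimate with the closed-form value.

```python
import numpy as np

# Monte Carlo estimate of the l^2-norm ||v|| = (sum_k E||v_k||^2)^(1/2) for the
# hypothetical adapted sequence v_k = 0.8^k w_k with i.i.d. standard normal w_k.
rng = np.random.default_rng(2)
K, M = 60, 20000                                 # horizon truncation, sample paths
w = rng.standard_normal((M, K))
v = 0.8 ** np.arange(K) * w                      # row-wise: v_k = 0.8^k w_k
norm_est = np.sqrt(np.mean(np.sum(v**2, axis=1)))
norm_exact = np.sqrt(sum(0.8 ** (2 * k) for k in range(K)))   # since E w_k^2 = 1
print(norm_est, norm_exact)                      # both close to 1.667
```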

2 Preliminaries

Throughout this paper, let $(\Omega,\mathcal{F},P)$ be a complete probability space and $\{w_k\}_{k\geq 0}$ an $\mathcal{R}^d$-valued independent random variable sequence. Denote by $\mathcal{N}_0$ the set of events of zero probability. Let $\mathcal{F}_k$ be the $\sigma$-field generated by $\{w_0,\ldots,w_k\}$, i.e.,

$$\mathcal{F}_k=\sigma(w_0,\ldots,w_k)\vee\mathcal{N}_0,$$

and $\mathcal{F}_{-1}=\{\emptyset,\Omega\}\vee\mathcal{N}_0$ ($\emptyset$ is the empty set, $\Omega$ is the sample space). Obviously, $\mathcal{F}_k\subseteq\mathcal{F}_{k+1}$ for every $k$. Now we first review some results on the conditional expectation that will be used later. The following lemma is a special case of Theorem 6.4 in [33].

Lemma 2.1.

If the $\mathcal{R}^m$-valued random variable $\xi$ is independent of the $\sigma$-field $\mathcal{G}\subseteq\mathcal{F}$, and the $\mathcal{R}^n$-valued random variable $\eta$ is $\mathcal{G}$-measurable, then, for every bounded measurable function $\varphi:\mathcal{R}^m\times\mathcal{R}^n\to\mathcal{R}$,

$$E[\varphi(\xi,\eta)\,|\,\mathcal{G}]=\Phi(\eta)\ \ \text{a.s.},\qquad\text{where}\ \Phi(y)=E[\varphi(\xi,y)].$$
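A quick numerical sanity check of Lemma 2.1, with hypothetical choices of $\varphi$ and $\xi$ and a fixed $\mathcal{G}$-measurable value of $\eta$, is as follows.

```python
import numpy as np

# Numerical illustration of Lemma 2.1: for xi independent of G and eta
# G-measurable, E[phi(xi, eta) | G] = Phi(eta), where Phi(y) = E[phi(xi, y)].
# Here phi(x, y) = sin(x + y), xi ~ N(0, 1), and eta is fixed at a sample value.
rng = np.random.default_rng(3)
xi = rng.standard_normal(200000)                 # independent of G
eta = 2.0                                        # a fixed G-measurable value
phi = lambda x, y: np.sin(x + y)

lhs = np.mean(phi(xi, eta))                      # E[phi(xi, eta) | eta = 2.0]
Phi = lambda y: np.exp(-0.5) * np.sin(y)         # closed form of E sin(N(0,1) + y)
print(lhs, Phi(eta))                             # the two values nearly coincide
```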

We first review the stability theory for the following discrete-time stochastic system

$$x_{k+1}=f(x_k,w_k),\qquad k\in\mathcal{N}, \qquad (4)$$

where $f$ is a measurable function with $f(0,w)\equiv 0$. From the definition of system (4), it is easy to see that the solution $\{x_k\}$ is adapted. Denote by $x_k(x_s)$ or $x_k^{x_s}$ the solution of (4) at time $k$ with the initial state $x_s$ starting at time $s$, where $k\geq s$.

Definition 2.1.

The equilibrium solution of (4) is said to be

(1) almost surely asymptotically stable, if, for all $x_0\in\mathcal{R}^n$,

(5)

(2) asymptotically mean-square stable, if

(6)
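The following Monte Carlo sketch illustrates Definition 2.1(2) on a hypothetical scalar multiplicative-noise system, for which the second moment contracts by the factor $0.6^2+0.3^2=0.45$ at each step.

```python
import numpy as np

# Monte Carlo check of asymptotic mean-square stability (Definition 2.1(2)) for
# the hypothetical scalar system x_{k+1} = 0.6 x_k + 0.3 x_k w_k: since
# E x_{k+1}^2 = (0.6^2 + 0.3^2) E x_k^2 = 0.45 E x_k^2, the second moment decays.
rng = np.random.default_rng(4)
M, K = 50000, 30                                 # sample paths, horizon
x = np.ones(M)                                   # x_0 = 1 on every path
for k in range(K):
    x = 0.6 * x + 0.3 * x * rng.standard_normal(M)
print(np.mean(x**2))                             # ≈ 0.45**30, essentially zero
```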

The following lemma is a LaSalle-type theorem for the discrete-time stochastic system (4); see [34] for details.

Lemma 2.2.

Suppose $\varphi$ is a positive function and $V_k$, $k\in\mathcal{N}$, are Lyapunov functions satisfying

(7)

and

(8)

Here $\{x_k\}$ is the solution sequence of (4). Then

and

Under the condition that $\varphi$ is proper and continuous positive definite, the following corollary can be obtained directly from the LaSalle-type theorem.

Corollary 2.1.

Suppose there exist a proper and continuous positive definite function $\varphi$ and a Lyapunov function sequence $\{V_k\}$ satisfying the conditions of Lemma 2.2. Then the solution of (4) satisfies $\lim_{k\to\infty}x_k=0$ almost surely.

3 A discrete-time version of the bounded real lemma

Now we consider the discrete-time system (2), where $x_k$ is the solution of (2) with the initial state $x_0$, $v_k$ is the exogenous disturbance to be rejected, and $z_k$ is the regulated output. Without loss of generality, we also assume that $x=0$ is the equilibrium of (2) for $v\equiv 0$, i.e., the right-hand side of (2) vanishes at $x=0$, $v=0$. In this section, we denote by $x_k(x_s,v)$ or $x_k^{x_s,v}$ the solution of (2) with the initial state $x_s$ and external disturbance $v$ starting at time $s$, and denote the controlled output as $z_k(x_s,v)$ or $z_k^{x_s,v}$ corresponding to $x_k(x_s,v)$ for $k\geq s$. Throughout the paper, we assume that all random variables such as $x_k$ and $v_k$ are elements of $L^2(\Omega,\mathcal{F}_k,P)$, i.e., $E\|x_k\|^2<\infty$ and $E\|v_k\|^2<\infty$.

Definition 3.1.

The system (2) is called internally stable if there exists a constant $c>0$ such that, for every initial state $x_0$,

$$\sum_{k=0}^{\infty}E\|x_k(x_0)\|^2\leq c\,E\|x_0\|^2,$$

where $x_k(x_0)$ denotes the solution of (2) with $v\equiv 0$.

For every positive function $V$ and disturbance $v$, we define the difference operator of system (2) as

$$\Delta_k V(x,v)=E\big[V\big(f(x,v)+g(x,v)w_k\big)\big]-V(x).$$

Because $\{w_k\}$ is independently and identically distributed,

$$E\big[V\big(f(x,v)+g(x,v)w_k\big)\big]=E\big[V\big(f(x,v)+g(x,v)w_0\big)\big],$$

i.e., the difference operator is identical for all $k$, and we write it simply as $\Delta V$. Specially, for $v=0$, the operator reduces to

$$\Delta V(x,0)=E\big[V\big(f(x,0)+g(x,0)w_0\big)\big]-V(x).$$
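The difference operator can be evaluated by Monte Carlo simulation; the sketch below does so under the reconstructed form of system (2), with hypothetical $f$, $g$, and a quadratic candidate $V$.

```python
import numpy as np

# Monte Carlo evaluation of the difference operator of system (2),
#   Delta V(x, v) = E[ V(f(x, v) + g(x, v) w_0) ] - V(x),
# under the reconstructed form of (2); f, g and the candidate V are hypothetical.
rng = np.random.default_rng(5)
f = lambda x, v: 0.5 * x + 0.2 * v
g = lambda x, v: 0.1 * x
V = lambda x: x**2                               # a positive quadratic candidate

def delta_V(x, v, M=100000):
    w = rng.standard_normal(M)                   # i.i.d. samples of w_0
    return np.mean(V(f(x, v) + g(x, v) * w)) - V(x)

print(delta_V(1.0, 0.0))                         # ≈ 0.25 + 0.01 - 1 = -0.74 < 0
```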

Lemma 3.1.

Suppose there exist a positive function $V$ and two positive constants $c_1$ and $c_2$ such that

(9)
(10)

then system (2) is internally stable. Moreover, if $V$ is positive definite, then for every $x_0\in\mathcal{R}^n$, we have

(11)
Proof.

Since $x_k$ is $\mathcal{F}_{k-1}$-measurable and $w_k$ is independent of $\mathcal{F}_{k-1}$, by Lemma 2.1, we have

By condition (9), it follows that

For every $T\in\mathcal{N}$, taking the summation on both sides of the above inequality for $k$ from $0$ to $T$, we obtain that

Since $V$ is a positive function, the above inequality yields

(12)

In view of (10), letting $T\to\infty$ on the left-hand side of (12), we have

(13)

Hence the internal stability follows from (13).

As for (11), it can be obtained directly from Lemma 2.2 and the positive definiteness of the function $V$. ∎

Now we show the converse of Lemma 3.1, which is characterized by the following lemma.

Lemma 3.2.

Suppose system (2) is internally stable. Then there exists a positive function $V$ satisfying (9) and (10).

Proof.

For every $k\in\mathcal{N}$, define

(14)

Note that, for every $k$, the following fact holds:

which implies that

Using the above property for the solution of system (2), we have

Hence, we obtain the following equations for all $k$:

(15)

Below, we prove that, for any $k$, the following holds:

Indeed, since the noise variables are independently and identically distributed, the shifted noise sequence has the same distribution as the original one, which implies that the corresponding solutions are also identically distributed. So

Similarly, the following relationship holds:

By the definition of $V_k$ in (14), we have

which implies that $V_k$ is identical for all $k$. Therefore, if we let

(16)

then, by the above discussion, it follows that $V_k=V$ for all $k$. Thus, equation (15) reduces to

(17)

Taking $v\equiv 0$, we have proved that $V$ defined by (16) satisfies (9).

That $V$ satisfies (10) follows directly from the internal stability of system (2) and Definition 3.1. ∎

By equations (16) and (17), we have the following corollary.

Corollary 3.1.

Suppose system (2) is internally stable. Then there exists a positive function $V$ satisfying (17). Moreover, the following also holds:

(18)
Proof.

Obviously, it only remains to show (18). By the definition of $V$ in (16), we have

In view of this fact, (18) is proved. ∎

Combining Lemma 3.1 and Lemma 3.2, the following Proposition 3.1 is obtained, which presents a necessary and sufficient condition for the internal stability of system (2). Denote

(19)
Proposition 3.1.

System (2) is internally stable if and only if there exist a positive function $V$ and a positive constant $c$ such that

(20)
(21)
Definition 3.2.

The system (2) is said to be externally stable, or $l^2$ input-output stable, if, for every disturbance $v$ with $\|v\|_{l^2}<\infty$,

and there exists a positive real number $\gamma$ such that

or equivalently,

(22)
Remark 3.1.

Suppose $\gamma$ is a given positive real number. If the inequality in Definition 3.2, or equivalently (22), holds, then system (2) is also said to have $l^2$-gain less than or equal to $\gamma$ [25]. Moreover, suppose that system (2) is externally stable. Define an operator

by

then this operator, denoted by $\tilde{L}$, is called the perturbation operator of (2). Its norm is defined as

(23)

So, on the one hand, $\|\tilde{L}\|$ is a measure of the $l^2$-gain of system (2); on the other hand, it is also a measure of the worst-case effect that the stochastic disturbance $v$ may have on the regulated output $z$. Therefore, it is important to find a way to determine or estimate the norm $\|\tilde{L}\|$.
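In practice, a lower estimate of $\|\tilde{L}\|$ can be obtained by sampling disturbances; the following sketch does this for the hypothetical scalar instance of system (2) used earlier, with zero initial state as in (23).

```python
import numpy as np

# Monte Carlo lower estimate of the perturbation operator norm (23): the ratio
# ||z|| / ||v|| is evaluated for a few sampled l^2 disturbances applied to the
# hypothetical scalar instance of system (2), with zero initial state as in (23).
rng = np.random.default_rng(6)
f = lambda x, v: 0.5 * x + 0.2 * v
g = lambda x, v: 0.1 * x
h = lambda x, v: x
K, M = 40, 5000                                  # horizon truncation, sample paths

def gain_ratio(v_seq):
    x, z2 = np.zeros(M), 0.0
    for v in v_seq:
        z2 += np.mean(h(x, v) ** 2)              # accumulates sum_k E||z_k||^2
        x = f(x, v) + g(x, v) * rng.standard_normal(M)
    return np.sqrt(z2 / np.sum(v_seq ** 2))

candidates = [0.9 ** np.arange(K), np.r_[1.0, np.zeros(K - 1)]]
print(max(gain_ratio(v) for v in candidates))    # a lower bound for the norm
```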

Proposition 3.2.

Suppose that, for system (2), there exist a convex positive function $V$ and a real number $\gamma>0$ such that

(24)
(25)

where $\lambda$ is defined by

(26)

Then $\|\tilde{L}\|\leq\gamma$. Moreover, if $V$ satisfies (10), then system (2) is also internally stable.

Proof.

Let $\lambda$ be as defined in (26); then $\lambda\in(0,1)$. By the convexity of $V$, it follows that

By conditions (24) and (25), it follows that

Denote by $x_k$ the solution of (2) with zero initial state under the disturbance $v$, and let $z_k$ be the corresponding output. Then we have

Since $x_k$ and $v_k$ are $\mathcal{F}_{k-1}$-measurable, by Lemma 2.1, the above inequality can also be written as

i.e.

Taking the mathematical expectation on both sides of the above inequality, we have

(27)

For every $T\in\mathcal{N}$, taking the summation on both sides of (27) for $k$ from $0$ to $T$, we have

Since $V$ is positive and the initial state is zero, we obtain that

Letting $T\to\infty$, we get

This proves that system (2) is externally stable and $\|\tilde{L}\|\leq\gamma$.

Now we prove that system (2) is also internally stable. Since

this implies

By (24) and the above inequality, we obtain

i.e.

By Proposition 3.1, system (2) is internally stable. ∎
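To illustrate how Proposition 3.2 can serve as a certificate, the following sketch checks a dissipation condition in the spirit of (24) on a grid, for the hypothetical scalar system above and a quadratic $V(x)=px^2$; since the exact form of (24) is not reproduced here, the inequality below is an assumed stand-in consistent with the proof.

```python
import numpy as np

# Grid check of a dissipation condition in the spirit of (24) for the
# hypothetical scalar system used above with V(x) = p*x^2: verify that
#   Delta V(x, v) + ||z||^2 - gamma^2 ||v||^2 <= 0  for all grid points (x, v),
# which, by Proposition 3.2, would certify an l^2-gain of at most gamma.
f = lambda x, v: 0.5 * x + 0.2 * v
g = lambda x, v: 0.1 * x
h = lambda x, v: x
p, gamma = 2.0, 0.8

def dissipation(x, v):
    # For V(x) = p*x^2 and E w_0 = 0, E w_0^2 = 1, E[V(f + g*w_0)] is closed form:
    EV_next = p * (f(x, v) ** 2 + g(x, v) ** 2)
    return EV_next - p * x**2 + h(x, v) ** 2 - gamma**2 * v**2

grid = np.linspace(-5.0, 5.0, 201)
vals = np.array([[dissipation(x, v) for v in grid] for x in grid])
print(vals.max())                                # = 0 (attained only at x = v = 0)
```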

Remark 3.2.

Denote by $\mathcal{V}$ the set of all positive convex functions defined on $\mathcal{R}^n$ satisfying (10) and

Define

(28)

From the proof of Proposition 3.2, we can see that $\|\tilde{L}\|\leq\gamma^{*}$. This can be used to estimate an upper bound of the operator norm $\|\tilde{L}\|$, though $\gamma^{*}$ given by (28) is not necessarily the best one. It is, however, the locally best one, because $V$ is confined to $\mathcal{V}$, which is a subset of the convex functions.
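Combining the grid certificate above with a bisection on $\gamma$ gives a rough numerical estimate of the bound $\gamma^{*}$ of (28), restricted here to the quadratic family $V(x)=px^2$, which is only a small subset of the admissible class $\mathcal{V}$; all dynamics remain hypothetical.

```python
import numpy as np

# Bisection sketch for the bound gamma* of (28), restricted to the quadratic
# family V(x) = p*x^2 (a small subset of the admissible convex class); the
# dynamics f, g, h are the same hypothetical maps as in the previous sketches.
f = lambda x, v: 0.5 * x + 0.2 * v
g = lambda x, v: 0.1 * x
h = lambda x, v: x
grid = np.linspace(-5.0, 5.0, 201)
X, Vd = np.meshgrid(grid, grid)                  # state / disturbance grid

def certifiable(gamma, ps=np.linspace(0.1, 5.0, 50)):
    """True if some V(x) = p*x^2 certifies l^2-gain <= gamma on the grid."""
    for p in ps:
        diss = (p * (f(X, Vd)**2 + g(X, Vd)**2) - p * X**2
                + h(X, Vd)**2 - gamma**2 * Vd**2)
        if diss.max() <= 1e-9:
            return True
    return False

lo, hi = 0.0, 5.0
for _ in range(30):                              # bisect on the certified gain level
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if certifiable(mid) else (mid, hi)
print(hi)                                        # estimate of gamma* over this family
```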

In order to derive the bounded real lemma for system (2), we introduce the following definition of convexity for vector-valued functions.

Definition 3.3.

Let $V:\mathcal{R}^n\to\mathcal{R}^+$ and $f:\mathcal{R}^m\to\mathcal{R}^n$. The vector-valued function $f$ is said to be convex with respect to $V$, or is called $V$-convex, if the composite function $V\circ f$ is convex, i.e., for every $x_1,x_2\in\mathcal{R}^m$ and $\lambda\in[0,1]$,

$$V\big(f(\lambda x_1+(1-\lambda)x_2)\big)\leq\lambda V\big(f(x_1)\big)+(1-\lambda)V\big(f(x_2)\big). \qquad (29)$$
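$V$-convexity can be probed numerically by random sampling of (29); in the sketch below (with hypothetical $V$ and candidate maps $f$), a linear map passes the test while a saturating map is rejected, though such sampling can only disprove, never prove, convexity.

```python
import numpy as np

# Random-sampling test of Definition 3.3: f is V-convex iff V(f(.)) satisfies
# inequality (29). V and the candidate maps f below are hypothetical; sampling
# can only disprove convexity (a passed test is evidence, not a proof).
rng = np.random.default_rng(7)
V = lambda x: np.abs(x)                          # convex positive function

def is_V_convex(f, trials=20000):
    for _ in range(trials):
        x1, x2 = rng.uniform(-3.0, 3.0, 2)
        lam = rng.uniform(0.0, 1.0)
        lhs = V(f(lam * x1 + (1 - lam) * x2))
        rhs = lam * V(f(x1)) + (1 - lam) * V(f(x2))
        if lhs > rhs + 1e-9:
            return False                         # violating sample found
    return True

print(is_V_convex(lambda x: 0.5 * x))            # linear map: True
print(is_V_convex(np.tanh))                      # saturating map: False
```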
Remark 3.3.

The definition of $V$-convexity can be seen as an extension of the logarithmic convexity used in [35].

In this paper, the following assumption is needed and will be used in the subsequent discussion.

($A_1$): For every $v$, the maps $x\mapsto f(x,v)$ and $x\mapsto g(x,v)$ are convex in the sense of Definition 3.3.

Lemma 3.3.

Suppose Assumption ($A_1$) holds and system (2) is internally stable. Then $V$ defined by (16) is a convex function.

Proof.

Let $x_k(x)$ denote the solution of system (2) starting at time $0$ with initial state $x$. Since, for every $\lambda\in[0,1]$ and $x_1,x_2\in\mathcal{R}^n$,

applying the convexity of $f$ and $g$ in the sense of Definition 3.3, we have

and

Now we use induction to prove that, for all $k$, the following two inequalities are true:

and

(30)

Firstly, in the base case, the discussion just above shows that both inequalities are true.

Suppose the two inequalities are true for some $k$. Then, for $k+1$, keeping in mind that $V$ is convex, we have

Similarly, we can prove that (30) is true for $k+1$. By induction, both inequalities are true for all $k$.

For every $T$, taking the summation on both sides of the first inequality for $k$ from $0$ to $T$, we obtain