# Control for Networked Control Systems with Remote and Local Controllers over an Unreliable Communication Channel

###### Abstract

This paper is concerned with the optimal control and stabilization problems for networked control systems (NCSs) in which a remote controller and a local controller operate the linear plant simultaneously. The main contributions are twofold. Firstly, a necessary and sufficient condition for the finite horizon optimal control problem is given in terms of two Riccati equations. Secondly, it is shown that the system without additive noise is stabilizable in the mean square sense if and only if two algebraic Riccati equations admit unique solutions, and a sufficient condition is given for the boundedness in the mean square sense of the system with additive noise. Numerical examples based on an unmanned aerial vehicle model illustrate the effectiveness of the proposed algorithm.

Xiao Liang, Juanjuan Xu

School of Control Science and Engineering, Shandong University, Jinan, Shandong, P.R.China

Key words: Networked control systems; optimal control; stabilization; local control and remote control.

^{1}footnotetext: This work was submitted to Automatica on July 25, 2017; this is the first version, submitted for review. This work is supported by the Taishan Scholar Construction Engineering of the Shandong Government and the National Natural Science Foundation of China (61120106011, 61633014, 61403235, 61573221). Corresponding author: Juanjuan Xu.

## 1 Introduction

Networked control systems (NCSs) are control systems consisting of controllers, sensors and actuators that are spatially distributed and coordinated via digital communication networks [Zhang et al., 2001]-[Hespanha et al., 2007]. Recently, NCSs have received considerable interest due to their applications in diverse areas, such as automated highway systems, unmanned aerial vehicles and large manufacturing systems [Horowitz et al., 2007]-[Faezipour et al., 2012]. Compared with classical feedback control systems, NCSs offer substantial advantages, including low cost, reduced power requirements, simple maintenance and high reliability. However, the packet dropouts that occur in the communication channels of NCSs raise challenging problems [Wu et al., 2007]-[Ahmadi et al., 2014]. It is therefore of great significance to study NCSs with unreliable communication channels in which packet dropouts happen.

The packet dropout problem has been studied in [Nahi, 1969]-[Xiong et al., 2007]. [Sinopoli et al., 2004] introduced the Kalman filter with intermittent observations, where the optimal estimator is conditioned on the arrival process. [Qi et al., 2016] derived the optimal estimator without conditioning on the arrival process and obtained the optimal controller for systems subject to state packet dropout. In [Gupta et al., 2007], the optimal linear quadratic Gaussian control for systems involving packet dropout was studied by decomposing the problem into a standard LQR state-feedback controller design together with an optimal encoder-decoder design. The stabilization problem was investigated in [Xiong et al., 2007] for NCSs with packet dropout. Nevertheless, the aforementioned works involve only a single controller.

Inspired by the previous work [Ouyang et al., 2016], the NCSs considered in this paper are depicted in Fig. 1 and are composed of a plant, a local controller, a remote controller and an unreliable communication channel. The state can be perfectly observed by the local controller. The state is then sent to the remote controller via an unreliable communication channel in which packet dropout occurs with a given probability. We define the observed signal of the remote controller accordingly. When the remote controller observes the signal, an acknowledgement is sent from the remote controller to the local controller indicating whether the state was received. Hence, the local controller can observe the signal as well. Neither controller performs its control action until it observes the signal. It is stressed that the remote control action is not available to the local controller at the same time k. Besides, the channels from the controllers to the plant are assumed to be perfect. The aim of the optimal control is to minimize the quadratic performance cost of the NCSs and to stabilize the linear plant.

The NCS model stems from the increasing number of applications that call for remote control of objects over Internet-type or wireless networks, where the communication channels are prone to failure. The local controller can be an integrated chip on an unmanned aerial vehicle (UAV) that implements moderate control and has poor transmission capability, while the remote controller can be a ground-control center equipped with complete communication facilities and capable of powerful transmission. Therefore, the communication channel from the local controller to the remote controller is prone to failure, whereas the transmission channel from the remote controller to the local controller is typically reliable.

For two decision-makers, the general control strategies are the Nash equilibrium and the Stackelberg strategy; see [Sheng et al., 2014]-[Mehraeen et al., 2013]. The relationship between Nash equilibrium strategies and the finite horizon control of time-varying stochastic systems subject to Markov jump parameters and multiplicative noise has been studied in [Sheng et al., 2014]. [Kandil et al., 1993] investigated the necessary conditions for the Nash game in terms of two coupled and nonsymmetric Riccati equations. [Simaan et al., 1973] investigated the properties of the Stackelberg solution in static and dynamic nonzero-sum two-player games. In [Xu et al., 2007], the optimal open-loop Stackelberg strategy was designed for a two-player game in terms of three decoupled and symmetric Riccati equations. For the Nash equilibrium, each controller must have access to the optimal control strategy of the other, which differs from the setting of this paper. Although the Stackelberg strategy is an available method for two decision-makers, it assumes that one player is capable of announcing his strategy before the other, which is not applicable in this work. Due to the asymmetric information of the remote controller and the local controller, the analysis and synthesis of the optimal control remain challenging. Moreover, one controller cannot obtain the current action of the other, which makes the optimal control problem more difficult. More recently, [Ouyang et al., 2016] studied an NCS model similar to that of this paper. However, in [Ouyang et al., 2016], only a sufficient condition was given for the optimal control, and the stabilization problem, which is of great significance in applications, was not addressed.

In this paper, we shall study the optimal control and stabilization problems for NCSs with remote and local controllers over an unreliable communication channel. The key technique is to apply Pontryagin's maximum principle to develop a direct approach based on the solution to the forward backward stochastic difference equations (FBSDEs), which leads to a non-homogeneous relationship between the state estimation and the costate. The main contributions of this paper are summarized as follows: (1) An explicit solution to the FBSDEs is presented with Pontryagin's maximum principle. Using this solution, a necessary and sufficient condition for the finite horizon optimal control problem is given in terms of the solutions to two Riccati equations. (2) For the stochastic systems without additive noise, a necessary and sufficient condition for stabilizing the systems in the mean-square sense is developed. For the stochastic systems with additive noise, a sufficient condition is derived for the boundedness of the systems in the mean-square sense.

The rest of the paper is organized as follows. The finite horizon optimal control problem is studied in Section II. In Section III, the infinite horizon optimal control and the stabilization problem are solved. Numerical examples about the unmanned aerial vehicle are given in Section IV. The conclusions are provided in Section V. Relevant proofs are detailed in Appendices.

Notation: denotes the -dimensional Euclidean space. denotes the identity matrix of appropriate dimension. denotes the transpose of the matrix . denotes the -algebra generated by the random variable . means that is a positive semi-definite (positive definite) matrix. denotes the mathematical expectation operator. represents the trace of the matrix .

## 2 Finite Horizon Case

### 2.1 Problem Formulation

Consider the discrete-time system with two controllers as shown in Fig.1. The corresponding linear plant is given by

(1) |

where is the state, is the local control, is the remote control, is the input noise and , , are constant matrices with appropriate dimensions. The initial state and are Gaussian and independent, with mean and covariance respectively.

As can be seen in Fig.1, let be an independent identically distributed (i.i.d.) Bernoulli random variable describing whether the state signal is transmitted through the unreliable communication channel, i.e., denotes that the state packet has been successfully delivered, and signifies the dropout of the state packet. Then,

(2) |

Observing Fig.1, the remote controller can obtain the signals . Accordingly, we have that is -measurable. The local controller has access to the states and the signals . In view of (1), we have that is -measurable. For simplicity, we denote and as and respectively.
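The channel and information structure described above can be sketched in a few lines of Python (all names here are our own illustrative choices, not the paper's notation): a Bernoulli variable decides whether the state packet reaches the remote controller, and the acknowledgement makes that variable known to the local controller as well.

```python
import random

def channel_step(x_k, p, rng):
    """Send state x_k over a Bernoulli erasure channel.

    gamma = 1: packet delivered, the remote controller sees x_k.
    gamma = 0: packet dropped, the remote controller sees nothing (None).
    The acknowledgement makes gamma known to the local controller too.
    """
    gamma = 0 if rng.random() < p else 1
    y_k = x_k if gamma == 1 else None  # remote observation
    return gamma, y_k

rng = random.Random(0)
p = 0.3  # dropout probability (illustrative value)

remote_info, local_info = [], []
for k in range(5):
    x_k = float(k)                    # stand-in for the true state
    gamma, y = channel_step(x_k, p, rng)
    remote_info.append((gamma, y))    # remote side: gamma_k and y_k
    local_info.append((gamma, x_k))   # local side: gamma_k and the exact state
```

Note that the local side always keeps the exact state, while the remote side keeps only what survives the channel; this asymmetry is what makes the two controllers' information sets differ.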

The associated performance index for system (2.1) is given by

(3) |

where , , and are positive semi-definite matrices. takes the mathematical expectation over the random process , and the random variable .

Thus, the optimal control strategies to be addressed are stated as follows:

### 2.2 Optimal Controllers Design

Following the results in [Zhang et al., 2012], we apply Pontryagin’s maximum principle to the system (2.1) with the cost function (3) to yield the following costate equations:

(4) | ||||

(5) | ||||

(6) | ||||

(7) |

where is the costate.

It is noted that the key to obtain the optimal controllers is to solve the costate equations (4)-(7). To this end, we define the following two Riccati equations:

(8) | ||||

(9) |

where

(10) | ||||

(11) | ||||

(12) | ||||

(13) | ||||

(14) |

with terminal values .

It can be observed that equation (8) for is a standard Riccati difference equation. Using the solutions to these two Riccati equations, we first propose the following lemma.
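For readers who wish to compute such recursions numerically, the sketch below iterates a standard Riccati difference equation backward from the terminal value, as noted for (8); it treats the classical single-controller case only, not the coupled pair (8)-(9) whose coefficient matrices (10)-(14) are specific to this paper. The plant matrices are illustrative.

```python
import numpy as np

def riccati_backward(A, B, Q, R, P_N, N):
    """Backward Riccati difference recursion
    P_k = Q + A'P_{k+1}A - A'P_{k+1}B (R + B'P_{k+1}B)^{-1} B'P_{k+1}A,
    returning the list [P_0, ..., P_N]."""
    P = [None] * (N + 1)
    P[N] = P_N
    for k in range(N - 1, -1, -1):
        Pn = P[k + 1]
        # feedback gain K_k = (R + B'P_{k+1}B)^{-1} B'P_{k+1}A
        K = np.linalg.solve(R + B.T @ Pn @ B, B.T @ Pn @ A)
        P[k] = Q + A.T @ Pn @ A - A.T @ Pn @ B @ K
    return P

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # double-integrator-like plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]]); N = 50
P = riccati_backward(A, B, Q, R, np.zeros((2, 2)), N)
```

With a horizon of 50 steps the recursion has essentially reached its steady state, so the early matrices in the list are nearly identical.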

###### Lemma 1

###### Proof 1

The proof is put into Appendix A.

The optimal control strategies for and are given in the theorem below.

###### Theorem 1

###### Proof 2

The proof is put into Appendix B.

###### Remark 1

Based on the results in this paper, the following points are highlighted for comparison with [Ouyang et al., 2016]. (1) We study both the finite-horizon optimal control and the stabilization problems for the NCSs with remote and local controllers, while the stabilization problem is not considered in [Ouyang et al., 2016]. (2) The key technique herein is Pontryagin's maximum principle, which differs from the dynamic programming adopted in [Ouyang et al., 2016]. (3) The results obtained in this paper are necessary and sufficient, which solve the problem completely, while only a sufficient condition is given in [Ouyang et al., 2016]. (4) More specifically, the weighting matrices in the finite-horizon cost function are positive semi-definite, which is more general than the positive definite ones in [Ouyang et al., 2016].

## 3 Infinite Horizon Case

### 3.1 Problem Formulation

In this section, we will solve the infinite horizon optimal control and stabilization problem.

Since the additive noise is present, only a sufficient condition for the boundedness of the system in the mean square sense can be derived; see [Lin et al., 2017] and [Imer et al., 2006]. In other words, system (2.1) cannot be stabilized in the mean square sense due to the existence of the additive noise. In order to derive a necessary and sufficient condition for stabilization, we first discard the additive noise of the system (2.1). The stabilization problem for the system (2.1) itself will be discussed later.

Thus, the system (2.1) without the additive noise can be written as

(23) |

where the initial value is a Gaussian random vector with mean and covariance .

Let . We change the finite horizon cost function (3) to the infinite horizon cost function as follows:

(24) |

We start with the following definitions.

###### Definition 1

The system (23) with and is said to be asymptotically mean-square stable if for any initial values , the corresponding state satisfies
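Asymptotic mean-square stability in the sense of Definition 1 can be checked empirically by averaging the squared state norm over many sample paths. A hedged Python sketch of this check (the switched closed-loop matrices below are our own, chosen contractive so that mean-square stability holds; they are not derived from the paper's controllers):

```python
import numpy as np

def mean_square_norm(A_pair, x0, horizon, dropout_p, runs, seed=0):
    """Monte Carlo estimate of E[||x_k||^2] for x_{k+1} = A_k x_k,
    where A_k switches according to a Bernoulli dropout variable.
    A_pair = (A_received, A_dropped) are closed-loop matrices."""
    rng = np.random.default_rng(seed)
    A_rx, A_drop = A_pair
    total = np.zeros(horizon + 1)
    for _ in range(runs):
        x = x0.copy()
        total[0] += float(x @ x)
        for k in range(horizon):
            A = A_drop if rng.random() < dropout_p else A_rx
            x = A @ x
            total[k + 1] += float(x @ x)
    return total / runs

# illustrative closed-loop pair: both contractive, hence mean-square stable
A_rx = np.array([[0.5, 0.1], [0.0, 0.4]])
A_drop = np.array([[0.8, 0.0], [0.1, 0.7]])
ms = mean_square_norm((A_rx, A_drop), np.array([1.0, -1.0]), 60, 0.3, 200)
```

The averaged squared norm decays toward zero, which is exactly the behavior Definition 1 requires.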

###### Definition 2

Following [Huang et al., 2008], we make the following two assumptions:

###### Assumption 1

, and for some matrices .

###### Assumption 2

is observable.

The problem to be addressed in this section is stated below:

###### Problem 2

### 3.2 Solution to Problem 2

We now present the main results of this section.

###### Theorem 2

Under Assumptions 1 and 2, the system (23) is stabilizable in the mean-square sense if and only if there exist unique solutions and to the following Riccati equations, satisfying , and for any initial value :

(25) | ||||

(26) |

where

(27) | ||||

(28) | ||||

(29) | ||||

(30) | ||||

(31) |

In this case, the optimal controllers

(32) | ||||

(33) |

stabilize the system (23) in the mean square sense and minimize the cost function (24). The optimal cost is given by

(34) |

###### Proof 3

The proof is put into Appendix C.
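The algebraic Riccati equations (25)-(26) arise as fixed points of the corresponding difference equations. As a generic illustration (for a standard single-controller DARE, not the paper's coupled pair), such a fixed point can be computed by iterating the recursion to convergence; the plant below is our own example.

```python
import numpy as np

def solve_dare_by_iteration(A, B, Q, R, tol=1e-10, max_iter=10_000):
    """Iterate P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA to a fixed point.
    The iteration converges to the stabilizing solution when (A, B) is
    stabilizable and (A, Q^{1/2}) is detectable."""
    P = Q.copy()
    for _ in range(max_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ A - A.T @ P @ B @ K
        if np.max(np.abs(P_next - P)) < tol:
            return P_next, K
        P = P_next
    raise RuntimeError("Riccati iteration did not converge")

A = np.array([[1.1, 0.3], [0.0, 0.9]])  # open-loop unstable plant
B = np.array([[0.0], [1.0]])
P, K = solve_dare_by_iteration(A, B, np.eye(2), np.array([[1.0]]))
```

The returned gain stabilizes the closed loop, mirroring the role of the gains (32)-(33) in Theorem 2.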

Next we shall consider the stabilization problem for the system with the additive noise.

###### Lemma 2

Consider the system (2.1) under Assumptions 1-2, and suppose that the system (23) is stabilizable in the mean-square sense. Then, for any initial condition, the estimator error covariance matrix converges to a unique positive definite matrix if and only if , where is the eigenvalue of the matrix with the largest absolute value, and and satisfy (30) and (31).

###### Proof 4

The proof is put into Appendix D.
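Although the exact condition in Lemma 2 involves the matrices of (30) and (31), the underlying mechanism can be illustrated on a simplified remote-estimation model (our own construction, not the paper's recursion): when the packet arrives the remote estimation error resets to zero, and when it is dropped the error propagates through the plant, so the expected error covariance obeys a recursion that converges iff the dropout probability times the squared spectral radius of the plant matrix is below one.

```python
import numpy as np

def error_cov_recursion(A, W, p, steps):
    """Propagate the expected remote estimation error covariance
    Sigma_{k+1} = p * (A Sigma_k A' + W): with probability 1-p the state
    packet arrives and the error resets to zero; with probability p the
    error is propagated through the plant and picks up the noise term.
    The recursion converges iff p * rho(A)^2 < 1."""
    Sigma = np.zeros_like(W)
    history = [Sigma]
    for _ in range(steps):
        Sigma = p * (A @ Sigma @ A.T + W)
        history.append(Sigma)
    return history

A = np.array([[1.2, 0.0], [0.0, 0.5]])  # rho(A) = 1.2, unstable plant
W = np.eye(2)
p = 0.4                                  # p * rho(A)^2 = 0.576 < 1: converges
hist = error_cov_recursion(A, W, p, 500)
```

The limit is the unique fixed point of the covariance map, which is the simplified analogue of the unique positive definite limit asserted in Lemma 2.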

###### Corollary 1

###### Proof 5

The proof is put into Appendix E.

## 4 Numerical Examples

UAV systems have recently received significant attention in the controls community due to their numerous applications, including space science missions, surveillance, terrain mapping and formation flight [Pachter et al., 2001] and [Stachnik et al., 1984]. UAV systems have considerable advantages, such as reducing energy cost and improving aviation safety. Besides, UAVs are used because they can outperform human pilots, remove humans from dangerous situations, and perform repetitive tasks that can be automated. In this section, we consider a simple UAV system as an example of the system model of Section II.

Consider a simple UAV system consisting of an unmanned aerial vehicle and a ground-control center, as depicted in Fig. 2.

Denote and as the location and the velocity of the unmanned aerial vehicle at time (for simplicity, we assume that the unmanned aerial vehicle flies in a straight line). Accordingly, at time , the location of the unmanned aerial vehicle can be written as

where denotes the interference during the flight, e.g., wind. The initial location and are Gaussian and independent, with mean and covariance respectively.

At any time , the unmanned aerial vehicle can perfectly observe its location. Meanwhile, the observed location is sent to the ground-control center through an unreliable communication channel with packet dropout probability . Then, the ground-control center sends the control command to the unmanned aerial vehicle, together with an acknowledgement of whether it received the location information. The unmanned aerial vehicle makes a local decision about its velocity based on its local observation and the information received from the ground-control center. Owing to the complete equipment of the ground-control center, the communication channel from the ground-control center to the unmanned aerial vehicle is perfect.
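This information pattern can be illustrated with a minimal Python sketch of the scalar plant described above (location driven by a velocity command plus wind noise); the proportional gains, noise level and dropout handling are our own illustrative choices, not the paper's optimal strategies.

```python
import numpy as np

def simulate_uav(d0, destination, steps, dropout_p,
                 gain=0.5, noise_std=0.05, seed=0):
    """One-dimensional UAV sketch: d_{k+1} = d_k + v_k + w_k, where the
    velocity v_k is the control input and w_k models wind disturbance.
    The ground station's stronger correction applies only when the state
    packet arrives (gamma = 1); under dropout the UAV falls back to a
    weaker local gain. All gains are illustrative."""
    rng = np.random.default_rng(seed)
    d = d0
    path = [d]
    for _ in range(steps):
        gamma = 0 if rng.random() < dropout_p else 1
        g = gain if gamma == 1 else 0.5 * gain   # weaker control under dropout
        v = -g * (d - destination)               # proportional velocity command
        d = d + v + rng.normal(0.0, noise_std)   # plant update with wind noise
        path.append(d)
    return path

path = simulate_uav(d0=10.0, destination=0.0, steps=200, dropout_p=0.3)
```

Even with dropouts, the location settles into a small noise-driven neighborhood of the destination, which is the qualitative behavior the optimal design targets.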

The aim of the UAV system is to make the unmanned aerial vehicle reach the Destination while minimizing the energy cost. Thus, we express this aim by the cost function

(39) |

where the first term is the sum of the quadratic distances between the real-time location and the Destination , the second term is the sum of the quadratic real-time velocities, and and are the weighting coefficients.

This UAV system can be described by applying the NCS model in Section II. Define , . Then, the corresponding linear plant is given by

(40) |

Accordingly, ignoring the cross terms, (39) can be written as

(41) |

### 4.1 Finite horizon case

By applying Theorem 1, the optimal strategies are derived as follows:

where

with initial value

The optimal cost is as

(42) |

Then, by applying Corollary 2, Fig. 3 and Fig. 4 are obtained. Fig. 3 shows the velocity of the unmanned aerial vehicle for packet dropout probability . It can be seen that there is no significant difference in the velocity of the unmanned aerial vehicle for different values of the packet dropout probability . Fig. 4 shows the optimal energy cost for different values of . Clearly, the energy cost increases markedly with the packet dropout probability.
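The qualitative trend of Fig. 4, cost growing with the dropout probability, can be reproduced with a small Monte Carlo sketch; the scalar plant, gains and weights below are our own illustrative stand-ins, not the optimal controllers of Theorem 1.

```python
import numpy as np

def average_cost(dropout_p, runs=200, steps=50, q=1.0, r=0.1, seed=1):
    """Average quadratic cost sum_k (q*d_k^2 + r*v_k^2) for the scalar
    plant d_{k+1} = d_k + v_k + w_k under a proportional controller
    whose gain weakens when the packet is dropped. Illustrative gains."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(runs):
        d = 5.0
        cost = 0.0
        for _ in range(steps):
            gamma = 0 if rng.random() < dropout_p else 1
            g = 0.8 if gamma == 1 else 0.2   # weaker control under dropout
            v = -g * d
            cost += q * d * d + r * v * v
            d = d + v + rng.normal(0.0, 0.1)
        total += cost
    return total / runs

costs = [average_cost(p) for p in (0.0, 0.3, 0.6)]
```

The average cost is monotonically increasing in the dropout probability, matching the trend reported for Fig. 4.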

### 4.2 Infinite horizon case

The stabilization performance of the UAV system is now shown. First, we consider the case without the additive noise .

Let and other variables have the same values as in the finite horizon case. Using Theorem 2, Fig. 5 shows the dynamic behavior of . It can be seen that the regulated state is mean-square stable.

Now, we shall deal with the case with the additive noise . Let and other variables have the same values as in the finite horizon case. By applying Corollary 1, the dynamic behavior of is presented in Fig. 6 which indicates that the regulated state is mean-square bounded.

## 5 Conclusion

In this paper, the optimal control and stabilization problems for NCSs have been studied. In these NCSs, the linear plant is controlled by a remote controller and a local controller. The local controller perfectly observes the state signal and sends it to the remote controller over a channel subject to packet dropout. Then, the remote controller sends an acknowledgement to the local controller. It is stressed that the remote control action is not available to the local controller at time . By applying Pontryagin's maximum principle, a non-homogeneous relationship between the state estimation and the costate is developed. Based on this relationship, a necessary and sufficient condition for the finite horizon optimal control problem is given in terms of the solutions to the Riccati equations. For the infinite horizon case, a necessary and sufficient condition for stabilizing the system without additive noise in the mean-square sense is developed. For the system with additive noise, a sufficient condition is derived for its boundedness in the mean-square sense. Furthermore, we apply the obtained results to a simple UAV system, which shows the effectiveness of the algorithm.

## Acknowledgements

The authors would like to thank Prof. Huanshui Zhang for his valuable discussions.

## References


- [Zhang et al., 2001] W. Zhang, M. S. Branicky, and S. M. Phillips (2001), Stability of networked control systems. IEEE Control Systems Magazine vol. 21, no. 1, (84–99).
- [Schenato et al., 2007] L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. Sastry (2007), Foundations of control and estimation over lossy networks. Proceedings of the IEEE vol. 95, no. 1, (163–187).
- [Hu et al., 2003] S. Hu and W. Zhu (2003), Stochastic optimal control and analysis of stability of networked control systems with long delay. Automatica vol. 39, (1877–1884).
- [Yang et al., 2011] R. Yang, P. Shi, G. P. Liu, and H. Gao (2011), Network-based feedback control for systems with mixed delays based on quantization and dropout compensation. Automatica vol. 47, no. 12, (2805–2809).
- [Hespanha et al., 2007] J. P. Hespanha, P. Naghshtabrizi, and Y. Xu (2007), A survey of recent results in networked control systems. Proceedings of the IEEE vol. 95, no. 1, (138–162).
- [Horowitz et al., 2007] R. Horowitz and P. Varaiya (2000), Control design of an automated highway system. Proceedings of the IEEE vol. 88, no. 7, (913–925).
- [Seiler, 2001] P. J. Seiler (2001), Coordinated control of unmanned aerial vehicles. PhD thesis University of California, Berkeley.
- [Zhang et al., 2013] L. Zhang, H. Gao, and O. Kaynak (2013), Network-induced constraints in networked control systems: A survey. IEEE Trans. Ind. Informat. vol. 9, no. 1, (403–416).
- [He et al., 2009] X. He, Z. Wang, and D. Zhou (2009), Robust fault detection for networked systems with communication delay and data missing. Automatica vol. 45, no. 11, (2634–2639).
- [Faezipour et al., 2012] M. Faezipour, M. Nourani, A. Saeed, and S. Addepalli (2012), Progress and challenges in intelligent vehicle area networks. Commun. ACM vol. 55, no. 2, (90–100).
- [Wu et al., 2007] J. Wu and T. Chen (2007), Design of networked control systems with packet dropouts. IEEE Trans. Autom. Control vol. 52, no. 7, (1314–1319).
- [Pang et al., 2016] Z. H. Pang, G. P. Liu, D. Zhou, and D. Sun (2016), Data-based predictive control for networked nonlinear systems with network-induced delay and packet dropout. IEEE Trans. Ind. Electron. vol. 63, no. 2, (1249–1257).
- [Sun et al., 2016] S. L. Sun, T. Tian, and H. L. Lin (2016), Optimal linear estimators for systems with finite-step correlated noises and packet dropout compensations. IEEE Trans. Signal Process vol. 64, no. 21, (5672–5681).
- [Ahmadi et al., 2014] A. A. Ahmadi, F. Salmasi, M. Noori Manzar, and T. Najafabadi (2014), Speed sensorless and sensor-fault tolerant optimal PI regulator for networked dc motor system with unknown time-delay and packet dropout. IEEE Trans. Ind. Electron. vol. 61, no. 2, (708–717).
- [Nahi, 1969] N. E. Nahi (1969), Optimal Recursive Estimation with Uncertain Observation. IEEE Trans. Inf. Theory vol. 15, no. 4, (457–462).
- [Sinopoli et al., 2004] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. Jordan, and S. Sastry (2004), Kalman filtering with intermittent observations. IEEE Trans. Autom. Control vol. 49, no. 9, (1453–1464).
- [Hadidi et al., 1979] M. T. Hadidi and C. S. Schwartz (1979), Linear Recursive State Estimators under Uncertain Observations. IEEE Trans. Autom. Control vol. 24, no. 6, (944–948).
- [Qi et al., 2016] Q. Qi and H. Zhang (2016), Output feedback control and stabilization for networked control systems with packet losses. IEEE Trans. Cybern. DOI: 10.1109/TCYB.2016.2568218.
- [Zhang et al., 2007] W. A. Zhang and L. Yu (2007), Output feedback stabilization of networked control systems with packet dropouts. IEEE Trans. Autom. Control vol. 52, no. 9, (1705–1710).
- [Gupta et al., 2007] V. Gupta, B. Hassibi, and R. M. Murray (2007), Optimal LQG control across packet-dropping links. Syst. Control Lett. vol. 56, no. 6, (439–446).
- [Liang et al., 2016] X. Liang and H. Zhang (2016), Linear optimal filter for system subject to random delay and packet dropout. Optim. Control Appl. Meth. DOI: 10.1002/oca.2295.
- [Xiong et al., 2007] J. Xiong and J. Lam (2007), Stabilization of linear systems over networks with bounded packet loss. Automatica. vol. 43, no. 1, (80–87).
- [Sheng et al., 2014] L. Sheng, W. Zhang and M. Gao (2014), Relationship between Nash equilibrium strategies and control of stochastic Markov jump systems with multiplicative noise. IEEE Trans. Autom. Control vol. 59, no. 9, (2592–2597).
- [Kandil et al., 1993] H. Abou Kandil, G. Freiling, and G. Jank (1993), Necessary conditions for constant solutions of coupled Riccati equations in Nash games. Syst. Control Lett. vol. 21, (295–306).
- [Basar et al., 1995] T. Basar and G. J. Olsder (1995), Dynamic Noncooperative Game Theory New York: Academic.
- [Freiling et al., 1999] G. Freiling, G. Jank, and H. Abou Kandil (1999), Discrete-time Riccati equations in open-loop Nash and Stackelberg games. Eur. J. Control. vol. 5, no. 1, (56–66).
- [Simaan et al., 1973] M. Simaan and J. B. Cruz (1973), On the Stackelberg strategy in nonzero-sum games. J. Optim. Theory Appl. vol. 11, no. 5, (533–555).
- [Jungers, 2008] M. Jungers (2008), On linear-quadratic Stackelberg games with time preference rates. IEEE Trans. Autom. Control vol. 53, no. 2, (621–625).
- [Xu et al., 2007] J. Xu, H. Zhang, and T. Chai (2015), Necessary and sufficient condition for two-player Stackelberg strategy. IEEE Trans. Autom. Control vol. 60, no. 5, (1356–1361).
- [Mehraeen et al., 2013] S. Mehraeen, T. Dierks, S. Jagannathan, and M. L. Crow (2013), Zero-sum two-player game theoretic formulation of affine nonlinear discrete-time systems using neural networks. IEEE Trans. Cybern. vol. 43, no. 6, (1641–1655).
- [Zhang et al., 2012] H. Zhang, H. Wang, and L. Li (2012), Adapted and causal maximum principle and analytical solution to optimal control for stochastic multiplicative-noise systems with multiple input-delays. in Proc. 51st IEEE Conf. Decision Control Maui, HI, USA, (2122–2127).
- [Lin et al., 2017] H. Lin, H. Su, Z. Shu, P. Shi, R. Lu and Z. G. Wu (2017), Optimal estimation and control for lossy network: stability, convergence, and performance. IEEE Trans. Autom. Control DOI: 10.1109/TAC.2017.2672729.
- [Imer et al., 2006] O. C. Imer, S. Yüksel, and T. Başar (2006), Optimal control of LTI systems over unreliable communication links. Automatica vol. 42, no. 9, (1429–1439).
- [Huang et al., 2008] Y. Huang, W. Zhang, and H. Zhang (2008), Infinite horizon linear quadratic optimal control for discrete-time stochastic systems. Asian J. Control vol. 10, no. 5, (608–615).
- [Liang et al., 2016] X. Liang, J. Xu and H. Zhang (2016), Optimal Control and Stabilization for Networked Control Systems with Packet Dropout and Input Delay. IEEE Trans. Circuits Syst. II, Exp. Briefs DOI: 10.1109/TCSII.2016.2642986.
- [Pachter et al., 2001] M. Pachter, J.J. D'Azzo, and A.W. Proud (2001), Tight formation flight control. Journal of Guidance, Control, and Dynamics vol. 24, no. 2, (246–254).
- [Stachnik et al., 1984] R.V. Stachnik, K. Ashlin, and S. Hamilton (1984), Space station-SAMSI: A spacecraft array for michelson spatial interferometry. Bulletin of the American Astronomical Society vol. 16, no. 3, (818–827).
- [Ouyang et al., 2016] Y. Ouyang, S. M. Asghari, and A. Nayyar (2016), Optimal local and remote controllers with unreliable communication. in Proc. 55th IEEE Conf. Decision Control Las Vegas, NV, USA, DOI: 10.1109/CDC.2016.7799194.
- [Bouhtouri et al., 1999] A. El Bouhtouri, D. Hinrichsen, and A. J. Pritchard (1999), H∞-type control for discrete-time stochastic systems. Int. J. Robust. Nonlin. Control vol. 9, no. 13, (923–948).

## A Proof of Lemma 1

###### Proof 6

Before proceeding with the proof of Lemma 1, we first introduce the following definition. Noting that the local controller has access to the states and the signals , we define

(A.1) |

and

(A.2) |

Obviously, we have that

(A.3) |

Since the local controller cannot obtain the remote control action at the same time , (A.1)-(A.3) are very useful in the following derivation. Next, we rewrite the costate equations (5) and (6).

Taking the mathematical expectation on both sides of (5) with , we obtain

(A.4) |

which implies that . Then noting (A.2), (5) can be rewritten as

(A.5) |

Augmenting with (6) and (A.4), we have that

(A.6) |

Thus, the costate equations (5) and (6) can be rewritten as (A.5) and (A.6).

Next, we show by induction that has the form (15) for all .

For , by making use of (A.7), (7) and (A.3), (A.6) becomes

Hence, the optimal controller is given by

(A.8) |

Using (A.7), (7) and (A.3), we have (A.5) as

Thus, the optimal controller is derived as

(A.9) |

By applying (A.7), (7), (A.3), (A.8) and (A.9), it follows from (4) that

Noting (8) and (9), can be written as

which implies that (15) holds for .

To complete the induction proof, we take any with and assume that are of the form (15) for all . We shall show that (15) also holds for .

Using (15), and letting , can be written as

(A.10) |

By virtue of (16), (A.7) and (A.3), can be calculated as follows,

(A.11) |

With (17), (A.7) and (A.3), it can be obtained that

(A.12) |

Thus, substituting (A.11) and (A.12) into (A.10), we have that

(A.13) |

Plugging (A.13) into (A.6), and using (A.3), it yields that

The optimal controller is derived as