
# Optimal Distributed Controller Synthesis for Chain Structures: Applications to Vehicle Formations

## Abstract

We consider optimal distributed controller synthesis for an interconnected system subject to communication constraints, in a linear quadratic setting. Motivated by the problem of finite heavy duty vehicle platooning, we study systems composed of subsystems interconnected over a chain graph. By decomposing the system into orthogonal modes, the cost function can be separated into individual components, and the derivation of the optimal controllers in state-space form follows immediately. The optimal controllers are evaluated in the practical setting of heavy duty vehicle platooning with communication constraints. It is shown that the performance can be improved significantly by adding a few communication links. The results show that the proposed optimal distributed controller performs almost as well as the centralized linear quadratic Gaussian controller and outperforms a suboptimal controller in terms of control input: depending on the vehicle's position in the platoon, the control input energy can be reduced significantly compared to the suboptimal controller. Thus, we conclude that considering the preceding as well as the following vehicles in a platoon is important for fuel optimality.


## 1 Introduction

The systems to be controlled are, in many application domains, getting larger and more complex. When there is interconnection between different dynamical systems, conventional optimal control algorithms provide a solution where centralized state information is required. However, it is often preferable and sometimes necessary to have a decentralized controller structure, since in many practical problems, the physical or communication constraints often impose a specific interconnection structure. Hence, it is interesting to design decentralized feedback controllers for systems of a certain structure and examine their overall performance.

The control problem in this paper is motivated by systems, generally referred to as vehicle platoons, involving a chain of closely spaced heavy duty vehicles (HDVs). Information technology is paving its way into the transport industry, enabling automated control strategies. By governing vehicle platoons with an automated control strategy, the overall traffic flow is expected to improve [11] and the road capacity to increase significantly [8]. With radar sensors, each vehicle can measure the relative distance and velocity of the preceding vehicle. The radar measurements are conveyed further down the chain of vehicles through wireless communication. By traveling at a short intermediate spacing, the air drag is reduced for each vehicle in the platoon, so the control effort and inherently the fuel consumption can be reduced significantly. However, as the intermediate spacing is reduced, the control becomes tighter due to safety aspects, mandating an increase in control action through additional acceleration and braking. Hence, finding a fuel-optimal control is of great interest to the industry. Thus, with limited information and control input constraints, the control objective is to maintain a predefined headway to the vehicle ahead based upon local state measurements, which makes this a decentralized control problem.

Decentralized control problems are still intractable in general. One approach has been to classify specific information patterns leading to linear optimal controllers. In [22], sufficient conditions are given under which optimal controllers are linear in the linear quadratic setting. An important result was given in [10], which showed that for a new information structure, referred to as partially nested, the optimal policy is linear in the information set. In [12], the stochastic linear quadratic control problem was solved under the condition that all the subsystems have access to the global information from some time in the past. In [5], it was shown that the constrained linear optimal decision problem for infinite-horizon linear quadratic control can be posed as an infinite-dimensional convex optimization problem, given that the considered system is stable. Control of chain structures in the context of platoons has been studied from various perspectives, e.g., [4, 6, 13, 3, 17, 18, 20]. It has been shown that control strategies may vary depending on the available information within the platoon. However, communication constraints have generally not been considered in control design for platooning applications.

The aim of this study is to synthesize controllers for a practical decentralized system composed of interacting subsystems over a chain. We minimize a quadratic cost under the partially nested information structure. This problem is known to have a linear optimal policy [10, 21]. However, most existing approaches do not provide explicit optimal controller formulae, and the order of the controllers can be large [9], which makes implementation difficult. Some work has focused on numerical algorithms for these problems [15, 24]. Recently, state-space solutions to the so-called two-player state-feedback version of this problem were given in [19]. Also, in [16], using concepts from order theory, a control architecture has been proposed for systems having the structure of a partially ordered set. In contrast, we construct conditional estimates based on the information shared among the controllers. Thereby, we show how to decompose the states, the control inputs, and, as a result, the cost function into independent terms. With the cost function decomposed into individual pieces, the analytical derivation of the optimal controllers follows immediately.

The main contribution of this paper is to introduce a simple decomposition scheme to construct optimal decentralized controllers with low computational complexity for chain structures which is applicable to intelligent transportation systems in terms of automated platooning. Derived from the characteristics of actual Scania HDVs, we present a discrete system model that includes physical coupling with a preceding vehicle. In the context of HDV platooning, we explicitly study systems composed of two and three interconnected subsystems over a chain structure. The proposed control scheme accounts for a constrained communication pattern among the vehicles and hence reduces the communications compared to a centralized information pattern where full state information is available to each controller. We also evaluate the performance of the optimal controllers for a typical scenario in HDV platooning under normal operating conditions, with respect to the imposed information constraints.

The outline of the remainder of this paper is as follows. First we specify the problem that we are considering in Section 2. Then, the finite and infinite horizon optimal controller formulation for the simplest case, the two-vehicle problem, will be presented in Section 3. In Section 4, we will show how the decomposition scheme can be extended to the case of three interconnected subsystems. We apply the three-vehicle optimal distributed controller to the example of HDV platooning in Section 5 where we evaluate the proposed controller in comparison with the optimal centralized controller and a suboptimal decentralized controller.

Notation. We denote a matrix $A$ partitioned into blocks by $A=[A_{ij}]$, where $A_{ij}$ denotes the block matrix of $A$ in block position $(i,j)$. The submatrix of $A$ formed by row partitions $i$ through $j$ and column partitions $k$ through $l$ is denoted by $A_{[i:j,k:l]}$:

$$
A_{[i:j,k:l]}=\begin{bmatrix}
A_{ik} & A_{i(k+1)} & \cdots & A_{il}\\
A_{(i+1)k} & A_{(i+1)(k+1)} & \cdots & A_{(i+1)l}\\
\vdots & \vdots & \ddots & \vdots\\
A_{jk} & A_{j(k+1)} & \cdots & A_{jl}
\end{bmatrix}.
$$

The expected value of a random variable $x$ is denoted by $\mathbf{E}\{x\}$. The conditional expectation of $x$ given $y$ is denoted by $\mathbf{E}\{x\,|\,y\}$. The trace of a matrix $A$ is denoted by $\operatorname{Tr}(A)$, and the sequence $x(0),x(1),\dots,x(t)$ is denoted by $x(0:t)$.
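As a minimal illustration of this block-index notation, consider a hypothetical $2\times 2$ block partition; the `block_submatrix` helper below is introduced here for illustration only and is not from the paper:

```python
import numpy as np

# Hypothetical 2x2 block partition with block sizes (1, 2) x (1, 2).
A11 = np.array([[1.0]])
A12 = np.array([[2.0, 3.0]])
A21 = np.array([[4.0], [5.0]])
A22 = np.array([[6.0, 7.0], [8.0, 9.0]])
A = np.block([[A11, A12], [A21, A22]])

def block_submatrix(M, row_sizes, col_sizes, i, j, k, l):
    """Return M[i:j, k:l] in the paper's block-index sense
    (1-based, inclusive block ranges)."""
    r0, r1 = sum(row_sizes[:i - 1]), sum(row_sizes[:j])
    c0, c1 = sum(col_sizes[:k - 1]), sum(col_sizes[:l])
    return M[r0:r1, c0:c1]

# A[2:2, 1:2]: second block row, both block columns -> [A21 A22].
sub = block_submatrix(A, [1, 2], [1, 2], 2, 2, 1, 2)
```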

## 2 System Model and Problem Statement

In this section we present the physical properties of the system that we are considering. We state the nonlinear dynamics of a single vehicle and the model for the aerodynamics, which induces the physical coupling. Then we present the linear discrete system model for a heterogeneous HDV platoon and its associated cost function. Finally, the problem formulation is given.

### 2.1 System Model

We consider an HDV platoon as depicted in Figure 1. The state equation of a single HDV is modeled as [14],

$$
\begin{aligned}
\dot{s} &= v, \\
m_t\dot{v} &= F_{\text{engine}} - F_{\text{brake}} - F_{\text{airdrag}}(v) - F_{\text{roll}}(\alpha) - F_{\text{gravity}}(\alpha) \\
&= k_u u - k_b F_{\text{brake}} - k_d v^2 - k_{fr}\cos\alpha - k_g\sin\alpha,
\end{aligned}
\tag{1}
$$

where $v$ is the vehicle velocity, $m_t$ denotes the accelerated mass, and $u$ denotes the net engine torque. $k_u$, $k_b$, $k_d$, $k_{fr}$, and $k_g$ denote the characteristic vehicle and environment coefficients for the engine, brake, air drag, road friction, and gravitation, respectively.

The aerodynamic drag has a strong impact on an HDV, since it can amount to up to 50% of the total resistive forces at full speed. When traveling at short intermediate spacings, the wind resistance is reduced significantly. Hence, a physical coupling is induced between the vehicles in a platoon. To account for the aerodynamics, the air drag characteristic coefficient in (1) can be modeled as

$$
\tilde{k}_d = k_d\left(1-\frac{\Phi(d)}{100}\right),
$$

where $d$ is the longitudinal relative distance between two vehicles, and $\Phi(d)$ is the air drag reduction in percent, adjusted according to the graphical model given in Figure 2.
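The air drag model above can be sketched in a few lines. The paper takes $\Phi(d)$ from an empirical curve (Figure 2); the linear `phi` below, with its `phi0` and `slope` values, is a placeholder assumption, not measured data:

```python
def phi(d, phi0=40.0, slope=0.5):
    """Hypothetical air drag reduction [%] at relative distance d [m].
    The paper uses an empirical curve (Figure 2); phi0 and slope here
    are placeholder values, not vehicle data."""
    return max(0.0, phi0 - slope * d)

def kd_effective(kd, d):
    """Effective air drag coefficient ~k_d = k_d (1 - phi(d)/100)."""
    return kd * (1.0 - phi(d) / 100.0)
```

At large separations `phi` vanishes and the nominal coefficient is recovered.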

The velocities do not deviate significantly for the vehicles with respect to the lead vehicle’s velocity in an automated HDV platoon. Thus, a linearized model should give a sufficient description of the system behavior. By linearizing and applying a one step forward discretization to (1), the discrete model with respect to a set reference velocity, an engine torque which maintains the velocity, a fixed spacing between the vehicles, and a constant slope is hence given by

$$
x(t+1)=Ax(t)+Bu(t)+w(t), \tag{2}
$$

where

$$
A=\begin{bmatrix}
\Theta_1 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 & 0\\
1 & 1 & -1 & 0 & 0 & \cdots & 0 & 0 & 0\\
0 & \delta_2 & \Theta_2 & 0 & 0 & \cdots & 0 & 0 & 0\\
0 & 0 & 1 & 1 & -1 & \cdots & 0 & 0 & 0\\
0 & 0 & 0 & \delta_3 & \Theta_3 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & 0 & 0 & \cdots & \Theta_{M-1} & 0 & 0\\
0 & 0 & 0 & 0 & 0 & \cdots & 1 & 1 & -1\\
0 & 0 & 0 & 0 & 0 & \cdots & 0 & \delta_M & \Theta_M
\end{bmatrix},
$$

$$
B=\begin{bmatrix}
k_{u_1} & 0 & 0 & \cdots & 0\\
0 & 0 & 0 & \cdots & 0\\
0 & k_{u_2} & 0 & \cdots & 0\\
0 & 0 & 0 & \cdots & 0\\
0 & 0 & k_{u_3} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & k_{u_M}
\end{bmatrix},\quad
x=\begin{bmatrix} v_1\\ d_{12}\\ v_2\\ d_{23}\\ v_3\\ \vdots\\ v_{M-1}\\ d_{(M-1)M}\\ v_M \end{bmatrix},\quad
u=\begin{bmatrix} u_1\\ u_2\\ u_3\\ \vdots\\ u_M \end{bmatrix},
$$

$$
\Theta_1 = T_s(1-2k_d v_0),\qquad
\Theta_i = -2T_s k_d \Phi(d_0) v_0,\ i=2,\dots,M,\qquad
\delta_i = -T_s\kappa_1 k_d v_0^2, \tag{3}
$$

where $\delta_i$ denotes the physical coupling with a preceding vehicle and $T_s$ is the sampling time. The derived HDV platoon model in (3) has a lower block triangular structure, which can generally be stated as

$$
\begin{bmatrix} x_1(t+1)\\ x_2(t+1)\\ x_3(t+1)\\ \vdots\\ x_M(t+1) \end{bmatrix}
=\begin{bmatrix}
A_{11} & 0 & 0 & \cdots & 0\\
A_{21} & A_{22} & 0 & \cdots & 0\\
0 & A_{32} & A_{33} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & A_{MM}
\end{bmatrix}
\begin{bmatrix} x_1(t)\\ x_2(t)\\ x_3(t)\\ \vdots\\ x_M(t) \end{bmatrix}
+\begin{bmatrix}
B_1 & 0 & 0 & \cdots & 0\\
0 & B_2 & 0 & \cdots & 0\\
0 & 0 & B_3 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & B_M
\end{bmatrix}
\begin{bmatrix} u_1(t)\\ u_2(t)\\ u_3(t)\\ \vdots\\ u_M(t) \end{bmatrix}
+\begin{bmatrix} w_1(t)\\ w_2(t)\\ w_3(t)\\ \vdots\\ w_M(t) \end{bmatrix}, \tag{4}
$$

where the corresponding vehicle states for each subsystem are

$$
x_1(t)=v_1(t),\qquad x_i(t)=\begin{bmatrix} d_{i-1,i}\\ v_i \end{bmatrix},\quad i=2,\dots,M.
$$
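The block-triangular structure of (3)-(4) can be assembled programmatically. A minimal sketch follows; the coefficient values passed in are placeholders, since the paper derives $\Theta_i$, $\delta_i$, and $k_{u_i}$ from actual HDV characteristics:

```python
import numpy as np

def platoon_matrices(M, theta, delta, ku):
    """Assemble the (2M-1)-state platoon model with the sparsity of (3)-(4).

    theta[i], delta[i], ku[i] are the linearized coefficients Theta_i,
    delta_i, k_{u_i} for vehicle i (0-indexed; delta[0] is unused).
    The caller supplies the values.  State order: [v1, d12, v2, d23, ...]."""
    n = 2 * M - 1
    A = np.zeros((n, n))
    B = np.zeros((n, M))
    A[0, 0] = theta[0]           # lead vehicle velocity dynamics
    B[0, 0] = ku[0]
    for i in range(1, M):
        d, v = 2 * i - 1, 2 * i  # rows of d_{i-1,i} and v_i
        A[d, v - 2] = 1.0        # spacing grows with preceding velocity
        A[d, d] = 1.0
        A[d, v] = -1.0           # and shrinks with own velocity
        A[v, d] = delta[i]       # air drag coupling through the spacing
        A[v, v] = theta[i]
        B[v, i] = ku[i]
    return A, B
```

The returned pair directly exhibits the lower block triangular pattern of (4).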

### 2.2 Performance Criteria

The performance criteria of an HDV platoon can be mapped into quadratic costs. Hence, we formulate the weight parameters for a quadratic cost function based upon performance and safety objectives. The objective of the lead vehicle is to minimize the fuel consumption and control input while maintaining a set reference velocity. The follower vehicles have the additional objective of following the preceding vehicle's velocity while maintaining a set intermediate spacing. The intermediate spacing reference could be constant or, as in this case, time varying. It is determined by setting a desired time gap $\tau$ s, which in turn determines the spacing policy as

$$
d_{\text{ref}}(t)=\tau v(t).
$$

Thereby, the vehicles will maintain a larger intermediate spacing at higher velocities. Hence, the weights for an HDV platoon can be set up as

$$
J=\mathbf{E}\sum_{t=0}^{N-1}\sum_{i=1}^{M}\Big(\bar{x}_i^T(t)\,Q_i\,\bar{x}_i(t)+u_i^T(t)\,R_i\,u_i(t)\Big),\qquad
\bar{x}_i(t)=\begin{bmatrix}v_{i-1}(t)\\ d_{i-1,i}(t)\\ v_i(t)\end{bmatrix}, \tag{5}
$$

where

$$
Q_i=\begin{bmatrix}
w_{\Delta v_i} & 0 & -w_{\Delta v_i}\\
0 & w_{d_i}+w_{\tau_i} & -\tau w_{\tau_i}\\
-w_{\Delta v_i} & -\tau w_{\tau_i} & \tau^2 w_{\tau_i}+w_{\Delta v_i}+w_{v_i}
\end{bmatrix},\qquad R_i=w_{u_i}. \tag{6}
$$

The weights in (5)–(6) give a direct interpretation of how to enforce the objectives for a vehicle traveling in a platoon. The value of $w_{\tau_i}$ determines the importance of not deviating from the desired time gap; hence, a large $w_{\tau_i}$ puts emphasis on safety. $w_{\Delta v_i}$ creates a cost for deviating from the velocity of the preceding vehicle, and $w_{u_i}$ punishes the control effort, which is proportional to the fuel consumption. The terms $w_{d_i}$ and $w_{v_i}$ put a cost on the deviation from the linearized states. Note that the main objectives are to maintain a set intermediate distance and a fuel-efficient behavior; therefore, $w_{\tau_i}$ and $w_{u_i}$ must be set larger than the remaining weights. The weights are chosen such that $Q_i$ is positive semidefinite and $R_i$ is positive definite.
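A sketch of the weight blocks in (6) for one follower vehicle, with the local state ordered as $[v_{i-1},\ d_{i-1,i},\ v_i]$; the numerical tuning values used below are assumptions of this sketch, not data from the paper:

```python
import numpy as np

def follower_weights(w_dv, w_d, w_tau, w_v, w_u, tau):
    """Weight blocks of (6) for one follower with local state
    [v_{i-1}, d_{i-1,i}, v_i].  All numerical values are tuning
    parameters chosen by the designer."""
    Qi = np.array([
        [w_dv,  0.0,          -w_dv],
        [0.0,   w_d + w_tau,  -tau * w_tau],
        [-w_dv, -tau * w_tau, tau**2 * w_tau + w_dv + w_v],
    ])
    Ri = np.array([[w_u]])
    return Qi, Ri
```

With nonnegative weights the resulting $Q_i$ is symmetric positive semidefinite, as required.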

### 2.3 Problem Formulation

Although the approach used in this paper is applicable for systems over general acyclic graphs, for simplicity we will concentrate on two simple chain structures, which we refer to as two- and three-vehicle chains. The aim is to synthesize controllers under imposed communication constraints.

For the two-vehicle chain the system matrices have the sparsity structure as

$$
A=\begin{bmatrix}A_{11}&0\\ A_{21}&A_{22}\end{bmatrix},\qquad
B=\begin{bmatrix}B_1&0\\ 0&B_2\end{bmatrix}. \tag{7}
$$

Assume $\{w(t)\}$ is a sequence of mutually independent Gaussian vectors with zero mean and covariance given by

$$
\mathbf{E}\{w(k)w^T(l)\}=\begin{bmatrix}W_1&0\\ 0&W_2\end{bmatrix}\delta(k-l).
$$

It is assumed that the initial state $x(0)$ is independent of the disturbance sequence $w(t)$.

In this system, the dynamics of subsystem 1 (vehicle 1) propagate to subsystem 2 (vehicle 2), but not vice versa. If both subsystems had access to the global state measurements, the information structure would be classical, and the optimal linear controller could be obtained from linear quadratic control theory. However, in the practical setting of HDV platooning, the lead vehicle only has its own state information, whereas the follower vehicle can also measure the states of the preceding vehicle through radar sensors. Therefore, we consider the case in which controller 2 has access to the overall measurement history, while controller 1 only has access to its own measurements. Let $I^i_t$ denote the information set of controller $i$ at time $t$. Then

$$
I^1_t=\{x_1(0:t)\},\qquad I^2_t=\{x(0:t)\}. \tag{8}
$$

This information pattern is not classical anymore and is a simple case of a partially nested information structure. This is one of a few non-classical information patterns for which the optimal policy is known to be unique and linear in the information set. For the chain of three vehicles, the matrices are given by

$$
A=\begin{bmatrix}A_{11}&0&0\\ A_{21}&A_{22}&0\\ 0&A_{32}&A_{33}\end{bmatrix},\qquad
B=\begin{bmatrix}B_1&0&0\\ 0&B_2&0\\ 0&0&B_3\end{bmatrix}. \tag{9}
$$

Here, $w(t)$ is a Gaussian disturbance vector with covariance given by

$$
\mathbf{E}\{w(k)w^T(l)\}=\begin{bmatrix}W_1&0&0\\ 0&W_2&0\\ 0&0&W_3\end{bmatrix}\delta(k-l).
$$

To maintain partial nestedness, the information sets for the controllers are given by

$$
I^1_t=\{x_1(0:t)\},\qquad
I^2_t=\{x_1(0:t),\,x_2(0:t)\},\qquad
I^3_t=\{x_1(0:t),\,x_2(0:t),\,x_3(0:t)\}, \tag{10}
$$

where only one communication link is needed, from vehicle 1 to vehicle 3, since vehicles 2 and 3 can measure the preceding vehicle's states with on-board radar sensors.

Thus, the problem we solve is to find an analytical formulation of the optimal controllers, constrained to the specified information sets, that minimize the infinite-horizon quadratic cost

$$
\lim_{N\to\infty}\frac{1}{N}\,\mathbf{E}\sum_{t=0}^{N-1}\big(x^T(t)Qx(t)+u^T(t)Ru(t)\big), \tag{11}
$$

subject to the given system dynamics and performance objectives. We first give an explicit solution for the two-vehicle problem defined by (7) and (8), where the intuition behind the solution is derived. To show how the proposed technique can be applied to more general chains, we then present an explicit solution for the three-vehicle problem with dynamics given in (9) subject to constraints in (10).

## 3 Two-Vehicle Chain

The aim of this section is to present the optimal control synthesis for the simplest case of the problem which is a chain of two vehicles. The derivation given in this section explains the decomposition idea and the structure of the controllers. First, we shall present the optimal controller in Section 3.1. Next, the derivation of the time-varying and the stationary controller will be explained in Section 3.2. Finally, we conclude with some remarks in Section 3.3.

### 3.1 Main Result

###### Theorem 1

Assume that

1. $(A,B)$ is stabilizable,

2. $(A_{22},B_2)$ is stabilizable,

3. $(Q^{1/2},A)$ is detectable,

4. $(Q_{22}^{1/2},A_{22})$ is detectable.

Then, the optimal controller for the two-vehicle chain is given by:

$$
\begin{aligned}
\eta(t+1) &= (A_{21}-B_2L_{21})\,x_1(t)+(A_{22}-B_2L_{22})\,\eta(t),\\
u(t) &= -\begin{bmatrix}L_{12}\\ L_{22}-L_2\end{bmatrix}\eta(t)
-\begin{bmatrix}L_{11}&0\\ L_{21}&L_2\end{bmatrix}\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix},
\end{aligned}
$$

and the optimal cost is

$$
\operatorname{Tr}(X_{11}W_1)+\operatorname{Tr}(YW_2).
$$

The matrices $X$ and $Y$ are the positive semidefinite stabilizing solutions to the Riccati equations

$$
\begin{aligned}
X &= A^TXA+Q-A^TXB\big(B^TXB+R\big)^{-1}B^TXA,\\
Y &= A_{22}^TYA_{22}+Q_{22}-A_{22}^TYB_2\big(B_2^TYB_2+R_{22}\big)^{-1}B_2^TYA_{22},
\end{aligned}
$$

and the matrix $X$ is partitioned into blocks compatible with the partitions of $A$:

$$
X=[X_{ij}],\qquad i,j=1,2.
$$

The gain matrices $L_1$ and $L_2$ are given by

$$
\begin{aligned}
L_1 &= \big(R+B^TXB\big)^{-1}B^TXA,\\
L_2 &= \big(R_{22}+B_2^TYB_2\big)^{-1}B_2^TYA_{22},
\end{aligned}
$$

and $L_1$ is partitioned into blocks according to

$$
L_1=[L_{ij}],\qquad i,j=1,2.
$$
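As a numerical sketch of Theorem 1 (with placeholder system matrices, not Scania vehicle data), the stationary Riccati solutions $X$ and $Y$ and the gains $L_1$, $L_2$ can be computed with SciPy's discrete-time algebraic Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative two-vehicle data (placeholder numbers):
# subsystem 1 has one state, subsystem 2 has two.
A = np.array([[0.95, 0.0,  0.0],
              [1.0,  1.0, -1.0],
              [0.0,  0.05, 0.9]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
Q, R = np.eye(3), np.eye(2)
W1, W2 = np.eye(1), np.eye(2)

# Centralized Riccati solution X and the error Riccati solution Y
# for subsystem 2 alone (A22, B2, Q22, R22).
A22, B2 = A[1:, 1:], B[1:, 1:2]
Q22, R22 = Q[1:, 1:], R[1:, 1:]
X = solve_discrete_are(A, B, Q, R)
Y = solve_discrete_are(A22, B2, Q22, R22)

# Gains of Theorem 1.
L1 = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
L2 = np.linalg.solve(R22 + B2.T @ Y @ B2, B2.T @ Y @ A22)

# Optimal average cost: Tr(X11 W1) + Tr(Y W2).
cost = np.trace(X[:1, :1] @ W1) + np.trace(Y @ W2)
```

Both closed-loop matrices $A-BL_1$ and $A_{22}-B_2L_2$ are Schur stable under the stated stabilizability and detectability assumptions.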

Before giving the proof of the theorem, we need to state the following lemma and corollary.

###### Lemma 1

Consider the system described by (2), and introduce the following Riccati recursion:

$$
P(t)=A^TP(t+1)A+Q-A^TP(t+1)B\big(B^TP(t+1)B+R\big)^{-1}B^TP(t+1)A,
$$

for $t=0,\dots,N-1$, with the end condition $P(N)=Q$, where $Q$ is positive semidefinite. Then,

$$
\begin{aligned}
&x^T(N)Qx(N)+\sum_{t=0}^{N-1}\big(x^T(t)Qx(t)+u^T(t)Ru(t)\big)\\
&\quad= x^T(0)P(0)x(0)
+\sum_{t=0}^{N-1}\big(u(t)+L(t)x(t)\big)^T\big(B^TP(t+1)B+R\big)\big(u(t)+L(t)x(t)\big)\\
&\qquad+\sum_{t=0}^{N-1}2\,w^T(t)P(t+1)\big(Ax(t)+Bu(t)\big)
+\sum_{t=0}^{N-1}w^T(t)P(t+1)w(t),
\end{aligned}
$$

where $L(t)$ is given by

$$
L(t)=\big(R+B^TP(t+1)B\big)^{-1}B^TP(t+1)A.
$$
*Proof.* See for example [2].
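The backward recursion of Lemma 1 is straightforward to implement; a minimal sketch:

```python
import numpy as np

def riccati_backward(A, B, Q, R, N):
    """Backward Riccati recursion of Lemma 1 with end condition P(N) = Q.
    Returns the lists P(0..N) and L(0..N-1)."""
    P = [None] * (N + 1)
    L = [None] * N
    P[N] = Q.copy()
    for t in range(N - 1, -1, -1):
        S = R + B.T @ P[t + 1] @ B
        L[t] = np.linalg.solve(S, B.T @ P[t + 1] @ A)   # (R + B'PB)^{-1} B'PA
        P[t] = A.T @ P[t + 1] @ A + Q - A.T @ P[t + 1] @ B @ L[t]
    return P, L
```

For a scalar example with $A=0.9$ and $B=Q=R=1$ (assumed values for illustration), $P(0)$ converges to the stationary solution of the corresponding algebraic Riccati equation already for moderate $N$.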

###### Corollary 1

Assume that $w(t)$, $t=0,\dots,N-1$, is a sequence of uncorrelated Gaussian variables with covariance $W$, and $w(t)$ is independent of $x(t)$ and $u(t)$. Then,

$$
\begin{aligned}
&\mathbf{E}\Big\{x^T(N)Qx(N)+\sum_{t=0}^{N-1}\big(x^T(t)Qx(t)+u^T(t)Ru(t)\big)\Big\}\\
&\quad=\mathbf{E}\Big\{\sum_{t=0}^{N-1}\big(u(t)+L(t)x(t)\big)^T\big(B^TP(t+1)B+R\big)\big(u(t)+L(t)x(t)\big)\Big\}
+\sum_{t=0}^{N-1}\operatorname{Tr}\big(P(t+1)W\big),
\end{aligned}
$$

where $P(t)$ and $L(t)$ are given in Lemma 1.

### 3.2 Optimal Controller Derivation

Based on the information constraints in (8), we want to find the controllers restricted to the following structure:

$$
u_1(t)=f_1\big(x_1(0:t)\big),\qquad u_2(t)=f_2\big(x(0:t)\big), \tag{12}
$$

where $f_1$ and $f_2$ denote linear functions of their arguments.

To derive the optimal controller, we will first consider a finite-horizon version of the problem with the cost function given by

$$
J=\mathbf{E}\,x^T(N)Qx(N)+\mathbf{E}\sum_{t=0}^{N-1}\big(x^T(t)Qx(t)+u^T(t)Ru(t)\big).
$$

To find a structure for the controllers, we decompose the state variable into two independent terms as

$$
x(t)=z^1(t)+z^2(t),
$$

where $z^1(t):=\mathbf{E}\{x(t)\,|\,x_1(0:t)\}$ and $z^2(t):=x(t)-z^1(t)$. The term $z^1(t)$ is the conditional estimate of $x(t)$ given the information shared between the controllers, namely $x_1(0:t)$, and $z^2(t)$ is the estimation error. Let these vectors be partitioned as $z^i(t)=\big[(z^i_1(t))^T\ \ (z^i_2(t))^T\big]^T$. Clearly, the first component of $z^1(t)$ is $x_1(t)$. Hence

$$
z^1(t)=\begin{bmatrix}x_1(t)\\ z^1_2(t)\end{bmatrix},\qquad
z^2(t)=\begin{bmatrix}0\\ z^2_2(t)\end{bmatrix}.
$$

Analogously, the control input is decomposed as $u(t)=u^1(t)+u^2(t)$, where $u^1(t)$ and $u^2(t)$ are independent terms defined by

$$
u^1(t):=\mathbf{E}\{u(t)\,|\,x_1(0:t)\},\qquad u^2(t):=u(t)-u^1(t).
$$
###### Lemma 2

The update equations for $z^1(t)$ and $z^2(t)$ are given by

$$
\begin{aligned}
z^1(t+1)&=Az^1(t)+Bu^1(t)+\begin{bmatrix}w_1(t)\\ 0\end{bmatrix},\\
z^2(t+1)&=Az^2(t)+Bu^2(t)+\begin{bmatrix}0\\ w_2(t)\end{bmatrix}.
\end{aligned}
$$
*Proof.* See Appendix.

Now, considering $u(t)$ on the form given by (12), we find that

$$
u^1(t)=\mathbf{E}\{u(t)\,|\,x_1(0:t)\}
=\mathbf{E}\left\{\begin{bmatrix}f_1\big(x_1(0:t)\big)\\ f_2\big(x(0:t)\big)\end{bmatrix}\,\middle|\,x_1(0:t)\right\}
=\begin{bmatrix}f_1\big(x_1(0:t)\big)\\ f_2\big(z^1(0:t)\big)\end{bmatrix},
$$

where the last equality follows from the linearity of $f_2$ and the fact that $\mathbf{E}\{x(0:t)\,|\,x_1(0:t)\}=z^1(0:t)$. Thus, $u^2(t)$ has the structure

$$
u^2(t)=\begin{bmatrix}0\\ f_2\big(z^2(0:t)\big)\end{bmatrix}.
$$

By partitioning these vectors as $u^i(t)=\big[(u^i_1(t))^T\ \ (u^i_2(t))^T\big]^T$, it can be seen that $u^2_1(t)=0$, so the control input for subsystem 1 is given as the first component of the vector $u^1(t)$, while subsystem 2's input is separated into the two independent terms $u^1_2(t)$ and $u^2_2(t)$. In other words, we have

$$
\begin{bmatrix}u_1(t)\\ u_2(t)\end{bmatrix}
=\underbrace{\begin{bmatrix}u^1_1(t)\\ u^1_2(t)\end{bmatrix}}_{u^1(t)}
+\underbrace{\begin{bmatrix}0\\ u^2_2(t)\end{bmatrix}}_{u^2(t)}.
$$

Decomposing the states and inputs into independent terms, and having $u^1(t)$ and $u^2(t)$ given as functions of $z^1(0:t)$ and $z^2(0:t)$ (which are independent), implies that the pairs $\big(z^1(t),u^1(t)\big)$ and $\big(z^2(t),u^2(t)\big)$ are independent. As a result, $J$ can be decomposed as

$$
J=\underbrace{\mathbf{E}\Big\{\big(z^1(N)\big)^TQz^1(N)+\sum_{t=0}^{N-1}\big(z^1(t)\big)^TQz^1(t)+\big(u^1(t)\big)^TRu^1(t)\Big\}}_{J_1}
+\underbrace{\mathbf{E}\Big\{\big(z^2(N)\big)^TQz^2(N)+\sum_{t=0}^{N-1}\big(z^2(t)\big)^TQz^2(t)+\big(u^2(t)\big)^TRu^2(t)\Big\}}_{J_2}.
$$

Note that having $z^2_1(t)$ and $u^2_1(t)$ equal to zero implies that only the second component of $z^2(t)$ is nonzero. The dynamics of this component can be written as

$$
z^2_2(t+1)=A_{22}z^2_2(t)+B_2u^2_2(t)+w_2(t).
$$

Noting that $w(t)$ is independent of $z^i(t)$ and $u^i(t)$, $i=1,2$, we can apply Corollary 1 to transform $J_1$ and $J_2$:

$$
\begin{aligned}
J=\ &\mathbf{E}\sum_{t=0}^{N-1}\big(u^1(t)+L_1(t)z^1(t)\big)^T\big(B^TX(t+1)B+R\big)\big(u^1(t)+L_1(t)z^1(t)\big)\\
&+\mathbf{E}\sum_{t=0}^{N-1}\big(u^2_2(t)+L_2(t)z^2_2(t)\big)^T\big(B_2^TY(t+1)B_2+R_{22}\big)\big(u^2_2(t)+L_2(t)z^2_2(t)\big)\\
&+\sum_{t=0}^{N-1}\left(\operatorname{Tr}\Big(X(t+1)\begin{bmatrix}W_1&0\\ 0&0\end{bmatrix}\Big)+\operatorname{Tr}\big(Y(t+1)W_2\big)\right),
\end{aligned}
\tag{13}
$$

where we also used the fact that the noise terms entering $z^1(t)$ and $z^2_2(t)$ have covariances $\operatorname{blkdiag}(W_1,0)$ and $W_2$, respectively. The matrices $X(t)$ and $Y(t)$ are computed recursively by

$$
\begin{aligned}
X(t)&=A^TX(t+1)A+Q-A^TX(t+1)B\big(B^TX(t+1)B+R\big)^{-1}B^TX(t+1)A,\\
Y(t)&=A_{22}^TY(t+1)A_{22}+Q_{22}-A_{22}^TY(t+1)B_2\big(B_2^TY(t+1)B_2+R_{22}\big)^{-1}B_2^TY(t+1)A_{22},
\end{aligned}
$$

with the end conditions $X(N)=Q$, $Y(N)=Q_{22}$. The gain matrices $L_1(t)$ and $L_2(t)$ are given by

$$
\begin{aligned}
L_1(t)&=\big(R+B^TX(t+1)B\big)^{-1}B^TX(t+1)A,\\
L_2(t)&=\big(R_{22}+B_2^TY(t+1)B_2\big)^{-1}B_2^TY(t+1)A_{22}.
\end{aligned}
$$

Quadratic minimization of (13) simply gives the optimal inputs $u^1(t)$ and $u^2_2(t)$ as

$$
u^{1*}(t)=-L_1(t)z^1(t),\qquad u^{2*}_2(t)=-L_2(t)z^2_2(t).
$$

Let $X(t)$ be partitioned into appropriately sized blocks, $X=[X_{ij}]$, $i,j=1,2$; then the optimal cost becomes

$$
J^*=\sum_{t=0}^{N-1}\big(\operatorname{Tr}\big(X_{11}(t+1)W_1\big)+\operatorname{Tr}\big(Y(t+1)W_2\big)\big).
$$

To find a mapping from these optimal inputs to the original control inputs, let $L_1(t)$ be partitioned into the blocks $L_{ij}$, $i,j=1,2$, so we get the control action on the form

$$
u^{1*}_1(t)=-L_{11}x_1(t)-L_{12}z^1_2(t),\qquad
u^{1*}_2(t)=-L_{21}x_1(t)-L_{22}z^1_2(t),
$$

and the update equation for $z^1_2(t)$ becomes

$$
z^1_2(t+1)=(A_{22}-B_2L_{22})z^1_2(t)+(A_{21}-B_2L_{21})x_1(t).
$$

Finally, noting that $z^2_2(t)$ is given by $x_2(t)-z^1_2(t)$, the optimal controller can be rewritten on the form

$$
\begin{bmatrix}u^*_1(t)\\ u^*_2(t)\end{bmatrix}
=-L_1(t)\begin{bmatrix}x_1(t)\\ z^1_2(t)\end{bmatrix}
-\begin{bmatrix}0\\ L_2(t)\end{bmatrix}\big(x_2(t)-z^1_2(t)\big).
$$

Having derived the time-varying representation of the controllers, we now let $N$ go to infinity and obtain the steady-state form of the controller. Given that the pairs $(A,B)$ and $(A_{22},B_2)$ are stabilizable, and the pairs $(Q^{1/2},A)$ and $(Q_{22}^{1/2},A_{22})$ are detectable, $X(t)$ and $Y(t)$ converge to the unique stabilizing solutions of the corresponding algebraic Riccati equations and, as a result, $L_1(t)$ and $L_2(t)$ tend to the steady-state values $L_1$ and $L_2$ given in Theorem 1. This yields the controller representation given in the theorem.

Finally, the optimal cost is computed as

$$
\lim_{N\to\infty}\frac{1}{N}\sum_{t=0}^{N-1}\big(\operatorname{Tr}(X_{11}(t+1)W_1)+\operatorname{Tr}(Y(t+1)W_2)\big)
=\operatorname{Tr}(X_{11}W_1)+\operatorname{Tr}(YW_2).
$$

### 3.3 Discussion

The state vector $\big[x_1^T(t)\ \ \eta^T(t)\big]^T$ is fed into the controller through a lower-triangular gain matrix, and hence $u_1(t)$ does not depend on $x_2(t)$.

Note that $\eta(t)$ (the same variable as in Theorem 1) is the minimum mean-square estimate of $x_2(t)$ based on $x_1(0:t)$, that is, $\eta(t)=\mathbf{E}\{x_2(t)\,|\,x_1(0:t)\}$. Therefore, $x_2(t)-\eta(t)$ represents the error of this estimation.

For convenience, let $\hat{x}_{2|1}(t)$ denote the estimate of $x_2(t)$ based on the history of $x_1$, and let $e_{2|1}(t)=x_2(t)-\hat{x}_{2|1}(t)$ represent the estimation error. Then we can write the controllers on a more intuitive form:

$$
\begin{aligned}
u^*_1(t)&=-L_{11}x_1(t)-L_{12}\hat{x}_{2|1}(t),\\
u^*_2(t)&=-L_{21}x_1(t)-L_{22}\hat{x}_{2|1}(t)-L_2e_{2|1}(t).
\end{aligned}
$$

Thus, both controllers use $\hat{x}_{2|1}(t)$ in place of $x_2(t)$, as in an optimal centralized controller; controller 2, however, contains an additional term constructed from the estimation error $e_{2|1}(t)$.

We see that the order of each controller is equal to the state dimension of subsystem 2. It is easy to see that in a centralized information pattern, where the value of $x_2(t)$ is known to controller 1, the error term disappears and the controller reduces to a static gain, similar to a classical linear quadratic regulator.
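To illustrate the closed loop of Theorem 1, the following sketch simulates the controller together with its internal estimator state $\eta$ on a noise-free rollout. The system matrices are placeholder toy numbers, not vehicle parameters:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder two-vehicle data (illustrative only).
A = np.array([[0.95, 0.0,  0.0],
              [1.0,  1.0, -1.0],
              [0.0,  0.05, 0.9]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
Q, R = np.eye(3), np.eye(2)
A22, B2 = A[1:, 1:], B[1:, 1:2]
A21 = A[1:, :1]

X = solve_discrete_are(A, B, Q, R)
Y = solve_discrete_are(A22, B2, Q[1:, 1:], R[1:, 1:])
L1 = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
L2 = np.linalg.solve(R[1:, 1:] + B2.T @ Y @ B2, B2.T @ Y @ A22)
L21, L22 = L1[1:2, :1], L1[1:2, 1:]

x = np.array([1.0, 0.5, -0.5])   # [x1; x2], x2 two-dimensional
eta = np.zeros(2)                # controller state: estimate of x2 given x1

for t in range(1000):
    # u = -L1 [x1; eta] - [0; L2](x2 - eta)          (Theorem 1)
    u = -L1 @ np.concatenate(([x[0]], eta))
    u[1] -= (L2 @ (x[1:] - eta))[0]
    # eta(t+1) = (A22 - B2 L22) eta + (A21 - B2 L21) x1
    eta = (A22 - B2 @ L22) @ eta + (A21 - B2 @ L21) @ x[:1]
    x = A @ x + B @ u            # noise-free rollout for illustration

closed_loop_norm = float(np.linalg.norm(x))
```

Under the stabilizability and detectability assumptions, both the estimation error $x_2-\eta$ and the state decay to zero, so `closed_loop_norm` is essentially zero after the rollout.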

## 4 Three-Vehicle Chain

The optimal controller synthesis for the three-vehicle version of the problem will be studied here. This section extends the result of Theorem 1 to three interconnected subsystems. Although the approach is similar, here the information available to the controllers shall be decomposed into three components instead of two, and hence the cost function will be decomposed accordingly. Since the scheme has been explained in detail in Section 3, a more concise derivation will be given here.

### 4.1 Main Result

###### Theorem 2

Assume that

1. $(A,B)$, $(\tilde{A},\tilde{B})$, and $(A_{33},B_3)$ are stabilizable,

2. $(Q^{1/2},A)$, $(\tilde{Q}^{1/2},\tilde{A})$, and $(Q_{33}^{1/2},A_{33})$ are detectable.

Then, the optimal controller for the three-vehicle chain is given by:

$$
\begin{aligned}
\begin{bmatrix}\eta_1(t+1)\\ \eta_2(t+1)\end{bmatrix}
&=(A-BL_1)_{[2:3,\,1:3]}\begin{bmatrix}x_1(t)\\ \eta_1(t)\\ \eta_2(t)\end{bmatrix},\\
\eta_3(t+1)&=(\tilde{A}-\tilde{B}L_2)_{[2,\,1:2]}\begin{bmatrix}x_2(t)-\eta_2(t)\\ \eta_3(t)\end{bmatrix},\\
\begin{bmatrix}u_1(t)\\ u_2(t)\\ u_3(t)\end{bmatrix}
&=-L_1\begin{bmatrix}x_1(t)\\ \eta_1(t)\\ \eta_2(t)\end{bmatrix}
-\begin{bmatrix}0\\ L_2\begin{bmatrix}x_2(t)-\eta_1(t)\\ \eta_3(t)\end{bmatrix}\end{bmatrix}
-\begin{bmatrix}0\\ 0\\ L_3\big(x_3(t)-\eta_2(t)-\eta_3(t)\big)\end{bmatrix},
\end{aligned}
$$

and the optimal cost is

$$
\operatorname{Tr}(X^1_{11}W_1)+\operatorname{Tr}(X^2_{11}W_2)+\operatorname{Tr}(X^3W_3).
$$

The matrices $X^1$, $X^2$, and $X^3$ are the positive semidefinite stabilizing solutions to the Riccati equations

$$
\begin{aligned}
X^1&=A^TX^1A+Q-A^TX^1B\big(B^TX^1B+R\big)^{-1}B^TX^1A,\\
X^2&=\tilde{A}^TX^2\tilde{A}+\tilde{Q}-\tilde{A}^TX^2\tilde{B}\big(\tilde{B}^TX^2\tilde{B}+\tilde{R}\big)^{-1}\tilde{B}^TX^2\tilde{A},\\
X^3&=A_{33}^TX^3A_{33}+Q_{33}-A_{33}^TX^3B_3\big(B_3^TX^3B_3+R_{33}\big)^{-1}B_3^TX^3A_{33},
\end{aligned}
$$

where $\tilde{A}=A_{[2:3,2:3]}$, $\tilde{B}=B_{[2:3,2:3]}$, $\tilde{Q}=Q_{[2:3,2:3]}$, and $\tilde{R}=R_{[2:3,2:3]}$. The matrix