# Scalable computation for optimal control of cascade systems with constraints

## Abstract

A method is devised for numerically solving a class of finite-horizon optimal control problems subject to cascade linear discrete-time dynamics. It is assumed that the linear state and input inequality constraints, and the quadratic measure of performance, are all separable with respect to the spatial dimension of the underlying cascade of sub-systems, as well as the temporal dimension of the dynamics. By virtue of this structure, the computation cost of an interior-point method for an equivalent quadratic programming formulation of the optimal control problem can be made to scale linearly with the number of sub-systems. However, the complexity of this approach grows cubically with the time horizon. As such, computational advantage becomes apparent in situations where the number of sub-systems is relatively large. In any case, the method is amenable to distributed computation with low communication overhead and only immediate upstream neighbour sharing of partial model data among processing agents. An example is presented to illustrate an application of the main results to model data for the cascade dynamics of an automated irrigation channel.

## 1 Introduction

The application of model predictive control involves solving finite-horizon optimal control problems in a receding horizon fashion [1, 2, 3, 4]. When the penalty function used to quantify performance and the inequality constraints on the system states and inputs all separate along the prediction horizon, additional structure in the equality constraints that encode the system dynamics can be exploited to devise efficient methods for computing the solutions. In particular, methods with computation costs that scale linearly with the time horizon and cubically with the number of states and inputs are well-known [5, 6, 7, 8, 9]. The cubic scaling of these methods in the spatial dimension of the problem, however, can be a limiting factor within the context of controlling large-scale interconnections of sub-systems. In this paper, interconnection structure is exploited over temporal structure to devise a more scalable method for problems with cascade dynamics in particular; i.e., when the system to control is the series interconnection of numerous sub-systems, each with a control input, an interconnection input, and an interconnection output. Such models arise in the study of irrigation and drainage networks [10, 11, 12], multi-reservoir and hydro-power systems [13, 14], vehicle platoons [15], and supply chain management [16], for example.

The proposed method for solving finite-horizon optimal control problems with cascade dynamics is closely related to the interior-point method developed in [5, 6]. Interchanging the roles of the temporal and spatial dimensions of such problems yields linear scaling of computation cost with the number of sub-systems along the cascade. However, the complexity grows cubically with the time horizon, despite the causal flow of information in the temporal dimension. The development illuminates the difficulty of overcoming such cubic dependence. Computational advantage over methods that exploit temporal structure, rather than the spatial structure exploited here, arises when the length of the cascade is relatively large compared to the prediction horizon. In any case, the method is amenable to distributed computation over a linear network of processing agents, one for each sub-system, with limited neighbour-to-neighbour communication, and only partial sharing of model information between neighbouring agents.

The paper is organized as follows. The class of finite-horizon optimal control problems studied is defined in Section 2.1. The formulation of an equivalent quadratic program (QP) is given in Section 2.2. A scalable interior-point method for computing an optimal solution of the structured QP is developed in Section 3. Proofs are deferred to the Appendix. Finally, a numerical example based on model data for an automated irrigation channel is presented in Section 4. Some concluding remarks are provided in Section 5.

## 2 Problem Formulation

A class of finite-horizon optimal control problems with cascade dynamics is defined in this section. In addition to the directed interconnection structure of the underlying cascade of sub-systems, a defining characteristic of this class is the separability of the state and input inequality constraints and performance index across both the spatial and temporal dimensions of the system dynamics. An equivalent QP with computationally favorable structure is formulated in Section 2.2.

### 2.1 Constrained finite-horizon optimal control

Consider the cascade of linear discrete-time dynamical sub-systems modelled by

$$x_j(t+1) = A_j(t)\,x_j(t) + B_j(t)\,u_j(t) + E_j(t)\,x_{j-1}(t), \tag{1}$$

given initial conditions $x_j(0) \in \mathbb{R}^{n_j}$ and model data $A_j(t) \in \mathbb{R}^{n_j \times n_j}$, $B_j(t) \in \mathbb{R}^{n_j \times m_j}$, and $E_j(t) \in \mathbb{R}^{n_j \times n_{j-1}}$, with $E_1(t) = 0$ so that the spatial boundary value $x_0(t)$ is effectively zero, for $j = 1, \dots, N$ and $t = 0, \dots, T-1$. The parameter $T \in \mathbb{N}$ is a specified time (or prediction) horizon. The problem of interest is to set the control inputs $u_j(t) \in \mathbb{R}^{m_j}$, for each sub-system index $j = 1, \dots, N$ and sample time indexed by $t = 0, \dots, T-1$, in order to minimize the separable penalty function

$$J = \frac{1}{2}\sum_{j=1}^{N}\left(\sum_{t=0}^{T} x_j(t)^\top Q_j(t)\,x_j(t) + \sum_{t=0}^{T-1}\big(u_j(t)^\top R_j(t)\,u_j(t) + 2\,u_j(t)^\top S_j(t)\,x_j(t)\big)\right), \tag{2}$$

subject to separable inequality constraints

$$M_j(t)\,x_j(t) + L_j(t)\,u_j(t) \le c_j(t) \quad\text{for } j = 1, \dots, N, \text{ and } t = 0, \dots, T, \tag{3}$$

where

$$\begin{bmatrix} Q_j(t) & S_j(t)^\top \\ S_j(t) & R_j(t) \end{bmatrix} \succeq 0, \tag{4}$$

$Q_j(t) = Q_j(t)^\top$, and $R_j(t) = R_j(t)^\top$, with $Q_j(t) \in \mathbb{R}^{n_j \times n_j}$, $S_j(t) \in \mathbb{R}^{m_j \times n_j}$, $R_j(t) \in \mathbb{R}^{m_j \times m_j}$, $M_j(t) \in \mathbb{R}^{\nu_j \times n_j}$, $L_j(t) \in \mathbb{R}^{\nu_j \times m_j}$, and $c_j(t) \in \mathbb{R}^{\nu_j}$, are given for $j = 1, \dots, N$ and $t = 0, \dots, T$. Note that (4) implies $Q_j(t) \succeq 0$ and $R_j(t) \succeq 0$, since these are principal sub-blocks of (4).
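Before turning to the QP reformulation, the cascade recursion (1) can be made concrete with a short simulation. The sketch below assumes scalar sub-systems and illustrative coefficient values (not the paper's model data), and takes the spatial boundary value to be zero, as assumed above.

```python
# Sketch: simulate the cascade dynamics (1) for N scalar sub-systems,
# x_j(t+1) = A_j x_j(t) + B_j u_j(t) + E_j x_{j-1}(t), with x_0(t) = 0.
# All coefficient values here are illustrative assumptions.

def simulate_cascade(A, B, E, x0, u, T):
    """A, B, E: per-sub-system coefficients (E[0] is unused since x_0 = 0);
    x0: initial states; u: u[j][t] control inputs. Returns x[j][t]."""
    N = len(x0)
    x = [[x0[j]] + [0.0] * T for j in range(N)]
    for t in range(T):
        for j in range(N):
            upstream = x[j - 1][t] if j > 0 else 0.0  # spatial boundary is zero
            x[j][t + 1] = A[j] * x[j][t] + B[j] * u[j][t] + E[j] * upstream
    return x

# Two sub-systems, horizon T = 3, constant unit inputs.
x = simulate_cascade(A=[0.9, 0.8], B=[1.0, 1.0], E=[0.0, 0.5],
                     x0=[1.0, 0.0], u=[[1.0] * 3, [1.0] * 3], T=3)
```

Note how the state of sub-system $j$ at time $t+1$ depends only on its own state and input, and on the immediately upstream state at time $t$, which is the structure exploited throughout.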

It is possible to reformulate the optimal control problem defined above in a number of ways that result in standard QPs. Following the style of QP reformulation in [5] leads to an interior-point method involving the solution of linear algebra problems with favourable block tridiagonal structure. This is exploited in Section 3 to devise an algorithm with per-iteration computation cost that scales linearly with the cascade length $N$.

### 2.2 A QP formulation

First note that the equality constraint corresponding to the dynamics (1) can be reformulated as follows. Define, for $j = 1, \dots, N$,

$$\hat x_j = \begin{bmatrix} x_j(0)^\top & \cdots & x_j(T)^\top \end{bmatrix}^\top \in \mathbb{R}^{n_j(T+1)}, \qquad \hat u_j = \begin{bmatrix} u_j(0)^\top & \cdots & u_j(T-1)^\top \end{bmatrix}^\top \in \mathbb{R}^{m_j T},$$
with $\hat H_j = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix}^\top$ and $\xi_j = x_j(0)$, so that $\hat H_j \xi_j$ encodes the initial condition.

Then

$$-\hat A_j \hat x_j + \hat E_j \hat x_{j-1} + \hat B_j \hat u_j + \hat H_j \xi_j = 0, \tag{5}$$

where

$$\hat A_j = \begin{bmatrix} I & 0 & \cdots & \cdots & 0 \\ -A_j(0) & I & \ddots & & \vdots \\ 0 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -A_j(T-1) & I \end{bmatrix}, \qquad \hat E_j = \begin{bmatrix} 0 & \cdots & \cdots & \cdots & 0 \\ E_j(0) & \ddots & & & \vdots \\ 0 & E_j(1) & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & E_j(T-1) & 0 \end{bmatrix},$$

for $j = 1, \dots, N$. The boundary condition for (5) is effectively zero as $\hat E_1 = 0$. Moreover, with

$$\begin{aligned} \hat Q_j &= \operatorname{diag}(Q_j(0), \dots, Q_j(T)) \in \mathbb{R}^{n_j(T+1) \times n_j(T+1)}, \\ \hat R_j &= \operatorname{diag}(R_j(0), \dots, R_j(T-1)) \in \mathbb{R}^{m_j T \times m_j T}, \\ \hat S_j &= \begin{bmatrix} \operatorname{diag}(S_j(0), \dots, S_j(T-1)) & 0 \end{bmatrix} \in \mathbb{R}^{m_j T \times n_j(T+1)}, \\ \hat M_j &= \operatorname{diag}(M_j(0), \dots, M_j(T)) \in \mathbb{R}^{\nu_j(T+1) \times n_j(T+1)}, \\ \hat L_j &= \begin{bmatrix} \operatorname{diag}(L_j(0), \dots, L_j(T-1))^\top & 0 \end{bmatrix}^\top \in \mathbb{R}^{\nu_j(T+1) \times m_j T}, \end{aligned}$$

and , the performance index and constraints can be reformulated as

$$J = \frac{1}{2}\sum_{j=1}^{N}\big(\hat x_j^\top \hat Q_j \hat x_j + \hat u_j^\top \hat R_j \hat u_j + 2\,\hat u_j^\top \hat S_j \hat x_j\big) \tag{6}$$

and

$$\hat M_j \hat x_j + \hat L_j \hat u_j - \hat c_j \le 0 \quad\text{for } j = 1, \dots, N, \tag{7}$$

respectively. To summarize, the problem of minimizing (2) subject to the equality constraints (1), and inequality constraints (3), is now in the form of the following standard QP:

$$\min_{\substack{(\hat u_1, \dots, \hat u_N)\,\in\,\mathbb{R}^{m_1 T} \times \cdots \times \mathbb{R}^{m_N T} \\ (\hat x_1, \dots, \hat x_N)\,\in\,\mathbb{R}^{n_1(T+1)} \times \cdots \times \mathbb{R}^{n_N(T+1)}}} \text{(6)} \quad \text{subject to (5) and (7).}$$
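As a sanity check on the lifted form, the following sketch assembles $\hat A_j$, $\hat E_j$, and $\hat B_j$ for a single scalar sub-system with illustrative data, and verifies that the residual $-\hat A_j \hat x_j + \hat E_j \hat x_{j-1} + \hat B_j \hat u_j$ vanishes along a trajectory of (1) except in the first block row, which is the part carried by the $\hat H_j \xi_j$ term in (5).

```python
# Sketch: build the stacked matrices of (5) for one scalar sub-system and
# check the residual of a trajectory of (1). All data values are
# illustrative assumptions.

def stack_dynamics(A, B, E, T):
    """A[t], B[t], E[t] for t = 0..T-1, scalar model data. Returns the
    (T+1)x(T+1) matrices A_hat, E_hat and the (T+1)xT matrix B_hat."""
    A_hat = [[0.0] * (T + 1) for _ in range(T + 1)]
    E_hat = [[0.0] * (T + 1) for _ in range(T + 1)]
    B_hat = [[0.0] * T for _ in range(T + 1)]
    for t in range(T + 1):
        A_hat[t][t] = 1.0                 # identity block diagonal
    for t in range(T):
        A_hat[t + 1][t] = -A[t]           # -A_j(t) on the sub-diagonal
        E_hat[t + 1][t] = E[t]            # E_j(t) on the sub-diagonal
        B_hat[t + 1][t] = B[t]
    return A_hat, E_hat, B_hat

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

T = 2
A_hat, E_hat, B_hat = stack_dynamics(A=[0.9, 0.9], B=[1.0, 1.0], E=[0.4, 0.4], T=T)
x_prev = [1.0, 2.0, 0.0]                  # upstream trajectory x_{j-1}(0..T)
u = [1.0, 1.0]
xt = [2.0]                                # trajectory of (1) under these inputs
for t in range(T):
    xt.append(0.9 * xt[t] + 1.0 * u[t] + 0.4 * x_prev[t])
r = [-a + e + b for a, e, b in zip(matvec(A_hat, xt),
                                   matvec(E_hat, x_prev),
                                   matvec(B_hat, u))]
# Rows 1..T of r vanish; row 0 equals -x_j(0), offset by H_hat xi_j in (5).
```

The first block row of $\hat A_j$ being the identity is exactly why $\hat H_j \xi_j$ reduces to the initial condition.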

## 3 Developing an interior-point method that scales linearly with cascade length

A primal-dual interior-point method for solving a QP involves the application of Newton’s iterative method to solve the Karush–Kuhn–Tucker (KKT) conditions [17, 18]. For the case at hand, the KKT conditions take the following form, for $j = 1, \dots, N$, where $\hat x_0 = 0$ and $p_{N+1} = 0$ (since $\hat E_1 = 0$ and $\hat E_{N+1} = 0$), $p_j$ denotes the multiplier for the equality constraint (5), $\lambda_j$ and $\theta_j$ denote the multiplier and slack for the inequality constraint (7), $\Lambda_j = \operatorname{diag}(\lambda_j)$, $\Theta_j = \operatorname{diag}(\theta_j)$, and $\mathbf{1}$ denotes a column vector of ones:

$$\begin{aligned} \hat Q_j \hat x_j + \hat S_j^\top \hat u_j - \hat A_j^\top p_j + \hat M_j^\top \lambda_j + \hat E_{j+1}^\top p_{j+1} &= 0; \\ \hat S_j \hat x_j + \hat R_j \hat u_j + \hat B_j^\top p_j + \hat L_j^\top \lambda_j &= 0; \\ -\hat A_j \hat x_j + \hat E_j \hat x_{j-1} + \hat B_j \hat u_j + \hat H_j \xi_j &= 0; \\ \hat M_j \hat x_j + \hat L_j \hat u_j - \hat c_j + \theta_j &= 0; \\ \Lambda_j \Theta_j \mathbf{1} &= 0; \\ \begin{bmatrix} \lambda_j^\top & \theta_j^\top \end{bmatrix}^\top &\ge 0. \end{aligned}$$

Given an initialization $s[0]$, with $\lambda_j[0] > 0$ and $\theta_j[0] > 0$ for $j = 1, \dots, N$, the iteration at Newton step $k \in \mathbb{N}$ is given by

$$s[k+1] = s[k] + \alpha[k]\,\delta[k], \tag{8}$$

where $\alpha[k] \in (0, 1]$ is a sufficiently small step-size parameter selected online so that $\lambda_j[k+1] > 0$ and $\theta_j[k+1] > 0$, and $\delta[k] = (\delta_1[k], \dots, \delta_N[k])$ is the solution of the linearized KKT conditions

$$\begin{bmatrix} D_1[k] & -\Upsilon_2^\top & 0 & \cdots & 0 \\ -\Upsilon_2 & D_2[k] & -\Upsilon_3^\top & \ddots & \vdots \\ 0 & -\Upsilon_3 & \ddots & \ddots & 0 \\ \vdots & & \ddots & D_{N-1}[k] & -\Upsilon_N^\top \\ 0 & \cdots & 0 & -\Upsilon_N & D_N[k] \end{bmatrix} \begin{bmatrix} \delta_1[k] \\ \delta_2[k] \\ \vdots \\ \vdots \\ \delta_N[k] \end{bmatrix} = \begin{bmatrix} \rho_1[k] \\ \rho_2[k] \\ \vdots \\ \vdots \\ \rho_N[k] \end{bmatrix}, \tag{9}$$

and the following hold for $j = 1, \dots, N$, with $\Upsilon_{N+1} = 0$ (since $\hat E_{N+1} = 0$):

$$D_j[k] = \begin{bmatrix} \hat Q_j & \hat S_j^\top & -\hat A_j^\top & \hat M_j^\top & 0 \\ \hat S_j & \hat R_j & \hat B_j^\top & \hat L_j^\top & 0 \\ -\hat A_j & \hat B_j & 0 & 0 & 0 \\ \hat M_j & \hat L_j & 0 & 0 & I \\ 0 & 0 & 0 & \Theta_j[k] & \Lambda_j[k] \end{bmatrix}; \tag{10}$$
$$\Upsilon_j = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ -\hat E_j & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}; \tag{11}$$
$$\rho_j[k] = -\Upsilon_j s_{j-1}[k] + D_j[k]\,s_j[k] - \Upsilon_{j+1}^\top s_{j+1}[k] + \begin{bmatrix} 0^\top & 0^\top & (\hat H_j \xi_j)^\top & \hat c_j^\top & \sigma_j[k]^\top \end{bmatrix}^\top; \tag{12}$$

$s_j[k] = (\hat x_j[k], \hat u_j[k], p_j[k], \lambda_j[k], \theta_j[k])$; $s_0[k] = 0$; $\sigma_j[k] = \gamma\,\mu[k]\,\mathbf{1}$, where $\gamma \in (0, 1)$ is a centring parameter; and $\mu[k]$ is the duality gap. Given an appropriate initialization $s[0]$, the linear equation (9) has a unique solution for each iteration (8), as seen below. Construction of this solution is facilitated by the block tridiagonal structure; e.g., see [19] and [20]. Indeed, the computation cost of solving the set of equations (9) at each iteration can be made to scale linearly with $N$; ignoring structure would incur order $N^3$ complexity. Proofs of the following results, which underpin this assertion, can be found in the Appendix.
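The online selection of the step size in (8), which must keep the multipliers and slacks strictly positive, can be sketched with a standard fraction-to-the-boundary rule. The safety factor of 0.995 below is a typical choice in interior-point implementations, not one specified here.

```python
# Sketch: fraction-to-the-boundary step-size rule for the update (8).
# Returns the largest alpha in (0, 1] such that the updated lambda and
# theta stay strictly positive; eta = 0.995 is an assumed safety factor.

def step_size(lam, theta, d_lam, d_theta, eta=0.995):
    alpha = 1.0
    for v, dv in list(zip(lam, d_lam)) + list(zip(theta, d_theta)):
        if dv < 0.0:                      # only components moving toward zero bind
            alpha = min(alpha, -eta * v / dv)
    return alpha

alpha = step_size(lam=[1.0, 0.5], theta=[2.0], d_lam=[-2.0, 0.1], d_theta=[1.0])
```

With these illustrative values, only the first multiplier moves toward zero, so it alone determines the step.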

###### Lemma 1.

Dropping explicit dependence on the algorithm iteration index $k$, the unique solution of (9) is given by the backward and forward recursions

$$\tilde\rho_j = \begin{cases} \rho_N & \text{for } j = N \\ \rho_j + \Upsilon_{j+1}^\top \Sigma_{j+1}^{-1}\tilde\rho_{j+1} & \text{for } j = N-1, \dots, 1, \end{cases} \quad\text{and}\quad \delta_j = \begin{cases} \Sigma_1^{-1}\tilde\rho_1 & \text{for } j = 1 \\ \Sigma_j^{-1}(\tilde\rho_j + \Upsilon_j \delta_{j-1}) & \text{for } j = 2, \dots, N, \end{cases}$$

respectively, where

$$\Sigma_j = \begin{cases} D_N & \text{for } j = N \\ D_j - \Upsilon_{j+1}^\top \Sigma_{j+1}^{-1} \Upsilon_{j+1} & \text{for } j = N-1, \dots, 1. \end{cases} \tag{13}$$

In particular, each $\Sigma_j$, where $j \in \{1, \dots, N\}$, is non-singular as required.
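To illustrate the mechanics of Lemma 1, the following sketch implements the backward and forward sweeps for the special case of scalar blocks, so that (9) reduces to a symmetric tridiagonal system and each step costs constant work, making the sweep linear in $N$. This illustrates the structure of the recursions only, not the per-block cost discussed in Remark 1; all values are illustrative.

```python
# Sketch: the backward/forward recursions of Lemma 1 with scalar blocks
# D[j] and U[j] (standing in for D_j and Upsilon_j), so Sigma[j] and the
# solution components are numbers. The coefficient matrix has D[j] on the
# diagonal and -U[j] on the sub- and super-diagonals, as in (9).

def solve_block_tridiagonal(D, U, rho):
    """Solve (9) for scalar blocks; U[0] is unused (no upstream neighbour)."""
    N = len(D)
    Sigma = [0.0] * N
    rho_t = list(rho)
    Sigma[N - 1] = D[N - 1]
    for j in range(N - 2, -1, -1):        # backward sweep, cf. (13)
        Sigma[j] = D[j] - U[j + 1] ** 2 / Sigma[j + 1]
        rho_t[j] = rho[j] + U[j + 1] * rho_t[j + 1] / Sigma[j + 1]
    delta = [0.0] * N
    delta[0] = rho_t[0] / Sigma[0]
    for j in range(1, N):                 # forward sweep
        delta[j] = (rho_t[j] + U[j] * delta[j - 1]) / Sigma[j]
    return delta

delta = solve_block_tridiagonal(D=[4.0, 3.0, 5.0], U=[0.0, 1.0, 2.0],
                                rho=[1.0, 2.0, 3.0])
```

Substituting the result back into the tridiagonal system recovers the right-hand side, confirming the sweep.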

###### Remark 1.

Using Lemma 1 to solve (9) incurs a computation cost that scales linearly with the number of sub-systems $N$. Unfortunately, the $(3,3)$-block of $\Sigma_j^{-1}$ (c.f. $(\Sigma_{j+1}^{-1})_{33}$ in the proof of Lemma 1) becomes a full matrix for $j < N$, despite the sparsity of $\hat Q_j$, $\hat R_j$, $\hat S_j$, $\hat A_j$, $\hat B_j$, $\hat E_j$, $\hat M_j$, $\hat L_j$, and $\hat H_j$. Therefore, inverting $\Sigma_j$ in the recursions above incurs a cost that scales cubically with the time horizon $T$, yielding an overall computation cost of order $N T^3$.

The following result encapsulates a method for inverting the matrix $\Sigma_j$. This method involves the inverses of block diagonal and block bi-diagonal matrices of order $T$, and the inverse of one unstructured positive-definite matrix; note that typically the input dimension $m_j$ is smaller than $n_j$ and $\nu_j$. The computation cost is order $T$ and $T^3$, respectively, for an overall cost that scales cubically with the time horizon. The matrix $\big[\begin{smallmatrix} \hat Q_j & \hat S_j^\top \\ \hat S_j & \hat R_j \end{smallmatrix}\big]$ is not required to be non-singular, as would be needed to follow the steps used to invert similarly structured matrices in [8, 9], for example. Furthermore, the method is amenable to distributed implementation with low communication overhead, as also discussed in more detail subsequently.

###### Lemma 2.

With $\Sigma_j$ defined according to (13), the unique solution of the linear equations

$$\Sigma_j \begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \end{bmatrix} = \begin{bmatrix} \hat Q_j - \hat E_{j+1}^\top (\Sigma_{j+1}^{-1})_{33}\, \hat E_{j+1} & \hat S_j^\top & -\hat A_j^\top & \hat M_j^\top & 0 \\ \hat S_j & \hat R_j & \hat B_j^\top & \hat L_j^\top & 0 \\ -\hat A_j & \hat B_j & 0 & 0 & 0 \\ \hat M_j & \hat L_j & 0 & 0 & I \\ 0 & 0 & 0 & \Theta_j & \Lambda_j \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \end{bmatrix} = \begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \\ Y_5 \end{bmatrix}, \tag{14}$$

given $Y_1$, $Y_2$, $Y_3$, $Y_4$, $Y_5$ of compatible dimensions, can be constructed as follows, for $j = N, \dots, 1$ with $\hat E_{N+1} = 0$:

$$\begin{aligned} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} &= \Psi_j \Omega_j \Psi_j^\top \left( \begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \end{bmatrix} + \begin{bmatrix} \hat M_j^\top \Theta_j^{-1}\Lambda_j & -\hat M_j^\top \Theta_j^{-1} \\ \hat L_j^\top \Theta_j^{-1}\Lambda_j & -\hat L_j^\top \Theta_j^{-1} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} Y_4 \\ Y_5 \end{bmatrix} \right) \quad\text{and} \\ \begin{bmatrix} X_4 \\ X_5 \end{bmatrix} &= \begin{bmatrix} -\Theta_j^{-1}\Lambda_j & \Theta_j^{-1} \\ I & 0 \end{bmatrix} \left( -\begin{bmatrix} \hat M_j & \hat L_j \\ 0 & 0 \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} + \begin{bmatrix} Y_4 \\ Y_5 \end{bmatrix} \right) \end{aligned}$$

where

$$\Psi_j = \begin{bmatrix} I & 0 & 0 \\ -\tilde R_j^{-1}\tilde S_j & I & -\tilde R_j^{-1}\tilde B_j^\top \tilde A_j^{-\top} \\ 0 & 0 & \tilde A_j^{-\top} \end{bmatrix}, \tag{15}$$
$$\Omega_j = \begin{bmatrix} \tilde Z_j & 0 & -\tilde W_j^\top \\ 0 & \tilde R_j^{-1} & 0 \\ -\tilde W_j & 0 & -\tilde W_j \tilde Q_j \end{bmatrix}, \tag{16}$$
$$\tilde Z_j = \tilde A_j^{-1}\tilde B_j\big(\tilde R_j + \tilde B_j^\top \tilde A_j^{-\top}\tilde Q_j \tilde A_j^{-1}\tilde B_j\big)^{-1}\tilde B_j^\top \tilde A_j^{-\top}, \tag{17}$$
$$\tilde W_j = I - \tilde Q_j \tilde Z_j, \tag{18}$$
$$\tilde H_j = \begin{bmatrix} \bar Q_j & \tilde S_j^\top \\ \tilde S_j & \tilde R_j \end{bmatrix} = \begin{bmatrix} \hat Q_j - \hat E_{j+1}^\top (\Sigma_{j+1}^{-1})_{33}\, \hat E_{j+1} & \hat S_j^\top \\ \hat S_j & \hat R_j \end{bmatrix} + \begin{bmatrix} \hat M_j^\top \\ \hat L_j^\top \end{bmatrix} \Theta_j^{-1}\Lambda_j \begin{bmatrix} \hat M_j & \hat L_j \end{bmatrix} \succeq 0, \tag{19}$$

where $\tilde A_j$, $\tilde B_j$, $\tilde Q_j$, $\tilde R_j$, and $\tilde S_j$ are defined from the blocks of $\tilde H_j$ as per Lemma 3; in particular, $\tilde A_j$ is non-singular and $\tilde R_j \succ 0$.

###### Remark 2.

The cost of computing $\tilde A_j^{-1}$ is order $T$ because of the (lower) block bi-diagonal structure of $\tilde A_j$. The symmetric positive semi-definite matrix $\tilde R_j + \tilde B_j^\top \tilde A_j^{-\top}\tilde Q_j \tilde A_j^{-1}\tilde B_j$ is full, on the other hand. So the computation cost of its inversion is order $T^3$, whereby an approach to computing $\Sigma_j^{-1}$ based on Lemma 2 scales as $T^3$. An alternative approach based on inversion of $\tilde H_j$ in (19), assuming that it is non-singular by requiring

$$\begin{bmatrix} \hat Q_j & \hat S_j^\top \\ \hat S_j & \hat R_j \end{bmatrix} \succ 0,$$

for example, would also involve computation cost that scales as $T^3$, since the matrix in the $(1,1)$-block of $\tilde H_j$ is full for $j < N$. By contrast, as seen in [8, 9], for example, taking such an approach can be fruitful within the context of exploiting temporal structure in optimal control problems, since the $(1,1)$-block (i.e., the $\bar Q_j$ block) of the corresponding matrix is sparse in this case.

## 4 Example

Numerical results are obtained by applying the preceding developments to model data for an automated irrigation channel, within the context of a water-level reference planning problem. A state-space model of the form (1) can be constructed for channels that operate under decentralized distant-downstream control [21, 11]; each sub-system corresponds to a stretch of channel between adjacent flow regulators, called a pool. An optimal control problem can be formulated to determine the water-level reference input for each pool, across a planning horizon for which a load forecast may be known, subject to hard constraints on the water-levels and flows. For this example, the model data of the pools, the distributed controllers, the discretization sample-period, and constraint levels are all borrowed from [22]; the many-pool channels considered here are constructed by concatenating sections of the channel considered there. In the model for each sub-system (i.e., pool) there is one control input, the water-level reference, and four states, including two for the water-level dynamics and two for the PI-type feedback controller that sets the upstream inflow on the basis of the measured downstream water-level error. Box type constraints on the water-level and controller flow output states are to be satisfied in addition to the water-level reference input constraints. For each sub-system $j$, the following hold: $n_j = 4$; $m_j = 1$; and the inequality constraints (3) encode the box constraints just described. The other model parameters (e.g., entries of state-space matrices) are not uniform along the channel.

Figure 1 shows the MATLAB 2014b computation time (in seconds, with a forced single computation thread) of exactly sixteen iterations of the interior-point method described above for different ways of solving (9). The number of sub-systems $N$ is varied, with the time-horizon $T$ held fixed. In all cases the duality gap falls below a fixed tolerance after sixteen iterations. The following cases are considered: (i) direct solution of (9) with sparsity exploitation disabled by perturbing the block tridiagonal matrix on the left-hand side (i.e., `(X+eps)\rho`); (ii) direct solution of (9) with sparsity exploitation enabled (i.e., `X=sparse(X); X\rho`); (iii) solution of (9) via Lemma 1 and Lemma 2; and (iv) solution of (9) via a method tailored to exploit the temporal structure also present in (9), along the lines of Lemma 1. As expected, the approach that does not exploit structure incurs a computation time that grows as $N^3$. The use of Lemma 1 with Lemma 2, on the other hand, scales linearly with $N$. Moreover, the performance achieved is as good as enabling MATLAB to exploit structure when solving (9) directly, which of course requires the MATLAB environment. Also note that a method tailored to the temporal structure of (9) does not scale linearly. By contrast, Figure 2 shows the computation time for fixed $N$ and varying time-horizon $T$. As expected, the approaches in cases (i) and (iii) scale as $T^3$. This is similar to the cubic scaling of computation cost with the dimension of the state in methods that are tailored to exploit the temporal structure of optimal control problems; e.g., [8, 9]. Direct solution of (9) with sparsity exploitation enabled appears to asymptotically scale linearly with $T$, in a way that is consistent with the method that is tailored to exploit temporal structure.

## 5 Conclusion

The main contribution is a scalable interior-point method for computing the solution of constrained discrete-time optimal control problems with cascade dynamics, over a finite prediction horizon. By exploiting the spatial structure arising from the cascade dynamics, the computation cost of each step scales linearly with the number of sub-systems along the cascade. By contrast, the method exhibits cubic growth of computation time as the prediction horizon increases. Direct application of standard methods, typically tailored to exploit the temporal structure of optimal control problems in order to achieve linear scaling with the time horizon, would yield complexity that grows as the cube of the number of system states, and thus, the number of sub-systems. The main developments are illustrated by numerical example on model data for an automated irrigation channel. A topic of ongoing research pertains to extending the main ideas to exploit directed and undirected spatial propagation of information in tree networks of dynamical systems. Another concerns the design of custom hardware for distributed algorithm implementation.

## Appendix A A technical lemma and proofs

###### Lemma 3.

Given $Q = Q^\top$, $S$, $R = R^\top$, $A$, $B$, $M$, $L$ of compatible dimensions, suppose that $\lambda > 0$, $\theta > 0$, $\big[\begin{smallmatrix} Q & S^\top \\ S & R \end{smallmatrix}\big] \succeq 0$, $\tilde R \succ 0$, and $\tilde A$ is non-singular. Let

$$D = \begin{bmatrix} Q & S^\top & -A^\top & M^\top & 0 \\ S & R & B^\top & L^\top & 0 \\ -A & B & 0 & 0 & 0 \\ M & L & 0 & 0 & I \\ 0 & 0 & 0 & \Theta & \Lambda \end{bmatrix},$$

where $\Lambda = \operatorname{diag}(\lambda)$ and $\Theta = \operatorname{diag}(\theta)$. Then $D$ is non-singular. Moreover, the $(3,3)$-block of $D^{-1}$ is given by

$$-\tilde A^{-\top}\tilde Q^{\frac{1}{2}}\big(I + \tilde Q^{\frac{1}{2}}\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top}\tilde Q^{\frac{1}{2}}\big)^{-1}\tilde Q^{\frac{1}{2}}\tilde A^{-1} \preceq 0,$$

where

$$\begin{bmatrix} \bar Q & \tilde S^\top \\ \tilde S & \tilde R \end{bmatrix} = \begin{bmatrix} Q & S^\top \\ S & R \end{bmatrix} + \begin{bmatrix} M^\top \\ L^\top \end{bmatrix} \Theta^{-1}\Lambda \begin{bmatrix} M & L \end{bmatrix} \succeq 0,$$

and $\tilde Q = \bar Q - \tilde S^\top \tilde R^{-1}\tilde S$.

###### Proof.

First note that $\big[\begin{smallmatrix} 0 & I \\ \Theta & \Lambda \end{smallmatrix}\big]$ is invertible and that

$$\begin{bmatrix} 0 & I \\ \Theta & \Lambda \end{bmatrix}^{-1} = \begin{bmatrix} -\Theta^{-1}\Lambda & \Theta^{-1} \\ I & 0 \end{bmatrix}.$$

As such, it follows that $D$ is invertible if and only if the Schur complement

$$\begin{bmatrix} \bar Q & \tilde S^\top & -A^\top \\ \tilde S & \tilde R & B^\top \\ -A & B & 0 \end{bmatrix} = \begin{bmatrix} Q & S^\top & -A^\top \\ S & R & B^\top \\ -A & B & 0 \end{bmatrix} - \begin{bmatrix} M^\top & 0 \\ L^\top & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & I \\ \Theta & \Lambda \end{bmatrix}^{-1} \begin{bmatrix} M & L & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

is non-singular. Moreover, the inverse of this matrix, when it exists, is precisely the matrix comprising the first three block rows and columns of $D^{-1}$; as such, the $(3,3)$-blocks coincide.

Now note that $\Theta^{-1}\Lambda \succ 0$, since the diagonal matrices $\Theta \succ 0$ and $\Lambda \succ 0$, and the Schur complement above is congruent to

$$\begin{bmatrix} \tilde Q & 0 & -I \\ 0 & \tilde R & 0 \\ -I & 0 & -\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top} \end{bmatrix} \tag{20}$$

where $\tilde Q = \bar Q - \tilde S^\top \tilde R^{-1}\tilde S \succeq 0$ because

$$\begin{bmatrix} \bar Q & \tilde S^\top \\ \tilde S & \tilde R \end{bmatrix} = \begin{bmatrix} Q & S^\top \\ S & R \end{bmatrix} + \begin{bmatrix} M^\top \\ L^\top \end{bmatrix} \Theta^{-1}\Lambda \begin{bmatrix} M & L \end{bmatrix} \succeq 0.$$

Therefore, the matrix in (20) is non-singular if and only if

$$\begin{bmatrix} \tilde Q & -I \\ -I & -\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top} \end{bmatrix} \tag{21}$$

is non-singular, which is the case if and only if $I + \tilde Q\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top}$ is non-singular, or equivalently, if and only if $-1 \notin \operatorname{spec}(\tilde Q\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top})$. The latter holds because

$$\operatorname{spec}\big(\tilde Q\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top}\big) \cup \{0\} = \operatorname{spec}\big(\tilde Q^{\frac{1}{2}}\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top}\tilde Q^{\frac{1}{2}}\big) \cup \{0\} \subset \mathbb{R}_{\ge 0},$$

whereby $I + \tilde Q\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top}$ is invertible. In particular, the $(2,2)$-block of the inverse of the left-hand side of (21) is given by $-\tilde Q^{\frac{1}{2}}\big(I + \tilde Q^{\frac{1}{2}}\tilde A^{-1}\tilde B\tilde R^{-1}\tilde B^\top \tilde A^{-\top}\tilde Q^{\frac{1}{2}}\big)^{-1}\tilde Q^{\frac{1}{2}}$, which is congruent to the $(3,3)$-block of $D^{-1}$ via the transformation $\tilde A^{-\top}$ in view of (20), as claimed. ∎

### A.1 Proof of Lemma 1

###### Proof.

Recall that $\Lambda_j \succ 0$, $\Theta_j \succ 0$, and $\hat A_j$ is non-singular; n.b., the structure of $\hat A_j$ is the same as the block bi-diagonal structure of $\hat E_j$ with identity matrices along the block diagonal. Now using Lemma 3, observe that $\Sigma_j$ is invertible for $j = N, \dots, 1$. Given the structure of $\Upsilon_{j+1}$, the matrix $\Sigma_j$ is the same as $D_j$ except for the $(1,1)$-block, which is $\hat Q_j$ in the case of the latter and $\hat Q_j - \hat E_{j+1}^\top (\Sigma_{j+1}^{-1})_{33}\, \hat E_{j+1}$ in the former. By Lemma 3, $(\Sigma_{j+1}^{-1})_{33} \preceq 0$, so that