# Joint Actuator-Sensor Design for Stochastic Linear Systems

###### Abstract

We investigate the joint actuator-sensor design problem for stochastic linear control systems. Specifically, we address the problem of identifying an actuator-sensor pair that gives rise to the minimum expected value of a quadratic cost. It is well known that for the linear-quadratic-Gaussian (LQG) control problem, the optimal feedback control law can be obtained via the celebrated separation principle. Moreover, if the system is stabilizable and detectable, then the infinite-horizon time-averaged cost exists. But such a cost depends on the placements of the sensor and the actuator. We formulate in this paper the optimization problem of minimizing the time-averaged cost over admissible pairs of actuator and sensor under the constraint that their Euclidean norms are fixed. The problem is non-convex and is in general difficult to solve. We derive in this paper a gradient descent algorithm (over the set of admissible pairs) which minimizes the time-averaged cost. Moreover, we show that the algorithm leads to a unique local (and hence global) minimum point under certain special conditions.

## I Introduction

The problem of system design is, roughly speaking, to optimize the intrinsic parameters of a system, such as the placement of actuators and/or sensors, so as either to minimize a certain cost function (e.g., energy consumption) or to maximize a certain performance measure (e.g., sensing accuracy). When the system is networked, i.e., comprised of several physical entities (such as a swarm of robots or unmanned aerial vehicles), the allocation of communication resources can also be considered an intrinsic parameter. Optimal resource allocation/communication scheduling has also been addressed widely in the literature (see, for example, [1, 2, 3, 4, 5, 6]).

We focus in this paper on a joint actuator-sensor design problem for a stochastic linear system over admissible actuator and sensor vectors; the system model is given in (1) below.

We aim to minimize an infinite-horizon time-averaged quadratic cost function, made precise in Section II.

For fixed actuator and sensor vectors $b$ and $c$, this is known as the linear-quadratic-Gaussian (LQG) control problem. The optimal feedback control law can be obtained via the separation principle. However, our goal here is not to reproduce the analysis for deriving such an optimal feedback control law. Rather, we assume that such a control law has been employed, and we address the problem of how to minimize the cost over the pairs $(b, c)$ under the constraint that the Euclidean norms of $b$ and of $c$ are fixed. A precise formulation of the joint actuator-sensor design problem will be given shortly.

Literature review. We note here that similar problems of actuator design or sensor design (but not jointly) have also been addressed recently. We first refer the reader to [7] for the optimal sensor design problem. A gradient descent algorithm was derived there, which was proven to possess a unique exponentially stable equilibrium for the case where the system matrix is Hurwitz and the sensor norm is relatively small. We also refer the reader to [8] for the problem of optimizing the actuator vector which requires minimal energy to drive the system from an initial condition on the unit sphere to the origin in the worst case (with respect to the choice of the initial condition). A complete solution was provided for the case where the system matrix is positive definite with distinct eigenvalues. We further refer to [9, 10, 11] for actuator/sensor design problems which are application-specific.

Amongst other related problems, we mention the actuator and/or sensor selection problem. The problem there is to select a small number of actuators/sensors out of a large discrete set so as to minimize the control energy or to maximize the sensing accuracy. For example, the authors in [13] established lower bounds on the control energy for a given selection of actuators. A similar problem, but with the focus on sensing accuracy, was addressed in [14]. A key difference between the actuator/sensor design problem and the selection problem is that the solution space of the former is usually a non-convex continuous space, while the latter is in general a combinatorial optimization problem. Thus, the techniques and mathematical tools used in these two classes of problems are quite different. We further note that greedy-type algorithms have been widely used in sensor/actuator selection problems. For example, we refer the reader to [15] for the minimal controllability problem (i.e., the problem of selecting a minimal number of variables so that the resulting linear system is controllable), and to [16] for the sensor selection problem for Kalman filtering.

Outline of contribution and organization of the paper. The contributions of the work are the following: First, we formulate the joint actuator-sensor design problem in Section II. In particular, we provide an explicit expression for the cost function and identify the solution space as a coadjoint orbit equipped with the so-called normal metric. We then derive in Section III the gradient flow over the solution space with respect to the given metric. We also provide analytical results about the gradient flow. In particular, we characterize conditions for a point in the solution space to be an equilibrium of the gradient flow. To illustrate the type of analysis one needs to carry out, we focus on a special class of linear dynamics where the system matrix is negative definite and the norms of the actuator and sensor vectors are relatively small. We show that in this case, there is a unique stable equilibrium of the associated gradient flow. In particular, the optimal actuator and sensor vectors are aligned with the eigenvector of the system matrix corresponding to its largest eigenvalue. These results, as well as the analysis, are given in Section IV. We provide conclusions at the end.

## II Preliminaries and problem formulation

We formulate here the joint actuator-sensor design problem. To start, we first have a few preliminaries about the classic linear-quadratic-Gaussian (LQG) control problem.

### II-A Preliminaries about LQG control

Consider a continuous-time linear stochastic system with a continuous-time measurement output:

$$\begin{cases} dx_t = (A x_t + b\,u_t)\,dt + dw_t, \\ dy_t = c^\top x_t\,dt + dv_t, \end{cases} \tag{1}$$

where $x_t \in \mathbb{R}^n$ is the state, $u_t \in \mathbb{R}$ is the control input, $y_t \in \mathbb{R}$ is the measurement output, and $w_t$, $v_t$ are independent standard Wiener processes. We call the vectors $b, c \in \mathbb{R}^n$ the actuator and sensor vectors, respectively. Next, consider the expected value of a quadratic cost function over a finite horizon $[0, T]$:

$$J_T := \frac{1}{T}\,\mathbb{E}\left[\int_0^T \big(x_t^\top Q\, x_t + u_t^2\big)\, dt\right],$$

where $Q$ is a symmetric positive semi-definite weight matrix.

The so-called LQG control problem is about finding an optimal feedback control law which minimizes the above cost. It is well known that this optimal control problem can be solved via the celebrated separation principle: Let $S(t)$ and $P(t)$ be the solutions of two differential Riccati equations defined as follows:

$$\begin{cases} -\dot S(t) = A^\top S(t) + S(t) A - S(t)\, b\, b^\top S(t) + Q, \\ \phantom{-}\dot P(t) = A P(t) + P(t) A^\top - P(t)\, c\, c^\top P(t) + I, \end{cases} \tag{2}$$

where the boundary conditions are specified by $S(T) = 0$ and $P(0) = \Sigma_0$, with $\Sigma_0$ the covariance matrix of the initial state $x_0$. Then, an optimal feedback control law is given by

$$u_t = -\,b^\top S(t)\,\hat{x}_t,$$

where $\hat{x}_t$ is the minimum mean-squared-error estimate of $x_t$, which is given by the Kalman-Bucy filter (see [17])

$$d\hat{x}_t = \big(A \hat{x}_t + b\, u_t\big)\, dt + P(t)\, c\, \big(dy_t - c^\top \hat{x}_t\, dt\big).$$
Moreover, under such an optimal control, the (minimized) cost is given by

$$J_T = \frac{1}{T}\left[\operatorname{tr}\big(S(0)\Sigma_0\big) + \int_0^T \operatorname{tr}\big(S(t)\big)\, dt + \int_0^T \operatorname{tr}\big(S(t)\, b\, b^\top S(t)\, P(t)\big)\, dt\right], \tag{3}$$

where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix.

Further, if the control system (1) is stabilizable and detectable, then the steady states of the differential Riccati equations (2) exist; they are the unique positive semi-definite (PSD) solutions $S$ and $P$ of the following algebraic Riccati equations (AREs):

$$\begin{cases} A^\top S + S A - S\, b\, b^\top S + Q = 0, \\ A P + P A^\top - P\, c\, c^\top P + I = 0. \end{cases} \tag{4}$$

It follows that the limit of $J_T$ as $T \to \infty$ also exists, which we state in the following lemma:

###### Lemma 1.

Suppose that system (1) is stabilizable and detectable, and let $S$ and $P$ be the PSD solutions of the AREs (4); then

$$J := \lim_{T \to \infty} J_T = \operatorname{tr}(S) + \operatorname{tr}\big(S\, b\, b^\top S\, P\big). \tag{5}$$
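As a numerical illustration, the sketch below solves the AREs (4) for a small hypothetical triple $(A, b, c)$ and evaluates the time-averaged cost, taken here to be $\operatorname{tr}(S) + \operatorname{tr}(S b b^\top S P)$, the standard steady-state LQG expression; the matrices, the weight $Q = I$, and the unit control weight are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 3-state example: A is Hurwitz, b and c have unit norm.
A = np.diag([-1.0, -2.0, -3.0])
b = np.array([[0.6], [0.8], [0.0]])   # actuator vector (assumed)
c = np.array([[0.0], [0.6], [0.8]])   # sensor vector (assumed)
Q = np.eye(3)                         # state cost weight (assumed)

# Control ARE:  A^T S + S A - S b b^T S + Q = 0
S = solve_continuous_are(A, b, Q, np.eye(1))
# Filter ARE:   A P + P A^T - P c c^T P + I = 0  (unit noise covariances)
P = solve_continuous_are(A.T, c, np.eye(3), np.eye(1))

# Time-averaged LQG cost:  J = tr(S) + tr(S b b^T S P)
J = np.trace(S) + np.trace(S @ b @ b.T @ S @ P)
print(J)
```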

### II-B Problem formulation: joint actuator-sensor design

Note that the value of $J$ defined in (5) depends on the actuator and sensor vectors $b$ and $c$ via the solutions $S$ and $P$ to the AREs (4). We will thus write $J(b, c)$, $S(b)$, and $P(c)$ on occasion to indicate such dependence explicitly. The optimal joint actuator-sensor design problem we address in the paper is the optimization problem of minimizing the function $J(b, c)$ over all admissible pairs $(b, c)$.

To proceed, we first note the fact that the cost decreases if we increase the norms of $b$ and $c$. Specifically, we fix a pair of unit-norm actuator and sensor vectors $(\bar b, \bar c)$, with $(A, \bar b)$ stabilizable and $(A, \bar c)$ detectable. With slight abuse of notation, we denote by $S(\beta)$ and $P(\gamma)$, for $\beta, \gamma > 0$, the PSD solutions to the following AREs:

$$\begin{cases} A^\top S + S A - \beta^2\, S\, \bar b\, \bar b^\top S + Q = 0, \\ A P + P A^\top - \gamma^2\, P\, \bar c\, \bar c^\top P + I = 0. \end{cases} \tag{6}$$

Then, we have

$$S(\beta') \preceq S(\beta) \quad \text{and} \quad P(\gamma') \preceq P(\gamma) \qquad \text{whenever } \beta' \ge \beta \text{ and } \gamma' \ge \gamma.$$

We refer to [18] or Prop. 3 of [6] for a proof of the above inequalities. We also gave in [6] generic conditions under which the inequalities are strict. If we let $J(\beta, \gamma)$ be defined as in (5), with $S$ and $P$ replaced by $S(\beta)$ and $P(\gamma)$, then we have the following fact:

###### Lemma 2.

For fixed unit vectors $\bar b$ and $\bar c$ with $(A, \bar b)$ stabilizable and $(A, \bar c)$ detectable, we have

$$\frac{\partial J(\beta, \gamma)}{\partial \beta} \le 0 \quad \text{and} \quad \frac{\partial J(\beta, \gamma)}{\partial \gamma} \le 0.$$

The inequalities are strict under the generic conditions on $(\bar b, \bar c)$ mentioned above.

###### Proof.

We focus only on the proof of the first inequality; by symmetry, the same argument can be applied to establish the second. We obtain by computation

where the second equality comes from (6), and the last inequality comes from the fact that for and . Here, and . The inequalities are strict if and . Note that if and only if . Also, note that by computation (see, for example, [6])

where we omit the argument in in the above expression. The integral exists because is detectable, and hence is Hurwitz. It then follows that if , then , and hence . ∎

The statement of Lemma 2 is not surprising. Indeed, the Euclidean norms of $b$ and $c$ can be thought of as the actuation gain (e.g., specific impulse for spacecraft/rocket propulsion) and the signal-to-noise ratio (SNR), respectively. Increasing the actuation gain and/or the SNR yields a better performance, i.e., a smaller value of the cost. We thus assume in the sequel that $\beta$ and $\gamma$ are fixed positive numbers. We note that such an assumption is natural in system design: the actuation gain of the actuator and the SNR of the sensor are given, but their placement/embedding in the control system matters for the performance measure.
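The monotonicity in Lemma 2 is easy to check numerically. Assuming the steady-state cost expression $\operatorname{tr}(S) + \operatorname{tr}(S b b^\top S P)$ with illustrative matrices ($Q = I$, unit control weight), the sketch below evaluates the cost on a grid of actuation gains and confirms it is non-increasing.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.diag([-1.0, -2.0, -3.0])
b_bar = np.array([[0.6], [0.8], [0.0]])  # unit-norm actuator direction (assumed)
c_bar = np.array([[0.0], [0.6], [0.8]])  # unit-norm sensor direction (assumed)
Q = np.eye(3)

def avg_cost(beta, gamma):
    """Time-averaged LQG cost J(beta, gamma) = tr(S) + tr(S b b^T S P)."""
    b, c = beta * b_bar, gamma * c_bar
    S = solve_continuous_are(A, b, Q, np.eye(1))
    P = solve_continuous_are(A.T, c, np.eye(3), np.eye(1))
    return np.trace(S) + np.trace(S @ b @ b.T @ S @ P)

# The cost should be non-increasing in the actuation gain beta.
costs = [avg_cost(beta, 1.0) for beta in (0.5, 1.0, 2.0, 4.0)]
print(costs)
```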

With the preliminaries above, we now formulate the joint actuator-sensor design problem as follows:

Joint actuator-sensor design problem. Find a pair $(b, c)$ which minimizes $J(b, c)$ under the constraint that $\|b\| = \beta$ and $\|c\| = \gamma$, with $\beta, \gamma > 0$ fixed.

We note here that, unlike in the LQG control problem, where the optimal feedback control and the minimum mean-squared-error estimate can be obtained “independently”, the arguments $b$ and $c$ in $J(b, c)$ are coupled—they are coupled in the term $\operatorname{tr}(S\, b\, b^\top S\, P)$. Hence, the joint actuator-sensor design problem cannot be solved by dividing it into subproblems of actuator design and sensor design.

## III Double bracket flow as a gradient descent algorithm

We derive in this section a gradient descent algorithm which minimizes the potential function $J$. To introduce such a gradient descent algorithm, we need first to identify the solution space, and then to impose a metric on it. This is done in the first subsection. We note here that similar computations and arguments have been carried out in [7]; we thus omit a few computational details.

### III-A Solution space and normal metric

1. Solution space. To proceed, we first identify the underlying solution space. First, note that the PSD solutions $S$ and $P$ to (4) (and hence the cost $J$) depend on $b$ and $c$ only through the rank-one matrices $b\, b^\top$ and $c\, c^\top$. Said more explicitly, if we normalize $b$ and $c$ as

$$\bar b := b/\beta, \qquad \bar c := c/\gamma, \tag{7}$$

so that $\|\bar b\| = \|\bar c\| = 1$, then we can re-write (4) as

$$\begin{cases} A^\top S + S A - \beta^2\, S\, \bar b\, \bar b^\top S + Q = 0, \\ A P + P A^\top - \gamma^2\, P\, \bar c\, \bar c^\top P + I = 0. \end{cases} \tag{8}$$

For the above reason, we can write $S(X_b)$ and $P(X_c)$, and hence $J(X_b, X_c)$, without any ambiguity, where $X_b := \bar b\, \bar b^\top$ and $X_c := \bar c\, \bar c^\top$. The collection of such pairs $(X_b, X_c)$ will be the solution space. Specifically, we let $\{e_1, \ldots, e_n\}$ be the standard basis of $\mathbb{R}^n$, and

$$\mathrm{SO}(n) := \{U \in \mathbb{R}^{n \times n} \mid U^\top U = I,\ \det U = 1\}$$

be the special orthogonal group. We then let

$$\mathcal{X} := \{U e_1 e_1^\top U^\top \mid U \in \mathrm{SO}(n)\}, \tag{9}$$

to which the matrices $X_b$ and $X_c$ belong. The space $\mathcal{X}$ is also known as a coadjoint orbit. Note that the above definition does not depend on the choice of $e_1$, as $\mathrm{SO}(n)$ acts transitively on the unit sphere. Since $X_b, X_c \in \mathcal{X}$, the solution space is then the product space $\mathcal{X} \times \mathcal{X}$.
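In other words, the orbit (9) consists of the rank-one orthogonal projections $u u^\top$ with $\|u\| = 1$. A quick numerical sketch (the dimension and random seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Sample a random orthogonal matrix via QR and force det = +1 (so it is in SO(n)).
Qm, R = np.linalg.qr(rng.standard_normal((n, n)))
if np.linalg.det(Qm) < 0:
    Qm[:, [0, 1]] = Qm[:, [1, 0]]  # swapping two columns flips the determinant sign

u = Qm[:, 0]          # u = U e1, a unit vector
X = np.outer(u, u)    # a point on the coadjoint orbit

# X is a symmetric rank-one projection: X = X^T, X^2 = X, tr(X) = 1.
print(np.trace(X))
```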

2. Normal metric. A metric (tensor) $g$ on the space $\mathcal{X}$ is such that at each point $X \in \mathcal{X}$, $g_X$ is a positive definite bilinear form on $T_X\mathcal{X}$ (the tangent space of $\mathcal{X}$ at $X$), and $g_X$ varies smoothly in $X$. Equipped with a metric $g$, $\mathcal{X}$ is then a Riemannian manifold.

For the coadjoint orbit $\mathcal{X}$, there is a canonical metric, known as the normal metric, which will be characterized below. Let

$$\mathfrak{so}(n) := \{\Omega \in \mathbb{R}^{n \times n} \mid \Omega^\top = -\Omega\}$$

be the set of skew-symmetric matrices. Denote by $[\cdot, \cdot]$ the commutator of matrices, i.e., $[M_1, M_2] := M_1 M_2 - M_2 M_1$. Then, the tangent space of $\mathcal{X}$ at a matrix $X$ is given by

$$T_X\mathcal{X} = \{[\Omega, X] \mid \Omega \in \mathfrak{so}(n)\}. \tag{10}$$

Fix the matrix $X$, and let $\mathrm{ad}_X : \Omega \mapsto [\Omega, X]$ be the linear map from $\mathfrak{so}(n)$ to $T_X\mathcal{X}$. The linear map is onto, and we denote by $\mathfrak{k}_X$ the kernel of $\mathrm{ad}_X$. If one imposes the inner product $\langle \Omega_1, \Omega_2 \rangle := \operatorname{tr}(\Omega_1^\top \Omega_2)$ on $\mathfrak{so}(n)$, then the subspace of $\mathfrak{so}(n)$ perpendicular to $\mathfrak{k}_X$ is defined. We denote it by $\mathfrak{k}_X^\perp$. So, $\mathfrak{so}(n) = \mathfrak{k}_X \oplus \mathfrak{k}_X^\perp$.

It then follows that $\mathrm{ad}_X$, when restricted to $\mathfrak{k}_X^\perp$, is a linear isomorphism between $\mathfrak{k}_X^\perp$ and $T_X\mathcal{X}$. We can thus re-write (10) as

$$T_X\mathcal{X} = \{[\Omega, X] \mid \Omega \in \mathfrak{k}_X^\perp\}. \tag{11}$$

The normal metric (tensor) on $\mathcal{X}$ is then defined as follows:

$$g_X\big([\Omega_1, X], [\Omega_2, X]\big) := \operatorname{tr}(\Omega_1^\top \Omega_2), \tag{12}$$

for $\Omega_1, \Omega_2 \in \mathfrak{k}_X^\perp$.

In the case here, we have the solution space $\mathcal{X} \times \mathcal{X}$. One can simply extend the normal metric to the product space as

$$g_{(X_b, X_c)}\big((\eta_b, \eta_c), (\eta'_b, \eta'_c)\big) := g_{X_b}(\eta_b, \eta'_b) + g_{X_c}(\eta_c, \eta'_c), \tag{13}$$

for $(\eta_b, \eta_c)$ and $(\eta'_b, \eta'_c)$ tangent vectors of $\mathcal{X} \times \mathcal{X}$ at $(X_b, X_c)$.

### III-B Gradient descent algorithm

We derive here the gradient flow of $J$ over the solution space $\mathcal{X} \times \mathcal{X}$ with respect to the normal metric defined in (13). Denote by $\nabla J$ the gradient of $J$, determined by the following defining condition:

$$g\big(\nabla J, (\eta_b, \eta_c)\big) = D J \cdot (\eta_b, \eta_c), \tag{14}$$

for all tangent vectors $(\eta_b, \eta_c)$, where $D J \cdot (\eta_b, \eta_c)$ is the directional derivative of $J$ along $(\eta_b, \eta_c)$. We are now in a position to state the first main result:

###### Theorem 1.

###### Remark 1.

Note that one can re-scale the gradient flow (15) by dividing it by a positive scalar:

(17) |

In particular, the two dynamics share the same set of equilibria.
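For context, gradient flows with respect to the normal metric take the double-bracket form $\dot X = [X, [X, N]]$ alluded to in the section title. Since the exact right-hand side of the flows (15) and (17) is system-dependent, the sketch below instead integrates Brockett's prototypical double-bracket flow: it is isospectral and increases $\operatorname{tr}(N X)$, driving a symmetric matrix toward diagonal form (all matrices below are illustrative).

```python
import numpy as np

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# Brockett's double-bracket flow dX/dt = [X, [X, N]], integrated by explicit Euler.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
X = (M + M.T) / 2                  # symmetric initial condition
N = np.diag([4.0, 3.0, 2.0, 1.0])  # diagonal matrix with distinct entries

eigs0 = np.sort(np.linalg.eigvalsh(X))
trNX0 = np.trace(N @ X)
off0 = np.linalg.norm(X - np.diag(np.diag(X)))

dt = 5e-4
for _ in range(100_000):
    X = X + dt * bracket(X, bracket(X, N))

# The flow preserves the spectrum of X and shrinks its off-diagonal part.
eigs1 = np.sort(np.linalg.eigvalsh(X))
off1 = np.linalg.norm(X - np.diag(np.diag(X)))
```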

###### Remark 2.

We also note that if the initial conditions and are chosen such that is stabilizable and is detectable (which is generically true), then is stabilizable and is detectable for all , where and are such that and . In particular, the matrices and in (16) can be expressed as follows:

(18) |

We provide below a proof of Theorem 1.

###### Proof of Theorem 1.

The proof follows from computation by matching the two sides of (14). Fix a pair $(X_b, X_c)$ in the solution space $\mathcal{X} \times \mathcal{X}$. Pick a tangent vector $(\eta_b, \eta_c)$, and write $\eta_b = [\Omega_b, X_b]$ and $\eta_c = [\Omega_c, X_c]$, with $\Omega_b \in \mathfrak{k}_{X_b}^\perp$ and $\Omega_c \in \mathfrak{k}_{X_c}^\perp$.

By computation, the directional derivative is given by

(19) |

where and are directional derivatives, which satisfy the following Lyapunov equations:

Since is stabilizable and is detectable, and are Hurwitz matrices, and hence and can be expressed as

Combining the above integral formulas with (19), and using the fact that $\operatorname{tr}\big(M_1 [M_2, M_3]\big) = \operatorname{tr}\big([M_3, M_1] M_2\big)$ for any square matrices $M_1$, $M_2$, and $M_3$, we obtain

(20) |

Now, for the left-hand side of (14), we first note that $\nabla J$ must take the form

$$\nabla J = \big([\Lambda_b, X_b],\ [\Lambda_c, X_c]\big),$$

with $\Lambda_b \in \mathfrak{k}_{X_b}^\perp$ and $\Lambda_c \in \mathfrak{k}_{X_c}^\perp$. This directly follows from (11). Thus, the gradient will be determined if $\Lambda_b$ and $\Lambda_c$ are known to us. By the definition of the normal metric in (13), we have

(21) |

By matching (20) and (21), we obtain

(22) |

provided that and belong to and , respectively. Note that if (22) holds, then the proof is done.

We now show that $\Lambda_b \in \mathfrak{k}_{X_b}^\perp$; the same argument can be applied to establish $\Lambda_c \in \mathfrak{k}_{X_c}^\perp$. Pick any $\Omega \in \mathfrak{k}_{X_b}$, i.e., $[\Omega, X_b] = 0$; we need to show that $\operatorname{tr}(\Lambda_b^\top \Omega) = 0$. This follows because

which completes the proof. ∎
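The integral representation of Lyapunov solutions used in the proof above (for Hurwitz $A$, the equation $A^\top X + X A + Q = 0$ has the unique solution $X = \int_0^\infty e^{A^\top t} Q\, e^{A t}\, dt$) can be checked numerically; the matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import expm, solve_lyapunov

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])   # Hurwitz (eigenvalues -1 and -2)
Q = np.eye(2)

# Closed-form solution of A^T X + X A = -Q
X = solve_lyapunov(A.T, -Q)

# Truncated trapezoidal quadrature of the integral representation.
# The tail beyond T = 20 is negligible since exp(A t) decays like exp(-t).
dt, T = 0.005, 20.0
ts = np.arange(0.0, T + dt, dt)
vals = np.array([expm(A.T * t) @ Q @ expm(A * t) for t in ts])
X_int = ((vals[:-1] + vals[1:]) * 0.5).sum(axis=0) * dt
```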

## IV Analysis of the gradient descent algorithm for symmetric, stable systems

We call a pair $(X_b, X_c)$ an equilibrium point of the gradient flow if the gradient of the potential vanishes there. Equivalently, an equilibrium point is a critical point of the potential. An optimal solution, i.e., a global minimum point of the potential, is necessarily an equilibrium point of the gradient flow. Thus, characterizing equilibria (and especially, stable equilibria) is crucial. Although the gradient descent algorithm (15) (or the re-scaled version (17)) can be applied to arbitrary control systems, the analysis of its equilibria can be quite difficult in general. In this section, we focus on a special class of control systems: those for which the system matrix $A$ is negative definite with distinct eigenvalues, and the Euclidean norms $\beta$ and $\gamma$ are relatively small.

The goal here is thus to demonstrate the type of analysis one needs to carry out for computing the set of equilibria, and provide insights into the analysis for a general case.

We note here that, with a few more arguments, the results obtained here can be extended to the case where $A$ is negative semi-definite, which, for example, includes the class of (weighted) Laplacian dynamics, i.e.,

$$dx_t = (-L x_t + b\, u_t)\, dt + dw_t, \tag{23}$$

where $L$ is a weighted, irreducible Laplacian matrix, i.e., $L \mathbf{1} = 0$ and $L_{ij} \le 0$ for all $i \neq j$. But for ease of exposition, we focus only on the case where $A$ is negative definite. For the case where $A$ is unstable, the analysis becomes more subtle, and the details will be discussed in a different paper.

### IV-A The eigenvector problem

For given positive numbers $\beta$ and $\gamma$, we denote by $\mathcal{E}(\beta, \gamma)$ the set of equilibria associated with (15). Note that the dynamics (17) shares the same set of equilibria with (15), except for the case where either $\beta$ or $\gamma$ is zero. Indeed, if $\beta\gamma = 0$, then the right-hand side of (15) vanishes identically, which does not hold for (17). We thus have a point of singularity at $\beta\gamma = 0$. On the other hand, one may treat $\mathcal{E}(\beta, \gamma)$ as the set of equilibria associated with (17); in this way, $\mathcal{E}(\beta, \gamma)$ is defined for all nonnegative $\beta$ and $\gamma$. The benefit of doing this is the following: Note that (17) depends smoothly on $\beta$ and $\gamma$ (the dependence is via the solutions $S$ and $P$ and related quantities). Thus, by perturbation arguments, one would expect that the set $\mathcal{E}(\beta, \gamma)$ also varies smoothly over an open neighborhood of $(0, 0)$, provided that the equilibria satisfy certain non-degeneracy conditions.

We now characterize conditions for a pair to be an equilibrium point. By definition, we have

On the other hand, we have shown in the proof of Theorem 1 that and belong to and , respectively. Since and are linearly isomorphic to and , respectively, we have

Conversely, if the above equations hold, then is an equilibrium point of .

It is well known that two symmetric matrices commute if and only if they are simultaneously diagonalizable, i.e., they share a common orthonormal set of eigenvectors. Now let $\bar b$ and $\bar c$ be defined such that $X_b = \bar b\, \bar b^\top$ and $X_c = \bar c\, \bar c^\top$; it follows from the above commutation relations that

(24) |

for some scalars, i.e., $\bar b$ and $\bar c$ are eigenvectors of the corresponding matrices. Note that the optimal actuator and sensor vectors can then be recovered as $b = \beta \bar b$ and $c = \gamma \bar c$. The above equation serves as the starting point of our analysis.

### IV-B On the case where $\beta = \gamma = 0$

The eigenvector problem posed in (24) is in general hard to solve. The difficulty lies in the fact that the matrices involved are nonlinear in $X_b$ and $X_c$ (and hence in $\bar b$ and $\bar c$). Yet, such nonlinearity vanishes if $\beta = \gamma = 0$; indeed, we have in this case the following sets of equations:

and

(25) |

Since is symmetric, and satisfy the same equation and can be solved explicitly as

Now, let $A = U \Lambda U^\top$, with $U$ an orthogonal matrix and $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ a diagonal matrix. For convenience, we define vectors $p$ and $q$ as follows:

$$p := U^\top \bar b, \qquad q := U^\top \bar c. \tag{26}$$

The normalization condition for $\bar b$ and $\bar c$ translates into

$$\|p\| = \|q\| = 1. \tag{27}$$

We then solve for $S$ and $P$, using the newly defined variables $p$ and $q$, as follows:

where $D_p$ and $D_q$ are diagonal matrices with $p$ and $q$ on their diagonals, and $T$ is a positive-definite Cauchy matrix [19] (note that the $\lambda_i$'s are negative) given by

$$T_{ij} := -\frac{1}{\lambda_i + \lambda_j}, \qquad 1 \le i, j \le n.$$
So, with the above closed-form expressions, we can re-write (24) as follows:

(28) |

with and .
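Positive definiteness of the Cauchy matrix above is classical [19]; it can also be checked numerically for an illustrative set of distinct negative eigenvalues:

```python
import numpy as np

# Eigenvalues of a hypothetical negative definite A (distinct and negative).
lam = np.array([-0.5, -1.0, -2.0, -3.5])

# Cauchy matrix T_ij = -1 / (lam_i + lam_j); since lam_i < 0, all entries are > 0.
T = -1.0 / (lam[:, None] + lam[None, :])
print(np.linalg.eigvalsh(T))
```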

###### Remark 3.

We note here that a similar set of equations has been investigated in [8]. Yet, the results there cannot simply be applied here. Specifically, the problem addressed in [8] is the following: Consider a deterministic linear control system whose system matrix is the diagonal matrix defined above (up to sign, so that it is positive definite). The minimal energy consumption for driving the system from an initial condition on the unit sphere to the origin is then a function of the actuator vector and the initial condition. We posed and solved in [8] the minimax problem of minimizing, over actuator vectors, the worst-case energy over initial conditions. In particular, a necessary and sufficient condition for a pair to be a minimax solution is that

(29) |

where the corresponding vectors do not have any zero entry, and moreover, the associated multiplier is the smallest eigenvalue of the matrix involved. We note that although the two sets of equations (28) and (29) are similar, the problems are different; indeed, as we will see, the optimal solution here is such that $p$ and $q$ can have only one nonzero entry.

### IV-C The set of equilibria

We now characterize the solutions to (28), which correspond one-to-one to the points in the set of equilibria via (26). We will soon see that the solution set can be divided into two subsets. One subset can be realized as the zero set of certain algebraic equations, and hence is an algebraic variety; moreover, the points in this subset are global maxima of a potential function (over the solution space) whose gradient flow is given by (17), and hence are unstable. The other subset is comprised only of isolated points, and contains a unique stable equilibrium.

We introduce the following notation: for vectors $p$ and $q$, we define $\operatorname{supp}(p)$ and $\operatorname{supp}(q)$, as subsets of $\{1, \ldots, n\}$, to be the collections of indices of the nonzero entries of $p$ and $q$, respectively.

###### Proposition 1.

Suppose that $(p, q)$ is a solution to (28); then either $\operatorname{supp}(p) \cap \operatorname{supp}(q) = \varnothing$ or $\operatorname{supp}(p) = \operatorname{supp}(q)$.

###### Proof.

First, note that if , then , and hence

which implies that is a solution.

We now assume $\operatorname{supp}(p) \cap \operatorname{supp}(q) \neq \varnothing$ and prove that $\operatorname{supp}(p) = \operatorname{supp}(q)$. We first establish one inclusion of the two supports; the proof will be carried out by contradiction. Without loss of generality, we assume that the first $k$ entries of $p$ are nonzero. Then

(30) |

where is comprised of the first entries of and is the associated leading principal sub-matrix of . Partition and correspondingly, we then have

(31) |

Now, suppose that ; then, , and hence . But then, from (31), we have , and hence . On the other hand, the matrix is positive definite since is (see [8]). So, we must have , and hence which is a contradiction. It thus follows that . Now, we apply the same arguments but exchange the roles of and , and obtain that . We thus conclude that . ∎

The subset of pairs $(p, q)$ with disjoint supports can be characterized by the following algebraic equations:

(32) |

where the first equation comes from (27) and the second comes from the disjointness of the supports. We now show that any equilibrium corresponding to a point in this subset is unstable under the dynamics (17). We have the following fact:

###### Proposition 2.

###### Proof.

We omit the proof that (17) is the gradient flow of the potential function; it follows directly from computation, and the derivation is similar to the proof of Theorem 1.

We show that . Since , , and is stable, from the Lyapunov equation (25), we have and . So,

Now, let be such that . Without loss of generality, we assume that the first entries of are nonzero. Then, the zero patterns of and are given by:

and hence , which implies that . ∎

For the remainder of the subsection, we focus on the case where $\operatorname{supp}(p) = \operatorname{supp}(q)$. We fix a nonempty subset $\alpha$ of $\{1, \ldots, n\}$, and assume that $\operatorname{supp}(p) = \operatorname{supp}(q) = \alpha$. Without loss of generality, we assume that $\alpha = \{1, \ldots, k\}$ for some $1 \le k \le n$.

Further, we denote by $G$ a finite abelian group defined as follows:

$$G := \{\sigma = (\sigma_1, \ldots, \sigma_k) \mid \sigma_i \in \{-1, 1\}\}.$$

Let “$\circ$” be the Hadamard product (i.e., entry-wise multiplication). Then, it should be clear that if $\sigma, \sigma' \in G$, then $\sigma \circ \sigma' \in G$, with

$$\mathbf{1} := (1, \ldots, 1)$$

the identity element. In particular, $\sigma \circ \sigma = \mathbf{1}$ for all $\sigma \in G$.

The group $G$ acts on the pair of vectors $(p, q)$ by

$$\sigma \cdot (p, q) := (\sigma \circ p,\ \sigma \circ q),$$

for any $\sigma \in G$. One of the main purposes of introducing the abelian group is the following:

We omit the proof, as it follows from computation. For a pair $(p, q)$, we denote by $G \cdot (p, q)$ its orbit under the group action, i.e.,

$$G \cdot (p, q) := \{(\sigma \circ p,\ \sigma \circ q) \mid \sigma \in G\}.$$

We further note that the potential function is invariant under the group action. Specifically, let $(p, q)$ and $(p', q')$ be two such pairs, with the corresponding actuator-sensor pairs defined via (26). If $(p', q') \in G \cdot (p, q)$, then the two pairs yield the same value of the potential. We omit the computational details. It follows that if an equilibrium is stable/unstable under (17), then so is any pair in its orbit.
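Concretely, the group in question can be taken to be the sign-flip group $\{-1, +1\}^k$ under entrywise multiplication. The sketch below (with illustrative $k$, $p$, and $q$) verifies that each element is its own inverse and that the orbit of a full-support pair has $2^k$ distinct elements:

```python
import numpy as np
from itertools import product

k = 3  # size of the common support (illustrative)

# Sign-flip group: all vectors in {-1, +1}^k under the Hadamard (entrywise)
# product, with the all-ones vector as the identity element.
G = [np.array(s) for s in product((-1.0, 1.0), repeat=k)]
e = np.ones(k)

# Action on a pair of full-support vectors: sigma . (p, q) = (sigma*p, sigma*q).
p = np.array([1.0, 2.0, 3.0])
q = np.array([3.0, 2.0, 1.0])
orbit = {(tuple(s * p), tuple(s * q)) for s in G}
print(len(orbit))
```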

We will now state facts about solutions to (28) with . Let . For a given , we define a vector as follows:

where $(\cdot)^\dagger$ denotes the Moore–Penrose inverse. In the case here, the relevant matrix is symmetric and nonsingular, so its Moore–Penrose inverse coincides with its usual inverse.

We introduce a few more notations. For a vector $v$, we let

$$\operatorname{sgn}(v) := \big(\operatorname{sgn}(v_1), \ldots, \operatorname{sgn}(v_k)\big),$$

where $\operatorname{sgn}(\cdot)$ is the sign function. We write $v \ge 0$ (resp. $v > 0$) if each entry of $v$ is nonnegative (resp. positive). Furthermore, for any vector $v$, we let

With the notations above, we state the following fact:

###### Proposition 3.

Let $\alpha$ be a nonempty subset of $\{1, \ldots, n\}$. Then, for each $\sigma \in G$, there exists at most one orbit, with

such that is a solution to (28). Moreover, such an orbit exists if and only if and can be chosen such that

(33) |