
# Algorithms for Joint Sensor and Control Nodes Selection in Dynamic Networks

Sebastian A. Nugroho, Ahmad F. Taha, Nikolaos Gatsis, and Ram Krishnan (Department of Electrical and Computer Engineering, The University of Texas at San Antonio, 1 UTSA Circle, San Antonio, TX 78249); Tyler H. Summers (Department of Mechanical Engineering, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080)
###### Abstract

The problem of placing or selecting sensors and control nodes plays a pivotal role in the operation of dynamic networks. This paper proposes optimal algorithms and heuristics to solve the simultaneous sensor and actuator selection problem in linear dynamic networks. In particular, a sufficiency condition of static output feedback stabilizability is used to obtain the minimal set of sensors and control nodes needed to stabilize an unstable network. We show the joint sensor/actuator selection and output feedback control can be written as a mixed-integer nonconvex problem. To solve this nonconvex combinatorial problem, three methods based on (1) mixed-integer nonlinear programming, (2) binary search algorithms, and (3) simple heuristics are proposed. The first method yields optimal solutions to the selection problem—given that some constants are appropriately selected. The second method requires a database of binary sensor/actuator combinations, returns optimal solutions, and necessitates no tuning parameters. The third approach is a heuristic that yields suboptimal solutions but is computationally attractive. The theoretical properties of these methods are discussed and numerical tests on dynamic networks showcase the trade-off between optimality and computational time.

###### keywords:
Sensor and control nodes selection, static output feedback control, mixed-integer nonlinear programming, combinatorial heuristics

## 1 Introduction

Consider an unstable dynamic network of interconnected nodes

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) \tag{1}$$

where $A$ has at least one unstable eigenvalue, and $x(t)$, $u(t)$, and $y(t)$ collect the state, input, and output vectors for all nodes. This paper studies the joint problems of (i) stabilization of dynamic network (1) through static output feedback control (SOFC) while simultaneously (ii) selecting or placing a minimal number of sensors and control nodes.

Problem (i) corresponds to finding a control law $u(t) = Fy(t)$ such that the eigenvalues of the closed-loop system matrix $A + BFC$ are in the LHP (Astolfi2000, ). This type of control is advantageous in the sense that it only requires output measurements rather than full state information, is analogous to a simple proportional controller, and can be implemented without needing an observer or an augmented dynamic system. Problem (ii) corresponds to finding the minimal number of sensors and actuators (SA) yielding a feasible solution for the static output feedback (SOF) stabilization problem. The joint formulation of Problems (i)–(ii) can be abstracted through this high-level optimization routine:

$$\begin{aligned} \min_{\Pi,\Gamma,F}\quad & \sum_{k=1}^{N} \pi_k + \gamma_k && (2a)\\ \text{s.t.}\quad & \mathrm{real}\left(\mathrm{eig}(A + B\Pi F \Gamma C)\right) < 0,\quad \pi_i, \gamma_j \in \{0,1\} && (2b) \end{aligned}$$

where $\pi_i$ and $\gamma_j$ are binary variables selecting the $i$-th actuator and $j$-th sensor; $\Pi$ and $\Gamma$ are diagonal matrices containing all $\pi_i$ and $\gamma_j$. These binary matrices post- and pre-multiply $B$ and $C$, thereby activating the optimal sensors and control nodes while designing a SOFC law. Even for small to mid-size dynamic networks, problem (2) is difficult to solve as the SOFC problem—without the SA selection—is known to be nonconvex (Crusius1999, ) (presumed to be NP-hard (PERETZ2016, )), and the SA selection introduces binary variables, thereby compounding the nonconvexity. To that end, the objective of this paper is to develop optimal algorithms and heuristics to solve Problem (2). Next, we summarize the recent literature on solving variants of (2).
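To make the role of the selection matrices concrete, the following sketch (numpy only, with a hypothetical 2-state example not taken from the paper) checks the spectral constraint in (2b) for a fixed candidate selection and gain:

```python
import numpy as np

def is_sof_stable(A, B, C, F, pi, gamma):
    """Check real(eig(A + B*Pi*F*Gamma*C)) < 0 from (2b) for a fixed
    gain F and binary selections pi (actuators), gamma (sensors)."""
    Pi = np.diag(pi)        # actuator selection matrix
    Gamma = np.diag(gamma)  # sensor selection matrix
    A_cl = A + B @ Pi @ F @ Gamma @ C
    return bool(np.max(np.linalg.eigvals(A_cl).real) < 0)

# Hypothetical unstable 2-state network: one unstable mode at +0.5.
A = np.array([[0.5, 0.0],
              [0.0, -1.0]])
B = np.eye(2)
C = np.eye(2)
F = -2.0 * np.eye(2)

print(is_sof_stable(A, B, C, F, [1, 0], [1, 0]))  # stabilizes the unstable mode -> True
print(is_sof_stable(A, B, C, F, [0, 1], [0, 1]))  # unstable mode untouched -> False
```

The second call illustrates why the selection is nontrivial: activating the wrong sensor/actuator pair leaves the unstable mode uncontrolled even with an aggressive gain.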

Hundreds of studies have investigated the separate problem of minimally selecting/placing sensors or actuators while performing state estimation or state-feedback control. This paper, as mentioned above, studies the joint SA selection in the sense that an observer-based controller, which invokes the separation principle and requires a dynamic system module to perform state estimation, is not needed. For this reason, we do not delve into the literature of separate sensor or actuator selection. Interested readers are referred to our recent work (Taha2017d, ; Nugroho2018, ) for a summary on methods that solve the separate SA selection problems. The literature of addressing the simultaneous sensor and actuator selection problem (SSASP) is summarized in what follows.

Several attempts have been made to address variants of the SSASP in dynamic networks through the more general dynamic output feedback control (DOFC) framework. Specifically, the authors in (de2000linear, ) investigate norm minimization via DOFC with SA selection, in which a reformulated suboptimal problem in the form of a mixed-integer semi-definite program (MI-SDP) is proposed and solved using a coordinate descent algorithm. In (Argha2017, ), the SSASP for multi-channel DOFC with regional pole placement is addressed. In particular, the authors develop an SDP framework and propose a sparsity-promoting algorithm to obtain sparse row/column feedback control matrices. This approach ultimately yields a binary SA selection without needing binary variables. The same algorithm is then employed in (Singh2018, ) for SSA selection with simpler formulations. The SA selection with control configuration selection problem is formulated in (pequito2015, ) using structural design and graph theory, and is proven to be NP-hard. Although this particular problem is similar to the SSASP with SOFC given in (2), the problem proposed in (pequito2015, ), along with the accompanying algorithms, is based on the structural pattern of the dynamic matrix. The limitations of these studies are discussed next.

First, the majority of these works (de2000linear, ; Argha2017, ; Singh2018, ) consider the control framework in conjunction with dynamic output feedback, which requires an additional dynamical system block to construct the control action (which is not the case in SOFC). Second, the work in (de2000linear, ) assumes that the number of SA to be selected is known a priori, which for certain cases is not very intuitive. Third, the sparsity-promoting algorithm proposed in (Argha2017, ; Singh2018, ) is based on a convex relaxation of the $\ell_0$ norm—the re-weighted $\ell_1$ norm—which is solved iteratively until the solution converges, making it unsuitable for larger dynamic networks. The other drawback of this method is that arbitrary convex constraints on the binary selection variables are not easy to include. Finally, the algorithm proposed in (pequito2015, )—which interestingly runs in polynomial time if the structure of the dynamic matrix is irreducible—only computes the structure and the corresponding costs of the feedback matrix (along with the sets of selected SA).

As an alternative to the aforementioned methods, this paper proposes algorithms and heuristics to solve the SSASP for unstable dynamic networks via SOFC. Specifically, we use a sufficiency condition for SOFC from (Crusius1999, ) which reduces the SOF control problem—without the SSA selection—from a nonconvex problem into a simple linear matrix inequality (LMI) feasibility problem. The developed approaches are based on MI-SDP, binary search algorithms, and simple heuristics that use the problem structure to find good suboptimal solutions. A preliminary version of this work appeared in (Nugroho2018, ) where we focus mainly on the MI-SDP approach. Here, we significantly extend this approach with the addition of binary search algorithms, heuristics, thorough analytical discussion of the properties of the developed methods, and comprehensive numerical experiments. The paper contributions and organization are discussed as follows.

Firstly, we formally introduce the SOF stabilizability problem (Section 3). The SSASP through SOFC is then formulated and shown to be a nonconvex problem with mixed-integer nonlinear matrix inequality (MI-NMI) constraints (Section 4). We prove that the SSASP can be formulated as a MI-SDP, and the equivalence between the two is shown (Section 5). The MI-SDP, if solved using combinatorial optimization techniques, yields an optimal solution to the SSASP.

As a departure from the MI-SDP approach, we introduce a routine akin to binary search algorithms that computes an optimal solution for SSASP—the proof of optimality is given. The routine requires a database of binary SA combinations (Section 6).

A heuristic that scales better than the first two approaches is also introduced. The heuristic is based on constructing a simple logic of infeasible or suboptimal combinations of SA, while offering flexibility in terms of the tradeoff between the computational time and distance to optimality (Section 7). Thorough numerical tests showcasing the applicability of the proposed algorithms are provided (Section 8).

The presented algorithms have their limitations, which are discussed together with future work and concluding remarks (Section 9).

## 2 Notation

The symbols $\mathbb{R}^n$ and $\mathbb{R}^{m \times n}$ denote column vectors with $n$ elements and real-valued matrices of size $m$-by-$n$. The sets of symmetric and symmetric positive definite matrices of size $n$ are denoted by $\mathbb{S}^n$ and $\mathbb{S}^n_{++}$. Italicized, boldface upper and lower case characters represent matrices and column vectors—$x$ is a scalar, $\boldsymbol{x}$ is a vector, and $\boldsymbol{X}$ is a matrix. Matrix $I_n$ is an $n \times n$ identity matrix, while $\boldsymbol{0}$ represents zero vectors and matrices of appropriate dimensions. For a square matrix $X$, the notation $\mathrm{eig}(X)$ denotes the set of all eigenvalues of $X$. The function $\mathrm{real}(c)$ extracts the real part of a complex number $c$, whereas $\mathrm{Blkdiag}(\cdot)$ is used to construct a block diagonal matrix. For a matrix $X$, the operator $\mathrm{Vec}(X)$ returns a stacked column vector of the entries of $X$, while $\mathrm{Diag}(X)$ returns a column vector of the diagonal entries of square matrix $X$. The symbol $\otimes$ denotes the Kronecker product. For any $x \in \mathbb{R}$, $|x|$ and $\lceil x \rceil$ denote the absolute value and ceiling function of $x$. The cardinality of a set $\mathcal{S}$ is denoted by $|\mathcal{S}|$, whereas $\boldsymbol{0}_N$ denotes an $N$-tuple with zero-valued elements.

## 3 Static Output Feedback Control Review

In this section, we present some necessary background including the definition of static output feedback stabilizability given a fixed SA combination. This formulation is instrumental in deriving the SSASP.

Consider a dynamic network consisting of $N$ nodes/sub-systems, with $\mathcal{N} = \{1, \dots, N\}$ defining the set of nodes. The network dynamics are given as:

$$\dot{x}(t) = Ax(t) + Bu(t) \tag{3a}$$
$$y(t) = Cx(t). \tag{3b}$$

The state, input, and output vectors corresponding to each node $i \in \mathcal{N}$ are represented by $x_i(t)$, $u_i(t)$, and $y_i(t)$. The global state, input, and output vectors $x(t)$, $u(t)$, and $y(t)$ are obtained by stacking their nodal counterparts. Without loss of generality, we assume that the input and output at each node only correspond to that particular node. The global input-to-state and state-to-output matrices can then be constructed as $B = \mathrm{Blkdiag}(B_1, \dots, B_N)$ and $C = \mathrm{Blkdiag}(C_1, \dots, C_N)$, where $B_i$ and $C_i$ are the nodal input and output matrices. This assumption enforces the coupling among nodes to be represented in the state evolution matrix $A$, which is realistic in various dynamic networks as control inputs and observations are often determined locally. Additionally, we also assume that $B$ is full column rank and $C$ is full row rank. This assumption eliminates the possibility of redundant control nodes and system measurements.

The notion of SOF stabilizability for the dynamic system (3) is provided first. Some needed assumptions are also given.

###### Assumption 1.

The system (3) satisfies the following conditions: (a) the pair $(A, B)$ is stabilizable; (b) the pair $(A, C)$ is detectable; (c) $B$ and $C$ are full rank.

###### Definition 1.

The dynamical system (3) is stabilizable via SOF if there exists $F$ with control law given as $u(t) = Fy(t)$ such that $\mathrm{real}(\lambda) < 0$ for every $\lambda \in \mathrm{eig}(A_{\mathrm{cl}})$, where $A_{\mathrm{cl}} = A + BFC$.

The above definition and assumption are standard in the SOF control literature (syrmos1997static, ; KUCERA1995, ; rosinova2003necessary, ; Astolfi2000, ). In this paper, we consider a simple yet well known necessary and sufficient condition for SOF stabilizability—given next.

###### Proposition 1 (From (syrmos1997static, )).

The dynamical system (3) is SOF stabilizable with output feedback gain $F$ if and only if there exists $P \in \mathbb{S}^n_{++}$ such that

$$A^\top P + PA + C^\top F^\top B^\top P + PBFC \prec 0. \tag{4}$$

In Proposition 1, the matrix inequality (4) is nonconvex due to the bilinearity in $P$ and $F$. Bilinear matrix inequalities (BMIs) have been of great interest in systems and control theory during the past decades (Fukuda2001, ) because many control problems can be formulated as optimization problems with BMI constraints (vanantwerp2000tutorial, ). Indeed, many methods to address problems involving BMIs have been developed. Specifically, methods to solve the BMI for SOF stabilizability in a form similar to (4) are proposed in (Dinh2012, ; Hu2016, ). These methods, based on successive convex approximation, linearize the BMI constraints around a certain strictly feasible point such that the nonconvex problem can be approximated by solving a sequence of convex optimization problems with LMI constraints. Another, less computationally intensive approach that can be used to address the SOF stabilizability problem is introduced in (Crusius1999, ). This approach allows the SOF problem to be solved in an LMI framework—presented in the following proposition.

###### Proposition 2 (From (Crusius1999, )).

The dynamic network (3) is SOF stabilizable if there exist $P \in \mathbb{S}^n_{++}$ and matrices $M$ and $N$ of appropriate dimensions such that the following LMIs are feasible

$$A^\top P + PA + C^\top N^\top B^\top + BNC \prec 0 \tag{5a}$$
$$BM = PB \tag{5b}$$

with SOF gain $F = M^{-1}N$.

The conditions presented in Proposition 2 are only sufficient for SOF stabilizability. Although this makes the SOF stabilization problem considerably easier to solve than the nonconvex BMI, the method has a drawback. In particular, the feasibility of solving (5) as an SDP depends on the state-space representation of system (3), meaning that (5) is not guaranteed to be feasible even if system (3) is known to be stabilizable by SOF; see (Crusius1999, ) for a relevant discussion. However, if the system is indeed stabilizable by SOF, then there exists a state-space transformation for system (3) that leads (5) to be feasible (Crusius1999, ). Further discussion about finding this transformation can be found in (Crusius1999, ; neto1998, ). With that in mind, and to develop tractable computational techniques to solve the SSASP, we use the LMI formulation of the SOF problem. The next section presents the problem formulation.

## 4 Problem Formulation

The SSA selection through SOF control can be defined as the problem of jointly selecting a minimal set—or subset—of SA while still maintaining the stability of the system through the SOF control scheme. To formalize the SSASP, let $\pi_i \in \{0,1\}$ and $\gamma_i \in \{0,1\}$ be two binary variables that represent the selection of SA at node $i$ of the dynamic network. We consider that $\gamma_i = 1$ if the sensor of node $i$ is selected (or activated) and $\gamma_i = 0$ otherwise. Similarly, $\pi_i = 1$ if the actuator of node $i$ is selected and $\pi_i = 0$ otherwise. The augmented dynamics can be formulated as

$$\dot{x}(t) = Ax(t) + B\Pi u(t) \tag{6a}$$
$$y(t) = \Gamma C x(t), \tag{6b}$$

where $\Pi$ and $\Gamma$ are symmetric block matrices defined as

$$\Pi \triangleq \mathrm{Blkdiag}\left(\pi_1 I_{n_{u_1}}, \pi_2 I_{n_{u_2}}, \dots, \pi_N I_{n_{u_N}}\right) \tag{7a}$$
$$\Gamma \triangleq \mathrm{Blkdiag}\left(\gamma_1 I_{n_{y_1}}, \gamma_2 I_{n_{y_2}}, \dots, \gamma_N I_{n_{y_N}}\right). \tag{7b}$$
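The selection matrices in (7) can be assembled with a standard block-diagonal construction; the node input/output dimensions in this sketch are hypothetical:

```python
import numpy as np
from scipy.linalg import block_diag

def selection_matrices(pi, gamma, nu_dims, ny_dims):
    """Build Pi and Gamma of (7) from binary node selections and
    per-node input/output dimensions."""
    Pi = block_diag(*[p * np.eye(d) for p, d in zip(pi, nu_dims)])
    Gamma = block_diag(*[g * np.eye(d) for g, d in zip(gamma, ny_dims)])
    return Pi, Gamma

# Hypothetical 3-node network: node input dims (1, 2, 1), output dims (2, 1, 1).
Pi, Gamma = selection_matrices([1, 0, 1], [0, 1, 1], [1, 2, 1], [2, 1, 1])
print(np.diag(Pi))     # [1. 0. 0. 1.]
print(np.diag(Gamma))  # [0. 0. 1. 1.]
```

Note that deactivating a node zeroes out the entire identity block for that node's inputs or outputs, which is what makes $B\Pi$ and $\Gamma C$ rank-deficient when nodes are dropped.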

The simultaneous sensor and actuator selection problem (SSASP) through SOF stabilizability can be written as in (8). The optimization variables of (8) are $\pi$, $\gamma$, $N$, $M$, and $P$; $\Pi$ and $\Gamma$ are the matrix forms of $\pi$ and $\gamma$ as defined in (7). In the next sections, $\Pi$ and $\Gamma$ will be used interchangeably with $\pi$ and $\gamma$. Constraints (8b) and (8c) are obtained by simply applying the sufficient condition for SOF stabilizability. Constraint (8e) is an additional linear logistic constraint which can be useful to model preferred activation or deactivation of SA on particular nodes and to define the desired minimum and maximum number of activated SA. This constraint is also useful in multi-period selection problems where certain actuators and sensors are deactivated due to logistic constraints. The objective of the paper is to develop computational methods to solve SSASP.


$$\begin{aligned} \textbf{SSASP:}\;\; \underset{\pi,\gamma,N,M,P}{\text{minimize}}\quad & \sum_{k=1}^{N} \pi_k + \gamma_k && (8a)\\ \text{subject to}\quad & A^\top P + PA + C^\top \Gamma N^\top \Pi B^\top + B\Pi N \Gamma C \prec 0 && (8b)\\ & B\Pi M = PB\Pi && (8c)\\ & P \succ 0 && (8d)\\ & \Phi \begin{bmatrix} \pi \\ \gamma \end{bmatrix} \le \phi && (8e)\\ & \pi \in \{0,1\}^N,\; \gamma \in \{0,1\}^N. && (8f) \end{aligned}$$
###### Remark 1.

The solution of SSASP guarantees that the dynamic network is stabilized using the minimal number of SA, as the closed-loop stability is ensured by the sufficient condition for the existence of SOFC given in Proposition 2. This entails that the closed-loop eigenvalues are in the left-half plane, albeit possibly close to the $j\omega$-axis. If it is desired to move the closed-loop eigenvalues further away from the $j\omega$-axis, the matrix inequality (8b) can be upper bounded by $-2\eta P$ for some $\eta > 0$. Larger values for $\eta$ will result in closed-loop eigenvalues further away from marginal stability. Figure 1 shows this relationship for a random dynamic network (described in Section 8), given that all SA are activated after solving the LMI feasibility problem defined by (8b) and (8c) for different values of $\eta$.

###### Remark 2.

It is important to mention that our focus here is to find a SOF control gain that stabilizes dynamical system (6) with a minimum number of SA. With that in mind, performance metrics such as robustness and energy cost functions are not considered in SSASP.

After solving (8), the selected SA are obtained and represented by $\Pi$ and $\Gamma$. Due to the SSA selection, the matrix $B\Pi$ will most likely not be full column rank, hence the existence of an invertible matrix $M$ is not assured. This is not the case when solving (5), since $B$ being full column rank and $P \succ 0$ ensure that $M$ is nonsingular—see Lemma 1. However, if (8) returns an invertible $M$, then the SSASP is solved with the SOF gain computed as $F = M^{-1}N$. Otherwise, the gain can be computed as $\hat{F} = \hat{M}^{-1}\hat{N}$, where $\hat{M}$ and $\hat{N}$ are the submatrices of $M$ and $N$ that correspond to activated SA. Proposition 3 ensures the SOF stabilizability with minimal SA after solving SSASP.

###### Lemma 1.

Let $M$ and $P$ be a solution of $BM = PB$, where $B \in \mathbb{R}^{n \times n_u}$ and $M \in \mathbb{R}^{n_u \times n_u}$. If $\mathrm{rank}(B) = n_u$ and $P \succ 0$, then $M$ is invertible.

###### Proposition 3.

Let $\pi$, $\gamma$, $N$, $M$, and $P$ be the solution of SSASP with appropriate dimensions. Also, let $\hat{B}$, $\hat{C}$, $\hat{M}$, and $\hat{N}$ be the matrices (or submatrices) representing the nonzero components of $B\Pi$, $\Gamma C$, $M$, and $N$ that correspond to activated SA. Then, the closed-loop system is stable with SOF gain $\hat{F} = \hat{M}^{-1}\hat{N}$.

See Appendices A and B for the proofs of Lemma 1 and Proposition 3. SSASP (8) is nonconvex due to the presence of the MI-NMI (8b) and the mixed-integer BMI (8c). Therefore, it cannot be solved by any general-purpose mixed-integer convex programming solver. To that end, we present three approaches that solve or approximate (8). The first approach—presented in the next section—is developed utilizing techniques from linear algebra and the disjunctive programming principle (nemhauser1988integer, ; grossmann2002review, ). The other two approaches are developed based on a binary search algorithm and heuristics, as presented in Sections 6 and 7.

## 5 SSASP as a MI-SDP

In this section, we present the first approach to solve (8), which transforms SSASP from a mixed-integer nonconvex problem to a MI-SDP. The following theorem presents this result.

###### Theorem 1.

SSASP is equivalent to

$$\begin{aligned} \underset{\pi,\gamma,N,M,P,\Theta}{\text{minimize}}\quad & \sum_{k=1}^{N} \pi_k + \gamma_k && (9a)\\ \text{subject to}\quad & A^\top P + PA + C^\top \Theta^\top B^\top + B\Theta C \prec 0 && (9b)\\ & \Psi_1(N,\Theta) \le L_1 \Delta_1(\Gamma,\Pi) && (9c)\\ & \Psi_2(M,\Omega(P)) \le L_2 \Delta_2(\Pi) && (9d)\\ & \Psi_3(\Xi(P)) \le L_3 \Delta_3(\Pi) && (9e)\\ & P \succ 0,\;\; \Phi \begin{bmatrix} \pi \\ \gamma \end{bmatrix} \le \phi,\;\; \pi \in \{0,1\}^N,\;\gamma \in \{0,1\}^N, && (9f) \end{aligned}$$

where (9c), (9d), (9e) are linear constraints in which each function is specified as

$$\Psi_1(N,\Theta) \triangleq \begin{bmatrix} \mathrm{Vec}(\Theta) \\ -\mathrm{Vec}(\Theta) \\ \mathrm{Vec}(\Theta) \\ -\mathrm{Vec}(\Theta) \\ \mathrm{Vec}(\Theta - N) \\ -\mathrm{Vec}(\Theta - N) \end{bmatrix} \tag{10a}$$

$$\Delta_1(\Gamma,\Pi) \triangleq \begin{bmatrix} \mathrm{Diag}(I_{n_y} \otimes \Pi) \\ \mathrm{Diag}(I_{n_y} \otimes \Pi) \\ \mathrm{Diag}(\Gamma \otimes I_{n_u}) \\ \mathrm{Diag}(\Gamma \otimes I_{n_u}) \\ \mathrm{Diag}(2 I_{n_u n_y} - I_{n_y} \otimes \Pi - \Gamma \otimes I_{n_u}) \\ \mathrm{Diag}(2 I_{n_u n_y} - I_{n_y} \otimes \Pi - \Gamma \otimes I_{n_u}) \end{bmatrix} \tag{10b}$$

$$\Psi_2(M,\Omega(P)) \triangleq \begin{bmatrix} \mathrm{Vec}(M) \\ -\mathrm{Vec}(M) \\ \mathrm{Vec}(\Omega(P)) \\ -\mathrm{Vec}(\Omega(P)) \\ \mathrm{Vec}(M - \Omega(P)) \\ -\mathrm{Vec}(M - \Omega(P)) \end{bmatrix} \tag{10c}$$

$$\Delta_2(\Pi) \triangleq \begin{bmatrix} \mathrm{Diag}(I_{n_u^2} - I_{n_u} \otimes \Pi + \Pi \otimes I_{n_u}) \\ \mathrm{Diag}(I_{n_u^2} - I_{n_u} \otimes \Pi + \Pi \otimes I_{n_u}) \\ \mathrm{Diag}(I_{n_u^2} + I_{n_u} \otimes \Pi - \Pi \otimes I_{n_u}) \\ \mathrm{Diag}(I_{n_u^2} + I_{n_u} \otimes \Pi - \Pi \otimes I_{n_u}) \\ \mathrm{Diag}(2 I_{n_u^2} - I_{n_u} \otimes \Pi - \Pi \otimes I_{n_u}) \\ \mathrm{Diag}(2 I_{n_u^2} - I_{n_u} \otimes \Pi - \Pi \otimes I_{n_u}) \end{bmatrix} \tag{10d}$$

$$\Psi_3(\Xi(P)) \triangleq \begin{bmatrix} \mathrm{Vec}(\Xi(P)) \\ -\mathrm{Vec}(\Xi(P)) \end{bmatrix} \tag{10e}$$

$$\Delta_3(\Pi) \triangleq \tag{10f}$$

and $\Omega(\cdot)$ and $\Xi(\cdot)$ are functions defined as

$$\Omega(P) \triangleq (B^\top B)^{-1} B^\top P B \tag{11a}$$
$$\Xi(P) \triangleq \left(I - B (B^\top B)^{-1} B^\top\right) P B, \tag{11b}$$

$\Theta \in \mathbb{R}^{n_u \times n_y}$ is an additional optimization variable, and $L_1$, $L_2$, and $L_3$ are predefined, sufficiently large constants.

###### Remark 3.

Although (9) is equivalent to SSASP, the quality of the solution that comes out of (9) is highly dependent on the choice of $L_1$, $L_2$, and $L_3$. This observation is corroborated by the numerical test results discussed in Section 8.

The proof of Theorem 1 is given in Appendix C. This theorem allows the SSASP to be solved as a MI-SDP, which can be handled using a variety of optimization methods such as branch-and-bound algorithms (gamrath2016scip, ; gally2017, ), outer approximations (LubinYamangilBentVielma2016, ), or cutting-plane methods (Kong2010, ). The next section presents a departure from MI-SDP to an algorithm that returns optimal solutions to SSASP without requiring the constants $L_1$, $L_2$, and $L_3$.
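To see how linear constraints of the form (9c)-(9e) can replace bilinear terms, consider a scalar prototype of the underlying disjunctive (big-M) construction: with $\theta$ standing in for a product $\pi f$ of a binary $\pi$ and a bounded continuous $f$, four linear inequalities pin down $\theta$ exactly. This is a generic sketch of the principle, not the paper's exact matrices:

```python
def big_m_consistent(theta, f, pi, L):
    """Big-M linearization of the product theta = pi * f,
    with pi binary and |f| <= L:
        -L*pi <= theta <= L*pi
        -L*(1 - pi) <= theta - f <= L*(1 - pi)
    For sufficiently large L these force theta = pi * f exactly."""
    return abs(theta) <= L * pi and abs(theta - f) <= L * (1 - pi)

L = 10.0
# Brute-force check on a grid: the true product pi * f is always feasible,
# and any other value of theta is cut off.
for pi in (0, 1):
    for f in (-9.5, -1.0, 0.0, 2.5, 9.5):
        assert big_m_consistent(pi * f, f, pi, L)
        assert not big_m_consistent(pi * f + 1.0, f, pi, L)
print("big-M linearization verified on scalar grid")
```

When $\pi = 1$ the second pair of inequalities collapses to $\theta = f$; when $\pi = 0$ the first pair collapses to $\theta = 0$, which is exactly the behavior the selection variables must induce on the matrix products in (9).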

## 6 Binary Search Algorithm for SSA Selection

In this section we present an algorithm that is similar, in spirit, to binary search algorithms. The algorithm presented here seeks optimality for SSASP while not requiring any tuning parameters such as $L_1$, $L_2$, and $L_3$; see Theorem 1.

### 6.1 Definitions and Preliminaries

In what follows, we provide some needed definitions.

###### Definition 2.

Let $\pi$ and $\gamma$ be two $N$-tuples representing the selection of SA, i.e., $\pi = (\pi_1, \dots, \pi_N)$ and $\gamma = (\gamma_1, \dots, \gamma_N)$. Then, a selection of SA can be defined as the tuple $S = (\pi, \gamma) \in \{0,1\}^{2N}$, from which $\pi$, $\gamma$, and the matrix pair $(\Pi, \Gamma)$ are recovered through linear mappings; we write $G(S) = (\Pi, \Gamma)$. The number of activated SA is defined as $|S| = \sum_{k=1}^{N} \pi_k + \gamma_k$.

###### Definition 3.

Let $\mathcal{S}$ be the candidate set such that it contains all possible combinations of SA, where $q \triangleq |\mathcal{S}|$ denotes the number of total combinations. Then, the following conditions hold:

1. For every $S \in \mathcal{S}$, it holds that

$$S \in \left\{ S \in \{0,1\}^{2N} \,\middle|\, G(S) \text{ is feasible for (8e)} \right\},$$
2. For every $S_i, S_j \in \mathcal{S}$ where $i \ne j$, $S_i \ne S_j$.

###### Definition 4.

For any $S$ with $G(S) = (\Pi, \Gamma)$, $\hat{B}$ and $\hat{C}$ are defined as the matrices containing the nonzero components of $B\Pi$ and $\Gamma C$ that correspond to the activated SA. Then, we say that $S$ is feasible for (5) if and only if the LMIs (5), with $\hat{B}$ and $\hat{C}$ in place of $B$ and $C$, are feasible.

The following example shows how the candidate set is constructed for a given simple logistic constraint.

###### Example 1.

Suppose that the dynamical system has two nodes ($N = 2$). If the logistic constraint dictates that $1 \le |S| \le 3$, then $\mathcal{S}$ can be constructed as

$$\begin{aligned} \mathcal{S} = \{ & (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (1,1,0,0), (1,0,1,0), (1,0,0,1),\\ & (0,1,1,0), (0,1,0,1), (0,0,1,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (0,1,1,1) \}. \end{aligned}$$

### 6.2 Binary Search Algorithm to Solve SSASP

The objective of this algorithm is to find an optimal solution $S^*$ such that $|S^*| \le |S|$ for all $S \in \mathcal{S}$ that are feasible for (5); that is, for $S^*$ there exists a corresponding feedback gain that stabilizes dynamical system (6). Realize that any $S \in \mathcal{S}$ that is feasible for (5) is feasible for SSASP with objective function value equal to $|S|$.

The routine to solve SSASP based on a binary search algorithm is now explained and summarized in Algorithm 1. Let $k$ be the iteration index and $j$ the index of a position in the ordered set $\mathcal{S}$. Hence at iteration $k$, the candidate set containing all possible combinations of SA can be represented as $\mathcal{S}_k$, with $\mathcal{S}_1 = \mathcal{S}$, and any element of $\mathcal{S}_k$ at position $j$ can be represented by $S_{k,j}$. Also, let $S_k^*$ be the known best solution at iteration $k$.

Next, obtain the middle candidate $S_{k,j}$ with $j = \lceil |\mathcal{S}_k|/2 \rceil$. At this step, we need to determine whether system (6) is SOF stabilizable with the particular combination of SA $S_{k,j}$. To that end, we solve the LMIs (5). If $S_{k,j}$ is feasible for (5), then update $S_k^* = S_{k,j}$. Since $S_{k,j}$ is feasible, we can discard all combinations that have a greater or equal number of activated SA, i.e., the combinations that are suboptimal. Otherwise, if $S_{k,j}$ is infeasible for (5), $S_{k,j}$ can be discarded along with all combinations that (a) have fewer activated SA than $S_{k,j}$ and (b) whose activated SA are included in $S_{k,j}$. Realize that the above method reduces the size of $\mathcal{S}_k$ in every iteration since one or more elements of $\mathcal{S}_k$ are discarded. The algorithm continues and terminates whenever $\mathcal{S}_k = \emptyset$. The details of this algorithm are given in Algorithm 1. Example 2 gives an illustration of how $\mathcal{S}_k$ is constructed in every iteration.

###### Example 2.

Consider again the dynamic system from Example 1. Let $S_{1,7} = (1,0,0,1)$ be the starting combination and, for the sake of illustration, assume that (5) is infeasible for this combination. Then, by Algorithm 1, the combinations $(1,0,0,0)$ and $(0,0,0,1)$ are discarded along with $(1,0,0,1)$. The candidate set now comprises the following elements

$$\begin{aligned} \mathcal{S}_2 = \{ & (0,1,0,0), (0,0,1,0), (1,1,0,0), (1,0,1,0), (0,1,1,0), (0,1,0,1),\\ & (0,0,1,1), (1,1,1,0), (1,1,0,1), (1,0,1,1), (0,1,1,1) \}. \end{aligned}$$

Let $S_{2,6} = (0,1,0,1)$ be the new starting point and assume that this combination is feasible for (5). Then, all combinations that have a greater or equal number of activated SA can be discarded. The remaining possible candidates in the candidate set are

$$\mathcal{S}_3 = \{ (0,1,0,0), (0,0,1,0) \}.$$

This algorithm continues in a fashion similar to the above routine. If none of the combinations in $\mathcal{S}_3$ is feasible, then Algorithm 1 returns $(0,1,0,1)$ as the solution.
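The pruning logic of the routine above can be sketched as follows. The feasibility oracle is a mock stand-in for solving the LMIs (5), the midpoint rule $j = \lceil |\mathcal{S}_k|/2 \rceil$ is assumed, and the candidate list reproduces Example 1:

```python
from math import ceil

def binary_search_selection(candidates, is_feasible):
    """Sketch of Algorithm 1: probe the middle of the ordered candidate set,
    pruning suboptimal combinations after a feasible probe and contained
    sub-combinations after an infeasible probe (monotone feasibility)."""
    S = list(candidates)
    best = None
    while S:
        probe = S[ceil(len(S) / 2) - 1]       # middle element (1-indexed ceil)
        if is_feasible(probe):
            best = probe
            # drop every combination with >= as many activated SA (suboptimal)
            S = [c for c in S if sum(c) < sum(probe)]
        else:
            # drop the probe and all combinations contained in its activated set
            S = [c for c in S
                 if not all(ci <= pi for ci, pi in zip(c, probe))]
    return best

# Candidate set of Example 1, ordered as in the text.
cands = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1),
         (1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1),
         (1,1,1,0), (1,1,0,1), (1,0,1,1), (0,1,1,1)]

# Mock oracle: feasible iff positions 1 and 3 are activated.
print(binary_search_selection(cands, lambda c: c[1] == 1 and c[3] == 1))
# (0, 1, 0, 1)
```

Running this sketch reproduces the trace of Example 2: the first probe $(1,0,0,1)$ is infeasible and prunes its sub-combinations, the second probe $(0,1,0,1)$ is feasible, and the two remaining singletons are infeasible, so $(0,1,0,1)$ is returned.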

In what follows, we discuss the optimality of Algorithm 1 through Theorem 2—see Appendix D for the proof.

###### Theorem 2.

Algorithm 1 yields an optimal solution of SSASP.

###### Remark 4.

The solution $S^*$ from Algorithm 1 might not be unique. This is the case since there could be multiple binary combinations of SA yielding the same number of activated SA, and hence the same objective function value $|S^*|$.

## 7 Heuristics to Solve SSASP

The binary search algorithm in the previous section requires the construction of the candidate set in an off-line database, while leading to an optimal solution for the SSASP. Seeking optimality and constructing an off-line database might be impractical for large-scale dynamic networks. Moreover, the other approach presented in Section 5 entails solving (9), a MI-SDP, which might consume large computational resources. This motivates the development of a heuristic for the SSASP that forgoes optimality—the focus of this section.

In short, the heuristic builds a dynamic, virtual database of all possible combinations—not by generating all of these combinations, but by having a procedure that identifies suboptimal/infeasible candidates—while attempting to find a SA combination that has the least number of activated SA that makes system (6) SOF stabilizable. The high-level description of this heuristic is given as follows:

1. Generate a random SA candidate $S$;

2. If $S$ is in the forbidden set (the set of suboptimal/infeasible combinations), repeat Step 1;

3. If $S$ is infeasible for (5), add $S$ to the forbidden set and repeat Step 1;

4. If $S$ is feasible for (5), add $S$ to a set of candidate solutions, discard suboptimal candidates, and repeat Step 1 with fewer activated SA;

5. Three parameters guide how many times these steps are repeated: the maximum number of iterations, the maximum number of random candidate generations per iteration, and the number of LMI feasibility problems solved for a fixed number of activated SA.

We now introduce the details of the heuristic. Define

$$\mathcal{W} \triangleq \left\{ S \in \{0,1\}^{2N} \,\middle|\, G(S) \text{ is not feasible for (8e)} \right\},$$

that is, $\mathcal{W}$ is a finite set that comprises all combinations of SA that do not satisfy the logistic constraint (8e). Since we are interested in finding a candidate that is feasible for (5), all elements in $\mathcal{W}$ do not need to be known. Instead, we just need to check whether $S \in \mathcal{W}$ using the logistic constraint (8e). Next, from the logistic constraint (8e), we define the required minimum and maximum numbers of activated SA that any candidate $S$ must respect. These two bounds can also be used to restrict the search space of a potential candidate $S$. In contrast with Algorithm 1, the heuristic constructs and updates—in each iteration—a set that contains combinations of SA that are known to be infeasible for (8). This finite set is referred to as the forbidden set, and clearly $\mathcal{W}$ is contained in it. Thus, any candidate $S$ must not belong to the forbidden set, because any element of the forbidden set is infeasible for (5) and/or belongs to $\mathcal{W}$. To get a potential candidate for this heuristic, we can randomly generate $S$ such that it does not belong to the forbidden set.

The heuristic is described as follows and summarized in Algorithm 2. An essential part of the algorithm is a simple procedure to obtain a candidate that does not belong to the forbidden set—this procedure is shown in Algorithm 3. First, from the logistic constraint, the minimum and maximum numbers of activated SA are initialized. Let $k$ denote the iteration index, and let the desired number of activated SA for the candidate at iteration $k$ lie between these two bounds. The next step is to generate a candidate with the desired number of activated SA that does not belong to the forbidden set. As mentioned earlier, one simple method is to randomly generate such a candidate—see Algorithm 3 for the detailed steps. If such a candidate cannot be obtained after a prescribed number of combinations of SA have been randomly generated, we can assert that the majority of combinations with this number (or fewer) of activated SA most likely belong to the forbidden set. Given this condition, the required minimum number of activated SA can then be increased and updated. If a candidate is obtained, we must check whether it is feasible for (5). If it is feasible, we update the best known solution; otherwise, we update the forbidden set. Unlike Algorithm 1, here we allow (5) to be solved repeatedly with different candidates having the same number of activated SA. This process is repeated until there exists a candidate that makes (5) feasible or the corresponding trial limit is reached. If (5) is still infeasible, we increase the required number of activated SA for the next candidate, hoping that adding more activated SA will increase the chance of (5) being feasible. The algorithm continues and terminates when the maximum number of iterations is reached or there are no more candidates that can be generated. At the end of Algorithm 2, the best suboptimal combination of SA is returned.

The algorithm by its nature allows a trade-off between the computational time and the distance to optimality. This trade-off can be designed by selecting large values for the three parameters above: the first depends on how long the user is willing to wait before the algorithm terminates; the second imposes an upper bound on how many times a random SA candidate is generated so that it does not belong to the forbidden set; and the third defines how many LMI feasibility problems are solved with a fixed number of activated SA.
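The main loop of the heuristic can be sketched as follows. The oracle, the parameter names (`max_iter`, `max_gen`), and the shrinking-target rule are illustrative stand-ins for Algorithms 2 and 3, not the paper's notation:

```python
import random

def heuristic_selection(n_bits, s_min, s_max, is_feasible,
                        max_iter=200, max_gen=50, seed=1):
    """Sketch of Algorithms 2-3: randomly generate candidates with a target
    number of activated SA, maintain a forbidden set of known-infeasible
    combinations, and shrink the target after each feasible find."""
    rng = random.Random(seed)
    forbidden = set()
    best = None
    target = s_max  # desired number of activated SA, shrinks on success
    for _ in range(max_iter):
        if target < s_min:
            break
        # Algorithm 3 stand-in: draw a candidate with `target` activated SA
        # that is not in the forbidden set.
        cand = None
        for _ in range(max_gen):
            bits = [0] * n_bits
            for i in rng.sample(range(n_bits), target):
                bits[i] = 1
            c = tuple(bits)
            if c not in forbidden:
                cand = c
                break
        if cand is None:
            break  # the search space at this size looks exhausted
        if is_feasible(cand):
            best = cand
            target = sum(cand) - 1  # try fewer activated SA next
        else:
            forbidden.add(cand)
    return best

# Mock oracle standing in for the LMI test (5): feasible iff bits 1 and 3 active.
best = heuristic_selection(4, 1, 4, lambda c: c[1] == 1 and c[3] == 1)
print(best)  # a feasible combination with bits 1 and 3 active
```

Because `best` is only updated on feasible candidates, the returned combination is always stabilizing under the oracle, though not necessarily minimal, which mirrors the suboptimality accepted by the heuristic.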

## 8 Numerical Experiments

Numerical experiments are presented here to test the proposed approaches on two dynamic networks. The first system is a random dynamic network adopted from (Motee2008, ; MihailoSiteUnstableNodes, ), whereas the second is a mass-spring system (lin2013design, ). Both systems are initially unstable, and the latter has a sparser structure than the former. All simulations are performed using MATLAB R2016b running on a 64-bit Windows 10 machine with a 3.4GHz Intel Core i7-6700 CPU and 16 GB of RAM. All optimization problems are solved using MOSEK version 8.1 (mosekAps, ) with YALMIP (lofberg2004yalmip, ). All the MATLAB codes used in the paper are available to the interested reader upon request.

### 8.1 Comparing the Proposed Algorithms

In the first part of our numerical experiments, we focus on testing the performance of the proposed approaches to solve SSASP on a relatively small dynamic network where optimality for the SSASP can be determined. Specifically, we consider the aforementioned random dynamic network with 10 subsystems, with two states per subsystem, so that 10 sensors and 10 actuators are available ($N = 10$). Each sensor measures the two states of its subsystem. We impose a logistic constraint so that at least one sensor and one actuator are activated. In this particular experiment, we consider the following scenarios.

1. MI-SDP-1: The first scenario uses the results from Theorem 1, which show the equivalence between SSASP and (9)—the latter is solved via YALMIP’s MI-SDP branch and bound (BnB) solver (yalmipMI, ). Fixed values are chosen for the constants $L_1$, $L_2$, and $L_3$. The maximum number of iterations of the BnB solver is chosen to be 1000.

2. MI-SDP-2: The second scenario is identical to the first one with the exception that different values are used for $L_1$, $L_2$, and $L_3$. This scenario shows the impact of the tuning parameters on the performance of the MI-SDP approach.

3. BSA: The third scenario directly follows Algorithm 1 and solves (5) in each iteration to check the feasibility of the given SA combinations, while also computing the SOF gain matrix simultaneously from the solution of LMI (5).

4. HEU: In the fourth scenario, we implement the heuristic as described in Algorithms 2 and 3. The parameters of the heuristic in this scenario are ,