The Robust Minimal Controllability Problem
Abstract
In this paper, we address two minimal controllability problems, where the goal is to determine a minimal subset of state variables in a linear time-invariant system to be actuated to ensure controllability under additional constraints. First, we study the problem of characterizing the sparsest input matrices that ensure controllability when the autonomous dynamics’ matrix is simple. Secondly, we build upon these results to describe the solutions to the robust minimal controllability problem, where the goal is to determine the sparsest input matrix ensuring controllability when a specified number of inputs may fail. Both problems are NP-hard, but under the assumption that the dynamics’ matrix is simple, we show that it is possible to reduce these two problems to set multicovering problems. Consequently, these problems share the same computational complexity, i.e., they are NP-complete, but polynomial algorithms that approximate the solution of a set multicovering problem can be leveraged to obtain close-to-optimal solutions to either of the minimal controllability problems.
I Introduction
The problem of guaranteeing that a dynamical system can be driven toward a desired state regardless of its initial position is a fundamental question that has been studied in control systems and is referred to as controllability. Several applications, for instance, control processes, control of large flexible structures, systems biology and power systems [1, 2, 3], rely on the notion of controllability to safeguard their proper functioning. Furthermore, as systems become larger (i.e., as the dimension of their state space grows), we often aim to identify a relatively small subset of state variables that ensures the controllability of the system, for instance, due to economic constraints [4]. Consequently, it is natural to pose the following question.
\mathcal{Q}_{1}: Which state variables need to be directly actuated to ensure the controllability of a dynamical system?
Question \mathcal{Q}_{1} can be formally captured by the minimal controllability problem (MCP) [4], which aims to determine the minimum number of state variables that need to be actuated to ensure the system’s controllability. Unfortunately, the MCP was shown to be NP-hard [4], which implies that a polynomial-time algorithm to determine its solution is unlikely to exist.
The MCP is also fundamental to understanding resilience and robustness properties of dynamical systems since it unveils which variables need to be actuated. These resilience/robustness properties are crucial to coping with the adverse nature of the environments where the actuators are deployed; due to the wear and tear of the materials, some of these actuators may malfunction over time. In addition, the inputs can malfunction due to a malicious external agent who aims to tamper with the inputs to compromise the system behavior. In fact, a classical example of such a malicious attack is the Stuxnet malware incident [5], in which the controller’s input response to a tampered measured output led the system away from its normal operating conditions.
Therefore, from a design perspective, we would like to deploy actuators in the system such that any subset with at most a specified number of actuators can fail without compromising the controllability of the system. Subsequently, invoking reasons similar to those for the MCP, we can seek to address the robust MCP (rMCP), which aims to determine the sparsest input matrix that ensures controllability if at most a specified number of actuators fail. It is important to mention that both minimal controllability problems can be stated in terms of observability, by invoking the duality between controllability and observability in LTI systems [6]. In particular, [7, 8, 9] provide necessary and sufficient conditions concerning the sensor deployment to ensure that a reliable estimate of the system is recovered. More importantly, those conditions can be achieved by design, by solving the rMCP.
Related Work: The understanding of which state variables need to be actuated to ensure certain properties of the system has been an active research area [10]. Initially, the goal was to establish stability and/or asymptotic stability of the dynamics toward a reference point, for instance, a consensus or agreement value [11, 12]. The trend has since shifted to ensuring that the system is controllable, since (often) we want to guarantee that a control law exists such that an arbitrary goal or desired state is achieved in finite time.
This paper follows up on and subsumes some of the existing literature where the dynamics’ matrix is assumed to be the Laplacian, symmetric (modeling undirected graphs) and/or irreducible (modeling directed graphs whose digraph representation is a strongly connected component). In [13], the controllability of circulant networks is analyzed by exploring the Popov-Belevitch-Hautus eigenvalue criterion, where the eigenvalues are characterized using the Cauchy-Binet formula. The controllability of multi-agent systems with Laplacian dynamics was initially explored in [14]. Later, in [15, 16], the controllability for Laplacian dynamics is studied, and necessary and sufficient conditions are given in terms of partitions of the graph. In [17], controllability is explored for paths and cycles, and later extended by the same authors to the controllability of grid graphs by means of reductions and symmetries of the graph [18], considering dynamics that are scaled Laplacians. In [19, 20], controllability is studied for strongly regular graphs and distance-regular graphs. Recently, in [21, 22], new insights on the controllability of Laplacian dynamics are given regarding the uncontrollable subspace. In addition, in [23], the controllability of isotropic and anisotropic networks is analyzed.
Furthermore, [21] concludes by pointing out that further study of non-symmetric dynamics and their controllability is required – which we address in the present paper. Note that the MCPs lie within the framework of sparse optimization subject to a rank constraint. Further, we notice that the problem addressed does not belong to known classes where polynomial solutions are available [24], nor does it resort to convex relaxation schemes, for which no suboptimality guarantees are available. Instead, we consider a much less restrictive assumption: A is a simple matrix, i.e., all its eigenvalues are distinct. Furthermore, there are several applications where A satisfies this assumption, for instance, all dynamical systems modeled as random networks of the Erdős-Rényi type [25], as well as several known dynamical systems used as benchmarks in control systems engineering [26].
Observe that the MCP presents both continuous and discrete optimization properties, captured by the controllability property and the number of nonzero entries, respectively. To circumvent the mixed nature of this problem, in [4] the nonzero entries of the input matrix were randomly generated. In the present paper, we ‘decouple’ the continuous and discrete optimization properties, and show that, by first solving the discrete part of the problem, it is always possible to deterministically obtain a solution to the MCP in a second phase. Besides, the first step reduces the MCP to the set covering problem – well known to be NP-hard. Nonetheless, the set covering problem is likely one of the most studied NP-hard problems (probably second only to the SAT problem). Subsequently, although the set covering problem is NP-hard, some subclasses of the problem are equipped with sufficient structure that can be leveraged to invoke a polynomial algorithm that approximates the solution with ‘almost’ optimality guarantees [27]. This contrasts with the approach proposed in [4], where an approximate solution particular to the MCP was provided. In addition, we study the rMCP, which has not been previously addressed in the literature. Similarly to the MCP, we show that the rMCP can be polynomially reduced to the set multicovering problem, i.e., a set covering problem that allows the same elements to be covered a predefined number of times. Furthermore, extensions of polynomial approximation algorithms are also available with similar optimality guarantees.
Alternatively, in [28], instead of determining the sparsest input matrix ensuring controllability, the aim is to determine the sparsest input matrix that ensures structural controllability, which we refer to as the minimal structural controllability problem (MSCP) – see Section III for formal definitions and problem statement. Briefly, the MSCP focuses on the structure of the dynamics, i.e., the location of zeros/nonzeros, and the obtained sparsest input matrix is such that, for almost all matrices satisfying the structure of the dynamics and the input matrix, the system is controllable [29]. Notwithstanding, in the present paper, we provide an example where the solution to the minimal structural controllability problem is not necessarily a solution to the minimal controllability problem when the dynamics’ matrix is simple; hence, disproving the general belief that a solution to the MSCP is a solution to the MCP in such cases. Further, we emphasize that the solution to the MSCP has been fully explored in [28] and can be determined by resorting to polynomial complexity algorithms; more precisely, \mathcal{O}(n^{3}), where n is the dimension of the state space. In addition, the minimum number of state variables to achieve structural controllability can account for the scenario where actuating different state variables incurs different costs [30]. Further, if the collection of possible actuators is given a priori and we seek the minimum number of these actuators to ensure structural controllability, then the problem is NP-hard [31]. Finally, [32] studies the structural counterpart of the rMCP under one failure, which is also proved to be NP-hard, and shown to be reducible to a weighted set covering problem. In particular, the reductions and the objects captured by the sets in the set covering problem in [32] are entirely different from those of the problems explored in this paper, mainly due to the nature of the problems. \circ
Main Contributions of the present paper are as follows: (i) we characterize the exact solutions to the MCP; (ii) we show that, for a given dynamics’ matrix, almost all input vectors satisfying a specified structure are solutions to the MCP; (iii) we prove that the rMCP is an NP-hard problem; (iv) we characterize the exact solutions to the rMCP; (v) we show that the decision versions of both MCPs are NP-complete; (vi) we provide approximate solutions to both MCPs and discuss their optimality guarantees; and, finally, (vii) we discuss the limitations of the proposed methodology. \circ
The remainder of this paper is organized as follows. In Section II, we formally state both MCPs addressed in this paper. Next, in Section III, we review concepts from computational complexity and control systems that are essential to keep this paper selfcontained. In Section IV, we present the main results of this paper: we characterize the solutions to the MCPs, their complexity, and polynomial algorithms that approximate the solution. Finally, in Section V we provide some examples that illustrate the main results of the paper and discuss the limitations of the proposed methodology.
Notation: We denote vectors by small font letters such as v,w,b and their corresponding entries by subscripts; for example, v_{i} corresponds to the ith entry of the vector v. A collection of vectors is denoted by \{v^{j}\}_{j\in\mathcal{J}}, where the superscript indicates an enumeration of the vectors using indices from a set (usually denoted by a calligraphic letter) such as \mathcal{I},\mathcal{J}\subset\mathbb{N}. The number of elements of a set \mathcal{S} is denoted by |\mathcal{S}|. Real-valued matrices are denoted by capital letters, such as A, B, and A_{i,j} denotes the entry in the ith row and jth column of matrix A. We denote by I_{n} the n-dimensional identity matrix. Given a matrix A, \sigma(A) denotes the set of eigenvalues of A, also known as the spectrum of A. Given two matrices M_{1}\in\mathbb{C}^{n\times m_{1}} and M_{2}\in\mathbb{C}^{n\times m_{2}}, the matrix [M_{1}\ M_{2}] corresponds to the n\times(m_{1}+m_{2}) concatenated complex matrix. The structural pattern of a vector/matrix (i.e., the zero/nonzero pattern) or a structural vector/matrix have their entries in \{0,\star\}, where \star denotes a nonzero entry, and they are denoted by a vector/matrix with a bar on top of it. In other words, \bar{A} denotes a matrix with \bar{A}_{i,j}=0 if A_{i,j}=0 and \bar{A}_{i,j}=\star otherwise. We denote by A^{\intercal} the transpose of A. The function \cdot:\mathbb{C}^{n}\times\mathbb{C}^{n}\rightarrow\mathbb{C} denotes the usual inner product in \mathbb{C}^{n}, i.e., v\cdot w=v^{\dagger}w, where v^{\dagger} denotes the adjoint of v (the conjugate of v^{\intercal}). With some abuse of notation, \cdot:\{0,\star\}^{n}\times\{0,\star\}^{n}\rightarrow\{0,\star\} also denotes the map where \bar{v}\cdot\bar{w}\neq 0, with \bar{v},\bar{w}\in\{0,\star\}^{n}, if and only if there exists i\in\{1,\ldots,n\} such that \bar{v}_{i}=\bar{w}_{i}=\star. Additionally, \|v\|_{0} denotes the number of nonzero entries of the vector v in either \{0,\star\}^{n} or \mathbb{R}^{n}.
Given a subspace \mathcal{H}\subset\mathbb{C}^{n}, we denote by \mathcal{H}^{\mathsf{c}} its complement with respect to \mathbb{C}^{n}, i.e., \mathcal{H}^{\mathsf{c}}=\mathbb{C}^{n}\setminus\mathcal{H}. In addition, inequalities involving vectors are to be interpreted componentwise. With abuse of notation, we will use inequalities involving structural vectors as well – for instance, we say \bar{v}\geq\bar{w} for two structural vectors \bar{v} and \bar{w} if and only if the following two conditions hold: (i) if \bar{w}_{i}=0, then \bar{v}_{i}\in\{0,\star\}, and (ii) if \bar{w}_{i}=\star, then \bar{v}_{i}=\star.
II Problem Statements
In this paper, we focus on dynamical systems modeled by discretetime linear timeinvariant (LTI) systems, but the results are readily applicable to continuoustime LTI systems. We will neglect the output equation because we are only addressing the controllability problem. Therefore, consider systems described by
x(k+1)=Ax(k)+Bu(k),\quad x(0)=x_{0},  (1) 
where x\in\mathbb{R}^{n} is the state of the system, u\in\mathbb{R}^{p} is the input signal exerted by the actuators, and k\in\mathbb{N} denotes the time instant. The matrix A\in\mathbb{R}^{n\times n}, which is referred to as the system dynamics’ matrix, describes the coupling between state variables. The matrix B\in\mathbb{R}^{n\times p} is the input matrix and describes the state variables that the inputs act on. As previously mentioned, it is often desirable that the LTI system (1) be controllable, i.e., that the system can be steered toward a desired state in at most n steps regardless of the initial state x_{0}, in which case the pair (A,B) is said to be controllable.
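Controllability of a pair (A,B) can be verified numerically. The following NumPy sketch is our own illustration (the function name, tolerance, and example matrices are not from the paper); it uses the standard Kalman rank condition, which is equivalent to the PBH tests reviewed in Section III.

```python
import numpy as np

def is_controllable(A, B, tol=1e-9):
    """Kalman rank condition: (A, B) is controllable iff
    rank([B, AB, ..., A^{n-1}B]) == n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])  # next block A^k B
    C = np.hstack(blocks)
    return np.linalg.matrix_rank(C, tol=tol) == n

# A simple (distinct-eigenvalue) diagonal A: a single input touching
# every state suffices, but any zero entry leaves a mode unreachable.
A = np.diag([1.0, 2.0, 3.0])
assert is_controllable(A, np.array([[1.0], [1.0], [1.0]]))
assert not is_controllable(A, np.array([[1.0], [0.0], [1.0]]))
```

For a diagonal simple A, the controllability matrix of a single all-nonzero input column is a (generalized) Vandermonde matrix, hence full rank; this is exactly the situation exploited throughout Section IV.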
The first problem addressed in this paper is the MCP, that can be formally stated as follows.
\mathcal{P}_{1}: Given the system dynamics’ matrix A determine the input matrix B\in\mathbb{R}^{n\times n} such that
\begin{array}[]{cc}B^{*}=\arg\min\limits_{B\in\mathbb{R}^{n\times n}}&\|B\|_{0}\\ \text{s.t.}&(A,B)\text{ controllable.}\end{array}  (2)
Notice that the input matrix is assumed to be n\times n to ensure that a solution exists, since the identity matrix always ensures the system’s controllability.
Alternatively, under the adverse scenarios of failure or malicious tampering of the actuators, the dynamics of the system can be modeled by
x(k+1)=Ax(k)+Bu(k)+a(k),  (3) 
where the malfunctioning inputs correspond to nonzero entries in a\in\mathbb{R}^{n} representing an alteration of the actuation in comparison with the actual value. Therefore, an extra set of actuators should be in place to ensure that it is still possible to control the system if some inputs fail, i.e., the system
x(k+1)=Ax(k)+B_{\mathcal{M}\backslash\mathcal{A}}u(k),  (4) 
is controllable, where B_{\mathcal{M}\backslash\mathcal{A}} consists of the subset of columns with indices in \mathcal{M}\backslash\mathcal{A}, the set \mathcal{M}=\{1,\ldots,p\} is the set of inputs’ labeling indices, and \mathcal{A}=\{i\in\mathcal{M}:\ a_{i}(k)\neq 0,\ k\in\mathbb{N}\} is the set of indices of malfunctioning actuators. Therefore, it is desirable that (A,B_{\mathcal{M}\backslash\mathcal{A}}) be controllable, and, subsequently, the rMCP can be posed as follows.
\mathcal{P}_{2}: Given a dynamics’ matrix A\in\mathbb{R}^{n\times n} and the number of possible input failures s, determine the matrix B^{*}\in\mathbb{R}^{n\times(s+1)n} such that
\displaystyle B^{*}=\arg\min\limits_{B\in\mathbb{R}^{n\times(s+1)n}}  \displaystyle\quad\|B\|_{0}  (5)  
s.t.  \displaystyle(A,B_{\mathcal{M}\backslash\mathcal{A}})\text{ is controllable},  
\displaystyle\ |\mathcal{A}|\leq s,\ \mathcal{A}\subset\mathcal{M},
where \mathcal{M}\subset\{1,\dots,(s+1)n\} is the set of indices of the nonzero columns of the matrix B. Notice that, similarly to \mathcal{P}_{1}, the dimensions of B are n\times(s+1)n to ensure that a solution always exists; in particular, in the worst-case scenario the matrix B that concatenates s+1 copies of the identity matrix is a feasible solution. In practice, only the nonzero columns of B matter, which we refer to as effective inputs.
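The feasibility constraint of \mathcal{P}_{2} can be checked by brute force for small instances: enumerate every failure set \mathcal{A} with |\mathcal{A}|\leq s among the effective inputs and test controllability of (A,B_{\mathcal{M}\backslash\mathcal{A}}). The sketch below is our own illustration of this check, not an algorithm from the paper.

```python
import itertools
import numpy as np

def is_controllable(A, B, tol=1e-9):
    """Kalman rank check of the pair (A, B)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks), tol=tol) == n

def is_s_robust(A, B, s):
    """Check the rMCP constraint: (A, B with any <= s effective
    columns removed) remains controllable."""
    effective = [j for j in range(B.shape[1]) if np.any(B[:, j])]
    for k in range(s + 1):
        for failed in itertools.combinations(effective, k):
            keep = [j for j in effective if j not in failed]
            if not is_controllable(A, B[:, keep]):
                return False
    return True

A = np.diag([1.0, 2.0, 3.0])
B = np.column_stack([np.ones(3), np.ones(3)])  # two redundant inputs
assert is_s_robust(A, B, s=1)                  # survives any single failure
assert not is_s_robust(A, np.ones((3, 1)), s=1)
```

The exponential number of failure subsets in this check is one intuition for why the rMCP is hard; Section IV instead works with the structure of the left-eigenvectors.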
In this paper, both MCPs proposed above will be addressed under the following two assumptions.
Assumption 1: The dynamics’ matrix is simple, i.e., all the eigenvalues of A are distinct. \circ
We notice that Assumption 1 is not very restrictive since there are several applications where A satisfies this assumption; for example, dynamical systems modeled as random networks of the Erdős-Rényi type [25], as well as several known dynamical systems used as benchmarks in control systems engineering [26].
Assumption 2: A left-eigenbasis of A is available, i.e., an eigenbasis consisting of left-eigenvectors of A. \circ
The second assumption is needed for technical reasons, since an eigenbasis is determined using numerical methods. Therefore, in practice, it may be composed of eigenvectors that are only approximate up to a given floating-point error – see Section IV-E for further discussion.
III Preliminaries and Terminology
In this section, we review some basic concepts in computational complexity theory, control systems, and structural systems theory, to keep the paper selfcontained.
In what follows, we use some concepts of computational complexity theory [33], which addresses the classification of (computational) problems into complexity classes. Formally, this classification is for decision problems, i.e., problems with a ‘yes’ or ‘no’ answer. Further, for a decision problem, if there exists an algorithm that obtains the correct answer in a number of steps that is bounded by a polynomial in the size of the input data of the problem, then the algorithm is referred to as an efficient or polynomial solution to the decision problem, and the decision problem is said to be polynomially solvable, or to belong to the class of polynomially solvable problems (P). A decision problem is said to be in NP (i.e., the class of nondeterministic polynomially solvable problems) if, given any possible solution instance, it can be verified using a polynomial procedure whether the instance constitutes a solution to the problem or not. It is easy to see that any problem that is polynomially solvable (in P) is also in NP, although there are some problems in NP for which it is unclear whether polynomial solutions exist. The hardest among these – those to which every problem in NP can be polynomially reduced – are referred to as NP-complete. Consequently, the class of NP-complete problems contains those that are the hardest among the NP problems, i.e., those that are verifiable using polynomial algorithms, but for which no polynomial solution algorithms are known to exist. Whereas the above classification is intended for decision problems, it can be immediately extended to optimization problems, by noticing that every optimization problem can be posed as a decision problem. More precisely, given a minimization problem, we can pose the following decision problem: is there a solution to the minimization problem whose cost is less than or equal to a prescribed value? On the other hand, if the solution to the optimization problem is obtained, then any decision version can be easily addressed.
Consequently, if a (decision) problem is NP-complete, then the associated optimization problem is referred to as being NP-hard. We refer the reader to [34] for an introduction to the topic. In what follows, we will consider the following NP-hard problem.
Definition 1 ([35]).
(Minimum Set Multicovering Problem) Given a set of m elements \mathcal{U}=\left\{1,2,\ldots,m\right\}, referred to as the universe, a collection of n sets \mathcal{S}=\left\{\mathcal{S}_{1},\ldots,\mathcal{S}_{n}\right\}, with \mathcal{S}_{j}\subset\mathcal{U} for j\in\{1,\ldots,n\} and \displaystyle\bigcup_{j=1}^{n}\mathcal{S}_{j}=\mathcal{U}, and a demand function d:\mathcal{U}\rightarrow\mathbb{N} that indicates the number of times an element i needs to be covered. In other words, d(i) is the minimum number of sets in \mathcal{S} that need to be considered such that i is a member of all of these sets. The minimum set multicovering problem consists of finding a set of indices \mathcal{J}^{*}\subseteq\left\{1,2,\ldots,n\right\} corresponding to the minimum number of sets covering \mathcal{U}, where every element i\in\mathcal{U} is covered at least d(i) times, i.e.,
\begin{array}[]{rl}\mathcal{J}^{*}=\underset{\mathcal{J}\subseteq\left\{1,2,\ldots,n\right\}}{\arg\min}&\quad|\mathcal{J}|\\ \text{s.t. }&|\{j\in\mathcal{J}:i\in\mathcal{S}_{j}\}|\geq d(i)\text{ for all }i\in\mathcal{U}.\end{array}
In particular, we note that if d(i)=1 for all i\in\mathcal{U}, then we obtain the well-known minimum set covering problem. \diamond
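To make Definition 1 concrete, a standard greedy heuristic for set multicover can be sketched as follows. This generic routine is our own illustration (the paper’s approximation algorithm, Algorithm 3, is not reproduced in this excerpt): repeatedly pick the unused set that covers the largest amount of still-unmet demand.

```python
def greedy_multicover(universe, sets, demand):
    """Greedy heuristic for minimum set multicover.

    universe: iterable of elements; sets: list of subsets of the universe;
    demand: dict mapping element -> number of times it must be covered.
    Each set may be chosen at most once.  Returns the chosen set indices."""
    remaining = {i: demand[i] for i in universe}   # copies still needed
    unused = set(range(len(sets)))
    chosen = []
    while any(r > 0 for r in remaining.values()):
        # set with the largest residual coverage gain
        best = max(unused,
                   key=lambda j: sum(1 for i in sets[j] if remaining[i] > 0))
        if sum(1 for i in sets[best] if remaining[i] > 0) == 0:
            raise ValueError("demand cannot be met by the available sets")
        unused.remove(best)
        chosen.append(best)
        for i in sets[best]:
            if remaining[i] > 0:
                remaining[i] -= 1
    return chosen

U = {1, 2, 3}
S = [{1, 2}, {2, 3}, {1, 3}, {1, 2, 3}]
picked = greedy_multicover(U, S, {1: 2, 2: 1, 3: 2})
for i in U:  # every element i is covered at least d(i) times
    assert sum(1 for j in picked if i in S[j]) >= {1: 2, 2: 1, 3: 2}[i]
```

Greedy selection of this kind is the classical way to obtain logarithmic-factor approximation guarantees for (multi)covering problems, which is the kind of guarantee invoked in Section IV-D.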
The minimum set multicovering problem plays a double role in this paper: (i) we reduce both MCPs to a minimum set multicovering problem; and (ii) by polynomially reducing it to the rMCP, we show the latter to be NP-hard. A (computational) problem is said to be reducible in polynomial time to another if there exists a procedure to transform the former into the latter using a number of operations that is polynomial in the size of its inputs. Such a reduction is useful in determining the qualitative complexity class [34] to which a particular problem belongs. For instance, we will need the following result.
Proposition 1 ([34]).
Let \mathcal{P}_{A} be an NP-hard problem. If there is a polynomial reduction from \mathcal{P}_{A} to \mathcal{P}_{B}, from which a solution to \mathcal{P}_{A} can be determined, then \mathcal{P}_{B} is an NP-hard problem. \diamond
Similarly, the minimum set covering problem is used in the present paper to show the NP-completeness of the MCP, by considering the following result.
Lemma 1 ([34]).
Let \mathcal{P}_{A} and \mathcal{P}_{B} be two NP-hard problems, and \mathcal{P}_{A}^{d} and \mathcal{P}_{B}^{d} be their decision versions, respectively. If the problem \mathcal{P}_{A} is polynomially reducible to \mathcal{P}_{B} (or, equivalently, their decision versions) and \mathcal{P}_{B} is polynomially reducible to \mathcal{P}_{A} (or, equivalently, their decision versions), then both \mathcal{P}_{A}^{d} and \mathcal{P}_{B}^{d} are NP-complete. \diamond
Now, given an arbitrary LTI system (1), we will focus on the following controllability tests.
Theorem 1 ([6]).
(PBH test for controllability using eigenvalues) The system described in (1) is controllable if and only if \text{rank}\left(\left[\begin{array}[]{lr}A-\lambda\text{I}_{n}&B\end{array}\right]\right)=n\text{ for all }\lambda\in\mathbb{C}. \diamond
In fact, it suffices to verify the criterion of Theorem 1 for each \lambda\in\sigma(A). Observe that Theorem 1 provides a polynomial method to check the controllability of an LTI system, since for each eigenvalue \lambda of A only the computation of the rank of [A-\lambda I_{n}\ B] is required. Nevertheless, it does not provide any immediate information about which entries of B should be different from zero, and with what particular values, such that the rank condition is ensured. That is, verifying whether a given B is a solution can be achieved in polynomial time, so the controllability problem is in NP. Therefore, a naive usage of the PBH eigenvalue test would lead to a strictly combinatorial procedure for solving the MCP. Instead, we can consider the PBH test for controllability using eigenvectors.
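The rank check described above translates directly into code. The NumPy sketch below is our own illustration of Theorem 1 (the function name and tolerance are our choices): for each eigenvalue of A, test the rank of the augmented matrix.

```python
import numpy as np

def pbh_eigenvalue_test(A, B, tol=1e-9):
    """PBH eigenvalue test: (A, B) is controllable iff
    rank([A - lam*I, B]) == n for every eigenvalue lam of A."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([A - lam * np.eye(n), B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False  # lam is an uncontrollable mode
    return True

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2 (distinct)
assert pbh_eigenvalue_test(A, np.array([[0.0], [1.0]]))
assert not pbh_eigenvalue_test(np.diag([1.0, 2.0]), np.array([[1.0], [0.0]]))
```

Note that only the n eigenvalues of A need to be checked, in line with the remark above.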
Theorem 2 ([6]).
(PBH test for controllability using eigenvectors) Given (1), the system is not controllable if and only if there exists a left-eigenvector v of A such that v^{\dagger}B=0. \diamond
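The eigenvector form of the test is equally easy to run numerically: compute the left-eigenvectors of A (eigenvectors of A^{\dagger}) and check whether any of them annihilates B. The sketch below is our own illustration, not code from the paper.

```python
import numpy as np

def pbh_eigenvector_test(A, B, tol=1e-9):
    """Eigenvector PBH test: (A, B) fails to be controllable iff some
    left-eigenvector v of A satisfies v^dagger B = 0.  Left-eigenvectors
    of A are obtained as eigenvectors of the conjugate transpose A^H."""
    _, W = np.linalg.eig(A.conj().T)       # columns are left-eigenvectors
    for k in range(W.shape[1]):
        v = W[:, k]
        if np.linalg.norm(v.conj() @ B) < tol:
            return False, v                # v witnesses uncontrollability
    return True, None

A = np.diag([1.0, 2.0, 3.0])               # left-eigenvectors: e1, e2, e3
ok, witness = pbh_eigenvector_test(A, np.array([[1.0], [0.0], [1.0]]))
assert not ok                              # e2 annihilates the input column
ok, _ = pbh_eigenvector_test(A, np.ones((3, 1)))
assert ok
```

For a diagonal A the left-eigenvectors are the canonical basis vectors, so the test reduces to asking whether every row of B is nonzero; this is the structural observation that drives the reduction in Section IV-A.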
To relate our results with the ones from structural systems and further understand the advantages and drawbacks of this approach, we will introduce the structural counterpart of the MCP, the minimal structural controllability problem (MSCP). But first, we need to review the structural counterpart of controllability [29].
Definition 2 ([29]).
(Structural Controllability) Given an LTI system (1) with sparseness given by (\bar{A},\bar{B}), with \bar{A}\in\{0,\star\}^{n\times n} and \bar{B}\in\{0,\star\}^{n\times p}, where the entries correspond to fixed zeros and free real parameters, the pair (\bar{A},\bar{B}) is said to be structurally controllable if there exists a controllable pair (A,B), with the same sparseness as (\bar{A},\bar{B}). \diamond
In fact, a stronger characterization of structural controllability holds as stated in the following proposition.
Proposition 2 ([36]).
For a structurally controllable pair (\bar{A},\bar{B}), the numerical realizations (A,B) with the same sparseness as (\bar{A},\bar{B}) that are noncontrollable are described by a proper variety in \mathbb{R}^{n\times n}\times\mathbb{R}^{n\times p}. In other words, almost all realizations respecting the structural pattern of a structurally controllable pair are controllable. \diamond
By almost all realizations, we mean that at most a set with zero Lebesgue measure will lead to numerical realizations that do not ensure controllability.
Subsequently, the MSCP is posed as follows: given the structural matrix \bar{A}\in\{0,\star\}^{n\times n} associated with the dynamics’ matrix A, find \bar{B} such that
\begin{array}[]{cc}\bar{B}=\underset{\bar{B}^{\prime}\in\{0,\star\}^{n\times n}}{\arg\min}&\quad\|\bar{B}^{\prime}\|_{0}\\ \quad\text{ s.t. }&\quad(\bar{A},\bar{B}^{\prime})\text{ is structurally controllable.}\end{array}  (6)
Now, note that, by Definition 2, a pair (A,B) is controllable only if the corresponding structural pair (\bar{A},\bar{B}) is structurally controllable. Therefore, it is natural to first characterize all the sparsest structures of input vectors that ensure structural controllability, i.e., solutions to (6). In particular, as a consequence of Proposition 2, we have the following result, which links the MCP to its structural counterpart.
Proposition 3 ([28]).
Given A\in\mathbb{R}^{n\times n}, a solution B\in\mathbb{R}^{n\times p} for the MCP and a numerical realization B^{\prime}\in\mathbb{R}^{n\times p} of a solution to the MSCP associated with the structural matrix \bar{A}, we have
\|B\|_{0}\geq\|B^{\prime}\|_{0}.
More generally, for each B that solves the MCP, there exists a solution \bar{B}^{\prime} of the MSCP such that
\bar{B}\geq\bar{B}^{\prime}, 
where \bar{B} and \bar{B}^{\prime} denote the structural matrices associated with B and B^{\prime}, respectively. Conversely, given a structural matrix \bar{A} and a solution \bar{B}^{\prime} to the MSCP, for almost all numerical instances A satisfying the structural pattern of \bar{A}, almost all numerical instances satisfying the structural pattern of \bar{B}^{\prime} are solutions to the MCP associated with A. \diamond
IV Minimum Controllability Problems
In this section, we provide the main results of this paper. In Section IV-A, we show that the MCP can be exactly solved in two steps: (i) a polynomial reduction of the structural optimization problem (2) to a set covering problem (Algorithm 1); and (ii) the determination of a numerical parametrization of an input matrix B with specified input structure \bar{B}, in a deterministic polynomial fashion (Algorithm 2). Further, by sequentially performing the two algorithms, we are ‘decoupling’ the discrete and continuous properties of the MCP without losing optimality (Theorem 4). In other words, we treat separately the identification of the matrix pattern \bar{B} (discrete property) and the computation of a numerical realization encompassing the \bar{B} pattern and ensuring controllability of the system (continuous property). In Section IV-B, we show that the rMCP is NP-hard (Theorem 7), and a similar procedure to that used to solve the MCP is followed. More specifically, we determine the sparsity of an input matrix by polynomially reducing the problem to a minimum set multicovering problem (Theorem 8), which can later be used to characterize the solutions to the rMCP (Theorem 9).
Complementary to the solutions to the MCPs, in Section IV-C, we show that in fact the decision versions of the MCP and the rMCP (under Assumption 1) are NP-complete (Theorem 10). Further, in Section IV-D, because the MCPs are NP-hard, we discuss a possible approach that leverages existing polynomial algorithms used to determine good approximations of the solutions to the minimum set multicovering problem (for instance, Algorithm 3). Subsequently, we argue that the approximate solution warrants some optimality guarantees (Theorem 11). Finally, in Section IV-E, we explore some numerical implications of waiving Assumption 2.
IV-A A Characterization of the MCP Solution
In this section, we present a systematic method to obtain a solution to the MCP. First, we show that given a left-eigenbasis of the dynamics’ matrix A, it is possible to polynomially reduce the MCP to the minimum set covering problem. This reduction assumes that we only have a single effective input to actuate the system, i.e., the input matrix has a single nonzero column. Notice that a feasible solution always exists because A is simple. Subsequently, we say that an input vector is a solution to the MCP if the input matrix consisting of that single effective input is a solution. Further, in Theorem 6, we show that this can be done without loss of generality. The reduction is achieved by exploiting the PBH eigenvector criterion (Theorem 2) for controllability. More precisely, the reduction is obtained in two steps: first, we provide a necessary condition on the structure \bar{b} of the sparsest input vector b (see Lemma 2), which is obtained by formulating a minimum set covering problem (see Algorithm 1) associated with the structure (i.e., location of nonzero entries) of the left-eigenvectors of the dynamics’ matrix A. Secondly, we show that a numerical realization of \bar{b} which solves the MCP may be generated using a polynomial construction (Algorithm 2). Both algorithms (Algorithm 1 and Algorithm 2) have polynomial complexity in the number of state variables (Theorem 4). Further, the sequential use of these algorithms provides a systematic solution to the MCP (see Theorem 5).
The first set of results provides necessary conditions on the structure that an input vector b must satisfy to ensure controllability of (A,b), and a polynomial complexity procedure (Algorithm 1) that reduces the problem of obtaining such necessary structural patterns to a minimum set covering problem.
Lemma 2.
Given a collection of nonzero vectors \displaystyle\{\bar{v}^{j}\}_{j\in\mathcal{J}} with \bar{v}^{j}\in\{0,\star\}^{n}, the procedure of finding \,\bar{b}^{*}\in\{0,\star\}^{n} such that
\begin{array}[]{rc}\bar{b}^{*}=\underset{\bar{b}\in\{0,\star\}^{n}}{\arg\min}&\|\bar{b}\|_{0}\\ \text{s.t. }&\bar{v}^{j}\cdot\bar{b}\neq 0,\text{ for all }j\in\mathcal{J}\end{array}  (7)
is polynomially (in |\mathcal{J}| and n) reducible to a minimum set covering problem with universe \mathcal{U} and a collection \mathcal{S} of sets by applying Algorithm 1. \diamond
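Algorithm 1 itself is not reproduced in this excerpt, so the following is an assumed reconstruction of the reduction suggested by constraint (7): making entry i of \bar{b} nonzero ‘covers’ every index j for which \bar{v}^{j}_{i}=\star, so the universe is the index set \mathcal{J} and the candidate sets are \mathcal{S}_{i}=\{j:\bar{v}^{j}_{i}=\star\}.

```python
def covering_instance(patterns):
    """Assumed sketch of the Lemma 2 reduction.  `patterns` lists the
    structural patterns of the left-eigenvectors (1 = star, 0 = zero).
    Universe = eigenvector indices; set S_i collects the eigenvectors
    whose i-th entry is nonzero (i.e., covered by choosing b_i != 0)."""
    m = len(patterns)          # |J| structural patterns
    n = len(patterns[0])       # state dimension
    universe = set(range(m))
    sets = [{j for j in range(m) if patterns[j][i] != 0} for i in range(n)]
    return universe, sets

# Structural patterns of three left-eigenvectors of a 3-state system:
patterns = [(1, 0, 0), (0, 1, 0), (0, 1, 1)]
U, S = covering_instance(patterns)
assert S[0] == {0} and S[1] == {1, 2} and S[2] == {2}
# A minimum cover is {S_0, S_1}: entries 1 and 2 of b must be nonzero.
```

Any minimum cover of this instance then prescribes the support of \bar{b}^{*} in (7); the multicovering variant with demand d(i)=s+1 plays the analogous role for the rMCP.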
Next, we show that given the structure obtained in Lemma 2, almost all possible real numerical realizations lead to a vector b\in\mathbb{R}^{n} that is a solution to the MCP.
Theorem 3.
Let \{v^{i}\}_{i\in\mathcal{J}} be the set of lefteigenvectors of A, and \bar{b} a solution to (7). Then, almost all numerical realizations b of \bar{b} are solutions to the MCP. \diamond
Observe that Theorem 3 differs from the converse result in Proposition 3 in a subtle, yet important, manner which we describe in the following remark.
Remark 1.
The converse result in Proposition 3 about the generic properties that characterize structural controllability shows that almost all parameterizations of both the dynamics and input matrices satisfying a given structural pattern yield a controllable pair. By contrast, in Theorem 3 the simple dynamics’ matrix A is fixed, i.e., it is a numerical instance with a specified structure, and the density arguments apply only to the numerical realizations of an input vector with a given structure that ensure controllability of the system. \diamond
Although Theorem 3 ensures that almost all parameterizations provide a feasible solution to the MCP, we need to determine one parameterization that guarantees controllability. Toward this goal, in Algorithm 2, we present an efficient algorithm to obtain such a parameterization. The correctness and computational complexity of the algorithm are established in the next result.
Theorem 4.
Algorithm 2 is correct and has complexity \mathcal{O}(\mathcal{J}), where \mathcal{J} is the size of the collection of vectors given as input to the algorithm. \diamond
Input: \{v^{j}\}_{j\in\mathcal{J}}, a collection of \mathcal{J} complex vectors, and \bar{B}\in\{0,\star\}^{n\times m}.
Output: B^{*}\in\mathbb{R}^{n\times m} solution to (8).
\displaystyle B^{*}=\arg\min_{B\in\mathbb{R}^{n\times m}}  \displaystyle\qquad\qquad 0  
s.t.  \displaystyle\qquad\|{(v^{j})}^{\dagger}B\|>0,\quad j\in\mathcal{J}  
\displaystyle\qquad B_{l,k}=0\text{ if }\bar{B}_{l,k}=0,\quad l=1,\ldots,n,\ k=1,\ldots,m 
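Theorem 3 suggests a simple randomized realization strategy: fill the prescribed support of B with random values and verify the PBH-type conditions. The sketch below (our illustration, not the feasibility program of Algorithm 2 itself; all names are ours, and a real lefteigenbasis is assumed for simplicity) almost surely terminates on the first draw.

```python
import numpy as np

def realize(structure_cols, V, seed=0, tol=1e-9):
    """structure_cols: list of supports (one set of row indices per column
    of the structure B-bar). V: n x J array whose columns are the
    left-eigenvectors of A. By Theorem 3, a random fill of the prescribed
    support almost surely satisfies the PBH-type constraints."""
    n, J = V.shape
    rng = np.random.default_rng(seed)
    while True:
        B = np.zeros((n, len(structure_cols)))
        for c, supp in enumerate(structure_cols):
            for i in supp:
                B[i, c] = rng.standard_normal()
        # every left-eigenvector must "see" the input: ||(v^j)^H B|| > 0
        if all(np.linalg.norm(V[:, j].conj() @ B) > tol for j in range(J)):
            return B
```

For instance, with a dedicated structure (one actuated state per column) and V the lefteigenbasis, the returned B has the required zero pattern and renders every constraint active.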
Whereas Algorithm 2 provides an efficient formulation that enables us to retrieve a possible parametrization ensuring controllability, one can easily extend this framework to more general scenarios aiming to capture additional control metrics of interest, for instance, the controllability energy. These extensions are described in further detail in the following remark.
Remark 2.
Suppose the objective function in Algorithm 2 is given by f(B). Then, this can be chosen to satisfy additional design constraints. For instance, f(B)=c^{\intercal}B\mathbf{1}, where c could capture an actuation cost, i.e., entry c_{i} captures how desirable it is to actuate x_{i}, and \mathbf{1} is a vector of ones with appropriate dimensions. Subsequently, one may add constraints bounding the total available actuation budget r, for instance, f(B)\leq r, and B_{i,j}\geq 0 to avoid negative entries that would undermine the objective. Alternatively, f(B) can also be nonlinear while capturing controltheoretic properties; in particular, it can be a function of the controllability Gramian [37], with appropriate constraints to ensure that the problem is well defined. \diamond
Next, we show that the sparsest vector pattern given by Lemma 2, together with Algorithm 2, leads to a numerical realization that is a solution to the MCP.
Lemma 3.
Given \displaystyle\{v^{i}\}_{i\in\mathcal{J}} with v^{i}\in\mathbb{C}^{n}, the procedure of finding b^{*}\in\mathbb{R}^{n} such that
\begin{array}[]{rc}b^{*}=\underset{b\in\mathbb{R}^{n}}{\arg\min}&\|b\|_{0}\\ \text{s.t. }&v^{i}\cdot b\neq 0,\text{ for all }i\in\mathcal{J},\end{array}  (8) 
is polynomially (in \mathcal{J} and n) reducible to a minimum set covering problem (provided by Algorithm 1), with numerical entries determined using Algorithm 2. \diamond
Now, we state one of the main results of the paper.
Theorem 5.
Finally, we characterize the solutions to the MCP beyond those described by a single effective input.
Theorem 6.
Let b\in\mathbb{R}^{n} be a solution to the MCP as described in Theorem 5, \bar{b} its sparsity and \mathcal{N}\subset\{1,\ldots,n\} the indices where \bar{b} is nonzero, i.e., \mathcal{N}=\{i:\bar{b}_{i}=\star,\text{ and }i=1,\ldots,n\}. If \bar{B}\in\{0,\star\}^{n\times n} has exactly one nonzero entry in the ith row, where i\in\mathcal{N}, then the output B\in\mathbb{R}^{n\times n} of Algorithm 2, when \bar{B} and the lefteigenbasis of A are considered, is a solution to the MCP. \diamond
In particular, from Theorem 6, we obtain the following result regarding the scenario where every effective input actuates a single state variable, which we refer to as dedicated inputs.
Corollary 1.
Let b\in\mathbb{R}^{n} be a solution to the MCP as described in Theorem 5, \bar{b} its sparsity and \mathcal{N}\subset\{1,\ldots,n\} the indices where \bar{b} is nonzero, i.e., \mathcal{N}=\{i:\bar{b}_{i}=\star,\text{ and }i=1,\ldots,n\}. If \bar{B}\in\{0,\star\}^{n\times n} has exactly one nonzero entry in the ith row and each column, where i\in\mathcal{N}, then the output B\in\mathbb{R}^{n\times n} of Algorithm 2, when \bar{B} and the lefteigenbasis of A are considered, is a dedicated solution to the MCP, i.e., every effective input actuates a single state variable. \diamond
IVB On the Exact Solution of the Robust Minimal Controllability Problem
Now, we study the rMCP, first showing that it is an NPhard problem (Theorem 7). Then, following the approach of the previous subsection, we show that a particular subclass of input matrices solves this problem. More specifically, we characterize the dedicated solutions to the rMCP (Theorem 8), and, subsequently, we provide a characterization of the general solutions to the rMCP in Theorem 9.
Theorem 7.
The rMCP is NPhard. \diamond
Now, similarly to the reduction from the MCP to the set covering problem, we can characterize the dedicated solutions to the rMCP by considering a set multicovering problem, as stated in the next result.
Theorem 8.
Let v^{1},\ldots,v^{n} be a lefteigenbasis of A, and s the number of possible input failures. Further, consider the set multicovering problem (\{\mathcal{S}_{1},\ldots,\mathcal{S}_{(s+1)n}\}, \mathcal{U}\equiv\{1,\ldots,n\};d), where the demand is d(i)=s+1 for i\in\mathcal{U}, and \mathcal{S}_{k}=\{j:[v^{j}]_{l}\neq 0,\text{ and }l-1=k\mod n\} for k\in\mathcal{K}\equiv\{1,\ldots,(s+1)n\}. Then, the following statements are equivalent:

\mathcal{M}^{*} is a solution to the set multicovering problem (\{\mathcal{S}_{1},\ldots,\mathcal{S}_{(s+1)n}\},\mathcal{U}\equiv\{1,\ldots,n\};d);

B_{n}(\mathcal{M}^{*}) is a dedicated solution to rMCP, where [B_{n}(\mathcal{M}^{*})]_{i,l}=1 for l=i\mod n and i\in\mathcal{M}^{*}\subset\mathcal{K}, and zero otherwise. \diamond
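The multicovering instance of Theorem 8 can be assembled programmatically; the sketch below (illustrative names of our own, 0-indexed) replicates each position-set s+1 times, so that a set may be selected up to s+1 times, one actuator per copy, while each eigenvector must be covered s+1 times.

```python
def multicover_instance(structures, n, s):
    """structures: supports of the n left-eigenvector structures
    (0-indexed sets of positions). Returns the (s+1)*n covering sets and
    the demand function of Theorem 8: the n position-sets replicated
    s+1 times, with every eigenvector requiring coverage s+1 times."""
    base = [{j for j, supp in enumerate(structures) if k in supp}
            for k in range(n)]
    sets = base * (s + 1)                      # S_1, ..., S_{(s+1)n}
    demand = {i: s + 1 for i in range(len(structures))}
    return sets, demand
```

With s = 0 this degenerates to the plain set covering instance of Lemma 2, consistent with the observation in Section IV-D that the MCP corresponds to no input failing.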
Remark 3.
A matrix B_{n}(\mathcal{M}^{\prime}) described by the concatenation of (s+1) solutions to the MCP is feasible for the rMCP, but it is not necessarily an optimal solution to the rMCP. In Section VC, we provide an example where the concatenation of solutions is not a solution to the rMCP. \diamond
Theorem 8 characterizes the dedicated solutions to the rMCP. In particular, we notice that a solution may require several nonzero entries in the same row of a dedicated solution. In other words, the same state variable may need to be actuated by different actuators to ensure robustness against s input failures.
Next, we characterize the solutions of the rMCP, i.e., not only the dedicated ones. Toward this goal, we need to introduce the following merging procedure. Let two distinct effective inputs i and j, associated with two nonzero columns of the input matrix, b^{i} and b^{j}, be such that they share no commonly actuated state variable, i.e., there is no k\in\{1,\ldots,n\} with both [b^{i}]_{k}\neq 0 and [b^{j}]_{k}\neq 0. These two inputs are said to be merged into one input b^{i^{\prime}}, where [b^{i^{\prime}}]_{k}=[b^{i}]_{k} when [b^{i}]_{k}\neq 0, [b^{i^{\prime}}]_{k}=[b^{j}]_{k} when [b^{j}]_{k}\neq 0, and [b^{i^{\prime}}]_{k}=0 otherwise, for k\in\{1,\ldots,n\}. Further, it is implicitly assumed that b^{i^{\prime}} takes the place of b^{i} and that b^{j} is set to zero. In other words, the effective input i is associated with b^{i^{\prime}} and the effective input j is discarded.
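The merging procedure can be sketched as follows (an illustrative helper of our own, with input columns represented as plain Python lists):

```python
def merge_inputs(bi, bj):
    """Merge two effective inputs with disjoint supports (the precondition
    of the merging procedure): the merged column takes the nonzero entry
    of whichever input actuates each state, and zero elsewhere."""
    assert all(x == 0 or y == 0 for x, y in zip(bi, bj)), "supports overlap"
    return [x if x != 0 else y for x, y in zip(bi, bj)]

# e.g. merging dedicated inputs on states 2 and 4 (0-indexed 1 and 3):
merged = merge_inputs([0, 1, 0, 0, 0], [0, 0, 0, 1, 0])
assert merged == [0, 1, 0, 1, 0]
```

The assertion on overlapping supports enforces the precondition stated above; inputs that actuate a common state variable are not eligible for merging.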
Theorem 9.
Let B_{n}(\mathcal{M}^{*})\in\mathbb{R}^{n\times(s+1)n} be a dedicated solution to the rMCP as described in Theorem 8. In addition, let \bar{B}\in\{0,\star\}^{n\times(s+1)n} be the sparsity of the matrix resulting from the merging procedure between any of the effective inputs in B_{n}(\mathcal{M}^{*}). Then, the matrix B\in\mathbb{R}^{n\times n} obtained using Algorithm 2, with \bar{B} and the lefteigenbasis of A, is a solution to the rMCP. \diamond
IVC Computational Complexity
In the previous subsections, we mentioned that both MCPs are NPhard. NPhardness only asserts that a problem is at least as difficult as every problem in NP. In this subsection, we show that both MCPs are NPcomplete, i.e., their decision versions are NPcomplete. This yields an interesting remark relating the NPcompleteness class to known results in control systems. It also sets the grounds for the next subsection, where polynomial approximation algorithms (that obtain a suboptimal solution to the set multicovering problem) are leveraged to obtain approximate solutions to the MCP and rMCP.
Theorem 10.
The MCP and rMCP are NPcomplete. \diamond
Additionally, Theorem 10 leads to the following interesting observation.
Remark 4.
By Proposition 3 (the converse part), it follows that a solution of the MCP almost always coincides with a numerical realization of a solution to an associated minimal structural controllability problem. Combining this with the fact that the MCP is NPcomplete when the eigenvalues of A are simple (see Theorem 10), it follows that the set of simple dynamics’ matrices that lead to NPcomplete problems has zero Lebesgue measure. \diamond
As stated in Theorem 10, the restriction to matrices A with simple eigenvalues is, in a sense, necessary for the proposed reduction of the MCP to the minimum set covering problem to be polynomial in n. This fact is explored in the next remark.
Remark 5.
The proposed reduction from the MCP to the minimum set covering problem is polynomial in \max(\mathcal{J},n), where \mathcal{J} denotes the number of lefteigenvectors. Nevertheless, because the number of lefteigenvectors can grow exponentially, it follows that the proposed reduction cannot be used to show that the decision version of the (general) MCP is NPcomplete. However, this does not imply that the decision version of the MCP for arbitrary dynamics’ matrices (i.e., when A is not restricted to have simple eigenvalues) is not NPcomplete, which remains an open question. \diamond
Finally, we notice that the fact that a problem is NPhard does not mean that no instance can be solved in polynomial time; several subclasses can, in fact, be solved exactly [38, 39]. Furthermore, the NPcompleteness stated in Theorem 10 allows us to consider the subclasses of the set multicovering problem that are known to be polynomially solvable, in order to identify polynomially solvable subclasses of the MCPs. This enables a new characterization of solutions to the question posed in [21] regarding the existence of polynomial algorithms to determine controllable graph structures. In particular, we notice that in several of these cases the graphs are associated with dynamics’ matrices that are simple, the case explored in the present paper. Alternatively, by the proposed construction, if the set multicovering problem obtained possesses additional structure, then polynomial algorithms can be leveraged to obtain closetooptimal approximate solutions, as we discuss in the next subsection.
IVD Polynomial Approximations to the Solution of the Minimal Controllability Problems
As a consequence of Theorem 10, it follows that we can obtain polynomial approximations for both the MCP and the rMCP. Notice that, in particular, a solution to the MCP can be obtained by considering that no input fails. Therefore, in Algorithm 3, we propose an algorithm that leverages the submodularity properties [40] of the set multicovering problem to obtain a dedicated solution to the rMCP. Submodularity ensures that the associated polynomial greedy algorithms have suboptimality guarantees while performing well in practice [40]; see also Remark 6. Subsequently, following a reasoning similar to that of Theorem 8, we obtain the following result.
Theorem 11.
The matrix B_{n}(\mathcal{M}^{\prime}) obtained using Algorithm 3, with \bar{B} and the lefteigenbasis of A, is a feasible solution to the rMCP. Further, the computational complexity of Algorithm 3 is \mathcal{O}(sn), and it ensures an approximation optimality bound of \mathcal{O}(\log{n}). \diamond
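A greedy selection in the spirit of Algorithm 3 can be sketched as follows (our illustration, not the paper's exact pseudocode): repeatedly pick the set that covers the most still-unmet demand, which is the standard greedy rule achieving the \mathcal{O}(\log n) approximation guarantee for set (multi)covering.

```python
def greedy_multicover(sets, demand):
    """Greedy sketch: repeatedly pick the set covering the most residual
    demand. `sets` may contain repeated copies (as in Theorem 8), each
    usable once; `demand` maps element -> required coverage count."""
    residual = dict(demand)
    available = list(enumerate(sets))
    chosen = []
    while any(r > 0 for r in residual.values()):
        gain = lambda kv: sum(1 for e in kv[1] if residual.get(e, 0) > 0)
        k, best = max(available, key=gain)
        if gain((k, best)) == 0:
            raise ValueError("demand cannot be met by the remaining sets")
        chosen.append(k)
        available = [(i, s) for (i, s) in available if i != k]
        for e in best:
            if residual.get(e, 0) > 0:
                residual[e] -= 1
    return chosen
```

On the 3x3 instance of Section V-C with demand 2 (sets replicated twice), this greedy rule recovers a size-3 selection, matching the optimum reported there.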
Remark 6.
Algorithm 3 produces suboptimal solutions that are often optimal solutions to the rMCP, as illustrated in Section VA. This practical performance, together with the linear computational complexity, motivated the choice of this procedure. Nonetheless, the information on the structure of the lefteigenvectors, or equivalently, the structure of the sets in the set multicovering problem, can be leveraged to obtain better approximations; see, for instance, [41, 27]. In particular, the approximation algorithm from [27] outperforms most known approximation algorithms when the number of elements of the largest set is small. The authors obtained an approximation optimality bound of \mathcal{O}(d\log{dc}), where c is the size of an optimal solution and d is the number of elements of the largest set, with computational complexity \mathcal{O}(cn\log{\frac{n}{c}}). Further, [42] extends the latter results by using a linear programming relaxation with comparable computational complexity but an approximation ratio that is smaller by a constant factor. Moreover, the approach in [42] is directly applicable to set multicovering problems, as required to determine a solution to the rMCP. \diamond
Finally, by invoking Theorem 9, we obtain the following result.
Corollary 2.
Let B_{n}(\mathcal{M}^{\prime})\in\mathbb{R}^{n\times(s+1)n} be a dedicated solution to the rMCP as described in Theorem 11. In addition, let \bar{B}\in\{0,\star\}^{n\times(s+1)n} be the sparsity of the matrix resulting from the merging procedure between any of the effective inputs in B_{n}(\mathcal{M}^{\prime}). Then, the matrix B\in\mathbb{R}^{n\times n} obtained using Algorithm 2, with \bar{B} and the lefteigenbasis of A, is a feasible solution to the rMCP and is computed in polynomial time. \diamond
IVE Numerical and Computational Remarks
Now, for the sake of completeness, we discuss the implications of waiving Assumption 2 and its impact on the input vector in the MCP. The results extend trivially to the general solutions of the MCPs. Toward this goal, we need the following result from [43].
Theorem 12 ([43]).
Let A\in\mathbb{C}^{n\times n} be a matrix with simple eigenvalues. The deterministic arithmetic complexity of finding the eigenvalues and the eigenvectors of A is bounded by \mathcal{O}\left(n^{3}\right)+t\left(n,m\right) operations, where t(n,m)=\mathcal{O}\left(\left(n\log^{2}n\right)\left(\log m+\log^{2}n\right)\right), for a required upper bound of 2^{-m}\|A\| on the absolute output error of the approximation of the eigenvalues and eigenvectors of A and for any fixed matrix norm \|\cdot\|. \diamond
More precisely, Theorem 12 states that, in practice, only a numerical approximation of the lefteigenbasis can be computed in polynomial time. In this case, let \epsilon=2^{-m}\|A\| be as in Theorem 12; then the results stated in Lemma 2 and Lemma 3 (see also Algorithm 1 and Algorithm 2) can only be applied to an \epsilonapproximation of the lefteigenbasis of the dynamics’ matrix. Therefore, the \epsilonapproximation of the lefteigenbasis may lead to the following issues:
(i) an entry in the lefteigenvectors is considered to be zero when, in fact, it is a nonzero value whose magnitude is smaller than \epsilon. Consequently, the sets generated using Algorithm 1 (see also Lemma 2) do not contain the indices associated with those nonzero entries. Thus, additional sets need to be included in the minimum set covering problem, which implies that the structure of the input vector may contain more nonzero entries than the sparsest input vector that solves the MCP. In other words, we obtain an overapproximation of the sparsest input vector that is a solution to the MCP.
(ii) an entry in the lefteigenvectors that is in fact zero appears as nonzero in the \epsilonapproximation of the lefteigenbasis. This does not pose an issue when computing the structure of the input vector as described in Lemma 2 (see also Algorithm 1), but it can pose a problem when determining the numerical realization via Algorithm 2. Nonetheless, by Theorem 3, such an issue is unlikely to occur.
To better understand which entries fall under the first issue above, several methods to compute eigenvectors can be used and their solutions subsequently compared; see [44] for a survey of the different methods and the computational issues associated with them.
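As an illustration of the thresholding discussed above, one may extract eigenvector supports from a numerically computed lefteigenbasis as follows (a numpy-based sketch of our own; the choice of eps directly trades off issues (i) and (ii)):

```python
import numpy as np

def eigen_structure(A, eps):
    """Approximate the supports of the left-eigenvectors of A, treating
    entries of magnitude below eps as zero. Too large an eps drops true
    nonzeros (issue (i), over-approximating the support of b); too small
    an eps keeps spurious nonzeros (issue (ii))."""
    # right eigenvectors of A^T are left-eigenvectors of A (A real)
    _, W = np.linalg.eig(A.T)
    n = A.shape[0]
    return [{i for i in range(n) if abs(W[i, j]) > eps} for j in range(n)]
```

Comparing the supports returned for several eps values (or several eigensolvers, as suggested above) flags the entries whose zero/nonzero status is numerically ambiguous.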
V Illustrative examples
In this section, we provide some examples that illustrate the main results of the paper.
VA Minimal Controllability Problem
To illustrate the first main result of this paper, i.e., to determine a solution to \mathcal{P}_{1}, consider the dynamics’ matrix A given by
A=\left[\begin{array}[]{ccccc}6&3&3&2&1\\ 0&8&0&0&0\\ 4&3&7&2&1\\ 0&0&0&6&0\\ 4&3&3&2&3\\ \end{array}\right],  (9) 
where \sigma(A)=\{2,4,6,8,10\} consists of distinct eigenvalues, so the matrix A is simple and the results in Section IVA are applicable. Consequently, to obtain the solution to the MCP, we first compute the lefteigenvectors of A, which are as follows: v^{1}=[\begin{array}[]{ccccc}1&1&0&0&1\end{array}]^{\intercal}, v^{2}=[\begin{array}[]{ccccc}0&0&1&0&1\end{array}]^{\intercal}, v^{3}=[\begin{array}[]{ccccc}0&0&0&1&0\end{array}]^{\intercal}, v^{4}=[\begin{array}[]{ccccc}0&1&0&0&0\end{array}]^{\intercal} and v^{5}=[\begin{array}[]{ccccc}1&0&1&1&0\end{array}]^{\intercal}. Therefore, their structures are given by \bar{v}^{1}=[\begin{array}[]{ccccc}\star&\star&0&0&\star\end{array}]^{\intercal}, \bar{v}^{2}=[\begin{array}[]{ccccc}0&0&\star&0&\star\end{array}]^{\intercal}, \bar{v}^{3}=[\begin{array}[]{ccccc}0&0&0&\star&0\end{array}]^{\intercal}, \bar{v}^{4}=[\begin{array}[]{ccccc}0&\star&0&0&0\end{array}]^{\intercal} and \bar{v}^{5}=[\begin{array}[]{ccccc}\star&0&\star&\star&0\end{array}]^{\intercal}. Using Algorithm 1 with the structures \bar{v}^{i}, i=1,\ldots,5, we obtain \{\mathcal{S}_{j}\}_{j=1,\ldots,5}, where the jth set contains the indices of the lefteigenvectors that have a nonzero entry in the jth position. In particular, we obtain \mathcal{S}_{1}=\left\{1,5\right\}, \mathcal{S}_{2}=\left\{1,4\right\}, \mathcal{S}_{3}=\left\{2,5\right\}, \mathcal{S}_{4}=\left\{3,5\right\}, \mathcal{S}_{5}=\left\{1,2\right\}, and the universe set is given by \mathcal{U}=\displaystyle\bigcup_{i=1}^{n}\mathcal{S}_{i}=\left\{1,2,3,4,5\right\}. Now, it is easy to see that a solution to this minimum set covering problem is the set of indices \mathcal{I}^{*}=\left\{2,3,4\right\}, since \mathcal{U}=\mathcal{S}_{2}\cup\mathcal{S}_{3}\cup\mathcal{S}_{4} and there is no pair of sets, i.e., \mathcal{I}^{\prime}=\{i,i^{\prime}\} with i,i^{\prime}\in\{1,\ldots,5\}, such that \mathcal{U}=\mathcal{S}_{i}\cup\mathcal{S}_{i^{\prime}}. 
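The minimality claim in this example is small enough to check by brute force; the following snippet (1-indexed, matching the text) confirms that no pair of the sets covers the universe while the triple \mathcal{S}_{2},\mathcal{S}_{3},\mathcal{S}_{4} does:

```python
from itertools import combinations

# Sets and universe as computed in the text (1-indexed eigenvector indices).
S = {1: {1, 5}, 2: {1, 4}, 3: {2, 5}, 4: {3, 5}, 5: {1, 2}}
U = {1, 2, 3, 4, 5}

assert S[2] | S[3] | S[4] == U                                   # a cover
assert not any(S[i] | S[j] == U for i, j in combinations(S, 2))  # no 2-set cover
```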
Therefore, a possible structure of the vector b that is a solution to the MCP is
\bar{b}=[\begin{array}[]{ccccc}0&\star&\star&\star&0\end{array}]^{\intercal}.  (10) 
Additionally, to find the numerical parametrization of b under the sparsity pattern of \bar{b}, we have to solve the following system with three unknowns: b_{2},b_{3},b_{4}\neq 0 and b_{3}+b_{4}\neq 0. By inspection, a possible choice is b=[\begin{array}[]{ccccc}0&1&1&1&0\end{array}]^{\intercal}, but the numerical parametrization can also be obtained by invoking Algorithm 2, with the set of lefteigenvectors of A given by \{v^{j}\}_{j\in\{1,\ldots,5\}} and the structure of b given by \bar{b} in (10). For the sake of completeness, we verify that (A,b) is controllable: the controllability matrix is given by
\begin{array}[]{rcl}\mathcal{C}&=&[\begin{array}[]{ccccc}b&Ab&A^{2}b&A^{3}b&A^{4}b\end{array}]\\ &=&\left[\begin{array}[]{ccccc}0&2&44&608&7184\\ 1&8&64&512&4096\\ 1&12&120&1176&11520\\ 1&6&36&216&1296\\ 0&8&104&1112&11264\\ \end{array}\right],\end{array} 
and rank(\mathcal{C})=5, which implies that (A,b) is controllable.
Observe that the singleinput solution obtained with b=[\begin{array}[]{ccccc}0&1&1&1&0\end{array}]^{\intercal}, can be immediately translated into a solution with two effective inputs, by invoking Theorem 6. In particular, two possible solutions are B=[\begin{array}[]{cc}b^{1}&b^{2}\end{array}] with b^{1}=[\begin{array}[]{ccccc}0&1&1&0&0\end{array}]^{\intercal} and b^{2}=[\begin{array}[]{ccccc}0&0&0&1&0\end{array}]^{\intercal}, and B=[\begin{array}[]{ccc}b^{1}&b^{2}&b^{3}\end{array}] with b^{1}=[\begin{array}[]{ccccc}0&1&0&0&0\end{array}]^{\intercal}, b^{2}=[\begin{array}[]{ccccc}0&0&1&0&0\end{array}]^{\intercal} and b^{3}=[\begin{array}[]{ccccc}0&0&0&1&0\end{array}]^{\intercal}, where the latter is a dedicated solution. Alternatively, if we consider for instance B=[\begin{array}[]{cc}b^{1}&b^{2}\end{array}] with b^{1}=[\begin{array}[]{ccccc}0&1&0&0&0\end{array}]^{\intercal} and b^{2}=[\begin{array}[]{ccccc}0&0&1&1&0\end{array}]^{\intercal}, then v^{\intercal}B=0 for the lefteigenvector v=[\begin{array}[]{ccccc}1&0&1&1&0\end{array}]^{\intercal} which renders the pair (A,B) uncontrollable. Thus, as prescribed in Theorem 6, by invoking Algorithm 2, one can obtain a new realization of B that ensures controllability of (A,B); for instance, the same b^{1} and b^{2}=[\begin{array}[]{ccccc}0&0&\frac{12}{10}&1&0\end{array}]^{\intercal}.
As noted in Section IVD, a systematic polynomial approximation to the MCP can be obtained by considering the rMCP with the number of input failures equal to s=0. In fact, by doing so, one obtains the same sparsity for b, i.e., \bar{b}, as in the aforementioned example, and the subsequent analysis follows. Furthermore, we notice that this approximate solution is a solution to the MCP.
VB Minimal Structural Controllability Problem
The solution to the MSCP considering \bar{A} associated with A in (9) is given by \bar{b}^{\prime}=[0\ \star\ 0\ \star\ 0]^{\intercal}; see [28] for details. Therefore, the structural controllability solution to the MSCP provides a strict lower bound on the number of state variables we should actuate, i.e., on the sparsity of the input vector (in accordance with Proposition 3). More precisely, we achieve structural controllability by actuating two variables (specifically x_{2} and x_{4}), but in order to ensure controllability an additional state variable needs to be actuated, for instance, x_{3}, as obtained in Section VA. Therefore, structural controllability is necessary, but not sufficient, to achieve controllability, even when the matrix A is simple. In particular, considering the converse part of Proposition 3, we note that the numerical values of the matrix A fall into the set of zero Lebesgue measure (see also Proposition 2) where the solution associated with the MSCP does not provide a solution to the MCP. As a consequence, Theorem 3 is different from, and stronger than, Proposition 3 (as observed in Remark 1). More specifically, in Theorem 3 the matrix A has fixed values and only the nonzero entries of B are free, whereas in Proposition 3 the nonzero entries of both A and B are free parameters.
To sharpen the intuition behind these results and observations, we perturbed the matrix A by adding random uniform noise on the interval [-10^{-10},10^{-10}] to each of its nonzero entries, which leads to a new matrix that we denote by A^{\prime} (with the same structure as A). The structures of the lefteigenvectors of the matrix A^{\prime} now become: \bar{v}^{\prime 1}=[\begin{array}[]{ccccc}\star&\star&\star&\star&\star\end{array}]^{\intercal}, \bar{v}^{\prime 2}=[\begin{array}[]{ccccc}\star&\star&\star&\star&\star\end{array}]^{\intercal}, \bar{v}^{\prime 3}=[\begin{array}[]{ccccc}0&0&0&\star&0\end{array}]^{\intercal}, \bar{v}^{\prime 4}=[\begin{array}[]{ccccc}0&\star&0&0&0\end{array}]^{\intercal} and \bar{v}^{\prime 5}=[\begin{array}[]{ccccc}\star&\star&\star&\star&\star\end{array}]^{\intercal}. Subsequently, building the sets for the minimum set covering problem as in Algorithm 1, based on \bar{v}_{i}^{\prime} with i=1,\ldots,5, we obtain \mathcal{S}_{1}^{\prime}=\left\{1,2,5\right\}, \mathcal{S}_{2}^{\prime}=\left\{1,2,4,5\right\}, \mathcal{S}_{3}^{\prime}=\left\{1,2,5\right\}, \mathcal{S}_{4}^{\prime}=\left\{1,2,3,5\right\} and \mathcal{S}_{5}^{\prime}=\left\{1,2,5\right\}, and the universe of the minimum set covering problem is \mathcal{U}=\{1,2,3,4,5\}. Finally, by inspection, we can see that a solution of this minimum set covering problem is the set of indices \mathcal{I}^{\prime*}=\left\{2,4\right\}. Hence, the sparsity of the solution to the MCP coincides with the solution to the MSCP associated with \bar{A}. Lastly, we observe that this example illustrates the conclusions of Proposition 2 and Proposition 3.
VC Robust Minimal Controllability Problem
Now, we illustrate how to find a solution to \mathcal{P}_{2}. Let us apply the developments of Section IVB when we consider the dynamics’ matrix in (9). First, if we consider that at most one input fails, we use Algorithm 1, where a set multicovering problem is considered with the sets from Section VA, universe \mathcal{U}=\{1,\ldots,5\} and demand function d(i)=2 for i=1,\ldots,5, i.e., each element must be covered twice. Subsequently, by inspection, we conclude that the sets \mathcal{S}_{2} and \mathcal{S}_{4} need to be considered twice, since the elements 4 and 3 only appear in these sets, respectively. After this, we need to cover the element 2 twice, and to this end we can choose both \mathcal{S}_{3} and \mathcal{S}_{5}, or one of them twice, so a possible solution to the set multicovering problem is \mathcal{M}^{\ast}=\{2,3,4,2,3,4\}. Therefore, B_{n}(\mathcal{M}^{\ast}) is a solution to the rMCP, and, in particular, the solution is the same as concatenating twice a dedicated solution to the MCP; see Remark 3. Further, Algorithm 3 produces an optimal solution, as often occurs in practice (Remark 6).
In fact, if we apply the developments of Section IVB when s inputs are allowed to fail, i.e., for the demand function d(i)=s+1 for i=1,\ldots,5, we notice that the sets \mathcal{S}_{2} and \mathcal{S}_{4} need to be considered s+1 times, since the elements 4 and 3 only appear in these sets, respectively. Besides, we need to cover the element 2, so we can choose either \mathcal{S}_{3} or \mathcal{S}_{5} a total of s+1 times, which implies that B(\mathcal{M}^{\ast}), with \mathcal{M}^{\ast}=\{2,3,4,\ldots,2,3,4\} where the elements 2,3 and 4 appear s+1 times, is a solution. Similarly, the solution consists of concatenating s+1 times a dedicated solution to the MCP, and the same remarks are applicable, i.e., Remark 3 and Remark 6.
Notwithstanding, the concatenation of s+1 solutions to the MCP is not always a solution to the rMCP when at most s inputs are allowed to fail. Let us consider the dynamics’ matrix and associated lefteigenvectors as follows:
A=\left[\begin{array}[]{ccc}4&\ 2&\ 2\\ 1&\ 3&\ 1\\ 1&\ 1&\ 5\\ \end{array}\quad\right]\text{ and }\quad V=\left[\begin{array}[]{ccc}&&\\ v^{1}&v^{2}&v^{3}\\ &&\\ \end{array}\right]=\left[\begin{array}[]{ccc}1&0&1\\ 1&1&0\\ 0&1&1\\ \end{array}\right].  (11) 
First, we note that \sigma(A)=\{2,4,6\}, so A is simple, and we can apply the results in Section IVB. Secondly, the structures of the lefteigenvectors of A are given by \bar{v}^{1}=[\begin{array}[]{ccc}\star&\star&0\end{array}]^{\intercal}, \bar{v}^{2}=[\begin{array}[]{ccc}0&\star&\star\end{array}]^{\intercal} and \bar{v}^{3}=[\begin{array}[]{ccc}\star&0&\star\end{array}]^{\intercal}. Further, we consider that at most one input failure is likely to occur, i.e., s=1. Then, we can invoke Algorithm 1 to build the sets for the set multicovering problem, which are as follows: \mathcal{S}=\{\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3}\}, with \mathcal{S}_{1}=\{1,2\}, \mathcal{S}_{2}=\{2,3\} and \mathcal{S}_{3}=\{1,3\}, and \mathcal{U}=\bigcup_{i=1}^{3}\mathcal{S}_{i}=\{1,2,3\}. By inspection, we obtain that \mathcal{M}^{\prime}=\{1,2,3\} is an optimal solution, whose indices cover each element of \mathcal{U} twice. Further, observe that a dedicated solution to the MCP always has size two in this case, so the concatenation of two such solutions leads to a solution with one more input than the optimal solution obtained. Observe that this small dimensional example already incurs a solution that is 33\% worse than the optimal. Alternatively, if we apply Algorithm 3 to approximate the solution to the rMCP, we obtain one that is optimal, i.e., B(\mathcal{M}^{\prime}) with \mathcal{M}^{\prime}=\{1,2,3\}, which is consistent with Remark 6.
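The claim that \mathcal{M}^{\prime}=\{1,2,3\} meets the demand d(i)=2 can be checked directly (1-indexed, matching the text):

```python
# Demand-2 multicover check for the 3x3 example of Section V-C.
S = {1: {1, 2}, 2: {2, 3}, 3: {1, 3}}
coverage = {e: sum(e in S[k] for k in (1, 2, 3)) for e in (1, 2, 3)}
assert all(c >= 2 for c in coverage.values())   # every element covered twice
# Concatenating two size-2 dedicated MCP solutions would use 4 sets,
# one more (33% worse) than the 3-set optimum above.
```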
VI Conclusions and Further Research
In this paper, we addressed two minimal controllability problems, with the goal of characterizing the input configurations that actuate a minimal subset of state variables yielding controllability under a specified number of input failures. The problems explored were shown to be NPcomplete, and a polynomial reduction of these to a set multicovering problem was provided. In particular, our strategy separates the discrete and continuous natures of the minimal controllability problems. Subsequently, we discussed greedy algorithms for the minimal controllability problems that yield feasible (but possibly suboptimal) solutions to the rMCP.
Possible, and interesting, directions for future research in this line of work include using the obtained input structures with methods such as coordinate gradient descent to minimize an energy cost, and considering the case where the model is not exactly known.
References
 [1] M. Egerstedt, “Complex networks: Degrees of control,” Nature, vol. 473, no. 7346, pp. 158–159, May 2011.
 [2] D. D. Siljak, Large-Scale Dynamic Systems: Stability and Structure. Dover Publications, 2007.
 [3] S. Skogestad, “Control structure design for complete chemical plants,” Computers and Chemical Engineering, vol. 28, no. 1–2, pp. 219–234, 2004.
 [4] A. Olshevsky, “Minimal controllability problems,” IEEE Transactions on Control of Network Systems, vol. 1, no. 3, pp. 249–258, Sept 2014.
 [5] R. Langner, Robust Control System Networks: How to Achieve Reliable Control After Stuxnet. Momentum Press, 2011.
 [6] J. P. Hespanha, Linear Systems Theory. Princeton, New Jersey: Princeton Press, Sep. 2009.
 [7] Y. Shoukry and P. Tabuada, “Event-triggered projected Luenberger observer for linear systems under sparse sensor attacks,” in 53rd IEEE Conference on Decision and Control. IEEE, 2014, pp. 3548–3553.
 [8] Y. Chen, S. Kar, and J. M. F. Moura, “Dynamic Attack Detection in Cyber-Physical Systems with Side Initial State Information,” arXiv e-prints, Mar. 2015.
 [9] H. Fawzi, P. Tabuada, and S. Diggavi, “Secure estimation and control for cyber-physical systems under adversarial attacks,” arXiv e-prints, May 2012.
 [10] X. Wang and H. Su, “Pinning control of complex networked systems: A decade after and beyond,” Annual Reviews in Control, vol. 38, no. 1, pp. 103–111, 2014.
 [11] W. Ren, R. Beard, and E. Atkins, “A survey of consensus problems in multi-agent coordination,” in Proceedings of the American Control Conference, June 2005, pp. 1859–1864, vol. 3.
 [12] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and Cooperation in Networked Multi-Agent Systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, Jan. 2007.
 [13] M. Nabi-Abdolyousefi and M. Mesbahi, “On the controllability properties of circulant networks,” IEEE Transactions on Automatic Control, vol. 58, no. 12, pp. 3179–3184, Dec 2013.
 [14] H. Tanner, “On the controllability of nearest neighbor interconnections,” in 43rd IEEE Conference on Decision and Control, vol. 3, Dec 2004, pp. 2467–2472.
 [15] A. Rahmani, M. Ji, M. Mesbahi, and M. Egerstedt, “Controllability of multi-agent systems from a graph-theoretic perspective,” SIAM Journal on Control and Optimization, vol. 48, no. 1, pp. 162–186, Feb. 2009.
 [16] M. Egerstedt, S. Martini, M. Cao, K. Camlibel, and A. Bicchi, “Interacting with networks: How does structure relate to controllability in single-leader, consensus networks?” Control Systems Magazine, vol. 32, no. 4, pp. 66–73, 2012.
 [17] G. Parlangeli and G. Notarstefano, “On the reachability and observability of path and cycle graphs,” IEEE Transactions on Automatic Control, vol. 57, no. 3, pp. 743–748, March 2012.
 [18] G. Notarstefano and G. Parlangeli, “Controllability and observability of grid graphs via reduction and symmetries,” IEEE Transactions on Automatic Control, vol. 58, no. 7, pp. 1719–1731, July 2013.
 [19] A. Y. Kibangou and C. Commault, “Observability in connected strongly regular graphs and distance regular graphs,” IEEE Transactions on Control of Network Systems, 2014.
 [20] S. Zhang, M. Camlibel, and M. Cao, “Controllability of diffusively-coupled multi-agent systems with general and distance regular coupling topologies,” in 50th IEEE Conference on Decision and Control and European Control Conference, Dec 2011, pp. 759–764.
 [21] C. Aguilar and B. Gharesifard, “Graph controllability classes for the Laplacian leader-follower dynamics,” IEEE Transactions on Automatic Control, vol. PP, no. 99, pp. 1–1, 2014.
 [22] A. Chapman and M. Mesbahi, “On symmetry and controllability of multi-agent systems,” in 53rd IEEE Conference on Decision and Control, Dec 2014.
 [23] F. Pasqualetti and S. Zampieri, “On the controllability of isotropic and anisotropic networks,” in 53rd IEEE Conference on Decision and Control, Dec 2014.
 [24] S. Foucart and H. Rauhut, A Mathematical Introduction to Compressive Sensing. Birkhäuser Basel, 2013.
 [25] T. Tao and V. Vu, “Random matrices have simple spectrum,” arXiv e-prints, Dec. 2014. [Online]. Available: http://adsabs.harvard.edu/abs/2014arXiv1412.1438T
 [26] K. Ogata, Modern Control Engineering, 4th ed. Upper Saddle River, NJ, USA: Prentice Hall PTR, 2001.
 [27] H. Brönnimann and M. T. Goodrich, “Almost optimal set covers in finite VC-dimension,” Discrete & Computational Geometry, vol. 14, no. 4, pp. 463–479, 1995.
 [28] S. Pequito, S. Kar, and A. P. Aguiar, “A framework for structural input/output and control configuration selection of large-scale systems,” IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 303–318, Feb 2016.
 [29] J.-M. Dion, C. Commault, and J. van der Woude, “Generic properties and control of linear structured systems: a survey,” Automatica, pp. 1125–1144, 2003.
 [30] S. Pequito, S. Kar, and A. P. Aguiar, “Minimum cost input/output design for large-scale linear structural systems,” Automatica, vol. 68, pp. 384–391, 2016.
 [31] ——, “On the complexity of the constrained input selection problem for structural linear systems,” Automatica, vol. 62, pp. 193–199, 2015.
 [32] X. Liu, S. Pequito, S. Kar, B. Sinopoli, and A. P. Aguiar, “Minimum Sensor Placement for Robust Observability of Structured Complex Networks,” arXiv e-prints, Jul. 2015.
 [33] S. A. Cook, “The complexity of theorem-proving procedures,” in Proceedings of the third annual ACM symposium on Theory of computing, ser. STOC ’71. New York, NY, USA: ACM, 1971, pp. 151–158.
 [34] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York, NY, USA: W. H. Freeman & Co., 1979.
 [35] C. Chekuri, K. L. Clarkson, and S. Har-Peled, “On the set multicover problem in geometric settings,” ACM Trans. Algorithms, vol. 9, no. 1, pp. 9:1–9:17, Dec. 2012.
 [36] K. J. Reinschke, Multivariable control: a graph-theoretic approach, ser. Lect. Notes in Control and Information Sciences. Springer-Verlag, 1988, vol. 108.
 [37] F. Pasqualetti, S. Zampieri, and F. Bullo, “Controllability metrics, limitations and algorithms for complex networks,” IEEE Transactions on Control of Network Systems, vol. 1, no. 1, pp. 40–52, March 2014.
 [38] Q.-S. Hua, D. Yu, F. C. M. Lau, and Y. Wang, Proceedings of the Algorithms and Computation: 20th International Symposium, ISAAC 2009, Honolulu, Hawaii, USA. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, ch. Exact Algorithms for Set Multicover and Multiset Multicover Problems, pp. 34–44.
 [39] Q.-S. Hua, Y. Wang, D. Yu, and F. C. Lau, “Dynamic programming based algorithms for set multicover and multiset multicover problems,” Theoretical Computer Science, vol. 411, no. 26–28, pp. 2467–2474, 2010.
 [40] F. Bach, “Learning with Submodular Functions: A Convex Optimization Perspective,” arXiv e-prints, Nov. 2011.
 [41] D. Hochbaum, “Approximation algorithms for the set covering and vertex cover problems,” SIAM Journal on Computing, vol. 11, no. 3, pp. 555–556, 1982.
 [42] G. Even, D. Rawitz, and S. M. Shahar, “Hitting sets when the VC-dimension is small,” Information Processing Letters, vol. 95, no. 2, pp. 358–362, 2005.
 [43] V. Y. Pan and Z. Q. Chen, “The complexity of the matrix eigenproblem,” in Proceedings of the thirty-first annual ACM symposium on Theory of computing, ser. STOC ’99. New York, NY, USA: ACM, 1999, pp. 507–516.
 [44] J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, Z. Bai, Ed. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2000.
 [45] W. M. Wonham, Linear multivariable control: a geometric approach, ser. Applications of Mathematics. New York, Berlin, Tokyo: Springer, 1985.
 [46] N. Megiddo, “Linear programming in linear time when the dimension is fixed,” Journal of the ACM, vol. 31, no. 1, pp. 114–127, 1984.
Proof of Lemma 2
Consider the sets \mathcal{S} and \mathcal{U} obtained in Algorithm 1, and let \mathcal{I}\subset\{1,\ldots,n\} be a set of indices and \bar{b}_{\mathcal{I}} the structural vector whose i-th component is nonzero if and only if i\in\mathcal{I}. The following equivalences hold: the collection of sets \{\mathcal{S}_{i}\}_{i\in\mathcal{I}} in \mathcal{S} covers \mathcal{U} if and only if \forall j\in\mathcal{J},~\exists k\in\mathcal{I}\text{ such that }j\in\mathcal{S}_{k}, which is the same as \forall j\in\mathcal{J},~\exists k\in\mathcal{I}\text{ such that }\bar{v}^{j}_{k}\neq 0\text{ and }\bar{b}_{k}\neq 0; this can be rewritten as \forall j\in\mathcal{J},~\exists k\in\mathcal{I}\text{ such that }\bar{v}^{j}_{k}\bar{b}_{k}\neq 0, and therefore \forall j\in\mathcal{J}\quad\bar{v}^{j}\cdot\bar{b}\neq 0. In summary, \bar{b}_{\mathcal{I}} is a feasible solution to the problem in (7). In addition, it can be seen that, by such reduction, the optimal solution \bar{b}^{\ast} of (7) corresponds to the structural vector \bar{b}_{\mathcal{I}^{\ast}}, where \{\mathcal{S}_{i}\}_{i\in\mathcal{I}^{\ast}} is the minimal collection of sets that covers \mathcal{U}, i.e., \mathcal{I}^{\ast} solves the minimum set covering problem associated with \mathcal{S} and \mathcal{U}. Hence, the result follows by observing that Algorithm 1 has polynomial complexity, namely \mathcal{O}(\max\{|\mathcal{J}|,n\}^{3}).\hfill\blacksquare
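As an illustration of this reduction (a minimal sketch, not the paper's Algorithm 1; all identifiers such as `V_bar` and `greedy_set_cover` are ours), the covering sets can be built from the structural pattern of the left-eigenvectors, and a greedy cover then selects the state variables to actuate:

```python
# Sketch of the Lemma 2 reduction: actuating state i "covers" every
# left-eigenvector whose i-th entry is structurally nonzero, so the
# sparsest input vector corresponds to a minimum set cover.

def greedy_set_cover(sets, universe):
    """Greedy approximation to minimum set cover."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the set covering the most still-uncovered elements
        i = max(range(len(sets)), key=lambda k: len(sets[k] & uncovered))
        if not (sets[i] & uncovered):
            raise ValueError("universe cannot be covered")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

# Example: 3 structural left-eigenvectors over n = 4 state variables.
# Row j lists which entries of bar{v}^j are (structurally) nonzero.
V_bar = [
    [1, 0, 0, 1],   # bar{v}^1
    [0, 1, 0, 0],   # bar{v}^2
    [0, 0, 1, 1],   # bar{v}^3
]
n = 4
# S_i = indices j of eigenvectors whose i-th entry is nonzero.
S = [{j for j in range(len(V_bar)) if V_bar[j][i]} for i in range(n)]
universe = set(range(len(V_bar)))
states = greedy_set_cover(S, universe)   # state variables to actuate
```

Here `states` gives the nonzero pattern of a sparse structural input vector \bar{b}: every eigenvector row has a nonzero entry at some selected state.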
Proof of Theorem 3
The proof follows by showing that if \{v^{i}\}_{i\in\mathcal{J}}, with countable \mathcal{J}, is such that v^{i}\neq 0 for all i\in\mathcal{J}, and \bar{b} is a solution to (8), then the set \Omega=\{b\in\mathbb{C}^{n}\ :\ v^{i}\cdot b=0\text{ for some }i\in\mathcal{J},\text{ and }b\text{ is a numerical instance of }\bar{b}\} has zero Lebesgue measure. The proof follows similar steps to those proposed in [45], but due to the additional sparsity constraint we devise an independent proof. Let \{v^{i}\}_{i\in\mathcal{J}}, with countable \mathcal{J}, be given and let \bar{b} be a solution to problem (8). For b\in\mathbb{C}^{n}, the equation v^{i}\cdot b=0 defines a hyperplane \mathcal{H}^{i}\subset\mathbb{C}^{n} (provided v^{i}\neq 0 for all i), and hence v^{i}\cdot b\neq 0 defines the set \mathbb{C}^{n}\setminus\mathcal{H}^{i}. Therefore, the set of b satisfying v^{i}\cdot b\neq 0 for all i\in\mathcal{J} is given by \bigcap\limits_{i\in\mathcal{J}}\left(\mathbb{C}^{n}\setminus\mathcal{H}^{i}\right)=\mathbb{C}^{n}\setminus\left(\bigcup\limits_{i\in\mathcal{J}}\mathcal{H}^{i}\right), and the set \Omega of values that do not satisfy these equations is the complement, i.e., \left(\mathbb{C}^{n}\setminus\bigcup\limits_{i\in\mathcal{J}}\mathcal{H}^{i}\right)^{\mathsf{c}}=\bigcup\limits_{i\in\mathcal{J}}\mathcal{H}^{i}, which is a set with zero Lebesgue measure in \mathbb{C}^{n}, since \mathcal{J} is countable and each hyperplane \mathcal{H}^{i} has measure zero.
Now, if \{v^{i}\}_{i\in\mathcal{J}} is taken to be the set of left-eigenvectors of A and \bar{b} the corresponding solution to problem (8), every numerical instance of \bar{b} outside the set \Omega constitutes a solution to (8) and hence to the MCP. Since, by the preceding arguments, \Omega has Lebesgue measure zero in \mathbb{C}^{n}, it follows readily that almost all numerical instances of \bar{b} are solutions to the MCP.\hfill\blacksquare
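The measure-zero argument suggests a simple numerical recipe, sketched below under illustrative names (`instantiate`, `bar_b`, `eigvecs` are ours, not the paper's Algorithm 2): fix the sparsity pattern \bar{b} and draw the free entries at random; with probability one, no inner product v^{i}\cdot b vanishes, so a retry loop terminates almost surely on the first iteration.

```python
# Sketch of almost-sure instantiation: the "bad" set of numerical
# instances b lying on some hyperplane v^i . b = 0 has measure zero,
# so random sampling of the free entries avoids it with probability 1.
import random

def instantiate(bar_b, eigvecs, rng=random.Random(0)):
    """Fill the nonzero pattern bar_b with random reals; retry in the
    (probability-zero) event that some inner product v^i . b vanishes."""
    while True:
        b = [rng.uniform(-1.0, 1.0) if s else 0.0 for s in bar_b]
        if all(abs(sum(vi * bi for vi, bi in zip(v, b))) > 1e-12
               for v in eigvecs):
            return b

# Toy left-eigenvectors (real, for simplicity) and a structural pattern.
eigvecs = [[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]]
bar_b = [0, 1, 1]            # entries 1 and 2 are free, entry 0 stays zero
b = instantiate(bar_b, eigvecs)
```

By the PBH-style test used in the paper (Theorem 2), such a b renders the pair (A, b) controllable whenever the v^{i} form a left-eigenbasis of a simple A.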
Proof of Theorem 4
Proof of Lemma 3
By Lemma 2, given \{\bar{v}^{i}\}_{i\in\mathcal{J}}, problem (8) is polynomially (in |\mathcal{J}| and n) reducible to a minimum set covering problem. Now, given a solution \bar{b} to (7), Algorithm 2 can be used to obtain a numerical instantiation b with the same structure as \bar{b} such that v^{i}\cdot b\neq 0 for all i\in\mathcal{J}, which incurs polynomial complexity (in |\mathcal{J}| and n) by Theorem 4. Furthermore, it is readily seen that any feasible solution b^{\prime} to (8) satisfies \|b^{\prime}\|_{0}\geq\|\bar{b}\|_{0}=\|b\|_{0}. Hence, b obtained by the above recipe is a solution to (8), and the desired assertion follows by observing that all steps in the above construction yielding \bar{b} have polynomial complexity (in |\mathcal{J}| and n). \hfill\blacksquare
Proof of Theorem 5
Proof of Theorem 6
Proof of Theorem 7
The proof follows by noticing that the MCP can be polynomially reduced to an instance of the rMCP, and by invoking Proposition 1. In particular, the reduction is immediate: the MCP is the special case of the rMCP obtained when the total number of inputs allowed to fail is s=0.\hfill\blacksquare
Proof of Theorem 8
First, we observe that, by construction of the sets \{\mathcal{S}_{1},\ldots,\mathcal{S}_{(s+1)n}\} and the demand function d(i)=s+1, for i\in\{1,\ldots,n\}, there always exist s+1 columns of B matching every nonzero entry of the vectors in a left-eigenbasis. This implies that, if at most s inputs fail, then for each left-eigenvector v at least one remaining column c of B satisfies v\cdot c\neq 0, implying {v^{i}}^{\intercal}B\neq 0 for i\in\{1,\ldots,n\}. Hence, the system is controllable by Theorem 2, and we have a feasible solution. Now we need to show that the solution is optimal, i.e., that there is no solution to the rMCP with fewer dedicated inputs. We proceed by contradiction: assume that there is a solution associated with a demand function d(i)=w, for i\in\{1,\ldots,n\} and some w<s+1. Then, for some entry of a left-eigenvector v, only w columns of B are guaranteed to have a nonzero inner product with v. Therefore, if those w dedicated inputs fail, i.e., the corresponding columns of B become zero, then v^{\intercal}B=0 for that left-eigenvector v, contradicting the assumption that there is a sparser solution to the rMCP.\hfill\blacksquare
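The role of the demand d(i)=s+1 can be checked on a toy instance (a hypothetical sketch under illustrative names, not the paper's construction): give each required state variable s+1 dedicated inputs and verify exhaustively that zeroing any s columns of B leaves v^{\intercal}B\neq 0 for every left-eigenvector v.

```python
# Toy illustration of the s+1 demand: with s+1 dedicated inputs per
# required state, removing any s columns of B still leaves, for every
# left-eigenvector v, a surviving column c with v . c != 0.
from itertools import combinations

def survives_all_failures(cols, eigvecs, s):
    """cols[k] = row index of the single 1-entry in dedicated column k.
    Check v^T B != 0 for every v after every possible s-column failure."""
    m = len(cols)
    for failed in combinations(range(m), s):
        alive = [cols[k] for k in range(m) if k not in failed]
        for v in eigvecs:
            if all(abs(v[i]) < 1e-12 for i in alive):
                return False   # some eigenvector is orthogonal to B
    return True

s = 1
n = 2
eigvecs = [[1.0, 0.0], [0.0, 1.0]]            # toy left-eigenbasis
# Each state i gets s+1 dedicated inputs (columns e_i, duplicated).
cols = [i for i in range(n) for _ in range(s + 1)]
ok = survives_all_failures(cols, eigvecs, s)  # robust to any single failure
```

With only one dedicated input per state (demand w=1<s+1), failing that single input already zeroes v^{\intercal}B for some v, matching the contradiction argument above.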
Proof of Theorem 9
Proof of Theorem 10
From [4], we have that the MCP is NP-hard and, in particular, that the minimum set covering problem can be polynomially reduced to it. Therefore, we just need to show that the MCP under the assumption made in this paper, i.e., that A comprises only simple eigenvalues and a left-eigenbasis is known, can be polynomially reduced to the minimum set covering problem.
To this end, note that, given the set \{\bar{v}^{i}\}_{i\in\mathcal{J}} of left-eigenvectors of A, the MCP is equivalent to problem (8), the latter being polynomially (in |\mathcal{J}| and n) reducible to the minimum set covering problem (see Lemma 3). Since |\mathcal{J}|=n, the overall reduction to the minimum set covering problem is polynomial in n, and the result follows by invoking Proposition 1.\hfill\blacksquare
Proof of Theorem 11
First, notice that the output of Algorithm 3, i.e., B_{n}(\mathcal{M}^{\prime}), is a feasible solution, since the algorithm stops only when each element in the universe of the set multicover instance is covered s+1 times.
The computational complexity of Algorithm 3 is determined by the overall complexity of steps 1, 4 and 5. In step 1, we need to compute (s+1)n sets; in step 5, at most n sets need to be considered; and in step 4, s+1 iterations are performed, each executing step 5, yielding (s+1)n computational steps. Summing the complexity of each step, Algorithm 3 has, in the worst case, computational complexity of order \mathcal{O}(sn). In addition, notice that the approximation performance attained for the set multicover problem carries over to the rMCP, as a consequence of Theorem 10. Furthermore, the solution obtained incurs an optimality gap of at most \mathcal{O}(\log{n}), since the algorithm implements the greedy algorithm associated with submodular functions, of which the set multicover objective is an instance, and the result follows. \hfill\blacksquare
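A simplified rendering of such a greedy set multicover step (our sketch under illustrative names, not the paper's pseudocode for Algorithm 3) repeatedly picks the set that reduces the most residual demand:

```python
# Greedy set multicover sketch: each element j must be covered demand[j]
# times (demand = s+1 in the rMCP); at each step, pick the still-unused
# set with the largest residual-demand reduction. This is the standard
# greedy scheme for submodular coverage, with an O(log n) optimality gap.

def greedy_multicover(sets, demand):
    need = dict(demand)                   # residual demand per element
    avail = set(range(len(sets)))         # each set may be used once
    chosen = []
    while any(r > 0 for r in need.values()):
        gain = lambda k: sum(1 for j in sets[k] if need.get(j, 0) > 0)
        if not avail or all(gain(k) == 0 for k in avail):
            raise ValueError("demands cannot be met")
        i = max(avail, key=gain)          # best residual-demand reduction
        chosen.append(i)
        avail.discard(i)
        for j in sets[i]:
            if need.get(j, 0) > 0:
                need[j] -= 1
    return chosen

# Example: every element must be covered twice (tolerating s = 1 failure).
sets = [{0, 1}, {0, 1}, {1, 2}, {0, 2}, {2}]
demand = {0: 2, 1: 2, 2: 2}
picks = greedy_multicover(sets, demand)
```

In the rMCP translation, each chosen set corresponds to one dedicated input column of B, and the demand s+1 guarantees that every left-eigenvector entry remains matched after any s failures.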