On the Complexity and Approximability of Optimal Sensor Selection for Kalman Filtering
Abstract
Given a linear dynamical system, we consider the problem of selecting (at design time) an optimal set of sensors (subject to certain budget constraints) to minimize the trace of the steady-state error covariance matrix of the Kalman filter. Previous work has shown that this problem is NP-hard for certain classes of systems and sensor costs; in this paper, we show that the problem remains NP-hard even for the special case where the system is stable and all sensor costs are identical. Furthermore, we show the stronger result that there is no constant-factor (polynomial-time) approximation algorithm for this problem. This contrasts with other classes of sensor selection problems studied in the literature, which typically pursue constant-factor approximations by leveraging greedy algorithms and submodularity of the cost function. Here, we provide a specific example showing that greedy algorithms can perform arbitrarily poorly for the problem of design-time sensor selection for Kalman filtering.
I Introduction
Selecting an appropriate set of actuators or sensors in order to achieve certain performance requirements is an important problem in control system design (e.g., [1], [2], [3]). For instance, in the case of linear Gauss-Markov systems, researchers have studied techniques to select sensors dynamically (at run time) or statically (at design time) in order to minimize certain metrics of the error covariance of the corresponding Kalman filter. These are known as sensor scheduling problems (e.g., [4], [5], [6]) and design-time sensor selection problems (e.g., [7], [8], [9], [10]), respectively. These problems are NP-hard in general (e.g., [10]), and various approximation algorithms have been proposed to solve them. For example, the concept of submodularity [11] has been widely used to analyze the performance of greedy algorithms for sensor scheduling and selection (e.g., [12], [13], [6], [14]).
In this paper, we consider the design-time sensor selection problem for optimal filtering of discrete-time linear dynamical systems. We study the problem of choosing a subset of sensors (under given budget constraints) to optimize the steady-state error covariance of the corresponding Kalman filter. We refer to this problem as the Kalman filtering sensor selection (KFSS) problem. We summarize some related work as follows.
In [7], the authors considered the design-time sensor selection problem for a sensor network monitoring a discrete-time linear dynamical system, also known as a dynamic data-reconciliation problem. They assumed that each sensor measures one component of the system state and that the measured and unmeasured states are related via network-defined mass-balance functions. The objective is to minimize the cost of implementing the network configuration subject to certain performance criteria. They transformed the problem into convex optimization problems, but did not provide a complexity analysis of the problem. In contrast, we consider the problem of minimizing the estimation error under a cardinality constraint on the chosen sensors, without a network configuration, and we analyze the complexity of the problem.
In [9], the authors studied the design-time sensor selection problem for discrete-time linear time-varying systems over a finite time horizon, under the assumption that each sensor measures one component of the system state vector. The objective is to minimize the number of chosen sensors while guaranteeing a certain level of performance (or, alternatively, to minimize the estimation error under a cardinality constraint on the chosen sensors). In contrast, we consider general measurement matrices and aim to minimize the steady-state estimation error.
The papers [15] and [10] considered the same design-time sensor selection problem as the one we consider here. In [15], the authors expressed the problem as a semidefinite program (SDP). However, they did not provide theoretical guarantees on the performance of the proposed algorithm. The paper [10] showed that the problem is NP-hard and gave examples showing that the cost function is not submodular in general. The authors also provided upper bounds on the performance of algorithms for the problem; these upper bounds were functions of the system matrices. Although [10] showed via simulations that greedy algorithms performed well for several randomly generated systems, the question of whether such algorithms (or other polynomial-time algorithms) could provide constant-factor approximation ratios for the problem was left open.
Our contributions to this problem are as follows. First, we show that the KFSS problem is NP-hard even for the special case when the system is stable and all sensors have the same cost. This complements and strengthens the complexity result in [10], which only showed NP-hardness for two subclasses of problem instances: (1) when the system is unstable and the sensor costs are identical, and (2) when the system is stable but the sensor costs are arbitrary. The NP-hardness of those cases followed in a relatively straightforward manner via reductions from the minimal controllability [2] and knapsack [16] problems, respectively. In contrast, the stronger NP-hardness proof that we provide here requires a more careful analysis, and makes connections to finding sparse solutions to linear systems of equations, yielding new insights into the problem.
After establishing NP-hardness of the problem as above, our second (and most significant) contribution is to show that there is no constant-factor approximation algorithm for this problem (unless $\mathsf{P} = \mathsf{NP}$). In other words, there is no polynomial-time algorithm that can find a sensor selection that is always guaranteed to yield a mean square estimation error (MSEE) that is within any constant finite factor of the MSEE for the optimal selection. This stands in stark contrast to other sensor selection problems studied in the literature, which leveraged submodularity of their associated cost functions to provide greedy algorithms with constant-factor approximation ratios [9].
Our inapproximability result above immediately implies that greedy algorithms cannot provide constant-factor guarantees for our problem. Our third contribution in this paper is to explicitly show how greedy algorithms can provide arbitrarily poor performance even for very small instances of the KFSS problem (i.e., in systems with only three states and three sensors to choose from).
The rest of this paper is organized as follows. In Section II, we formulate the KFSS problem. In Section III, we analyze the complexity of the KFSS problem. In Section IV, we study a greedy algorithm for the KFSS problem and analyze its performance. In Section V, we conclude the paper.
I-A Notation and terminology
The sets of natural numbers, integers, real numbers, rational numbers, and complex numbers are denoted as $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, $\mathbb{Q}$ and $\mathbb{C}$, respectively. For any $x \in \mathbb{R}$, denote $\lceil x \rceil$ as the least integer greater than or equal to $x$. For a square matrix $P \in \mathbb{R}^{n \times n}$, let $P^T$, $\det(P)$, $\Lambda(P)$, and $\Sigma(P)$ be its transpose, determinant, set of eigenvalues and set of singular values, respectively. The set of eigenvalues $\Lambda(P)$ is ordered with nondecreasing magnitude, i.e., $|\lambda_1| \le \cdots \le |\lambda_n|$; the same order applies to the set of singular values $\Sigma(P)$. Denote $P_{ij}$ as the element in the $i$th row and $j$th column of $P$. A positive definite (resp. positive semidefinite) matrix $P$ is denoted as $P \succ 0$ (resp. $P \succeq 0$), and $P \succ Q$ if $P - Q \succ 0$. The set of $n$ by $n$ positive definite (resp. positive semidefinite) matrices is denoted as $\mathbb{S}^n_{++}$ (resp. $\mathbb{S}^n_{+}$). The identity matrix with dimension $n \times n$ is denoted as $I_n$. For a vector $x$, denote $x_i$ as the $i$th element of $x$ and let $\mathrm{supp}(x) \triangleq \{i : x_i \neq 0\}$ be its support. Denote the Euclidean norm of $x$ by $\|x\|_2$. Define $e_i$ to be a row vector where the $i$th element is $1$ and all the other elements are zero; the dimension of the vector can be inferred from the context. For a random variable $w$, let $\mathbb{E}[w]$ be its expectation. For a set $\mathcal{S}$, let $|\mathcal{S}|$ be its cardinality.
Ii Problem Formulation
Consider the discretetime linear system
(1) $x[k+1] = Ax[k] + w[k]$
where $x[k] \in \mathbb{R}^n$ is the system state, $w[k] \in \mathbb{R}^n$ is a zero-mean white Gaussian noise process with $\mathbb{E}\big[w[k](w[k])^T\big] = W$ for all $k$, and $A \in \mathbb{R}^{n \times n}$ is the system dynamics matrix. We assume throughout this paper that the pair $(A, W^{1/2})$ is stabilizable.
Consider a set $\mathcal{Q} \triangleq \{1, 2, \dots, q\}$ consisting of $q$ sensors. Each sensor $i \in \mathcal{Q}$ provides a measurement of the system in the form
(2) $y_i[k] = C_i x[k] + v_i[k]$
where $C_i \in \mathbb{R}^{s_i \times n}$ is the state measurement matrix for sensor $i$, and $v_i[k] \in \mathbb{R}^{s_i}$ is a zero-mean white Gaussian noise process. We further define $C \triangleq \begin{bmatrix} C_1^T & \cdots & C_q^T \end{bmatrix}^T$, and $v[k] \triangleq \begin{bmatrix} (v_1[k])^T & \cdots & (v_q[k])^T \end{bmatrix}^T$. Thus, the output provided by all sensors together is given by
(3) $y[k] = Cx[k] + v[k]$
where $y[k] \triangleq \begin{bmatrix} (y_1[k])^T & \cdots & (y_q[k])^T \end{bmatrix}^T$ and $C \in \mathbb{R}^{s \times n}$ with $s \triangleq \sum_{i=1}^{q} s_i$. We denote $V \triangleq \mathbb{E}\big[v[k](v[k])^T\big]$ and consider $W \in \mathbb{S}^n_{++}$, $V \in \mathbb{S}^s_{++}$.
Consider that there are no sensors initially deployed on the system. Instead, the system designer must select a subset of sensors from the set $\mathcal{Q}$ to install. Each sensor $i \in \mathcal{Q}$ has a cost $c_i \in \mathbb{R}_{\geq 0}$; define the cost vector $c \triangleq \begin{bmatrix} c_1 & \cdots & c_q \end{bmatrix}^T$. The designer has a budget $B \in \mathbb{R}_{\geq 0}$, representing the total cost that can be spent on sensors from $\mathcal{Q}$.
After a set of sensors is selected and installed, the Kalman filter is then applied to provide an optimal estimate of the states using the measurements from the installed sensors (in the sense of minimizing the MSEE). We define a vector $\mu \triangleq \begin{bmatrix} \mu_1 & \cdots & \mu_q \end{bmatrix}^T \in \{0,1\}^q$ as the indicator vector of the selected sensors, where $\mu_i = 1$ if and only if sensor $i$ is installed. Denote $C(\mu)$ as the measurement matrix of the installed sensors indicated by $\mu$, i.e., $C(\mu) \triangleq \begin{bmatrix} C_{i_1}^T & \cdots & C_{i_p}^T \end{bmatrix}^T$, where $\mathrm{supp}(\mu) = \{i_1, \dots, i_p\}$. Similarly, denote $V(\mu)$ as the measurement noise covariance matrix of the installed sensors, i.e., the submatrix of $V$ whose rows and columns correspond to the installed sensors. Let $\Sigma_{k|k-1}(\mu)$ and $\Sigma_{k|k}(\mu)$ denote the a priori error covariance matrix and the a posteriori error covariance matrix of the Kalman filter at time step $k$, respectively, when the sensors indicated by $\mu$ are installed. We will use the following result [17].
Lemma 1
Suppose the pair $(A, W^{1/2})$ is stabilizable. For a given indicator vector $\mu$, both $\Sigma_{k|k-1}(\mu)$ and $\Sigma_{k|k}(\mu)$ will converge to finite limits $\Sigma(\mu)$ and $\Sigma^*(\mu)$, respectively, as $k \to \infty$ if and only if the pair $(A, C(\mu))$ is detectable.
The limit $\Sigma(\mu)$ satisfies the discrete algebraic Riccati equation (DARE) [17]:
(4) $\Sigma(\mu) = A\Sigma(\mu)A^T + W - A\Sigma(\mu)C(\mu)^T \big( C(\mu)\Sigma(\mu)C(\mu)^T + V(\mu) \big)^{-1} C(\mu)\Sigma(\mu)A^T$
Applying the matrix inversion lemma [18], we can rewrite Eq. (4) as
(5) $\Sigma(\mu) = A\big( \Sigma(\mu)^{-1} + R(\mu) \big)^{-1} A^T + W$
where $R(\mu) \triangleq C(\mu)^T V(\mu)^{-1} C(\mu)$ is the sensor information matrix corresponding to the sensor selection indicated by $\mu$. Note that the inverses in Eq. (4) and Eq. (5) are interpreted as pseudo-inverses if the arguments are not invertible. For the case when $\Sigma(\mu)$ is not invertible, the matrix inversion lemma does not hold under the pseudo-inverse (unless additional conditions are satisfied), and we compute $\Sigma(\mu)$ via Eq. (4).
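For intuition, the fixed point of the DARE in Eq. (4) can be found by simple fixed-point iteration of the Riccati recursion. The following is a minimal numerical sketch (the function name is ours, and this is not an optimized DARE solver); it is checked against the closed-form positive root of the scalar Riccati equation.

```python
import numpy as np

def dare_fixed_point(A, C, W, V, iters=5000):
    """Iterate the Riccati recursion of Eq. (4) to its fixed point.
    A plain fixed-point sketch, not an optimized DARE solver."""
    S = np.eye(A.shape[0])
    for _ in range(iters):
        G = C @ S @ C.T + V
        S = A @ S @ A.T + W - A @ S @ C.T @ np.linalg.solve(G, C @ S @ A.T)
    return S

# Scalar sanity check: x[k+1] = 0.5 x[k] + w[k], y[k] = x[k] + v[k],
# W = V = 1.  Here Eq. (4) reduces to S = 0.25*S/(S+1) + 1, whose
# positive root is (0.25 + sqrt(0.25**2 + 4))/2.
A = np.array([[0.5]]); C = np.array([[1.0]])
W = np.array([[1.0]]); V = np.array([[1.0]])
S = dare_fixed_point(A, C, W, V)
```

Since the Riccati map is a contraction for this stable scalar system, a few thousand iterations are more than sufficient for convergence.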
The limits $\Sigma(\mu)$ and $\Sigma^*(\mu)$ are coupled as [19]:
(6) $\Sigma(\mu) = A\Sigma^*(\mu)A^T + W$
For the case when the pair $(A, C(\mu))$ is not detectable, we define the limit $\mathrm{trace}(\Sigma(\mu)) \triangleq +\infty$. The Kalman filtering sensor selection (KFSS) problem is defined as follows.
Problem 1
(KFSS Problem) Given a system dynamics matrix $A \in \mathbb{R}^{n \times n}$, a measurement matrix $C \in \mathbb{R}^{s \times n}$ containing all of the individual sensor measurement matrices, a system noise covariance matrix $W$, a sensor noise covariance matrix $V$, a cost vector $c$ and a budget $B$, the Kalman filtering sensor selection problem is to find the sensor selection $\mu^*$, i.e., the indicator vector of the selected sensors, that solves
$\min_{\mu \in \{0,1\}^q} \ \mathrm{trace}(\Sigma(\mu)) \quad \text{s.t.} \quad c^T \mu \le B,$
where $\Sigma(\mu)$ is given by Eq. (4) if the pair $(A, C(\mu))$ is detectable, and $\mathrm{trace}(\Sigma(\mu)) = +\infty$ otherwise.
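As a concrete (hypothetical) illustration of Problem 1, the sketch below solves tiny instances by exhaustive enumeration of sensor subsets; the exponential search is consistent with the hardness results of the next section. All function names are ours, each sensor is assumed to contribute one row of $C$, and $V$ is assumed diagonal.

```python
import itertools
import numpy as np

def steady_state_trace(A, C_rows, V_diag, W, sel, iters=4000):
    """trace(Sigma(mu)) for the sensor subset `sel`, via fixed-point
    iteration of the DARE in Eq. (4)."""
    C = np.array([C_rows[i] for i in sel])
    V = np.diag([V_diag[i] for i in sel])
    S = np.eye(A.shape[0])
    for _ in range(iters):
        G = C @ S @ C.T + V
        S = A @ S @ A.T + W - A @ S @ C.T @ np.linalg.solve(G, C @ S @ A.T)
    return np.trace(S)

def kfss_brute_force(A, C_rows, V_diag, W, costs, budget):
    """Exhaustive search over all sensor subsets within budget --
    exponential in the number of sensors."""
    best, best_sel = np.inf, None
    for r in range(1, len(C_rows) + 1):
        for sel in itertools.combinations(range(len(C_rows)), r):
            if sum(costs[i] for i in sel) > budget:
                continue
            t = steady_state_trace(A, C_rows, V_diag, W, list(sel))
            if t < best:
                best, best_sel = t, sel
    return best_sel, best

# Two decoupled stable states; with budget for one sensor, the search
# picks the sensor for the slower (harder-to-estimate) state.
A = np.diag([0.9, 0.1])
C_rows = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
sel, best = kfss_brute_force(A, C_rows, [1.0, 1.0], np.eye(2), [1, 1], 1)
```

Because $A$ is stable here, every selection yields a finite trace; for unstable $A$, a detectability check would be needed before iterating.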
III Complexity Analysis
As described in the Introduction, the KFSS problem was shown to be NP-hard in [10] for two classes of systems and sensor costs. First, when the matrix $A$ is unstable, the set of chosen sensors must cause the resulting system to be detectable in order to obtain a finite steady-state error covariance matrix. Thus, for systems with unstable $A$ and identical sensor costs, [10] provided a reduction from the “minimal controllability” (or minimal detectability) problem considered in [2] to the KFSS problem. Second, when the matrix $A$ is stable (so that all sensor selections cause the system to be detectable), [10] showed that when the sensor costs can be arbitrary, the knapsack problem can be encoded as a special case of the KFSS problem, thereby again showing NP-hardness of the latter problem.
In this section, we provide a stronger result and show that the KFSS problem is NP-hard even for the special case where the matrix $A$ is stable and all sensors have the same cost. Hereafter, it will suffice for us to consider the case when $s_i = 1$, $\forall i \in \mathcal{Q}$, i.e., each sensor corresponds to one row of matrix $C$, and the sensor selection cost vector is $c = \mathbf{1}_q$, i.e., each sensor has cost equal to $1$.
We will use the following results in our analysis (the proofs are provided in the appendix).
Lemma 2
Consider a discrete-time linear system as defined in Eq. (1) and Eq. (2). Suppose the system dynamics matrix is diagonal and stable, of the form $A = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ with $|\lambda_i| < 1$, $\forall i$, the system noise covariance matrix $W = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2)$ is diagonal, and the sensor noise covariance matrix is $V = I$. Then, the following holds for all sensor selections $\mu$.
(a) $\forall i$, $\Sigma_{ii}(\mu)$ satisfies
(7) $\Sigma_{ii}(\mu) = \lambda_i^2 \big[ \big( \Sigma(\mu)^{-1} + R(\mu) \big)^{-1} \big]_{ii} + \sigma_i^2 \ \ge \ \sigma_i^2.$
(b) If $\exists i$ such that $\lambda_i = 0$, then $\Sigma_{ii}(\mu) = \sigma_i^2$.
(c) If $\exists i$ such that $\lambda_i \neq 0$, then $\Sigma_{ii}(\mu) \le \frac{\sigma_i^2}{1 - \lambda_i^2}$.
(d) If $\exists i$ such that $\lambda_i \neq 0$ and the $i$th column of $C(\mu)$ is zero, then $\Sigma_{ii}(\mu) = \frac{\sigma_i^2}{1 - \lambda_i^2}$.
(e) If $\exists i$ such that $e_i$ is a row of $C(\mu)$, then $\Sigma_{ii}(\mu)$ attains the minimum value of $\Sigma_{ii}(\cdot)$ over all sensor selections.
Lemma 3
Consider a discrete-time linear system as defined in Eq. (1) and Eq. (2). Suppose the system dynamics matrix is of the form $A = \mathrm{diag}(\lambda, 0, \dots, 0)$, where $0 < \lambda < 1$, the measurement matrix is $C = \begin{bmatrix} 1 & h \end{bmatrix}$, where $h \in \mathbb{R}^{1 \times (n-1)}$, the system noise covariance matrix is $W = \sigma^2 I_n$, and the sensor noise covariance matrix is $V = \sigma_v^2$. Then, the MSEE of state $1$, i.e., $\Sigma_{11}$, satisfies
(8) $\Sigma_{11} = \dfrac{\lambda^2 \Sigma_{11} \theta}{\Sigma_{11} + \theta} + \sigma^2,$
where $\theta \triangleq \sigma_v^2 + \sigma^2 \|h\|_2^2$. Moreover, if we view $\Sigma_{11}$ as a function of $\theta$, denoted as $f(\theta)$, then $f(\theta)$ is a strictly increasing function of $\theta$ with $\lim_{\theta \to \infty} f(\theta) = \frac{\sigma^2}{1 - \lambda^2}$.
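Under this scalar reduction (states $2$ through $n$ are white and enter the measurement as an extra noise of variance $\sigma^2\|h\|_2^2$, which is one reading of the setup above), the monotonicity and limiting behavior of $f(\theta)$ can be checked numerically. The function name and parameterization below are ours.

```python
def scalar_msee(lam, sigma2, theta, iters=3000):
    """Fixed point of the scalar Riccati map
    S -> lam^2 * S * theta / (S + theta) + sigma2,
    i.e., the steady-state a priori MSEE of a scalar state whose
    measurement carries effective noise variance theta."""
    S = sigma2
    for _ in range(iters):
        S = lam ** 2 * S * theta / (S + theta) + sigma2
    return S

lam, sigma2 = 0.5, 1.0
# f(theta) for growing theta: should increase monotonically and
# approach the open-loop (no-measurement) variance sigma2/(1-lam^2).
vals = [scalar_msee(lam, sigma2, t) for t in (0.1, 1.0, 10.0, 1e6)]
upper = sigma2 / (1 - lam ** 2)
```

The numerical values increase toward the open-loop variance, matching the monotonicity and limit stated in the lemma.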
III-A NP-hardness of the KFSS problem
To prove that the KFSS problem (Problem 1) is NP-hard, we relate it to the exact cover by 3-sets (X3C) problem described below.
Definition 1
Given a finite set $\mathcal{U}$ with $|\mathcal{U}| = 3m$ and a collection $\mathcal{K}$ of 3-element subsets of $\mathcal{U}$, an exact cover for $\mathcal{U}$ is a subcollection $\mathcal{K}' \subseteq \mathcal{K}$ such that every element of $\mathcal{U}$ occurs in exactly one member of $\mathcal{K}'$.
Remark 1
Since each member of $\mathcal{K}$ is a subset of $\mathcal{U}$ with exactly $3$ elements, if there exists an exact cover for $\mathcal{U}$, then it must consist of exactly $m$ members of $\mathcal{K}$.
We will use the following result [16].
Lemma 4
Given a finite set $\mathcal{U}$ with $|\mathcal{U}| = 3m$ and a collection $\mathcal{K}$ of 3-element subsets of $\mathcal{U}$, the problem of determining whether $\mathcal{K}$ contains an exact cover for $\mathcal{U}$ is NP-complete.
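The X3C decision problem can be illustrated by the brute-force procedure below on a toy instance (names and instance are ours); the enumeration over all candidate subcollections is exponential in general, in line with the NP-completeness statement.

```python
from itertools import combinations

def has_exact_cover(universe, subsets):
    """Brute-force X3C: search for |U|/3 pairwise-disjoint 3-element
    subsets whose union is U."""
    m = len(universe) // 3
    for choice in combinations(subsets, m):
        # Disjointness and full coverage together give an exact cover.
        if (sum(len(s) for s in choice) == len(universe)
                and set().union(*choice) == set(universe)):
            return True
    return False

U = {1, 2, 3, 4, 5, 6}
K_yes = [{1, 2, 3}, {4, 5, 6}, {1, 4, 5}]  # contains the cover {1,2,3}, {4,5,6}
K_no = [{1, 2, 3}, {1, 4, 5}, {2, 5, 6}]   # every pair either overlaps or misses an element
```
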
We are now in a position to prove the following result.
Theorem 1
The KFSS problem is NP-hard when the system dynamics matrix $A$ is stable and each sensor has identical cost.
We give a reduction from X3C to KFSS. Consider an instance of X3C given by a finite set $\mathcal{U}$ with $|\mathcal{U}| = 3m$, and a collection $\mathcal{K} = \{K_1, \dots, K_\tau\}$ of 3-element subsets of $\mathcal{U}$, where $|\mathcal{K}| = \tau$. For each subset $K_j \in \mathcal{K}$, define the column vector $h_j \in \{0,1\}^{3m}$ to encode which elements of $\mathcal{U}$ are contained in $K_j$. In other words, for $i \in \{1, \dots, 3m\}$ and $j \in \{1, \dots, \tau\}$, $(h_j)_i = 1$ if element $i$ of set $\mathcal{U}$ is in $K_j$, and $(h_j)_i = 0$ otherwise. Define the matrix $H \triangleq \begin{bmatrix} h_1 & \cdots & h_\tau \end{bmatrix}$. Furthermore, define $b \triangleq \mathbf{1}_{3m}$. Thus, $Hz = b$ has a solution $z$ such that $z$ has $m$ nonzero entries if and only if the answer to the instance of X3C is “yes” [20].
Given the above instance of X3C, we then construct an instance of KFSS as follows. We define the system dynamics matrix as $A = \mathrm{diag}(\lambda, 0, \dots, 0) \in \mathbb{R}^{(3m+1) \times (3m+1)}$, where $0 < \lambda < 1$.^1^1We fix the value of $\lambda$ for the proof. The set $\mathcal{Q}$ is defined to contain $\tau + 1$ sensors with the collective measurement matrix
(9) $C = \begin{bmatrix} 1 & b^T \\ \mathbf{0}_{\tau} & H^T \end{bmatrix}$
where $H$ and $b$ are defined, based on the given instance of X3C, as above. The system noise covariance matrix is set to be $W = \sigma^2 I_{3m+1}$, and the measurement noise covariance matrix is set to be $V = I_{\tau+1}$. Finally, the cost vector is set as $c = \mathbf{1}_{\tau+1}$, and the sensor selection budget is set as $B = m + 1$. Note that the sensor selection vector for this instance is denoted by $\mu \in \{0,1\}^{\tau+1}$. For the above construction, since the only nonzero eigenvalue of $A$ is $\lambda$, we know from Lemma 2(c) that $\Sigma_{11}(\mu) \le \sigma^2/(1-\lambda^2)$ for all sensor selections $\mu$.
We claim that the solution $\mu^*$ to the constructed instance of the KFSS problem attains a certain threshold value of $\mathrm{trace}(\Sigma(\mu^*))$ if and only if the answer to the given instance of the X3C problem is “yes”.
Suppose that the answer to the instance of the X3C problem is “yes”. Then $Hz = b$ has a solution $\bar z$ such that $\bar z$ has $m$ nonzero entries; denote $\mathrm{supp}(\bar z) = \{j_1, \dots, j_m\}$. Define $\bar\mu$ as the sensor selection vector that indicates selecting the first sensor and the $(j_1+1)$th to the $(j_m+1)$th sensors, i.e., the sensors that correspond to rows $1, j_1+1, \dots, j_m+1$ of $C$ in (9). Since $H\bar z = b$, elementary row operations on $C(\bar\mu)$ yield the row $e_1$. Hence, we know from Lemma 2(a) and Lemma 2(e) that $\mathrm{trace}(\Sigma(\bar\mu))$ attains the minimum value of $\mathrm{trace}(\Sigma(\mu))$ among all possible sensor selections $\mu$, and thus $\bar\mu$ is the optimal sensor selection, i.e., $\mu^* = \bar\mu$.
Conversely, suppose that the answer to the X3C problem is “no”. Then, for any union of $m$ subsets in $\mathcal{K}$, denoted as $\mathcal{U}'$, there exist elements in $\mathcal{U}$ that are not covered by $\mathcal{U}'$. We then show that $\mathrm{trace}(\Sigma(\mu))$ exceeds the minimum value above for all sensor selections $\mu$ that satisfy the budget constraint. First, for any possible sensor selection $\mu$ that does not select the first sensor, the first column of $C(\mu)$ is zero (from the form of $C$ as defined in Eq. (9)), and we know from Lemma 2(d) that $\Sigma_{11}(\mu) = \sigma^2/(1-\lambda^2)$, which implies that $\mu$ is not optimal. Thus, consider sensor selections $\mu$ that select the first sensor; denote $\mathrm{supp}(\mu) = \{1, i_1, \dots, i_p\}$, where $p \le m$, and define $H(\mu) \triangleq \begin{bmatrix} h_{i_1 - 1} & \cdots & h_{i_p - 1} \end{bmatrix}$. We then have
(10) $C(\mu) = \begin{bmatrix} 1 & b^T \\ \mathbf{0} & H(\mu)^T \end{bmatrix}$
where $H(\mu)^T$ has zero columns corresponding to the elements of $\mathcal{U}$ that are not covered. As argued in Lemma 5 in the appendix, there exists an orthogonal matrix of the form $T = \mathrm{diag}(1, \bar T)$ (where $\bar T$ is also an orthogonal matrix) such that the transformed matrix contains a block of full column rank. We perform a similarity transformation on the system with the matrix $T$ (which does not change the trace of the error covariance matrix), and perform additional elementary row operations to transform the measurement matrix into the matrix
(11) 
Since $A$ and $W$ are both diagonal, we can obtain from Eq. (4) that the steady-state error covariance corresponding to the transformed sensing matrix is of the form
where the $(1,1)$ entry, denoted as $\Sigma_{11}(\mu)$ for simplicity, satisfies the scalar relation of the form in Eq. (8), with $\theta$ defined as in Lemma 3. We then know from Lemma 3 that $\Sigma_{11}(\mu)$ is strictly larger than its minimum value, since the uncovered elements force $\theta$ to be strictly larger. Hence, we have that $\mathrm{trace}(\Sigma(\mu))$ exceeds its minimum value for every such $\mu$.
This completes the proof of the claim above. Suppose that there is an algorithm $\mathcal{A}$ that outputs the optimal solution $\mu^*$ to the instance of the KFSS problem defined above. We can then call algorithm $\mathcal{A}$ to solve the X3C problem. Specifically, if the algorithm outputs a solution $\mu^*$ such that $\mathrm{trace}(\Sigma(\mu^*))$ equals the minimum value established in the proof of the claim, then the answer to the instance of X3C is “yes”; otherwise, the answer is “no”.
Hence, we have a reduction from X3C to the KFSS problem. Since X3C is NP-complete and the reduction runs in polynomial time, we conclude that the KFSS problem is NP-hard.
III-B Inapproximability of the KFSS Problem
In this section, we analyze the achievable performance of algorithms for the KFSS problem. Specifically, consider any given instance of the KFSS problem. For any given algorithm $\mathcal{A}$, we define the following ratio:
(12) $r_{\mathcal{A}} \triangleq \dfrac{\mathrm{trace}(\Sigma(\mu_{\mathcal{A}}))}{\mathrm{trace}(\Sigma(\mu^*))}$
where $\mu^*$ is the optimal solution to the KFSS problem and $\mu_{\mathcal{A}}$ is the solution to the KFSS problem given by algorithm $\mathcal{A}$.
In [10], the authors showed that there is an upper bound for $r_{\mathcal{A}}$ for any sensor selection algorithm $\mathcal{A}$, in terms of the system matrices. However, the question of whether it is possible to find an algorithm that is guaranteed to provide an approximation ratio that is independent of the system parameters has remained open up to this point. In particular, it is typically desirable to find constant-factor approximation algorithms, where the ratio $r_{\mathcal{A}}$ is upper-bounded by some (system-independent) constant. Here, we provide a strong negative result and show that for the KFSS problem, there is no constant-factor approximation algorithm in general, i.e., for all polynomial-time algorithms $\mathcal{A}$ and all constants $K \ge 1$, there are instances of the KFSS problem where $r_{\mathcal{A}} > K$.
Theorem 2
If $\mathsf{P} \neq \mathsf{NP}$, then there is no polynomial-time constant-factor approximation algorithm for the KFSS problem.
Suppose that there exists such a (polynomial-time) approximation algorithm $\mathcal{A}$, i.e., there exists a constant $K \ge 1$ such that $r_{\mathcal{A}} \le K$ for all instances of the KFSS problem, where $r_{\mathcal{A}}$ is as defined in Eq. (12). We will show that $\mathcal{A}$ can be used to solve the X3C problem as described in Lemma 4. Given an arbitrary instance of the X3C problem (with a base set $\mathcal{U}$ containing $3m$ elements and a collection $\mathcal{K}$ of 3-element subsets of $\mathcal{U}$), we construct a corresponding instance of the KFSS problem in a similar way to that described in the proof of Theorem 1. Specifically, the system dynamics matrix is set as $A = \mathrm{diag}(\lambda, 0, \dots, 0)$, where $\lambda \in (0,1)$ is chosen based on the constant $K$. The set $\mathcal{Q}$ contains $\tau + 1$ sensors with the collective measurement matrix
(13) 
where $H$, $b$ depend on the given instance of X3C and are as defined in the proof of Theorem 1. The constant appearing in Eq. (13) is chosen sufficiently large as a function of the approximation constant $K$. The system noise covariance matrix is set to $W = \sigma^2 I_{3m+1}$, and the measurement noise covariance matrix is set to be $V = I_{\tau+1}$. The sensor cost vector is set as $c = \mathbf{1}_{\tau+1}$, and the sensor selection budget is set as $B = m+1$. Note that the sensor selection vector is given by $\mu \in \{0,1\}^{\tau+1}$.
We claim that there exists a sensor selection vector $\mu$ satisfying the budget constraint for which $\mathrm{trace}(\Sigma(\mu))$ is at most a certain threshold if and only if the answer to the X3C problem is “yes”.
Suppose that the answer to the X3C problem is “yes”. We know from the proof of Theorem 1 that there exists a sensor selection $\bar\mu$ such that $\mathrm{trace}(\Sigma(\bar\mu))$ attains its minimum possible value.
Conversely, suppose that the answer to the X3C problem is “no”. Then, for any union of $m$ subsets in $\mathcal{K}$, denoted as $\mathcal{U}'$, there exist elements in $\mathcal{U}$ that are not covered by $\mathcal{U}'$. We follow the discussion in the proof of Theorem 1. First, for any sensor selection $\mu$ that does not select the first sensor, state $1$ is unmeasured. Hence, by our choice of the constant in Eq. (13), such a selection cannot be within a factor $K$ of optimal, since $\mathrm{trace}(\Sigma(\mu))$ is finite for all possible sensor selections. Thus, consider sensor selections $\mu$ that include the first sensor. As argued in the proof of Theorem 1 leading up to Eq. (11), we can perform an orthogonal similarity transformation on the system, along with elementary row operations on the measurement matrix, to obtain a measurement matrix of the form
(14) 
where the blocks are defined as in the proof of Theorem 1. Moreover, we obtain from Lemma 3
If we view $\Sigma_{11}$ as a function of $\theta$, denoted as $f(\theta)$, we know from Lemma 3 that $f(\theta)$ is an increasing function of $\theta$. Since the answer to the X3C instance is “no”, the uncovered elements force $\theta$ to grow with the constant in Eq. (13), and hence, by our choice of that constant, $\mathrm{trace}(\Sigma(\mu))$ exceeds $K$ times the minimum value for every sensor selection $\mu$ that satisfies the budget constraint.
This completes the proof of the claim above. Hence, if algorithm $\mathcal{A}$ for the KFSS problem satisfies $r_{\mathcal{A}} \le K$ for all instances, it is clear that $\mathcal{A}$ can be used to solve the X3C problem by applying it to the above instance. Specifically, if the answer to the X3C instance is “yes”, then the optimal sensor selection would yield a certain trace value $t^*$, and thus the algorithm $\mathcal{A}$ would yield a trace no larger than $K t^*$. On the other hand, if the answer to the X3C instance is “no”, all sensor selections would yield a trace larger than $K t^*$, and thus so would the sensor selection provided by $\mathcal{A}$. In either case, the solution provided by $\mathcal{A}$ could be used to find the answer to the given X3C instance. Since X3C is NP-complete, there is no polynomial-time algorithm for it if $\mathsf{P} \neq \mathsf{NP}$, and we obtain a contradiction. This completes the proof of the theorem.
IV Greedy Algorithm
Our result in Theorem 2 indicates that no polynomial-time algorithm can be guaranteed to yield a solution that is within any constant factor of the optimal solution. In particular, this result applies to the greedy algorithms that are often studied for sensor selection in the literature [10], where sensors are iteratively selected in order to produce the greatest decrease in the error covariance at each iteration. It was shown via simulations in [10] that such algorithms work well in practice (e.g., for randomly generated systems). In this section, we provide an explicit example showing that greedy algorithms for KFSS can perform arbitrarily poorly, even for small systems (containing only three states). We will focus on the simple greedy algorithm for the KFSS problem defined as Algorithm 1, for instances where all sensor costs are equal to $1$, and the sensor budget is $B = k$ for some $k \in \mathbb{N}$ (i.e., up to $k$ sensors can be chosen). For any such instance of the KFSS problem, define $r_{\mathrm{greedy}} \triangleq \mathrm{trace}(\Sigma_{\mathrm{greedy}})/\mathrm{trace}(\Sigma(\mu^*))$, where $\Sigma_{\mathrm{greedy}}$ is the solution of the DARE corresponding to the sensors selected by Algorithm 1.
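Algorithm 1 itself is not reproduced in this excerpt; the following is a minimal sketch of the greedy rule described above (unit sensor costs, each sensor contributing one row of $C$, and $V = I$ assumed), with function names of our choosing, demonstrated on a generic two-state system rather than on Example 1, whose matrices are given below.

```python
import numpy as np

def dare_trace(A, C, W, V, iters=4000):
    """trace of the fixed point of the DARE in Eq. (4), by iteration."""
    S = np.eye(A.shape[0])
    for _ in range(iters):
        G = C @ S @ C.T + V
        S = A @ S @ A.T + W - A @ S @ C.T @ np.linalg.solve(G, C @ S @ A.T)
    return np.trace(S)

def greedy_kfss(A, C_rows, W, k):
    """At each of k steps, add the sensor giving the largest decrease
    in trace(Sigma); a sketch of the greedy rule, not Algorithm 1 verbatim."""
    chosen = []
    for _ in range(k):
        remaining = [i for i in range(len(C_rows)) if i not in chosen]
        scores = []
        for i in remaining:
            C = np.array([C_rows[j] for j in chosen + [i]])
            scores.append((dare_trace(A, C, W, np.eye(len(chosen) + 1)), i))
        best_trace, best_i = min(scores)
        chosen.append(best_i)
    return chosen, best_trace

# Two decoupled stable states, one candidate sensor per state.
A = np.diag([0.9, 0.1])
C_rows = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
chosen, tr = greedy_kfss(A, C_rows, np.eye(2), 2)
```

On this decoupled example the greedy choice happens to be optimal; the point of Example 1 below is precisely that coupling through the measurement matrix can make the greedy choice arbitrarily bad.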
Example 1
Consider an instance of the KFSS problem with matrices $A$, $C$, $W$ and $V$ defined as
In addition, we have the set of candidate sensors $\mathcal{Q} = \{1, 2, 3\}$, the selection budget $B = 2$ and the cost vector $c = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}^T$.
Theorem 3
For the instance of the KFSS problem defined in Example 1, the ratio $r_{\mathrm{greedy}}$ defined above satisfies
(15) 
We first prove that the greedy algorithm defined as Algorithm 1 selects sensor $2$ and sensor $3$ in its first and second iterations, respectively. Since the only nonzero eigenvalue of $A$ is $\lambda$, we know from Lemma 2(c) that the MSEEs of states $2$ and $3$ do not depend on the sensor selection, which implies that the candidate selections are ranked by the MSEE of state $1$. Hence, we focus on determining $\Sigma_{11}$.
In the first iteration of the greedy algorithm, suppose the first sensor, i.e., the sensor corresponding to the first row of $C$, is selected. Then, using the result in Lemma 3, we obtain $\Sigma_{11}$, denoted as $\Sigma_{11}^{(1)}$, to be
Similarly, if the second sensor, i.e., the sensor corresponding to the second row of $C$, is selected in the first iteration of the greedy algorithm, we obtain $\Sigma_{11}$, denoted as $\Sigma_{11}^{(2)}$, to be
If the third sensor, i.e., the sensor corresponding to the third row of $C$, is selected in the first iteration of the greedy algorithm, the first column of $C(\mu)$ is zero, which implies $\Sigma_{11} = \sigma^2/(1-\lambda^2)$ based on Lemma 2(d). If we view $\Sigma_{11}^{(1)}$ as a function of $\theta$, denoted as $f(\theta)$, then, since we know from Lemma 3 that $f(\theta)$ is an increasing function of $\theta$ and upper bounded by $\sigma^2/(1-\lambda^2)$, we obtain $\Sigma_{11}^{(2)} < \Sigma_{11}^{(1)} < \sigma^2/(1-\lambda^2)$, which implies that the greedy algorithm selects the second sensor in its first iteration.
In the second iteration of the greedy algorithm, if the first sensor is added, we have the measurement matrix formed by the first and second rows of $C$, on which we perform elementary row operations to obtain an equivalent measurement matrix. By direct computation from Eq. (4), we obtain the corresponding value of $\Sigma_{11}$. If the third sensor is added instead, we have the measurement matrix formed by the second and third rows of $C$. By direct computation from Eq. (4), we obtain $\Sigma_{11}$, denoted as $\Sigma_{11}^{(3)}$, to be
Similar to the argument above, $\Sigma_{11}^{(3)}$ is smaller than the value of $\Sigma_{11}$ achieved by adding the first sensor, which implies that the greedy algorithm selects the third sensor in its second iteration. Hence, the greedy algorithm selects sensors $2$ and $3$.
If the first and third sensors are selected instead, then elementary row operations on $C(\mu)$ yield the row $e_1$, and thus we know from Lemma 2(a) and Lemma 2(e) that $\mathrm{trace}(\Sigma(\mu))$ attains the minimum value of $\mathrm{trace}(\Sigma(\cdot))$ among all possible sensor selections $\mu$. Combining the results above and taking the limit, we obtain the result in Eq. (15).
Examining Eq. (15), we see that for the given instance of KFSS, we have $r_{\mathrm{greedy}} \to \infty$ as the parameters of the instance are driven to their limiting values. Thus, $r_{\mathrm{greedy}}$ can be made arbitrarily large by choosing the parameters in the instance appropriately. It is also useful to note that the above behavior holds for any algorithm that outputs a sensor selection containing sensor $2$ for the above example.
V Conclusions
In this paper, we studied the KFSS problem for linear dynamical systems. We showed that this problem is NP-hard and has no constant-factor (polynomial-time) approximation algorithm, even under the assumption that the system is stable and each sensor has identical cost. We provided an explicit example showing how a greedy algorithm can perform arbitrarily poorly on this problem, even when the system has only three states. Our results provide new insights into the problem of sensor selection for Kalman filtering and show, in particular, that this problem is more difficult than other variants of the sensor selection problem that have submodular cost functions. Future work on characterizing achievable (non-constant) approximation ratios, or identifying classes of systems that admit near-optimal approximation algorithms, would be of interest.
Appendix
Proof of Lemma 2:
Since $A$ and $W$ are diagonal, the system represents a set of $n$ scalar subsystems of the form
$x_i[k+1] = \lambda_i x_i[k] + w_i[k],$
where $x_i[k]$ is the $i$th state of $x[k]$ and $w_i[k]$ is a zero-mean Gaussian noise process with variance $\sigma_i^2$. As $A$ is stable, the pair $(A, C(\mu))$ is detectable and the pair $(A, W^{1/2})$ is stabilizable for all sensor selections $\mu$. Thus, the limit $\lim_{k \to \infty} \Sigma_{k|k-1}(\mu)$ exists (based on Lemma 1), and is denoted as $\Sigma(\mu)$.
Proof of (a). Since $A$ and $W$ are diagonal, we obtain from Eq. (5) that
(16) $\Sigma_{ii}(\mu) = \lambda_i^2 \big[ \big( \Sigma(\mu)^{-1} + R(\mu) \big)^{-1} \big]_{ii} + \sigma_i^2, \ \forall i.$
Moreover, since $\big( \Sigma(\mu)^{-1} + R(\mu) \big)^{-1} \succeq 0$, its diagonal elements are nonnegative. Hence, $\Sigma_{ii}(\mu) \ge \sigma_i^2$, $\forall i$.
Proof of (b). Letting $\lambda_i = 0$ in Eq. (7), we obtain $\Sigma_{ii}(\mu) = \sigma_i^2$.
Proof of (c). Noting that $\big[ \big( \Sigma(\mu)^{-1} + R(\mu) \big)^{-1} \big]_{ii} \le \Sigma_{ii}(\mu)$ in Eq. (7), we obtain $\Sigma_{ii}(\mu) \le \lambda_i^2 \Sigma_{ii}(\mu) + \sigma_i^2$, i.e., $\Sigma_{ii}(\mu) \le \sigma_i^2/(1-\lambda_i^2)$.
Proof of (d). Assume without loss of generality that the first column of $C(\mu)$ is zero, since we can simply renumber the states to make this the case without affecting the trace of the error covariance matrix. Hence, $C(\mu)$ is of the form
$C(\mu) = \begin{bmatrix} \mathbf{0} & \bar C \end{bmatrix}.$
Moreover, since $A$ and $W$ are diagonal and state $1$ is decoupled from the measurements, we can obtain from Eq. (4) that $\Sigma(\mu)$ is of the form
$\Sigma(\mu) = \mathrm{diag}\big( \Sigma_{11}(\mu), \bar\Sigma \big),$
where $\Sigma_{11}(\mu)$ satisfies
$\Sigma_{11}(\mu) = \lambda_1^2 \Sigma_{11}(\mu) + \sigma_1^2,$
which implies $\Sigma_{11}(\mu) = \sigma_1^2/(1-\lambda_1^2)$.
Proof of (e). Similar to the proof of (d), we can assume without loss of generality that $i = 1$. If we further perform elementary row operations on $C(\mu)$,^2^2Note that since the measurement noise covariance is transformed accordingly, it is easy to see that the Kalman filter gives the same results if we perform any elementary row operation on $C(\mu)$. we get a matrix of the form
Moreover, since and are diagonal and , we can obtain from Eq. that is of the form
where and satisfies