Controlled Sensing: A Myopic Fisher Information Sensor Selection Algorithm
This paper considers the problem of state tracking with observation control for a particular class of dynamical systems. The system state evolution is described by a discrete–time, finite–state Markov chain, while the measurement process is characterized by a controlled multi–variate Gaussian observation model. The computational complexity of the optimal control strategy proposed in our prior work proves to be prohibitive. A suboptimal, lower complexity algorithm based on the Fisher information measure is proposed. Toward this end, the preceding measure is generalized to account for multi–valued discrete parameters and control inputs. A closed–form formula for our system model is also derived. Numerical simulations are provided for a physical activity tracking application showing the near–optimal performance of the proposed algorithm.
In recent years, there has been increasing interest in the problem of controlled sensing for inference in signal processing and related fields. In essence, the goal of controlled sensing is to characterize the way sensing modalities (e.g. sensor type, number of samples) are used to accomplish a certain inference task. For instance, consider a physical activity tracking application in which the goal is to continuously estimate a person’s physical activity (e.g. walking, standing) using a set of sensors, such as accelerometers and heart–rate monitors. Intuitively, the observations may differ depending on which sensors are used. Thus, carefully controlling the measurement process can dynamically refine the belief about the person’s unknown time–evolving state and potentially lead to substantial performance gains. Applications of controlled sensing include, but are not limited to, sensor management for object tracking, e-health, spectrum sensing, and amplitude design for channel estimation.
We have previously considered the controlled sensing problem for inference in the case of discrete–time, finite–state Markov chains with controlled multi–variate Gaussian observations . In particular, we addressed the joint problem of deriving recursive formulae for a Minimum Mean–Squared Error (MMSE) state estimator and designing a control strategy to optimize its performance. In this regard, a Kalman–like estimator was designed and a dynamic programming (DP) algorithm optimizing the associated Mean–Squared Error (MSE) was derived.
DP–based approaches typically suffer from the curse of dimensionality (i.e. one or more of the state, observation and control spaces are large), which precludes efficiently computable solutions. In our problem formulation, this difficulty is exacerbated by the fact that adopting the MSE as the performance objective results in nonlinear cost functions.
Herein, as a first step toward the design of computationally efficient control strategies, we propose a sensor selection algorithm based on the Fisher information measure . This measure is extremely important in estimation theory and statistics since it 1) characterizes how well we can estimate a parameter based on a set of observations, and 2) is related to the concept of efficiency and the Kullback–Leibler divergence , which also constitutes a fundamental measure in information theory.
The problem of controlled sensing has been previously studied under time–invariant  and time–varying hypotheses . In the latter case, it is common to assume that the unknown state is revealed through discrete noisy observations . On the other hand, the authors in  consider, among others, a Gaussian multi–variate signal model assuming i.i.d. measurements. In contrast, we consider time–varying systems with a generic Gaussian multi–variate signal model, which accounts for correlated measurements and enables fusion of multiple samples from different sensors.
In prior work, various performance objectives have been considered, e.g. detection error probability and bounds , general convex distance measures , estimation bounds . In contrast, our focus is MSE, but since the optimal strategy is computationally intensive, we propose an algorithm based on the Fisher information measure. Various scalar functions of the Fisher information matrix have been previously considered as optimization criteria for sensor selection and active parameter/state estimation . However, in all these cases, the differentiability of the associated likelihood function is implicitly assumed. In contrast, we consider , where this assumption fails.
Our contributions are as follows. To overcome the differentiability issue, we appropriately generalize the Fisher information measure. Furthermore, we derive a closed–form expression for our system model and propose a lower complexity algorithm that optimizes this expression with respect to control input selection. Finally, we evaluate the performance of the proposed algorithm using real data from a physical activity tracking application and show its near–optimal performance.
Unless stated otherwise, all vectors are column vectors denoted by lowercase boldface symbols (e.g. ). On the other hand, matrices are denoted by uppercase boldface symbols (e.g. ). Sets are indicated by calligraphic symbols (e.g. ). denotes the trace operator, the –norm of vector , a vector of dimension determined by the context with in the th position and zero everywhere else, the diagonal matrix with elements the components of and the determinant of matrix .
In this section, we describe the problem of controlled sensing. More precisely, we introduce our formulation, which includes our stochastic system model and the related stochastic optimization problem. For completeness, we also review our previously proposed Kalman–like system state estimator .
Consider a discrete–time dynamical system. Let denote discrete time and denote a first–order Markov chain on the –state state space with indicating the total number of system states. We assume that the Markov chain dynamics are modeled by the transition probability matrix with and initial distribution with . We also assume that the Markov chain is stationary, viz. the associated transition probabilities do not change with time. We consider a set of sensors, which at each time step , generate multiple noisy observations of the system state . A controller decides to receive all or any subset of these noisy observations by selecting an appropriate control input at time step denoted by . We assume that there is a finite number of available controls, i.e. . The resulting measurement vector is described by the following multivariate Gaussian model
where and denote the mean vector and covariance matrix, respectively. To select a control input, the controller exploits the knowledge of the observation–control history , where is the –algebra generated by , and .
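To make the system model concrete, the following sketch simulates one step of a finite–state Markov chain observed through a control–dependent multivariate Gaussian measurement. All dimensions and parameter values here are hypothetical placeholders; the paper’s actual values come from the WBAN application discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical): S states, 2 controls, 3-dim measurements.
S, M = 4, 3
P = np.full((S, S), 0.1) + 0.6 * np.eye(S)      # transition matrix
P /= P.sum(axis=1, keepdims=True)               # ensure rows sum to one

# Hypothetical control-dependent Gaussian parameters: mu[u][state], Sigma[u].
mu = {u: rng.normal(size=(S, M)) for u in (0, 1)}
Sigma = {u: np.eye(M) * (1.0 + u) for u in (0, 1)}

def step(state, u):
    """Advance the Markov chain one step and draw a controlled observation."""
    next_state = rng.choice(S, p=P[state])
    y = rng.multivariate_normal(mu[u][next_state], Sigma[u])
    return next_state, y

s, y = step(0, 1)
```

The control input enters only through the observation statistics, which is precisely the structure the controller exploits below.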
At each time step, the objective of the controller is to estimate the unknown system state by appropriately using the available observation and control input information. In that sense, the controller’s operation can be divided into the following three phases: 1) control selection, viz. the appropriate is selected based on , 2) measurement vector generation, viz. a measurement vector is generated based on the selected control , and 3) system state estimation, viz. an estimate of the state is determined based on the available information. In regard to the latter phase, we have recently proposed in  an approximate MMSE system state estimator, which is formally similar to the well–known Kalman filter. Specifically, the posterior distribution of given , denoted by with , constitutes the optimal MMSE state estimate of the Markov chain system state. In , we derived an approximate MMSE estimator , which is characterized by a Kalman–like structure. Theorem ? provides the associated equations.
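The paper’s Kalman–like estimator is specified by the referenced theorem, which is not reproduced here. As a rough point of reference, the standard HMM (grid) filter performs the analogous predict/update cycle on the belief vector; the sketch below shows that cycle, not the paper’s exact recursions.

```python
import numpy as np

def hmm_filter_step(pi_prev, P, likelihoods):
    """One predict/update cycle of the standard HMM filter.

    pi_prev: posterior over states at time k-1
    P: transition matrix (rows sum to one)
    likelihoods: p(y_k | state, control) evaluated at the received measurement
    """
    pi_pred = P.T @ pi_prev           # prediction (Chapman-Kolmogorov)
    post = likelihoods * pi_pred      # Bayes update (unnormalized)
    return post / post.sum()

P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = hmm_filter_step(np.array([0.5, 0.5]), P, np.array([0.9, 0.1]))
```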
The estimator of Theorem ? is employed during the system state estimation phase. It is evident from the related formulae that the estimation accuracy depends on the appropriate control input selection. Since the estimator’s MSE performance is characterized by the conditional filtering error covariance matrix , we formulated in  the following optimization problem, which falls under the framework of partially observable Markov decision processes (POMDPs) .
Controlled sensing problem. Determine a sequence of control inputs , which minimizes the expected total cost
The solution to the above problem is used during the control selection phase.
3 Dynamic Programming Formulation
In this section, we briefly summarize the solution of the controlled sensing problem proposed in our prior work . Specifically, having formulated our optimization problem as a POMDP enables us to seek a solution using dynamic programming principles, as illustrated in Theorem ?.
Due to the complexity of the expressions involved, it is impossible to determine an analytical solution to ( ?) and ( ?). Alternatively, we can numerically compute an approximate solution. However, there are certain practical issues to consider. In particular, is continuous–valued, which implies that we must quantize the associated predicted belief space to acquire a finite number of states. In addition, the non–linear nature of the cost vector along with the multi–dimensional integration required during the computation of ( ?) complicates the derivation of the DP policy. These difficulties motivate our search for a lower complexity scheme, which constitutes the major contribution of this work.
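To make the quantization issue concrete, the sketch below enumerates a regular grid on the probability simplex (i.e. a quantized belief space). The `resolution` parameter is illustrative; the point is that the number of grid points grows combinatorially with the number of states, which is one face of the curse of dimensionality discussed above.

```python
import itertools
import numpy as np

def quantize_simplex(n_states, resolution):
    """Enumerate a regular grid on the probability simplex.

    Each grid point has coordinates that are integer multiples of
    1/resolution and sum to one.
    """
    points = []
    for c in itertools.product(range(resolution + 1), repeat=n_states):
        if sum(c) == resolution:
            points.append(np.array(c) / resolution)
    return points

grid = quantize_simplex(3, 4)   # 3 states, resolution 1/4
```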
In this section, we review the Fisher information measure  and determine its exact form for our system model. We also comment on its structure and individual characteristics.
The Fisher information  constitutes a well–known information measure, which tries to capture the amount of information that an observable random variable contains about an unknown parameter . It is related to the concept of efficiency in estimation theory since it provides a lower bound for the variance of estimators of a parameter, known as the Cramér–Rao lower bound (CRLB) . To formally define the Fisher information, we begin with the following definition.
Under suitable regularity conditions (i.e. differentiation with respect to and integration with respect to can be interchanged), it can be shown that the first moment of the score is zero
Next, we give the formal definition of Fisher information.
We underscore that since the expectation of the score is zero, the associated term has been dropped in Definition ?. Furthermore, we observe that the Fisher information characterizes the relative rate at which the pdf changes with respect to the unknown parameter . In other words, the greater the expected change is at a given parameter value, the easier it is to distinguish this value from neighboring values, and hence, the better the estimation performance we can achieve.
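As a concrete illustration of the classical definition, for a scalar Gaussian observation with unknown mean the score is (y − μ)/σ², its expectation is zero, and the Fisher information equals its variance, 1/σ². A quick Monte Carlo check:

```python
import numpy as np

# For y ~ N(mu, sigma^2), the score is d/dmu log p(y) = (y - mu) / sigma^2,
# and the Fisher information is its variance: I(mu) = 1 / sigma^2.
rng = np.random.default_rng(1)
mu, sigma = 0.0, 2.0
y = rng.normal(mu, sigma, size=200_000)
score = (y - mu) / sigma**2
fisher_mc = score.var()     # Monte Carlo estimate of I(mu); should be near 0.25
```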
4.2 Discrete Fisher Information
We consider the dynamical system model in Section 2, where the unknown system state is observed through a noisy measurement vector that is shaped by a control input . Since the system state corresponds to a discrete–time, finite–state Markov chain with states, we adopt hereafter the scalar notation , where now .
In our formulation, there are three key components: 1) the system state , which corresponds to the unknown parameter of interest, 2) the measurement vector that refers to the observed random variable, and 3) the control input . Therefore, we need to ensure that these components are taken into consideration during the derivation of the Fisher information measure. First, we observe that the discrete nature of the system state prevents the direct application of Definitions ? and ?. To overcome this issue, we define the following generalized score function
where the dependence on has been stated explicitly and denotes a “test point”. The role of the latter is to avoid the need for differentiability imposed by Definition ?, while capturing any changes of the parameter values and enabling the computation of a generalized Fisher information measure. For completeness, we recall that the density of a multivariate Gaussian random vector is given by
where is the mean vector and is the covariance matrix.
The expected value of the generalized score function in (Equation 5) has the following form
where we have exploited the following property for 
At this point, we define the following generalized Fisher information measure
Note that the above expressions can be further simplified in our case, since . After some manipulations, Eq. (Equation 7) becomes
We notice that, as expected, the resulting measure constitutes a complicated function of the statistics of the underlying multivariate Gaussian model. At the same time, these statistics are driven by the selected control input .
As already discussed, the generalized Fisher information measure in (Equation 7) avoids the need for differentiability of the associated likelihood function by using test points. In general, these test points are selected so that the resulting parameter space is covered, yet ensuring that invalid parameter values are ignored. For our problem, this implies that test points should be selected to account for the discrete nature of the parameter space, i.e. test points should be state–dependent: . For example, if we assume , the valid test point values for state are and .
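The paper’s generalized measure is not reproduced above, but its key idea of replacing derivatives with test points can be sketched with a Chapman–Robbins-style information term, which requires only likelihood ratios between a parameter value and a test point. The scalar Gaussian models and all parameter values below are illustrative stand-ins, not the paper’s exact expression.

```python
import numpy as np

def generalized_fisher_mc(sample0, pdf0, pdf1, n=200_000, seed=2):
    """Monte Carlo estimate of a Chapman-Robbins-style information term
    between a parameter value (model 0) and a test point (model 1):
        J = E_0[ (pdf1(y)/pdf0(y) - 1)^2 ].
    No differentiability of the likelihood in the parameter is needed.
    """
    rng = np.random.default_rng(seed)
    y = sample0(rng, n)
    ratio = pdf1(y) / pdf0(y)
    return float(np.mean((ratio - 1.0) ** 2))

# Two scalar Gaussian observation models (illustrative "state" and "test point").
m0, m1, var = 0.0, 0.5, 1.0
gauss = lambda m: (lambda y: np.exp(-(y - m) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var))
J = generalized_fisher_mc(lambda rng, n: rng.normal(m0, np.sqrt(var), n),
                          gauss(m0), gauss(m1))
# Closed form for this Gaussian pair: exp((m1 - m0)^2 / var) - 1 ≈ 0.284,
# so the Monte Carlo estimate should land close to that value.
```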
5 Greedy Fisher Information Sensor Selection
In this section, we propose a myopic sensor selection algorithm that exploits the generalized Fisher information measure (Equation 9) and discuss its implementation.
As already discussed, Fisher information captures the amount of information that an observable random variable carries about an unknown parameter. Ideally, we would like to maximize this information so that we can infer with certainty the value of the unknown parameter of interest. In our formulation, we also have an extra degree of freedom, the control input, which we can exploit to maximize the amount of information acquired about the unknown parameter. To this end, we propose the following myopic sensor selection strategy that maximizes the generalized Fisher information measure (Equation 9) at each time step
where . We underscore that the Fisher information in (Equation 9) is maximized with respect to all possible test point values at each time step to ensure that the tightest Fisher information is computed.
Examining Eq. (Equation 10), we notice that the function depends on both the system state and the control input . However, the former is unknown; in fact, it is precisely what we wish to infer. To overcome this impediment, we instead use an estimate of the system state, i.e.
where is computed through our Kalman–like filter recursions of Theorem ?. Our proposed strategy, which we refer to as Greedy Fisher Information Sensor Selection (GFIS), is shown in Algorithm ?. Note that the sensor selection part is intertwined with the Kalman–like filter recursions. In particular, at each time step, GFIS determines the predicted belief state, which it then uses to determine the appropriate control input via (Equation 10).
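The selection step can be sketched compactly, assuming the per-(state, control) Fisher values have been precomputed and already maximized over test points. Using the MAP state under the predicted belief as the point estimate is an illustrative choice here; the paper plugs in the estimate produced by its Kalman–like filter.

```python
import numpy as np

def gfis_select(fisher_table, predicted_belief):
    """Greedy control choice: plug a point estimate of the state into the
    precomputed Fisher table and pick the control with the largest value.

    fisher_table: dict mapping control u -> array of Fisher values per state
                  (hypothetical precomputed values, maximized over test points)
    """
    s_hat = int(np.argmax(predicted_belief))   # point estimate of the state
    scores = {u: tab[s_hat] for u, tab in fisher_table.items()}
    return max(scores, key=scores.get)

table = {0: np.array([0.1, 0.9]), 1: np.array([0.8, 0.2])}
u = gfis_select(table, np.array([0.7, 0.3]))   # MAP state 0 -> control 1
```

Because the table is precomputed, the per-step cost reduces to a point-estimate lookup and a maximization over the finite control set, consistent with the look–up table implementation discussed next.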
The proposed algorithm presents several benefits. The most important is its myopic structure, i.e. no expected future cost needs to be computed. As a result, the proposed algorithm incurs much lower computational complexity than DP. At the same time, the computation of the values of the function along with the optimization step (Equation 10) can be completed off–line. Consequently, the proposed strategy can be implemented as a look–up table, suggesting a very efficient implementation. The associated complexity is , where is the complexity of computing for a pair , versus for DP, where is the number of predicted belief states.
In this section, we provide numerical simulations to evaluate the GFIS algorithm developed in Section 5. We also compare its performance with the DP algorithm of Section 3. Both algorithms are applied to a physical activity state tracking example also considered in .
We consider a Wireless Body Area Network (WBAN) that consists of three heterogeneous sensors (two accelerometers (ACCs) and an electrocardiogram (ECG)).
where denotes sensor . For a particular sensor , the mean vector is of size and the covariance matrix is defined as , where is a Toeplitz matrix with first row/column , is the identity matrix, is the model parameter and accounts for sensing and communication noise.
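One possible instantiation of this covariance model is sketched below. Taking powers of a correlation coefficient for the Toeplitz first row/column, along with all parameter values, is an assumption for illustration; the paper’s exact first row/column is not reproduced above.

```python
import numpy as np

def sensor_covariance(n, sigma2, rho, sigma_n2):
    """Covariance of the form: scaled Toeplitz correlation matrix plus white
    sensing/communication noise. The Toeplitz entries rho**|i-j| are a
    hypothetical choice of first row/column."""
    idx = np.arange(n)
    T = rho ** np.abs(idx[:, None] - idx[None, :])   # Toeplitz correlation
    return sigma2 * T + sigma_n2 * np.eye(n)

C = sensor_covariance(4, sigma2=1.0, rho=0.5, sigma_n2=0.1)
```

For |rho| < 1 the Toeplitz part is positive definite, so adding the noise term keeps the covariance safely invertible, which matters when evaluating the Gaussian likelihoods.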
Figure 1 shows the underlying distributions for the four physical activity states for a single participant for each of the three sensors. In the simulations, we have adopted , but our methods are directly applicable to larger values of . The Markov chain transition probability matrix used is .
Fig. ? depicts the true system state sequence and the tracking performance of DP and GFIS. We note that both algorithms track the individual’s time–varying activity state very well despite the small number of samples used. We also observe that GFIS often confuses the Walk and Run states, which is due to how close the associated signal models are in conjunction with the Markov chain transition probability values. However, its greedy nature helps it detect the Stand state better than DP, which optimizes the average cost and filters out states with low stationary probability. Table ? shows that the performance loss due to the adoption of GFIS is small. Meanwhile, the associated reduction in complexity is significant, making the proposed algorithm attractive for controlled sensing applications. Better MSE/detection accuracy may be achievable by considering a Bayesian version  of the Fisher information measure and/or extensions to the dynamical case . However, this may significantly increase the related complexity.
Finally, Fig. ? illustrates the average number of samples per sensor and per state selected by DP and GFIS. We notice that both algorithms request no samples from the ECG, as expected, since according to Figure 1 the associated distributions are highly overlapping. On the other hand, both algorithms request a combination of samples from the two accelerometers, where the exact number depends on the underlying physical activity state and the adopted algorithm. An interesting observation is that GFIS tends to select, on average, more samples from the second accelerometer, in contrast to DP, which requests on average the same number of samples from both ACCs. For the state, however, the situation is reversed, i.e. GFIS selects on average more samples from ACC 1 than from ACC 2, while DP requests samples only from ACC 2.
In this paper, we considered the controlled sensing problem in the case of discrete–time, finite–state Markov chains with controlled Gaussian measurement vectors. Although the optimal solution is computationally intensive, we were able to design a suboptimal, lower complexity algorithm based on the Fisher information measure. To this end, we generalized the Fisher information measure to account for multi–valued discrete parameters and control inputs. Numerical simulations using real data from a physical activity state tracking application indicated the near–optimality of the proposed algorithm. Future work will focus on determining theoretical performance guarantees for the proposed algorithm and considering sensor usage costs.
- We quantize the predicted belief space with resolution .
- ACC 1 is an internal phone sensor, whereas ACC 2 is a standalone sensor and they are positioned in different parts of the body.
- G. K. Atia, V. V. Veeravalli, and J. A. Fuemmeler, “Sensor Scheduling for Energy-Efficient Target Tracking in Sensor Networks,” IEEE Trans. Signal Process., vol. 59, no. 10, pp. 4923–4937, Oct. 2011.
- D.-S. Zois, M. Levorato, and U. Mitra, “Energy–Efficient, Heterogeneous Sensor Selection for Physical Activity Detection in Wireless Body Area Networks,” IEEE Trans. Signal Process., vol. 61, no. 7, pp. 1581–1594, Apr. 2013.
- J. Unnikrishnan and V. V. Veeravalli, “Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio,” IEEE Trans. Signal Process., vol. 58, no. 2, pp. 750–760, Feb. 2010.
- R. Rangarajan, R. Raich, and A. O. Hero, “Optimal Sequential Energy Allocation for Inverse Problems,” IEEE Journal of Selected Topics in Signal Process., vol. 1, no. 1, pp. 67–78, June 2007.
- D.-S. Zois, M. Levorato, and U. Mitra, “Kalman-like state tracking and control in POMDPs with applications to body sensing networks,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Process. (ICASSP), May 2013.
- R. A. Fisher, “On the Mathematical Foundations of Theoretical Statistics,” Philosophical Transactions of the Royal Society of London. Series A, vol. 222, no. 594–604, pp. 309–368, 1922.
- C. Gourieroux and A. Monfort, Statistics and Econometric Models: Volume 1, General Concepts, Estimation, Prediction and Algorithms. Cambridge University Press, 1995.
- S. Nitinawarat, G. K. Atia, and V. V. Veeravalli, “Controlled Sensing for Multihypothesis Testing,” IEEE Trans. on Automatic Control, vol. 58, no. 10, pp. 2451–2464, Oct. 2013.
- M. Naghshvar and T. Javidi, “Active Sequential Hypothesis Testing,” The Annals of Statistics, vol. 41, no. 6, pp. 2703–2738, Dec. 2013.
- V. Krishnamurthy and D. Djonin, “Structured Threshold Policies for Dynamic Sensor Scheduling – a Partially Observed Markov Decision Process Approach,” IEEE Trans. Signal Process., vol. 55, no. 10, pp. 4938–4957, Oct. 2007.
- E. Masazade, R. Niu, and P. K. Varshney, “An approximate dynamic programming based non-myopic sensor selection method for target tracking,” in 46th Annual Conf. on Information Sciences and Systems (CISS), Mar. 2012.
- L. Scardovi, “Information based Control for State and Parameter Estimation,” Ph.D. dissertation, University of Genoa, Genoa, Italy, 2005.
- A. O. Hero and D. Cochran, “Sensor Management: Past, Present, and Future,” IEEE Sensors Journal, vol. 11, no. 12, pp. 3064–3075, Dec. 2011.
- D. P. Bertsekas, Dynamic Programming and Optimal Control. Athena Scientific, 2005, vol. 1.
- H. L. Van Trees and K. L. Bell, Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking. IEEE Press, Wiley-Interscience, 2007.
- M. J. Schervish, Theory of Statistics. Springer, 1995.
- K. B. Petersen and M. S. Pedersen, “The Matrix Cookbook,” Nov. 2008.