Minimum-Energy Distributed Filtering

Abstract

The paper addresses the problem of distributed filtering with guaranteed convergence properties using minimum-energy and $H_\infty$ filtering methodologies. A linear state space plant model is considered, observed by a network of communicating sensors in which individual sensor measurements may lead to an unobservable filtering problem. However, each filter locally shares its estimates, which are subject to disturbances, with its neighboring filters to produce an estimate of the plant state. The minimum-energy strategy of the proposed local filter leads to a locally optimal time-varying filter gain that shapes both the transient and the asymptotic convergence of the estimation error, with guaranteed performance. The filters are implementable using only the local measurements and disturbance-corrupted information from the neighboring filters. A key idea of the proposed algorithm is to locally approximate the neighboring estimates, which are not directly accessible, by treating them as disturbance-contaminated versions of the plant state. The proposed algorithm imposes a minimal communication load on the network and is scalable to larger sensor networks.


1 Introduction

There is considerable interest in the literature in multi-agent systems that are capable of performing control and filtering tasks in a cooperative manner. Applications range from military aerial fleets and monitoring and maintenance agents in industrial plants to biological systems. In this paper, in particular, we are concerned with distributed filtering using a network of filters that estimate the state of a plant from disturbance-contaminated local measurements. An interesting case is when these filters individually have difficulty providing an accurate and complete estimate of the plant state, a problem that is resolved by using information from the neighbouring filters.

Kalman filtering is the focus of many of the existing methods proposed for distributed filtering. An early result on this subject by Durrant-Whyte et al. [1, 2] provides an exact decentralized formulation of the multi-sensor version of the Kalman filter. This formulation avoids the requirement of a central processing or communication unit to which each sensor has to communicate its information for calculating the state estimate of the plant. The decentralized scheme is robust to sensor failure and network changes, reduces the communication load and allows for faster information processing. The decentralized Kalman filter proposed in [1, 2], however, requires all-to-all communication of state error and variance error information between the sensors; this impairs its scalability to larger networks. Olfati-Saber [3] proposed a distributed Kalman filter algorithm that involves two additional consensus filters, which are needed for fusing the sensor and covariance information required by the underlying decentralized Kalman filtering scheme. The term ‘distributed’ refers to the fact that all-to-all information sharing is avoided; instead, each agent only shares information with its local neighbors. The distributed consensus algorithms proposed in [3] assume disturbance-free data sharing between the sensors, which is arguably an unrealistic assumption in applications involving imperfect communication.

More recently, observer designs based on linear matrix inequalities (LMIs) have been employed to tackle the imperfect communication problem in multi-agent distributed estimation [4, 5]. Subbotin and Smith [5] proposed a convex optimization problem in which the local observer gains are selected by minimizing the estimation error covariance of the global network. Ugrinovskii [4] designed the observer gains by minimizing an $H_\infty$ consensus performance cost that guarantees the associated disturbance attenuation criterion as well as convergence of the filter dynamics at the nodes. Both works target the asymptotic performance of their respective estimation algorithms by using constant gains. This is in contrast to filtering algorithms that use time-varying gains calculated to adjust both the transient and the asymptotic performance [6].

In this paper a minimum-energy distributed filtering algorithm is proposed that addresses the imperfect communication between the nodes by locally minimizing a quadratic energy cost comprising the initialization, measurement and communication errors of the local filter. A novel contribution of this paper, which leads to the distributed nature of the local filters, lies in augmenting the minimum-energy cost functional with a cost on approximating the neighboring estimates. Furthermore, a penalty is included in the local costs that provides a guaranteed $H_\infty$ disturbance attenuation property over the network of filters.

We provide a condition on the network parameters that is sufficient to ensure convergence of the proposed filters. The condition is quite general, and captures the types of convergence considered in [4, 5] as special cases. Moreover, we offer a tuning algorithm based on LMIs that facilitates the implementation of the proposed filters with guaranteed performance and convergence. A simulation example involving the Chua electronic circuit [7] demonstrates the effectiveness of the proposed algorithm in the case of a chaotic plant and a network of five filters that estimate its state.

The remainder of the paper is organised as follows. Section 2 reviews the concept of minimum-energy filtering. In Section 3 we extend the minimum-energy filtering paradigm to allow for a distributed filtering formulation as well as to impose the filtering performance criterion. The distributed minimum-energy filter is provided in that section. Convergence of the proposed filter is studied in Section 4, and tuning of the filter is discussed in Section 5. Section 6 illustrates the design procedure and performance of the proposed filter via simulation, and Section 7 concludes the paper.

2 Minimum-Energy Filtering

In this section we review the concept of minimum-energy filtering, which was pioneered by Mortensen [8] and further elaborated by Hijab [9]. Later, we use this method to obtain a distributed filtering algorithm for estimating the state of a linear system using a network of filters. Consider the linear system

\dot{x}(t) = A\,x(t) + B\,w(t), \qquad x(0) = x_0,   (1)

where the signals $x(t)$ and $w(t)$ are, respectively, the state and the unknown modeling disturbance; the latter is assumed to be square integrable on $[0, \infty)$. The matrices $A$ and $B$ are the known state matrix and input disturbance coefficient matrix, respectively. Note that the initial state $x_0$ is assumed to be unknown.

Further consider the following measurement model

y(t) = C\,x(t) + D\,v(t),   (2)

where the signals $y(t)$ and $v(t)$ are the measured data and the unknown measurement disturbance, which is also assumed to be square integrable on $[0, \infty)$. The matrices $C$ and $D$ are the measurement matrix and the measurement disturbance coefficient matrix, respectively, and are assumed to be known from the model. Denote $Q \triangleq BB^\top$ and $N \triangleq DD^\top$, and assume that $N > 0$.

Associated with these equations is the energy functional measuring the aggregated energy associated with the unknowns $(x_0, w, v)$,

J(x_0, w, v; t) = \frac{1}{2}\,\|x_0 - \bar{x}_0\|_{X_0^{-1}}^{2} + \frac{1}{2}\int_{0}^{t} \big( \|w(s)\|^{2} + \|v(s)\|^{2} \big)\, ds,   (3)

where $X_0 = X_0^\top > 0$; the matrix $X_0^{-1}$ is a weighting on the difference between the unknown initial state $x_0$ and its nominated a priori estimate $\bar{x}_0$.

Denote by $\mathcal{Y}_t$ the data obtained according to (2) during the time interval $[0, t]$. Given the measurement data $\mathcal{Y}_t$, minimizing the cost (3) with respect to $x_0$ and $w$, subject to equations (1) and (2), leads to an optimal state trajectory $x^*(s)$, $s \in [0, t]$. This is the ‘most likely trajectory’ [8], or the minimum-energy trajectory, that is compatible with the data $\mathcal{Y}_t$. The end point of this trajectory constitutes the minimum-energy estimate of the state $x(t)$, given the measurement data $\mathcal{Y}_t$,

\hat{x}(t) \triangleq x^*(t).   (4)

It is desirable to obtain the estimates continuously as time evolves in $[0, \infty)$. Note that in general, for $s < t$, the optimal trajectory $x^*_{[0,s]}$ computed over $[0, s]$ differs from the restriction to $[0, s]$ of the optimal trajectory $x^*_{[0,t]}$ computed over $[0, t]$,

x^*_{[0,s]}(\tau) \neq x^*_{[0,t]}(\tau), \qquad \tau \in [0, s].   (5)

Therefore, it is not sufficient to minimize the cost (3) only once; an infinite sequence of minimizations is required if all the estimated values in this period are needed. The calculus of variations can be utilized to solve this problem and obtain a recursive filtering algorithm similar to those arising in optimal control [10]. The idea is to solve the following two-step optimization problem

\min_{x} \; \Big[ \min_{x_0,\, w \,:\, x(t) = x} J(x_0, w, v; t) \Big].   (6)

The inner minimization problem is solved as an optimal tracking problem, in which the system (1) is to track the given output signal $y(s)$, $s \in [0, t]$, and $w$ is treated as a control input signal. For this step both $t$ and the end state $x(t) = x$ are considered to be fixed but constrained by (1) for $s \in [0, t]$. The associated value function of this inner tracking problem, encoding the minimum energy required to take the system from the initial state $x_0$ to the state $x$ over the time period $[0, t]$, is

V(x, t) = \min_{x_0,\, w \,:\, x(t) = x} J(x_0, w, v; t),   (7)

which from (3) has the initial condition

V(x, 0) = \frac{1}{2}\,\|x - \bar{x}_0\|_{X_0^{-1}}^{2}.   (8)

Note that in (7), the variables $x$ and $t$ are considered as independent variables. That is, by selecting a time $t$ and an end state $x$, the value function (7) provides the optimal value arising from minimizing the cost (3) over $[0, t]$.

The final step is to perform the outer minimization, which is equivalent to minimizing the value function over $x$. This leads to the optimal trajectory $x^*$ and the minimum-energy estimate chosen as its final value, $\hat{x}(t) = x^*(t)$. Provided the value function is sufficiently smooth, this can be achieved by solving

\nabla_x V(x, t)\big|_{x = \hat{x}(t)} = 0.   (9)

Mortensen [8] proposed solving the next equation, obtained by differentiating (9) with respect to time, which leads to a recursive set of equations for updating $\hat{x}(t)$:

\frac{d}{dt}\Big( \nabla_x V(x, t)\big|_{x = \hat{x}(t)} \Big) = 0.   (10)

The resulting filter turns out to be in the exact form of the Kalman-Bucy filter [6]. While the resulting filter is well known in the literature, the minimum-energy methodology is less known. Nevertheless, minimum-energy filtering is a systematic recursive method that can be applied in many settings, including the distributed filtering problem considered in the rest of this paper.
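As a concrete numerical illustration (not part of the original development), the Kalman-Bucy-type filter that results from the above derivation can be sketched with forward-Euler integration. Unit weightings on the disturbances and the matrix names of the model (1)-(2) are assumed:

```python
import numpy as np

def minimum_energy_filter(A, B, C, D, x0_hat, P0, ys, dt):
    """Forward-Euler sketch of the minimum-energy (Kalman-Bucy) filter:
    the state estimate is driven by the measurement innovation through a
    time-varying gain obtained from a differential Riccati equation."""
    N_inv = np.linalg.inv(D @ D.T)           # measurement-error weighting
    x_hat, P = x0_hat.astype(float).copy(), P0.astype(float).copy()
    history = [x_hat.copy()]
    for y in ys:
        K = P @ C.T @ N_inv                  # time-varying filter gain
        x_hat = x_hat + dt * (A @ x_hat + K @ (y - C @ x_hat))
        P = P + dt * (A @ P + P @ A.T + B @ B.T - K @ C @ P)
        history.append(x_hat.copy())
    return np.array(history)
```

Feeding this sketch a measurement record of a stable plant with a wrong initial guess drives the estimation error toward zero, mirroring the convergence discussion of the later sections.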

3 Minimum-Energy Distributed Filtering

Consider a network of filters with a directed graph topology $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ and $\mathcal{E}$ are the set of vertices and the set of edges, respectively. An edge directing node $j$ of the graph towards node $i$, where $i \neq j$, is denoted by $(j, i)$. In accordance with the common convention [3], the graph is assumed to have no self-loops, i.e., $(i, i) \notin \mathcal{E}$. The neighborhood of node $i$, i.e., the set of nodes which send information to node $i$, is denoted by $\mathcal{N}_i$.
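For illustration, the neighborhood sets just defined can be computed from a directed edge list as follows; the integer node labels and the example edge set are hypothetical:

```python
def neighborhoods(num_nodes, edges):
    """Return, for each node i, the set N_i of nodes j that send
    information to i, i.e. all j with a directed edge (j, i);
    self-loops are excluded, per the convention above."""
    nbrs = {i: set() for i in range(num_nodes)}
    for (j, i) in edges:
        if j != i:  # the graph has no self-loops
            nbrs[i].add(j)
    return nbrs

print(neighborhoods(3, [(0, 1), (1, 2), (2, 0), (0, 2)]))
# → {0: {2}, 1: {0}, 2: {0, 1}}
```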

In this section we again consider the plant model (1). As in Section 2, the initial state $x_0$ and the square integrable modeling error $w$ are unknown. Our goal is to construct a network of filters that can each estimate the plant state using measurements of the following type,

y_i(t) = C_i\,x(t) + D_i\,v_i(t),   (11)

where $i \in \mathcal{V}$ indicates the $i$th node of the network. Each measurement $y_i$ is obtained using the associated observation matrix $C_i$, known to node $i$ from the model. The signal $v_i$ is an unknown measurement error assumed to be square integrable on $[0, \infty)$, with a known coefficient matrix $D_i$ such that the matrix $D_i D_i^\top$ is positive definite.

In addition, filter $i$ obtains information from agents in its neighbourhood $\mathcal{N}_i$, where $|\mathcal{N}_i|$ denotes the cardinality of the set $\mathcal{N}_i$. The signals obtained at node $i$ are

c_{ij}(t) = W_{ij}\,\hat{x}_j(t) + G_{ij}\,w_{ij}(t), \qquad j \in \mathcal{N}_i,   (12)

where $\hat{x}_j$ is the estimate of the state $x$ computed at node $j$. Similar to the measurements $y_i$, the signals $c_{ij}$ are obtained using the weighted connectivity matrices $W_{ij}$, which are assumed to be known to node $i$, while the estimates $\hat{x}_j$ themselves are not known. The signals $w_{ij}$ are unknown communication errors of class $L_2$, with known coefficient matrices $G_{ij}$ such that $G_{ij} G_{ij}^\top$ are positive definite.

Remark 1

An interesting situation arises when some or all of the pairs $(A, C_i)$ are not detectable, and hence the state cannot be estimated at the corresponding filter nodes from $y_i$ alone. Forming a large-scale distributed observer network, by enabling the filters to obtain information about the state estimates of their neighboring filters, allows these nodes to overcome the lack of detectability and provide a reliable state estimate; e.g., see [11] where this issue is discussed in detail. It should be noted that the distributed filtering problem under consideration is different from a centralized filtering scheme where all the measurements are sent to a central processing unit to obtain an estimate of the plant’s state. In a centralized scheme, the computational complexity at the central node, the communication load on the network and scalability are the common issues that distributed filtering aims to address.

In many existing methodologies, cf. [4, 7, 5], treating communications between the nodes as extra measurements results in observer design conditions that require solving large-scale coupled matrix equations or matrix inequalities. In contrast, we invoke the following model, which will be instrumental in obtaining decoupled filter gain conditions. Since the estimates $\hat{x}_j$ are meant to represent the state vector with high fidelity, we use the following model at node $i$,

\hat{x}_j(t) = x(t) + \varepsilon_{ij}(t), \qquad j \in \mathcal{N}_i,   (13)

where the signals $\varepsilon_{ij}$ denote the unknown error signals of this assignment. Note that this model is only proposed for node $i$; node $j$ obviously computes $\hat{x}_j$ using its corresponding filter.

Next, we pose a minimum-energy filtering problem to obtain a distributed filter at each node $i$ such that a cost functional, depending on the unknown initial state of the plant and the unknown errors in the measurements and neighbouring information associated with filter $i$, is minimized. This cost functional is in fact the sum of the aggregated energies in the unknowns of the model, the measurements and the neighbouring information associated with filter $i$. Note that the distributed estimation paradigm restricts us to utilizing only the local measurements $y_i$ and the local information $c_{ij}$ from the neighboring nodes in this optimization process. It is important to stress the role of the unknown signals $\varepsilon_{ij}$, which are also to be optimized over in this process.

The idea behind minimizing the cost functional over $\varepsilon_{ij}$, $j \in \mathcal{N}_i$, amounts to agent $i$ relying on the fact that its neighboring agents are also minimum-energy type estimators of the plant state, and hence their estimates can be thought of as estimates of the plant state contaminated with errors that are based on minimum-energy filtering too. This is a key idea among the results of this work that facilitates obtaining a distributed solution to our problem. The concept of minimum-energy filtering was explained in detail in Section 2.

The energy cost, similar to (3), associated with the system (1), the information available to node $i$ according to (11) and (12), and the approximation model (13), is defined as

(14)

where the signal is an a priori candidate for the initial state at node and the matrix is an a priori known weighting for the initialization error . The weighting matrices , , quantify the ‘confidence’ of node in the quality of its neighbor estimates and are selected locally by node .

Given the measurement data and the communication data , the signals and are identified as dependent variables and the energy cost functional (14) has arguments , and . Next, based on (14) define the following cost functional for node

(15)

where the weighting matrix is a given coefficient; its selection will be discussed in Section 5. The inclusion of the error term with a negative sign enforces a guaranteed $H_\infty$-type performance at node $i$ while a minimum-energy estimate is sought [12].

The principle of minimum energy requires that a set of the unknowns be sought at each node $i$ that is compatible with the measurements and the neighbourhood information in satisfying (11), (12) and (13), while minimizing the cost (15),

(16)

Using the minimum-energy filtering approach explained in Section 2, the following filter is obtained

(17)

The matrix is defined as . The positive definite time-varying gain matrix is calculated from the following Riccati equation

(18)

The matrix is defined as where was given in (1).

An advantage of equation (18) is that it does not depend on any measurements, neighbour information or other online acquired data, and hence can be fully solved offline. It can also be solved online, forward in time, so that the computed gain values can be used to dynamically run the observer (17). Also, the filter equation (17) has the standard form of a distributed filter that produces state estimates based on local measurements and information obtained about the neighbours. A noteworthy feature of the proposed filter is that its gains depend only on the information available at node $i$. The only matrix that must be computed to obtain the gains is computed locally, without interacting with the neighbours, provided the weighting matrices are defined a priori. This feature of the proposed minimum-energy estimator sets it apart from other distributed estimation algorithms such as those proposed in [4, 5].
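As a structural illustration only — the exact gain expressions of (17)-(18) are not reproduced here, and all weightings are simplified to identity — a single integration step of a node filter of the kind just described can be sketched as a copy of the plant dynamics corrected by a local-measurement innovation plus one innovation per neighbor signal:

```python
import numpy as np

def node_filter_step(A, P_i, C_i, y_i, nbr_data, x_hat_i, dt):
    """One forward-Euler step of a generic distributed filter node.
    Illustrative sketch: innovation from the local measurement plus one
    innovation per neighbor pair (W_ij, c_ij), all routed through the
    locally computed gain matrix P_i."""
    innov = C_i.T @ (y_i - C_i @ x_hat_i)        # local measurement term
    for W_ij, c_ij in nbr_data:                  # neighbor information terms
        innov = innov + W_ij.T @ (c_ij - W_ij @ x_hat_i)
    return x_hat_i + dt * (A @ x_hat_i + P_i @ innov)
```

With perfect (disturbance-free) data, repeated steps contract the estimation error even when the local pair alone is undetectable, since the neighbor terms supply the missing directions.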

4 Convergence Analysis

In this section we provide sufficient conditions to guarantee that the interconnected network of filters designed according to equations (17) and (18) converges to the true state of the plant as $t \to \infty$. Due to the presence of uncertainty, the convergence is understood in the $L_2$ sense. Firstly, we show that the error dynamics exhibit the properties of an internally stable system. The notion of internal stability is equivalent to asymptotic convergence of the estimation error of the network in the absence of disturbance signals. Secondly, we show that when the disturbances are square integrable on $[0, \infty)$, the filter ensures disturbance attenuation properties similar to those of an $H_\infty$ filter.

Consider the node error and its dynamics,

(19)

where the dynamics of the matrix are given in (18).

Let us introduce the following notation

(20)

where $\mathrm{diag}\{\cdot\}$ denotes the block diagonal matrix with the indicated matrices as its diagonal blocks, and $\otimes$ is the Kronecker product of matrices. Using this notation, the global gain matrix satisfies the differential Riccati equation

(21)

The following theorem is the main result of this paper, which characterizes performance of the distributed minimum energy filter (17).

Theorem 1

Given a positive semidefinite weighting matrix , suppose a block diagonal matrix is such that

(22)

and each Riccati equation (18) has a positive definite bounded solution on $[0, \infty)$. Then, the filtering algorithm (17), (18) guarantees the satisfaction of the global disturbance attenuation criterion²

(23)

Moreover, in the absence of the disturbances, the global estimation error asymptotically converges to zero as $t \to \infty$.

The proof is omitted due to page restrictions.

Several special cases of Theorem 1 deserve attention.

Minimum energy filter with guaranteed convergence performance

In this special case, the performance objective of interest is to achieve the internal stability of the filtering error dynamics, and enforce an acceptable disturbance attenuation performance of the filter error dynamics. Let , . Then the conditions of Theorem 1 specialize to guarantee the following condition

(24)

Minimum energy filter with guaranteed transient consensus performance

Consider the weighting matrix constructed from $L$ and $\hat{L}$, which are, respectively, the Laplacian matrix of the network graph and that of its transpose, i.e., the graph obtained from the network graph by reversing its edges. This choice results in the left-hand side of (23) being equal to the weighted disagreement cost between the nodes; cf. [4, 7]. Therefore, if this matrix satisfies the conditions of Theorem 1, then this choice adjusts the filter to guarantee the consensus performance,

(25)
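The identity used here — that the disagreement between node estimates is a quadratic form in the sum of the graph Laplacian and the Laplacian of the reversed graph — can be checked numerically. A small sketch, with a hypothetical 3-node graph and hypothetical estimates:

```python
import numpy as np

def laplacian(n, edges):
    """Directed-graph Laplacian L = D_in - Adj (row i collects edges (j, i))."""
    L = np.zeros((n, n))
    for (j, i) in edges:
        L[i, i] += 1.0
        L[i, j] -= 1.0
    return L

def disagreement(x, edges):
    """Sum of ||x_i - x_j||^2 over the directed edges (j, i)."""
    return sum(float(np.sum((x[i] - x[j]) ** 2)) for (j, i) in edges)

edges = [(0, 1), (1, 2), (2, 0), (0, 2)]          # hypothetical graph
x = np.array([[1.0, 0.0], [0.5, 2.0], [-1.0, 1.0]])  # node estimates
L = laplacian(3, edges)
L_hat = laplacian(3, [(i, j) for (j, i) in edges])   # reversed-edge graph
xg = x.reshape(-1)                                # stacked global vector
quad = xg @ np.kron(L + L_hat, np.eye(2)) @ xg    # quadratic form
print(abs(quad - disagreement(x, edges)))         # ≈ 0
```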

5 Network Design and Tuning

Theorem 1 provides a sufficient condition guaranteeing the convergence, in the $L_2$ sense, of the network of filters consisting of the estimators (17), (18). To apply this condition, the weighting matrices need to be selected to satisfy (22). The following results are instrumental in designing these matrices. First, we recall a connection between the differential Riccati equation (18) and a corresponding algebraic Riccati equation (ARE) [13],

(26)
Proposition 1

If the ARE (26) has a positive definite stabilizing solution, then equation (18), with a suitable initial condition, has a bounded positive definite solution on $[0, \infty)$, and this solution converges to the stabilizing ARE solution.
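This DRE-to-ARE convergence can be illustrated numerically. The sketch below integrates a filter-type differential Riccati equation with unit noise weightings — a simplified stand-in for (18), not the paper's exact equation — and checks that its long-horizon value satisfies the corresponding ARE:

```python
import numpy as np

def dre_steady_state(A, B, C, P0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the filter-type DRE
        dP/dt = A P + P A^T + B B^T - P C^T C P,
    returning its long-horizon value; by the stated proposition, this
    approaches the stabilizing solution of the matching ARE."""
    P = P0.astype(float).copy()
    for _ in range(steps):
        P = P + dt * (A @ P + P @ A.T + B @ B.T - P @ C.T @ C @ P)
    return P
```

A steady state of this iteration is exactly a root of the ARE residual, since the Euler update is a fixed-point map for the right-hand side.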

Next, we use the above connection between the ARE (26) and the differential Riccati equation (18) to derive a tractable sufficient condition for the conditions of Theorem 1 to hold.

Theorem 2

Suppose is stabilizable, and and are the matrices defined in Theorem 1. The linear matrix inequality (LMI) conditions

(27)

guarantee the satisfaction of the conditions of Theorem 1 for any sufficiently large .

In light of Theorem 2, solving the LMI (27) in its matrix variables can be regarded as a tuning process for the proposed filtering algorithm (17), (18). Tuning can be carried out, e.g., using MATLAB. It can be performed assuming the weighting matrix is given, to obtain the matrices to be used in equation (18). According to Theorem 1, this tuning process guarantees the $H_\infty$ disturbance attenuation and the internal stability of the error dynamics for the entire networked filter consisting of the node filters (17), (18). Alternatively, the weighting matrices can be selected a priori. The filter gain equations (18) are completely decoupled in this case, and Theorem 2 can be used to compute the matrix that characterizes the performance of the corresponding distributed filter (17).

6 Illustrative Example

In this section, a simulated network of five sensor nodes is considered whose task is to estimate the state of a three-dimensional plant. The plant’s state matrix is given by

(28)

This corresponds to a Chua electronic circuit which was considered in [7]. A Chua circuit is a chaotic system switching between three regimes of operation. Here, we focus on one of the regimes and consider the second mode of the Chua example that was considered in [7]. The matrices considered are and . The network considered consists of five nodes with connectivity edges . Following [7], all communication matrices associated with the sensor nodes and are taken to be if the pair belongs to the set of edges , and otherwise .

We have considered the remainder of the coefficient matrices to be , and . The measure of confidence in the neighbouring estimates is set to be . The initial values for the Riccati differential equations (18) are chosen to be and the matrix is chosen by solving (27) with the disturbance attenuation weighting matrix (to ensure (27) holds strictly, in fact we solved (27) with ). According to Theorem 2, this guarantees the satisfaction of the conditions of Theorem 1. Furthermore, with this choice of matrix , the proposed minimum energy filter guarantees that estimation error dynamics are internally stable and exhibit the performance expressed in condition (25).

To illustrate the performance of the filters, they are simulated using MATLAB. The system disturbances are modeled as pulse signals of appropriate dimensions, and the initial state is drawn from a normal distribution. Figure 1 shows the estimation error in the first coordinate of each filter. The plots confirm that, despite the fact that some of the local filters alone would lead to an undetectable estimation problem, the proposed distributed filter successfully utilizes the imperfect communications between the nodes to provide estimates at each node, with estimation errors converging to zero.

Figure 1: The estimation errors of the filters in their first coordinates.

7 Conclusions

In this paper we proposed a distributed filtering algorithm utilizing an $H_\infty$ minimum-energy filtering approach. The algorithm employs a decoupled computation of the individual filter coefficients. This is achieved by considering the estimation errors of neighbouring agents as additional exogenous disturbances, weighted according to the nodes’ confidence in their neighbours’ estimates. The proposed algorithm is shown to provide guaranteed internal stability and a desired $H_\infty$ disturbance attenuation of the network error dynamics. The paper has also discussed tuning of the algorithm, and a simulation example confirms convergence of the proposed algorithm in the case of a Chua circuit with undetectable pairs at some of the nodes.

Footnotes

  1. thanks: This work was supported by the Australian Research Council under Discovery Projects funding scheme (project D0120102152).
  2. In (23), $\|\cdot\|$ denotes the norm in $L_2[0, \infty)$.

References

  1. H. F. Durrant-Whyte, B. Rao, and H. Hu, “Toward a fully decentralized architecture for multi-sensor data fusion,” in Proceedings of the 1990 IEEE International Conference on Robotics and Automation, 1990, pp. 1331–1336.
  2. B. Rao, H. Durrant-Whyte, and J. Sheen, “A fully decentralized multi-sensor system for tracking and surveillance,” The International Journal of Robotics Research, vol. 12, no. 1, pp. 20–44, 1993.
  3. R. Olfati-Saber, “Distributed Kalman filter with embedded consensus filters,” in Proceedings of the 44th IEEE Conference on Decision and Control and the 2005 European Control Conference (CDC-ECC ’05), 2005, pp. 8179–8184.
  4. V. Ugrinovskii, “Distributed robust filtering with $H_\infty$ consensus of estimates,” Automatica, vol. 47, no. 1, pp. 1–13, 2011.
  5. M. V. Subbotin and R. S. Smith, “Design of distributed decentralized estimators for formations with fixed and stochastic communication topologies,” Automatica, vol. 45, no. 11, pp. 2491–2501, 2009.
  6. B. D. O. Anderson and J. B. Moore, Optimal Filtering. Prentice Hall, 1979.
  7. V. Ugrinovskii, “Distributed robust estimation over randomly switching networks using $H_\infty$ consensus,” Automatica, vol. 49, no. 1, pp. 160–168, 2013.
  8. R. E. Mortensen, “Maximum-likelihood recursive nonlinear filtering,” Journal of Optimization Theory and Applications, vol. 2, no. 6, pp. 386–394, 1968.
  9. O. B. Hijab, “Minimum energy estimation,” Ph.D. dissertation, University of California, Berkeley, 1980.
  10. M. Athans and P. Falb, Optimal Control: An Introduction to the Theory and Its Applications. McGraw-Hill, 1966.
  11. V. Ugrinovskii, “Conditions for detectability in distributed consensus-based observer networks,” IEEE Transactions on Automatic Control, vol. 58, pp. 2659–2664, 2013, arXiv:1303.6397 [cs.SY].
  12. W. M. McEneaney, “Robust $H_\infty$ filtering for nonlinear systems,” Systems & Control Letters, vol. 33, no. 5, pp. 315–325, 1998.
  13. T. Başar and P. Bernhard, $H_\infty$-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Birkhäuser, Boston, 1995.