
# Stochastic Sensor Scheduling via Distributed Convex Optimization

###### Abstract

In this paper, we propose a stochastic scheduling strategy for estimating the states of discrete-time linear time-invariant (DTLTI) dynamic systems, where only one system can be observed by the sensor at each time instant due to practical resource constraints. The idea of the stochastic strategy is that a system is randomly selected for observation at each time instant according to a pre-assigned probability distribution. We aim to find the optimal pre-assigned probability in order to minimize the maximal estimate error covariance among the dynamic systems. We first show that under mild conditions, the stochastic scheduling problem gives an upper bound on the performance of the optimal sensor selection problem, which is notoriously difficult to solve. We next relax the stochastic scheduling problem into a tractable suboptimal quasi-convex form. We then show that the new problem can be decomposed into small coupled convex optimization problems that can be solved in a distributed fashion. Finally, for scheduling implementation, we propose centralized and distributed deterministic scheduling strategies based on the optimal stochastic solution and provide simulation examples.

Chong Li (Dept. of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011; currently with Qualcomm Research, Bridgewater, NJ 08807) and Nicola Elia (Dept. of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011)

Key words:  Networked control systems, sensor scheduling, Kalman filter, stochastic scheduling, sensor selection

¹ This research has been partially supported by NSF ECS-0901846 and NSF CCF-1320643. A partial version of this paper has appeared in [1].

## 1 Introduction

In this paper, we consider the problem of scheduling the observations of independent targets in order to minimize the tracking error covariance, when only one target can be observed at a given time. This problem captures many interesting tracking/estimation applications. As a motivational example, consider a set of independent dynamic targets, spatially distributed in an area, that need to be tracked (estimated) by a single (mobile) camera sensor. The camera has limited sensing range, and therefore it needs to zoom in on, or be in proximity of, one of the targets to obtain measurements. Under the assumption that the switching time among the targets is negligible, we need to find a visiting sequence that minimizes the estimate error.
Another case is when a set of mobile surveillance devices needs to track geographically separated targets, where each target is tracked by one assigned surveillance device. However, the sensing/measuring channel can only be used by one estimator at a time (e.g., sonar range-finding [2]). Then, we need to design a scheduling sequence of surveillance devices for accurate tracking.

### 1.1 Related Work and Contributions of This Paper

There has been considerable research effort devoted to the study of sensor selection problems, including sensor scheduling [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16] and sensor coverage [17, 18, 19, 20, 7, 21]. This trend has been inspired by the significance and wide applications of sensor networks. As the literature is vast, we list a few results which are relevant to this paper. The sensor scheduling problem mainly arises from the minimization of two relevant costs: sensor network energy consumption and estimate error. On the one hand, [5], [6] and [9], see also references therein, have proposed various efficient sensor scheduling algorithms to minimize the sensor network energy consumption and consequently maximize the network lifetime. On the other hand, researchers have proposed many tree-search based sensor scheduling algorithms (mostly in conjunction with Kalman filtering) to minimize the estimate error [3], [4], [13], e.g., sliding-window, thresholding, relaxed dynamic programming, etc. By taking both sensor network lifetime and estimate accuracy into account, several tree-search based scheduling algorithms have been proposed in [8], [22]. In [10], the authors have formulated the general sensor selection problem and solved it by relaxing it to a convex optimization problem. The general formulation therein can handle various performance criteria and topology constraints. However, the framework in [10] is only suitable for static systems, instead of the dynamic systems that are mostly considered in the literature.
In general, deterministic optimal sensor selection problems are notoriously difficult. In this paper, we propose a stochastic scheduling strategy. At each time instant, a target is randomly chosen to be measured according to a pre-assigned probability distribution. We find the optimal pre-assigned distribution that minimizes an upper bound on the expected estimate error covariance (in the limit), in order to keep the actual estimate error covariance small. Compared with algorithms in the literature, this strategy has low computational complexity, is simple to implement, and provides a performance guarantee on the general deterministic scheduling problem. Of course, the reduction in computational complexity comes at the expense of some degradation of the ideal performance. However, in many situations the extra computational cost may not be justified. Further, this strategy can easily incorporate extra constraints on the scheduling design, which might be difficult to handle in existing algorithms (e.g., tree-search based algorithms).
Our work is related to [7], [23] and [11]. [7] introduces stochastic scheduling to deal with sensor selection and coverage problems, and [23] extends the setting and results in [7] to a tree topology. Although we also adopt the stochastic scheduling approach, the problem formulation and proposed algorithms of this paper are different from [7, 23]. In particular, we consider different cost functions and design distributed algorithms that provide optimal probability distributions. [11] has considered a scheduling problem in continuous-time and proposed a tractable relaxation, which provides a lower bound on the achievable performance, and an open-loop periodic switching strategy to achieve the bound in the limit of arbitrarily fast switching. However, besides the difference in the formulations, their approach does not appear to be directly extendable to the discrete-time setting. In summary, our main contributions include:

1. We obtain a stochastic scheduling strategy with performance guarantee on the general deterministic scheduling problem by solving distributed optimization problems.

2. For scheduling implementation, we propose both centralized and distributed deterministic scheduling strategies.

### 1.2 Notations and Organization

Throughout the paper, $A'$ is the transpose of matrix $A$. $\mathbf{1}_{m\times n}$ implies an $m\times n$ matrix with $1$ as all its entries. $\mathrm{diag}(v)$ denotes a diagonal matrix with vector $v$ as its diagonal entries. $A \succeq 0$ (or $A \in \mathbb{S}_+$) and $A \succ 0$ (or $A \in \mathbb{S}_{++}$) respectively imply that matrix $A$ is positive semi-definite and positive definite, where $\mathbb{S}_+$ and $\mathbb{S}_{++}$ represent the positive semi-definite and positive definite cones. For a matrix $A$, if the block entry $A_{ij} = B$, we use $*$ in the matrix to represent the symmetric block $B'$ at entry $A_{ji}$.
The paper is organized as follows. In Section 2, we mathematically formulate the stochastic scheduling problem. In Section 3, we develop an approach and a distributed computing algorithm to solve the optimization problem. In Section 4, we present some further results and extensions of our model. In Section 5, we consider the scheduling implementation problem. Finally, we present simulations to support our results.

## 2 Sensor Scheduling Problem Setup

Consider a set of $N$ DTLTI systems (targets) evolving according to the equations

$$x_i[k+1] = A_i x_i[k] + w_i[k], \qquad i = 1, 2, \ldots, N \tag{1}$$

where $x_i[k]$ is the process state vector and $w_i[k]$ is assumed to be an independent Gaussian noise with zero mean and covariance matrix $Q_i$. The initial state $x_i[0]$ is assumed to be an independent Gaussian random variable with zero mean and covariance matrix $\Sigma_i$. In practice, each DTLTI system modeled above may represent the dynamic change of a local environment, the trajectory of a mobile vehicle, the varying states of a manufacturing machine, etc. As a result of the sensor's limited sensing range or the congestion of the sensing channel, at each time instant $k$ only one system can be observed, as

$$\tilde{y}_i[k] = \xi_i[k]\left(C_i x_i[k] + v_i[k]\right) \tag{2}$$

where $\xi_i[k] \in \{0,1\}$ is the indicator function indicating whether or not system $i$ is observed at time instant $k$; accordingly, we have the constraint $\sum_{i=1}^{N}\xi_i[k] = 1$.² $v_i[k]$ is the measurement noise, which is assumed to be independent Gaussian with zero mean and covariance matrix $R_i$.

² If we assume that at most one target is chosen to be measured at each time instant, then we have $\sum_{i=1}^{N}\xi_i[k] \le 1$. Without loss of generality, in this paper we consider the case where one out of the $N$ targets must be chosen at each time instant.

###### Assumption 1

For all $i$, the pair $(A_i, Q_i^{1/2})$ is controllable and the pair $(A_i, C_i)$ is detectable.

Denote by $\hat{x}_i[k]$ the estimate of $x_i[k]$ at time $k$, obtained by a causal estimator for system $i$, which depends on the past and current observations $\{\tilde{y}_i[j]\}_{j=1}^{k}$. We begin by considering the problem of minimizing (in the limit) the maximal estimate error. The problem can be formulated mathematically as

$$\min_{\hat{x}_i,\,\{\xi_i[j]\}_{j=1}^{\infty}}\ \max_i\left(\limsup_{T\to\infty}\frac{1}{T}\sum_{k=1}^{T}E\left[(x_i[k]-\hat{x}_i[k])'(x_i[k]-\hat{x}_i[k])\right]\right)\quad \text{s.t. } (1),\,(2),\ i=1,\ldots,N,\quad \sum_{i=1}^{N}\xi_i[k]=1, \tag{3}$$

As the DTLTI systems are assumed to evolve independently, for a fixed sequence $\{\xi_i[j]\}$ the optimal estimator minimizing the estimate error covariance of system $i$ ($i = 1, \ldots, N$) is given by a Kalman filter,³ whose prediction and update steps are presented as follows [24]. Firstly, we define

³ This indicates that $N$ parallel estimators, i.e., Kalman filters, are used for estimating the $N$ independent DTLTI systems.

$$\begin{aligned}
\hat{x}_i[k|k] &\triangleq E\left[x_i[k]\,\middle|\,\{\tilde{y}_i[j]\}_{j=1}^{k}\right]\\
P_i[k|k] &\triangleq E\left[(x_i[k]-\hat{x}_i[k|k])(x_i[k]-\hat{x}_i[k|k])'\,\middle|\,\{\tilde{y}_i[j]\}_{j=1}^{k}\right]\\
\hat{x}_i[k+1|k] &\triangleq E\left[x_i[k+1]\,\middle|\,\{\tilde{y}_i[j]\}_{j=1}^{k}\right]\\
P_i[k+1|k] &\triangleq E\left[(x_i[k+1]-\hat{x}_i[k+1|k])(x_i[k+1]-\hat{x}_i[k+1|k])'\,\middle|\,\{\tilde{y}_i[j]\}_{j=1}^{k}\right]
\end{aligned}$$

Then the Kalman filter evolves as

$$\begin{aligned}
\hat{x}_i[k+1|k] &= A_i\hat{x}_i[k|k]\\
P_i[k+1|k] &= A_iP_i[k|k]A_i' + Q_i\\
\hat{x}_i[k+1|k+1] &= \hat{x}_i[k+1|k] + \xi_i[k+1]K_i[k+1]\left(y_i[k+1]-C_i\hat{x}_i[k+1|k]\right)\\
P_i[k+1|k+1] &= P_i[k+1|k] - \xi_i[k+1]K_i[k+1]C_iP_i[k+1|k]
\end{aligned}$$

where $K_i[k+1] = P_i[k+1|k]C_i'\left(C_iP_i[k+1|k]C_i'+R_i\right)^{-1}$ is the Kalman gain matrix. After a straightforward derivation, we have the covariance of the estimate error evolving as

$$P_i[k+1] = A_iP_i[k]A_i' + Q_i - \xi_i[k]A_iP_i[k]C_i'\left(C_iP_i[k]C_i'+R_i\right)^{-1}C_iP_i[k]A_i' \tag{4}$$

where we use the simplified notation $P_i[k] \triangleq P_i[k|k-1]$. Note that the error covariance $P_i[k]$ is a function of the sequence $\{\xi_i[j]\}_{j=1}^{k}$. Moreover, given $\{\xi_i[j]\}_{j=1}^{k}$, the evolution of the error covariance is independent of the measurement values. Substituting the optimal estimator, problem (3) simplifies into the following one.

Deterministic Scheduling Problem:

$$\mu_d = \min_{\{\xi_i[j]\}_{j=1}^{\infty},\ i=1,\ldots,N}\ \max_i\left(\limsup_{T\to\infty}\frac{1}{T}\sum_{k=1}^{T}\mathrm{Tr}(P_i[k])\right)\quad \text{subject to } (4),\quad \sum_{i=1}^{N}\xi_i[k]=1, \tag{5}$$

This problem is notoriously difficult to solve. There is no known optimal alternative to searching over all possible schedules and picking the best one by complete comparison, a procedure that is computationally intractable in practice. Motivated by this fact, in what follows we present a stochastic scheduling strategy with the advantages summarized below:

1. The stochastic scheduling strategy provides an upper bound on the performance of the deterministic scheduling problem under mild conditions, as proved in Theorem 1 next.

2. The stochastic scheduling problem can be easily relaxed into a convex optimization problem, which can be solved efficiently in a distributed fashion.

3. The relaxed problem provides an open-loop strategy, which has low computing complexity and is simple to implement.

4. Several practical constraints/considerations can be easily incorporated into the stochastic scheduling formulation, as discussed in Section 4.

### 2.1 Problem Formulation: Stochastic Scheduling Strategy

First of all, we remove the dependence on the time instant $k$ and consider $\xi_i[k]$ as an independent and identically distributed (i.i.d.) Bernoulli random variable with

$$\xi_i[k]=\begin{cases}1 & \text{with probability } q_i\\ 0 & \text{with probability } 1-q_i\end{cases}\qquad i=1,2,\ldots,N \tag{6}$$

for all $k$, where $q_i$ is the probability that system $i$ is observed at each time instant. As $\sum_{i=1}^{N}\xi_i[k] = 1$, we have $\sum_{i=1}^{N}q_i = 1$. Then the stochastic scheduling strategy is that at each time instant a target (i.e., a DTLTI system) is randomly chosen for measurement according to the pre-assigned probability distribution $\{q_i\}_{i=1}^{N}$.
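Generating such a schedule is computationally trivial. The following Python sketch (with a hypothetical distribution $q$; the numbers are illustrative, not from the paper) draws the indicators $\xi_i[k]$ of (6) so that exactly one target is selected at each instant, and checks that the empirical selection frequencies approach $q$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)      # the scheduler's random seed
N, T = 3, 10000
q = np.array([0.5, 0.3, 0.2])            # hypothetical pre-assigned distribution

chosen = rng.choice(N, size=T, p=q)      # target measured at each time instant
xi = np.eye(N, dtype=int)[chosen]        # xi[k, i] = 1 iff target i chosen at k

freq = xi.mean(axis=0)                   # empirical frequencies, close to q
```

By construction each row of `xi` sums to one, so the constraint that exactly one target is measured per instant holds along every sample path.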
Notice that the error covariance $P_i[k]$ is random, as it depends on the randomly chosen sequence $\{\xi_i[j]\}_{j=1}^{k}$. Thus we need to evaluate the expected estimate error covariance in order to minimize the actual estimate error. Putting the above together, we have the following stochastic scheduling problem motivated by (5).

Stochastic Scheduling Problem

$$\mu_s = \min_{q_i,\ i=1,\ldots,N}\ \max_i\ \mathrm{Tr}\left(\lim_{k\to\infty}E_{\xi_i,k}\left[P_i[k]\right]\right)\quad \text{subject to } (4),\,(6),\quad \sum_{i=1}^{N}q_i=1,\quad 0\le q_i\le 1, \tag{7}$$

where the expectation is w.r.t. the random sequence $\{\xi_i[j]\}_{j=1}^{k}$, which we denote by $E_{\xi_i,k}$.
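To illustrate, this expectation can be estimated by Monte Carlo simulation of the recursion (4) under random schedules. The sketch below (a hypothetical scalar target and observation probability, not from the paper) averages the error covariance over many sampled sequences:

```python
import numpy as np

def riccati_step(p, a, c, Qn, Rn, xi):
    """Scalar version of the error-covariance recursion (4)."""
    return a*a*p + Qn - xi * (a*c*p)**2 / (c*c*p + Rn)

rng = np.random.default_rng(1)
a, c, Qn, Rn = 1.1, 1.0, 1.0, 1.0    # hypothetical scalar target
q, T, runs = 0.6, 300, 2000          # observation prob., horizon, sample paths

final = np.empty(runs)
for r in range(runs):
    p = 1.0
    xi = rng.random(T) < q           # i.i.d. Bernoulli(q) indicators, as in (6)
    for k in range(T):
        p = riccati_step(p, a, c, Qn, Rn, xi[k])
    final[r] = p

expected_P = final.mean()            # Monte Carlo estimate of E[P_i[k]] at k = T
```

The sample average stays bounded here because $q$ is well above the critical probability discussed later; for smaller $q$ the average would grow without bound.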

###### Theorem 1

If $A_i$ is invertible for $i = 1, \ldots, N$, the deterministic scheduling performance in (5) is almost surely upper bounded by the stochastic scheduling performance in (7).

Proof. See Appendix.

###### Remark 2

While Problem (5) provides an important motivation, Problem (7) is relevant in its own right and can be applied without the restriction of $A_i$ being invertible.

### 2.2 Relaxation

Problem (7) involves the evolution of

$$E_{\xi_i,k}\left[P_i[k]\right] = A_iE_{\xi_i,k-1}\left[P_i[k-1]\right]A_i' + Q_i - q_iE_{\xi_i,k-1}\left[A_iP_i[k-1]C_i'\left(C_iP_i[k-1]C_i'+R_i\right)^{-1}C_iP_i[k-1]A_i'\right]$$

Unfortunately, the right-hand side of the above expression is not easily computable, as it involves the expectation, w.r.t. $\{\xi_i[j]\}$, of a nonlinear recursive expression of $P_i[k-1]$. However, [24] has nicely shown that $E_{\xi_i,k}[P_i[k]]$ is upper bounded by the fixed point of the following associated modified algebraic Riccati equation (MARE),

$$\text{MARE:}\quad X_i = A_iX_iA_i' + Q_i - q_iA_iX_iC_i'\left(C_iX_iC_i'+R_i\right)^{-1}C_iX_iA_i'. \tag{8}$$

This result motivates us to minimize $\mathrm{Tr}(X_i)$ as a means to keep $E_{\xi_i,k}[P_i[k]]$ itself small. Specifically, we consider the following optimization problem, denoted by OP, in the rest of the paper.

$$\text{OP:}\quad \min_{q_i,\ i=1,\ldots,N}\ \max_i\ \mathrm{Tr}(X_i)\quad \text{subject to: } \sum_{i=1}^{N}q_i=1,\quad q_c^i<q_i\le 1,\ i=1,\ldots,N, \tag{9}$$

where $X_i$ is an implicit function of $q_i$, defined by $X_i = g_{q_i}(X_i)$ and

$$g_{q_i}(X_i) = A_iX_iA_i' + Q_i - q_iA_iX_iC_i'\left(C_iX_iC_i'+R_i\right)^{-1}C_iX_iA_i' \tag{10}$$
###### Remark 3

$q_c^i$ is the critical value, depending on the unstable eigenvalues of $A_i$, such that the fixed point $X_i$ exists if and only if the assigned probability satisfies $q_i > q_c^i$. We refer interested readers to [24, 25] for the details on $q_c^i$. In this paper, we assume $\sum_{i=1}^{N} q_c^i < 1$; otherwise the above optimization problem has no feasible solution, i.e., the upper bound turns out to be infinity. For stable systems, we always have $q_c^i = 0$.
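Numerically, the fixed point of (8) can be obtained by simply iterating the operator $g_{q_i}$, which converges whenever $q_i$ exceeds the critical probability (see Lemma 4 below). A minimal Python sketch with hypothetical scalar data (for a scalar system with invertible $C$, the critical value is $q_c = 1 - 1/a^2$ [24, 25]):

```python
import numpy as np

def g(X, A, C, Q, R, q):
    """Modified Riccati operator g_q of (8)/(10)."""
    S = C @ X @ C.T + R
    return A @ X @ A.T + Q - q * A @ X @ C.T @ np.linalg.inv(S) @ C @ X @ A.T

def mare_fixed_point(A, C, Q, R, q, iters=2000):
    """Iterate g_q from X0 = 0; converges to the unique PSD fixed point
    whenever q exceeds the critical probability (cf. Lemma 4 below)."""
    X = np.zeros_like(Q)
    for _ in range(iters):
        X = g(X, A, C, Q, R, q)
    return X

# hypothetical scalar target: a = 1.1, so q_c = 1 - 1/a^2 ≈ 0.174
A = np.array([[1.1]]); C = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
X = mare_fixed_point(A, C, Q, R, q=0.6)   # q chosen well above q_c
```

For $q \le q_c$ the same iteration grows without bound, matching the infeasibility discussed in the remark above.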

Note that the search space of the above optimization problem is continuous, and it will be shown that the problem is (quasi-)convex. In addition, based on Theorem 1, the objective value of the above problem is an upper bound on the performance of the deterministic scheduling problem (5). In the rest of the paper, we provide efficient distributed algorithms to obtain the optimal solutions of OP.

## 3 Minimization of The Maximal Estimate Error among Targets.

In this section, we first show that OP can be decoupled into $N$ convex optimization problems, which can be solved separately. Then, by utilizing the classical consensus algorithm, we propose a distributed computing algorithm to obtain the optimal solution of OP. First of all, we recall some results on the MARE from [24], derived under the assumption $q > q_c$.

###### Lemma 4

Fix $q > q_c$. For any initial condition $X_0 \succeq 0$,

$$\lim_{k\to\infty}g_q^{(k)}(X_0) = \lim_{k\to\infty}\underbrace{g_q(g_q(\cdots g_q(X_0)))}_{k\text{ times}} = X$$

where $X$ is the unique positive-semidefinite fixed point of the MARE, namely, $X = g_q(X)$.

###### Lemma 5

For a given scalar $q > q_c$ and a DTLTI system as described in (1) and (2), the fixed point of the MARE presented in the form of (8) can be obtained by solving the following LMI problem.

$$\begin{aligned}&\operatorname*{argmax}_{X}\ \mathrm{Tr}(X)\\ &\text{subject to } \begin{bmatrix}AXA'-X+Q & \sqrt{q}\,AXC'\\ \sqrt{q}\,CXA' & CXC'+R\end{bmatrix}\succeq 0,\quad X\succeq 0.\end{aligned} \tag{11}$$
###### Lemma 6

Assume $X \succeq 0$ and that $(A, Q^{1/2})$ is controllable. Then the following facts are true.

1. $g_q(X_1) \succeq g_q(X_2)$ if $X_1 \succeq X_2$.

2. $g_{q_1}(X) \succeq g_{q_2}(X)$ if $q_1 \le q_2$.

3. $g_q(X) = \Phi_q(K_X, X) \preceq \Phi_q(K, X)$ for all $K$, where $K_X = -AXC'(CXC'+R)^{-1}$.

4. $g_q(X) \succeq (1-q)AXA' + Q$.

5. If $X \succeq g_q(X)$, then $X \succ 0$.

6. Define the linear operator

$$L_q(Y) = (1-q)(AYA') + qFYF'.$$

Suppose there exists $\bar{Y} \succ 0$ such that $\bar{Y} \succ L_q(\bar{Y})$. Then for all $W \succeq 0$,

$$\lim_{k\to\infty}L_q^{(k)}(W) = 0,$$

where $F = A + KC$ and, throughout the paper,

$$\Phi_q(K,X) = (1-q)(AXA'+Q) + q(A+KC)X(A+KC)' + qQ + qKRK'. \tag{12}$$

Now we prove the monotonicity of the fixed point of the MARE, which will facilitate the analysis of OP.

###### Definition 7

(Matrix-monotonicity) A function $f: \mathbb{R} \to \mathbb{S}_+^n$ is matrix-monotonic if for all $q_1, q_2$ with $q_1 \le q_2$, we have either $f(q_1) \succeq f(q_2)$ or $f(q_1) \preceq f(q_2)$ in the positive semidefinite cone $\mathbb{S}_+^n$.

###### Theorem 8

(Matrix-monotonicity of the MARE) For $q \in (q_c, 1]$, the fixed point $X$ of the MARE is matrix-monotonically decreasing w.r.t. the scalar $q$.

Proof. Assume $q_c < q_1 \le q_2 \le 1$, with $X_1$ and $X_2$ satisfying $X_1 = g_{q_1}(X_1)$ and $X_2 = g_{q_2}(X_2)$. The existence of the fixed points $X_1$ and $X_2$ is guaranteed by Lemma 4. We need to show $X_1 \succeq X_2$. Since $q_1 \le q_2$, by using Lemma 6(2) we have

$$X_1 = g_{q_1}(X_1) \succeq g_{q_2}(X_1).$$

By Lemma 6(1), we have

$$X_1 \succeq g_{q_2}(g_{q_2}(X_1)) \succeq g_{q_2}(g_{q_2}(g_{q_2}(X_1))) \succeq \ldots \succeq g_{q_2}^{(k)}(X_1)$$

By the convergence property of the MARE (i.e., Lemma 4), we have $X_1 \succeq X_2$ by taking $k \to \infty$.

###### Remark 9

This theorem reveals an important message: for any two different scalars $q_1$ and $q_2$ in $(q_c, 1]$, the corresponding fixed points can be ordered in the positive semidefinite cone. In other words, for a given system model $(A, C, Q, R)$, the fixed points of the MARE w.r.t. the variable $q$ are comparable. As we will see, this property is the foundation for deriving algorithms to solve OP.

Now, we are ready to analyze and solve OP. For ease of reading, we occasionally use the notation $X_i(q_i)$ to stress that $X_i$ is a function of $q_i$, i.e., $X_i(q_i)$ is the fixed point of the MARE w.r.t. the scalar $q_i$.

###### Corollary 10

Problem OP is a quasi-convex optimization problem.

Proof. Consider the cost function of OP. From Theorem 8, $\mathrm{Tr}(X_i(q_i))$ is monotonically non-increasing in $q_i$, as the trace function is linear. Therefore, $\mathrm{Tr}(X_i(q_i))$ is a quasi-convex function of $q_i$, for each $i$, due to the fact that any monotonic function is quasi-convex. Next, based on the fact that the non-negative weighted maximum of quasi-convex functions preserves quasi-convexity, the result follows.
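The monotonicity underlying this proof is easy to observe numerically. The following sketch (hypothetical scalar data, not from the paper) computes the MARE fixed point for increasing $q$ and confirms that its trace is non-increasing, as Theorem 8 predicts:

```python
import numpy as np

def mare_trace(A, C, Q, R, q, iters=3000):
    """Trace of the MARE fixed point, by fixed-point iteration (Lemma 4)."""
    X = np.zeros_like(Q)
    for _ in range(iters):
        S = C @ X @ C.T + R
        X = A @ X @ A.T + Q - q * A @ X @ C.T @ np.linalg.inv(S) @ C @ X @ A.T
    return np.trace(X)

# hypothetical scalar target with a = 1.05 (so q_c = 1 - 1/a^2 ≈ 0.093)
A = np.array([[1.05]]); C = np.array([[1.0]])
Q = np.array([[1.0]]);  R = np.array([[2.0]])

qs = [0.3, 0.5, 0.7, 0.9]
traces = [mare_trace(A, C, Q, R, q) for q in qs]
# traces is non-increasing in q, consistent with Theorem 8
```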

Next, we can rewrite problem OP in the following equivalent form

$$\min_{q_i,\ i=1,\ldots,N,\ \gamma>0}\ \gamma\quad \text{subject to: } \mathrm{Tr}(X_i(q_i))\le\gamma,\quad \sum_{i=1}^{N}q_i=1,\quad q_c^i<q_i\le 1,\ i=1,\ldots,N, \tag{13}$$

The problem can in principle be solved by bisecting on $\gamma$ and checking that the feasible set is not empty. However, the constraint set is not yet in a useful form. For any fixed $\gamma$, the feasible set is convex but not easy to work with, given the implicit functions $X_i(q_i)$.

It is then convenient to consider the following related problem for a given $\gamma$.

$$\mu(\gamma) = \min_{q_i,\ i=1,\ldots,N}\ \sum_{i=1}^{N}q_i\quad \text{subject to: } \mathrm{Tr}(X_i(q_i))\le\gamma,\quad q_c^i<q_i\le 1,\ i=1,\ldots,N, \tag{14}$$
###### Lemma 11

$\gamma$ is feasible for Problem (13) if and only if Problem (14) is feasible and $\mu(\gamma) \le 1$.

Proof. Let $\mathcal{Q}_\gamma$ denote the set of $q = (q_1, \ldots, q_N)$'s feasible for Problem (13).

Assume $\mathcal{Q}_\gamma$ is not empty. Then the feasible set of Problem (14) is not empty, and since we know that there are $q$'s such that $\sum_{i=1}^{N}q_i = 1$, then $\mu(\gamma) \le 1$. For the other direction, assume that $\mu(\gamma) \le 1$. Then the feasible set of Problem (14) is not empty, and if $\mu(\gamma) = 1$ the minimizer is feasible for Problem (13). If $\mu(\gamma) < 1$, then let $q^*$ be such that $\sum_{i=1}^{N}q_i^* = \mu(\gamma)$, and consider $\tilde{q}_i = q_i^* + (1-\mu(\gamma))/N$, for $i = 1, \ldots, N$. Then, $\sum_{i=1}^{N}\tilde{q}_i = 1$, $\tilde{q}_i > q_i^* > q_c^i$, and $\mathrm{Tr}(X_i(\tilde{q}_i)) \le \mathrm{Tr}(X_i(q_i^*)) \le \gamma$ because of Theorem 8. Thus, $\mathcal{Q}_\gamma$ is not empty. Hence $\gamma$ is feasible for Problem (13).
The cost of (14) is separable and the constraints are independent for each $i$. Thus, for any $\gamma$, (14) can be solved by solving $N$ independent minimization problems, namely:

$$\mu(\gamma) = \sum_{i=1}^{N}\,\min_{q_i}\ q_i\quad \text{subject to: } \mathrm{Tr}(X_i(q_i))\le\gamma,\quad q_c^i<q_i\le 1, \tag{15}$$

It is easy to infer that $\mu(\gamma)$ in (15) is decreasing w.r.t. the performance level $\gamma$. Thus, OP can be solved via (15) using bisection on $\gamma$ until $\mu(\gamma) = 1$.
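For scalar targets, this two-level scheme can be sketched end-to-end in a few lines, using a direct bisection on $q_i$ for the inner problems (valid by the monotonicity of $\mathrm{Tr}(X_i(q_i))$, Theorem 8) in place of the LMI reformulation developed later in this section. All numerical values are hypothetical:

```python
import numpy as np

def mare_trace(a, c, Qn, Rn, q, iters=2000):
    """Tr of the MARE fixed point for a scalar target, by iteration."""
    x = 0.0
    for _ in range(iters):
        x = a*a*x + Qn - q * (a*c*x)**2 / (c*c*x + Rn)
    return x

def q_min(target, gamma, tol=1e-6):
    """Subproblem: smallest q with Tr(X(q)) <= gamma, by bisection on q."""
    a, c, Qn, Rn, qc = target
    if mare_trace(a, c, Qn, Rn, 1.0) > gamma:
        return np.inf                    # gamma unreachable even with q = 1
    lo, hi = qc + 1e-6, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mare_trace(a, c, Qn, Rn, mid) <= gamma:
            hi = mid
        else:
            lo = mid
    return hi

def solve_op(targets, tol=1e-4):
    """Outer bisection on gamma until mu(gamma) = sum_i q_i reaches 1."""
    g_lo, g_hi = 0.0, 1e2                # assumed to bracket the optimum
    while g_hi - g_lo > tol:
        g_mid = 0.5 * (g_lo + g_hi)
        mu = sum(q_min(t, g_mid) for t in targets)
        if mu <= 1.0:
            g_hi = g_mid                 # gamma achievable: tighten it
        else:
            g_lo = g_mid
    return g_hi, [q_min(t, g_hi) for t in targets]

# two hypothetical scalar targets (a, c, Q, R, q_c), with q_c = 1 - 1/a^2
targets = [(1.1, 1.0, 1.0, 1.0, 1 - 1/1.1**2),
           (1.2, 1.0, 1.0, 1.0, 1 - 1/1.2**2)]
gamma_opt, q_opt = solve_op(targets)
```

At the returned level the probability budget is essentially exhausted, i.e., the optimal $q_i$'s sum to one, with the more unstable target receiving the larger share.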

We next concentrate on the subproblems:

$$q_i^{opt}(\gamma) = \min_{q_i}\ q_i\quad \text{subject to: } \mathrm{Tr}(X_i(q_i))\le\gamma,\quad q_c^i<q_i\le 1, \tag{16}$$

Based on Theorem 8, we see that the optimal solution of problem (16) is the smallest probability of measuring system $i$ that achieves the pre-assigned estimate performance $\gamma$. If problem (16) is not feasible (e.g., $\gamma$ is too small), we set $q_i^{opt}(\gamma) = +\infty$. Next, we show that the optimization problem (16) can be reformulated as the iteration of a Linear Matrix Inequality (LMI) feasibility problem. With a slight abuse of notation, we drop the subscript $i$, since the following results apply to all of the DTLTI dynamic systems.

###### Lemma 12

Assume that $(A, Q^{1/2})$ is controllable and $(A, C)$ is detectable. For any given $q \in (q_c, 1]$ and invertible matrices $Q$ and $R$, the following statements are equivalent:

1. There exists $\bar{X} \succeq 0$ such that $\bar{X} \succeq g_q(\bar{X})$.

2. There exist $K$ and $X \succ 0$ such that $X \succ \Phi_q(K, X)$ ($\Phi_q$ defined in (12)).

3. There exist $G \succ 0$ and $H$ such that $\Gamma_q(G, H) \succ 0$.

where

$$\Gamma_q(G,H) = \begin{bmatrix} G & \sqrt{q}\,(GA+HC) & \sqrt{1-q}\,GA & \sqrt{q}\,H & G\\ \sqrt{q}\,(A'G+C'H') & G & 0 & 0 & 0\\ \sqrt{1-q}\,A'G & 0 & G & 0 & 0\\ \sqrt{q}\,H' & 0 & 0 & R^{-1} & 0\\ G & 0 & 0 & 0 & Q^{-1} \end{bmatrix} \tag{17}$$

Proof. 1 ⇒ 2. According to Lemma 6(3), we have $g_q(\bar{X}) = \Phi_q(K_{\bar{X}}, \bar{X})$, so $\bar{X} \succeq \Phi_q(K_{\bar{X}}, \bar{X})$, with $K_{\bar{X}} = -A\bar{X}C'(C\bar{X}C'+R)^{-1}$. Then $\bar{X} \succ 0$, since $Q \succ 0$ and the other terms in $\Phi_q$ are positive semi-definite. Moreover,

$$\begin{aligned} 2\bar{X} &\succeq 2\Phi_q(K_{\bar{X}},\bar{X}) = (1-q)A(2\bar{X})A' + q(A+K_{\bar{X}}C)(2\bar{X})(A+K_{\bar{X}}C)' + 2Q + 2qK_{\bar{X}}RK_{\bar{X}}'\\ &\succ (1-q)A(2\bar{X})A' + q(A+K_{\bar{X}}C)(2\bar{X})(A+K_{\bar{X}}C)' + Q + qK_{\bar{X}}RK_{\bar{X}}' = \Phi_q(K_{\bar{X}},2\bar{X}) \end{aligned} \tag{18}$$

The strict inequality follows from the fact that $Q \succ 0$ and $qK_{\bar{X}}RK_{\bar{X}}' \succeq 0$. Hence statement 2 holds with $X = 2\bar{X} \succ 0$ and $K = K_{\bar{X}}$, and the proof of this step is complete.
2 ⇒ 1. If $X \succ \Phi_q(K, X)$, the proof follows from the corresponding result in [24].
2 ⇔ 3.

$$X \succ \Phi_q(K,X) \iff X \succ (1-q)AXA' + q(A+KC)X(A+KC)' + qKRK' + Q$$

Let $G = X^{-1}$ and $H = GK$. Left- and right-multiplying the above inequality by $G$, we have

$$\iff GXG \succ (1-q)GAXA'G + q(GA+HC)X(GA+HC)' + qHRH' + GQG$$

By using the Schur complement, this is equivalent to (17).

###### Theorem 13

If $(A, Q^{1/2})$ is controllable and $(A, C)$ is detectable, the solution of the optimization problem (16) can be obtained by solving the following quasi-convex optimization problem in the variables $(q, H, G, Y)$.

$$\begin{aligned}&\min_{q,\,H,\,G\succ 0,\,Y\succ 0}\ q\\ &\text{subject to } \mathrm{Tr}(Y)\le\gamma,\quad \begin{bmatrix}Y & I\\ I & G\end{bmatrix}\succeq 0,\quad \Gamma_q(G,H)\succ 0,\quad q_c<q\le 1, \end{aligned} \tag{19}$$

where $\Gamma_q(G,H)$ is given by (17).

Proof. From the substitution of $X = G^{-1}$ in Lemma 12, it is straightforward to obtain the following equivalent statements by the Schur complement.
(a) There exists $\bar{X} \succeq 0$ such that $\bar{X} \succeq g_q(\bar{X})$ and $\mathrm{Tr}(\bar{X}) \le \gamma$.
(b) There exist $G \succ 0$ and $H$ such that $\Gamma_q(G,H) \succ 0$ and $\mathrm{Tr}(G^{-1}) \le \gamma$.
(c) There exist $G \succ 0$, $H$ and $Y \succ 0$ such that $\Gamma_q(G,H) \succ 0$, $\begin{bmatrix}Y & I\\ I & G\end{bmatrix} \succeq 0$ and $\mathrm{Tr}(Y) \le \gamma$.
From Lemma 12, we have the equivalence between (a) and (b) in terms of feasibility; the Schur complement of $\begin{bmatrix}Y & I\\ I & G\end{bmatrix} \succeq 0$, i.e., $Y \succeq G^{-1}$, gives the equivalence between (b) and (c). For fixed $q$, $\Gamma_q(G,H) \succ 0$ is an LMI in the variables $(G, H, Y)$. Therefore, problem (19) can be solved as a quasi-convex optimization problem by using bisection on the variable $q$.

In summary, we have shown the following

###### Theorem 14

Problem OP is equivalent to a quasi-convex optimization problem that can be solved by solving (15) using bisection on $\gamma$ until $\mu(\gamma) = 1$, within the desired accuracy. For each level $\gamma$, each of the $N$ independent subproblems (16) can be solved by solving (19), also using bisection on the variable $q$ and iterating LMI feasibility problems.

### 3.1 Distributed solutions

Note that the steps of the outer bisection iteration are straightforward and can be done either by a centralized scheduler or in a distributed fashion. If a centralized scheduler/computing-unit is available, it can collect the $q_i^{opt}(\gamma)$ from the estimators, check whether their sum is less than or equal to one, and send back to the estimators an updated value of $\gamma$ based on a bisection algorithm.

Alternatively, the estimators need to cooperate and agree on an optimal feasible $\gamma$. This can be done assuming the estimators are strongly connected via a network where the communications between any two estimators are error-free.⁴ In this case, each estimator needs to obtain the value of $\mu(\gamma) = \sum_{i=1}^{N}q_i^{opt}(\gamma)$ by communicating with its neighbors. Under the assumption that $N$ is known to the estimators, $\mu(\gamma)$ can be obtained by a distributed averaging process in a finite number of steps, as shown in [26]. Then, by increasing or decreasing $\gamma$ under a common bisection rule among the estimators, the value of $\mu(\gamma)$ can be driven to $1$ and consequently OP is solved.

⁴ Note that the decentralized computing units are allowed to be allocated in a single fusion center or to be physically distributed in an area.

The above argument leads to the following distributed computing algorithm, Algorithm 1, to solve OP. The inputs of the algorithm are global information assumed to be known by each estimator a priori. Denote $\mu^*$ as the optimal objective value of OP. To avoid cumbersome details and to save space, we present the algorithm under the assumption that the initial interval $[\gamma_{\min}, \gamma_{\max}]$ is selected to contain $\mu^*$, i.e., we have $\gamma_{\min} \le \mu^* \le \gamma_{\max}$ at each step. Then the algorithm is guaranteed to converge to the optimal objective value within the desired tolerance.
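The following Python sketch only illustrates the structure of such a distributed scheme under the stated assumptions; it is not Algorithm 1 itself. The local subproblems (16) are represented by hypothetical decreasing stand-in functions $q_i(\gamma)$, the value $\mu(\gamma)$ is recovered locally via average consensus with doubly stochastic weights (multiplied by $N$, cf. [26]), and every estimator applies the same bisection rule on $\gamma$:

```python
import numpy as np

def consensus_sum(values, W, rounds=50):
    """Average consensus with doubly stochastic weights W; every node ends
    up with the network average, so multiplying by N recovers the sum
    mu(gamma) locally (cf. the distributed averaging in [26])."""
    x = np.asarray(values, dtype=float)
    for _ in range(rounds):
        x = W @ x
    return len(values) * x               # each entry approximates sum(values)

def distributed_bisection(q_of_gamma, W, g_lo=0.0, g_hi=1e3, tol=1e-4):
    """Common bisection rule on gamma, run identically at every estimator."""
    while g_hi - g_lo > tol:
        gamma = 0.5 * (g_lo + g_hi)
        local_q = [f(gamma) for f in q_of_gamma]   # each node solves (16)
        mu = consensus_sum(local_q, W)[0]          # same value at every node
        if mu <= 1.0:
            g_hi = gamma
        else:
            g_lo = gamma
    return g_hi

# hypothetical stand-ins for the subproblems: q_i(gamma) = c_i / gamma
q_of_gamma = [lambda g, c=c: c / g for c in (0.8, 1.2, 2.0)]
# fully connected 3-node network with uniform (doubly stochastic) weights
W = np.full((3, 3), 1.0 / 3.0)
gamma_star = distributed_bisection(q_of_gamma, W)
# here mu(gamma) = 4/gamma, so the common bisection settles near gamma = 4
```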

## 4 Extensions and Special Cases.

In practical scenarios, additional conditions/constraints might be of interest in the sensor scheduling problem. However, adding extra constraints on the scheduling design is problematic in many existing scheduling strategies. Our stochastic scheduling approach can easily incorporate extra conditions/constraints, as shown through two specific examples next.

### 4.1 Prioritization of Certain Targets

For some reason, specific targets may require extra attention (i.e., more precise estimation) from the sensor. Our model can incorporate this requirement by adding the constraint $q_j \ge \alpha_j$, where $\alpha_j$ represents the assigned attention weight for target $j$. Then we need to solve the following problem,

$$\min_{q_i,\ i=1,\ldots,N}\ \max_i\ \mathrm{Tr}(X_i)\quad \text{subject to } X_i=g_{q_i}(X_i),\quad \sum_{i=1}^{N}q_i=1,\quad q_j\ge\alpha_j,\quad q_c^i<q_i\le 1,\ i=1,\ldots,N, \tag{20}$$

### 4.2 Measurement Loss in Sensing

In practice, measurement loss is a common phenomenon due to various sources, e.g., shadowing, weather conditions, large delays, etc. If the measurement loss probability $\tau_i$ for sensing the $i$-th target is known a priori, this extra condition can be easily incorporated into our model. Assume that the $\tau_i$'s are given for each target. Then the actual probability of reliably receiving measurements from the $i$-th target is $q_i(1-\tau_i)$ because of measurement loss. Therefore, OP can be modified as follows,

$$\min_{q_i,\ i=1,\ldots,N}\ \max_i\ \mathrm{Tr}(X_i)\quad \text{subject to } X_i=g_{q_i(1-\tau_i)}(X_i),\quad \sum_{i=1}^{N}q_i=1,\quad q_i(1-\tau_i)>q_c^i,\quad q_i\le 1,\ i=1,2,\ldots,N \tag{21}$$

With simple modifications, these two extended problems can be solved by the proposed distributed algorithms as well.

### 4.3 Closed-form Solutions to MARE for Special Cases

For the following special class of systems, the underlying MARE has a closed-form solution. Although this cannot be expected in general, it increases the computational efficiency of the proposed algorithm. Incidentally, this appears to be the first non-trivial closed-form solution of the MARE in the literature.

Consider a set of $N$ DTLTI single-state systems to be measured, evolving according to the equation

$$x_i[k+1] = a_ix_i[k] + w_i[k] \tag{22}$$

where $a_i \in \mathbb{R}$, and the covariances of $w_i$ and $v_i$ are $Q_i$ and $R_i$, respectively. The measurement taken by the sensor at each time instant is formulated as follows,

$$\tilde{y}_i[k] = \xi_i[k]\left(x_i[k-d_i] + v_i[k]\right)$$

where $d_i$ represents the delay in measurement, which we assume to be fixed and known in this paper. By using augmented states to deal with the delays, it is straightforward to obtain the following compact form for system $i$ with measurement delays,

$$\begin{aligned}X_i[k+1] &= A_iX_i[k] + B_iw_i[k]\\ \tilde{y}_i[k] &= \xi_i[k]\left(C_iX_i[k]+v_i[k]\right),\end{aligned} \tag{23}$$

where $X_i[k]$ has the following structure

$$X_i[k] = \begin{bmatrix}x_i^1[k]\\ x_i^2[k]\\ \vdots\\ x_i^{d_i}[k]\\ x_i[k]\end{bmatrix},\qquad A_i = \begin{bmatrix}0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots& &\ddots& &\vdots\\ 0&0&0&\cdots&1\\ 0&0&0&\cdots&a_i\end{bmatrix},\qquad B_i = \begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ 1\end{bmatrix} \tag{24}$$
$$C_i = \begin{bmatrix}1&0&\cdots&0\end{bmatrix}.$$
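The augmentation above is mechanical to construct. As a sketch (with hypothetical parameter values), the following builds $(A_i, B_i, C_i)$ for a scalar system whose measurement is delayed by $d$ steps:

```python
import numpy as np

def augment_with_delay(a, d):
    """Build the augmented (A, B, C) of (24) for x[k+1] = a x[k] + w[k]
    observed with a fixed delay of d steps; state dimension is n = d + 1."""
    n = d + 1
    A = np.zeros((n, n))
    A[:n-1, 1:] = np.eye(n - 1)      # shift register for the delay chain
    A[n-1, n-1] = a                  # true-state dynamics in the last slot
    B = np.zeros((n, 1)); B[n-1, 0] = 1.0   # noise enters the true state only
    C = np.zeros((1, n)); C[0, 0] = 1.0     # sensor sees the d-step-old state
    return A, B, C

A, B, C = augment_with_delay(a=1.5, d=2)    # hypothetical a and delay
```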

Note that only $x_i[k]$ is the true state of system $i$ at time instant $k$, while the other states included in the vector $X_i[k]$ are dummy variables for handling the delays. By exploiting the special structure of the above model, we are able to obtain the closed-form fixed point of the MARE. We present this result in the following theorem.

###### Theorem 15

For a given $q$, consider the MARE described in (8), where $(A, B, C)$ have the structure presented in (24) (with the subscript $i$ dropped), the process noise covariance is $BQB'$ with scalars $Q > 0$ and $R > 0$, and $n = d+1$. Then the MARE has a unique positive-semidefinite fixed point as follows. If $|a| = 1$ and $q > q_c$,

$$X = \begin{bmatrix}x_1&x_1&\cdots&x_1\\ x_1&x_2&\cdots&x_2\\ \vdots&\vdots&\ddots&\vdots\\ x_1&x_2&\cdots&x_n\end{bmatrix} \tag{25}$$

where

$$x_j = \frac{Q+\sqrt{Q^2+4qQR}}{2q} + (j-1)Q,\qquad j=1,2,\ldots,n;$$

if $|a| \neq 1$ and $q > q_c$,

$$X = \begin{bmatrix}x_1&ax_1&\cdots&a^{n-1}x_1\\ ax_1&x_2&\cdots&a^{n-2}x_2\\ \vdots&\vdots&\ddots&\vdots\\ a^{n-1}x_1&a^{n-2}x_2&\cdots&x_n\end{bmatrix} \tag{26}$$

where

$$\begin{aligned}x_1 &= \frac{Ra^2-R+Q+\sqrt{(Ra^2-R+Q)^2-4(a^2-1-a^2q)QR}}{2(1+a^2q-a^2)}\\ x_j &= a^{2(j-1)}x_1 + \frac{1-a^{2(j-1)}}{1-a^2}\,Q,\qquad j=1,2,\ldots,n;\end{aligned}$$

and if $q \le q_c$, the MARE fails to converge to a steady-state value.

The proof is tedious but straightforward: plug the closed-form solution above into the MARE. If $d = 0$, the results follow directly from [25].
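The substitution can also be checked numerically. The sketch below verifies the $|a| = 1$ case (25) on a hypothetical instance ($a = 1$, $d = 2$, $Q = R = 1$, $q = 0.5$) by applying the Riccati operator of (8) to the closed-form matrix:

```python
import numpy as np

def mare_map(X, A, C, Q, R, q):
    """One application of the modified Riccati operator of (8)."""
    S = C @ X @ C.T + R
    return A @ X @ A.T + Q - q * A @ X @ C.T @ np.linalg.inv(S) @ C @ X @ A.T

a, n, Qs, Rs, q = 1.0, 3, 1.0, 1.0, 0.5      # n = d + 1 with d = 2

# augmented model of (24): delay chain plus the true state in the last slot
A = np.diag(np.ones(n - 1), 1); A[n-1, n-1] = a
C = np.zeros((1, n)); C[0, 0] = 1.0
Qm = np.zeros((n, n)); Qm[n-1, n-1] = Qs     # process noise covariance B Q B'
Rm = np.array([[Rs]])

# closed form (25): x_j = (Q + sqrt(Q^2 + 4 q Q R)) / (2q) + (j - 1) Q
xs = [(Qs + np.sqrt(Qs**2 + 4*q*Qs*Rs)) / (2*q) + j*Qs for j in range(n)]
X = np.array([[xs[min(i, j)] for j in range(n)] for i in range(n)])

residual = np.linalg.norm(mare_map(X, A, C, Qm, Rm, q) - X)
# residual is numerically zero: X is indeed a fixed point of the MARE
```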

Note that the stochastic upper bound of Theorem 1 may not hold here, since $A_i$ is not invertible. However, a deterministic scheduling sequence with good performance can be constructed from the optimal stochastic solution, as shown in the next section, providing an upper bound on the original deterministic scheduling problem.

## 5 Scheduling Implementation

For completeness, in this section we consider the problem of scheduling implementation and present a simple approach to implementing the scheduling sequence. For the stochastic scheduling implementation, a central scheduler is required to construct a scheduling sequence by randomly selecting targets (via a random seed) according to the optimal probability distribution. Note that this construction process can be performed efficiently, either off-line or on-line.

We next turn our attention back to deterministic scheduling and look for a sequence consistent with the optimal stochastic solution. Note that the optimization problems above minimize the average cost over all possible stochastic sequences. With the optimal stochastic solution, we are able to randomly construct a sequence compatible with the distribution. However, in practice, such a randomly constructed scheduling sequence may result in undesirable performance. For example, one target may not be measured for many consecutive time instants, so that its error covariance temporarily builds up. Thus, we would like to identify and use, among all possible stochastic sequences, those that have low costs. Motivated by the sensor scheduling literature, which suggests periodic solutions [27, 28], we define and look for deterministic sequences of minimal consecutiveness, defined next. These sequences are periodic and switch among targets as often as possible compatibly with the optimal scheduling distribution. We remark that the approach we propose in this section is heuristic, but it can be implemented in a distributed fashion and leads to good performance in simulations. We leave the analysis of this and other approaches to future research.

###### Definition 16

Let $\mathcal{S}$ be a set of sequences $\{s[k]\}_{k=1}^{L}$ of length $L$, where each element of a sequence takes values from the element set $\{1, 2, \cdots, N\}$ and the number of occurrences of value $i$ in the sequence is $L_i$. Under these assumptions, the sequence of minimal consecutiveness is the solution of the following optimization problem

 min{s[k]}Lk=1maxi,j∈{1,2,⋯,N}{j−i|j≥i,s[i]