Error Correcting Codes for Distributed Control

Ravi Teja Sukhavasi and Babak Hassibi

Ravi Teja Sukhavasi is a graduate student with the Department of Electrical Engineering, California Institute of Technology, Pasadena, USA (teja@caltech.edu). Babak Hassibi is a faculty member with the Department of Electrical Engineering, California Institute of Technology, Pasadena, USA (hassibi@caltech.edu). This work was supported in part by the National Science Foundation under grants CCF-0729203, CNS-0932428 and CCF-1018927, by the Office of Naval Research under the MURI grant N00014-08-1-0747, and by Caltech's Lee Center for Advanced Networking.
Abstract

The problem of stabilizing an unstable plant over a noisy communication link is an increasingly important one that arises in applications of networked control systems. Although the work of Schulman and Sahai over the past two decades, and their development of the notions of “tree codes” and “anytime capacity”, provides the theoretical framework for studying such problems, there has been scant practical progress in this area because explicit constructions of tree codes with efficient encoding and decoding did not exist. To stabilize an unstable plant driven by bounded noise over a noisy channel one needs real-time encoding and real-time decoding and a reliability which increases exponentially with decoding delay, which is what tree codes guarantee. We prove that linear tree codes occur with high probability and, for erasure channels, give an explicit construction with an expected decoding complexity that is constant per time instant. We give novel sufficient conditions on the rate and reliability required of the tree codes to stabilize vector plants and argue that they are asymptotically tight. This work takes an important step towards controlling plants over noisy channels, and we demonstrate the efficacy of the method through several examples.

I Introduction

Control theory deals with regulating the behavior of dynamical systems using real-time output feedback. Most traditional control systems are characterized by the measurement and control subsystems being co-located, so there is no loss of measurement or control signals in the feedback loop. There is a very mature theory for this setup and there are concrete theoretical tools to analyze the overall system performance and its robustness to modeling errors [1]. There are increasingly many applications of networked control systems, however, where the measurement and control signals are communicated over noisy channels. Some examples include the smart grid, distributed computation, intelligent highways, etc. (e.g., see [2]).

Applications of networked control systems exhibit different levels of decentralization in their structure. At one level, the measurement unit and the controller are not co-located but each is individually centralized. At another, the measurement and control subsystems are themselves comprised of arrays of sensors and actuators that in turn communicate with each other over a network. Our focus is on the former: we consider the setup where the measurement and control subsystems are individually centralized but are separated by communication channels.

Several aspects of this problem have been studied in the literature [3, 4, 5, 6, 7]. When the communication links are modeled as rate-limited noiseless channels, significant progress has been made (see e.g.,[8, 9, 10]) in understanding the bandwidth requirements for stabilizing open loop unstable systems. [11] considered robust feedback stabilization over communication channels that are modeled as variable rate digital links where the encoder has causal knowledge of the number of bits transmitted error free. Under a packet erasure model, [12] studied the problem of LQG (Linear Quadratic Gaussian) control in the presence of measurement erasures and showed that closed loop mean squared stability is not possible if the erasure probability is higher than a certain threshold. So, clearly the measurement and control signals need to be encoded to compensate for the channel errors.

There are two key differences between the communication paradigm for distributed control and that traditionally studied in information theory. Shannon’s information theory, in large part, is concerned with reliable one-way communication while communication for control is fundamentally interactive: the plant measurements to be encoded are determined by the control inputs, which in turn are determined by how the controller decodes the corrupted plant measurements. Furthermore, conventional channel codes achieve reliability at the expense of delay which, if present in the feedback loop of a control system, can adversely affect its performance.

In this context, [13] provides a necessary and sufficient condition on the communication reliability needed over channels that are in the feedback loop of unstable scalar linear processes, and proposes the notion of anytime capacity as the appropriate figure of merit for such channels. In essence, the encoder is causal and the probability of error in decoding a source symbol that was transmitted d time instants ago should decay exponentially in the decoding delay d.

Although the connection between communication reliability and control is clear, very little is known about error-correcting codes that can achieve such reliabilities. Prior to the work of [13], and in the context of distributed computation, [14] proved the existence of codes which under maximum likelihood decoding achieve such reliabilities, and referred to them as tree codes. Note that any real-time error correcting code is causal, and since it encodes the entire trajectory of a process, it has a natural tree structure to it. [14] proves the existence of nonlinear tree codes but gives no explicit constructions or efficient decoding algorithms. [15] and [14] also propose sequential decoding algorithms whose expected complexity per time instant is fixed, but for which the probability that the decoder complexity at a given time exceeds a threshold decays only with a heavy (polynomial) tail in that threshold. More recently, [16] proposed efficient error correcting codes for unstable systems where the state grows only polynomially large with time. When the state of an unstable scalar linear process is available at the encoder and when there is noiseless feedback of channel outputs, [17] and [18] develop encoding-decoding schemes that can stabilize such a process over the binary symmetric channel and the binary erasure channel, respectively. But when the state is available only through noisy measurements, or when there is no channel feedback, little is known in the way of stabilizing an unstable scalar linear process over a stochastic communication channel.

The subject of error correcting codes for control is in its relative infancy, much as the subject of block coding was after Shannon's seminal work in [19]. So, a first step towards realizing practical encoder-decoder pairs with anytime reliabilities is to explore linear encoding schemes. We consider rate R = k/n causal linear codes, which map a sequence of k-dimensional binary vectors to a sequence of n-dimensional binary vectors in such a way that each output vector is a function only of the input vectors received up to that time. Such a code is anytime reliable if, at all times and all delays d, the probability that the decoder errs on the source bits from d steps in the past decays exponentially in d. We show that linear tree codes exist and, further, that they exist with high probability. For the binary erasure channel, we propose a maximum likelihood decoder whose average complexity of decoding is constant per each time iteration, and for which the probability that the complexity at a given time exceeds a threshold decays exponentially in that threshold. This allows one to stabilize a partially observed unstable scalar linear process over a binary erasure channel and, to the best of the authors' knowledge, this has not been done before.

In Section II, we present some background and motivate the need for anytime reliability with a simple example. In Section IV, we come up with a sufficient condition for anytime reliability in terms of the weight distribution of the code. In Section V, we introduce the ensemble of time invariant codes and use the results from Section IV to prove that time invariant codes with anytime reliability exist with a high probability. In Section VI, we invoke some standard results from the literature on coding theory to improve the results obtained in Section V. In Section VII, we present a simple decoding algorithm for the erasure channel.

II Background

Owing to the duality between estimation and control, the essential complexity of stabilizing an unstable process over a noisy communication channel can be captured by studying the open loop estimation of the same process. We will motivate the kind of communication reliability needed for control through a simple example.

A toy example: Consider tracking the following random walk, , where is Bernoulli, i.e., 0 or 1 with equal probability, and . Suppose an observer observes and communicates over a noisy communication channel to an estimator. Also assume that the estimator knows the system model and the initial state . The observer clearly needs to communicate whether is or . Note that the observer only has causal access to , i.e., at any time , the observer has access to . Let the encoding function of the observer at time be , where is the channel input alphabet and is the number of channel uses available for each step of the system evolution. One can visualize such a causal encoding process over a binary tree as in Fig. 1. While the information bits determine the path in the tree, the label on each branch denotes the symbol transmitted by the observer/encoder. The codeword associated to a given path in the tree is given by the concatenation of the branch symbols along that path.

Fig. 1: One can visualize any causal code on a tree. The distance property is: . This must be true for any two paths with a common root and of equal length in the tree.

Upon receiving the channel outputs until time , the estimator generates estimates of the noise sequence . Then, the estimator’s estimate of the state, , is given by

(1)

Suppose , i.e., is the probability that the position of the earliest erroneous estimate is at time . The probability here is over the randomness of the channel. From (1), we can bound from above as

Clearly, a sufficient condition for to be finite is as follows

(2)

where and are constants that do not depend on .
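To make the role of the decoding delay concrete, the following Python sketch simulates the toy example; the growth factor lam = 1.2 and the horizon are illustrative assumptions, not values from the paper. A single erroneous noise estimate d steps in the past produces an estimation error that grows exponentially in d, which is why the error probability in (2) must decay exponentially in the delay.

```python
import random

lam = 1.2  # unstable growth factor; an illustrative choice, not from the paper
T = 30

w = [random.randint(0, 1) for _ in range(T)]   # driving noise, Bernoulli(1/2)
x = [0.0]                                      # state: x_{t+1} = lam * x_t + w_t
for t in range(T):
    x.append(lam * x[t] + w[t])

def estimate_with_error_at_delay(d):
    """Estimation error when the single wrong noise estimate is d steps old."""
    w_hat = list(w)
    w_hat[T - d] ^= 1          # flip the estimate of the noise at time T - d
    xh = 0.0
    for t in range(T):
        xh = lam * xh + w_hat[t]
    return abs(x[T] - xh)

# A wrong estimate at delay d produces an error of exactly lam**(d-1):
# the error grows exponentially in the delay, so the probability of a
# decoding error at delay d must decay exponentially for the mean squared
# error to stay bounded.
for d in (1, 5, 10):
    assert abs(estimate_with_error_at_delay(d) - lam ** (d - 1)) < 1e-9
```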

In the context of control, it was first observed in [13] that exponential reliability of the form (2) is required to stabilize unstable plants over noisy communication channels. For a given channel, encoder-decoder pairs that achieve (2) are said to be anytime reliable. This definition will be made more precise in Section III. In the context of distributed computation, it was observed in [14] that a causal code under maximum likelihood decoding over a discrete memoryless channel is anytime reliable provided that the code has a certain distance property which is illustrated in Fig. 1. Avoiding mathematical clutter, one can describe the distance property as follows. For any two paths with a common root and of equal length in the tree whose least common ancestor is at a height from the bottom, the Hamming distance between their codewords should be proportional to . [14] referred to codes with this distance property as tree codes and showed that they exist. There has recently been increased interest (e.g., [20, 21, 22]) in studying tree codes for interactive communication problems. But these tree codes are, in general, nonlinear, and their existence was not shown to hold with high probability.
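The distance property can be illustrated with a small Python sketch of a random (nonlinear) tree code; the branch label length n, the seed, and the path lengths are illustrative assumptions. Codewords of two paths agree on their common prefix, so all of their Hamming distance must be accumulated after the divergence point:

```python
import random

n = 5   # channel bits per source bit (illustrative)
random.seed(0)

# A random causal (tree) code: each n-bit branch label depends on the
# entire prefix of source bits, so labels are memoized by prefix.
labels = {}
def encode(bits):
    out = []
    for t in range(1, len(bits) + 1):
        prefix = tuple(bits[:t])
        if prefix not in labels:
            labels[prefix] = tuple(random.randint(0, 1) for _ in range(n))
        out.extend(labels[prefix])
    return out

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

# Two paths of length 12 whose least common ancestor is 4 levels up:
common = [random.randint(0, 1) for _ in range(8)]
p1 = common + [0, 0, 0, 0]
p2 = common + [1, 0, 0, 0]
c1, c2 = encode(p1), encode(p2)

# Codewords agree on the shared prefix; the whole Hamming distance is
# accumulated over the n*4 bits after the divergence point, and the
# distance property asks that this be proportional to the divergence depth.
assert c1[:8 * n] == c2[:8 * n]
assert hamming(c1, c2) == hamming(c1[8 * n:], c2[8 * n:])
```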

We will prove the existence, with high probability, of linear tree codes and exploit the linearity to develop an efficiently decodable anytime reliable code for the erasure channel.

III Problem Setup

Fig. 2: Causal encoding and decoding
The binary entropy function
The smaller root of the equation
For a matrix , abs(), i.e., .
Spectral radius of
For a vector , The component of
, i.e., a column with 1’s
For , Component-wise inequality
Logarithm in base 2
For , , i.e., Kullback-Leibler divergence
between Bernoulli() and Bernoulli()
TABLE I:

The notation to be used in the rest of the paper is summarized in Table I. Consider the following dimensional unstable linear system with dimensional measurements. Assume that is observable and is controllable.

(3)

where , is the dimensional control input and, and are bounded process and measurement noise variables, i.e., and for all . The measurements are made by an observer while the control inputs are applied by a remote controller that is connected to the observer by a noisy communication channel. We assume that the control input is available to the plant losslessly. We do not assume that the observer has access to either the channel outputs or the control inputs. As is shown to be possible, e.g., in [9, 13], we do not use the control actions to communicate the channel outputs back to the observer through the plant because this could have a detrimental effect on the performance of the controller.

Before proceeding further, a word is in order about the boundedness assumption on the noise. If the process and/or measurement noise have unbounded support, it is not clear how one can stabilize the system without additional assumptions on the channel. For example, [17] assumes feedback of channel outputs to the observer in order to stabilize an unstable process perturbed by Gaussian noise over an erasure channel while [23] proposes a forward side channel between the observer and the controller that has a positive zero error capacity. We avoid this difficulty by assuming that the noise has bounded support which may be a reasonable assumption to make in practice.

The measurements will need to be quantized and encoded by the observer to provide protection from the noisy channel while the controller will need to decode the channel outputs to estimate the state and apply a suitable control input . This can be accomplished by employing a channel encoder at the observer and a decoder at the controller. For simplicity, we will assume that the channel input alphabet is binary. Suppose one time step of system evolution in (3) corresponds to channel uses (in practice, the system evolution in (3) is obtained by discretizing a continuous-time differential equation, so the interval of discretization can be adjusted to correspond to an integer number of channel uses, provided the channel use instances are close enough), i.e., bits can be transmitted for each measurement of the system. Then, at each instant of time , the operations performed by the observer, the channel encoder, the channel decoder and the controller can be described as follows. The observer generates a bit message, , that is a causal function of the measurements, i.e., it depends only on . Then the channel encoder causally encodes to generate the channel inputs . Note that the rate of the channel encoder is . Denote the channel outputs corresponding to by , where denotes the channel output alphabet. Using the channel outputs received so far, i.e., , the channel decoder generates estimates of , which, in turn, the controller uses to generate the control input . This is illustrated in Fig. 2. Now, define

Thus, is the probability that the earliest error is steps in the past.

Definition 1 (Anytime reliability)

Given a channel, we say that an encoder-decoder pair is anytime reliable over that channel if

(4)

In some cases, we write that a code is anytime reliable. This means that there exists a fixed such that the code is anytime reliable.

We will show in Sections VIII and IX that anytime reliability with an appropriately large rate, , and exponent, , is a sufficient condition to stabilize (3) in the mean squared sense (this can be easily extended to any other norm). In what follows, we will demonstrate causal linear codes which under maximum likelihood (ML) decoding achieve such exponential reliabilities.

IV Linear Anytime Codes

As discussed earlier, a first step towards developing practical encoding and decoding schemes for automatic control is to study the existence of linear codes with anytime reliability. We will begin by defining a causal linear code.

Definition 2 (Causal Linear Code)

A causal linear code is a sequence of linear maps and hence can be represented as

(5)

where

We denote . Note that a tree code is a more general construction where need not be linear. Also note that the associated code rate is . The above encoding is equivalent to using a semi-infinite block lower triangular generator matrix given by

One can equivalently represent the code with a parity check matrix , where . The parity check matrix is in general not unique but it is easy to see that one can choose to be block lower triangular too.

(6)

where and . In fact, we present all our results in terms of the parity check matrix. Before proceeding further, some of the notation specific to coding is summarized in Table II.

Hamming weight of
TABLE II:
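As a concrete illustration of causal encoding with a block lower triangular generator matrix, the following Python sketch uses illustrative block sizes (k source bits and n coded bits per step) and a randomly drawn generator; it is not the paper's construction, only a demonstration that the output at each time depends solely on the source blocks received so far.

```python
import numpy as np

k, n, T = 2, 3, 5   # k source bits in, n coded bits out per step (illustrative)
rng = np.random.default_rng(1)

# Block Toeplitz causal generator: G_j weights the source block from j
# steps ago, so the output block at time t is sum_{j<=t} G_j b_{t-j} mod 2.
G = [rng.integers(0, 2, size=(n, k)) for _ in range(T)]

def encode_step(blocks):
    """Coded block at the current step; uses only source blocks seen so far."""
    c = np.zeros(n, dtype=np.int64)
    for j, b in enumerate(reversed(blocks)):
        c = (c + G[j] @ b) % 2
    return c

b = [rng.integers(0, 2, size=k) for _ in range(T)]
code = [encode_step(b[:t + 1]) for t in range(T)]

# Causality check: changing the source block at time 3 leaves c_0..c_2 intact.
b2 = [blk.copy() for blk in b]
b2[3] ^= 1
code2 = [encode_step(b2[:t + 1]) for t in range(T)]
assert all((code[t] == code2[t]).all() for t in range(3))
```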

The objective is to study the existence of causal linear codes which are anytime reliable under maximum likelihood (ML) decoding. With reference to Fig. 1, this amounts to choosing the branch labels, , in such a way that they satisfy the distance property, and also are linear functions of the input, . Further, we are interested in characterizing the thresholds on the rate, , and exponent, , for which such codes exist. In the interest of clarity, we will begin with a self-contained discussion of a weak sufficient condition on the distance distribution, , of a causal linear code so that it is anytime reliable under ML decoding. This sufficient condition is an adaptation of the distance property illustrated in Fig. 1 to the case of causal linear codes. In section V, we will demonstrate the existence of causal linear codes that satisfy this sufficient condition. The thresholds thus obtained will be significantly tightened in section VI by invoking some standard results from random coding literature, e.g., [24, 25].

IV-A A Sufficient Condition

Suppose the decoding instant is and without loss of generality, assume that the all zero codeword is transmitted, i.e., for . We are interested in the error event where the earliest error in estimating happens at , i.e., for all and . Note that this is equivalent to the ML codeword, , satisfying and , and having full rank so that can be uniquely mapped to a transmitted sequence . Then, using a union bound, we have

(7)

Consider a memoryless binary-input output-symmetric (MBIOS) channel. Let and denote the input and output alphabet respectively. The Bhattacharya parameter, , for such a channel is defined as

Now, it is well known (e.g., see [26]) that, under ML decoding

From (7), it follows that . If and for some , then

(8)

where . So, an obvious sufficient condition for can be described in terms of and as follows. For some , we need

(9a)
(9b)
where is a constant that is independent of . This brings us to the following definition
Definition 3 (Anytime distance and Anytime reliability)

We say that a code has anytime distance, if the following hold

  1. is full rank for all

  2. , for all and .

We require that have full rank so that the mapping from the source bits to coded bits is invertible. We summarize the preceding discussion as the following lemma.
Lemma IV.1

If a code has anytime distance, then it is anytime reliable under ML decoding over a channel with Bhattacharya parameter where

V Linear Anytime Codes - Existence

Consider causal linear codes with the following Toeplitz structure

The superscript in denotes ‘Toeplitz’. is obtained from in (6) by setting for . Due to the Toeplitz structure, we have the following invariance, and for all . The code will be referred to as a time-invariant code. The notion of time invariance is analogous to the convolutional structure used to show the existence of infinite tree codes in [14]. This time invariance allows one to prove that such codes which are anytime reliable are abundant.

Definition 4 (The ensemble )

The ensemble of time-invariant codes, , is obtained as follows: is any fixed full rank binary matrix and, for , the entries of are chosen i.i.d. according to Bernoulli(), i.e., each entry is 1 with probability and 0 otherwise.
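A sketch of sampling from this ensemble (the sizes n, k, the bias p, and the naive GF(2) rank routine are illustrative assumptions): since every diagonal block of the assembled block lower triangular parity check matrix equals the full rank block H_0, the whole matrix is automatically full row rank.

```python
import numpy as np

n, k = 4, 2
m = n - k        # parity rows per step; rate R = k/n (illustrative sizes)
p = 0.5          # Bernoulli parameter for the off-diagonal blocks
rng = np.random.default_rng(7)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

# Sample from the ensemble: a fixed full rank H_0, and off-diagonal blocks
# H_1, H_2, ... with i.i.d. Bernoulli(p) entries.
while True:
    H0 = rng.integers(0, 2, size=(m, n))
    if gf2_rank(H0) == m:
        break
Hblk = [H0] + [(rng.random((m, n)) < p).astype(int) for _ in range(4)]

# Assemble the block lower triangular Toeplitz parity check matrix:
# block position (t, j) holds H_{t-j} for j <= t.
T = 5
H = np.zeros((m * T, n * T), dtype=int)
for t in range(T):
    for j in range(t + 1):
        H[t * m:(t + 1) * m, j * n:(j + 1) * n] = Hblk[t - j]

# Full row rank follows from the full rank diagonal blocks H_0.
assert gf2_rank(H) == m * T
```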

For the ensemble , we have the following result

Theorem V.1 (Abundance of time-invariant codes)

Let . Then, for each and

Proof:

See Appendix -C

We can now use this result to demonstrate an achievable region of rate-exponent pairs for a given channel, i.e., the set of rates and exponents such that one can guarantee anytime reliability using linear codes. Note that the thresholds in Theorem V.1 are optimal when . So, for the rest of the analysis we fix . To determine the values of that will satisfy (8), note that we need

With this observation, we have the following Corollary.

Corollary V.2

For any rate and exponent such that

if is chosen from , then

Note that for BEC(), and for BSC(), . The constant in the exponent in Corollary V.2 can be computed explicitly and it decreases to zero if either the rate or the exponent approach their respective thresholds. Further note that almost every code in the ensemble is -anytime reliable after a large enough initial delay .

The thresholds in Corollary V.2 have been obtained by using a simple union bound for bounding the error probability in (7). As one would expect, these thresholds can be improved by doing a more careful analysis. It turns out that the ensemble of random causal linear codes bears close resemblance to random linear block codes. This allows one to borrow results from the random coding literature to tighten the thresholds.

VI Improving the Thresholds

We will examine the Toeplitz ensemble more closely and show that its delay dependent distance distribution is bounded above by that of the random binary linear code ensemble, which we will define shortly. This will enable us to significantly improve the rate, exponent thresholds of Section V that were obtained using a simple union bound.

VI-A A Brief Recap of Random Coding

For an arbitrary discrete memoryless channel, recall the following familiar definition of the random coding exponent, , from [24] (we use base-2 instead of the natural logarithm)

(11a)
(11b)

In (11b), denotes a distribution on the channel input alphabet. The ensemble of random binary linear codes with block length and rate is obtained by choosing an binary parity check matrix , i.e., , each of whose entries is chosen i.i.d. Bernoulli. For such an ensemble, any non-zero binary word is a codeword with probability . For a given block code, let denote the minimum distance and the number of codewords with Hamming weight . A quick calculation shows that and that grows like with high probability. A typical code in this ensemble is defined to be one that has and . A simple Markov inequality shows that the probability that a code from this ensemble is atypical is at most . For the typical code over BSC(), the block error probability decays as where the exponent has been characterized in [25]. As has been noted in [25], these calculations can be easily extended to a wider class of channels. In particular, the class of MBIOS channels admits a particularly clean characterization. We present the following generalization of the result in [25] without proof.
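The fact that a nonzero word is a codeword with probability 2^(-(n-k)) for this ensemble, and hence that the average weight distribution is a scaled binomial, can be checked numerically; the block length, rate, seed, and number of trials below are illustrative choices.

```python
import numpy as np
from itertools import product
from math import comb

n, k = 10, 5
rng = np.random.default_rng(3)
X = np.array(list(product((0, 1), repeat=n)))   # all 2^n binary words
trials = 200
counts = np.zeros(n + 1)
for _ in range(trials):
    Hm = rng.integers(0, 2, size=(n - k, n))    # random parity check matrix
    is_cw = ~((X @ Hm.T) % 2).any(axis=1)       # words with zero syndrome
    counts += np.bincount(X[is_cw].sum(axis=1), minlength=n + 1)
avg = counts / trials

# Each nonzero word is a codeword w.p. 2^(-(n-k)), so the average number
# of codewords of weight w is C(n, w) * 2^(-(n-k)).
expected = [comb(n, w) * 2.0 ** -(n - k) for w in range(n + 1)]
assert avg[0] == 1.0                     # the zero word is always a codeword
assert abs(avg[5] - expected[5]) < 1.5   # empirical mean near C(10,5)/32 ~ 7.9
```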

Lemma VI.1

Consider a linear code with block length , rate and distance distribution such that

  1. if

for some . Let the channel be a MBIOS channel with Bhattacharya parameter . Then the block error probability, , under ML decoding is bounded as

(12)

where

(13)

and as .

Proof:

The proof is a straightforward generalization of the result in [25].

VI-B The Toeplitz Ensemble

In the causal case, fix an arbitrary decoding instant and consider the event that the earliest error happens at a delay . As seen before, the associated error probability depends on the relevant codebook and its distance distribution . Recall from Table II that

Due to the Toeplitz structure, we have . So, we drop the subscript in and write it as . Note that is determined by the matrix . Let be a given -dimensional binary word, i.e., , and write , where notionally corresponds to the encoder output bits during the time slot. Suppose , then it is easy to see that

Recall that .

Now observe that . This is same as the average weight distribution of the random binary linear code with a block length and rate . So, applying Lemma VI.1, we get the following result.

Theorem VI.2

For each rate and exponent , if is chosen from , then

where is the Shannon capacity of the channel and

(14)

The problem of stabilizing unstable scalar linear systems over noisy channels in the absence of feedback has been considered in [13]. [13] showed the existence of anytime reliable codes for and . The code there is not linear in general and its existence was not shown to hold with high probability. Theorem VI.2 proves the existence of linear anytime reliable codes for exponents, , up to . When , . So, Theorem VI.2 marks a significant improvement in the known thresholds for stabilizing unstable processes over noisy channels, as is demonstrated in Figures 3 and 4.

(a) Binary Erasure Channel,
(b) Binary Symmetric Channel,
Fig. 3: Comparing the thresholds obtained from Theorem VI.2 and Theorem 5.2 in [13]

VII Decoding over the Binary Erasure Channel

Owing to the simplicity of the erasure channel, it is possible to come up with an efficient way to perform maximum likelihood decoding at each time step. Consider an arbitrary decoding instant , let be the transmitted codeword and let denote the corresponding channel outputs. Recall that denotes the leading principal minor of . Let denote the erasures in and let denote the columns of that correspond to the positions of the erasures. Also, let denote the unerased entries of and let denote the columns of excluding . So, we have the following parity check condition on , . Since is known at the decoder, is known. Maximum likelihood decoding boils down to solving the linear equation . Due to the lower triangular nature of , unlike in the case of traditional block coding, this equation will typically not have a unique solution, since will typically not have full column rank. This is acceptable, as we are not interested in decoding the entire word correctly; we only care about decoding the earlier entries accurately. If , then corresponds to the earlier time instants while corresponds to the later time instants. The desired reliability requires one to recover with an exponentially smaller error probability than . Since is lower triangular, we can write as

(15)

Let denote the orthogonal complement of , i.e., . Then multiplying both sides of (15) with diag, we get

(16)

If has full column rank, then can be recovered exactly. The decoding algorithm now suggests itself: find the smallest possible such that has full column rank. It is outlined in Algorithm 1.

  1. Suppose, at time , the earliest uncorrected error is at a delay . Identify and as defined above.

  2. Starting with , partition

    where correspond to the erased positions up to delay .

  3. Check whether the matrix has full column rank.

  4. If so, solve for in the system of equations

  5. Increment and continue.

Algorithm 1 Decoder for the BEC

Note that one can equivalently describe the decoding algorithm in terms of the generator matrix and it will be very similar to Algorithm 1.
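A minimal Python sketch of erasure decoding in this spirit follows; the parity check matrix and received word are illustrative, and the smallest-delay search of Algorithm 1 is replaced by the equivalent test of which erased positions are uniquely determined after Gaussian elimination over GF(2).

```python
import numpy as np

def gf2_eliminate(A, s):
    """Row reduce [A | s] over GF(2); return reduced A, s and the pivot map."""
    A = A.copy() % 2
    s = s.copy() % 2
    rows, cols = A.shape
    pivot_of = {}
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        s[[r, piv]] = s[[piv, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                s[i] ^= s[r]
        pivot_of[c] = r
        r += 1
    return A, s, pivot_of

def decode_erasures(H, y):
    """ML decoding over the BEC: y has entries 0/1 or None (erased).
    Fills in every erasure whose value is uniquely determined."""
    erased = [i for i, v in enumerate(y) if v is None]
    known = [i for i, v in enumerate(y) if v is not None]
    # H y = 0  =>  H_e z = H_k y_k (syndrome computed from unerased bits)
    He = H[:, erased]
    syn = H[:, known] @ np.array([y[i] for i in known]) % 2
    A, s, pivot_of = gf2_eliminate(He, syn)
    free = [c for c in range(len(erased)) if c not in pivot_of]
    out = list(y)
    for c, r in pivot_of.items():
        # a pivot unknown is unique iff its row involves no free unknown
        if not any(A[r, f] for f in free):
            out[erased[c]] = int(s[r])
    return out

# Tiny two-check example (illustrative, not the paper's code):
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
y = [1, None, 0, 1]        # codeword 1 1 0 1 with one erasure
assert decode_erasures(H, y) == [1, 1, 0, 1]
```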

VII-A Encoding and Decoding Complexity

Consider the decoding instant and suppose that the earliest uncorrected erasure is at time . Then steps 2) and 3) in Algorithm 1 can be accomplished by just reducing into the appropriate row echelon form, which has complexity . That the earliest entry in is at time implies that it was not corrected at time , the probability of which is . Hence, if nothing more had to be done, the average decoding complexity would have been at most which is bounded and is independent of . In particular, the probability of the decoding complexity being would have been at most . But, in order to actually solve for in step 4), one needs to compute the syndromes and . It is easy to see that the complexity of this operation increases linearly in time . This is to be expected since the code has infinite memory. A similar computational complexity also plagues the encoder, for, the encoding operation at time is described by where denote the source bits and hence becomes progressively harder with .

We propose the following scheme to circumvent this problem in practice. We allow the decoder to periodically, say at () for appropriately chosen , provide feedback to the encoder on the position of the earliest uncorrected erasure which is, say at time . The encoder can use this information to stop encoding the source bits received prior to , i.e., for starting from time . In other words, for , . The decoder accordingly uses the new generator matrix starting from . In practice, this translates to an arrangement where the decoder sends feedback at time and can be sure that the encoder receives it by time . Such feedback, in the form of acknowledgements from the receiver to the transmitter, is common to most packet-based modern communication and networked systems for reasonable values of . Note that this form of feedback finds a middle ground between one extreme of having no feedback at all and another extreme where every channel output is fed back to the transmitter, the latter being impractical in most cases. The decoder proposed in Alg. 1 is easy to implement and its performance is simulated in Section XI.

VII-B Extension to Packet Erasures

The encoding and decoding algorithms presented so far have been developed for the case of bit erasures. But it is not difficult to see that the techniques generalize to the case of packet erasures. For example, for a packet length , what was one bit earlier will now be a block of bits. Each binary entry in the encoding/parity check matrix will now be an binary matrix. The rate will remain the same. So, at each time, packets each of length will be encoded to packets each of the same length . Recall that the anytime performance of the code is determined by the delay dependent codebook and its distance distribution . In the case of packet erasures, one can obtain analogous results by defining the Hamming distance of a codeword slightly differently. By viewing a codeword as a collection of packets, define its Hamming distance to be the number of nonzero packets. The definition of the delay dependent distance distribution will change accordingly. With this modification, one can easily apply the results developed in Sections IV, V and the decoding algorithm in Section VII above to the case of packet erasures.

VIII Sufficient Conditions for Stabilizability - Scalar Measurements

Recall that we do not assume any feedback about the channel outputs or the control inputs at the observer/encoder. This is the setup we imply whenever we say that no feedback is assumed. In this context [13] derives a sufficient condition for stabilizing scalar linear systems over noisy channels without feedback while [27] considers stabilizing vector valued processes in the presence of feedback. So, to the best of our knowledge, there are no results on stabilizing unstable vector valued processes over a noisy channel when the observer does not have access to either the control inputs or the channel outputs.

We will develop two sufficient conditions for stabilizing vector valued processes over noisy channels without feedback. The two sufficient conditions are based on two different estimation algorithms employed by the controller and neither is stronger than the other. We will then show in Section X-A that both sufficient conditions are asymptotically tight. For ease of presentation, we will treat the case of scalar and vector measurements separately. We will present the sufficient conditions for the case of scalar measurements here while vector measurements will be treated in Section IX.

Consider the unstable dimensional linear state space model in (3) with scalar measurements, i.e., , and . Suppose that the characteristic polynomial of is given by

Without loss of generality we assume that are in the following canonical form.

Owing to the duality between estimation and control, we can focus on the problem of tracking (3) over a noisy communication channel. For, if (3) can be tracked with an asymptotically finite mean squared error and if is stabilizable, then it is a simple exercise to see that there exists a control law that will stabilize the plant in the mean squared sense, i.e., . In particular, if the control gain is chosen such that is stable, then will stabilize the plant, where is the estimate of using channel outputs up to time . In control parlance, this amounts to verifying that the control input does not have a dual effect [28]. Hence, in the rest of the analysis, we will focus on tracking (3). The control input therefore is assumed to be absent, i.e., .

VIII-A Hypercuboidal Filter

We bound the set of all possible states that are consistent with the estimates of the quantized measurements using a hypercuboid, i.e., a region of the form , where and the inequalities are component-wise.

Since we assume that the initial state has bounded support, we can write and suppose using the channel outputs received till time , we have . Since , the measurement update provides information of the form while there will be no additional information on other components of . Note that an estimate of the state is given by the midpoint of this region, i.e., . If we define , then the estimation error is asymptotically bounded if every component of is asymptotically bounded. Using such a filter, we can stabilize the system in the mean squared sense over a noisy channel provided that the rate and exponent of the anytime reliable code used to encode the measurements satisfy the following sufficient condition

Theorem VIII.1

It is possible to stabilize (3) in the mean squared sense with an anytime code provided

(17)
Proof:

See Appendix -D

Before proceeding further, we will provide a brief sketch of the proof. Note that is a measure of the uncertainty in the state estimate. From Lemma .2, . The anytime exponent is determined by the growth of in the absence of measurements, hence the bound . The bound on the rate is determined by how fine the quantization needs to be for to be bounded asymptotically. It will be shown in Section -G that is always larger than . By using an alternate filtering algorithm, which we call the Ellipsoidal filter, one can improve this requirement on the exponent from to . But this will come at the price of a larger rate.
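The time and measurement updates of the hypercuboidal filter can be sketched as follows; the dynamics matrix A, the noise bound, and the quantized measurement interval are illustrative assumptions, and interval arithmetic propagates the box through the dynamics.

```python
import numpy as np

# Hypercuboidal filter sketch: the state is known to lie in the box
# {x : lo <= x <= hi} (component-wise). The matrix A, the noise bound,
# and the measurement interval below are illustrative assumptions.
A = np.array([[1.5, 1.0],
              [0.0, 1.5]])
w_bound = 0.1     # process noise bounded component-wise by w_bound

def time_update(lo, hi):
    """Propagate the box through x+ = A x + w using interval arithmetic."""
    Ap, Am = np.maximum(A, 0.0), np.minimum(A, 0.0)
    return Ap @ lo + Am @ hi - w_bound, Ap @ hi + Am @ lo + w_bound

def measurement_update(lo, hi, y_lo, y_hi):
    """A quantized measurement localizes the first component to [y_lo, y_hi]."""
    lo, hi = lo.copy(), hi.copy()
    lo[0], hi[0] = max(lo[0], y_lo), min(hi[0], y_hi)
    return lo, hi

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = time_update(lo, hi)                    # box grows under the dynamics
lo, hi = measurement_update(lo, hi, -0.5, 0.5)  # and shrinks with measurements
x_hat = (lo + hi) / 2                           # estimate: midpoint of the box
```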

VIII-B Ellipsoidal Filter

One can alternately bound the set of all possible states that are consistent with the estimates of the quantized measurements using an ellipsoid

This can be seen as an extension of the technique proposed in [29] to filtering using quantized measurements. If , . So, let .

Let and suppose using the channel outputs received till time , we have . Since , the measurement update provides information of the form , which one may call a slab. would then be an ellipsoid that contains the intersection of the above slab with , in particular one can set it to be the minimum volume ellipsoid covering this intersection. Lemma .4 gives a formula for the minimum volume ellipsoid covering the intersection of an ellipsoid and a slab. For the time update, it is easy to see that for any and , contains the state whenever contains