Remote estimation over a packet-drop channel with Markovian state
Abstract
We investigate a remote estimation problem in which a transmitter observes a Markov source and chooses the power level at which to transmit it over a time-varying packet-drop channel. The channel is modeled as a channel with Markovian state, where the packet drop probability depends on the channel state and the transmit power. A receiver observes the channel output and the channel state and estimates the source realization. The receiver also feeds back the channel state and an acknowledgment of successful reception to the transmitter. We consider two models for the source: finite state Markov chains and first-order autoregressive processes. For the first model, using ideas from team theory, we establish the structure of optimal transmission and estimation strategies and identify a dynamic program to determine optimal strategies with that structure. For the second model, we assume that the noise process has a unimodal and symmetric distribution. Using ideas from majorization theory, we show that the optimal transmission strategy is symmetric and monotonic and the optimal estimation strategy is like a Kalman filter. Consequently, when there are a finite number of power levels, the optimal transmission strategy may be described using thresholds that depend on the channel state. Finally, we propose a simulation-based approach (Renewal Monte Carlo) to compute the optimal thresholds and optimal performance and elucidate the algorithm with an example.
I Introduction
I-A Motivation and literature overview
Networked control systems are distributed systems in which plants, sensors, controllers, and actuators are interconnected via a communication network. Such systems arise in a variety of applications such as the Internet of Things (IoT), smart grids, vehicular networks, and robotics. One of the fundamental problems in networked control systems is remote estimation: how should a sensor (which observes a stochastic process) transmit its observations to a receiver (which estimates the state of the stochastic process) when there is a constraint on communication, either in terms of communication cost or communication rate?
In this paper, we consider a remote estimation system as shown in Fig. 1. The system consists of a sensor and an estimator connected over a time-varying wireless fading channel. The sensor observes a Markov process and chooses the power level at which to transmit its observation to the remote estimator. Communication is noisy and the transmitted packet may get dropped with a probability that depends on the channel state and the power level. When the packet is dropped, the receiver generates an estimate of the state of the source based on the previously received packets. The objective is to choose power control and estimation strategies to minimize a weighted sum of transmission power and estimation error.
Several variations of the above model have been considered in the literature. Models with noiseless communication channels have been considered in [1, 2, 3, 4, 5, 6]. Since the channel is noiseless, these papers assume that there are only two power levels: power level 0, which corresponds to not transmitting, and power level 1, which corresponds to transmitting. Under slightly different modeling assumptions, these papers identify the structure of optimal transmission and estimation strategies for first-order autoregressive sources with unimodal noise and for higher-order autoregressive sources with orthogonal dynamics and isotropic Gaussian noise. It is shown that the optimal transmission strategy is threshold-based, i.e., the sensor transmits whenever the current error is greater than a threshold. It is also shown that the optimal estimation strategy is like a Kalman filter: when the receiver receives a packet, the estimate is the received symbol; when it does not receive the packet, the estimate is the one-step prediction based on the previous symbol. Quite surprisingly, these results show that there is no advantage in trying to extract information about the source realization from the choice of the power levels. The transmission strategy at the sensor is also called event-triggered communication because the sensor transmits when the event 'error is greater than a threshold' is triggered. Models with i.i.d. packet-drop channels are considered in [7, 8, 9], where it is assumed that the transmitter has two power levels: on or off. Remote estimation over an additive noise channel is considered in [10].
In this paper, we consider a remote estimation problem over a packet-drop channel with Markovian state. We assume that the receiver observes the channel state and feeds it back to the transmitter with one-step delay. Preliminary results for this model are presented in [11], where attention was restricted to a binary state channel with two input power values (on or off). In the current paper, we consider an arbitrary number of channel states and power levels. A related paper is [12], in which remote estimation over packet-drop channels with Markovian state is considered. There, it is assumed that both the sensor and the receiver know the channel state, and it is shown that optimal estimation strategies are like a Kalman filter. A detailed comparison with [12] is presented in Section V-A.
Several approaches for computing the optimal transmission strategies have been proposed in the literature. For noiseless channels, these include dynamic programming based approaches [4, 5, 13], approximate dynamic programming based approaches [14], and renewal theory based approaches [15]. It is shown in [16] that for event-triggered scheduling, the posterior density follows a generalized closed skew normal (GCSN) distribution. For Markovian channels (when the state is not observed), a change of measure technique to evaluate the performance of an event-triggered scheme is presented in [17]. In this paper, we present a renewal theory based Monte Carlo approach for computing the optimal thresholds. A preliminary version of the results was presented in [9] for a channel with i.i.d. packet drops.
I-B Contributions
In this paper, we investigate team-optimal transmission and estimation strategies for remote estimation over time-varying packet-drop channels. We consider two models for the source: a finite state Markov source and a first-order autoregressive source (over either the integers or the reals). Our main contributions are as follows.

For finite state Markov sources, we identify sufficient statistics for both the transmitter and the receiver and obtain a dynamic programming decomposition to compute optimal transmission and estimation strategies.

For autoregressive sources, we identify qualitative properties of optimal transmission and estimation strategies. In particular, we show that the optimal estimation strategy is like a Kalman filter and the optimal transmission strategy depends only on the current source realization and the previous channel state (and does not depend on the receiver's belief about the source). Furthermore, when the channel state is stochastically monotone (see Assumption III-B for the definition), for any value of the channel state the optimal transmission strategy is symmetric and quasiconvex in the source realization. Consequently, when the number of power levels is finite, the optimal transmission strategy is threshold-based, where the thresholds depend only on the previous channel state.

We show that the above qualitative properties extend naturally to infinite horizon models.

For infinite horizon models, we present a renewal theory based Monte Carlo algorithm to evaluate the performance of any threshold-based strategy. We then combine it with a simultaneous perturbation based stochastic approximation algorithm to compute the optimal thresholds. We illustrate our results with a numerical example of a remote estimation problem with a transmitter with two power levels and a Gilbert-Elliott erasure channel.

We show that the problem of transmitting over one of several available i.i.d. packet-drop channels (at a constant power level) can be considered as a special case of our model. We show that there exist thresholds such that it is optimal to transmit over a given channel when the error state lies between the corresponding thresholds. See Sec. V-C for details.
I-C Notation
We use uppercase letters (e.g., $X$, $S$) to denote random variables and the corresponding lowercase letters (e.g., $x$, $s$) to denote their realizations. $\mathbb{Z}$, $\mathbb{Z}_{\ge 0}$, and $\mathbb{Z}_{>0}$ denote, respectively, the sets of integers, non-negative integers, and positive integers. Similarly, $\mathbb{R}$, $\mathbb{R}_{\ge 0}$, and $\mathbb{R}_{>0}$ denote, respectively, the sets of reals, non-negative reals, and positive reals. For any set $A$, $\mathbb{1}_A$ denotes its indicator function, i.e., $\mathbb{1}_A(x)$ is $1$ if $x \in A$, else $0$. $|A|$ denotes the cardinality of a set $A$ and $\Delta(A)$ denotes the space of probability distributions on $A$. For any vector $v$, $v_i$ denotes its $i$th component. For any vector $v$ and an interval $[a, b]$ of $\mathbb{R}$, the projection of $v_i$ onto $[a, b]$ equals $a$ if $v_i < a$; equals $v_i$ if $v_i \in [a, b]$; and equals $b$ if $v_i > b$. Given a Borel subset $B$ and a density $\pi$, we use the notation $\pi(B) = \int_B \pi(x)\,dx$. For any vector $v$, $\nabla_v$ denotes the derivative with respect to $v$.
I-D The communication system
We consider a remote estimation system shown in Fig. 1. The different components of the system are explained below.
I-D1 Source model
The source $\{X_t\}_{t \ge 0}$, $X_t \in \mathcal{X}$, is a first-order time-homogeneous Markov chain. We consider two models for the source.

Finite state Markov source. In this model, we assume that $\mathcal{X}$ is a finite set and denote the state transition matrix by $P$, i.e., for any $x, x' \in \mathcal{X}$, $P_{xx'} = \mathbb{P}(X_{t+1} = x' \mid X_t = x)$.

First-order autoregressive source. In this model, we assume that $\mathcal{X}$ is either $\mathbb{Z}$ or $\mathbb{R}$. The initial state $X_0$ is given and, for $t \ge 0$, the source evolves as
$$X_{t+1} = a X_t + W_t, \qquad (1)$$
where $a \in \mathcal{X}$ and $\{W_t\}_{t \ge 0}$ is an i.i.d. sequence, where $W_t$ is distributed according to a symmetric and unimodal distribution $\mu$.¹ (¹With a slight abuse of notation, when $\mathcal{X} = \mathbb{R}$, we consider $\mu$ to be a probability density function and, when $\mathcal{X} = \mathbb{Z}$, we consider $\mu$ to be a probability mass function.)
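As a concrete illustration, the autoregressive dynamics in (1) can be simulated in a few lines of Python. The Gaussian noise, the parameter values, and the function name are illustrative choices for this sketch, not quantities from the paper:

```python
import random

def simulate_ar_source(a=0.9, sigma=1.0, horizon=5, x0=0.0, seed=0):
    """Simulate the first-order autoregressive source of (1):
    X_{t+1} = a * X_t + W_t, with W_t i.i.d. Gaussian noise
    (a symmetric and unimodal distribution)."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(horizon):
        xs.append(a * xs[-1] + rng.gauss(0.0, sigma))
    return xs

trajectory = simulate_ar_source()
```

The trajectory contains the initial state followed by `horizon` steps of the recursion; any other symmetric unimodal noise (e.g., Laplace) could be substituted for the Gaussian.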
I-D2 Channel model
The channel is a packet-drop channel with state. The state process $\{S_t\}_{t \ge 0}$, $S_t \in \mathcal{S}$, is a first-order time-homogeneous Markov chain with transition probability matrix $Q$. We assume that $\mathcal{S}$ is finite. This is a standard model for time-varying wireless channels [18, 19].
The input alphabet of the channel is $\mathcal{X}$ and the output alphabet is $\mathcal{X} \cup \{\mathfrak{E}\}$, where the symbol $\mathfrak{E}$ denotes that no packet was received. At time $t$, the channel output is denoted by $Y_t$.
The packet drop probability depends on the input power $U_t$, which takes values in the set $\mathcal{U}$ of allowed power levels. We assume that $\mathcal{U}$ is a subset of $\mathbb{R}_{\ge 0}$ that contains $0$ and is either a finite set of the form $\{0, u_1, \dots, u_n\}$ or an interval of the form $[0, u_{\max}]$, i.e., $\mathcal{U}$ is uncountable. When $U_t = 0$, it means that the transmitter does not send a packet. In particular, for any realization $(x, s, u)$ of $(X_t, S_t, U_t)$, we have
$$\mathbb{P}(Y_t = \mathfrak{E} \mid X_t = x, S_t = s, U_t = u) = p(s, u) \qquad (2)$$
and
$$\mathbb{P}(Y_t = x \mid X_t = x, S_t = s, U_t = u) = 1 - p(s, u), \qquad (3)$$
where $p(s, u)$ is the probability that a packet transmitted with power level $u$ when the channel is in state $s$ is dropped. We assume that the set $\mathcal{S}$ of channel states is an ordered set, where a larger state means a better channel quality. Then, for all $s \in \mathcal{S}$, $p(s, u)$ is (weakly) decreasing in $u$ with $p(s, 0) = 1$ and $p(s, u) < 1$ for $u > 0$. Furthermore, we assume that for all $u \in \mathcal{U}$ with $u > 0$, $p(s, u)$ is decreasing in $s$.
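The channel model (2)-(3) and the monotonicity assumptions on the drop probability can be sketched as follows. The specific table of drop probabilities, the use of `None` for the 'no packet' symbol, and all names are hypothetical:

```python
import random

# Hypothetical drop probabilities p(s, u): rows are channel states (a larger
# index means a better channel), columns are power levels {0, u1, u2}.
# Power level 0 means "do not transmit", so p(s, 0) = 1 for every state.
DROP_PROB = [
    [1.0, 0.7, 0.4],  # bad channel state
    [1.0, 0.3, 0.1],  # good channel state
]

def channel(x, s, u_index, rng):
    """One use of the packet-drop channel (cf. (2)-(3)): return the input
    symbol x on success, or None (standing in for the 'no packet' symbol)
    on a drop."""
    return None if rng.random() < DROP_PROB[s][u_index] else x
```

Note that each row of `DROP_PROB` is weakly decreasing in the power level and, for positive power, each column is decreasing in the channel state, matching the monotonicity assumptions above.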
I-E The decision makers and the information structure
There are two decision makers in the system: the transmitter and the receiver. At time $t$, the transmitter chooses the transmit power $U_t$ while the receiver chooses an estimate $\hat{X}_t$. Let $I^1_t$ and $I^2_t$ denote the information sets at the transmitter and the receiver, respectively.
The transmitter observes the source realization $X_t$. In addition, there is one-step delayed feedback of $(Y_{t-1}, S_{t-1})$ from the receiver to the transmitter.² (²Note that feedback of $Y_{t-1}$ requires 1 bit, to indicate whether the packet was received or not, and feedback of $S_{t-1}$ requires $\lceil \log_2 |\mathcal{S}| \rceil$ bits.) Thus, the information available at the transmitter is
$$I^1_t = \{X_{0:t}, U_{0:t-1}, Y_{0:t-1}, S_{0:t-1}\}.$$
The transmitter chooses the transmit power according to
$$U_t = f_t(I^1_t), \qquad (4)$$
where $f_t$ is called the transmission rule at time $t$. The collection $f = (f_t)_{t \ge 0}$ for all time is called the transmission strategy.
The receiver observes $Y_t$ and, in addition, observes the channel state $S_t$. Thus, the information available at the receiver is
$$I^2_t = \{Y_{0:t}, S_{0:t}\}.$$
The receiver chooses the estimate according to
$$\hat{X}_t = g_t(I^2_t), \qquad (5)$$
where $g_t$ is called the estimation rule at time $t$. The collection $g = (g_t)_{t \ge 0}$ for all time is called the estimation strategy.
The collection $(f, g)$ is called a communication strategy.
I-F The performance measures and problem formulation
At each time $t$, the system incurs two costs: a transmission cost $\lambda(U_t)$ and a distortion or estimation error $d(X_t, \hat{X}_t)$. Thus, the per-step cost is $\lambda(U_t) + d(X_t, \hat{X}_t)$.
We assume that $\lambda(u)$ is (weakly) increasing in $u$ with $\lambda(0) = 0$. For the autoregressive source model, we assume that the distortion is given by $d(X_t - \hat{X}_t)$, where $d(\cdot)$ is even and quasiconvex with $d(0) = 0$.
We are interested in the following optimization problems:
Problem 1 (Finite horizon)
In the model described above, identify a communication strategy $(f, g)$ that minimizes the total cost given by
$$J_T(f, g) = \mathbb{E}\bigg[\sum_{t=0}^{T-1} \big(\lambda(U_t) + d(X_t, \hat{X}_t)\big)\bigg]. \qquad (6)$$
Problem 2 (Infinite horizon)
In the model described above, given a discount factor $\beta \in (0, 1]$, identify a communication strategy $(f, g)$ that minimizes the total cost given as follows:

For $\beta \in (0, 1)$,
$$J_\beta(f, g) = \mathbb{E}\bigg[\sum_{t=0}^{\infty} \beta^t \big(\lambda(U_t) + d(X_t, \hat{X}_t)\big)\bigg]. \qquad (7)$$
For $\beta = 1$,
$$J_1(f, g) = \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\bigg[\sum_{t=0}^{T-1} \big(\lambda(U_t) + d(X_t, \hat{X}_t)\big)\bigg]. \qquad (8)$$
Remark 1
In the above model, it has been assumed that whenever the transmitter transmits (i.e., $U_t \neq 0$), it sends the source realization uncoded. This is without loss of generality because the channel input alphabet is the same as the source alphabet and the channel is symmetric. For such models, coding does not improve performance [20].
Problems 1 and 2 are decentralized stochastic control problems. The main conceptual difficulty in solving such problems is that the information available to the decision makers, and hence the domain of their strategies, grows with time, making the optimization problem combinatorial. One can circumvent this issue by identifying suitable information states at the decision makers, which do not grow with time. In the following section, we discuss one such method to establish the structural results.
II Main results for finite state Markov sources
II-A Structure of optimal communication strategies
We establish two types of structural results. First, we use the person-by-person approach to show that part of the transmitter's information is irrelevant (Lemma II-A); then, we use the common information approach of [21] and establish a belief state for the common information between the transmitter and the receiver (Theorem II-A).
Lemma
For any estimation strategy of the form (5), there is no loss of optimality in restricting attention to transmission strategies of the form
(9) 
The proof proceeds by establishing that an appropriate state process is a controlled Markov process with respect to the transmitter's decisions. See Appendix A for details.
For any strategy of the form (9) and any realization of , define as
Furthermore, define conditional probability measures and on as follows: for any ,
We call the former the pre-transmission belief and the latter the post-transmission belief. Note that when the conditioning information consists of random variables, then the beliefs are also random variables (taking values in the space of probability distributions), which we denote by the corresponding uppercase letters.
For the ease of notation, define as follows:
(10) 
Furthermore, define as follows:
(11) 
Then, using Bayes' rule, one can show the following:
Lemma
Given any transmission strategy of the form (9):

there exists a function such that
(12) 
there exists a function such that
(13)
Note that in (12) the belief is treated as a row vector, and in (13) the update involves a Dirac measure centered at the received symbol. The update equations (12) and (13) are standard nonlinear filtering equations. See the supplementary material for the proof.
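To make the filtering recursions concrete, here is a minimal sketch of the prediction and update steps corresponding to (12)-(13) for a source alphabet {0, ..., n-1}. The function names, the encoding of a drop as `None`, and the signatures of the transmission rule `f` (mapping a source state to a power level) and of `drop_prob` (the drop probability at the current channel state) are our own scaffolding, not notation from the paper:

```python
def predict(pi_post, P):
    """Pre-transmission belief (cf. (12)): propagate the post-transmission
    belief, treated as a row vector, through the transition matrix P."""
    n = len(P[0])
    return [sum(pi_post[i] * P[i][j] for i in range(len(pi_post)))
            for j in range(n)]

def update(pi_pre, y, f, drop_prob):
    """Post-transmission belief (cf. (13)). On reception (y is not None) the
    belief is a Dirac measure at the received symbol; on a drop, Bayes' rule
    reweights each state x by the probability of observing no packet when
    the transmission rule f prescribes power f(x)."""
    if y is not None:
        return [1.0 if x == y else 0.0 for x in range(len(pi_pre))]
    weights = [pi_pre[x] * drop_prob(f(x)) for x in range(len(pi_pre))]
    total = sum(weights)
    return [w / total for w in weights]
```

A dropped packet is still informative: states for which the rule prescribes a high power (and hence a low drop probability) lose posterior mass, exactly as the nonlinear filter dictates.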
Theorem
In Problem 1 with a finite state Markov source, we have the following:

Structure of optimal strategies: There is no loss of optimality in restricting attention to transmission and estimation strategies of the form:
(14) (15) 
Dynamic program: Let denote the space of probability distributions on . Define value functions and as follows: for any ,
(16) and for
(17) (18) where
Let denote the arg min of the right hand side of (17) and . Then, the optimal transmission strategy is given by
and the optimal estimation strategy is given by .
Remark 2
Remark 3
Remark 4
Note that the dynamic program in Theorem II-A is similar to a dynamic program for a partially observable Markov decision process (POMDP) with a finite state space and a finite or uncountable action space (see Remark 3). Thus, the dynamic program can be extended to the infinite horizon discounted cost model after verifying standard assumptions. However, doing so does not provide any additional insight, so we do not present infinite horizon results for this model. We do so for the autoregressive source model later in the paper, where we provide an algorithm to find the optimal time-homogeneous strategy for infinite horizon criteria.
III Main results for autoregressive sources
III-A Structure of optimal strategies for the finite horizon model
We start with a change of variables. Define a process as follows: and for ,
Next, define processes , , which we call the error processes and as follows:
The processes and are related as follows: , , and for ,
(19) 
The above dynamics may be rewritten as
(20) 
Since , we have that . Thus, with this change of variables, the per-step cost may be written as .
Note that is a deterministic function of . Hence, at time , is measurable at the transmitter and thus is measurable at the transmitter. Moreover, at time , is measurable at the receiver.
Lemma
For any transmission and estimation strategies of the form (9) and (5), there exists an equivalent transmission and estimation strategy of the form:
(21)  
(22) 
Moreover, for any transmission and estimation strategies of the form (21)–(22), there exist transmission and estimation strategies of the form (9) and (5) that are equivalent.
The proof is given in Appendix C.
An implication of Lemma III-A is that we may assume that the transmitter transmits and the receiver estimates
For this model, we can further simplify the structures of optimal transmitter and estimator as follows.
Theorem
In Problem 1 with a first-order autoregressive source, we have the following:

Structure of optimal estimation strategy: At each time , there is no loss of optimality in choosing the estimates as
or, equivalently, choosing the estimates as: , and for ,
(23) 
Structure of optimal transmission strategy: There is no loss of optimality in restricting attention to transmission strategies of the form
(24) 
Dynamic programming decomposition: Recursively define the following value functions: for any and ,
(25) and for , (26) where
Let denote the arg min of the right hand side of (26). Then the transmission strategy is optimal.
See Appendix D for the proof.
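The Kalman-filter like estimator of (23) admits a one-line sketch; encoding a drop as `None` and passing the source coefficient `a` from (1) explicitly are our choices for this illustration:

```python
def kalman_like_estimate(y, prev_estimate, a):
    """Kalman-filter like estimator (cf. (23)): on reception the estimate is
    the received symbol; on a drop it is the one-step prediction obtained by
    scaling the previous estimate by the source coefficient a."""
    return y if y is not None else a * prev_estimate
```

Note that the estimator never looks at the transmission strategy, reflecting the structural result that the receiver gains nothing by trying to decode information from the choice of power levels.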
III-B Monotonicity and quasiconvexity of the optimal solution
For autoregressive sources we can establish monotonicity and quasiconvexity of the optimal solution. To that end, let us assume the following.
Assumption
The channel transition matrix $Q$ is stochastically monotone, i.e., for all $s_1, s_2 \in \mathcal{S}$ such that $s_1 \le s_2$ and for any $s^\circ \in \mathcal{S}$,
$$\sum_{s' \ge s^\circ} Q_{s_1 s'} \le \sum_{s' \ge s^\circ} Q_{s_2 s'}.$$
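Assumption III-B can be checked numerically. The sketch below tests whether the rows of a transition matrix are ordered by first-order stochastic dominance, which is our reading of the stochastic monotonicity condition; the function name and tolerance are illustrative:

```python
def is_stochastically_monotone(Q):
    """Return True if each row of Q first-order stochastically dominates the
    previous one: for s1 <= s2 and every k,
    sum_{j >= k} Q[s1][j] <= sum_{j >= k} Q[s2][j]."""
    for s in range(len(Q) - 1):
        for k in range(len(Q[s])):
            if sum(Q[s][k:]) > sum(Q[s + 1][k:]) + 1e-12:
                return False
    return True
```

For example, a two-state Gilbert-Elliott style chain with rows ordered from the bad to the good state, such as [[0.9, 0.1], [0.2, 0.8]], satisfies the condition, while swapping the rows violates it.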
Theorem
For any , we have the following:

For all , is even and quasiconvex in .
Furthermore, under Assumption IIIB,

For every , is decreasing in .

For every , the transmission strategy is even and quasiconvex in .
Sufficient conditions under which the value function and the optimal strategy are even and quasiconvex are identified in [22, Theorem 1]. Properties 1 and 3 follow because the above model satisfies these sufficient conditions. Property 2 follows from standard stochastic monotonicity arguments. The details are presented in the supplementary material.
An immediate consequence of Theorem IIIB is the following:
Corollary
Suppose that Assumption III-B is satisfied and $\mathcal{U}$ is a finite set given by $\{u_0, u_1, \dots, u_n\}$. For each channel state, define thresholds as follows.³ (³Note that Theorem III-B implies that the thresholds are well defined for any channel state.)
For ease of notation, define .
Then, the optimal strategy is a threshold-based strategy given as follows: for any , and ,
(27) 
Some remarks

Since the distortion function is even and quasiconvex, we can write the threshold conditions
in (27) as
Thus, if we define distortion levels , then we can say that the optimal strategy is to transmit at power level if .

When , the update of the optimal estimate is the same as the update equation of the Kalman filter. For this reason, we refer to the estimation strategy (23) as a Kalman-filter like estimator.
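The threshold rule (27) can be sketched as follows. Here `thresholds` is the sorted list of thresholds for the relevant (previous) channel state, and the returned index selects among the power levels $u_0 < u_1 < \dots < u_n$; the encoding is ours:

```python
def threshold_policy(e, thresholds):
    """Threshold-based transmission (cf. (27)): with sorted thresholds
    [k1, ..., kn], use power level index i when k_i <= |e| < k_{i+1}
    (and the largest level when |e| >= kn). The rule is symmetric in e,
    consistent with the even, quasiconvex structure of the strategy."""
    magnitude = abs(e)
    level = 0
    for i, k in enumerate(thresholds, start=1):
        if magnitude >= k:
            level = i
    return level
```

A different list of thresholds would be supplied for each channel state, so the overall policy is a small lookup table of such lists.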
III-C Generalization to the infinite horizon model
Given a communication strategy , let and denote respectively the expected distortion and expected transmitted power when the system starts in state , i.e., for ,
and for ,
Then, the performance of the strategy when the system starts in state is given by
The structure of the optimal estimator, as established in Theorem III-A, continues to hold for the infinite horizon setup as well. Thus, we can restrict attention to the Kalman-filter like estimator given by (23) and look at the problem of finding the best-response transmission strategy. This is a single-agent stochastic control problem. If the per-step distortion is unbounded, then we need the following assumption (which implies that there exists a strategy whose performance is bounded) for the infinite horizon problem to be meaningful.
Assumption
Let denote the transmission strategy that always transmits at power level and denote the Kalman-filter like strategy given by (23). Then, for given , and for all and , .
Assumption III-C is always satisfied if the distortion is bounded. For , and , the condition is sufficient for Assumption III-C to hold (see [23, Theorem 8] and [24, Corollary 12]). Similar sufficient conditions are given in [25, Theorem 1] for vector-valued Markov source processes with a Markovian packet-drop channel.
We now state the main theorem of this section.
Theorem
In Problem 2 with first-order autoregressive processes, under Assumption III-C, we have the following:

Structure of optimal estimation strategy: The time-homogeneous strategy , where is given by (23), is optimal.

Structure of optimal transmission strategy: There is no loss of optimality in restricting attention to time-homogeneous transmission strategies of the form

Dynamic programming decomposition: For , let be the smallest bounded solution of the following fixed point equation: for all and ,
(28) where
Let denote the arg min of the right hand side of (28). Then the transmission is optimal.

Results for $\beta = 1$: Let $f^*$ be any limit point of the discounted-optimal strategies as $\beta \uparrow 1$. Then, $f^*$ is an optimal strategy for Problem 2 with $\beta = 1$.
The proof is given in Appendix E.
Remark 5
We are not asserting that the dynamic program (28) has a unique fixed point. To make such an assertion, we would need to check the sufficient conditions of the Banach fixed point theorem. These conditions [26] are harder to verify than the sufficient conditions (P1)-(P3) of Proposition E-B that we verify in Appendix E.
Corollary
The monotonicity properties of Theorem III-B hold for the infinite horizon value function and transmission strategy as well.
An immediate consequence of Corollary III-C is the following:
IV Computing optimal thresholds for autoregressive sources with finite actions
Suppose the set of power levels is finite and given by
with and . From Corollary III-C, we know that the optimal strategy for Problem 2 is a time-homogeneous threshold-based strategy of the form (27). Let denote the thresholds and denote the strategy (29). In this section, we first derive formulas for computing the performance of a general threshold-based strategy of the form (27) and then propose a stochastic approximation based algorithm to identify the optimal thresholds.
It is conceptually simpler to work with a post-decision model where the pre-decision state is and the post-decision state is given by (19). The timeline of the various system variables is shown in Fig. 2. In this model, the per-step cost is given by .⁵ (⁵From Theorem III-A, we have that . Thus, .)
IV-A Performance of an arbitrary threshold-based strategy
For , pick a reference channel state . Given an arbitrary threshold-based strategy , suppose the system starts in state and follows strategy . Then, the process is a Markov process. Let and for let
denote the stopping times when the Markov process revisits the reference state. We say that the Markov process regenerates at these times and refer to the interval between successive regeneration times as the $n$th regenerative cycle.
Define the following:

: the expected cost during a regenerative cycle, i.e.,
(30) 
: the expected time during a regenerative cycle, i.e.,
(31)
Using ideas from renewal theory, we have the following.
Theorem
For any , the performance of the threshold-based strategy is given by
(32) 
See Appendix F for the proof.
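For the average cost case, the ratio formula (32) suggests a simple Renewal Monte Carlo estimator: simulate complete regenerative cycles and divide the accumulated cycle cost by the accumulated cycle length. The interface below (a `step` function returning a per-step cost and the next state, and a predicate marking visits to the reference state) is our own scaffolding, not the paper's algorithm in full:

```python
import random

def renewal_monte_carlo(step, is_reference, initial_state,
                        num_cycles=100, seed=0):
    """Estimate the long-run average cost as
    (sampled cost per regenerative cycle) / (sampled cycle length),
    cf. (32) with beta = 1. `step(state, rng)` returns (cost, next_state);
    `is_reference(state)` is True when the process regenerates."""
    rng = random.Random(seed)
    total_cost, total_time = 0.0, 0
    state = initial_state
    for _ in range(num_cycles):
        while True:
            cost, state = step(state, rng)
            total_cost += cost
            total_time += 1
            if is_reference(state):
                break
    return total_cost / total_time
```

Because cycles are i.i.d., the estimator is consistent as the number of cycles grows, and its variance can be reduced by simulating more cycles rather than longer runs.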
IV-B Necessary condition for optimality
In order to find the optimal thresholds, we first observe the following.
Lemma
For any , and are differentiable with respect to . Consequently, is also differentiable.
The proof of Lemma IV-B follows from first principles using an argument similar to that in the supplementary material of [15].
Let , and denote the derivatives of , and respectively. Then, a necessary condition for optimality is the following.
Proposition
A necessary condition for thresholds to be optimal is that , where
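The necessary condition above, combined with the Renewal Monte Carlo performance estimates, suggests a gradient-based search over the thresholds. Below is a minimal sketch of one simultaneous perturbation stochastic approximation (SPSA) step; the performance function `J`, the step sizes, and the projection onto nonnegative thresholds are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def spsa_step(thresholds, J, step_size=0.1, perturb=0.01, rng=None):
    """One SPSA iteration: perturb all thresholds simultaneously with a
    random sign vector, estimate the gradient of J from two evaluations,
    and take a projected descent step (thresholds kept nonnegative)."""
    rng = rng or random.Random(0)
    delta = [rng.choice([-1.0, 1.0]) for _ in thresholds]
    j_plus = J([k + perturb * d for k, d in zip(thresholds, delta)])
    j_minus = J([k - perturb * d for k, d in zip(thresholds, delta)])
    scale = (j_plus - j_minus) / (2.0 * perturb)
    return [max(0.0, k - step_size * scale / d)
            for k, d in zip(thresholds, delta)]
```

In practice, `J` would be the Renewal Monte Carlo estimate of the performance of the threshold strategy, and the step and perturbation sizes would decay over iterations as in standard stochastic approximation.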