Timing Channel: Achievable Rate in the Finite Block-Length Regime


1 Introduction

2 Basic Definitions and Conventions

  • Symbols denoting random variables are capitalized, while symbols denoting their realizations are not.

  • Bold face is used for vectors.

  • Blackboard bold or non-italic face is used for sets, and calligraphic face for systems of sets.

  • We use as a shorthand for the difference for .

  • is the set of all integers and .

  • is the set .

  • denotes the set .

  • Given the input and output sets and a channel is a function such that for each is a probability measure on . is some σ-algebra over .

  • We define the finite sets and to be and , respectively, and associate the σ-algebras and with them. indicates the power set.

  • Given a distribution on we define the distribution and the function for and . This function is called the information density.

  • The expected value of a random variable is denoted by .

  • The mutual information between two random vectors and is .

  • The capacity of the channel is where is such that the rate does not exceed .

  • The binary entropy function .

  • We drop subscripts whenever they are clear from the context. For example .

3 System Description and Preliminaries

The communication channel we consider is an interesting example of a channel with memory. It is essentially a probabilistic single-server queueing system, with the length of the queue being the memory of the channel. At each discrete time instant the random variable , indicates whether there was an arrival at the back of the queue at time . The initial length of the queue is a non-negative integer-valued random variable with distribution . The variable together with the vector , is considered the channel input vector . We define the random variables and for such that

(1)
(2)
(3)

where the binary random variables are conditionally independent given and are distributed according to the transition probability function

(4)

The vector is considered the channel output vector .

Clearly, () counts the total number of arrivals (departures), denotes the length of the queue at time , and indicates whether there was a departure from the front of the queue at time . The described relationships are illustrated in Figure 1.


Figure 1: Timing Channel Diagram
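
To make the dynamics described above concrete, the following Python sketch simulates the queue together with its arrival and departure processes. It is a minimal illustration rather than a verbatim implementation of Equations 1-4: the within-slot ordering (an arrival joins the queue before the service attempt of the same slot) and the name simulate_timing_channel are assumptions made for the example.

import random

def simulate_timing_channel(lam, mu, n, q0=0, seed=0):
    """Simulate n slots of the single-server queueing (timing) channel.

    lam : arrival probability per slot (Bernoulli input rate)
    mu  : service probability per slot when the queue is non-empty
    q0  : initial queue length
    Returns the arrival, departure and queue-length sample paths.
    Assumed convention: the arrival of slot t joins the queue before the
    service attempt of slot t, so it may depart in the same slot.
    """
    rng = random.Random(seed)
    q = q0
    xs, ys, qs = [], [], []
    for _ in range(n):
        x = 1 if rng.random() < lam else 0      # arrival indicator X_t
        q += x                                  # arrival joins the back of the queue
        y = 0
        if q > 0 and rng.random() < mu:         # service succeeds with probability mu
            y = 1                               # departure indicator Y_t
            q -= 1
        xs.append(x); ys.append(y); qs.append(q)
    return xs, ys, qs

# Example: with the arrival rate below the service rate the queue is stable
# and the long-run departure rate matches the arrival rate.
x, y, q = simulate_timing_channel(lam=0.3, mu=0.6, n=100_000)
print(sum(y) / len(y))   # close to 0.3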

In principle the distribution on could be chosen arbitrarily, but we will assume , to be i.i.d. Bernoulli random variables for the following reason:

Theorem 1.

The distribution on that achieves capacity makes , i.i.d. Bernoulli.

Proof.

The proof can be found in [Bedekar98onthe]. ∎

Most of the analysis in this paper builds on the theory of Markov chains, so we start with a proper definition of this kind of stochastic process. Given a probability space a discrete-time stochastic process on a state space is, for our purposes, a collection of random variables , where each takes values in the set . The random variables are measurable with respect to some given σ-field over . We only consider countable state spaces here and can hence take to be the product space and take to be the product σ-algebra defined in the usual way. If holds, we call the process Markov. Given an initial measure and transition probabilities there exists a probability law satisfying by the Kolmogorov extension theorem [meyntwee09, gray09].

The length of the queue forms an irreducible Markov chain with transition probabilities

(5)

If and only if , there exists a probability measure on that solves the system of equations

(6)

for all , and this measure is called the invariant measure. Note that for irreducible Markov chains the existence of such a probability measure is equivalent to positive recurrence. For the given transition probabilities it can be checked that

(7)

where we defined

(8)

Note that if and only if . In the remainder of the paper we will always assume that the arrival rate is smaller than the service rate .
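
For a numerical impression of the invariant measure, one can truncate the state space and solve Equation 6 by power iteration. The sketch below does this for the queue-length chain; the transition probabilities encode the same arrive-then-serve convention assumed in the simulation sketch above (an assumption, not a transcription of Equation 5), and the truncation level K is a purely numerical parameter.

import numpy as np

def truncated_transition_matrix(lam, mu, K):
    """Transition matrix of the queue-length chain on the truncated state
    space {0, ..., K}, arrive-then-serve convention (assumed)."""
    P = np.zeros((K + 1, K + 1))
    for q in range(K + 1):
        for x, px in ((1, lam), (0, 1 - lam)):   # arrival or no arrival
            qa = min(q + x, K)                   # queue after the (truncated) arrival
            if qa > 0:
                P[q, qa - 1] += px * mu          # departure
                P[q, qa] += px * (1 - mu)        # no departure
            else:
                P[q, qa] += px                   # empty queue, nothing to serve
    return P

def invariant_measure(P, iters=20_000):
    """Solve pi = pi P (Equation 6, truncated) by power iteration."""
    pi = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()

pi = invariant_measure(truncated_transition_matrix(lam=0.3, mu=0.6, K=200))
print(pi[:5])   # decays geometrically when the arrival rate is below the service rate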

Theorem 2.

The distribution on that achieves capacity makes distributed according to .

Proof.

The proof can be found in [Bedekar98onthe]. ∎

We will hence always assume that is distributed according to .

Definition 1.

Let . An -Code of size is defined as a sequence such that , , are mutually disjoint and . Let be the supremum of the set of integers such that an -Code of size exists.

Note that equals the rate of the code.

Theorem 3.

The expression for the capacity of the channel simplifies to

(9)

and

(10)
Proof.

A proof of a similar result for the continuous-time case appeared in the landmark paper “Bits through Queues” [Anantharam96], and the stated discrete-time result was proved in [Bedekar98onthe]. ∎

Theorem 4.

(Burke’s Theorem) If the queue is in equilibrium and the random variables , are i.i.d. Bernoulli, then the random variables , are also i.i.d. Bernoulli.

Proof.

The proof is similar to the one for continuous-time queues and can be found in [takagi93]. A concise proof is also given in the appendix. ∎
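
The following sketch is an empirical illustration of Theorem 4: after a burn-in period (used in place of drawing the initial queue length from the invariant measure), the departure stream of the stable queue has marginal rate close to the arrival rate and negligible lag-one correlation. The arrive-then-serve convention and all numerical values are assumptions made for the example.

import random
import numpy as np

lam, mu, n, burn = 0.3, 0.6, 200_000, 10_000
rng, q, y = random.Random(1), 0, []
for _ in range(n):
    q += rng.random() < lam                        # arrival
    d = 1 if (q > 0 and rng.random() < mu) else 0  # service attempt
    q -= d
    y.append(d)                                    # departure indicator Y_t
y = np.array(y[burn:])

print("empirical departure rate:", y.mean())                          # close to lam
print("lag-one autocorrelation: ", np.corrcoef(y[:-1], y[1:])[0, 1])   # close to 0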

By the above theorem and

(11)

The following lemma uses arguments introduced by Feinstein [feinstein54] to give a lower bound on .

Lemma 1.

(Feinstein) for all .

Proof.

The proof is short and elegant and is reproduced in the appendix. ∎

4 Finite-Length Scaling

The distributions and factor and hence

(12)

where we defined

(13)

The composed state again forms a positive recurrent Markov chain whose transition probabilities are illustrated in Figure 2.


Figure 2: Possible Transitions in the Markov Chain

The invariant measure for this chain is only a slight extension of :

(14)

The proof of the following theorem is one of the main contributions of this paper because it can be used to prove bounds and an asymptotic for the quantity .

Theorem 5.

The asymptotic variance

(15)

is well defined, positive and finite, and

(16)

Further, the following Berry-Esseen-type bound holds:

(17)
Proof.

A detailed proof can be found in the appendix; we only give a sketch here. The Markov chain is aperiodic and irreducible. The state space of can be chosen to be . First we verify that there exists a Lyapunov function , finite at some , a finite set , and such that

(18)

The chain is skip-free, and the Lyapunov function found is linear and hence also Lipschitz. These properties imply that the chain is geometrically ergodic [spieksma1994, CTCNMeyn2007, meyntwee09], and the bound in Equation 17 therefore holds by the arguments made in [Kontoyiannis01spectraltheory]. ∎

An explicit solution for the asymptotic variance of a general irreducible positive recurrent Markov chain is not available. Significant research in the area of steady-state stochastic simulation has focused on obtaining an expression for this quantity [whitt1991, burman1980, KemenySnell1960] and has yielded a closed-form solution for the class of homogeneous birth-death processes when simply returns the integer-valued state itself.
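
When no closed form is at hand, the asymptotic variance of a Markov-chain functional is commonly estimated from simulation, for instance by the method of batch means. The sketch below shows this generic estimator; the functional used in the example (the queue length itself) is a placeholder, not the functional appearing in Theorem 5, and all parameter values are assumptions.

import random
import numpy as np

def batch_means_asymptotic_variance(samples, num_batches=100):
    """Estimate sigma^2 = lim_n n * Var(sample mean of n terms) of a
    stationary sequence by the method of batch means."""
    samples = np.asarray(samples, dtype=float)
    m = len(samples) // num_batches                 # batch length
    batches = samples[: m * num_batches].reshape(num_batches, m)
    return m * batches.mean(axis=1).var(ddof=1)

# One long path of the queue-length chain (placeholder functional f(Q_t) = Q_t).
lam, mu, rng, q, f_path = 0.3, 0.6, random.Random(2), 0, []
for _ in range(1_000_000):
    q += rng.random() < lam
    if q > 0 and rng.random() < mu:
        q -= 1
    f_path.append(q)
print(batch_means_asymptotic_variance(f_path[10_000:]))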

We build on an idea introduced in [grassmann1987] to give an explicit closed-form solution for the asymptotic variance in Equation 15.

Theorem 6.

The closed-form expression in Equation 24 equals the asymptotic variance defined in Equation 15.

Proof.

Again we only sketch the proof here and refer to the appendix for a detailed version. For the computation of the sum we will set up and solve a recursion. Grassmann proposed this approach in [grassmann1987] to obtain the asymptotic variance of a continuous-time finite-state birth-death process.

We define

(19)

Clearly

(20)

and

(21)

Note, however, that for the computation of the asymptotic variance we do not actually need to know this covariance for each ; it is sufficient to know its sum. So we define

(22)

write

(23)

and derive an expression for . ∎

(24)
(25)
(26)
(27)
(28)
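
Equation 16 expresses the asymptotic variance through the stationary variance of the functional and the sum of its autocovariances. A brute-force numerical cross-check of that representation, and thus of the closed form above, is to estimate the autocovariances from one long stationary path and truncate the sum at a large lag. In the sketch below the functional is again the placeholder f(Q_t) = Q_t rather than the paper's, and the lag cutoff and sample size are arbitrary numerical choices.

import random
import numpy as np

lam, mu, n, burn, max_lag = 0.3, 0.6, 2_000_000, 50_000, 300
rng, q, path = random.Random(3), 0, []
for _ in range(n):
    q += rng.random() < lam                   # arrival (assumed convention)
    if q > 0 and rng.random() < mu:           # service attempt
        q -= 1
    path.append(q)
z = np.array(path[burn:], dtype=float)
z -= z.mean()                                 # centre the stationary path

cov_sum = sum((z[:-k] * z[k:]).mean() for k in range(1, max_lag + 1))
print("sum of autocovariances:", cov_sum)
print("sigma^2 (variance + twice the sum):", z.var() + 2 * cov_sum)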

Using the result stated in Theorem 5 we can finally prove the core contribution of this paper:

Theorem 7.
(29)

where and is defined as in Theorem 5.

Proof.

By Theorem 5

(30)

Let and . Set and the application of Lemma 1 yields

(31)
(32)

This confirms that is the operational capacity of the channel and that any rate is achievable. The real beauty of Theorem 7 is, however, that we can use the asymptotic

(33)

as an approximation to the channel coding rate and thereby anticipate the achievable rate on this channel in the finite block-length regime. For illustration we plot this asymptotic for block lengths ranging between and and the example values , and in Figure 3.


Figure 3: Channel Coding Rate in the Finite Block-Length Regime
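
The curve in Figure 3 can be reproduced qualitatively from the normal approximation suggested by Theorem 7, i.e. a rate of roughly C minus a back-off term of order sqrt(sigma^2 / n) scaled by the Gaussian quantile of the error probability. The values of C, sigma^2 and eps below are placeholders, not the ones used for the figure.

import numpy as np
import matplotlib.pyplot as plt
from statistics import NormalDist

C, sigma2, eps = 0.30, 0.5, 1e-3          # placeholder capacity, variance, error probability
q_inv = NormalDist().inv_cdf(1 - eps)     # Gaussian quantile Q^{-1}(eps)

n = np.arange(100, 20_001, 100)
rate = C - np.sqrt(sigma2 / n) * q_inv    # normal-approximation achievable rate

plt.plot(n, rate)
plt.xlabel("blocklength n")
plt.ylabel("achievable coding rate")
plt.title("Normal approximation to the achievable rate")
plt.show()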

5 Proof of Theorem 4

6 Proof of Lemma 1

Proof.

Assume is the maximal size of an -Code such that , where

(34)

Then we have

(35)

and

(36)

Let . By the maximality of N it follows that

(37)

Or equivalently

(38)

Multiplying this inequality by and integrating over then yields

(39)

Putting everything together we obtain the result

(40)

7 Proof of Theorem 5

The Markov chain is aperiodic and irreducible. The state space of can be chosen to be . First we verify that Foster’s criterion holds:

Lemma 2.

There exists a Lyapunov function , finite at some , a finite set , and such that

(41)

Further this function is Lipschitz, i.e., for some

(42)

and for some and

(43)
Proof.

We need to find a function such that for all but a finite number of . If we simply choose for some sufficiently large constant , then the requirement is clearly satisfied for all such that , but it fails to hold otherwise. To fix this shortcoming we reward transitions to a state with by a decreasing difference . In particular we choose . Standard calculations reveal that for this choice for all with . Linear functions are always Lipschitz, and is bounded almost surely. ∎
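
The drift condition of Lemma 2 can also be checked numerically for a candidate Lyapunov function. The sketch below evaluates the one-step drift of a plain linear function on the queue-length chain; under the arrive-then-serve convention assumed in the earlier sketches the drift is already negative for every non-empty queue, whereas the proof above works with a slightly corrected function, so both the convention and the candidate V are assumptions.

def drift(q, lam, mu, V):
    """One-step drift E[V(Q_{t+1}) | Q_t = q] - V(q) of the queue-length
    chain under the assumed arrive-then-serve convention."""
    exp_V = 0.0
    for x, px in ((1, lam), (0, 1 - lam)):     # arrival or no arrival
        qa = q + x
        if qa > 0:
            exp_V += px * (mu * V(qa - 1) + (1 - mu) * V(qa))
        else:
            exp_V += px * V(qa)                # empty queue, nothing to serve
    return exp_V - V(q)

lam, mu = 0.3, 0.6
V = lambda q: 2.0 * q                          # candidate linear Lyapunov function (assumed)
for q in range(6):
    print(q, drift(q, lam, mu, V))             # negative for all q >= 1 when lam < mu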

By the results in [spieksma1994] or Proposition A.5.7. in [CTCNMeyn2007] the chain is then geometrically ergodic.

By Theorem A.5.8. in [CTCNMeyn2007] the asymptotic variance is well defined, non-negative and finite, and

(44)

Finally, is a bounded, nonlattice (a property that remains to be verified), real-valued functional on the state space and hence

(45)
(46)

where denotes the density of the standard normal distribution , is the solution to Poisson’s equation, and is a constant [Kontoyiannis01spectraltheory]. The solution can be chosen such that , and the claim follows by averaging out .

8 Proof of Theorem 6

Using the representation of in Equation 16 it remains to find explicit expressions for and the sum .

The term is easy to compute

(47)

Equation 28 holds true.

Proof.

For the computation of the sum we will set up and solve a recursion. Grassmann proposed this approach in [grassmann1987] to obtain the asymptotic variance of a continuous-time finite-state birth-death process.

We define

(48)

Clearly

(49)

and

(50)

Note, however, that for the computation of the asymptotic variance we do not actually need to know this covariance for each ; it is sufficient to know its sum. So we define

(51)

write

(52)

and derive a recursion for .

For mean ergodic Markov processes as and hence

(53)

Summing in from zero to infinity then clearly yields . By the Chapman-Kolmogorov equations

(54)

and thus

(55)

If this expression for is also summed in from zero to infinity and compared with the above result for the same sum, we obtain

(56)

For notational convenience we abbreviate the right-hand side of Equation 56 by .

For with and

(57)

For with and

(58)

And for with and

(59)

Adding the two preceding equations yields

(60)

We now sum Equation 56 in two ways:

(61)

for where we defined

(62)

and

(63)

for where we defined

(64)

We can combine Equations 61 and 63 to obtain the first-order recurrence

(65)

for where

(66)

Note that

(67)

and with this Equation 66 becomes

(68)

But

So we obtain

(69)

and the generating function

(70)

We now define two new sequences and such that

(71)

Clearly, and . By substituting Equation 71 into Equation 65 we find that

(72)

and

(73)

The solution to the recurrence is obvious

(74)

In order to obtain the solution to the recurrence we employ the generating function method [west]
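
Since the coefficients of the recurrence sit in the omitted equations, the following generic sketch only illustrates the generating-function method itself on a hypothetical first-order recurrence v_k = a v_{k-1} + b^k. Multiplying by z^k and summing over k gives V(z) = (v_0 + b z / (1 - b z)) / (1 - a z), and expanding the partial fractions yields the closed form checked below; the recurrence, its coefficients a and b, and the function names are all assumptions made for the illustration.

def v_iterated(a, b, v0, K):
    """Iterate the hypothetical recurrence v_k = a*v_{k-1} + b**k."""
    v = [v0]
    for k in range(1, K + 1):
        v.append(a * v[-1] + b ** k)
    return v

def v_closed_form(a, b, v0, K):
    """Coefficients of V(z) = (v0 + b*z/(1 - b*z)) / (1 - a*z), valid for a != b."""
    return [a ** k * v0 + b * (a ** k - b ** k) / (a - b) for k in range(K + 1)]

a, b, v0 = 0.5, 0.3, 1.0
print(v_iterated(a, b, v0, 5))
print(v_closed_form(a, b, v0, 5))   # the two lists agree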