Exact Asymptotics for the Random Coding Error Probability

Junya Honda
Graduate School of Frontier Sciences, The University of Tokyo
Kashiwa-shi, Chiba 277–8561, Japan
Email: honda@it.k.u-tokyo.ac.jp
Abstract

Error probabilities of random codes for memoryless channels are considered in this paper. (This paper is the full version of [1] presented at ISIT 2015, with some corrections and refinements.) In the area of communication systems, the admissible error probability is very small and it is sometimes more important to discuss the relative gap between the achievable error probability and its bound than the absolute gap. Scarlett et al. derived a good upper bound of a random coding union bound based on the technique of saddlepoint approximation, but it is not proved that the relative gap of their bound converges to zero. This paper derives a new bound on the achievable error probability from this viewpoint for a class of memoryless channels. The derived bound is strictly smaller than that by Scarlett et al., and its relative gap with the random coding error probability (not a union bound) vanishes as the block length increases for a fixed coding rate.

Index Terms—channel coding, random coding, error exponent, finite-length analysis, asymptotic expansion.

I Introduction

It is one of the most important tasks of information theory to clarify the achievable performance of channel codes under finite block length. For this purpose Polyanskiy [2] and Hayashi [3] considered the achievable coding rate under a fixed error probability and block length. They revealed that the term next to the channel capacity is of order $O(1/\sqrt{n})$ in the block length $n$ and is expressed by a percentile of a normal distribution.

The essential point in deriving such a bound is to evaluate the error probabilities of channel codes in an accurate form. For this evaluation an asymptotic expansion of sums of random variables is used in [2]. On the other hand, the admissible error probability in communication systems is very small. In such cases it is sometimes more important to consider the relative gap between the achievable error probability and its bound than the absolute gap. Nevertheless, an approximation of a tail probability obtained by the asymptotic expansion sometimes results in a large relative gap, and it is known that the techniques of saddlepoint approximation and the (higher-order) large deviation principle are more powerful tools than the asymptotic expansion [4].
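As a concrete illustration of this point (not taken from the paper), the following Python sketch compares the relative accuracy of the central-limit approximation and a saddlepoint-type exact-asymptotics formula for the tail of an i.i.d. Bernoulli sum; the parameters p and a are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy import stats

p, a = 0.1, 0.3    # Bernoulli parameter and tail threshold a > p (illustrative)
theta = np.log(a * (1 - p) / ((1 - a) * p))    # exponential tilt with mean a
kl = a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))  # rate D(a||p)

for n in [100, 400, 1600]:
    k = round(n * a)
    exact = stats.binom.sf(k - 1, n, p)    # exact tail P[S_n >= k]
    clt = stats.norm.sf((k - n * p) / np.sqrt(n * p * (1 - p)))  # CLT estimate
    sp = np.exp(-n * kl) / ((1 - np.exp(-theta)) * np.sqrt(2 * np.pi * n * a * (1 - a)))
    print(f"n={n}: CLT/exact = {clt / exact:.3g}, saddlepoint/exact = {sp / exact:.4f}")
```

The absolute gap of both estimates vanishes (all three numbers tend to zero), but the relative error of the CLT estimate grows without bound while the saddlepoint ratio approaches one.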

Bounds on the error probability of random codes with a small relative gap have been researched extensively, although most of them treat a fixed rate whereas [2][3] consider a varying rate for a fixed error probability. Gallager [5] derived an exponential upper bound, the random coding error exponent, on the random coding error probability for a fixed rate $R$. This exponent of the random code is proved to be tight both for rates below the critical rate [5] and for rates above it [6].

There has also been much research on tight bounds of the random coding error probability with a vanishing or constant relative error for a fixed rate $R$. Dobrushin [7] derived a bound of the random coding error probability for channels that are symmetric in the strong sense that each row and each column of the transition probability matrix is a permutation of the others. The relative error of this bound is asymptotically bounded by a constant; in particular, it vanishes in the case that the channel satisfies a nonlattice condition.

For a general class of discrete memoryless channels, Gallager [8] derived a bound with a vanishing relative error for rates below the critical rate based on the technique of exact asymptotics for i.i.d. random variables, and Altuğ and Wagner [9] corrected his result for singular channels. For a general (possibly varying) rate, Scarlett et al. [10] derived a simple upper bound of a random coding union bound based on the technique of saddlepoint approximation and showed that the relative gap between the two vanishes for nonsingular finite-alphabet discrete memoryless channels [10]. However, this bound is not assured to have a vanishing relative gap to the error probability itself.

In this paper we consider the error probability of random coding for a fixed but arbitrary rate below the capacity. We derive a new bound whose relative gap to the random coding error probability vanishes for (possibly infinite-alphabet or nondiscrete) nonsingular memoryless channels such that random variables associated with the channels satisfy a condition called the strongly nonlattice condition. The derived bound matches that by Gallager [8] for rates below the critical rate. (In the ISIT proceedings version it was described that the result contradicts the bound in [8], but this was a confirmation error by the author due to the difference of the notations between this paper and [11]; see Remark 4 for details.)

The essential point in deriving the new bound is that we optimize the tilting parameter depending on the sent and received sequences when bounding the error probability. This contrasts with the argument in [10] and with the classic random coding error exponent, where the parameter is first fixed and optimized only after the expectation over the sent and received sequences is taken. We confirm that this difference actually affects the derived bound, and it is by this difference that we can assure that the bound also becomes a lower bound of the error probability with a vanishing relative error.

II Preliminary

We consider a memoryless channel with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$. The output distribution for input $x \in \mathcal{X}$ is denoted by $P_{Y|X=x}$. Let $X$ be a random variable with distribution $P_X$ and $Y$ be following $P_{Y|X=x}$ given $X = x$. We define $P_Y$ as the marginal distribution of $Y$. We assume that $P_{Y|X=x}$ is absolutely continuous with respect to $P_Y$ for any $x$ with density

$$w(y|x) = \frac{\mathrm{d}P_{Y|X=x}}{\mathrm{d}P_Y}(y).$$

We also assume that the mutual information is finite, that is, $I(X;Y) < \infty$.

Let $\bar{X}$ be a random variable with the same distribution as $X$ and independent of $(X, Y)$, and define the log-likelihood ratio $Z = \log(w(Y|\bar{X})/w(Y|X))$. Since $w(Y|X) > 0$ holds almost surely, $Z$ is well-defined almost surely. $\bar{X}^n = (\bar{X}_1, \dots, \bar{X}_n)$ denotes $n$ independent copies of $\bar{X}$. We define $Z_i = \log(w(Y_i|\bar{X}_i)/w(Y_i|X_i))$ analogously for the $i$-th channel use.

We consider the error probability of a random code such that each element of the codewords is generated independently from the distribution $P_X$. The coding rate of this code with $M$ codewords is given by $R = (\log M)/n$ for block length $n$. We use maximum likelihood decoding with ties broken uniformly at random.
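As a concrete (and purely illustrative) instance of this setup, the following sketch estimates the random coding error probability by Monte Carlo for a hypothetical binary-input binary-output channel; the transition matrix W, input distribution Q, block length n, and rate R are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.9, 0.1],   # W[x, y] = P(Y = y | X = x), illustrative channel
              [0.2, 0.8]])
Q = np.array([0.5, 0.5])    # input distribution generating the codewords
n, R = 10, 0.3              # block length and rate (nats per channel use)
M = int(np.ceil(np.exp(n * R)))   # number of codewords

def trial():
    C = rng.choice(2, size=(M, n), p=Q)                  # i.i.d. random codebook
    y = np.array([rng.choice(2, p=W[x]) for x in C[0]])  # transmit codeword 0
    ll = np.log(W[C, y]).sum(axis=1)                     # log-likelihood per codeword
    best = np.flatnonzero(ll == ll.max())                # ML candidates (incl. ties)
    return rng.choice(best) != 0                         # uniform tie-breaking

p_e = np.mean([trial() for _ in range(5000)])
print("estimated random coding error probability:", p_e)
```

By the symmetry of the codebook distribution it suffices to transmit codeword 0; the average over trials estimates the error probability averaged over both the codebook and the channel noise.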

II-A Error Exponent

Define a random variable $\Lambda(\rho)$ on the space of functions of $\rho$ by

$$\Lambda(\rho) = \log \mathrm{E}[\mathrm{e}^{\rho Z} \,|\, X, Y],$$

and its derivatives by

$$\Lambda'(\rho) = \frac{\partial \Lambda(\rho)}{\partial \rho}, \qquad \Lambda''(\rho) = \frac{\partial^2 \Lambda(\rho)}{\partial \rho^2},$$

which we sometimes write as $\Lambda'$ and $\Lambda''$. Here $\mathrm{E}[\cdot \,|\, X, Y]$ denotes the expectation over $\bar{X}$ for given $(X, Y)$. We define (we omit the discussion on the multi-valuedness of the complex logarithm; the discussion involving the logarithm of a complex number in this paper follows [12, Sect. XVI.2], to which we refer the reader to see that no problem occurs)

$$\Lambda(\rho + \mathrm{i}\tau) = \log \mathrm{E}[\mathrm{e}^{(\rho + \mathrm{i}\tau) Z} \,|\, X, Y],$$

where $\tau \in \mathbb{R}$ and $\mathrm{i}$ is the imaginary unit. Here we always consider the case $\mathrm{E}[\mathrm{e}^{\rho Z} \,|\, X, Y] < \infty$ and define $\Lambda(\rho + \mathrm{i}\tau) = \infty$ otherwise. We define

$$\Lambda_0(\rho) = \log \mathrm{E}[\mathrm{e}^{\rho Z}],$$

and $\Lambda_0'$ and $\Lambda_0''$ are defined in the same way.

The random coding error exponent for $R$ is denoted by

$$E_r(R) = \sup_{\rho \in [0,1]} \{ E_0(\rho) - \rho R \}, \qquad E_0(\rho) = -\log \mathrm{E}\big[\mathrm{e}^{\rho \Lambda(1/(1+\rho))}\big], \qquad (1)$$

and we write the optimal solution $\rho$ of (1) as $\rho^*$.

In the strict sense the random coding error exponent represents the supremum of (1) over the input distribution $P_X$, but for notational simplicity we fix $P_X$ and omit its dependence. See [9, Theorem 2] for a condition that there exists $P_X$ which attains this supremum.
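For discrete channels the exponent in (1) reduces to Gallager's classical finite-alphabet form and is easy to evaluate numerically. The following sketch computes $E_0(\rho)$ and the optimizing $\rho$ for an illustrative channel and input distribution; a returned optimizer at the boundary $\rho = 1$ indicates a rate below the critical rate.

```python
import numpy as np
from scipy.optimize import minimize_scalar

W = np.array([[0.9, 0.1],   # illustrative nonsingular channel W[x, y]
              [0.2, 0.8]])
Q = np.array([0.5, 0.5])    # fixed input distribution

def E0(rho):
    # Gallager: E_0(rho) = -log sum_y (sum_x Q(x) W(y|x)^{1/(1+rho)})^{1+rho}
    inner = (Q[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log((inner ** (1.0 + rho)).sum())

def Er(R):
    # E_r(R) = max_{rho in [0, 1]} E_0(rho) - rho * R, as in (1)
    res = minimize_scalar(lambda r: -(E0(r) - r * R),
                          bounds=(0.0, 1.0), method="bounded")
    return -res.fun, res.x

for R in [0.05, 0.15, 0.25]:
    exponent, rho_star = Er(R)
    print(f"E_r({R}) = {exponent:.4f}, optimizer rho* = {rho_star:.3f}")
```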

Let $\tilde{P}$ be the probability measure obtained by exponentially tilting the distribution of $(X, Y, \bar{X})$ at the optimal parameter of (1). We write the expectation under $\tilde{P}$ by $\tilde{\mathrm{E}}$ and define the tilted moments of $Z$ accordingly.

From the derivatives of $E_0(\rho)$ in (1) we have

$$\rho^* = 1 \quad \text{for } R \le R_{\mathrm{cr}}, \qquad (2)$$
$$E_0'(\rho^*) = R \quad \text{for } R > R_{\mathrm{cr}}, \qquad (3)$$

where $R_{\mathrm{cr}}$ is the critical rate, that is, the largest $R$ such that the optimal solution of (1) is $\rho = 1$. We assume that the conditional distribution of $Z$ given $(X, Y)$ is nondegenerate with positive probability. This corresponds to the nonsingularity assumption in [10][13] for finite alphabets.

To avoid somewhat technical arguments on continuity and integrability, we also assume that there exist $\epsilon > 0$ and a neighborhood of the optimal parameter such that the uniform moment condition

(4)

holds over this neighborhood, where the quantity bounded in (4) is given later. Note that these conditions trivially hold if the input and output alphabets are finite.

II-B Lattice and Nonlattice Distributions

In an asymptotic expansion with an order higher than the central limit theorem, it is necessary to treat the cases that the distribution is lattice or nonlattice separately. Here we say that a $d$-dimensional random variable $V$ has a lattice distribution if $V \in \{v_0 + m_1 h_1 + \cdots + m_d h_d : m_1, \dots, m_d \in \mathbb{Z}\}$ almost surely for some $v_0$ and linearly independent vectors $h_1, \dots, h_d$. For the case $d = 1$ we call the largest $h_1 > 0$ satisfying the above condition the span of the lattice.

On the other hand, we say that a random variable $V$ has a strongly nonlattice distribution if $|\mathrm{E}[\mathrm{e}^{\mathrm{i}\langle t, V \rangle}]| < 1$ for all $t \neq 0$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. Note that a one-dimensional random variable is either lattice or strongly nonlattice but, in general, there exists a random variable which is neither lattice nor strongly nonlattice.

As given above, a lattice distribution is defined for an unconditional random variable in standard references such as [14]. In this paper we say that the distribution of $Z$ is lattice if the conditional distribution of $Z$ given $(X, Y)$ is lattice, and nonlattice otherwise. It is easy to see that no contradiction occurs under this definition.

We consider the following condition regarding lattice and nonlattice distributions.

Definition 1.

We say that the log-likelihood ratio $Z$ satisfies the lattice condition with span $d$ if the conditional distribution of $Z$ given $(X, Y)$ is lattice with span $d$ almost surely, where the offset of the lattice may depend on $(X, Y)$ and $d$ is the largest value satisfying this condition.

For notational simplicity we define the span of the lattice for $Z$ to be $d = 0$ if $Z$ does not satisfy the lattice condition. Other than the classification of $Z$, we also discuss separately the cases that the associated two-dimensional random variable is strongly nonlattice or not.

Note that a one-dimensional random variable whose support consists of at most two points is always lattice, and one whose support consists of three or more points is strongly nonlattice except for some special cases. Similarly, a two-dimensional random variable is never strongly nonlattice if its support consists of at most three points, and is strongly nonlattice except for some special cases if the support consists of four or more points. Based on this observation we see that most channels with input and output alphabet sizes larger than 3 are strongly nonlattice. Examples of each class of channels (excluding those with specially chosen parameters) are given in Table I.

                                 not strongly nonlattice       strongly nonlattice
log-likelihood ratio lattice     BSC                           asymmetric BEC
log-likelihood ratio nonlattice  ternary symmetric channels    binary asymmetric channels
TABLE I: Classification of nonsingular channels.
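The classification in Table I can be explored numerically. The following heuristic sketch (an illustration, not part of the paper's formalism) estimates the span in Definition 1 by running the Euclidean algorithm on the realizable log-likelihood-ratio differences; pooling all pairs (x, y) is a simplification, since Definition 1 allows the lattice offset to depend on (x, y).

```python
import numpy as np
from itertools import product

def real_gcd(vals, tol=1e-9):
    # Euclidean algorithm on real numbers; a result near zero (relative to the
    # inputs) indicates no common span, i.e. the nonlattice case.
    g = 0.0
    for v in np.abs(vals):
        while v > tol:
            g, v = v, g % v
    return g

def llr_span(W):
    # realizable values log W(y|x') - log W(y|x) of the log-likelihood ratio
    nx, ny = W.shape
    vals = [np.log(W[x2, y]) - np.log(W[x1, y])
            for x1, x2, y in product(range(nx), range(nx), range(ny))
            if x1 != x2 and W[x1, y] > 0 and W[x2, y] > 0]
    return real_gcd(vals)

bsc = np.array([[0.9, 0.1], [0.1, 0.9]])   # binary symmetric channel
bac = np.array([[0.9, 0.1], [0.2, 0.8]])   # binary asymmetric channel
print("BSC span:", llr_span(bsc))          # log(0.9/0.1) > 0: the lattice case
print("BAC span:", llr_span(bac))          # near 0: effectively nonlattice
```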
Remark 1.

The above conditions are different from the condition considered in [10] as a classification of the lattice and nonlattice cases. This difference arises for two reasons. First, we consider the two-dimensional random variable in addition to $Z$ to derive an accurate bound. Second, the proof of [10, Lemma 1] does not use the correct span when applying the result of [15, Sect. VII.1, Thm. 2].

III Main Result

Define the function used in the representation below for the relevant range of its arguments, where the boundary value is defined by the corresponding limit. We give some properties of this function in Appendix A. Now we can represent the random coding error probability as follows.

Theorem 1.

Fix any rate below the capacity and any input distribution, and let $\delta > 0$ be sufficiently small. Then, for the span $d$ of the lattice for $Z$, there exist constants such that the following representation of the error probability holds for all sufficiently large $n$.

By this theorem we can reduce the evaluation of the error probability to that of an expectation over a two-dimensional random variable, although this expectation is still difficult to compute. If this two-dimensional random variable is strongly nonlattice then we can derive the following bound, which gives an explicit representation of the asymptotic behavior of the error probability.

Theorem 2.

Fix the rate and assume that the two-dimensional random variable of Theorem 1 has a strongly nonlattice distribution. Then

(5)

where the coefficient is given explicitly in terms of the gamma function $\Gamma(\cdot)$.

We prove Theorems 1 and 2 in Sections IV and V, respectively. From these theorems we see that, at least in the strongly nonlattice case, the error probability of random coding satisfies

(6)

The RHS of (6) has the same form as the upper bounds in [10][13], but our bound is tighter in its coefficient and is also assured to be a lower bound.

It may be possible to derive a bound similar to Theorem 2 for the case that the two-dimensional random variable is not strongly nonlattice by replacing integrals with summations, but for this case the author was not able to find an expression of the asymptotic expansion straightforwardly applicable to our problem, and this remains as future work.

Remark 2.

We can show in the same way as Theorem 2 that the random coding union bound is obtained by replacing the coefficient of (5) with a larger one.

On the other hand, the additional terms in the square roots of (5) are the characteristic parts of the analysis of this paper, obtained by optimizing the parameter depending on the sent and received sequences. Thus, this optimization is necessary to derive a tight coefficient whether we evaluate the error probability itself or the union bound.
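The gain from pointwise optimization can be seen in a toy computation, with an artificial Chernoff-style bound standing in for the paper's quantities: a single $\rho$ optimized after averaging can never beat $\rho$ optimized separately for each realization, since the average of pointwise minima is dominated by the minimum of averages.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
z = rng.normal(1.0, 0.5, size=100_000)   # stand-in for a statistic of (X^n, Y^n)

def bound(rho, zz):
    # artificial bound exp(-(rho * z - rho^2)) with rho restricted to [0, 1]
    return np.exp(-(rho * zz - rho ** 2))

# classical style: fix one rho, take the expectation, then optimize
res = minimize_scalar(lambda r: bound(r, z).mean(),
                      bounds=(0.0, 1.0), method="bounded")
fixed = res.fun

# this paper's style: optimize rho for each realization, then average
rho_star = np.clip(z / 2.0, 0.0, 1.0)    # pointwise argmax of rho * z - rho^2
pointwise = bound(rho_star, z).mean()

print(f"fixed-rho bound:     {fixed:.5f}")
print(f"pointwise-rho bound: {pointwise:.5f}")   # never larger, usually smaller
```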

Remark 3.

The results in this paper assume a fixed coding rate and are weaker in this sense than the result by Scarlett et al. [10], where an upper bound is assured for a varying rate by leaving an integral (or a summation) in a form such that the integrand depends on the rate. It may be possible to extend Theorem 1 to a varying rate since most of the proof deals with the sent codeword and the error probability of each codeword separately. However, the proof of Theorem 2 heavily depends on the fixed rate, and it is also an important problem to derive an easily computable bound for a varying rate.

Remark 4.

In [8] it is shown for discrete nonlattice channels (for the lattice case there is a calculation error in [8], leaving a redundant factor) with rates below the critical rate that

(7)

where

(8)

for the quantities defined in [8]. The author misunderstood the correspondence of the notations in the ISIT version and described that Theorem 2 contradicts (7). The correct calculation shows that the two coefficients coincide for rates below the critical rate, and therefore no contradiction occurs between this paper and [8].

IV First Asymptotic Expansion

In this section we give a sketch of the proof of Theorem 1. We prove Theorem 1 separately depending on whether $Z$ satisfies the lattice condition or not. The proofs differ in some places for two reasons. First, under the lattice condition we cannot ignore the case that a codeword has the same likelihood as the sent codeword, whereas such a case is almost negligible in the nonlattice case. Second, especially in the case of infinite alphabets we have to apply the asymptotic expansion with careful attention to components implicitly assumed to be fixed, and the derivation of the asymptotic expansion differs in some places between the lattice and nonlattice cases in this respect.

Here we give a proof of Theorem 1 for the case that $Z$ satisfies the lattice condition with span $d$. The proof for the nonlattice case is easier in most places because ties of likelihoods can be almost ignored as described above. See Appendix D for the differences of the proof in the nonlattice case.

Now define

(9)

where the last equation of (9) holds since the offset of the lattice of the log-likelihood of an independent codeword equals that of the sent codeword given $(X^n, Y^n)$. Under maximum likelihood decoding, the average error probability is expressed as follows:

(10)

Here the first term corresponds to the probability that the likelihood of some codeword exceeds that of the sent codeword, and each component of the second term corresponds to the probability that some number of codewords have the same likelihood as the sent codeword while the others do not exceed this likelihood.

One of the most basic bounds for this quantity is the union bound, which bounds the error probability by the expected value of the minimum of one and the expected number of competing codewords whose likelihood reaches that of the sent codeword. A lower bound of a similar form can also be found in, e.g., [16, Chap. 23]. For evaluation of the error probability with a vanishing relative error the following lemma is useful.
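In notation-free form this union bound is the random coding union (RCU) bound $\mathrm{E}[\min\{1, (M-1)\,\mathrm{P}[\text{an independent codeword is at least as likely as the sent one} \mid X^n, Y^n]\}]$. The sketch below evaluates it for the same illustrative channel and parameters as the simulation in Section II, computing the inner probability exactly by enumerating all competitor codewords (feasible only for tiny n); comparing its output with the simulated error probability above illustrates the slack between the union bound and the error probability itself.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
W = np.array([[0.9, 0.1], [0.2, 0.8]])   # same illustrative channel as before
Q = np.array([0.5, 0.5])
n, R = 10, 0.3
M = int(np.ceil(np.exp(n * R)))

XBAR = np.array(list(product(range(2), repeat=n)))   # all 2^n competitor words
wbar = Q[XBAR].prod(axis=1)                          # their probabilities under Q^n

def rcu(outer=5000):
    tot = 0.0
    for _ in range(outer):
        x = rng.choice(2, size=n, p=Q)                        # sent codeword
        y = np.array([rng.choice(2, p=W[xi]) for xi in x])    # received sequence
        llr = np.log(W[XBAR, y]).sum(axis=1) - np.log(W[x, y]).sum()
        q = wbar[llr >= 0].sum()   # exact P[competitor at least as likely]
        tot += min(1.0, (M - 1) * q)
    return tot / outer

print("RCU bound estimate:", rcu())
```

Note that the event `llr >= 0` counts ties fully against the decoder, whereas under uniform tie-breaking a tie causes an error only with some probability; the finer tie handling in (10) and Lemma 1 is one source of the vanishing relative error.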

Lemma 1.

It holds for any $n$ that

We prove this lemma in Appendix E. We see from this lemma that the error probability can be approximated by the first term of (10) under some regularity condition.

Next we consider the evaluation of the two terms of (10). We use Lemma 2 in the following as a fundamental tool of the proof. Let $V_1, V_2, \dots$ be (possibly not identically distributed) independent lattice random variables such that the greatest common divisor of their spans is $d > 0$. (The greatest common divisor of a set $A$ is defined as the maximum $d$ such that every element of $A$ is an integer multiple of $d$, and is defined as $0$ if such $d$ does not exist.) Define the partial sum

$$S_n = \sum_{i=1}^n V_i.$$

Then its large deviation probability is evaluated as follows.

Lemma 2.

Fix a level $a$ above the mean of $S_n/n$ and define the tilting parameter $\theta_a$ as the solution of the associated saddlepoint equation. Let $\epsilon > 0$ and a compact set of parameters be arbitrary. Then there exists $n_0$ such that upper and lower bounds on $\mathrm{P}[S_n \ge na]$ matching up to a factor $1 + \epsilon$

hold for all $n \ge n_0$ and all $a$ satisfying the required uniformity conditions.

The proof of this lemma is largely the same as that of [17, Thm. 3.7.4] for the i.i.d. case and is given in Appendix B.
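A numerical sanity check in the spirit of Lemma 2, specialized to the i.i.d. case with an arbitrary span-1 lattice distribution: the sketch compares the exact tail of $S_n$, computed by repeated convolution, with the tilted lattice approximation $\mathrm{e}^{-n(\theta a - \Lambda(\theta))} / ((1 - \mathrm{e}^{-\theta})\sqrt{2\pi n \sigma_\theta^2})$, whose ratio to the exact value tends to one as the two-sided bounds of the lemma suggest.

```python
import numpy as np
from scipy.optimize import brentq

vals = np.array([0, 1, 3])          # span-1 lattice support (gcd of gaps is 1)
probs = np.array([0.5, 0.3, 0.2])
a = 1.6                             # threshold above the mean 0.9

def cgf(t):                         # cumulant generating function Lambda(t)
    return np.log((probs * np.exp(t * vals)).sum())

def tilted_mean(t):
    w = probs * np.exp(t * vals)
    return (w * vals).sum() / w.sum()

theta = brentq(lambda t: tilted_mean(t) - a, 0.0, 10.0)   # saddlepoint equation
w = probs * np.exp(theta * vals); w /= w.sum()
var = (w * vals ** 2).sum() - a ** 2                      # tilted variance
rate = theta * a - cgf(theta)                             # Legendre transform

pmf = np.zeros(vals.max() + 1); pmf[vals] = probs
for n in [50, 100, 200]:
    dist = np.array([1.0])
    for _ in range(n):              # exact law of S_n by repeated convolution
        dist = np.convolve(dist, pmf)
    exact = dist[round(n * a):].sum()    # P[S_n >= na]; na is a lattice point here
    approx = np.exp(-n * rate) / ((1 - np.exp(-theta)) * np.sqrt(2 * np.pi * n * var))
    print(n, exact, approx / exact)      # the ratio tends to one
```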

Let the parameters satisfy the conditions of Lemma 2. To apply Lemma 2 we consider the following sets to formulate regularity conditions:

where the two function classes are subsets of the spaces of continuous functions on the relevant compact domains, and the radius is a constant determined from Lemma 2.

We define the regularity event as

where we regard the relevant empirical quantity as a function of the tilting parameter. Under this condition we can bound the excess probability of the likelihood of each codeword given the sent codeword and the received sequence as follows.

Lemma 3.

Let $\epsilon > 0$ be arbitrary and let the constant in the definition of the regularity event be sufficiently small with respect to $\epsilon$. Then, there exists $n_0$ such that under this event it holds for all $n \ge n_0$ that

Proof.

Note that the bounds required in Lemma 2 hold uniformly on the regularity event by its definition and (3). From the convexity of the cumulant function in the tilting parameter, if we set the parameter to the empirical optimizer then the exponent is minimized at a point in the prescribed neighborhood, with

Thus the lemma follows from Lemma 2. ∎

Next we define

where

Then the error probability can be evaluated as follows.

Lemma 4.

Fix the coding rate and assume that the same condition as in Lemma 3 holds. Then, for all sufficiently large $n$,

This lemma is straightforward from Lemmas 1 and 3. We use the following lemma to evaluate the contribution of the complementary event.

Lemma 5.

Let the notation be as above. Then

(11)
(12)

Furthermore, for sufficiently large $n$ and sufficiently small constants in the definitions above, we have

We prove this lemma in Appendix C. The proof is obtained from Cramér's theorem for general topological vector spaces [17, Theorem 6.1.3] together with the fact that the relevant function spaces are separable Banach spaces under the max norm.

Proof of Theorem 1.

From Lemma 4, it holds for sufficiently small parameters and all sufficiently large $n$ that

Thus we obtain from Lemma 5 that