Variations on a theme by Schalkwijk and Kailath

Robert G. Gallager       Barış Nakiboğlu
July 3, 2019
Abstract

Schalkwijk and Kailath (1966) developed a class of block codes for Gaussian channels with ideal feedback for which the probability of decoding error decreases as a second-order exponent in block length for rates below capacity. This well-known but surprising result is explained and simply derived here in terms of a result by Elias (1956) concerning the minimum mean-square distortion achievable in transmitting a single Gaussian random variable over multiple uses of the same Gaussian channel. A simple modification of the Schalkwijk-Kailath scheme is then shown to have an error probability that decreases with an exponential order which is linearly increasing with block length. In the infinite bandwidth limit, this scheme produces zero error probability using bounded expected energy at all rates below capacity. A lower bound on error probability for the finite bandwidth case is then derived in which the error probability decreases with an exponential order which is linearly increasing in block length at the same rate as the upper bound.

I Introduction

This note describes coding and decoding strategies for discrete-time additive memoryless Gaussian-noise (DAMGN) channels with ideal feedback. It was shown by Shannon [14] in 1961 that feedback does not increase the capacity of memoryless channels, and by Pinsker [10] in 1968 that fixed-length block codes on Gaussian-noise channels with feedback cannot exceed the sphere-packing bound if the energy per codeword is bounded independently of the noise realization. It is clear, however, that reliable communication can be simplified by the use of feedback, as illustrated by standard automatic-repeat strategies at the data link control layer. There is a substantial literature (for example [11], [3], [9]) on using variable-length strategies to greatly improve the rate of exponential decay of error probability with expected coding constraint length. These strategies essentially use the feedback to coordinate postponement of the final decision when the noise would otherwise cause errors. Thus small error probabilities can be achieved through the use of occasional long delays, while keeping the expected delay small.

For DAMGN channels an additional mechanism for using feedback exists whereby the transmitter can transmit unusually large amplitude signals when it observes that the receiver is in danger of making a decoding error. The power (i.e., the expected squared amplitude) can be kept small because these large amplitude signals are rarely required. In 1966, Schalkwijk and Kailath [13] used this mechanism in a fixed-length block-coding scheme for infinite bandwidth Gaussian noise channels with ideal feedback. They demonstrated the surprising result that the resulting probability of decoding error decreases as a second order exponential¹ in the code constraint length at all transmission rates less than capacity. [¹ For integer $k \ge 1$, the $k$th order exponent function is defined as $\exp_k(x) = \exp(\exp(\cdots\exp(x)))$ with $k$ repetitions of $\exp$. A function $f(n)$ is said to decrease as a $k$th order exponential if $f(n) \le 1/\exp_k(bn)$ for some constant $b > 0$ and all sufficiently large $n$.] Schalkwijk [12] extended this result to the finite bandwidth case, i.e., DAMGN channels. Later, Kramer [8] (for the infinite bandwidth case) and Zigangirov [15] (for the finite bandwidth case) showed that the above doubly exponential bounds could be replaced by $k$th order exponential bounds for any $k$ in the limit of arbitrarily large block lengths. Later encoding schemes inspired by the Schalkwijk and Kailath approach have been developed for multi-user communication with DAMGN [16], [17], [18], [19], [20], secure communication with DAMGN [21], and point-to-point communication for Gaussian noise channels with memory [22].

The purpose of this paper is three-fold. First, the existing results for DAMGN channels with ideal feedback are made more transparent by expressing them in terms of a 1956 paper by Elias on transmitting a single signal from a Gaussian source via multiple uses of a DAMGN channel with feedback. Second, using an approach similar to that of Zigangirov in [15], we strengthen the results of [8] and [15], showing that the error probability can be made to decrease with an exponential order that increases linearly with block length. Third, a lower bound is derived. This lower bound decreases with an exponential order that also increases linearly in block length, with the same linear coefficient as the upper bound up to a sublinear² correction. [² I.e., a function $g(n)$ with $\lim_{n \to \infty} g(n)/n = 0$.]

Neither this paper nor the earlier results in [12], [13], [8], and [15] are intended to be practical. Indeed, these second and higher order exponents require unbounded amplitudes (see [10], [2], [9]). Also, Kim et al. [7] have recently shown that if the feedback is ideal except for additive Gaussian noise, then the error probability decreases only as a single exponential in block length, although the exponent increases with increasing signal-to-noise ratio in the feedback channel. Thus our purpose here is simply to provide increased understanding of the ideal conditions assumed.

We first review the Elias result [4] and use it to get an almost trivial derivation of the Schalkwijk and Kailath results. The derivation yields an exact expression for error probability, optimized over a class of algorithms including those in [12], [13]. The linear processing inherent in that class of algorithms is relaxed to obtain error probabilities that decrease with block length at a rate much faster than an exponential order of 2. Finally a lower bound to the probability of decoding error is derived. This lower bound is first derived for the case of two codewords and is then generalized to arbitrary rates less than capacity.

II The feedback channel and the Elias result

Let $X_1, X_2, \ldots, X_n$ represent successive inputs to a discrete-time additive memoryless Gaussian-noise (DAMGN) channel with ideal feedback. That is, the channel outputs satisfy $Y_i = X_i + Z_i$, where $Z = (Z_1, \ldots, Z_n)$ is an $n$-tuple of statistically independent Gaussian random variables, each with zero mean and variance $\sigma^2$, denoted $\mathcal{N}(0, \sigma^2)$. The channel inputs are constrained by a given average power $S$ in the sense that they must satisfy the second-moment constraint

$$\frac{1}{n} \sum_{i=1}^{n} \mathsf{E}[X_i^2] \;\le\; S. \tag{1}$$

Without loss of generality, we take $\sigma^2 = 1$. Thus $S$ is both a power constraint and a signal-to-noise ratio constraint.

A discrete-time channel is said to have ideal feedback if each output $Y_i$, $1 \le i < n$, is made known to the transmitter in time to generate input $X_{i+1}$ (see Figure 1). Let $\Theta$ be the random source symbol to be communicated via this $n$-tuple of channel uses. Then each channel input $X_i$ is some function of the source $\Theta$ and the previous outputs $Y_1, \ldots, Y_{i-1}$. Assume (as usual) that the noise $Z$ is statistically independent of $\Theta$.

Fig. 1: The setup for $n$ channel uses per source use with ideal feedback.

Elias [4] was interested in the situation where the source $\Theta$ is a Gaussian random variable rather than a discrete message. For $n = 1$, the rate-distortion bound (with a mean-square distortion measure) is achieved without coding or feedback. For $n > 1$, attempts to map $\Theta$ into an $n$-dimensional channel input in the absence of feedback involve non-linear or twisted modulation techniques that are ugly at best. Using the ideal feedback, however, Elias constructed a simple and elegant procedure for using the $n$ channel symbols to send $\Theta$ in such a way as to meet the rate-distortion bound with equality.

Let $E_i$ be an arbitrary choice of energy, i.e., second moment, for each input $X_i$, $1 \le i \le n$. It will be shown shortly that the optimal choice, subject to (1), is $E_i = S$ for each $i$. Elias's strategy starts by choosing the first transmitted signal $X_1$ to be a linear scaling of the source variable $\Theta$, scaled to meet the second-moment constraint, i.e., $X_1 = \sqrt{E_1/\sigma_\Theta^2}\,\Theta$, where $\sigma_\Theta^2$ is the variance of $\Theta$.

At the receiver, the minimum mean-square error (MMSE) estimate of $X_1$ is $\hat{X}_1 = \frac{E_1}{E_1+1}\,Y_1$, and the error in that estimate is $X_1 - \hat{X}_1$. It is more convenient to keep track of the MMSE estimate of $\Theta$ and the error in that estimate. Since $\Theta$ and $X_1$ are the same except for the scale factor $\sqrt{E_1/\sigma_\Theta^2}$, these are given by

$$\hat{\Theta}_1 = \sqrt{\frac{\sigma_\Theta^2}{E_1}}\;\frac{E_1}{E_1+1}\,Y_1, \tag{2}$$

$$U_1 = \Theta - \hat{\Theta}_1, \tag{3}$$

where $U_1$ is the estimation error and $\mathsf{E}[U_1^2] = \sigma_\Theta^2/(1+E_1)$.

Using the feedback, the transmitter can calculate the error term $U_1$ at time 2. Elias's strategy is to use $U_1$ as the source signal (without a second-moment constraint) for the second transmission. This unconstrained signal is then linearly scaled to meet the second-moment constraint $E_2$ for the second transmission. Thus the second transmitted signal is given by

$$X_2 = \sqrt{\frac{E_2}{\mathsf{E}[U_1^2]}}\; U_1.$$

We use this notational device throughout, referring to the unconstrained source signal to be sent at time $i$ by $U_{i-1}$, and to the linear scaling of $U_{i-1}$, scaled to meet the second-moment constraint $E_i$, as $X_i$.

The receiver calculates the MMSE estimate $\hat U_1$ of $U_1$, and the transmitter then calculates the error in this estimate, $U_2 = U_1 - \hat U_1$. Note that

$$\Theta - (\hat\Theta_1 + \hat U_1) = U_1 - \hat U_1 = U_2.$$

Thus $U_2$ can be viewed as the error arising from estimating $\Theta$ by $\hat\Theta_2 = \hat\Theta_1 + \hat U_1$. The receiver continues to update its estimate of $\Theta$ on subsequent channel uses, and the transmitter continues to transmit linearly scaled versions of the current estimation error. Then the general expressions, for $1 \le i \le n$, are as follows:

$$X_i = \sqrt{\frac{E_i}{\mathsf{E}[U_{i-1}^2]}}\; U_{i-1}, \tag{4}$$

$$\hat U_{i-1} = \sqrt{\frac{\mathsf{E}[U_{i-1}^2]}{E_i}}\;\frac{E_i}{E_i+1}\, Y_i, \tag{5}$$

$$U_i = U_{i-1} - \hat U_{i-1}, \tag{6}$$

where $U_0 = \Theta$ and $\hat\Theta_i = \hat\Theta_{i-1} + \hat U_{i-1}$.

Iterating on equation (6) from $i = 1$ to $i = n$ yields

(7)

Similarly, iterating on , we get

(8)

This says that the error arising from estimating $\Theta$ by $\hat\Theta_n$ is $U_n$. This is valid for any (non-negative) choice of $E_1, \ldots, E_n$, and the resulting mean-square error is minimized, subject to $\sum_i E_i \le nS$, by $E_i = S$ for each $i$. With this optimal assignment, the mean-square estimation error in $\Theta$ after $n$ channel uses is

$$\mathsf{E}[U_n^2] = \sigma_\Theta^2\,(1+S)^{-n}. \tag{9}$$

We now show that this is the minimum mean-square error over all ways of using the channel. The rate-distortion function for this Gaussian source with a squared-difference distortion measure is well known to be

$$R(d) = \frac{1}{2} \ln \frac{\sigma_\Theta^2}{d}.$$

This is the minimum mutual information, over all channels, required to achieve a mean-square error (distortion) equal to $d$. For $d = \sigma_\Theta^2(1+S)^{-n}$, $R(d)$ is $\frac{n}{2}\ln(1+S)$, which is the capacity of this channel over $n$ uses (it was shown by Shannon [14] that feedback does not increase the capacity of memoryless channels). Thus the Elias scheme actually meets the rate-distortion bound with equality, and no other coding system, no matter how complex, can achieve a smaller mean-square error. Note that (9) is also valid in the degenerate case. What is surprising about this result is not so much that it meets the rate-distortion bound, but rather that the mean-square estimation error goes down geometrically with $n$. It is this property that leads directly to the doubly exponential error probability of the Schalkwijk-Kailath scheme.
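The Elias recursion is easy to simulate. The sketch below is ours (variable names and all parameter choices are illustrative, with a unit-variance source and $\sigma^2 = 1$); it checks that the empirical mean-square error after $n$ uses tracks the geometric value $\sigma_\Theta^2(1+S)^{-n}$ of (9).

```python
import numpy as np

# Sketch of the Elias scheme: one Gaussian source sample is sent over n
# uses of a unit-variance AWGN channel with ideal feedback, spending
# energy S per use.  Each round transmits the current estimation error,
# scaled to energy S; the MMSE update shrinks its variance by 1/(1+S).
def elias(theta, n, S, rng, var0=1.0):
    est, err, v = 0.0, theta, var0      # estimate, error, error variance
    for _ in range(n):
        y = np.sqrt(S / v) * err + rng.normal()   # one channel use
        u_hat = np.sqrt(S * v) / (S + 1.0) * y    # MMSE estimate of err
        est, err, v = est + u_hat, err - u_hat, v / (1.0 + S)
    return est, v

rng = np.random.default_rng(0)
n, S, trials = 8, 1.0, 50_000
thetas = rng.normal(size=trials)
errs = np.array([t - elias(t, n, S, rng)[0] for t in thetas])
print(np.mean(errs ** 2), (1.0 + S) ** -n)   # empirical MSE vs. (1+S)^-n
```

The tracked variance returned by the function is exactly $\text{var0}\,(1+S)^{-n}$, and the Monte Carlo average confirms that the realized errors have that variance.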

III The Schalkwijk-Kailath scheme

The Schalkwijk and Kailath (SK) scheme will now be defined in terms of the Elias scheme,³ still assuming the discrete-time channel model of Figure 1 and the power constraint of (1). [³ The analysis here is tutorial and was carried out in slightly simplified form in [5, p. 481]. A very readable further simplified analysis is in [23].] The source is a set of $M$ equiprobable symbols, denoted by $\{1, \ldots, M\}$. The channel uses will now be numbered from 0 to $n$, since the use at time 0 will be quite distinct from the others. The source signal $U$ is a standard $M$-PAM modulation of the source symbol. That is, each symbol $m$, $1 \le m \le M$, from the source alphabet is mapped into the signal $u_m = m - \frac{M+1}{2}$. Thus the signals in $\{u_1, \ldots, u_M\}$ are symmetric around 0 with unit spacing. Assuming equiprobable symbols, the second moment of $U$ is $\frac{M^2-1}{12}$. The initial channel input $X_0$ is a linear scaling of $U$, scaled to have an energy $E_0$ to be determined later. Thus $X_0$ is an $M$-PAM encoding, with signal separation $\alpha$:

$$X_0 = \alpha\, U, \qquad \alpha = \sqrt{\frac{12\, E_0}{M^2 - 1}}. \tag{10}$$

The received signal $Y_0 = X_0 + Z_0$ is fed back to the transmitter, which, knowing $X_0$, determines $Z_0$. In the following $n$ channel uses, the Elias scheme is used to send the Gaussian random variable $Z_0$ to the receiver, thus reducing the effect of the noise on the original transmission. After the $n$ transmissions to convey $Z_0$, the receiver combines its estimate of $Z_0$ with $Y_0$ to get an estimate of $X_0$, from which the $M$-ary signal is detected.

Specifically, the transmitted and received signals for times $1 \le i \le n$ are given by equations (4), (5) and (6). At time 1, the unconstrained signal is $U_0 = Z_0$, which has unit variance. Thus the transmitted signal is $X_1 = \sqrt{E_1}\, Z_0$, where the second moment $E_1$ is to be selected later. We choose $E_i = E_1$ for $2 \le i \le n$ for optimized use of the Elias scheme, and thus the power constraint in (1) becomes $E_0 + nE_1 \le (n+1)S$. At the end of transmission $n$, the receiver's estimate $\hat Z_0$ of $Z_0$ from $Y_1, \ldots, Y_n$ is given by (7).

The error in this estimate, $Z_0 - \hat Z_0$, is a zero-mean Gaussian random variable with variance $\sigma_n^2$, where $\sigma_n^2$ is given by (9) to be

$$\sigma_n^2 = (1 + E_1)^{-n}. \tag{11}$$

Since $Y_0 = \alpha U + Z_0$ and $\hat Z_0$ is known at the receiver, we have

$$\tilde Y \;\triangleq\; Y_0 - \hat Z_0 = \alpha U + \tilde Z, \tag{12}$$

where $\tilde Z = Z_0 - \hat Z_0$.

Note that $\tilde Z$ is a function of the noise vector $(Z_0, Z_1, \ldots, Z_n)$ and is thus statistically independent⁴ of $U$. [⁴ Furthermore, for the given feedback strategy, Gaussian estimation theory can be used to show, first, that $\tilde Z$ is independent of $U$, and, second, that $\tilde Y$ is a sufficient statistic for $U$ based on $Y_0, Y_1, \ldots, Y_n$. Thus this detection strategy is not as ad hoc as it might initially seem.] Thus, detecting $U$ from $\tilde Y$ (which is known at the receiver) is the simplest of classical detection problems, namely that of detecting an $M$-PAM signal from the signal plus an independent Gaussian noise variable $\tilde Z$. Using maximum likelihood detection, an error occurs only if $|\tilde Z|$ exceeds half the distance between signal points, i.e., if $|\tilde Z| > \alpha/2$. Since the variance of $\tilde Z$ is $\sigma_n^2$, the probability of error is given by⁵ [⁵ The coefficient $2(1 - 1/M)$ in (13) arises because the largest and smallest signals each have only one nearest neighbor, whereas all other signals have two nearest neighbors.]

$$\Pr(e) = 2\left(1 - \frac{1}{M}\right) Q(a), \tag{13}$$

where $a = \alpha/(2\sigma_n)$ and $Q$ is the complementary distribution function of $\mathcal{N}(0,1)$, i.e.,

$$Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz. \tag{14}$$

Choosing $E_0$ and $E_1$, subject to $E_0 + nE_1 \le (n+1)S$, to maximize $a$ (and thus minimize $\Pr(e)$), we get $E_0 = E_1 + 1$. That is, if $(n+1)S$ is less than 1, all the energy is used to send $X_0$ and the feedback is unused. We assume $(n+1)S > 1$ in what follows, since for any given $S$ this holds for large enough $n$. In this case, $E_0$ is one unit larger than $E_1$, leading to

$$E_1 = S - \frac{1}{n+1}, \qquad E_0 = S + \frac{n}{n+1}. \tag{15}$$

Substituting (15) into (13),

$$\Pr(e) = 2\left(1 - \frac{1}{M}\right) Q(a), \tag{16}$$

where

$$a = \sqrt{\frac{3\left(S + \frac{n}{n+1}\right)}{M^2 - 1}}\;\left(1 + S - \frac{1}{n+1}\right)^{n/2}.$$

This is an exact expression for the error probability, optimized over the energy distribution, using $M$-PAM followed by the Elias scheme and ML detection. It can be simplified as an upper bound by replacing the coefficient $(1 - 1/M)$ by 1. Also, since $Q$ is a decreasing function of its argument, $\Pr(e)$ can be further upper bounded by replacing $M^2 - 1$ by $M^2$. Thus,

$$\Pr(e) \le 2\, Q(\bar a), \tag{17}$$

where

$$\bar a = \frac{1}{M}\sqrt{3\left(S + \frac{n}{n+1}\right)}\;\left(1 + S - \frac{1}{n+1}\right)^{n/2}.$$

For large $\bar a$, which is the case of interest, the above bound is very tight and is essentially an equality, as first derived by Schalkwijk⁶ in Eq. 12 of [12]. [⁶ Schalkwijk's work was independent of Elias's. He interpreted the steps in the algorithm as successive improvements in an estimate of the message rather than as estimates of the initial noise.] Recalling that $E_1 = S - \frac{1}{n+1}$, we can further lower bound $\bar a$ (thus upper bounding $\Pr(e)$). Substituting $M = e^{(n+1)R}$ and $C = \frac{1}{2}\ln(1+S)$, we get

(18)

The term in brackets is decreasing in $n$. Thus,

(19)
(20)

Using this together with equations (17) and (18) we get,

(21)

or more simply yet,

(22)

Note that for $R < C$, $\Pr(e)$ decreases as a second order exponential in $n$.

In summary, then, we see that the use of standard $M$-PAM at time 0, followed by the Elias algorithm over the next $n$ transmissions, followed by ML detection, gives rise to a probability of error that decreases as a second-order exponential in $n$ for all $R < C$. Also, $\Pr(e)$ satisfies (21) and (22) for all such rates.
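The complete chain (M-PAM at time 0, Elias transmission of the time-0 noise, ML detection) can be checked end to end by simulation. The energy split below ($E_0 = 1$ and $S$ per later use) is a simple choice of ours, not the optimized allocation of (15), and the variable names are illustrative.

```python
import numpy as np

# Monte Carlo sketch of the SK scheme: M-PAM at time 0, then n uses of
# the Elias scheme convey the time-0 noise z0 back to the receiver.
def sk_trial(m, M, n, E0, S, rng):
    alpha = np.sqrt(12.0 * E0 / (M * M - 1.0))    # PAM separation
    u = m - (M + 1) / 2.0                         # unit-spaced PAM point
    z0 = rng.normal()
    y0 = alpha * u + z0
    est, err, v = 0.0, z0, 1.0                    # Elias phase on z0
    for _ in range(n):
        y = np.sqrt(S / v) * err + rng.normal()
        u_hat = np.sqrt(S * v) / (S + 1.0) * y
        est, err, v = est + u_hat, err - u_hat, v / (1.0 + S)
    decision = round((y0 - est) / alpha + (M + 1) / 2.0)
    return min(max(decision, 1), M)               # clip to the message set

rng = np.random.default_rng(1)
M, n, E0, S, trials = 4, 10, 1.0, 1.0, 20_000
msgs = rng.integers(1, M + 1, size=trials)
errors = sum(sk_trial(int(m), M, n, E0, S, rng) != m for m in msgs)
print(errors)
```

With these numbers the residual noise standard deviation is $2^{-5}$ while the half-spacing is about 0.45, so decoding errors are (second-order-exponentially) rare and the simulation typically observes none.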

Although $\Pr(e)$ decreases as a second-order exponential with this algorithm, the algorithm does not minimize $\Pr(e)$ over all algorithms using ideal feedback. The use of standard $M$-PAM at time 0 could be replaced by PAM with non-equal spacing of the signal points for a modest reduction in $\Pr(e)$. Also, as shown in the next section, allowing transmissions 1 to $n$ to make use of the discrete nature of $U$ allows for a major reduction in $\Pr(e)$.⁷ [⁷ Indeed, Zigangirov [15] developed an algorithm quite similar to that developed in the next section. The initial phase of that algorithm is very similar to the algorithm [12] just described, with the following differences. Instead of starting with standard $M$-PAM, [15] starts with a random ensemble of non-equally-spaced $M$-PAM codes ingeniously arranged to form a Gaussian random variable. The Elias scheme is then used, starting with this Gaussian random variable. Thus the algorithm in [15] has different constraints than those above. It turns out to have an insignificantly larger error probability (over this phase) than the algorithm here in one parameter range and an insignificantly smaller one otherwise.]

The algorithm above, however, does have the property that it is optimal among schemes in which, first, standard PAM is used at time 0 and, second, for each $i$, $1 \le i \le n$, $X_i$ is a linear function of $Z_0$ and $Y_1, \ldots, Y_{i-1}$. The reason for this is that $Z_0$ and $\hat Z_0$ are then jointly Gaussian, and the Elias scheme minimizes the mean-square error in estimating $Z_0$ and thus also minimizes $\Pr(e)$.

III-A Broadband analysis

Translating these results to a continuous-time formulation where the channel is used $2W$ times per second,⁸ the capacity (in nats per second) is $C = W \ln(1 + P/(W N_0))$. [⁸ This is usually referred to as a channel bandlimited to $W$. This is a harmless and universally used abuse of the word bandwidth for channels without feedback, and refers to the ability to satisfy the Nyquist criterion with arbitrarily little power sent out of band. It is more problematic with feedback, since it assumes that the sum of the propagation delay, the duration of the transmit pulse, the duration of the matched filter at the receiver, and the corresponding quantities for the feedback, is at most $1/(2W)$. Even allowing for a small fraction of out-of-band energy, this requires considerably more than bandwidth $W$.] Letting $S = P/(2W)$ and letting $R$ be the rate in nats per second, this formula becomes

(23)

Let $P$ be the continuous-time power constraint, so that $S = P/(2W)$ in the normalized discrete-time model. In the broadband limit as $W \to \infty$ for fixed $P$, the capacity approaches a finite limit. Since (23) applies for all $W$, we can simply go to the broadband limit $W \to \infty$. Since the algorithm is basically a discrete-time algorithm, however, it makes more sense to view the infinite-bandwidth limit as a limit in which the number of available degrees of freedom increases faster than linearly with the constraint time. In this case, the signal-to-noise ratio per degree of freedom, $S$, goes to 0 with increasing $n$. Rewriting the argument of $Q$ in (17) for this case,

(24)
(25)

where a standard inequality was used. Note that if the number of degrees of freedom increases quadratically with the constraint time, then the term is simply a constant which becomes negligible as the coefficient on the quadratic becomes large. For example, with a large enough quadratic coefficient, this term is negligible and (25) simplifies to

(26)

This is essentially the same as the broadband SK result (see the final equation in [13]). The result in [13] used $n$ degrees of freedom, but chose the subsequent energy levels to be decreasing harmonically, thus slightly weakening the coefficient of the result. The broadband result is quite insensitive to the energy levels used for each degree of freedom,⁹ so long as they are near the optimizing choice. This partly explains why the harmonic choice of energy levels in [13] comes reasonably close to the optimum result. [⁹ To see this, replace the product term in (13) by its term-by-term form, each term of which can be lower bounded by a standard inequality.]
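Numerically, the finite-bandwidth capacity rises monotonically to its broadband limit. The snippet below uses the standard bandlimited-AWGN formula $C(W) = W \ln(1 + P/(W N_0))$ nats per second (notation ours) and shows the approach to $P/N_0$.

```python
import math

# C(W) = W ln(1 + P/(W N0)) increases to the broadband limit P/N0.
P, N0 = 1.0, 1.0
caps = [W * math.log(1.0 + P / (W * N0)) for W in (1, 10, 100, 1_000, 10_000)]
print(caps)   # approaches P/N0 = 1 from below
```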

IV An alternative PAM scheme in the high signal-to-noise regime

In the previous section, Elias's scheme was used to allow the receiver to estimate the noise originally added to the PAM signal at time 0. This gave rise to an equivalent observation, with attenuated noise, as given in (12). The geometric attenuation of the noise variance $\sigma_n^2$ with $n$ is the reason why the error probability in the Schalkwijk and Kailath (SK) [13] scheme decreases as a second order exponential in time.

In this section, we explore an alternative strategy that is again based on the use of $M$-PAM at time 0, but is quite different from the SK strategy at times 1 to $n$. The analysis is restricted to situations in which the signal-to-noise ratio (SNR) at time 0 is so large that the distance between successive PAM signal points in $X_0$ is large relative to the standard deviation of the noise. In this high SNR regime, a simpler and more effective strategy than the Elias scheme suggests itself (see Figure 2). This new strategy is limited to the high SNR regime, but Section V develops a two-phase scheme that uses the SK strategy for the first part of the block, and switches to this new strategy when the SNR is sufficiently large.

In this new strategy for the high SNR regime, the receiver makes a tentative ML decision at time 0. As seen in the figure, that decision is correct unless the noise exceeds half the distance to either the signal value on the right or the left of the sample value of $X_0$. Each of these two events has probability $Q(\Delta/2)$, where $\Delta$ denotes the signal-point separation normalized to the noise standard deviation.

Fig. 2: Given that $u$ is the sample value of the PAM source signal $U$, the sample value of $Y_0$ is $\alpha u + z_0$. The figure illustrates the probability density of $Y_0$ given this conditioning and shows the $M$-PAM signal points for $X_0$ that are neighbors to the sample value. Note that this density is the density of $Z_0$, shifted to be centered at $\alpha u$. Detection using maximum likelihood at this point simply quantizes $Y_0$ to the nearest signal point.

The transmitter uses the feedback to calculate the receiver's tentative decision and chooses the next unconstrained signal $U_1$ (in the absence of a second-moment constraint) to be a shifted version of the original $M$-PAM signal, shifted so that $U_1 = 0$ when the tentative decision equals the original message symbol being transmitted. In other words, $U_1$ is the integer-valued error in the receiver's tentative decision of the message. The corresponding transmitted signal $X_1$ is essentially a scaled version of $U_1$, scaled to the energy $E_1$ allocated to time 1.

We now give an approximate explanation of why this strategy makes sense and how the subsequent transmissions are chosen. This is followed by a precise analysis. Temporarily ignoring the case where the message is one of the two end symbols (i.e., where the signal has only one neighbor), $U_1$ is nonzero with probability approximately $2Q(\Delta/2)$. The probability that $|U_1|$ is two or more is essentially negligible, so $\mathsf{E}[U_1^2] \approx 2Q(\Delta/2)$. Thus

(27)

This means that $X_1$ is not only a shifted version of $U_1$, but (since $\mathsf{E}[U_1^2]$ is exponentially small in $\Delta^2$) is also scaled up by a factor that is exponential in $\Delta^2$ when $\Delta$ is sufficiently large. Thus the separation between adjacent signal points in $X_1$ is exponentially increasing with $\Delta^2$.

This also means that when $X_1$ is transmitted, the situation is roughly the same as that in Figure 2, except that the distance between signal points is increased by a factor exponential in $\Delta^2$. Thus a tentative decision at time 1 will have an error probability that decreases as a second order exponential in $\Delta^2$.

Repeating the same procedure at time 2 will then give rise to a third order exponential in $\Delta^2$, etc. We now turn to a precise analysis and description of the algorithm at times 1 to $n$.

The following lemma provides an upper bound to the second moment of $U_1$, which was approximated in (27).

Lemma IV.1

For any $\Delta > 0$, let $V$ be a $\Delta$-quantization of a normal random variable $U \sim \mathcal{N}(0,1)$ in the sense that for each integer $j$, if $(j - \frac{1}{2})\Delta \le U < (j + \frac{1}{2})\Delta$, then $V = j$. Then $\mathsf{E}[V^2]$ is upper bounded by

(28)

Note from Figure 2 that, aside from a slight exception described below, the tentative-decision error $U_1$ is the same as the $\Delta$-quantization of the time-0 noise, where $\Delta$ is the normalized distance between adjacent signal points. The slight exception is that the tentative decision should always lie between 1 and $M$. If the message is the largest symbol and the noise is large and positive, the ML decision saturates at $M$, whereas the $\Delta$-quantization takes on a larger integer value. There is a similar limit for the smallest symbol. This reduces the magnitude of $U_1$ in the above exceptional cases, and thus reduces the second moment. Thus the bound in the lemma also applies to $U_1$. For simplicity in what follows, we avoid this complication by assuming that the receiver allows its tentative decision to be larger than $M$ or smaller than 1. This increases both the error probability and the energy over true ML tentative decisions, so the bounds also apply to the case with true ML tentative decisions.

Proof:

From the definition of $V$, we see that $V = j$ only if $U \ge (j - \frac{1}{2})\Delta$. Thus, for $j \ge 1$,

$$\Pr(V = j) \le Q\!\left(\left(j - \tfrac{1}{2}\right)\Delta\right).$$

From symmetry, $\Pr(V = -j) = \Pr(V = j)$, so the second moment of $V$ is given by

$$\mathsf{E}[V^2] \le 2 \sum_{j \ge 1} j^2\, Q\!\left(\left(j - \tfrac{1}{2}\right)\Delta\right).$$

Using the standard upper bound $Q(x) \le \frac{1}{2}\, e^{-x^2/2}$, and summing the resulting series, this becomes

(29)
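The exact constant in the lemma's bound is not reproduced above, but the quantities involved are easy to compute. The snippet below (ours) evaluates $\mathsf{E}[V^2]$ exactly as a sum of $Q$-function differences and checks the weaker claim that it decays at least as fast as $e^{-\Delta^2/8}$ for the values tested, consistent with the lemma's exponential decay in $\Delta^2$.

```python
import math

def Q(x):                       # complementary standard-normal CDF
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def second_moment(delta, jmax=200):
    # E[V^2] for the delta-quantization of U ~ N(0,1):
    # V = j exactly when (j - 1/2)*delta <= U < (j + 1/2)*delta.
    return 2.0 * sum(j * j * (Q((j - 0.5) * delta) - Q((j + 0.5) * delta))
                     for j in range(1, jmax + 1))

for delta in (2.0, 3.0, 4.0, 6.0):
    print(delta, second_moment(delta), math.exp(-delta * delta / 8.0))
```

For large $\Delta$ the sum is dominated by the $j = \pm 1$ terms, so $\mathsf{E}[V^2] \approx 2Q(\Delta/2)$.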

We now define the rest of this new algorithm. We have defined the unconstrained signal at time 1 to be but have not specified the energy constraint to be used in amplifying to . The analysis is simplified by defining in terms of a specified scaling factor between and . The energy in is determined later by this scaling. In particular, let

The peculiar expression for above looks less peculiar when expressed as . When is received, we can visualize the situation from Figure 2 again, where now is replaced by . The signal set for is again a PAM set but it now has signal spacing and is centered on the signal corresponding to the transmitted source symbol . The signals are no longer equally likely, but the analysis is simplified if a maximum likelihood tentative decision is again made. We see that where is the -quantization of (and where the receiver again allows to be an arbitrary integer) . We can now state the algorithm for each time , .

(30)
(31)
(32)
(33)

where is the -quantization of .

Lemma IV.2

For each time $k$, $1 \le k \le n$, the algorithm of (30)-(33) satisfies the following for all alphabet sizes $M$ and all message symbols $m$:

(34)
(35)
(36)
(37)

where $\exp_k(\cdot)$ denotes the iterated exponential function with $k$ exponentials.

Proof:

From the definition of in (30),

This establishes the first part of (34) and the inequality follows since and is increasing in .

Next, since , we can use (34) and Lemma IV.1 to see that

where we have canceled the exponential terms, establishing (35).

To establish (36), note that each is increasing as a function of , and thus each is upper bounded by taking to be 4. Then , , and the other terms can be bounded in a geometric series with a sum less than 0.12.

Finally, to establish (37), note that

where we have used Lemma IV.1 in , the fact that in , and equation (34) in and .

We have now shown that, in this high SNR regime, the error probability decreases with time as a $k$th order exponential. The constants involved are somewhat ad hoc, and the details of the derivation are similarly ad hoc. What is happening, as stated before, is that by using PAM centered on the receiver's current tentative decision, one can achieve rapidly expanding signal-point separation with small energy. This is the critical idea driving this algorithm, and in essence this idea was used earlier by Zigangirov [15].¹⁰ [¹⁰ However, unlike the scheme presented above, in Zigangirov's scheme the total amount of energy needed for transmission increases linearly with time.]
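The tentative-decision idea can be illustrated with a toy simulation. Everything numeric below is our own choice rather than the algorithm of (30)-(33): in particular the spacing-growth rule $A_{k+1} = \sqrt{E}\,e^{A_k^2/16}$, which both transmitter and receiver can compute, keeps the expected energy per round below $E$ (since the tentative error is nonzero with probability roughly $2Q(A_k/2) \le e^{-A_k^2/8}$) while the grid spacing grows as an iterated exponential.

```python
import numpy as np

# Toy version of the high-SNR scheme: after an initial PAM round, each
# round re-sends the integer error of the receiver's tentative decision
# on a PAM grid with deterministic spacing A_k known to both sides.
def trial(msg, d0, E, rounds, rng):
    y = msg * d0 + rng.normal()
    decision = round(y / d0)          # tentative decision, left unclipped
    A = d0
    for _ in range(rounds):
        A = np.sqrt(E) * np.exp(A * A / 16.0)     # next grid spacing
        y = A * (msg - decision) + rng.normal()   # re-centered PAM of error
        decision += round(y / A)
    return decision

rng = np.random.default_rng(2)
M, d0, E, rounds, trials = 8, 7.0, 1.0, 2, 20_000
msgs = rng.integers(0, M, size=trials)
wrong = sum(trial(int(m), d0, E, rounds, rng) != m for m in msgs)
print(wrong)
```

With $d_0 = 7$ the round-0 tentative decision is wrong with probability about $2Q(3.5) \approx 10^{-3}$, but the next grid spacing is already $e^{49/16} \approx 21$, so the first correction round fixes essentially every such error; the simulation typically ends with no residual errors.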

V A two-phase strategy

We now combine the Schalkwijk-Kailath (SK) scheme of Section III and the high SNR scheme of Section IV into a two-phase strategy. The first phase, of block length $n_1$, uses the SK scheme. At time $n_1$, the equivalent received signal $\tilde Y$ (see (12)) is used in an ML decoder to detect the original PAM signal in the presence of additive Gaussian noise of variance $\sigma_{n_1}^2$.

Note that if we scale the equivalent received signal by a factor of $1/\sigma_{n_1}$ so as to have an equivalent unit-variance additive noise, we see that the distance between adjacent signal points in the normalized PAM is $2a$, where $a$ is given in (13). If $n_1$ is selected to be large enough that this distance satisfies the high-SNR condition, then this detection at time $n_1$ satisfies the criterion assumed at time 0 of the high SNR algorithm of Section IV. In other words, the SK algorithm not only achieves the error probability calculated in Section III, but also, if the block length $n_1$ of the SK phase is chosen to be large enough, it creates the initial condition for the high SNR algorithm. That is, it provides the receiver and the transmitter at time $n_1$ with the output of a high signal-to-noise-ratio PAM. Consequently, not only is the tentative ML decision at time $n_1$ correct with moderately high probability, but also the probability of the distant neighbors of the decoded message vanishes rapidly.

The intuition behind this two-phase scheme is that the SK algorithm seems to be quite efficient when the signal points are so close (relative to the noise) that the discrete nature of the signal is not of great benefit. When the SK scheme is used enough times, however, the signal points become far apart relative to the noise, and the discrete nature of the signal becomes important. The increased effective distance between the signal points of the original PAM also makes the high SNR scheme feasible. Thus the two-phase strategy switches to the high SNR scheme at this point, and the high SNR scheme drives the error probability to 0 with an exponential order that grows with each additional transmission.

We now turn to the detailed analysis of this two-phase scheme. Note that 5 units of energy must be reserved for phase 2 of the algorithm, so the power constraint for the first phase of the algorithm is correspondingly reduced. For any fixed rate $R < C$, we will find that the number of remaining time units, $n - n_1$, is a linearly increasing function of $n$ and yields an error probability whose exponential order increases linearly with $n$.

V-A The finite-bandwidth case

For the finite-bandwidth case, we assume an overall block length $n$, an overall power constraint $S$, and an overall rate $R < C$. The overall energy available for phase 1 is at least $nS - 5$, since 5 units are reserved for phase 2, so the average power in phase 1 is at least $S - 5/n_1$.

We observed that the distance between adjacent signal points, assuming that signal and noise are normalized to unit noise variance, is twice the parameter $a$ given in (16). Rewriting (16) for the power constraint $S - 5/n_1$,

(38)

where to get the inequality we assumed that $n_1$ is large. We can also show that the multiplicative term is a decreasing function of $n_1$ satisfying

This establishes (38). In order to satisfy the high-SNR condition, it suffices for the right-hand side of (38) to be greater than or equal to the required separation. Letting $\beta = n_1/n$, this condition can be rewritten as

(39)

Define $C(\beta)$ by

$$C(\beta) \;\triangleq\; \frac{\beta}{2} \ln\!\left(1 + \frac{S}{\beta}\right).$$

This is a concave increasing function for $0 < \beta \le 1$ and can be interpreted as the capacity of the given channel if the number of available degrees of freedom is reduced from $n$ to $\beta n$ without changing the available energy per block, i.e., it can be interpreted as the capacity of a continuous-time channel whose bandwidth has been reduced by a factor of $\beta$.
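Since the displayed definition was lost in extraction, the snippet below adopts one natural reading of the stated interpretation (notation and the exact form are our assumption): spreading the block energy $nS$ over $\beta n$ degrees of freedom gives power $S/\beta$ per use, hence capacity $C(\beta) = \frac{\beta}{2}\ln(1 + S/\beta)$ per original channel use. The code checks numerically that this function is increasing and concave on $(0, 1]$.

```python
import math

# Hypothetical reading of C(beta): block energy n*S over beta*n degrees
# of freedom -> power S/beta per use -> (beta/2) ln(1 + S/beta) per
# original channel use.
def C(beta, S=1.0):
    return 0.5 * beta * math.log(1.0 + S / beta)

betas = [0.1 * k for k in range(1, 11)]
vals = [C(b) for b in betas]
diffs = [b - a for a, b in zip(vals, vals[1:])]
print(vals[-1])                                   # C(1) = (1/2) ln(1 + S)
print(all(d > 0 for d in diffs),                  # increasing in beta
      all(d2 < d1 for d1, d2 in zip(diffs, diffs[1:])))   # concave
```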

(40)

This is interpreted in Figure 3.