# Toward Fast Reliable Communication at Rates Near Capacity with Gaussian Noise

###### Abstract

For the additive Gaussian noise channel with an average codeword power constraint, sparse superposition codes with adaptive successive decoding are developed. Codewords are linear combinations of subsets of vectors from a given dictionary, with the message indexed by the choice of subset. A computationally feasible decoding algorithm is presented. Communication is reliable, with error probability exponentially small, for all rates below the Shannon capacity.

## I Introduction

Sparse superposition codes with computationally feasible decoding are shown to achieve exponentially small error probability for any rate below the capacity. A companion presentation [6] gives bounds for optimal least squares decoding.

Code construction is by linear combination of vectors of length n. Let X_1, ..., X_N be a dictionary of N such vectors. Organize it in a matrix X of N columns, partitioned into L sections of size B, a power of 2. Codewords are superpositions Xβ with each section having one term non-zero. The set of such codewords is not closed under linear combination, so these are not linear codes in the algebraic coding sense. Nevertheless, they are fast to encode and decode.

The message is conveyed by the choice of the subset of L terms, with one from each section. From an input bit string u of length K = L log_2 B, encoding is realized by regarding u as a concatenation of L numbers, each with log_2 B bits, specifying the selected columns. The codewords Xβ have power ||Xβ||²/n, which will be near P when averaged across the possible codewords. The received vector is Y = Xβ + ε with ε distributed N(0, σ²I).
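As a concrete illustration of this indexing (a sketch with a hypothetical helper name; the paper specifies only the bit-to-column correspondence), the input bits can be split into one column label per section:

```python
import math

def encode_sections(bits, L, B):
    """Map a K = L*log2(B) input bit string to L column choices,
    one per section of size B (B a power of 2)."""
    b = int(math.log2(B))
    assert B == 1 << b and len(bits) == L * b
    # each consecutive block of log2(B) bits names the selected column
    return [int(bits[i * b:(i + 1) * b], 2) for i in range(L)]

# 16 input bits, 4 sections of size 16: each block "0110" selects column 6
indices = encode_sections("0110011001100110", L=4, B=16)
```

Decoding the subset back into bits simply reverses this concatenation.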

A decoder maps the received vector Y into an estimate of the selected subset. With J being the set of terms sent and Ĵ the decoded set, overall block error is the event that Ĵ ≠ J, and a section error is the event that the term decoded in a given section differs from the one sent. The fraction of section mistakes is the number of sections in error divided by L.

The reliability requirement is that the mistake rate be small with high probability, or that the block error probability be small, averaged over input strings as well as over the distribution of X and ε. The supremum of reliable communication rates is the channel capacity C = (1/2) log_2(1 + P/σ²), as in [29], [11].

The challenge is to achieve arbitrary rates below the capacity, with reliable decoding in manageable computation time. Here communication rates are identified which are moderately close to the capacity, and a fast decoding scheme is devised. It is demonstrated that the probability of there being more than a moderately small fraction of section mistakes is exponentially small in the number of sections L.

The setting adopted is the discrete-time channel with real-valued inputs and outputs and independent Gaussian noise. Standard communication models have been reduced to this setting as in [17], [15], when there is a frequency band constraint with specified noise spectrum. Solution to the coding problem, married to appropriate modulation, is relevant to myriad settings involving transmission over wires or cables for internet, television, or telephone or in wireless radio, TV, phone, satellite or other space communications. Previous standard approaches, as discussed in [15], entail a decomposition into separate problems of modulation, of shaping of a multivariate signal constellation, and of coding. Though there are practical schemes with empirically good performance, theory for practical schemes achieving capacity is lacking. In our analysis, shaping is built directly into the code design.

The entries of X are generated independently from the standard normal distribution. The coefficients β_j are equal to sqrt(P_ℓ) for j in the selected subset, with ℓ the section containing j, and equal to 0 otherwise, with sum of squares matching the power constraint: the section powers P_ℓ sum to P. In the simplest case, the same power P_ℓ = P/L is allocated to each section. We also consider the choice of variable power with P_ℓ proportional to e^(−2Cℓ/L), and a slight variant of this allocation in which the power is variable across most ℓ and then levels for ℓ/L near 1.
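A minimal sketch of the construction under these choices (sizes, power, and noise variance here are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
L, B, n = 8, 16, 64                 # sections, section size, codelength (toy)
P, sigma2 = 7.0, 1.0                # total power and noise variance (assumed)
C = 0.5 * np.log(1 + P / sigma2)    # channel capacity, in nats

# dictionary: n x (L*B) matrix of independent standard normal entries
X = rng.standard_normal((n, L * B))

# variable power allocation: P_l proportional to exp(-2*C*l/L),
# normalized so that the section powers sum to the total power P
w = np.exp(-2 * C * np.arange(1, L + 1) / L)
P_l = P * w / w.sum()
```

The constant allocation is recovered by replacing `w` with a vector of ones, giving P_l = P/L in every section.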

For a rate R code, n = (L log_2 B)/R, so the codelength n and the subset size L agree to within a log factor. Setting B comparable to n is sensible; for a target codelength n, one may set B = n and L = nR/log_2 n. For the best case developed here, the rate is chosen to have a drop from capacity that is near 1/log B, to within a loglog factor. When the signal to noise ratio is large, one finds it desirable to arrange B to be at least as large as the signal to noise ratio to achieve at least a constant fraction of capacity.
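For instance, taking the codelength relation at face value (a hypothetical numeric check, with B set equal to the target codelength):

```python
import math

def codelength(L, B, R):
    """Codelength n = L*log2(B)/R for a rate-R partitioned code."""
    return L * math.log2(B) / R

# L = 1024 sections of size B = 1024 at rate R = 1 bit per channel use
n = codelength(L=1024, B=1024, R=1.0)   # 1024 * 10 / 1 = 10240
```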

Let’s summarize our findings. With constant power allocation, a two-step algorithm and a multi-step improvement reliably achieve rates up to a threshold rate less than capacity. With variable power and order log B steps, we bring the achievable rate up near capacity C, albeit with a larger-order gap from capacity. With the variant in which the power is leveled for ℓ/L near 1, the gap from capacity is reduced to order 1/log B, to within a loglog factor, and, moreover, the section mistake rate is less than a constant times this gap, except in an event of probability exponentially small in L/log B, as we report here. Subsequent to the submission of this conference paper, we have refined this probability bound, obtaining that it is exponentially small in L, or equivalently n/log n, to within a loglog factor, as will be reported in the upcoming journal submission.

The performance, as measured by the gap from capacity at a similar reliability level, is comparable to benchmarks of performance for schemes not demonstrated to be practical, including [6] for least squares decoding of related superposition codes, and [25] for theoretically optimal codes. For a gap from capacity of order Δ, the best error probability is exponentially small in nΔ².

The decoder initially computes, for the received Y, its inner product with each of the N terms in the dictionary, and sees which are above a threshold. Such a set of N inner products and comparisons is performed in parallel by a basic computational unit, e.g. a signal-processing chip with parallel accumulators, in time of order n. These are pipelined so that the inner products are updated in constant time as each element of Y arrives.

The threshold, set high enough that incorrect terms are unlikely to be above it, leads to only a small fraction of terms decoded in any one such step. Additional steps are used to bring the total fraction decoded near 1. These steps take the inner products with residuals of the fit from the terms previously above threshold. A variant of the inner product with residuals is found to be somewhat more amenable to analysis.

The decoder does not predetermine which sections are to be decoded on any one step; rather it adapts the choice in accordance with which terms have inner product observed to be above threshold. Thus we call it adaptive successive decoding.

We determine a function g mapping [0,1] into [0,1], which has the role that if x is a likely fraction of sections correctly decoded from previous steps up to step k, then g(x), slightly adjusted, provides a value of the total fraction of sections likely to be correctly decoded by step k+1. This function depends on the power allocation rule and the choice of rate. A choice of communication rate is acceptable if the function g(x) is greater than x over most of the interval. Such a function is said to be accumulative, allowing the succession of steps to build up a large fraction of correctly decoded sections, with only a small fraction of mistakes remaining. The role of g is illustrated in Figure 1.
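The accumulation property can be seen in a toy iteration (the update function here is hypothetical, standing in for the paper's g; only the property g(x) > x on the interval matters):

```python
def g(x):
    # toy accumulative update: always gains 5% of the remaining fraction
    return x + 0.05 * (1.0 - x)

x, steps = 0.0, 0
while x < 0.9:          # iterate until 90% of sections are (likely) decoded
    x = g(x)
    steps += 1
# after k steps x = 1 - 0.95**k, so 45 steps reach the 0.9 target
```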

Our analysis provides summary formulas for the rate and the target fraction of mistakes that arise from bounding the extent of positivity of g(x) − x. These summary formulas provide proof of a favorable scaling of rate for our scheme at the particular reliability targets, indexed by the size of the code.

Moreover, the function g can be evaluated in detail to choose settings of parameters (such as a and c below). This allows computation of the best communication rate our analysis achieves, for given error probability and target mistake rates.

The parameter a arises in the threshold τ = sqrt(2 log B) + a of the standardized inner products. The parameter c sets the height at which the variable power is leveled, the power being chosen proportional to the maximum of e^(−2Cℓ/L) and a leveling height determined by c.

Allowing power proportional to e^(−2γℓ/L), with 0 ≤ γ ≤ C, for ℓ between 1 and L, interpolates between the constant and variable cases.

Figure 2 plots the rate as a function of B, from optimization of a, c, and the remaining parameters, maintaining the bound on the probability that the fraction of mistakes exceeds its target. Both a fixed signal to noise ratio case and a large-B limit are shown, as well as some results of simulation of the algorithm.

Signed superposition coding, in which the non-zero coefficient in section ℓ takes value ±sqrt(P_ℓ), increases the number of codewords from B^L to (2B)^L with the same reliability bounds, thereby improving the rate by a factor of 1 + 1/log_2 B, above what is shown in Figure 2. Arbitrary term subset coding (without partitioning) is possible, though not as simple, for a further rate improvement by a comparable factor. For this presentation, we focus on the unsigned, partitioned superposition code case.

To prevent block errors, our subset superposition codes combine with error correction codes. The idea is to arrange sufficient distance between the subsets. Consider composition with an outer Reed-Solomon (RS) code of rate near one, for an overall rate equal to the product of the inner and outer rates. The alphabet of the RS code is taken to be of size B. Interpret its codewords as providing the sequence of labels of the terms selected from the sections. The RS codelength is taken to be either B − 1 or B using a standard extension. RS code properties as in [24] guarantee correction of any fraction of section mistakes less than half of one minus the outer rate. For advocacy of code concatenation see [14]. As a consequence of our result for the inner code, the composite code makes no mistakes, except in an event inheriting the exponentially small probability bound.
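The outer-code bookkeeping can be checked numerically (a sketch using the standard Reed-Solomon distance property d = n − k + 1, with illustrative parameters rather than the paper's):

```python
def rs_correctable_fraction(n_rs, k_rs):
    """Fraction of symbol errors an (n_rs, k_rs) Reed-Solomon code corrects:
    minimum distance is n_rs - k_rs + 1, correcting floor((n_rs - k_rs)/2)."""
    return ((n_rs - k_rs) // 2) / n_rs

# an outer rate near one sacrifices little overall rate while
# correcting any symbol-error fraction below roughly (1 - rate)/2
frac = rs_correctable_fraction(n_rs=255, k_rs=230)   # outer rate ~0.90
```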

A fascinating alternative approach is channel polarization [2, 3], which achieves high rates for binary signaling with feasible decoding, with error probability exponentially small in sqrt(n). For our scheme the error probability is exponentially small in L for any fixed rate below capacity, and communication is permitted at higher rates beyond those associated with binary signalling.

Codes empirically demonstrated to be good include low density parity check codes and turbo codes, both with iterative statistical belief propagation decoding, but mathematical proof of performance near capacity is so far limited to special cases such as the binary erasure channel [22, 23].

Another approach to sparse superposition decoding is convex projection with an ℓ1 constraint, arising from analogous problems of statistical learning and signal recovery. Iterative procedures and properties for such projection are in [19], [4], [21], [5], [18], with preliminary findings for communication in [30]. Each iteration would find in each section the term of highest inner product with the residuals and use it to update the convex combination. It is unclear to us whether convex projection for communication can be reliable at rates up to capacity.

The conclusions may be expressed in the language of sparse signal recovery. L terms from a dictionary of N are linearly combined and subject to noise. For such signals, recovery of the terms from the received noisy Y is possible provided the number of observations n is at least of order L log B. Recovery using ℓ1-constrained convex optimization is accurate provided the rate is below a threshold in the equal power case. For our variable power designs, our results establish recovery by other means at higher rates. These findings complement work in [31], [32], [13], [12], [8], [26], [20]. For typical signal recovery problems there is greater freedom of design with non-zero coefficient values regarded as unknown, leading to bounds based on the minimum non-zero signal size, rather than exclusively on the total signal power as in the communication capacity.

Superposition codes began with [10] for the broadcast channel, and later for multiple-access channels [9],[28]. Our purpose of computational feasibility is different from the original purpose of identifying the set of achievable rates. Another connection is the consideration of rate splitting and successive decoding. Our adaptive successive decoding yields feasibility in the single-user case and should work also in multi-user settings.

## II The Decoder

From the received Y and knowledge of the dictionary, decode which terms were sent by an iterative procedure. In the constant power allocation case each term carries equal weight; for the variable power case assign weight proportional to P_ℓ for terms in section ℓ.

First Step: For each term X_j of the dictionary compute the statistic Z_j = X_j^T Y/||Y||. The terms for which the statistic is above a threshold τ are regarded as decoded terms. Denote the associated above-threshold event for term j. The idea of the first step threshold is that very few of the terms not sent will be above threshold. Yet a positive fraction of the terms sent will be above threshold and hence will be correctly decoded on this first step, with an average likely to be at least a positive value as will be quantified.

Let dec_1 be the set of terms decoded on this step. The first step provides the fit formed as the superposition of the columns in dec_1, weighted by their coefficient values.
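A first-step sketch in code (dictionary sizes, coefficient values, and the constant a are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, B, a = 2000, 4, 64, 0.5
X = rng.standard_normal((n, L * B))

sent = np.arange(L) * B                   # say the first term of each section
beta = np.zeros(L * B)
beta[sent] = 3.0                          # large equal coefficients, for illustration
Y = X @ beta + rng.standard_normal(n)     # received vector

Z = (X.T @ Y) / np.linalg.norm(Y)         # standardized inner products
tau = np.sqrt(2 * np.log(B)) + a          # threshold sqrt(2 log B) + a
decoded = set(np.flatnonzero(Z > tau))    # terms regarded as decoded on step one
```

With these strong coefficients every sent term clears the threshold, while terms not sent have roughly standard normal statistics and rarely do.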

Second Step: For each of the remaining terms, form the inner product with the vector of residuals, the difference between Y and the first-step fit; that is, compute X_j^T (Y − fit) or its normalized form. A quantity with similar properties is found to be equally easy to compute and somewhat simpler to analyze. Indeed, compute the part of Y in the direction of the first-step fit and the vector Y_2 which is the part of Y orthogonal to it. For each of the remaining j compute Z_{2,j} = X_j^T Y_2/||Y_2||. Then form the combined test statistic sqrt(λ_1) Z_{1,j} + sqrt(λ_2) Z_{2,j}, with λ_1 + λ_2 = 1, where Z_{1,j} denotes the first-step statistic. The specified λ is chosen to maximize the mean separation between correct and wrong terms. For the two-step version, complete the decoding, in each section not previously decoded, by picking the term for which this statistic is largest, with no need for a second step threshold in that case.

Extension to Multiple Steps: We briefly describe how the algorithm is extended to multiple steps to provide increased reliability. The process initializes with the vectors of terms in the dictionary, with the index set consisting of all the terms. For the first step, Y_1 = Y is the received vector, and the statistics are Z_{1,j} = X_j^T Y_1/||Y_1|| for j in the full index set, with associated above-threshold events.

For the second step the vector Y_2 is formed, which is the part of Y orthogonal to the first-step fit. The set of terms investigated on this step consists of those not decoded on the first step. For each such j, the statistic Z_{2,j} is computed, as well as the combined statistic described above. What is different on the second step is consideration of the events that the combined statistic is above the same threshold τ, leading to an additional part of the fit. The second step provides some increase in separation, without attempting to resolve all sections in two steps.

Proceed, iteratively, to perform the following loop of calculations at each subsequent step k. From the output of step k−1, there is available the partial fit vector, along with the previously stored vectors Y_1, ..., Y_{k−1} and statistics for terms in the previous sets. Plus there is a set of remaining terms for us to consider at step k. From the residual, the difference between Y and the partial fit, one may compute inner products directly. Instead, for simplification of the analysis, compute the part Y_k of Y orthogonal to the previous Y_1, ..., Y_{k−1} and for each remaining j compute

Z_{k,j} = X_j^T Y_k / ||Y_k||

and the combined statistic

sqrt(λ_{1,k}) Z_{1,j} + ... + sqrt(λ_{k,k}) Z_{k,j}, with λ_{1,k} + ... + λ_{k,k} = 1,

where the values of the λ we shall specify are again chosen to maximize a measure of separation between correct and wrong terms. The statistics are similar across steps, entailing empirically determined values of the weights. The statistics are compared to the threshold τ, leading to the above-threshold events. The output of step k is the vector formed from the columns newly above threshold, weighted by their coefficient values, providing the update of the partial fit. Also the vector Y_k and the statistics Z_{k,j} are appended to what was previously stored, at least for the terms remaining under consideration. This step updates the set of decoded terms and the set of terms remaining for further consideration. This completes the actions of step k of the loop. The idea is that on each step we decode a substantial part of what remains, because of growth of the mean separation between terms sent and the others.
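The loop above can be sketched as follows, using plain residuals rather than the orthogonalized statistics (a simplification in the spirit of the residual variant the text mentions; all sizes and coefficient values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, L, B = 4000, 8, 32
X = rng.standard_normal((n, L * B))
sent = np.arange(L) * B + rng.integers(0, B, size=L)   # one term per section
beta = np.zeros(L * B)
beta[sent] = 2.0
Y = X @ beta + rng.standard_normal(n)

tau = np.sqrt(2 * np.log(B)) + 0.5
decoded, fit = set(), np.zeros(n)
for step in range(10):
    r = Y - fit                              # residual of the current fit
    Z = (X.T @ r) / np.linalg.norm(r)        # standardized inner products
    new = set(np.flatnonzero(Z > tau)) - decoded
    if not new:
        break                                # nothing above threshold: stop
    decoded |= new
    cols = sorted(decoded)                   # refit on all decoded terms
    coef, *_ = np.linalg.lstsq(X[:, cols], Y, rcond=None)
    fit = X[:, cols] @ coef
```

Refitting makes the residual orthogonal to already-decoded columns, so previously decoded terms do not retrigger on later steps.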

## III Reliability

Let q_k and f_k be the fractions of correct detections and of false alarms at step k, and let the total false alarm fraction after k steps be the sum of the f_k' for steps k' up to k. For the variable power case, use these as weighted fractions, relative to the total weight of terms sent.

It is not hard to see that the sum of the per-step fractions q_k', less the accumulated false alarms, is a lower bound on the total weighted fraction of correct detections from steps 1 to k.

Let’s specify target false alarm rates that arise in our analysis for each step. For step k, for given threshold parameter a, set the target using the standard normal upper tail at the threshold, and likewise set target values for the accumulated false alarms. Recall that the threshold is τ = sqrt(2 log B) + a. Indeed, it is unlikely that the realized false alarm fraction exceeds its target.
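To see why this choice of threshold keeps false alarms rare, one can bound the standard normal upper tail at τ = sqrt(2 log B) + a (the values of B and a below are assumptions for illustration):

```python
import math

def normal_upper_tail(t):
    """Standard bound: P(N(0,1) > t) <= exp(-t*t/2) / (t * sqrt(2*pi))."""
    return math.exp(-t * t / 2) / (t * math.sqrt(2 * math.pi))

B, a = 512, 0.7
tau = math.sqrt(2 * math.log(B)) + a
# expected false alarms among the B - 1 wrong terms of one section
expected = (B - 1) * normal_upper_tail(tau)
```

The factor e^(−(log B)) from the tail cancels the B − 1 wrong terms per section, leaving an expected false alarm count driven down by the extra margin a.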

Similarly, using distributional properties of the statistics and the function g discussed below, we specify for each step a value q_k* for which we expect that the fraction of correct detections at that step is likely to be at least q_k*. Further quantities are defined recursively from these targets; they are used in setting the weights of combination and in expressing the mean separation between terms sent and terms not sent, the increase of this separation from step to step being what drives the decoder. For establishing reliability, the critical matter is to demonstrate that the accumulated detection fraction grows to a value near 1. The associated definitions involve adjustments by a small positive number, accounting for the gap between likely and target values.

The statistics Z_{k,j} are not normally distributed; nevertheless, it is demonstrated by induction that, on a set of high probability, they are greater than normal random variables which have one mean for terms not sent and a larger, separated mean for terms sent. Across the terms, the joint normal distribution that arises in this construction has a covariance structure for which it is shown that the joint density is not more than a constant times the joint density that would arise if the variables were independent standard normal.

In the constant power case with P_ℓ = P/L, the mean separation can be evaluated explicitly for the terms sent. If it exceeds the threshold by a sufficient margin, then there is room to set the rate just below the corresponding threshold rate, so that if the false alarm target is small enough, the detection target is indeed as large as required for accumulation.

The difference g(x) − x stays above a positive amount for all x in the relevant interval. For the constant power case the positivity holds provided the rate is separated from its threshold value by an amount polynomial in 1/log B, and this gap at the right endpoint is the minimum value over the interval under corresponding conditions on the parameters.

Lemma 1: Suppose g(x) − x is at least a positive amount on an interval with right endpoint x*. Choose a small positive adjustment and arrange the per-step targets accordingly. Then the increase on each step within the interval is at least a corresponding positive amount, obtained as the solution of a quadratic relation, and the number of steps m required so that, on step m, the accumulated target first exceeds x* is bounded in terms of the reciprocal of that per-step increase. At the final step the accumulated target exceeds x*.

We also consider the variable power case. A quantity needed in our analysis is a normalizing sum of the section powers. With P_ℓ proportional to e^(−2Cℓ/L), this sum takes an explicit form whose governing value is near the capacity C, and the associated ratio is near 1 when the rate is near C. In the variable power case, the mean separation of the statistics takes a section-dependent form, and likewise the role of the function g is played by a power-weighted sum over sections of normal distribution functions evaluated at the section-dependent separations. When P_ℓ is proportional to e^(−2Cℓ/L), this sum is at least the value of a nearby integral, which is found to compare favorably to x, to yield the required growth of the accumulated fractions.

Consider the case allowing leveling, in which the power is proportional to e^(−2Cℓ/L) up to the leveling height determined by c, for which the normalizing sum again has an explicit form with governing value near C. The function g is defined as above, with an analogous nearby integral with the leveled allocation in place of the unleveled one. Consider the rate obtained from this construction, in which there are three sources of drop from capacity. Setting a suitably small false alarm rate, so as not to interfere with the accumulation of correct detections, the resulting contribution is of order 1/log B, so all three sources of rate drop, from the threshold parameter a, the leveling parameter c, and the false alarm rate, are of order 1/log B to within a loglog factor. A relevant lemma is the following.

Lemma 2: Let the leveling height be near its optimized value. For any non-negative choices of the parameters, with the rate given above, the difference g(x) − x over the interval of accumulation is minimized at the right endpoint.

The proof is based on an evaluation of the integral, which has an expression in terms of a transformed variable that is one-to-one with x. A particular value of this variable corresponds to a point with favorable properties. Expressing the function in terms of this variable, one makes separate treatment of the behavior below that point, where the function is close to decreasing, and above it, where the function is close to symmetric, slightly skewed to be lower at the right end.

The resulting minimum value is near the amount required. Consider choices of the parameters that approximately optimize the overall rate; at these choices the minimum of g(x) − x at the right endpoint is at least a positive value for the rates considered. Moreover, choosing the threshold parameter such that the false alarm target takes the prescribed value, so that the conditions of Lemma 1 are satisfied, produces a rate drop of the indicated form.

Let’s state the result regarding reliability of the multi-step adaptive successive decoder. The proof is based on the above-mentioned normal approximation bound and a large deviation bound for weighted combinations of Bernoulli random variables, for which one may see the full manuscript [7].

Theorem 3: Suppose the communication rate and power allocation are such that g is accumulative, with g(x) − x positive on the interval up to x*. Pick the targets and threshold such that the conditions of Lemma 1 are satisfied, or more generally arrange the per-step increase to remain positive up to x*. If the penultimate step m − 1 is the first whose accumulated target is at least x*, then the m-step single-dictionary decoder incurs a fraction of errors less than the target, except in an event of probability not more than the sum, for k from 1 to m, of terms exponentially small in the number of sections. Each term is expressed through the Kullback-Leibler divergence between two Bernoulli distributions, evaluated at the per-step realized and target detection and false alarm fractions, multiplied in the exponent by L; the resulting exponent is approximately a constant multiple of L for the designs investigated here.

To produce each step's target from the previous one, one may set a constant per-step difference and invoke the corresponding bound. A preferred tactic, used in producing the curves shown earlier, is at each step to choose the target to produce constancy of the exponent at a prescribed value, equalizing the contributions to the above probability bound from each step.

## Acknowledgment

Creighton Heaukulani is thanked for helpful simulations.

## References

- [2] E. Arikan, “Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Trans. Inform. Theory, v.55, 3051-3073, July 2009.
- [3] E. Arikan and E. Telatar, “On the rate of channel polarization,” Preprint. ArXiv, Jul.2008.
- [4] A.R. Barron, “Universal approximation bounds for superpositions of a sigmoidal function,” IEEE Trans. Inform. Theory, v.39, 930-944, 1993.
- [5] A. Barron, A. Cohen, W. Dahmen and R. DeVore, “Approximation and learning by greedy algorithms,” Ann. Statist., v.36, 64-94, 2008.
- [6] A.R. Barron, A. Joseph, “Least squares superposition coding of moderate dictionary size, reliable at rates up to channel capacity,” Proc. Internat. Symp. Inform. Theory, Austin, Texas, June 2010.
- [7] A.R. Barron, A. Joseph, “Fast Reliable Communication at Rates not Far from Capacity with Gaussian Noise,” Dept. Stat., Yale Univ.
- [8] E. Candès and Y. Plan, “Near-ideal model selection by ℓ1 minimization,” Ann. Statist., v.37, 2145-2177, 2009.
- [9] J. Cao and E.M. Yeh, “Asymptotically optimal multiple-access communication via distributed rate splitting,” IEEE Trans. Inform. Theory, v.53, 304-319, Jan. 2007.
- [10] T.M. Cover, “Broadcast channels,” IEEE Trans. Inform. Theory, v.18, 2-14, 1972.
- [11] T.M. Cover and J.A. Thomas, Elements of Information Theory, New York, Wiley-Interscience, 2006.
- [12] D.L. Donoho, M. Elad, and V.M. Temlyakov, “Stable recovery of sparse overcomplete representations in the presence of noise,” IEEE Trans. Inform. Theory, v.52, 6-18, Jan. 2006.
- [13] A.K. Fletcher, S. Rangan, and V.K. Goyal, “Necessary and sufficient conditions for sparsity pattern recovery,” IEEE Trans. Inform. Theory, v.55, 5758-5773, 2009.
- [14] G.D. Forney, Jr. Concatenated Codes, Research Monograph No.37, Cambridge, Massachusetts, M.I.T. Press. 1966.
- [15] G.D. Forney and G. Ungerboeck, “Modulation and coding for linear Gaussian channels,” IEEE Trans. Inform. Theory, v.44, Oct.1998.
- [16] R.G. Gallager, Low Density Parity-Check Codes, Cambridge, Massachusetts, M.I.T. Press. 1963.
- [17] R.G. Gallager, Information Theory and Reliable Communication, New York, John Wiley and Sons, 1968.
- [18] C. Huang, A.R. Barron, and G.H.L. Cheang, Preprint, 2008.
- [19] L. Jones, “A simple lemma for optimization in a Hilbert space, with application to projection pursuit and neural net training,” Annals of Statistics, v.20, 608-613, 1992.
- [20] A. Karbasi, A. Hormati, S. Mohajer, and M. Vetterli, “Support recovery in compressed sensing: an estimation theoretic approach,” Proc. IEEE Internat. Symp. Inform. Theory, 679-683, Seoul, Korea, June 2009.
- [21] W.S. Lee, P.L. Bartlett, and R.C. Williamson, “Efficient agnostic learning of neural networks with bounded fan-in,” IEEE Trans. Inform. Theory, v.42, 2118-2132, 1996.
- [22] M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi, and D.A. Spielman, “Efficient erasure correcting codes.” IEEE Trans. Inform. Theory, v.47, 569-584, Feb.2001.
- [23] M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi, and D.A. Spielman, “Improved Low-Density Parity-Check Codes Using Irregular Graphs.” IEEE Trans. Inform. Theory, v.47, 585-598, Feb. 2001.
- [24] F.J. MacWilliams and N.J.A. Sloane, The Theory of Error-Correcting Codes, Amsterdam and New York, North-Holland Publishing Co., 1977.
- [25] Y. Polyanskiy, H.V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inform. Theory, v.56, 2307-2359, May 2010.
- [26] K.R. Rad, “Sharp upper bound on error probability of exact sparsity recovery,” Proc. Conference Inform. Sciences Systems, 2009.
- [27] I.S. Reed and G. Solomon, “Polynomial codes over certain finite fields.” J. SIAM, v.8, 300-304, June 1960.
- [28] B. Rimoldi and R. Urbanke, “A rate-splitting approach to the Gaussian multiple-access channel capacity.” IEEE Trans. Inform. Theory, v.47, 364-375, Mar. 2001.
- [29] C.E. Shannon, “A mathematical theory of communication.” Bell Syst. Tech. J., v.27, 379-423 and 623-656, 1948.
- [30] J. Tropp, “Just relax: convex programming methods for identifying sparse signals in noise,” IEEE Trans. Inform. Theory, v.52, 1030-1051, Mar. 2006.
- [31] M.J. Wainwright, “Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso),” IEEE Trans. Inform. Theory, v.55, 2183-2202, May 2009.
- [32] M.J. Wainwright, “Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting,” IEEE Trans. Inform. Theory, v.55, 5728-5741, Dec. 2009.