A New Random Coding Technique that Generalizes Superposition Coding and Binning


Abstract

Proving capacity for networks without feedback or cooperation usually involves two fundamental random coding techniques: superposition coding and binning. Although conceptually very different, these two techniques often achieve the same performance, suggesting an underlying similarity. In this correspondence we propose a new random coding technique that generalizes superposition coding and binning and provides new insight on the relationship between the two. With this new theoretical tool, we derive new achievable regions for three classical information-theoretic models, the multi-access channel, the broadcast channel, and the interference channel, and show that, unfortunately, it does not improve over the largest known achievable regions in these cases.

Random coding, superposition coding, binning, multi-access channel, broadcast channel, interference channel.

1 Introduction

Apart from a few notable exceptions [1], the capacity region of a multi-terminal network is established using random coding techniques such as rate splitting, time sharing, superposition coding, binning, Markov encoding, quantize-and-forward, and a few others. For networks with no feedback or cooperation, two random coding techniques are usually considered when proving capacity: superposition coding and binning. Superposition coding can intuitively be thought of as stacking codewords on top of each other [3] and is obtained by generating the codewords of the "top" codebook conditionally on the "base" codeword. A typical representation of this encoding technique is the one in Figure 1 [3], where the top codeword is superposed on the base codewords. A codeword in the base codebook is randomly selected from the typical set of the base distribution. For each base codeword, a top codebook is generated by selecting random elements from the conditional typical set given that base codeword. Superposition coding is often thought of as placing spheres, or clouds, in the typical set of the channel input: the base codewords are "cloud centers" while the top codewords are "satellite codewords". If the number of spheres is small enough (a low rate for the base codewords) and their size is small enough (a low rate for the top codewords), then codewords are sufficiently spaced apart to allow successful decoding.
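To make the construction concrete, the following sketch (Python; the block length, rates, binary alphabet, and conditional flipping probability are illustrative assumptions, not taken from the text) generates a base codebook and, for each base codeword, a conditionally drawn top codebook, mirroring the cloud-center/satellite picture described above.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                          # block length (illustrative)
R_base, R_top = 0.25, 0.25     # rates of base and top codebooks (illustrative)
M_base = int(2 ** (n * R_base))   # number of base codewords (cloud centers)
M_top = int(2 ** (n * R_top))     # satellites per cloud

# Base codebook: i.i.d. Bernoulli(1/2) codewords, one per base message.
base = rng.integers(0, 2, size=(M_base, n))

# Top codebooks: for every base codeword, draw the top codewords
# conditionally on it; here each symbol of the base codeword is
# flipped with probability p, so satellites cluster around their cloud center.
p = 0.1
top = np.empty((M_base, M_top, n), dtype=int)
for w0 in range(M_base):
    flips = rng.random((M_top, n)) < p
    top[w0] = base[w0] ^ flips

# To send the message pair (w0, w1) one would transmit top[w0, w1].
print(base.shape, top.shape)
```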

Binning¹ allows a transmitter to "pre-cancel" (portions of) the interference experienced at a receiver. A usual representation of binning [6] is the one in Figure 2: here one codeword is binned against another. The codebook of the latter is generated as in superposition coding, while the codewords of the binned codebook are selected from their typical set and placed in bins. Codewords in the same bin are associated with the same message, i.e., multiple codewords can be used to communicate the same message. A codeword in a bin is selected for transmission when it is jointly typical with the codeword it is binned against, even though the two were generated independently. It is possible to find a codeword that satisfies this condition if the size of each bin is sufficiently large. Binning is commonly interpreted as dividing the typical set into the partitions formed by the bins in which the codewords are placed. Encoding is successful when the size of the bins is sufficiently large (a large binning rate), while decoding is successful if the transmitted codewords are sufficiently far apart (a low message rate). In certain cases it is possible to simultaneously bin two codewords against each other: this coding technique is usually referred to as joint binning.
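The bin-and-search step can be sketched in the same spirit (all parameters are made up, and a crude empirical-agreement test stands in for a proper joint-typicality check): codewords are generated independently of a given sequence, placed in bins, and the encoder searches the bin of the desired message for a codeword whose joint statistics with that sequence match the target correlated distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000                 # block length (illustrative)
bins, bin_size = 4, 200  # number of messages and codewords per bin (illustrative)
u1 = rng.integers(0, 2, size=n)   # sequence already fixed (e.g. known interference)

# Codebook for the binned user: generated i.i.d., independently of u1,
# and split into bins; every codeword in a bin carries the same message.
codebook = rng.integers(0, 2, size=(bins, bin_size, n))

def looks_jointly_typical(a, b, target_agree=0.55, tol=0.02):
    """Crude stand-in for a joint-typicality test: require the empirical
    fraction of positions where a == b to be close to the target value."""
    return abs(np.mean(a == b) - target_agree) < tol

def encode(message):
    # Search the bin of `message` for a codeword that *appears* correlated
    # with u1 even though it was generated independently of it.
    for u2 in codebook[message]:
        if looks_jointly_typical(u1, u2):
            return u2
    return codebook[message][0]   # encoding failure: fall back to any codeword

x = encode(2)
print(np.mean(u1 == x))
```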

Figure 1: A graphical representation of superposition coding.
Figure 2: A graphical representation of joint binning.

Despite the differences between these two random coding techniques, in many cases they achieve identical performance [7]: this suggests an underlying similarity in the way the two techniques select the codewords to be transmitted. To gain a better understanding of the properties of these strategies, we develop a new random coding technique that encompasses superposition coding and binning as special cases. In this scheme, codewords are first superposed according to a certain distribution, the codebook distribution, and successively binned to appear as if generated according to a different distribution, the encoding distribution. Classical superposition coding corresponds to the case where the encoding distribution is the same as the codebook distribution, while binning is obtained when the codebook distribution has independent codewords. The strategies in between these two extremes have not previously been considered in the literature. We use this new random coding technique to derive achievable regions for the multi-access channel, the broadcast channel, and the interference channel. Unfortunately, these new achievable regions do not improve on the largest known achievable regions for the broadcast channel and the interference channel, but they show that these regions can be obtained with a wider set of encoding strategies than was previously known.

Paper Organization: Section 2 introduces the new random coding technique that generalizes superposition coding and binning. Section 3 presents new achievable regions for classical communication models. Section 4 concludes the paper.

2 Combining Superposition Coding and Binning

We introduce the new random coding technique that generalizes superposition coding and binning with a simple example. Consider a classical Broadcast Channel with a Common Message (BC-CM) in which two messages are encoded at the transmitter: a private message, decoded at receiver 2, and a common message, decoded at both decoder 1 and decoder 2. The channel outputs are obtained from the channel input through the channel transition probability.

Take any distribution for the codebook generation and any distribution for the encoding procedure, and let the messages to be transmitted be given.

Codebook Generation

1) Generate the codewords of the base codebook with i.i.d. draws from the corresponding marginal of the codebook distribution. Index these codewords by a message index and a bin index.

2) For each base codeword, generate the codewords of the top codebook with i.i.d. draws from the codebook distribution conditioned on that base codeword. Index these codewords by a message index and a bin index.

Encoding Procedure

For each pair of messages, choose the bin indices so that the selected base and top codewords are jointly typical according to the encoding distribution. If no such pair of indices exists, pick the two indices at random. Generate the channel input as a deterministic function of the two selected codewords (a toy sketch of this construction is given after the decoding procedure below).

Decoding Procedure

1) Decoder 1 looks for a set of indices such that the corresponding codewords are jointly typical with its channel output according to the encoding distribution.

2) Decoder 2 looks for a set of indices satisfying the analogous joint-typicality condition with its own channel output.
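The encoding side of this two-layer construction can be sketched as follows (Python; the block length, number of bins, and the two distributions are illustrative assumptions): the top codewords are superposed on the base codewords according to a codebook distribution, and the bin indices are then chosen so that the selected pair looks as if it had been drawn from a different, more correlated encoding distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500                       # block length (illustrative)
M1, B1 = 2, 8                 # messages / bins per message in the base layer
M2, B2 = 2, 8                 # messages / bins per message in the top layer

# --- Codebook generation (codebook distribution) ---------------------------
# Base codewords u1[w1, b1]: i.i.d. Bernoulli(1/2) sequences.
u1 = rng.integers(0, 2, size=(M1, B1, n))
# Top codewords u2[w1, b1, w2, b2]: superposed on the base codeword, i.e.
# generated conditionally on it (here: flip each base symbol w.p. 0.4).
flips = rng.random((M1, B1, M2, B2, n)) < 0.4
u2 = u1[:, :, None, None, :] ^ flips

# --- Encoding (encoding distribution) --------------------------------------
def agreement(a, b):
    return np.mean(a == b)

def encode(w1, w2, target=0.65, tol=0.03):
    """Choose bin indices (b1, b2) so that the selected pair of codewords has
    the empirical correlation demanded by the encoding distribution; this is
    the binning step layered on top of the superposition structure."""
    for b1 in range(B1):
        for b2 in range(B2):
            if abs(agreement(u1[w1, b1], u2[w1, b1, w2, b2]) - target) < tol:
                return b1, b2
    return 0, 0   # encoding error: no suitable pair was found

b1, b2 = encode(0, 1)
x = u1[0, b1] ^ u2[0, b1, 1, b2]   # channel input: a deterministic function of the pair
print(b1, b2, agreement(u1[0, b1], u2[0, b1, 1, b2]))
```

Setting the target agreement equal to the agreement induced by the codebook distribution would recover plain superposition coding (no search over bins is needed), while generating the top codewords independently of the base codewords would recover plain binning.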

To determine the performance of the achievable scheme above we need to bound the encoding and decoding error probabilities.

The key step in bounding the encoding error probability is bounding the probability that a random vector lying in the typical set of a certain distribution also belongs to the typical set of a different distribution. The probability of this event can be bounded using the results in [8].

The quantity
\[
H(P;Q) = -\sum_{x} P(x) \log Q(x) = H(P) + D(P \| Q)
\]
is referred to as the inaccuracy between the distributions \(P\) and \(Q\), and it can be used to derive a more general version of the covering lemma [4].
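A quick numerical check of this identity (the two distributions below are arbitrary illustrative choices):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(P) in bits."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl(p, q):
    """Kullback-Leibler divergence D(P||Q) in bits (assumes q > 0 where p > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def inaccuracy(p, q):
    """Inaccuracy H(P;Q) = -sum_x P(x) log Q(x) = H(P) + D(P||Q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return -np.sum(p[mask] * np.log2(q[mask]))

p = [0.5, 0.3, 0.2]   # e.g. the codebook distribution (illustrative)
q = [0.4, 0.4, 0.2]   # e.g. the encoding distribution (illustrative)
print(inaccuracy(p, q), entropy(p) + kl(p, q))   # the two values coincide
```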

With this generalized covering lemma we derive the achievable region of the proposed random coding strategy.

Superposition coding is obtained by having the distribution imposed at encoding equal the codebook distribution. In this case the binning rates can be set to zero, thus recovering the region achievable with superposition coding alone.

Joint binning corresponds to the case where the codebook distribution equals the product of the marginals of the encoding distribution, which results in the region achievable with joint binning alone.

The region with binning is obtained from the region with joint binning above by setting either one of the two binning rates to zero.

3 Achievable Regions for Classical Channels

We apply the new random coding technique of Section 2 to the Multi-Access Channel with a Common Message (MAC-CM) [10], the Broadcast Channel (BC) [3], and the InterFerence Channel (IFC) [14]. Capacity is known for the MAC-CM and for subsets of both the BC and the IFC. The largest known achievable region in each case can be obtained by employing a combination of rate splitting, superposition coding, and binning [7]. In the following we adopt the notation of [7] to describe the channel models and the distribution of messages and codewords. In particular, each codeword, with its associated rate, encodes the messages going from a given set of transmitters to a given set of receivers.

3.1 The Multi-Access Channel with Common Messages

In the classical MAC [10], two transmitters communicate a message each to a single decoder. In the MAC-CM an additional common message is transmitted by both sources to the decoder [16]. Let each codeword be associated with the message going from the corresponding transmitter(s) to the single receiver. Using the random coding technique of Section 2, we can superpose the two private codewords over the common codeword and successively bin them so that they appear as if generated according to the encoding distribution.

After the Fourier-Motzkin Elimination (FME) of the binning rates, we obtain the classical region of [16], whose union over all possible input distributions is indeed the capacity region. This shows that the capacity of the MAC-CM can be achieved with any distribution of the codewords in the codebook, as long as the codewords can be further binned to impose the distribution of the matching outer bound. The distance between the distribution of the codewords at generation and the distribution after encoding has no effect on the resulting achievable region.

3.2 The Broadcast Channel

In the BC [3] one encoder wants to communicate a message to each of two decoders. For this channel model, rate splitting can be applied so as to split each message into a private and a common part; the two common parts can then be embedded into a single common message. This transforms the problem of achieving a given rate pair into the problem of achieving the rate vector formed by the two private rates and the common rate, for any valid split of the original rates. The random coding technique of Section 2 can be applied to the BC after this rate splitting by superposing the two private codewords over the common codeword and successively jointly binning the two private codewords.
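As a small worked example of the rate-splitting step (all rate values are hypothetical):

```python
# Split each message into a private and a common part; the two common
# parts are then carried by a single common message of rate R0.
R1, R2 = 1.2, 0.8                    # target rates (illustrative)
R1p, R1c = 0.9, 0.3                  # any split with R1 = R1p + R1c
R2p, R2c = 0.5, 0.3                  # any split with R2 = R2p + R2c
R0 = R1c + R2c                       # rate of the merged common message

assert abs((R1p + R1c) - R1) < 1e-9 and abs((R2p + R2c) - R2) < 1e-9
print(R1p, R2p, R0)                  # the rate vector after rate splitting
```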

After the FME of the binning rates we obtain a region that is equivalent to Marton's region [17], which is the largest known achievable region for a general BC. As for the MAC-CM, this shows that the achievable region is not determined by the distribution of the codewords in the codebook but only by the distribution imposed at encoding.

3.3 The Interference Channel

The IFC is a four-terminal network in which two transmitter/receiver pairs each want to communicate a message over the channel. As for the BC, we can rate-split each message into a public and a private part: the private messages are decoded only at the intended receiver, while the public messages are decoded by both decoders. Rate splitting transforms the problem of achieving a given rate pair into the problem of achieving the rate vector formed by the private and public rates, for any valid split of the original rates. The new random coding technique of Section 2 can be applied to the IFC after rate splitting by superposing, at each transmitter, the private codeword onto the public codeword and jointly binning the resulting codewords. The same encoding procedure is applied at both transmitters.

From the FME of the binning rates one obtains that the largest achievable region is attained for a particular choice of the binning rates. With this choice the achievable region becomes equivalent to the Han and Kobayashi region [15], which is the largest known achievable region for a general IFC.

As for the MAC-CM and the BC, this result does not improve on the largest known region for the IFC, but it shows that a larger set of transmission strategies than superposition coding and binning alone can be used to achieve this region.

In [7] we have shown that superposition coding and binning can both be used to achieve the largest known inner bounds for the MAC, BC, and IFC. In the examples above we have shown that combining the two encoding strategies into a new, more general transmission strategy still achieves the same performance. With these considerations in mind, we can provide some insight into the error performance of these two coding techniques.

In both superposition coding and binning, one creates multiple codewords to transmit the same message. In superposition coding, the message encoded in the top codebook is associated with multiple codewords, one for each possible base codeword, while in binning, codewords in the same bin are associated with the same message. When a top codeword is superposed on a base codeword, the number of codewords used to encode the same top message equals the number of base codewords, i.e., it is exponential in the block length times the rate of the base codebook. In binning, the number of excess codewords depends on the joint distribution between the codewords imposed by the encoding procedure: when one codeword is binned against another, the smallest bin size for which encoding succeeds is exponential in the block length times the mutual information between the two codewords under the encoding distribution. In both cases the transmitted codewords belong to the typical set of the encoding distribution, but superposition coding usually requires a larger number of excess codewords than binning to achieve the desired typicality property, and this number is fixed by the base rate and does not depend on the encoding distribution. While binning is more advantageous than superposition coding at encoding, it performs worse at decoding. In superposition coding, after decoding the base codeword, the receiver looks for the transmitted top codeword only in the codebook associated with that base codeword. In binning, instead, after the other codeword has been correctly decoded, all codewords of the binned codebook remain possible: knowledge of the decoded codeword helps the decoder only in that the transmitted codeword must appear as if generated according to the encoding distribution, but it does not reduce the number of candidate codewords. Interestingly, the encoding and decoding benefits provided by superposition coding and binning seem to balance each other in the proposed random coding technique.
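A back-of-the-envelope comparison of these excess-codeword counts, using the standard exponents recalled above (one top codeword per base codeword for superposition coding, and a bin size roughly exponential in the mutual information under the encoding distribution for binning); the block length, rates, and joint distribution are illustrative:

```python
import numpy as np

n = 100                      # block length (illustrative)
R_base, R_top = 0.30, 0.40   # rates of base and top codebooks (illustrative)

# Superposition coding: each top message owns one codeword per base codeword,
# so the number of excess codewords per message is fixed at 2^{n R_base}.
excess_superposition = 2 ** (n * R_base)

# Binning: the smallest bin size that makes the encoding succeed is roughly
# 2^{n I(U1;U2)}, where I is computed under the *encoding* distribution.
def mutual_information(joint):
    joint = np.asarray(joint, float)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask]))

# Encoding distribution for a pair of binary codewords (illustrative):
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
excess_binning = 2 ** (n * mutual_information(joint))

print(f"superposition excess ~ 2^{n * R_base:.0f} = {excess_superposition:.3g}")
print(f"binning excess       ~ 2^{n * mutual_information(joint):.1f} = {excess_binning:.3g}")
```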

4 Conclusion

In this paper we present a new transmission strategy that encompasses superposition coding and binning. The error analysis of this new achievable scheme requires a more general version of the classical covering lemma, based on the inaccuracy between the codebook and the encoding distributions. With this new random coding technique we derive achievable regions for the multi-access channel, the broadcast channel, and the interference channel. These inner bounds do not improve on the largest known achievable regions, but they show that the same error performance can be achieved with a larger set of encoding strategies.

Acknowledgment

The author would like to thank Prof. Gerhard Kramer for suggesting the use of the inaccuracy to measure the distance between the codebook and the encoding distributions.

Footnotes

  1. sometimes referred to as Cover’s random binning [4] or Gel’fand-Pinsker coding [5].

References

  1. S. Sridharan, A. Jafarian, S. Vishwanath, and S. Jafar, “Capacity of symmetric K-user Gaussian very strong interference channels,” in Proc. IEEE Global Telecommunications Conference (GLOBECOM), 2008, pp. 1–5.
  2. E. Abbe and E. Telatar, “MAC polar codes and matroids,” in Proc. Information Theory and Applications Workshop (ITA), 2010, pp. 1–8.
  3. T. Cover, “Broadcast channels,” IEEE Trans. Inf. Theory, vol. 18, no. 1, pp. 2–14, 1972.
  4. A. El Gamal and Y.-H. Kim, “Lecture notes on network information theory,” preprint at arXiv:1001.3404, 2010.
  5. S. Gel’fand and M. Pinsker, “Coding for channel with random parameters,” Problems of control and information theory, vol. 9, no. 1, pp. 19–31, 1980.
  6. T. Cover and J. Thomas, Elements of Information Theory. Wiley, 1991.
  7. S. Rini, “An achievable region for a general multi-terminal network and the corresponding chain graph representation,” Arxiv preprint arXiv:1112.1497, 2011.
  8. I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Channels. Budapest: Akadémiai Kiadó, 1981.
  9. S. Rini, “An extension to the chain graph representation of an achievable scheme,” IEEE Trans. Inf. Theory, 2012, in preparation.
  10. R. Ahlswede, “Multi-way communication channels,” in Proc. 2nd IEEE International Symposium on Information Theory (ISIT), 1971, pp. 103–135.
  11. H. Liao, “Multiple access channels.” DTIC Document, Tech. Rep., 1972.
  12. P. Bergmans, “Random coding theorem for broadcast channels with degraded components,” IEEE Trans. Inf. Theory, vol. 19, no. 2, pp. 197–207, 1973.
  13. R. Gallager, “Capacity and coding for degraded broadcast channels,” Problems in the Transmission of Information, vol. 10, no. 3, pp. 3–14, 1974.
  14. R. Ahlswede, “The capacity region of a channel with two senders and two receivers,” The annals of probability, vol. 2, no. 5, pp. 805–814, 1974.
  15. T. Han and K. Kobayashi, “A New Achievable Rate Region for the Interference Channel,” IEEE Trans. Inf. Theory, vol. 27, no. 1, pp. 49–60, Jan 1981.
  16. D. Slepian and J. K. Wolf, “A coding theorem for multiple access channels with correlated sources,” Bell Syst. Tech. J., vol. 52, pp. 1037–1076, 1973.
  17. K. Marton, “A coding theorem for the discrete memoryless broadcast channel,” IEEE Trans. Inf. Theory, vol. 25, no. 3, pp. 306–311, 1979.