On Models for Multi-User Gaussian Channels with Fading

Abstract

An analytically tractable model for Gaussian multiuser channels with fading is studied, and the capacity region of this model is found to be a good approximation of the capacity region of the original Gaussian network. This work extends the existing body of work on deterministic models for Gaussian multiuser channels to include the physical phenomenon of fading. In particular, it generalizes these results to a unicast, multiple node network setting with fading.

1 Introduction

As capacity results for Gaussian multiuser networks are in general difficult to obtain, meaningful models that capture the capacity trends of these networks are very useful. Recently, seminal work in this domain by Avestimehr, Diggavi and Tse [1, 2, 9] has resulted in deterministic models which are easier to analyze than the original Gaussian network and can be shown through examples to approximate the actual capacity of the channel fairly well. A bound on the difference between the capacity of the deterministic model and that of the general Gaussian unicast network has also been found [3]. The core idea is the representation of the channel in terms of a deterministic input-output alphabet relationship that reflects the signal-to-noise ratio (SNR) at each node in the network.

The goal of this paper is to introduce fading into this modeling framework. In general, fading, modeled in its simplest form as a multiplicative channel state, adds an additional dimension of complexity to a capacity problem. There are Gaussian channels whose capacity is known without fading but unknown with fading (an example is the fading broadcast channel where the transmitter does not know the state). Thus, analytically tractable models that can, with a fair degree of accuracy, capture fading in Gaussian channels can prove very useful in capacity characterizations for Gaussian networks with fading. This paper assumes that, in each case, only the receiver(s) know the fading state and the transmitter(s) do not.

We introduce the term “quasi-deterministic network” in this paper to describe, most generally, any network which is deterministic given some random state variable that is independent of all inputs. In this paper, the state is drawn independently over each timestep. The network models studied in the papers [3], [6], and [8] are all examples of quasi-deterministic networks.

This paper has a straightforward progression. The next section describes the quasi-deterministic model presented in this paper using the point-to-point channel, and summarizes the main results obtained for different multiuser channels. In Section 3, a closed-form expression for the capacity region of the multiple access channel (MAC) is derived, and the model is compared to the Gaussian case. Section 4 illustrates the case of the semi-deterministic broadcast channel. Section 5 demonstrates that the cut-set bound on the capacity of a unicast network of such fading channels, and in fact of any quasi-deterministic network, is achievable when the fading state is available to the final destination.

2 Model for Fading Gaussian Channels

2.1 Notation

For a vector x of length n, denote by x(i) the i-th most significant bit, i.e. x(1) is the most significant bit and x(n) is the least significant. Also, log denotes logarithm base 2. For addition, “⊕” is the bit-level by bit-level finite-field summation of two vectors, whereas “+” is the algebraic addition of two signals. For a matrix, “rank” is the rank, i.e. the number of linearly independent rows (or columns).

2.2 Model

The simplified model that we introduce for fading Gaussian channels is based on the work on deterministic modeling of Gaussian channels introduced in [1], and is similar to the model presented in [10]. For motivation, and to capture the spirit of the modeling assumptions, we briefly describe the translation of the point-to-point fading Gaussian channel to our quasi-deterministic model.

In [1], the case of a real AWGN channel with unit noise power and average power constraint P, i.e. y = x + z with E[x²] ≤ P and z ~ N(0, 1), is considered. The capacity of such a channel, C = (1/2) log(1 + SNR), can be approximated as C ≈ (1/2) log SNR for high SNR. Thus, the paper intuitively models a point-to-point Gaussian channel as a pipe which truncates the transmitted signal and only passes the bits which are above the noise level. The point-to-point Gaussian channel has thus been modeled as a bit pipe which transmits the n most significant bits of the input, where n = ⌈(1/2) log SNR⌉ for real signals.

This paper takes a similar approach to modeling fading channels. As in [1], the input to our point-to-point channel model, x, will consist of a vector of fixed length n bits. The output of the channel at time t will consist of a vector of length N[t] bits. The effect of receiver fading is modeled as the random variation in N[t] over time, which is denoted by the random variable N. The number of (most significant) bits received (which is a realization of N) is determined by the fading and is independent of the input and known only at the receiver. The number of received bits is a random variable N which takes on integer values i ∈ {0, 1, …, n}: say Pr(N = i) = p_i.

Figure 1: Model for the point-to-point channel

That is, if N = m, then y = (x(1), x(2), …, x(m)), where m is the realization of the fading random variable N.

The capacity of this model for the fading point-to-point channel is therefore

C = max_{p(x)} I(x; y, N) = max_{p(x)} [I(x; N) + I(x; y | N)].

Since x and N are independent, the first term is zero; a uniform binary input for x maximizes the second term as E[N], that is, the average number of bits seen by the receiving node. In fact,

I(x; y | N) = H(y | N) − H(y | x, N) = H(y | N)   (1)
≤ Σ_{i=0}^{n} p_i · i = E[N],   (2)

where (1) comes from the fact that y is a deterministic function of x and N, and (2) holds with equality for uniform x. Intuitively, this result corresponds to that of the fading Gaussian point-to-point channel, with capacity E[(1/2) log(1 + |h|² SNR)]. Figure 1 illustrates the point-to-point model. An n-bit vector is truncated into an N-bit vector depending on the realization of the fading random variable N. The main difference between the model and the fading Gaussian channel is that N has integer realizations while (1/2) log(1 + |h|² SNR) has, in general, real-valued realizations. Thus, some difference or “loss” corresponds to the integer truncation of each rate term. Therefore, for high SNR, we can write:

| E[N] − E[(1/2) log(1 + |h|² SNR)] | ≤ 1.   (3)
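The gap in (3) can be checked numerically. The sketch below is a Monte Carlo comparison under assumed Rayleigh fading and an assumed mapping from the Gaussian rate to the integer bit-level count N; the paper fixes only the support of N, so both the fading law and the mapping N = ⌈(1/2) log2(1 + |h|² SNR)⌉ are illustrative choices here, as is the function name:

```python
import math
import random

def simulate_point_to_point(n=10, snr_db=30.0, trials=200_000, seed=1):
    """Monte Carlo sketch: compare E[N] for the truncation model with the
    ergodic capacity E[(1/2) log2(1 + |h|^2 SNR)] of the fading Gaussian
    channel. N = min(n, ceil((1/2) log2(1 + |h|^2 SNR))) is an illustrative
    mapping, not taken from the paper."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    e_n = 0.0      # E[N]: capacity of the truncation model
    c_gauss = 0.0  # ergodic Gaussian capacity
    for _ in range(trials):
        h2 = rng.expovariate(1.0)            # |h|^2 under Rayleigh fading
        c = 0.5 * math.log2(1 + h2 * snr)    # per-realization Gaussian rate
        e_n += min(n, max(0, math.ceil(c)))  # integer bit-level count N
        c_gauss += c
    return e_n / trials, c_gauss / trials

model, gauss = simulate_point_to_point()
print(f"E[N] = {model:.2f} bits, Gaussian = {gauss:.2f} bits")
```

Under this mapping the ceiling can only add rate, so the simulated gap falls in [0, 1), consistent with (3).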

3 Multiple Access Channel

In the two-user Gaussian fading MAC, the received signal is given by

y = h1·x1 + h2·x2 + z,

where h1 and h2 are the fading channel gains. We assume n1 ≥ n2 without loss of generality. For the model depicted in Figure 2a, we define the number of bit-levels randomly received from user i at the receiver by N_i. The receiver knows both fading states (N1, N2). The two inputs to this MAC are the length-n1 and length-n2 vectors x1 and x2, while the single output y is a bit-level by bit-level finite-field summation of x1 and x2, appropriately shifted by the fading levels N1 and N2. Specifically, denoting by x_i(k) the k-th most significant bit in the bit-level expansion of the vector x_i, we can write y as

y(k) = x1(k − M + N1) ⊕ x2(k − M + N2),   (4)

where M = max(N1, N2), k = 1, …, M, and we set x_i(j) = 0 for j ≤ 0.
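As a concrete illustration, the following sketch implements one plausible reading of (4): the top N_i bits of each user survive, the survivors are aligned at the least-significant (noise) level, and coinciding levels are added over GF(2). The function name and the alignment convention are ours, not the paper's:

```python
def mac_output(x1, x2, N1, N2):
    """Sketch of the MAC output: the receiver sees the top N_i bits of each
    user, aligned at the noise level (least-significant end), added bit by
    bit over GF(2). x1, x2 are lists of bits, most significant first."""
    r1 = x1[:N1]                 # top N1 bits of user 1 survive the fade
    r2 = x2[:N2]                 # top N2 bits of user 2 survive the fade
    m = max(N1, N2)
    r1 = [0] * (m - N1) + r1     # pad at the top so both align at the bottom
    r2 = [0] * (m - N2) + r2
    return [a ^ b for a, b in zip(r1, r2)]

# user 1 fades to 3 levels, user 2 to 2; output has max(N1, N2) = 3 levels
y = mac_output([1, 0, 1, 1], [1, 1, 0], N1=3, N2=2)
```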

The capacity region of the MAC channel is therefore given by

R1 ≤ E[N1],   (5)
R2 ≤ E[N2],   (6)
R1 + R2 ≤ E[max(N1, N2)].   (7)

Figure 2b illustrates the capacity region of this model and compares it to a simulated Gaussian example.

(a) Model for MAC
(b) Difference between model and Gaussian
Figure 2: (a) Model for MAC. (b) Comparison with the Gaussian MAC capacity.

The achieved capacity is at most 1.5 bits away from that of the Gaussian MAC with fading. In fact,

|C_model − C_Gaussian| ≤ 1.5 bits for each rate constraint.   (8)

The model hence gives a good approximation of the Gaussian MAC channel under the presented fading model: by (8), the capacity of this model lies within 1.5 bits of the Gaussian MAC capacity.
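The region (5)-(7) is straightforward to evaluate numerically from the two fading laws. A small sketch (the function name is illustrative) computing the three rate constraints for independent N1 and N2:

```python
from itertools import product

def mac_region(p1, p2):
    """Evaluate the constraints R1 <= E[N1], R2 <= E[N2],
    R1 + R2 <= E[max(N1, N2)] of (5)-(7). p1 and p2 are pmfs over the
    number of received bit-levels, p1[k] = Pr(N1 = k), with N1 and N2
    independent."""
    e1 = sum(k * p for k, p in enumerate(p1))
    e2 = sum(k * p for k, p in enumerate(p2))
    e_max = sum(pa * pb * max(a, b)
                for (a, pa), (b, pb) in product(enumerate(p1), enumerate(p2)))
    return e1, e2, e_max

# toy fading laws: N1 uniform on {0,...,3}, N2 uniform on {0,1,2}
e1, e2, es = mac_region([0.25] * 4, [1 / 3] * 3)
```

For these toy pmfs the three constraints evaluate to E[N1] = 1.5, E[N2] = 1.0, and E[max(N1, N2)] = 11/6, so the sum-rate bound is strictly tighter than E[N1] + E[N2].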

4 Broadcast Channel and the Capacity of the Semi-Deterministic Broadcast Case

Since the capacity of the fading Gaussian broadcast channel is still unknown, a corresponding simplified channel model can serve two purposes. First, it may help us benchmark the performance of practical wireless communication systems with fading. Second, it may suggest achievable schemes for the original Gaussian fading broadcast channel.

The input for the fading broadcast channel model will consist of a vector x of a fixed number n of bits. Receiver 1 sees the N1 most significant bits of the input, while Receiver 2 sees the N2 most significant bits. The values N1 and N2 are realizations of the independent random variables N1 and N2 and are known to their respective receivers only.

In [10], Yates et al. find an achievable region for the fading broadcast channel that lies within a constant gap (in bits/s/Hz) of the capacity region.

We now turn our attention to the semi-deterministic case, where we determine capacity in the hope of finding better achievable schemes to approximate the capacity of the one-sided fading Gaussian broadcast channel. Note that a single-letter characterization for semi-deterministic channels is known, but here we use the Körner-Marton outer bound as our starting point for the analysis (which is tight on the capacity region of semi-deterministic channels). The motivation for this is to shed light on the choice of auxiliary random variable, which motivates one particular coding scheme that achieves capacity.

The semi-deterministic broadcast model studied here can be summarized by the expressions

y1 = (x(1), …, x(n1)),   y2 = (x(1), …, x(N2)),   (9)

with input x of length n, where n1 is a constant with n1 < n, and N2 is a random variable with Pr(N2 = i) = p_i, i ∈ {0, 1, …, n}.

For the channel model described in Equation (9), we first show that the Körner-Marton outer bound [7] (equivalently, the semi-deterministic capacity region) for this broadcast channel is easy to evaluate, and then show that it is achievable using superposition coding. Note that, for a general semi-deterministic channel, superposition coding is not sufficient to achieve capacity.

4.1 Converse

Note that the boundary defined by the following optimization problem

max_{p(u,x)} [λ H(y1 | u) + (1 − λ) I(u; y2)]   (10)

for all λ ∈ [0, 1] is an outer bound on the Körner-Marton region [7], and thus we focus on this optimization problem instead.

Because the receiver has access to the channel state,

I(u; y2) = I(u; y2 | N2) = Σ_{i=0}^{n} p_i I(u; x(1), …, x(i)),

as x and N2 are independent. Thus, the optimization problem in (10) translates into

max_{p(u,x)} [λ H(y1 | u) + (1 − λ) Σ_{i=0}^{n} p_i I(u; x(1), …, x(i))],   (11)

where I(u; x(1), …, x(i)) is non-decreasing in i.

It is clear that choosing the input bits x(i) independent and uniform maximizes the objective in (11). In addition, u must include the following two components: a number of the most significant bits of the input, and the last n − n1 bits of the input (which are never received by Receiver 1). This assignment is illustrated in Figure 3.

Figure 3: Relationship of the auxiliary random variables to the input x

4.2 Achievability and Discussion

The converse helps determine what form the auxiliary random variables u1 and u2 should take in the achievability argument. We have Marton's Inner Bound [7]:

R1 ≤ I(u1; y1),
R2 ≤ I(u2; y2),
R1 + R2 ≤ I(u1; y1) + I(u2; y2) − I(u1; u2),

for some p(u1, u2) with x a function of (u1, u2).

Choose any integer k such that 0 ≤ k ≤ n1, and let u1 and u2 be uniform binary random vectors of lengths n1 − k and k + (n − n1), respectively. The length-n (uniformly distributed) binary vector x is formed by concatenating the first k bits of u2 (in the most significant positions of x), then the n1 − k bits of u1, and finally the remaining n − n1 bits of u2 in the least significant positions.

From this choice of auxiliary random variables, it is clear that I(u1; u2) is zero, I(u1; y1) is n1 − k, and I(u2; y2) is the expected number of bits of u2 seen by Receiver 2.

Intuitively, this strategy has a straightforward implication. Since the lowest n − n1 bit-levels are never received by Receiver 1, they should always be assigned to Receiver 2. If the user desires to dedicate more bits to Receiver 2, it is immaterial to Receiver 1 which bits are chosen, since each contributes an equal amount of rate. However, to maximize the amount of data that can be transmitted to the second receiver, the user should first assign the bits which are most likely to be received (specifically, the most significant bits) to Receiver 2 before any others. Also note that this achievability argument can easily be generalized to the two-sided fading broadcast channel. In fact, this coding scheme and these observations were also made by Yates and Tse in [10].
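The bit assignment described above can be sketched in a few lines. `build_input` and `receiver_view` are hypothetical helper names; the layout (k most significant bits of u2, then u1, then the n − n1 lowest bits of u2) follows the construction in Section 4.2:

```python
def build_input(u1, u2, k, n, n1):
    """Sketch of the superposition assignment: x is formed by the first k
    bits of u2 (most significant), then the n1 - k bits of u1, then the
    remaining n - n1 bits of u2 (least significant, never seen by
    Receiver 1). Lengths: len(u1) = n1 - k, len(u2) = k + (n - n1)."""
    assert len(u1) == n1 - k and len(u2) == k + (n - n1)
    return u2[:k] + u1 + u2[k:]

def receiver_view(x, m):
    """A receiver with fading realization m sees the m most significant bits."""
    return x[:m]

x = build_input(u1=[1, 0], u2=[1, 1, 0, 0], k=1, n=6, n1=3)
# Receiver 1 (always m = n1 = 3) recovers all of u1 plus the top k bits of u2.
```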

5 General Unicast Network

We consider a general unicast network of fading channels, with each channel modeled as in Section 2 and having broadcast and multiple-access properties. The network is a directed graph G = (V, E), where each node i has some power and therefore can transmit the symbol x_i = (x_i(1), …, x_i(n_i)), i.e. each symbol has n_i bit-levels. Note that x_i(1) is the most significant bit. In this scheme, symbol fading or fast fading is assumed, and all the fading states are known to the ultimate destination and to the respective receivers in each transmission. This network is actually a particular case of a quasi-deterministic network, which we define next.

5.1 Quasi-Deterministic Networks

A quasi-deterministic network is a general network in which the channel model with input x, output y and state s is given by y = f(x, s), where f is a deterministic function and s is independent of x. The fading state s is a random variable which is drawn independently for each timeslot in this work, i.e. fast fading is assumed.

5.2 Network Model

The network model studied in this paper is the linear finite-field deterministic model presented in [1], augmented with fading as explained in Section 2. This network is a particular case of the quasi-deterministic network. Here, G = (V, E) is a directed acyclic graph. Every node has a number of bit-levels, and each bit-level receives the finite-field sum of the bits arriving at that level. In other terms, the signal received at a node j, similarly to the signal in Section 3, is given by

y_j[t] = ⊕_{i ∈ I(j)} (x_i(1)[t], …, x_i(N_{i,j}[t])[t]),

where I(j) is the set of nodes with edges incident on node j, N_{i,j}[t] is the fading realization for edge (i, j) at time t, and the summation is of the type “⊕”, with the surviving bits of each input aligned at the noise level.
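A sketch of this reception rule, using the same align-at-the-noise-level convention as the MAC model in Section 3 (that convention, and the function name, are assumptions on our part):

```python
def received_signal(inputs, fades):
    """Sketch of the signal at node j: each incoming symbol (a list of bits,
    most significant first) is truncated to its top N_{i,j} bits, the
    survivors are aligned at the least-significant (noise) level, and
    coinciding levels are summed over GF(2)."""
    m = max(fades.values(), default=0)   # number of received bit-levels
    y = [0] * m
    for i, x in inputs.items():
        n_i = fades[i]
        for level, bit in enumerate(x[:n_i]):   # bits above the noise level
            y[(m - n_i) + level] ^= bit         # align at the bottom, XOR
    return y

# node j hears nodes 1 and 2 with fading realizations N_{1,j}=2, N_{2,j}=3
y = received_signal({1: [1, 0, 1], 2: [0, 1, 1, 0]}, {1: 2, 2: 3})
```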

It is useful to note a difference between the model presented here and the model given by Avestimehr et al. in [4], where the channel gains are also chosen from a finite set for each link; there, however, the fading state distribution is unknown at the sender. In this paper, we assume that the distribution of the fading state is known at the sender, and therefore we can achieve a rate better than the worst-case value of the cut-set bound in [4]. In fact, it turns out that the average value of the cut-set bound is achievable.

5.3 Upper Bound

Let V be the set of vertices of G, and S a random vector of size |E|, where |E| is the number of edges of G. S is the collection of all the state random variables in the network for a particular timeslot. For a quasi-deterministic network, S can be thought of as the state of the network at each time instant. The set of all cuts of the network is denoted by Λ. For the special case of this fading network, we define, similarly to [1], G_Ω to be the random total transfer matrix associated with a cut Ω, i.e. the relationship between the concatenated signal x_Ω sent by the nodes on the left side of the cut and the resulting signal y_{Ω^c} received by the nodes on the right side of the cut is y_{Ω^c} = G_Ω x_Ω.

The randomness of the matrix G_Ω is a result of the randomness of the random vector S, for a fixed cut Ω. Now, using the cut-set upper bound for a general network, we can write, by [5] and [2],

C ≤ max_{p(x_V)} min_{Ω ∈ Λ} I(x_Ω; y_{Ω^c}, S | x_{Ω^c}).   (12)

In fact, for the particular fading model studied in this paper (the model in 5.2),

C ≤ max_{p(x_V)} min_{Ω ∈ Λ} H(y_{Ω^c} | x_{Ω^c}, S)   (13)
= min_{Ω ∈ Λ} E_S[rank G_Ω],   (14)

where (13) is the cut-set upper bound for the general quasi-deterministic network and (14) is its particular value for our fading network model, G_Ω being the transfer matrix for a certain cut Ω.
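Bound (14) can be evaluated for a toy network by computing ranks over GF(2) and averaging over the state. Everything below (the cut labels, the transfer function, the pmf) is illustrative scaffolding, not the paper's notation:

```python
def gf2_rank(rows):
    """Rank of a binary matrix over GF(2); each row is an int bitmask.
    Standard XOR-basis construction: reduce each row by the current basis
    and keep it if anything nonzero survives."""
    pivots = []
    for row in rows:
        cur = row
        for p in pivots:
            cur = min(cur, cur ^ p)  # cancel the leading bit when possible
        if cur:
            pivots.append(cur)
    return len(pivots)

def expected_min_cut(cuts, state_pmf, transfer):
    """Sketch of bound (14): min over cuts of E_S[rank G_Omega(S)].
    transfer(cut, state) returns the rows (int bitmasks) of the transfer
    matrix for that cut and state realization."""
    return min(
        sum(p * gf2_rank(transfer(c, s)) for s, p in state_pmf.items())
        for c in cuts
    )

# toy example: a single cut whose transfer matrix loses a row when s = 0
pmf = {0: 0.5, 1: 0.5}
def transfer(cut, s):
    return [0b10, 0b01] if s else [0b10]
bound = expected_min_cut(["source-side"], pmf, transfer)
```

Here the cut passes 2 bit-levels half the time and 1 bit-level otherwise, so the average cut value is 1.5 bits per use.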

6 Achievability in Quasi-deterministic Unicast Networks with Random Coding

The goal now is to show that, using random coding, we can achieve rates arbitrarily close to the upper bound specified in Section 5.3 for the network model in Section 5.2. Moreover, the bound given by Theorem 6.1 is achievable for quasi-deterministic networks.

Theorem 6.1

Given a quasi-deterministic unicast network with the model specified in Section 5, the rate given by

R = max_{∏_i p(x_i)} min_{Ω ∈ Λ} H(y_{Ω^c} | x_{Ω^c}, S)   (15)

is achievable, and is equivalent to the upper bound given by (12) for the fading network, i.e. for the fading network model defined in 5.2, (12) and (14) are equivalent. Here, Ω is a cut, and Λ is the set of all cuts.

To prove this, we need to prove that the upper bound in Section 5.3 is achievable. We will proceed along similar lines to the proofs in [8] and [2] and use random coding arguments to get the result.

Let the message set be of size 2^{TR}, where R is the desired rate, k the number of blocks to send, and T the block size. If L is the length of the longest path in the network, the transmission will take place in k + L timeslots, achieving a rate of kR/(k + L), which approaches R as k gets large.
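The rate loss from this pipelining overhead is easy to quantify: k blocks pushed through a longest path of L hops occupy k + L timeslots. A quick sketch (the function name is ours):

```python
def effective_rate(R, k, L):
    """k blocks pipelined through a network whose longest path has L hops
    occupy k + L timeslots, so the end-to-end rate is k*R/(k + L)."""
    return k * R / (k + L)

# the effective rate climbs toward R = 2.0 as the number of blocks k grows
rates = [effective_rate(R=2.0, k=k, L=4) for k in (1, 10, 100, 1000)]
```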

6.1 Encoding and Decoding

As in [8], each node generates random codebooks, where each codeword is n_i T bits long, n_i being the number of bit-levels at node i, and all codewords are generated i.i.d. with the distribution of X, where X is a Bernoulli(1/2) random vector of size n_i. The final destination knows all codebooks and all the states of the network during transmission time. Denote by x_j(b) the transmitted signal of node j during the transmission of block b: x_j(b) = f_{j,b}(y_j(b − 1)), where y_j(b − 1) is the block received on j's incoming edges during the transmission time of block b − 1, and f_{j,b} is the random encoding function chosen at each block period for every outgoing edge of node j.

To decode the message, the destination node deterministically simulates all the messages, knowing all the fading states and all the codebooks used during transmission time. If the output observed when simulating exactly one message w is identical to the actual received signal, then w was transmitted; otherwise an error is declared. Thus, an error occurs if the fading pattern is not typical or if two codewords produce the same output at the destination node, which we detail next.
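The decoder can be sketched as an exhaustive simulation over candidate messages. `simulate` below is a stand-in for re-running the whole network under the known fading states; declaring an error when the match is not unique mirrors the description above:

```python
def decode_by_simulation(observed, codebook, states, simulate):
    """Sketch of the decoder of Section 6.1: knowing every codebook and
    every fading state, the destination re-runs the network for each
    candidate message and keeps the one whose simulated output matches
    what it actually received. Returns None on a decoding error."""
    matches = [w for w, cw in codebook.items()
               if simulate(cw, states) == observed]
    return matches[0] if len(matches) == 1 else None

# toy stand-in channel: the state keeps the top s bits of the codeword
sim = lambda cw, s: cw[:s]
book = {0: [0, 0, 1], 1: [1, 0, 1], 2: [1, 1, 0]}
w_hat = decode_by_simulation([1, 0], book, 2, sim)
```

With the toy state s = 2, only message 1 reproduces the observed [1, 0]; with s = 1, messages 1 and 2 collide on [1] and the decoder declares an error, which is exactly the two-codewords-same-output event analyzed next.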

6.2 Probability of Error Calculation

An error occurs at the destination node if the fading is not typical, the probability of which can be made small when a large enough block length is chosen. Let us turn our attention to the error event where two codewords produce the same output, which is more involved. Suppose that codeword w is transmitted. Define A_{w,w′} to be the event that codewords w and w′ produce the same output after the simulation of the network by the destination node. Then the error event associated with transmitting w is

E_w = ∪_{w′ ≠ w} A_{w,w′}.

Let Ω and Ω^c denote the nodes on the source and the destination side, respectively. As in [8], define, for a cut Ω, B_Ω(b) as the event that, after block b is simulated, the inputs to all the nodes in Ω^c are identical under w and w′ while at least two of the inputs of the nodes in Ω are different. So, if w and w′ produce the same inputs at the destination node, one of the events B_Ω(b) has occurred. So we can write A_{w,w′} as

A_{w,w′} ⊆ ∪_{(Ω_1, …, Ω_k)} ∩_b B_{Ω_b}(b),

where (Ω_1, …, Ω_k) is a sequence of cuts corresponding to the transmission times of the blocks, and Λ^k is the set of all sequences of cuts. To calculate P(A_{w,w′}), we will use the union bound over all sequences of cuts.

Note in this case that the event B_{Ω_b}(b) depends only on the event B_{Ω_{b−1}}(b − 1), since random coding is performed independently on each outgoing edge and for each block. We assume the final destination knows all the fading realizations in the k + L timeslots. Using the worst-case cut sequence and the union bound over all possible sequences of cuts, and denoting by |Λ^k| the total number of sequences of cuts (which is finite), we can then write

P(A_{w,w′}) ≤ Σ_{(Ω_1,…,Ω_k) ∈ Λ^k} P(∩_b B_{Ω_b}(b))   (16)
≤ |Λ^k| · max_{(Ω_1,…,Ω_k)} ∏_b P(B_{Ω_b}(b) | B_{Ω_{b−1}}(b − 1)).   (17)
Now, using a lemma and its proof from [2], we have that, for any cut Ω,

P(B_Ω(b) | B_{Ω′}(b − 1)) ≤ 2^{−T H(y_{Ω^c} | x_{Ω^c}, S)}.

P(A_{w,w′}) can now be upper bounded by

P(A_{w,w′}) ≤ |Λ^k| · 2^{−kT min_{Ω ∈ Λ} H(y_{Ω^c} | x_{Ω^c}, S)}.

Using the union bound for the probability of error, we get

P(E_w) ≤ 2^{kTR} · |Λ^k| · 2^{−kT min_{Ω ∈ Λ} H(y_{Ω^c} | x_{Ω^c}, S)} → 0,

where the last inequality is obtained for k large enough. Hence, P(E_w) → 0 for R < min_{Ω ∈ Λ} H(y_{Ω^c} | x_{Ω^c}, S), and the rate in (12) is achievable. In the particular case of the fading network, H(y_{Ω^c} | x_{Ω^c}, S) evaluates to E_S[rank G_Ω], as mentioned in [1], and hence the result in (14).

7 Conclusion

In this paper, an equivalent quasi-deterministic model of the Gaussian channel was presented, along with comparisons to the original Gaussian channel in the fading point-to-point, MAC and semi-deterministic broadcast cases. For the general unicast network, it was proven that the min-cut is achievable for the quasi-deterministic network model using random coding. Combining our result with the result of [3] shows that we can find the capacity of the corresponding Gaussian network to within a constant gap independent of the channel parameters, similarly to [4].

Footnotes

  1. This work was supported by a grant from the Army Research Office and by a National Science Foundation CAREER award.

References

  1. A. S. Avestimehr, S. Diggavi, and D. Tse, “A deterministic approach to wireless relay networks,” in Proc. Allerton Conference on Commun. Control and Computing, 2007.
  2. ——, “Wireless network information flow,” in Proc. Allerton Conference on Commun. Control and Computing, 2007.
  3. ——, “Approximate capacity of Gaussian relay networks,” in Proc. IEEE International Symposium on Information Theory (ISIT), 2008.
  4. S. Avestimehr, S. Diggavi, and D. Tse, “Information flow over compound wireless relay networks,” in Int. Zurich Seminar on Communications, 2008.
  5. T. M. Cover and J. A. Thomas, Elements of Information Theory, ser. Wiley Series in Telecommunications. New York: John Wiley & Sons Inc., 1991.
  6. A. Dana, R. Gowaikar, R. Palanki, B. Hassibi, and M. Effros, “Capacity of wireless erasure networks,” IEEE Transactions on Information Theory, vol. IT-52, pp. 789–804, March 2006.
  7. K. Marton, “A coding theorem for the discrete memoryless broadcast channel,” IEEE Transactions on Information Theory, vol. IT-25, pp. 306–311, May 1979.
  8. B. Smith and S. Vishwanath, “Unicast transmission over multiple access erasure networks: Capacity and duality,” in Information Theory Workshop, 2007.
  9. D. Tse, “A deterministic model for wireless channels and its applications,” in Information Theory Workshop, Lake Tahoe, 2007.
  10. D. Tse, R. Yates, and Z. Li, “Fading broadcast channels with state information at the receivers,” in Proc. Allerton Conference on Commun. Control and Computing, 2008.