Interference Networks with Point-to-Point Codes

Abstract

The paper establishes the capacity region of the Gaussian interference channel with many transmitter-receiver pairs constrained to use point-to-point codes. The capacity region is shown to be strictly larger in general than the achievable rate regions obtained by treating interference as noise, by successive interference cancellation decoding, or by joint decoding. The gains in coverage and achievable rate using the optimal decoder are analyzed in terms of ensemble averages using stochastic geometry. In a spatial network where the nodes are distributed according to a Poisson point process and the channel path loss exponent is $\beta > 2$, it is shown that the density of users that can be supported by treating interference as noise can scale no faster than $B^{2/\beta}$ as the bandwidth $B$ grows, while the density of users can scale linearly with $B$ under optimal decoding.

Index Terms—Network information theory, interference, successive interference cancellation, joint decoding, stochastic geometry, coverage, ad hoc network, stochastic network, performance evaluation.

1 Introduction

Most wireless communication systems employ point-to-point codes with receivers that treat interference as noise (IAN). This architecture is also assumed in most wireless networking studies. While using point-to-point codes has several advantages, including leveraging many years of development of good codes and receiver design for the point-to-point AWGN channel and requiring no significant coordination between the transmitters, treating interference as noise is not necessarily the optimal decoding rule. Motivated by results in network information theory, recent wireless networking studies have considered point-to-point codes with successive interference cancellation decoding (SIC) (e.g., see [8]), where each receiver decodes and cancels the interfering codewords from other transmitters one at a time before decoding the codeword from its tagged transmitter, and joint decoding [2] (JD), where the receiver treats the network as a multiple access channel and decodes all the messages jointly.
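As a concrete numerical illustration of the difference between these rules (the powers below are hypothetical, not from the paper), the rates supported by IAN and SIC follow directly from the SINR expressions for a two-pair Gaussian channel:

```python
from math import log2

# Hypothetical received powers (Watts/Hz) at the tagged receiver:
Q = 4.0   # power from the tagged transmitter
I = 6.0   # power from the single interferer
N = 1.0   # background noise power

# Treating interference as noise (IAN): the interferer's power is
# simply added to the noise floor.
r_ian = log2(1 + Q / (N + I))

# Successive interference cancellation (SIC): first decode the
# interferer while treating the tagged signal as noise, then decode
# the tagged signal interference-free. The first step caps the rate
# at which the interferer can be transmitting.
r1_max = log2(1 + I / (N + Q))   # interferer decodable only below this rate
r_sic = log2(1 + Q / N)          # tagged rate after cancellation

print(f"IAN rate: {r_ian:.3f} bits/s/Hz")
print(f"SIC: tagged rate {r_sic:.3f}, requires interferer rate <= {r1_max:.3f}")
```

Note that SIC helps the tagged pair only when the interferer's actual rate is low enough for the first decoding step to succeed.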

In this paper, we ask a more fundamental question: given that transmitters use point-to-point codes, what is the performance achievable by the optimal decoding rule? The context we consider is a wireless network of multiple transmitter-receiver pairs, modeled as a Gaussian interference channel. The first result we establish in this direction is the capacity region of this channel when all the transmitters use Gaussian point-to-point codes. We show that none of the above decoding rules alone is optimal. Rather, a combination of treating interference as noise and joint decoding is shown to be capacity-achieving. Second, we show that this result can be extended to the case when the transmitters are only constrained to use codes that are capacity-achieving for the point-to-point and multiple access channels, but not necessarily Gaussian-like.

We then specialize the results to find a simple formula for computing the symmetric capacity for these codes. Assuming a wireless network model with users distributed according to a spatial Poisson process, we use simulations to study the gain in achievable symmetric rate and coverage when the receivers use the optimal decoding rule (OPT) for point-to-point Gaussian codes as compared to treating interference as noise, successive cancellation decoding, and joint decoding. We then use stochastic geometry techniques to study the performance in the wideband limit, where a high density of users share a very wide bandwidth. Under a channel model where the attenuation with distance $r$ is of the form $r^{-\beta}$ with $\beta > 2$, it is shown that the density of users that can be supported by treating interference as noise can scale no faster than $B^{2/\beta}$ as the bandwidth $B$ grows, while the density of users can scale linearly with $B$ under optimal decoding. For an attenuation of the form $(\max(r, r_0))^{-\beta}$, the density of users also scales linearly with $B$, but when the distance between the tagged transmitter and its receiver tends to infinity, the rate for OPT scales like the wideband capacity of a point-to-point Gaussian channel without interference.

2 Capacity Region with Gaussian Point-to-point Codes

Consider a Gaussian interference channel with $K$ transmitter-receiver pairs, where each transmitter wishes to send an independent message to its corresponding receiver at rate $R_k$ (in the unit of bits/s/Hz). The signal at receiver $j$ when the complex signals $X_1, \ldots, X_K$ are transmitted is

$$Y_j = \sum_{k=1}^{K} g_{jk} X_k + Z_j,$$

where $g_{jk}$ are the complex channel gains and $Z_j$ is a complex circularly symmetric Gaussian noise with an average power of $N$. We assume each transmitter is subject to the same power constraint $P$ (in the unit of Watts/Hz). Define the received power from transmitter $k$ at receiver $j$ as $P_{jk} = |g_{jk}|^2 P$. Without further constraints on the transmitters’ codes, the capacity region of this channel is not known even for the two transmitter-receiver pair case (see [6] for known results on this problem). In this section we establish the capacity region using Gaussian generated point-to-point codes for an arbitrary number of transmitter-receiver pairs.

We define a $(2^{nR}, n)$ Gaussian point-to-point (G-ptp) code to consist of a set of $2^{nR}$ randomly and independently generated codewords $x^n(m)$, $m \in [1 : 2^{nR}]$, each generated according to an i.i.d. $\mathcal{CN}(0, P - \epsilon)$ sequence, for some $\epsilon > 0$. We assume each transmitter in the Gaussian interference channel uses such a code, with each receiver assigning an estimate $\hat{m}_j$ of message $m_j$ to each received sequence $y_j^n$. We define the probability of error for a G-ptp code as

We denote the average of this probability of error over G-ptp codes as $\bar{P}_e^{(n)}$. A rate tuple is said to be achievable via a sequence of G-ptp codes if $\bar{P}_e^{(n)} \to 0$ as $n \to \infty$. The capacity region with G-ptp codes is the closure of the set of achievable rate tuples.

Remarks:

  1. Our definition of codes precludes the use of time sharing and power control (although in general one can use time sharing with ptp codes). The justification is that time sharing (and its special cases of time/frequency division) requires additional coordination.

  2. Note that if a rate tuple is achievable via a sequence of G-ptp codes then there exists a sequence of (deterministic) codes that achieves this rate tuple. We use the definition of achievability via the average probability of error over codes to simplify the proof of the converse. The results, however, can be shown to apply to sequences of G-ptp codes almost surely, and to an even more general class of (deterministic) codes in Section 3.

Let be a nonempty subset of and be its complement. Define to be the vector of transmitted signals such that , and define the sum . Similarly define , , and .

Consider a Gaussian multiple access channel (MAC) with transmitters , receiver , where , and additive Gaussian noise power . Recall that the capacity region of this MAC is

where $P(\mathcal{S}) = \sum_{k \in \mathcal{S}} P_k$ for any subset $\mathcal{S}$ of the transmitters. All logarithms are base 2 in this paper.

Now, define the rate regions

and

(1)

One of the main results in this paper is establishing the capacity region of the Gaussian interference channel with G-ptp codes.

Theorem 1

The capacity region of the Gaussian transmitter-receiver pair interference channel with G-ptp codes is .

By symmetry of the capacity expression, we only need to establish achievability and the converse for the rate region that ensures reliable decoding of transmitter 0’s message at its receiver. Hence from this point onward, we focus on receiver 0. We will refer to this receiver and its corresponding transmitter as tagged. We also refer to the other transmitters as interferers. We relabel the signal at the tagged receiver, its gains, and the additive noise as

We also relabel the received power from the tagged transmitter as $Q$ and the received power from interferer $k$, $k \ge 1$, as $I_k$ (for interference). For any subset of interferers $\mathcal{S}$, we denote by $I(\mathcal{S})$ the sum of the received powers from these interferers. We will also drop the receiver index from the notation.

For clarity of presentation, first consider the case of $K = 2$. Here the signal at the tagged receiver is

For this receiver, there are two subsets to consider, $\emptyset$ and $\{1\}$. The region corresponding to $\emptyset$ is the set of rate pairs such that

and the region corresponding to $\{1\}$ is the set of rate pairs such that

Hence, the region for the tagged receiver is the union of these two regions.

It is interesting to compare to the achievable rate regions for other schemes that use point-to-point codes. Define the rate regions:

The region is achieved by a receiver that decodes the tagged transmitter’s message while treating interference as Gaussian noise. The region is achieved by the successive interference cancellation receiver; the interferer’s message is first decoded, treating the tagged transmitter’s signal as Gaussian noise with power , and then the message from the tagged transmitter is decoded after canceling the interferer’s signal. The region is the two transmitter-receiver pair Gaussian MAC capacity region. It is the set of achievable rates when the receiver insists on correctly decoding both messages, which is the achievable region using joint decoding in Blomer and Jindal [2].
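To make the comparison between these regions concrete, the membership tests can be written out directly from the SINR constraints described above. The powers below are hypothetical and the helper names (`in_ian`, `in_mac`, `in_c0`) are ours:

```python
from math import log2

Q, I, N = 4.0, 6.0, 1.0   # hypothetical tagged power, interference, noise

def in_ian(r0, r1):
    # Decode the tagged message treating the interferer as noise.
    return r0 <= log2(1 + Q / (N + I))

def in_mac(r0, r1):
    # Two-user MAC region at the tagged receiver: both messages decoded.
    return (r0 <= log2(1 + Q / N) and
            r1 <= log2(1 + I / N) and
            r0 + r1 <= log2(1 + (Q + I) / N))

def in_c0(r0, r1):
    # Capacity region for the tagged receiver: union of the two regions.
    return in_ian(r0, r1) or in_mac(r0, r1)

# A pair decodable by treating interference as noise but not jointly:
print(in_ian(0.6, 3.4), in_mac(0.6, 3.4), in_c0(0.6, 3.4))  # True False True
# A pair decodable jointly but not via IAN:
print(in_ian(1.5, 1.0), in_mac(1.5, 1.0), in_c0(1.5, 1.0))  # False True True
```

The two printed cases show that neither IAN nor joint decoding alone covers the union.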

It is not difficult to see that the following relationships between the regions hold (see Figure 1):


Figure 1: is the shaded region in (i); the region is depicted in (ii), in (iii), and in (iv).

Note that the last relationship above says that the receiver can do no better than treating interference as Gaussian noise or jointly decoding the messages from the tagged transmitter and the interferer.

In the following, we first establish the capacity region for the case , and then extend the result to arbitrary . In Section 3, we also show that our results extend to the class of MAC capacity-achieving codes.

2.1 Proof of Theorem 1 for

Proof of achievability. To prove the achievability of any rate pair in the interior of , we use Gaussian ptp codes with average power and joint typicality decoding as in [4]. Further, we use simultaneous decoding [6] in which receiver declares that the message is sent if it is the unique message such that is jointly typical or is jointly typical for some . A straightforward analysis of the average probability of error shows that it tends to 0 as $n \to \infty$ if either

(2)

or

The first constraint (2) defines the IAN region. Denote the region defined by the second set of constraints by ; it is the same as the MAC region but with the constraint on the interferer’s rate alone removed. Hence, the resulting achievable rate region appears to be larger than . It is easy to see from Figure 1, however, that it actually coincides with . Hence, the tagged receiver can correctly decode its message when treating interference as noise fails but simultaneous decoding succeeds, even though correct decoding of the interferer’s message is not required. We establish the converse for the original characterization of , hence providing an alternative proof that the two regions coincide.

Remark: Although we presented the decoding rule as a two-step procedure, since the receiver knows the transmission rates, it already knows whether to apply IAN or simultaneous decoding.

Proof of the converse. To prove the converse, suppose we are given a sequence of random G-ptp codes and decoders with rate pair $(R_0, R_1)$ and such that the average probability of error approaches 0 as $n \to \infty$. We want to show that the rate pair lies in the claimed capacity region. Consider two cases:

  1. $R_1 \le \log(1 + I/N)$: Under this condition and by the assumption that the tagged receiver can reliably decode its message, the tagged receiver can cancel off the received signal from the tagged transmitter and then reliably decode the message from transmitter 1. Hence the rate pair is in the capacity region of the MAC formed by the two transmitters and the tagged receiver.

  2. $R_1 > \log(1 + I/N)$: Fix an $\epsilon > 0$, and let $Z = Z' + Z''$, where $Z'$ and $Z''$ are independent Gaussian noise components with variances $N - \delta$ and $\delta$, respectively, such that $\log(1 + I/(N - \delta)) = R_1 + \epsilon$.

    Consider the AWGN channel

    (3)

    Since we are assuming G-ptp codes and $R_1 < \log(1 + I/(N - \delta))$, the average probability of decoding error over this channel approaches zero as $n \to \infty$. Hence, by Fano’s inequality, the mutual information over a block of $n$ symbols, averaged over G-ptp codes, is

    where $\epsilon_n \to 0$ as $n \to \infty$. Denoting by $h$ the differential entropy of the output of (3), averaged over the G-ptp codes, this implies that

    Now, let . By the conditional entropy power inequality, we have

    Hence,

    The fact that and the last lower bound give an upper bound on the average mutual information for the tagged transmitter-receiver pair

    Since this is true for all $\epsilon > 0$, we have

    Since we assume the tagged receiver can decode its intended message, , and hence . This completes the proof of Theorem 1 for $K = 2$.
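The entropy-power-inequality step in case 2 can be summarized as follows. This is a reconstruction in our notation ($Q$, $I$, $N$, $\delta$, base-2 logarithms, differential entropies of circularly symmetric complex Gaussians), not the paper's exact display:

```latex
% Fano's inequality over the reduced-noise channel (3), whose capacity
% \log(1 + I/(N-\delta)) is matched to R_1, gives
h\bigl(g_1 X_1^n + Z'^n\bigr) \;\ge\; n \log\bigl(\pi e\,(I + N - \delta)\bigr) - n\epsilon_n .
% Adding back the independent component Z'' of variance \delta and
% applying the conditional entropy power inequality,
h\bigl(g_1 X_1^n + Z^n\bigr)
  \;\ge\; n \log\Bigl(2^{h(g_1 X_1^n + Z'^n)/n} + \pi e\,\delta\Bigr)
  \;\ge\; n \log\bigl(\pi e\,(I + N)\bigr) - n\epsilon_n' .
% This meets the maximum-entropy bound, so the interference-plus-noise
% sequence is asymptotically i.i.d. Gaussian, and the tagged pair's rate
% is bounded by the IAN rate:
R_0 \;\le\; \log\Bigl(1 + \frac{Q}{N + I}\Bigr) .
```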

Remarks:

  1. What the above proof showed is that if the message of transmitter 0 is reliably decoded, then either (1) the interferer’s message can be jointly decoded as well, in which case the rate vector is in the 2-transmitter MAC capacity region, or (2) the interference plus the background noise is close to i.i.d. Gaussian, in which case decoding transmitter 0’s message treating the interference plus background noise as Gaussian is optimal.

  2. One may think that since the interferer uses a Gaussian random code, the interference must be Gaussian and hence the interference plus background noise must also be Gaussian. This thinking is misguided, however, since what is important to the communication problem are the statistics of the interference plus noise conditioned on a realization of the interferer’s random code. Given a realization of the code, the interference is discrete, coming from a code, and hence it is not in general true that the interference plus noise is close to i.i.d. Gaussian. What we showed in the above converse is that this holds when the message from the interferer cannot be jointly decoded with the message from transmitter 0.

2.2 Proof of Theorem 1 for arbitrary $K$

Now, consider the general case with $K$ transmitter-receiver pairs.

Proof of achievability. The proof is a straightforward generalization of the proof for $K = 2$, and the condition for the probability of error to approach 0 is that the rate vector lies in the region:

(4)

where

is the augmented MAC region for the subset of transmitters treating the transmitters in as Gaussian noise.

As in the $K = 2$ case, the region appears to be larger than . We again establish the converse for the original characterization of , hence showing that the two regions coincide.

Proof of the converse. The proof for the $K = 2$ case identifies, for a given rate vector, a maximal set of interferers whose messages can be jointly decoded with the tagged transmitter’s message. This set depends on the given rates of the interferer; if , the set is , otherwise it is . The key to the proof is to show that whichever the case may be, the residual interference created by the transmitters whose messages are not decoded plus the background noise must be asymptotically i.i.d. Gaussian. We generalize this proof to an arbitrary number of interferers. In this general setting, however, explicitly identifying a maximal set of interferers whose messages can be jointly decoded with the tagged transmitter’s message is a combinatorially difficult task. Instead, we identify it existentially.

Suppose the transmission rate vector is and the average probability of error for the tagged receiver approaches zero as . Consider the set of subsets of interferers

Intuitively, these are all the subsets of interferers whose messages can be jointly decoded after decoding while treating the other transmitted signals as Gaussian noise. Let be a maximal set in , i.e., there is no larger subset that contains . Since the message is decodable by the assumption of the converse, the tagged receiver can cancel off the tagged transmitter’s signal. Next, the messages of the interferers in can be decoded, treating the interference from the remaining interferers plus the background noise as Gaussian. This is because by assumption and all interferers are using G-ptp codes. After canceling off the signals from the interferers in , the tagged receiver is left with interferers in . Since no further messages can be decoded treating the rest as Gaussian noise (by the maximality of ), it follows that for any subset , is not in the capacity region of the MAC with transmitters in and Gaussian noise with power . Let

In the $K = 2$ scenario, the maximal set is either $\{1\}$ or $\emptyset$. In the first case, both messages are decoded, hence the residual interference plus the background noise is simply the background noise, which is Gaussian. In the second case, the interferer’s message is not decoded, and our earlier argument shows the interferer must be communicating above the capacity of the point-to-point Gaussian channel to the tagged receiver. Hence the aggregate interference plus noise must be asymptotically i.i.d. Gaussian. In the general scenario with $K - 1$ interferers, there may be more than one residual interferer left after decoding a maximal set. The following lemma, which is proved in the following subsection, shows that this situation generalizes appropriately.

Lemma 1

Consider a -transmitter MAC

where the received power from transmitter is and . Let

(5)

If the transmitters use G-ptp codes at rate vector and , then

that is, the received sequence is asymptotically i.i.d. Gaussian.

Lemma 1 shows that the interference after decoding the interferers in plus the background noise is asymptotically i.i.d. Gaussian. Hence, , and we can conclude that . This completes the converse proof of Theorem 1 for arbitrary .

2.3 Proof of Lemma 1

The proof needs the following fact about . Recall that the boundary of the MAC capacity region consists of multiple faces. We refer to the face corresponding to the constraint on the total sum rate as the sum-rate face.

Fact 1

Let be a rate vector such that is on the boundary of for some but not on its sum-rate face. Then cannot be on the boundary of . In other words, the non-sum-rate faces of the MAC regions are never exposed on the boundary of .

Figure 2 depicts for . Here, the boundary of consists of three segments, each of which is a sum-rate face of a MAC region. The two non-sum-rate faces of are not exposed.

Figure 2: The boundary of for has three segments, all of which are sum-rate faces. A rate-tuple on the boundary of can lie on one of them.

Proof of Fact 1: Let be a rate vector such that is on the boundary of for some but not on its sum rate face. Then there is a subset of such that

(6)

and for all subsets strictly containing and inside ,

(7)

Subtracting (6) from (7) implies that for all such sets ,

This implies that is in the strict interior of . Hence, cannot be on the boundary of . This completes the proof of Fact 1.  \QED

Proof of Lemma 1: The proof is by induction on the number of transmitters .

$k = 1$: this just says that for a point-to-point Gaussian channel, if we transmit at a rate above capacity using a G-ptp code, then the output is asymptotically i.i.d. Gaussian. This is a well-known fact.

Assume the lemma holds for all smaller numbers of transmitters. Consider the case with $k$ transmitters.

Express , where and are independent Gaussians with variances and , respectively, where is chosen such that is on the boundary of for the MAC

Here, is the same as except that the background noise power is replaced by . Let be the collection of all subsets for which ( is the same as except that the background noise power is replaced by ). Pick a maximal subset from that collection. By Fact 1, must be on the sum-rate face of . The MAC can be decomposed as

By the maximality of , no further transmitted messages can be decoded beyond the ones for the transmitters in (otherwise, there would exist a bigger subset containing and for which ). This implies in particular that for any subset , the rate vector cannot be in the region ; otherwise if such a exists, the receiver could have first decoded the messages of transmitters in , cancelled their signals, and then decoded the messages of the transmitters in , treating the residual interference plus noise as Gaussian. Hence if we consider the smaller MAC

we can apply the induction hypothesis to show that is asymptotically i.i.d. Gaussian. So now we have a Gaussian MAC for transmitters in

and since the rate vector lies on the sum-rate boundary of this MAC, we now have the situation of a super-transmitter, i.e., a combination of all transmitters in , sending at the capacity of this Gaussian channel. Using an argument very similar to that for the base case, one can show that is asymptotically i.i.d. Gaussian. Adding back the removed noise yields the desired conclusion. This completes the proof of Lemma 1.  \QED

3 Capacity Region with MAC-Capacity-Achieving Codes

The converse in Theorem 1 says that if the transmitters use Gaussian random codes, then one can do no better than treating interference as Gaussian noise or joint decoding. The present section shows that this converse result generalizes to a certain class of (deterministic) “MAC-capacity-achieving” codes, to be defined precisely below. We first focus on the two-transmitter-receiver pair case and then generalize to the -transmitter case.

A (deterministic) single-user code satisfying the transmit power constraint is said to achieve a rate $R$ over a point-to-point Gaussian channel if the probability of error approaches 0 as the block length $n \to \infty$. A code is said to be point-to-point (ptp) capacity-achieving if it achieves a rate of $R$ over every point-to-point Gaussian channel with capacity greater than $R$.

Now consider the two transmitter-receiver pair Gaussian interference channel. A rate-pair is said to be achievable over the interference channel via a sequence of ptp-capacity-achieving codes if there exists a sequence of such codes for each transmitter such that the probability of error

approaches 0 as $n \to \infty$. The capacity region with ptp-capacity-achieving codes is the closure of the set of achievable rates. The theorem below is a counterpart to the converse in Theorem 1 for G-ptp codes.

Theorem 2

The capacity region of the two transmitter-receiver pair interference channel with ptp-capacity-achieving codes is no larger than , as defined in (1) for $K = 2$.

{proof}

The result follows from the observation that in the proof of the converse for Theorem 1, the only property we used about the G-ptp codes is that the average decoding error probability of the interferer’s message after canceling the message of the intended transmitter goes to zero whenever . This property remains true if the interferer uses a ptp-capacity-achieving code instead of a G-ptp code.

Theorem 2 says that as long as the codes of the transmitters are designed to optimize point-to-point performance, the region is the fundamental limit on their performance over the interference channel. This is true even if the codes do not “look like” randomly generated Gaussian codes.

Now let us consider the -transmitter interference channel for general . Is still an outer bound to the capacity region if all the transmitters use ptp-capacity-achieving codes? The answer is no. A counter-example can be found in [3] (Section IIB), which considers a 3-transmitter many-to-one interference channel with interference occurring only at receiver . There, it is shown that if each of the transmitters uses a lattice code, which is ptp-capacity-achieving, one can do better than both joint decoding all transmitters’ messages and decoding just transmitter ’s message treating the rest of the signal as Gaussian noise at receiver . The key is to use lattice codes for transmitter and , and have them align at receiver so that the two interferers appear as one interferer. Hence, it is no longer necessary for receiver to decode the messages of both interferers in order to decode the message from transmitter ; decoding the sum of the two interferers is sufficient. At the same time, treating the interference from and as Gaussian noise is also strictly sub-optimal.

In this counter-example, the transmitters’ codes are ptp-capacity-achieving but not “MAC-capacity-achieving” in the sense that receiver cannot jointly decode the individual messages of the interferers. A careful examination of the proof of the converse in Theorem 1 for general $K$ reveals that the converse in fact holds whenever the codes of the transmitters satisfy such a MAC-capacity-achieving property.

Consider a -transmitter Gaussian MAC

and a subset . A (deterministic) code for this MAC, where each transmitter satisfies the same transmit power constraint , is said to achieve the rate-tuple over the MAC if the probability of error

approaches 0 as $n \to \infty$. A code is said to be MAC-capacity-achieving if, for every , it achieves a rate over every Gaussian MAC whose capacity region contains . Recall that the region is the capacity region of the MAC with transmitters in and the signals from the rest of the transmitters treated as Gaussian noise. Thus this definition says that a MAC-capacity-achieving code is good enough to achieve this performance for any subset of transmitters.

Now consider the transmitter-receiver pair Gaussian interference channel. A rate-tuple is said to be achievable on the interference channel via a sequence of MAC-capacity-achieving codes if there exists a sequence of MAC-capacity-achieving codes for every subset containing transmitters such that the probability of error

approaches zero as $n \to \infty$. The capacity region with MAC-capacity-achieving codes is the closure of the set of all such achievable rate tuples.

Theorem 3

The capacity region of the Gaussian $K$-transmitter interference channel with MAC-capacity-achieving codes is no larger than , as defined in (1).

{proof}

The result follows from the observation that in the proof of the converse in Theorem 1, the only property that was used about the G-ptp codes of the transmitters is precisely the MAC-capacity-achieving property defined above.

The counter-example above shows that one can indeed do better than the region , for example by using interference alignment. Interference alignment, however, requires careful coordination and accurate channel knowledge at the transmitters. On the other hand, one can satisfy the MAC-capacity-achieving property without the need for such careful coordination. So, if one takes the MAC-capacity-achieving property as a definition of the lack of coordination between the transmitters, then the above theorem delineates the fundamental limit on the performance over the interference channel when the transmitters are not coordinated.

4 Symmetric Rate

We specialize the results in the previous sections to the case when all messages have the same rate $R$. This will help us compare the network performance of the optimal decoder to that of the other decoders for Gaussian ptp codes. Throughout the section, we assume that the interferers are indexed in order of decreasing received power at the tagged receiver, $I_1 \ge I_2 \ge \cdots$, and define the total interference power $I = \sum_k I_k$. When $K = \infty$, we will assume that $I$ is finite, hence $I_k \to 0$ as $k \to \infty$.

4.1 Optimal Decoder

Focusing again on the tagged receiver, define the symmetric rate as the supremum over $R$ such that the symmetric rate tuple is achievable. We can express the symmetric rate as the solution of a simple optimization problem.

Lemma 2

The symmetric rate under G-ptp codes is

(8)
{proof}

From the reduced characterization of in (4), we have

where is the symmetric rate of the region . The second equality follows from the observation that the reduced MAC region is monotonically increasing in the received powers from the transmitters in and decreasing in the interference power from the transmitters in . Hence, among all subsets of size , the one with the largest symmetric rate is (the one with the highest-powered transmitters and the lowest-powered interferers).

Taking into account all constraints of the region , we have

The desired result (8) now follows from the fact that among all the subsets of size , the one with the smallest total power is .
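The max-min structure described in this proof can be turned into a short numerical routine. This is a sketch under our reading of (8) — interferers sorted by decreasing received power, the $k$ strongest decoded jointly with the tagged message, and the binding subset constraints involving the weakest decoded interferers — with hypothetical numbers, not the paper's exact display:

```python
from math import log2

def sym_rate(Q, interference, N):
    """Symmetric rate achievable with G-ptp codes, following the
    max-min structure of (8): maximize over the number k of strongest
    interferers decoded jointly with the tagged message; the binding
    MAC constraints involve the weakest decoded interferers."""
    I = sorted(interference, reverse=True)   # strongest interferers first
    best = 0.0
    for k in range(len(I) + 1):              # decode the k strongest interferers
        residual = N + sum(I[k:])            # the rest is treated as noise
        rate_k = float("inf")
        for j in range(1, k + 2):            # subsets of size j containing the tagged tx
            # smallest total power: tagged tx plus the j-1 weakest decoded interferers
            power = Q + sum(I[k - (j - 1):k])
            rate_k = min(rate_k, log2(1 + power / residual) / j)
        best = max(best, rate_k)
    return best

# Hypothetical example: one strong and two weak interferers.
print(sym_rate(Q=4.0, interference=[6.0, 0.5, 0.2], N=1.0))
```

In this example the maximum is attained by jointly decoding only the strongest interferer; decoding the weak interferers as well would lower the symmetric rate.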

4.2 Other Decoders

We will use the following nomenclature for the rest of the paper:

  • IAN refers to decoding that treats all interference as Gaussian noise. The condition for IAN is

  • SIC($k$) refers to successive interference cancellation in which the tagged receiver sequentially decodes and cancels the signals from the $k$ strongest transmitters, treating the other signals as noise, and then decodes the message from the tagged transmitter while treating the remaining signals as Gaussian noise. The conditions for SIC($k$) are

  • JD($k$) refers to joint decoding of the messages of the tagged transmitter and the first $k$ interferers, treating the rest as Gaussian noise. The conditions for JD($k$) are

    (9)

    The JD($k$) conditions are not monotonic in $k$; that is, the fact that JD($k$) holds implies neither that JD($k+1$) holds nor that JD($k-1$) holds in general.

  • OPT($k$) refers to the optimal decoder used in the proof of Theorem 1 if there were only $k$ interferers. OPT($K-1$), or simply OPT, refers to the optimal decoding rule. The condition for OPT($k$) is with the symmetric rate given by (8). Since the condition for OPT($k$) is the union of the JD($j$) conditions for $j \le k$, if OPT($k$) holds, so does OPT($k'$) for all $k' \ge k$.
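The families of conditions above can be checked numerically. The following sketch encodes them under the same assumptions as before (interferers sorted by decreasing power; function names are ours). For the joint-decoding constraints we use the reduced region — subsets containing the tagged transmitter only — whose union over $k$ characterizes OPT; note that the $k = 0$ case reduces to the IAN condition:

```python
from math import log2

def ian_ok(R, Q, I, N):
    # IAN: decode the tagged message with all interference as noise.
    return R <= log2(1 + Q / (N + sum(I)))

def sic_ok(R, k, Q, I, N):
    # SIC(k): decode and cancel the k strongest interferers one at a
    # time, each treating everything not yet cancelled (including the
    # tagged signal) as noise, then decode the tagged message.
    for i in range(k):
        if R > log2(1 + I[i] / (N + Q + sum(I[i+1:]))):
            return False
    return R <= log2(1 + Q / (N + sum(I[k:])))

def jd_reduced_ok(R, k, Q, I, N):
    # Reduced joint-decoding constraints for the tagged message and the
    # k strongest interferers: only subsets containing the tagged
    # transmitter, the weakest decoded interferers being binding.
    residual = N + sum(I[k:])
    return all(j * R <= log2(1 + (Q + sum(I[k-(j-1):k])) / residual)
               for j in range(1, k + 2))

def opt_ok(R, Q, I, N):
    # OPT: union over k of the reduced joint-decoding conditions.
    return any(jd_reduced_ok(R, k, Q, I, N) for k in range(len(I) + 1))

I = [6.0, 0.5, 0.2]   # hypothetical received powers, strongest first
print(ian_ok(1.0, 4.0, I, 1.0),
      sic_ok(1.0, 1, 4.0, I, 1.0),
      opt_ok(1.0, 4.0, I, 1.0))   # False True True
```

Here the target rate is unreachable by IAN but reachable once the strongest interferer is decoded, whether by cancellation or jointly.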

4.3 Number of Interferer Messages Decoded

Lemma 2 shows that, for finite $K$, the optimal decoding strategy is to use JD($k^\star$) with

(10)

where

(11)

provided the argmax in question is uniquely defined.

The following lemma is focused on the case $K = \infty$, which will be considered in the next sections, where one may fear that the maximum in (8) is not attained, i.e., that the argmax in (10) is not defined. Fortunately, this is not the case.

Lemma 3

If , and , then .

{proof}

We have

Hence is a positive function bounded from above by a function that tends to 0 when tends to infinity. The values where is maximal are then all finite unless it is 0 everywhere. But this is not the case since our assumptions on and imply that .

The following lemma will be used later.

Lemma 4

Let

Then, a sufficient condition for achievability by OPT at rate is that

(12)

Further, if this condition holds, then the conditions for JD() are met.

{proof}

We can derive a lower bound for the symmetric rate in (8) in terms of :

The first inequality is obtained by choosing in the outer maximization; the second inequality is obtained by lower bounding the received powers of all the interferers with index by ; the last equality follows from the fact that is a monotonically decreasing function of . The sufficient condition (12) for achievability is now obtained by requiring the target rate to be less than this lower bound.

Lemma 4 gives a guideline on how to select the set of interferers to decode jointly: under condition (12), joint decoding at rate is guaranteed to succeed when decoding all interferers with a received power larger than that of the tagged transmitter. This is only a bound, however, and as we will see in the simulation section, one can often succeed in decoding more interferers than this.

5 Spatial Network Models and Simulation Results

The aim of the simulations we provide in this section is to illustrate the performance improvements of OPT versus IAN and JD. The framework chosen for these simulations is a spatial network with a denumerable collection of randomly located nodes. In the following section we also use this spatial network model for mathematical analysis.

5.1 Spatial Network Models

All the spatial network models considered below feature a set of transmitter nodes located in the Euclidean plane. The channel gains defined in Section 2, or equivalently the received signal power $Q$ and the interference powers $I_k$, at the tagged receiver are evaluated using a path loss function $l(r)$, where $r$ is distance. Here are two examples used in the literature (and in some examples below):

  • $l(r) = r^{-\beta}$, with $\beta > 2$ (case with pole),

  • $l(r) = (\max(r, r_0))^{-\beta}$, with $\beta > 2$ and $r_0 > 0$ a constant (case without pole); it makes sense to take $r_0$ equal to the wavelength.

More precisely, if we denote the locations of the transmitters by $x_0, x_1, \ldots$ (with $x_0$ the tagged transmitter) and that of the tagged receiver by $y$, and if we assume that the tagged receiver selects the interferers with the strongest received powers to be jointly decoded, then $Q = P\,l(|x_0 - y|)$ and $I_k = P\,l(|x_k - y|)$ for $k \ge 1$, with the interferers indexed in order of decreasing received power. Here $P$ denotes the transmit power. Since we assume that $l$ is nonincreasing, the strongest interferer is the closest one to $y$ (excluding the tagged transmitter). Let $I$ be the total interference at the tagged receiver, namely $I = \sum_{k \ge 1} I_k$.
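For instance, under the path loss model with a pole and a common transmit power, the received powers and the IAN rate follow directly from the node geometry. A minimal sketch (positions and parameter values hypothetical):

```python
from math import log2, hypot

beta = 4.0    # path loss exponent (> 2)
P = 1.0       # common transmit power
N = 0.01      # noise power

def l(r):
    # Path loss with a pole at r = 0: l(r) = r**-beta.
    return r ** (-beta)

receiver = (0.0, 0.0)
tagged_tx = (1.0, 0.0)
interferer_txs = [(1.5, 1.0), (-2.0, 0.5), (0.5, -3.0)]

def rx_power(tx):
    # Received power = transmit power times path loss at the distance.
    return P * l(hypot(tx[0] - receiver[0], tx[1] - receiver[1]))

Q = rx_power(tagged_tx)
I = sorted((rx_power(tx) for tx in interferer_txs), reverse=True)

# IAN SINR and rate at the tagged receiver:
sinr = Q / (N + sum(I))
print(f"Q = {Q:.3f}, strongest interferer = {I[0]:.3f}, IAN rate = {log2(1 + sinr):.3f}")
```

As expected, the strongest interferer is the one closest to the receiver.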

The simulations also consider the following extensions of this basic model:

  • The fading case, where the channel gain is further multiplied by , where represents the effect of fading from transmitter to . In this case, the strongest interferer is not necessarily the closest to .

  • The case where the power constraint is not the same for all transmitters. Then and for , with the power constraint of transmitter .

5.2 IAN, SIC($k$), JD($k$), and OPT($k$) Cells

Definitions

Fix some rate $R$. For each decoding rule (i.e., IAN, …) as defined above, let the cell of the rule be the set of locations in the plane where the conditions for the rule are met with respect to the tagged transmitter at rate $R$. The main objects of interest are hence the cells of IAN, SIC($k$), JD($k$), and OPT($k$).

Inclusions

Rather than looking at the increase of rate obtained when moving from one decoding rule to another, we fix $R$ and compare the cells of the two decoding rules. In view of the comparison results in Section 2, we have

For all pairs of conditions and , we define to be the set of locations in the plane where the condition for is met but the condition for is not met. For instance,

Simulations

In the simulation plots below, the transmitters are randomly located according to a Poisson point process. The attenuation function is of one of the two forms given in Section 5.1.
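The qualitative behavior in the figures can be reproduced in miniature by testing, on a grid of candidate receiver locations, whether the IAN condition or the two-user joint-decoding condition holds, so that OPT(1) is the union of the two. The configuration below is illustrative (our own toy parameters, with a fixed number of uniformly placed interferers standing in for the Poisson process):

```python
import random
from math import log2, hypot

random.seed(1)
beta, P, N, R = 4.0, 1.0, 0.01, 1.0   # illustrative parameters
tagged = (5.0, 5.0)
# 10 interferers uniformly placed on a 10x10 window.
txs = [(10 * random.random(), 10 * random.random()) for _ in range(10)]

def pw(a, b):
    # Received power with pole path loss l(r) = r**-beta.
    return P * hypot(a[0] - b[0], a[1] - b[1]) ** (-beta)

def covered(y, rule):
    Q = pw(tagged, y)
    I = sorted((pw(t, y) for t in txs), reverse=True)
    ian = R <= log2(1 + Q / (N + sum(I)))
    if rule == "IAN":
        return ian
    # OPT(1): IAN or reduced joint decoding of the strongest interferer.
    rest = N + sum(I[1:])
    jd1 = (R <= log2(1 + Q / rest) and
           2 * R <= log2(1 + (Q + I[0]) / rest))
    return ian or jd1

# Grid offset so no point coincides with the tagged transmitter.
grid = [(0.125 + x / 4, 0.125 + y / 4) for x in range(40) for y in range(40)]
area_ian = sum(covered(y, "IAN") for y in grid)
area_opt = sum(covered(y, "OPT1") for y in grid)
print(area_ian, area_opt, area_opt >= area_ian)
```

By construction the OPT(1) cell contains the IAN cell; the interesting question, visible in the figures, is how much larger it is.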

Figure 3 compares and . Notice that SIC does not increase the region that is covered compared to IAN, whereas OPT(1) does.

Figure 3: The top plot is for and the bottom plot is for for the tagged transmitter. The transmitters are denoted by crosses. The contours denote the boundaries of the IAN cells of different transmitters. The spatial user density is 0.1. The power constraints are here randomly chosen according to a uniform distribution over . Variable transmission powers show up when devices are heterogeneous or power controlled. bits/s/Hz and . The tagged transmitter is at the center of the plot (at ). The attenuation is .

Figure 4 compares OPT() to JD() and IAN. Note that there is no gain in moving from JD() to OPT() outside the IAN cell. Also, in such a spatial network, one of the practical weaknesses of JD() is its lack of coverage continuity (the JD() cell has holes and may even lack connectivity, as shown in the plots). These holes are due to the unnecessary symmetry between the tagged transmitter and the strongest interferer, which penalizes the former.

Figure 4: Figure (i) depicts for the tagged transmitter, located at , and Figure (ii) . Figure (iii) shows and Figure (iv) . The path loss exponent is , the power constraint is for all users; the threshold is bits/s/Hz, and the user density is . The attenuation is .

5.3 SIC($k$) versus OPT($k$)

There are interesting differences between SIC() and OPT(). Let

where is the cell of transmitter using decoding rule (IAN, OPT, SIC). Also let

denote the number of transmitters covering location under condition . Consider the following observations.

  1. We have

    (13)

    that is, the region of in the plane covered when treating interference as noise is identical to that of successive interference cancellation. This follows from the condition for SIC(1), which implies that the location under consideration is included in the IAN cell of another transmitter in the symmetric rate case. The gain of SIC(1) is hence only in the diversity of transmitters that can be received at any location , i.e.,

    (14)
  2. We have

    (15)

    As we see in Figure 3, this inclusion is strict for some parameter values, that is, optimal decoding increases global coverage, whereas successive interference cancellation does not. We also have

    (16)
  3. Finally, we have

    (17)

    There is no general comparison between and , however.