Two-Way Interference Channel Capacity: How to Have the Cake and Eat it Too


Changho Suh, Jaewoong Cho and David Tse
C. Suh and J. Cho are with the School of Electrical Engineering at Korea Advanced Institute of Science and Technology, South Korea. D. Tse is with the Electrical Engineering Department at Stanford University, CA, USA.
Abstract

Two-way communication is prevalent, and its fundamental limits were first studied in the point-to-point setting by Shannon [1]. One natural extension is a two-way interference channel (IC) with four independent messages: two associated with each direction of communication. In this work, we explore a deterministic two-way IC which captures key properties of the wireless Gaussian channel. Our main contribution lies in the complete characterization of the capacity region of the two-way IC (w.r.t. the forward and backward sum-rate pair) via a new achievable scheme and a new converse. One surprising consequence of this result is that not only can we obtain an interaction gain over the one-way non-feedback capacities, we can sometimes get all the way to the perfect feedback capacities in both directions simultaneously. In addition, our novel outer bound characterizes channel regimes in which interaction has no bearing on capacity.

Keywords:

Feedback Capacity, Interaction, Perfect Feedback, Two-Way Interference Channels

I Introduction

Two-way communication, where two nodes want to communicate data to each other, is prevalent. The first study of such two-way channels was done by Shannon [1] in the setting of point-to-point memoryless channels. When the point-to-point channels in the two directions are orthogonal (such as when the two directions are allocated different time slots or different frequency bands, or when the transmitted signal can be canceled perfectly as in full-duplex communication), the problem is not interesting as feedback does not increase point-to-point capacity. Hence, communication in one direction cannot increase the capacity of the other direction and no interaction gain is possible. One can achieve no more than the one-way capacity in each direction.

The situation changes in network scenarios where feedback can increase capacity. In these scenarios, communication in one direction can potentially increase the capacity of the other direction by providing feedback in addition to communicating data. One scenario of particular interest is the setting of the two-way interference channel (two-way IC), modeling two interfering two-way communication links (Fig. 1). Not only is this scenario common in wireless communication networks, it has also been demonstrated that feedback provides a significant gain for communication over (one-way) IC's [2, 3, 4]. In particular, [3] reveals that the feedback gain can be unbounded, i.e., the gap between the feedback and non-feedback capacities can be arbitrarily large for certain channel parameters. This suggests the potential of a significant interaction gain in two-way IC's. On the other hand, the feedback result of [3] assumes a dedicated infinite-capacity feedback link. In the two-way setting, any feedback needs to be transmitted through a backward IC, which also needs to carry its own backward data traffic. The question is whether, once we take the competition with the backward traffic into consideration, there is still any net interaction gain through feedback.

Fig. 1: Two interfering two-way communication links, consisting of two IC’s, one in each direction. The IC’s are orthogonal to each other and do not necessarily have the same channel gains.

To answer this question, [5] investigated a two-way IC under the linear deterministic model [6], which approximates a Gaussian channel. A scheme is proposed to demonstrate a net interaction gain, i.e., one can simultaneously achieve better than the non-feedback capacities in both directions. While an outer bound is also derived, it has a gap to the lower bound. Hence, there has been limited understanding of the maximal gain that can be reaped by feedback. In particular, whether or not one can get all the way to the perfect feedback capacities in both directions has remained unanswered. Recently, Cheng-Devroye [7] derived an outer bound, but it does not give a complete answer, as the result assumes a partial interaction scenario in which interaction is enabled only at two nodes, while no interaction is permitted at the other two nodes.

Fig. 2: When can one have the cake and eat it too? The plot is over two channel parameters of the deterministic model: the ratio of the interference-to-noise ratio (in dB) to the signal-to-noise ratio (in dB) of the IC in the forward direction, and the corresponding quantity for the IC in the backward direction. A third parameter, the ratio of the backward signal-to-noise ratio (in dB) to the forward signal-to-noise ratio (in dB), is fixed to a value within a certain range. White region: feedback does not increase capacity in either direction and thus interaction is not useful. Purple: feedback does increase capacity but interaction cannot provide such an increase. Light blue: feedback can be provided through interaction and there is a net interaction gain. Dark blue: interaction is so efficient that one can achieve the perfect feedback capacity simultaneously in both directions. This implies that one can obtain the maximal feedback gain without sacrificing anything for feedback transmission (have the cake and eat it too).

In this work, we settle this open problem and completely characterize the capacity region of the deterministic two-way IC via a new capacity-achieving transmission scheme as well as a novel outer bound. For simplicity, we assume the IC in each direction is symmetric between the two users; however, the IC's in the two directions are not necessarily the same (for example, they may use different frequency bands). For some channel gains, the new scheme simultaneously achieves the perfect feedback sum-capacities of the IC's in both directions. This occurs even when feedback offers gains in both directions and thus feedback must be explicitly or implicitly carried over each IC while that IC sends the traffic in its own direction. Fig. 2 shows for what channel gains this happens.

In the new scheme, feedback allows the exploitation of the following as side information: (i) past received signals; (ii) users' own messages; (iii) even future information, via retrospective decoding (to be detailed later; see Remark 3 in particular). While the first two were already shown to offer a feedback gain in the literature, the third is newly exploited here. It turns out that this new exploitation allows us to achieve the perfect feedback capacities in both directions, which can never be done by the prior schemes [3, 4, 5].

Our new outer bound leads to the characterization of channel regimes in which interaction provides no gain in capacity. The bound is neither a cutset bound nor one of the more sophisticated bounds such as genie-aided bounds [8, 2, 3, 9, 10, 11, 14] and the generalized network sharing bound [12]. Instead, we employ a notion called triple mutual information, also known as interaction information [13]. In particular, we exploit one key property of this notion, commutativity, to derive the bound.

II Model

Fig. 3 describes a two-way deterministic IC where user wants to send its own message to user , while user wishes to send its own message to user , . We assume that the messages are independent and uniformly distributed. For simplicity, we consider a setting where both the forward and backward ICs are symmetric but not necessarily the same. In the forward IC, and indicate the number of signal bit levels for the direct and cross links, respectively. The corresponding values in the backward IC are denoted by . Let be user 's transmitted signal and be the part of visible to user . Similarly, let be user 's transmitted signal and be the part of visible to user . The deterministic model abstracts the broadcast and superposition of signals in the wireless Gaussian channel; see [6] for explicit details. A signal bit level observed by both users is broadcast. If multiple signal levels arrive at the same level at a user, we assume modulo-2 addition. The encoded signal of user at time is a function of its own message and past received signals: . We define , where denotes user 's received signal at time , offered through the backward IC. Similarly, the encoded signal of user at time is a function of its own message and past received signals: .

Fig. 3: A two-way deterministic interference channel (IC).
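To make the level operations concrete, the following is a minimal sketch of the received-signal computation in the linear deterministic model of [6]: each receiver observes the modulo-2 sum of a shifted version of the direct signal and a shifted version of the cross signal. The helper names, the level-indexing convention (index 0 on top), and the example gains (n, m) = (2, 1) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def shift_down(x, s):
    """Shift a length-q bit vector down by s levels (index 0 = top level)."""
    q = len(x)
    y = np.zeros(q, dtype=int)
    if s < q:
        y[s:] = x[:q - s]
    return y

def received(x_direct, x_cross, n, m):
    """Bit levels observed at one receiver of a symmetric deterministic IC with gains (n, m)."""
    q = max(n, m)
    return shift_down(x_direct, q - n) ^ shift_down(x_cross, q - m)

# Hypothetical forward IC with (n, m) = (2, 1): the receiver's top level carries the
# direct top bit, and its bottom level carries the XOR of the direct bottom bit and
# the interferer's top bit.
x1 = np.array([1, 0])   # transmitted levels of the direct transmitter
x2 = np.array([1, 1])   # transmitted levels of the interfering transmitter
print(received(x1, x2, 2, 1))   # -> [1 1]
```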

A rate tuple is said to be achievable if there exists a family of codebooks and encoder/decoder functions such that the decoding error probabilities go to zero as code length tends to infinity.

For simplicity, we focus on a sum-rate pair for the forward and backward ICs: . The extension to the four-rate-tuple case is not particularly challenging, although it requires a complicated yet tedious analysis; given our results (to be presented soon) and the tradeoff w.r.t. (or ) already characterized in [3], the extension does not provide any additional insight, so we consider the simpler sum-rate-pair setting. The capacity region is defined as the closure of the set of achievable sum-rate pairs: , where denotes the corresponding region w.r.t. the high-dimensional rate tuple.

III Main Results

Our main contribution lies in the characterization of the capacity region of the two-way IC, formally stated below.

Theorem 1 (Capacity region)

The capacity region of the two-way IC is the set of sum-rate pairs such that

(1)
(2)
(3)
(4)

where and indicate the perfect feedback sum-capacities of the forward and backward IC’s, respectively [3].

{proof}

The achievability proof relies on two novel transmission schemes. In particular, we highlight key features of the second scheme - which we call retrospective decoding - as it plays a crucial role in achieving the perfect feedback capacities in both directions. The first feature is that it consists of two stages, each comprising a sufficiently large number of time slots. The second feature is that in the second stage, feedback-aided successive refinement w.r.t. the fresh symbols sent in the first stage occurs in a retrospective manner: the fresh symbol sent in time of stage I is refined in time of stage II where . See Section IV for the detailed proof.

For the converse proof, we first note that the first two bounds (1) and (2) match the perfect-feedback bounds [3, 14, 5], so one can prove them with a simple modification to the proofs in those references. The third bound is due to the cutset bound: and . Our contribution lies in the derivation of the last bound. See Section V-B for the proof.

We state two baselines for comparison to our main result.

Baseline 1 ([8, 15])

The capacity region for the non-interactive scenario is the set of such that

Baseline 2 ([3])

The capacity region for the perfect feedback scenario is .

With Theorem 1 and Baseline 1, one can readily see that a feedback gain (in terms of capacity region) occurs as long as , where and . A careful inspection reveals that there are channel regimes in which one can enhance (or ) without sacrificing the other. This implies a net interaction gain.

Definition 1 (Interaction gain)

We say that an interaction gain occurs if one can achieve for some and such that .

A tedious yet straightforward calculation with this definition leads us to identify channel regimes which exhibit an interaction gain, marked in light blue in Fig. 2.

We also find the regimes in which feedback does increase capacity but interaction cannot provide such an increase, meaning that whenever , must be and vice versa. These regimes are and , marked in purple in Fig. 2. The cutset bound (3) proves this for . For the regime , it has been open whether both and can be strictly positive. Our novel bound (4) resolves this open regime, demonstrating that there is no interaction gain in it.

Achieving perfect feedback capacities: One interesting observation is that there are channel regimes in which both and can be strictly positive. This is unexpected because it implies that not only does feedback not sacrifice one transmission for the other, it can actually improve both simultaneously. More interestingly, and can reach up to the maximal feedback gains, reflected in and . The dark blue regimes in Fig. 2 indicate such channel regimes when . Note that such regimes depend on . The amount of feedback that one can send is limited by the available resources offered by the backward (or forward) IC. Hence, the feedback gain can saturate depending on the availability of these resources, which is affected by the channel asymmetry parameter . One point to note here is that for any , there always exists a non-empty set of in which the perfect feedback capacities can be achieved. Corollary 1 stated below exhibits all such channel regimes.

Corollary 1

Consider a case in which feedback helps in both ICs: and . In this case, the channel regimes in which are:

{proof}

A tedious yet straightforward calculation with Theorem 1 completes the proof.

Remark 1 (Why the Perfect Feedback Regimes?)

When and , indicates the total number of resource levels at the receivers in the backward channel. Hence, one can interpret as the remaining resource levels (resource holes) that can potentially be utilized to aid forward transmission. It turns out feedback can maximize resource utilization by filling up the resource holes under-utilized in the non-interactive case. Note that represents the amount of feedback that needs to be sent for achieving . Hence, the condition (similarly ) in Corollary 1 implies that as long as we have enough resource holes, we can get all the way to perfect feedback capacity. We will later provide an intuition as to why feedback can do so while describing our achievability; see Remark 3 in particular.

IV Achievability Proof of Theorem 1

We first illustrate the new transmission schemes via two toy examples that convey the key ingredients of our achievability idea. Once the schemes have been described via the examples, we outline the proof of the general case, leaving a detailed proof for arbitrary channel parameters to Appendix A.

IV-A Example 1:

See Fig. 4 for the channel structure of the example. The claimed rate region in this example reads . This is the case in which one can achieve while maintaining . We introduce a new transmission scheme (that we call Scheme 1) to achieve the claimed rate region.

Fig. 4: A perfect feedback scheme for the forward IC (top); a nonfeedback scheme for the backward IC (bottom).

Perfect feedback scheme: A perfect feedback scheme was presented in [3]. Here we consider a different scheme, one that allows us to resolve the tension between feedback and independent messages when translated into a two-way scenario. The scheme operates in two stages; see Fig. 4. In stage I, four fresh symbols ( from user 1 and from user 2) are transmitted. The scheme in [3] feeds back to user 1, so that user 1 can decode , which turns out to help refine the corrupted symbol in stage II. Here, instead, we send back to user 2. This way, user 2 can get by removing its own symbol . Similarly, user 1 can get . Now in stage II, user 2 intends to re-send on top, as it was corrupted due to in stage I. But here a challenge arises: this re-transmission causes interference to user at the bottom level. This is where the symbol obtained via feedback at user 1 comes into play, through the idea of interference neutralization [16]. User 1 sending that symbol on the bottom neutralizes the interference, which in turn allows user 1 to transmit another fresh symbol, say , without being interfered with. Similarly, user 2 can carry interference-free. This way, we send 6 symbols during two time slots, thus achieving . As for the backward IC, we employ a nonfeedback scheme in [15]: users and send on the top levels, which yields .
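As a sanity check of the neutralization step, the following toy computation verifies over GF(2) that a symbol known at both forward transmitters cancels itself at the unintended receiver's level, so that a fresh symbol superimposed on the direct path arrives cleanly. The symbol names, the single-level abstraction, and the level allocation are illustrative assumptions, not the paper's exact construction.

```python
import itertools

for a3, b1 in itertools.product([0, 1], repeat=2):
    tx1_bottom = a3 ^ b1               # user 1: fresh symbol superimposed on the commonly known symbol
    tx2_top = b1                       # user 2: re-sends the corrupted symbol on top
    rx_bottom = tx1_bottom ^ tx2_top   # bottom level at the receiver: direct XOR cross (modulo-2)
    assert rx_bottom == a3             # b1 cancels itself; the fresh symbol a3 arrives interference-free
```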

We are now ready to illustrate our achievability. Like the perfect feedback scheme, it operates in two stages, and the operation of stage I remains unchanged. The new idea lies in the feedback strategy. Recall that is the signal we wish to feed back to user 2, but it conflicts with the transmission of . It would seem that an explicit selection must be made between the two competing transmissions, yet it turns out they can coexist without conflict. The idea is to combine the XORing scheme introduced in the network coding literature [17] with interference neutralization [16]. See Fig. 5. User simply sends the XOR of and on top. User 1 can then extract by using its own symbol as side information, although it is still interfered with by . Here a key observation is that is also available at user - it was received cleanly at the top level in stage I. User sending it on the bottom enables user 1 to achieve interference neutralization at the bottom level, thereby decoding the symbol of interest. Now consider the user 2 side. User 2 can exploit to obtain . Note that this is not the same as what user 2 wanted in the perfect feedback scheme; nonetheless, it can serve the same role, as will become clear soon. Similarly, with user sending on top while user sends (already delivered via the forward IC) on the bottom, user can decode the symbol of interest and user 1 can get .
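The essence of the XOR feedback strategy can be captured in a few lines: a single transmitted bit carries both a feedback symbol and an independent backward symbol, because each of its two decoders holds the other component as side information. The variable names below are placeholders, not the paper's notation.

```python
import itertools

for f, b in itertools.product([0, 1], repeat=2):
    x = f ^ b          # one transmission over a single backward bit level
    # The decoder that already knows the backward symbol b recovers the feedback symbol f:
    assert x ^ b == f
    # The decoder that already knows f (it is built from its own past symbols) recovers b:
    assert x ^ f == b
```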

Fig. 5: XORing with interference neutralization for the feedback strategy; employing interference alignment and neutralization for refinement of the past corrupted symbols.

Now in stage II, we take a similar approach as in the perfect feedback case. User 2 intends to re-send on top. Recall that in the perfect feedback scheme, user 1 sent the fed-back symbol on the bottom in order to remove the interference caused to user . The situation is different here: user 1 has instead. It turns out this can play the same role, via interference alignment and neutralization [18, 19, 16]. User 1 sends on the bottom. This seems to cause interference to user , but the interference can be canceled since is already decoded at user 2 - see the bottom level at user 2 in the backward channel. User 2 sending on top enables interference neutralization, which allows user 1 to send another fresh symbol on the bottom interference-free. Note that can be viewed as the aligned interference w.r.t. . Similarly, with user 1 sending on top and user 2 sending on the bottom, users and can decode and , respectively. This way, we achieve as in the perfect feedback case while maintaining . Hence, the claimed rate region is achieved.

Remark 2 (Exploiting Side Information)

Note in Fig. 5 (bottom) that the two backward symbols and the two feedback signals can be transmitted through the 2-bit-capacity backward IC. This is because each user can cancel the seemingly interfering information by exploiting what it has received and its own symbols as side information. The side information allows the backward IC to have an effectively larger capacity, thus yielding a gain. This gain offsets the feedback cost, which in turn enables feedback to come for free in the end. The nature of the gain offered by side information coincides with that of the two-way relay channel [20] and many other examples [21, 22, 3, 23, 24].

IV-B Example 2:

Scheme 1 is intended for the regimes in which feedback provides a gain only in one direction, e.g., and . For the regimes in which feedback helps in both directions, we develop another transmission scheme (which we call Scheme 2) that sometimes enables us to get all the way to the perfect feedback capacities. In this section, we illustrate the scheme via Example 2, in which and one can achieve . See Fig. 6 for the channel structure of the example.

Our scheme operates in two stages. One noticeable distinction, however, is that each stage comprises a sufficiently large number of time slots. Specifically, stage I consists of time slots, while stage II uses time slots. It turns out that our scheme ensures transmission of forward symbols and backward symbols, thus yielding:

as . The details are as follows.

Before describing the details, let us review the perfect feedback scheme of the backward IC [3], which is based on a relaying idea. User delivers a backward symbol, say , to user 1 via the feedback-assisted path: user user 2 feedback user user 1. Similarly, user sends to user 2. This yields .

Fig. 6: Stage I: Employ time slots. The operation in each time slot is similar to stage I’s operation in the perfect feedback case. We simply forward the XOR of a feedback signal and a new independent symbol. Here we see the tension between them.

Stage I: We employ time slots. In each time slot, we mimic the perfect feedback scheme, although we now have the tension between feedback and independent message transmissions.

Time 1: Four fresh symbols are transmitted over the forward IC. User then extracts the one that is desired to be fed back: . Next we send the XOR of and a backward symbol, say . Similarly user transmits . User 1 then gets using its own symbol . Similarly user 2 gets .

Time 2: User 1 superimposes with another new symbol, say , sending the XOR on the bottom; on top, another fresh symbol is transmitted. Similarly, user 2 sends . User transmits . Similarly, user sends . User 1 then gets by using its own signal . Similarly, user 2 obtains . Repeating the above, one can readily verify that at time , users 1 and 2 get and , respectively; similarly, users and get and on the bottom, respectively. See Fig. 6.

Fig. 7: Stage II: Time aims at decoding . At time , given (decoded in time ), we decode , which in turn helps decode . We iterate this from to .

Stage II: We employ time slots. We perform refinement w.r.t. the fresh symbols sent in stage I. The novel feature here is that the successive refinement occurs in a retrospective manner: the fresh symbol sent at time is refined at time in stage II where . One key point to emphasize is that a symbol refined in stage II acts as side information, which in turn helps refine other past symbols in later time slots. In this example, the decoding order reads:

(5)

Time : User 1 sends (received at time ) on the bottom. It turns out that this acts as the ignition for refining all the corrupted symbols in the past. Similarly, user 2 sends on the bottom. User can then obtain , which is forwarded to user 2. User 2 can then decode the symbol of interest. Similarly, is delivered to user 1.

Time : The decoded symbols turn out to play a key role in refining past forward transmissions. Remember that the symbol sent by user 2 at time in stage I was corrupted. User 2 re-transmits it on top, as in the perfect feedback case. But here the situation differs from the perfect feedback case, where was available at user 1 and helped null the interference. That symbol is not available here; instead, user 1 has an interfered version: . Nonetheless, we can effectively do the same as in the perfect feedback case. User 1 sends on the bottom. Clearly the neutralization is not perfect, as it contains . Here the idea is to exploit the as side information to enable interference alignment and neutralization [18, 19, 16]. Note that user 2 can exploit the knowledge of to construct the aligned interference . Sending this on top, user 2 can completely neutralize the interference as in the perfect feedback case. This enables user 1 to deliver interference-free on the bottom. Similarly, we can deliver . On the other hand, exploiting (decoded right before) as side information, user can extract from the signal received at time . Sending this then allows user 2 to decode . Similarly, can be decoded at user 1.

Time Time : We repeat the same procedure as before. At time , where , exploiting decoded in time , we decode , which in turn helps decode .
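The following sketch lays out one decoding schedule consistent with the description above, assuming that the stage-II refinement sweeps over the stage-I slots in reverse chronological order and that each refined slot serves as side information for the next; the exact indexing of the order (5) is an assumption made for illustration only.

```python
L = 5                          # hypothetical number of stage-I time slots
decoded = []                   # stage-I slots refined so far; they act as side information
for i, slot in enumerate(reversed(range(1, L + 1)), start=1):
    side_info = list(decoded)  # everything refined in earlier stage-II slots
    decoded.append(slot)       # refine the fresh symbols sent in stage-I `slot`
    print(f"stage-II slot {i}: refine stage-I slot {slot}, "
          f"using symbols refined from slots {side_info} as side information")
```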

Now let us compute an achievable rate. In stage I, we sent fresh forward and backward symbols. In stage II, we sent only fresh forward symbols. This yields the desired rate in the limit of .

Remark 3 (Exploiting Future Symbols as Side Information)

Note in Fig. 6 the two types of tension: (1) forward-symbol feedback vs. backward symbols; (2) the counterpart in the other direction. As illustrated in Fig. 7, our scheme resolves both tensions. This enables us to fully utilize the remaining resource level for sending the forward-symbol feedback of , thereby achieving . Similarly, we can fill up the resource holes with the backward-symbol feedback of . This comes from the fact that our feedback scheme exploits the following as side information: (i) past received signals; (ii) users' own symbols; (iii) partially decoded symbols. While the first two were already shown to be beneficial in the prior works [3, 5] (as well as in Example 1), the third type of information is newly exploited here and turns out to yield the strong interaction gain. One can view it as future information. Recall the decoding order (5): when decoding , we exploited (future symbols w.r.t. ) as side information. A conventional belief is that feedback only lets us learn about the past. In contrast, we discover a new viewpoint on the role of feedback: feedback enables exploiting future information as well, via retrospective decoding.

IV-C Proof Outline

We categorize regimes depending on the values of channel parameters. Notice that when . Also by symmetry, it suffices to consider only five regimes - see Fig. 8:

Fig. 8: Regimes to check for achievability proof. By symmetry, it suffices to consider (R1), (R2), (R3), (R4), (R5).

As indicated in Fig. 2, (R1) and (R2) are the regimes in which there is no interaction gain. The proof builds only upon the perfect feedback scheme [3]. One thing to note here is that there are many subcases depending on whether or not the available resources offered by a channel suffice to achieve the perfect feedback bound; hence, a tedious yet careful analysis is required to cover all such subcases. On the other hand, (R3) and (R4) are the regimes in which there is an interaction gain in only one direction. In this case, the nonfeedback scheme suffices for the backward IC, while a non-trivial scheme needs to be employed for the forward IC; it turns out that Scheme 1 illustrated in Example 1 plays the key role in proving the claimed achievable region. (R5) is the regime in which there is an interaction gain and one can sometimes get to the perfect feedback capacities. We fully utilize the ideas presented in Scheme 1 and Scheme 2 to prove the claimed rate region. One key feature to emphasize is that the idea of network decomposition developed in [25] is utilized to provide a conceptually simpler proof of the general case. Here we illustrate the network decomposition idea via Example 3, leaving the detailed proof to Appendix A.

Fig. 9: Achievability for via network decomposition.

Example 3: Network decomposition relies on graph coloring. See Fig. 9. For the forward IC, we assign a color (say green) to level 1 and the levels connected to level 1. The green-colored graph then represents a subchannel, say , which has no overlap with the remaining uncolored subchannel . Following the notation in [25], we represent this by: . Similarly, the backward channel can be decomposed as: . We then pair up one forward subchannel and one backward subchannel, say , and apply Scheme 1 to the pair as in Fig. 5. This gives . For the remaining pair of and , we apply Scheme 2 independently. This yields . Combining these two achieves the desired rate region: .
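The graph-coloring step can be sketched as a connected-components computation over bit levels: levels that are linked through the direct or cross connections end up in the same subchannel. The ADT-style level indexing and the example gains (3, 1) below are assumptions for illustration; [25] states the decomposition in its own notation.

```python
def decompose(n, m):
    """Group the bit levels of a symmetric (n, m) deterministic IC into
    non-overlapping subchannels via connected components (graph coloring)."""
    q = max(n, m)
    parent = {}

    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    def union(u, v):
        parent[find(u)] = find(v)

    # Receive level j hears transmit level j - (q - n) via the direct link and
    # transmit level j - (q - m) of the other user via the cross link; by the
    # symmetry of the IC, levels are identified across the two users.
    for j in range(1, q + 1):
        for shift in (q - n, q - m):
            i = j - shift
            if 1 <= i <= q:
                union(('rx', j), ('tx', i))

    comps = {}
    for u in list(parent):
        comps.setdefault(find(u), []).append(u)
    return list(comps.values())

# Hypothetical example: a (3, 1) IC splits into two non-overlapping subchannels.
for comp in decompose(3, 1):
    print(sorted(comp))
```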

V Converse Proof of Theorem 1

The first two bounds (1) and (2) are the perfect-feedback bounds [3, 14, 5], so the proof is immediate via a slight modification. The third bound (3) is the cutset bound: and . The last is a new bound. For completeness, we provide detailed proofs of the cutset and perfect feedback bounds in the subsequent section; we then derive the new bound in Section V-B.

V-A Proof of the Cutset & Perfect Feedback Bounds

Proof of (3): Starting with Fano’s inequality, we get

where follows from the fact that is independent of , and is a function of ; follows from the fact that is a function of ; follows from the fact that conditioning reduces entropy; and follows from the fact that the right-hand side is maximized when are uniformly distributed and independent. Similarly, one can show . If is achievable, then as tends to infinity. Therefore, we get the desired bound.

Proof of (1): Starting with Fano’s inequality, we get

where follows from the independence of ; follows from the fact that is a function of , is a function of , and is a function of ; follows from the fact that conditioning reduces entropy. This completes the proof.

V-B Proof of a Novel Outer Bound

The proof hinges upon several lemmas stated below, and it is streamlined with the help of a key notion called triple mutual information (or interaction information [13]), which is defined as

I(X; Y; Z) := I(X; Y) - I(X; Y | Z).   (6)

It turns out that the commutative property of the notion plays a crucial role in deriving several key steps in the proof:

I(X; Y; Z) = I(Y; Z; X) = I(Z; X; Y),   (7)
i.e., the triple mutual information is invariant under any permutation of its three arguments.
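The commutativity in (7) is easy to verify numerically: writing the triple mutual information purely in terms of joint entropies makes it manifestly symmetric in its three arguments. The brute-force check below over a randomly generated joint pmf is an illustrative sketch, not part of the proof.

```python
import itertools, math, random

def H(pmf):
    """Entropy (bits) of a pmf given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, axes):
    out = {}
    for x, p in pmf.items():
        key = tuple(x[a] for a in axes)
        out[key] = out.get(key, 0.0) + p
    return out

def triple_mi(pmf, a, b, c):
    """I(A;B;C) = I(A;B) - I(A;B|C), computed from entropies of marginals."""
    I_ab = H(marginal(pmf, (a,))) + H(marginal(pmf, (b,))) - H(marginal(pmf, (a, b)))
    I_ab_given_c = (H(marginal(pmf, (a, c))) + H(marginal(pmf, (b, c)))
                    - H(marginal(pmf, (c,))) - H(pmf))
    return I_ab - I_ab_given_c

random.seed(0)
outcomes = list(itertools.product([0, 1], repeat=3))
w = [random.random() for _ in outcomes]
pmf = {o: wi / sum(w) for o, wi in zip(outcomes, w)}

vals = [triple_mi(pmf, *perm) for perm in itertools.permutations(range(3))]
assert max(vals) - min(vals) < 1e-9   # identical for every ordering of the arguments
print(vals[0])                        # can be negative, unlike ordinary mutual information
```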

Using this notion and starting with Fano’s inequality, we get

where follows from a chain rule. By symmetry, we get:

Now adding the above two and using Lemma 1 stated below, we get:

Hence, we get the desired bound.

Lemma 1

V-C Proof of Lemma 1

First consider:

where follows from the fact that is a function of ; follows from the fact that is a function of ; and is due to the definition of triple mutual information (6).

Using Lemma 2 stated at the end of this section, we get:

Now combining this with the 5th and 6th terms in the summation on the LHS gives: