Nested Lattice Codes for Gaussian Two-Way Relay Channels
Abstract
In this paper, we consider a Gaussian two-way relay channel (GTRC), where two sources exchange messages with each other through a relay. We assume that there is no direct link between the sources, and that all nodes operate in full-duplex mode. By utilizing nested lattice codes for the uplink (i.e., the MAC phase) and structured binning for the downlink (i.e., the broadcast phase), we propose two achievable schemes. Scheme 1 is based on the "compute-and-forward" scheme of [1], while scheme 2 utilizes two different lattices for the source nodes based on a three-stage lattice partition chain. We show that scheme 2 can achieve the capacity region at high signal-to-noise ratio (SNR). Regardless of all channel parameters, the achievable rate of scheme 2 is within 0.2654 bit of the cut-set outer bound for user 1. For user 2, the proposed scheme achieves within 0.167 bit of the outer bound if the channel coefficient is larger than one, and within 0.2658 bit of the outer bound if the channel coefficient is smaller than one. Moreover, the sum rate of the proposed scheme is within a constant gap of the sum capacity. These gaps for the GTRC are the best gap-to-capacity results to date.
I Introduction
In this paper, we study a two-way relay channel, where two nodes exchange their messages with each other via a relay. This arises, e.g., when two mobile users communicate with each other via the access point in a WLAN [2], or when a satellite enables data exchange between several earth stations that have no direct link between them.
Two-way or bidirectional communication between two nodes without a relay was first proposed and studied by Shannon in [3]. In this setup, two nodes want to exchange messages with each other, acting as transmitters and receivers at the same time. For this setup, the capacity region is not known in general, and only inner and outer bounds on the capacity region have been obtained in the literature.
The two-way relay channel, also known as the bidirectional relay channel, consists of two nodes communicating with each other in a bidirectional manner via a relay. This setup was first introduced in [4] and later studied in [4, 5, 6], where an approximate characterization of the capacity region of the Gaussian case was derived.
The traditional relaying protocols require four channel uses to exchange the data of two nodes, whereas the two-way relaying protocol in [4] needs only two phases to achieve bidirectional communication between the two nodes. The first phase is referred to as the multiple access (MAC) phase, and the second as the broadcast (BRC) phase. In the MAC phase, both nodes transmit their messages to the relay node, which decodes them. In the BRC phase, the relay combines the data from both nodes and broadcasts the combined data back to them. For this phase, there exist several strategies for the processing at the relay node, e.g., an amplify-and-forward (AF) strategy [4], a decode-and-forward (DF) strategy [4, 7], or a compress-and-forward (CF) strategy [8].
The AF protocol is a simple scheme that amplifies the signal transmitted from both nodes and retransmits it to them; unlike the DF protocol, no decoding is performed at the relay. In the two-way AF relaying strategy, the signals at the relay are combined on the symbol level. Due to the amplification of noise, its performance degrades at low signal-to-noise ratio (SNR). The two-way DF relaying strategy was proposed in [4], where the relay decodes the received data bits from both nodes. Since the decoded data at the relay can be combined on the symbol level or on the bit level, different data-combining schemes at the relay have been proposed for the two-way DF relaying strategy: superposition coding, network coding, and lattice coding [2]. In the superposition coding scheme, applied in [4], the data from the two nodes are combined on the symbol level, where the relay sends the linear sum of the decoded symbols from both nodes. Shortly after the introduction of the two-way relay channel, its connection to network coding [9] was observed and investigated. The network coding schemes combine the data from the nodes on the bit level using the XOR operation; see, e.g., [10, 11, 12, 13, 14, 15]. Lattice coding uses modulo addition in a multidimensional space and utilizes nonlinear operations for combining the data. Applying lattice coding in two-way relaying systems was considered in, e.g., [16, 17, 18]. In general, as in the CF or partial DF relaying strategies, the relay node need not decode the source messages, but only needs to pass sufficient information to the destination nodes.
In this paper, we focus on the Gaussian TRC (GTRC) shown in Fig. 1, where nodes 1 and 2 want to communicate with each other at rates $R_1$ and $R_2$, respectively, while a relay node facilitates the communication between them. The TRC with a direct link between nodes 1 and 2 is studied in [19, 20, 21, 22, 23]. Here, we consider a GTRC without a direct link between nodes 1 and 2. Our model is an extension of [18] and is essentially the same as those considered in [5, 24, 17, 25, 26, 27]. Similar to [18, 17], we apply nested lattice coding. For a comprehensive review of lattices and their performance analysis, we refer the reader to [28, 29, 30]. Nested lattice codes have been shown to be capacity-achieving for the AWGN channel [31, 32], the AWGN broadcast channel [32], and the AWGN multiple access channel [1]. Song and Devroye [33], using a list decoding technique, show that lattice codes can achieve the DF rate of single-source, single-destination Gaussian relay channels with one or more relays. It is also shown that the lattice CF scheme for the Gaussian relay channel, which exploits lattice Wyner-Ziv binning, achieves the same rate as the Cover-El Gamal CF achievable rate [33]. Nazer and Gastpar, by introducing the compute-and-forward scheme, obtain significantly higher rates between users over a relay network than previous results in the literature [1]. By utilizing a deterministic approach [34] and a simple AF or a particular superposition coding strategy at the relay, [6] achieves the full-duplex GTRC capacity region within 3 bits of the cut-set bound for all values of channel gains. A general GTRC is considered in [17], where it is shown that, regardless of all channel parameters, the rate of each user is achievable within a constant gap of the cut-set bound and the sum rate is within a constant gap of the sum capacity; this was previously the best gap-to-capacity result. Communication over a symmetric GTRC, where all source and relay nodes have the same transmit powers and noise variances, is considered in [35].
It is shown there that lattice codes with lattice decoding at the relay node can achieve the capacity region at high SNR. Lim et al., using noisy network coding for the GTRC, show that the achievable rate of their scheme for each user is within a constant gap of the cut-set bound [36]. The achievable sum rate in [36] is within 1 bit of the cut-set bound.
Here, we apply nested lattice codes so that the relay obtains a linear combination of the messages. By comparing with the cut-set outer bound, we show that the achievable rate region, regardless of all channel parameters, is within 0.2654 bit of the outer bound for user 1. For user 2, the proposed scheme achieves within 0.167 bit of the outer bound if the channel coefficient is larger than one, and within 0.2658 bit of the outer bound if the channel coefficient is smaller than one. The achievable sum rate of the proposed scheme is within a constant gap of the cut-set bound. Thus, the proposed scheme outperforms [17, 6, 36], and the resulting gap is the best gap-to-capacity result to date.
The remainder of the paper is organized as follows. We present the preliminaries of lattice codes and the channel model in Section II. In Section III, we introduce two achievable coding schemes: scheme 1 is based on the compute-and-forward strategy, while scheme 2 utilizes two different lattices for the source nodes based on a three-stage lattice partition chain. In Section IV, we analyze the gap between the achievable rate region and the cut-set outer bound for each user. Numerical results and a performance comparison between the two schemes and the outer bound are presented in Section V. Section VI concludes the paper.
II Preliminaries: Lattices and Channel Model
II-A Notations and Channel Model
Throughout the paper, random variables and their realizations are denoted by capital and small letters, respectively. $x^n$ stands for a vector of length $n$, $(x_1, \ldots, x_n)$. Also, $\|\cdot\|$ denotes the Euclidean norm, and all logarithms are with respect to base 2.
In this paper, we consider a Gaussian two-way relay channel (GTRC), with two sources that exchange messages through a relay. We assume that there is no direct link between the sources and that all nodes operate in full-duplex mode. The system model is depicted in Fig. 1. The communication process takes place in two phases, the MAC phase and the BRC phase, which are described in the following:

MAC phase: In this phase, the input messages to both encoders, $w_1$ and $w_2$, are mapped at time $k$ to
$$x_{i,k} = f_{i,k}\left(w_i, y_i^{k-1}\right), \quad i = 1, 2,$$
where $f_{i,k}$ is the encoder function at node $i$ and $y_i^{k-1}$ is the set of past channel outputs at node $i$. Without loss of generality, we assume that both transmitted sequences $x_1^n$ and $x_2^n$ are average-power limited to $P_1$ and $P_2$, i.e.,
$$\frac{1}{n} \sum_{k=1}^{n} x_{i,k}^2 \leq P_i, \quad i = 1, 2. \quad (1)$$
Both nodes send their signals to the relay. The received signal at the relay at time $k$ is specified by
$$y_{r,k} = x_{1,k} + h\, x_{2,k} + z_{r,k},$$
where $x_{1,k}$ and $x_{2,k}$ are the signals transmitted from node 1 and node 2 at time $k$, respectively. $h$ denotes the channel gain between node 2 and the relay, and all other channel gains are assumed to be one. $z_{r,k}$ represents an independent identically distributed (i.i.d.) Gaussian random variable with mean zero and variance $\sigma_r^2$, which models additive white Gaussian noise (AWGN) at the relay.
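To make the MAC-phase model concrete, the following is a minimal numerical sketch, assuming (in our notation, not the paper's) unit gain on node 1's link, gain `h` on node 2's link, and relay noise variance `sigma2`; the function and parameter names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def mac_phase_output(x1, x2, h=1.5, sigma2=1.0, rng=rng):
    """Relay observation in the MAC phase: y_r = x1 + h*x2 + z_r,
    with unit gain on node 1's link and gain h on node 2's link."""
    z_r = rng.normal(0.0, np.sqrt(sigma2), size=x1.shape)
    return x1 + h * x2 + z_r

# Average-power-limited Gaussian inputs with powers P1 and P2.
n, P1, P2 = 100_000, 4.0, 2.0
x1 = rng.normal(0.0, np.sqrt(P1), n)
x2 = rng.normal(0.0, np.sqrt(P2), n)
y_r = mac_phase_output(x1, x2)

# Empirical received power is close to P1 + h^2 * P2 + sigma2.
print(np.mean(y_r**2))
```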

BRC phase: During the broadcast phase, the relay node processes the received signal and retransmits the combined signal back to both nodes, i.e., the relay communicates a signal to both nodes 1 and 2. Since the relay has no message of its own, the relay signal at time $k$, $x_{r,k}$, is a function of the past relay inputs, i.e.,
$$x_{r,k} = f_{r,k}\left(y_r^{k-1}\right),$$
where $f_{r,k}$ is an encoding function at the relay and $y_r^{k-1}$ is the sequence of past relay inputs. The power constraint at the relay is given by
$$\frac{1}{n} \sum_{k=1}^{n} x_{r,k}^2 \leq P_r.$$
The received signal at each node $i \in \{1, 2\}$ at time $k$ is given by
$$y_{i,k} = x_{r,k} + z_{i,k},$$
where $x_{r,k}$ is the transmitted signal from the relay node at time $k$ and $z_{i,k}$ represents i.i.d. AWGN with zero mean and variance $\sigma_i^2$.
Node 1, based on the received sequence $y_1^n$ and its own message $w_1$, makes an estimate of the other message, $\hat{w}_2$, as
$$\hat{w}_2 = g_1\left(y_1^n, w_1\right),$$
where $g_1$ is a decoding function for $w_2$ at node 1. Decoding at node 2 is performed in a similar way. The average probability of error is defined as the probability that at least one of the nodes decodes the other node's message incorrectly.
A rate pair $(R_1, R_2)$ of nonnegative real values is achievable if there exist encoding and decoding functions whose average probability of error tends to zero as $n \to \infty$ [37]. The capacity region of the GTRC is the convex closure of the set of achievable rate pairs $(R_1, R_2)$.
Note that the model depicted in Fig. 1 is referred to as the symmetric model if all source and relay nodes have the same transmit powers and noise variances, i.e., $P_1 = P_2 = P_r = P$ and $\sigma_1^2 = \sigma_2^2 = \sigma_r^2 = \sigma^2$.
II-B Lattice Definitions
Definition 1.
(Lattice): An $n$-dimensional lattice $\Lambda$ is a set of points in Euclidean space $\mathbb{R}^n$ such that, if $\lambda_1, \lambda_2 \in \Lambda$, then $\lambda_1 + \lambda_2 \in \Lambda$, and if $\lambda \in \Lambda$, then $-\lambda \in \Lambda$. A lattice $\Lambda$ can always be written in terms of a generator matrix $G \in \mathbb{R}^{n \times n}$ as
$$\Lambda = \left\{\lambda = G u : u \in \mathbb{Z}^n\right\},$$
where $\mathbb{Z}$ represents the integers.
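As a quick illustration of this definition (our example, not from the paper), the following sketch enumerates points of the hexagonal lattice from a generator matrix and checks the group structure within the enumerated window.

```python
import itertools
import numpy as np

# Generator matrix of the hexagonal lattice A2 (our example choice).
G = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])

# Enumerate lattice points G @ u for integer vectors u in a small window.
points = {tuple(np.round(G @ np.array(u), 9))
          for u in itertools.product(range(-3, 4), repeat=2)}

def in_lattice(p, pts=points):
    """Membership test against the enumerated window of lattice points."""
    return tuple(np.round(p, 9)) in pts

# Group structure: the lattice is closed under negation (and, within the
# enumerated window, under addition of lattice points).
lam = G @ np.array([1, 1])
assert in_lattice(-lam) and in_lattice(lam + G @ np.array([0, 1]))
```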
Definition 2.
(Quantizer): The nearest-neighbor quantizer associated with the lattice $\Lambda$ is
$$Q_\Lambda(x) = \arg\min_{\lambda \in \Lambda} \|x - \lambda\|.$$
Definition 3.
(Voronoi Region): The fundamental Voronoi region $\mathcal{V}$ of a lattice $\Lambda$ is the set of points in $\mathbb{R}^n$ closest to the zero codeword, i.e.,
$$\mathcal{V} = \left\{x \in \mathbb{R}^n : Q_\Lambda(x) = 0\right\}.$$
Definition 4.
(Moments): The second moment of a lattice $\Lambda$, denoted $\sigma^2(\Lambda)$, is given by
$$\sigma^2(\Lambda) = \frac{1}{n V} \int_{\mathcal{V}} \|x\|^2 \, dx, \quad (2)$$
and the normalized second moment of $\Lambda$ is
$$G(\Lambda) = \frac{\sigma^2(\Lambda)}{V^{2/n}},$$
where $V$ is the Voronoi region volume, i.e., $V = \operatorname{Vol}(\mathcal{V})$.
Definition 5.
(Modulo): The modulo operation with respect to lattice $\Lambda$ is defined as
$$x \bmod \Lambda = x - Q_\Lambda(x),$$
which maps $x$ into a point in the fundamental Voronoi region.
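For the scaled integer lattice $\Lambda = q\mathbb{Z}^n$ (our toy choice), the quantizer and modulo operation reduce to componentwise rounding; the sketch below also checks the distributive law of the modulo operation used later in the decoding steps.

```python
import numpy as np

def quantize(x, q=1.0):
    """Nearest-neighbor quantizer Q_Λ for the scaled integer lattice Λ = qZ^n."""
    return q * np.round(np.asarray(x, dtype=float) / q)

def mod_lattice(x, q=1.0):
    """x mod Λ = x - Q_Λ(x): maps x into the fundamental Voronoi region
    (a cube of side q centered at the origin) of Λ = qZ^n."""
    x = np.asarray(x, dtype=float)
    return x - quantize(x, q)

x = np.array([3.7, -1.2, 0.4])
print(quantize(x))      # nearest lattice points
print(mod_lattice(x))   # residuals in the Voronoi region
```

Note that `np.round` uses round-half-to-even, which only matters on the measure-zero boundary of the Voronoi region.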
Definition 6.
(Quantization Goodness or Rogers-good): A sequence of lattices $\Lambda^{(n)}$ is good for mean-squared error (MSE) quantization if
$$\lim_{n \to \infty} G\left(\Lambda^{(n)}\right) = \frac{1}{2\pi e}.$$
The sequence is indexed by the lattice dimension $n$. The existence of such lattices is shown in [39, 28].
Definition 7.
(AWGN channel coding goodness or Poltyrev-good): Let $Z$ be a length-$n$ Gaussian vector, $Z \sim \mathcal{N}(0, \sigma_Z^2 I_n)$. The volume-to-noise ratio of a lattice $\Lambda$ is given by
$$\mu(\Lambda, \epsilon) = \frac{\left(\operatorname{Vol}(\mathcal{V})\right)^{2/n}}{\sigma_Z^2},$$
where $\sigma_Z^2$ is chosen such that $\Pr\{Z \notin \mathcal{V}\} = \epsilon$, and $I_n$ is an $n \times n$ identity matrix. A sequence of lattices $\Lambda^{(n)}$ is Poltyrev-good if
$$\lim_{n \to \infty} \mu\left(\Lambda^{(n)}, \epsilon\right) = 2\pi e, \quad \forall \epsilon \in (0, 1),$$
and, for a fixed volume-to-noise ratio greater than $2\pi e$, $\Pr\{Z \notin \mathcal{V}^{(n)}\}$ decays exponentially in $n$.
Poltyrev showed that sequences of such lattices exist [40]. The existence of a sequence of lattices that are good in both senses (i.e., simultaneously Poltyrev-good and Rogers-good) has been shown in [28].
Definition 8.
(Nested Lattices): A lattice $\Lambda$ is said to be nested in lattice $\Lambda_1$ if $\Lambda \subseteq \Lambda_1$. $\Lambda$ is referred to as the coarse lattice and $\Lambda_1$ as the fine lattice.
(Nested Lattice Codes): A nested lattice code $\mathcal{C}$ is the set of all points of a fine lattice $\Lambda_1$ that lie within the fundamental Voronoi region $\mathcal{V}$ of a coarse lattice $\Lambda$,
$$\mathcal{C} = \Lambda_1 \cap \mathcal{V}.$$
The rate of a nested lattice code is
$$R = \frac{1}{n} \log |\mathcal{C}| = \frac{1}{n} \log \frac{\operatorname{Vol}(\mathcal{V})}{\operatorname{Vol}(\mathcal{V}_1)}.$$
The existence of nested lattices where the coarse lattice as well as the fine lattice are good in both senses has also been shown in [1, 41]. An interesting property of these codes is that any integer combinations of transmitted codewords are themselves codewords.
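A toy nested pair makes the rate formula tangible. The sketch below (our example: fine lattice $(1/M)\mathbb{Z}^n$ nested in coarse lattice $\mathbb{Z}^n$) enumerates the codebook and checks that the rate equals the log volume ratio.

```python
import itertools
import numpy as np

# A toy nested pair (our choice): fine lattice Λ1 = (1/M)Z^n nested in the
# coarse lattice Λ = Z^n, so Λ ⊆ Λ1.
n, M = 2, 4                  # dimension and nesting ratio per dimension
fine = 1.0 / M               # fine-lattice spacing

# Codebook C = Λ1 ∩ V(Λ): fine points inside the coarse Voronoi cell,
# taken here as the cube of side 1 centered at the origin.
codebook = [np.array(c) * fine
            for c in itertools.product(range(-M // 2, M // 2), repeat=n)]

# Rate R = (1/n) log2 |C| = (1/n) log2 (Vol(V) / Vol(V1)).
R = np.log2(len(codebook)) / n
print(len(codebook), R)
```

Here $|\mathcal{C}| = M^n$, so both expressions give $R = \log_2 M$ bits per dimension.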
In the following, we present a key property of dithered nested lattice codes.
III Nested Lattice Codes
III-A Relay Strategies
In this section, we introduce two strategies for processing and transmitting at the relay. In both schemes, the relay recovers a linear combination of the messages instead of recovering the messages separately.
III-A1 Scheme 1: Compute-and-Forward
The compute-and-forward strategy was proposed in [1]. In this scheme, the goal is to recover an integer linear combination of the codewords $t_1$ and $t_2$, i.e., we estimate
$$v = \left[a_1 t_1 + a_2 t_2\right] \bmod \Lambda, \quad (3)$$
where $a_1, a_2 \in \mathbb{Z}$. Since the transmitted sequences are from lattice codes, it is guaranteed that any integer linear combination of the codewords is a codeword. However, at the receiver, the received signal, which is a linear combination of the transmitted codewords, is no longer an integer combination, since the channel coefficients are real (or complex). Moreover, the received signal is corrupted by noise. As a solution, Nazer and Gastpar [1] propose to scale the received signal by a factor $\alpha$ such that the resulting vector is as close as possible to an integer linear combination of the transmitted codewords.
To reach this goal, for $n$ large enough, we assume that there exist three lattices $\Lambda$, $\Lambda_1$, and $\Lambda_2$ forming the chain $\Lambda \subseteq \Lambda_2 \subseteq \Lambda_1$, where $\Lambda_1$ and $\Lambda_2$ are both Poltyrev-good and Rogers-good, and $\Lambda$ is Rogers-good with second moment equal to the transmit power constraint.
We denote the Voronoi regions of $\Lambda$, $\Lambda_1$, and $\Lambda_2$ by $\mathcal{V}$, $\mathcal{V}_1$, and $\mathcal{V}_2$, respectively.
Encoding: We choose two codebooks $\mathcal{C}_1$ and $\mathcal{C}_2$ such that
$$\mathcal{C}_i = \Lambda_i \cap \mathcal{V}, \quad i = 1, 2.$$
Now, for each input node $i$, the message set is arbitrarily mapped one-to-one onto $\mathcal{C}_i$. We also define two random dither vectors $d_i$, uniformly distributed over $\mathcal{V}$, for $i = 1, 2$. The dither vectors are independent of each other, and also independent of the message of each node and of the noise. Each dither is known to both the input nodes and the relay node. To transmit a message, node $i$ chooses the codeword $t_i \in \mathcal{C}_i$ associated with the message and sends
$$x_i = \left[t_i + d_i\right] \bmod \Lambda.$$
Note that, by the crypto lemma, $x_i$ is uniformly distributed over $\mathcal{V}$ and independent of $t_i$. Thus, the average transmit power at node $i$ is equal to the second moment of $\Lambda$, so that the power constraint is satisfied.
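The crypto lemma is easy to check empirically. The following sketch (a one-dimensional toy with $\Lambda = \mathbb{Z}$, names ours) dithers a fixed codeword and verifies that the transmitted signal's mean power matches the second moment of the lattice, independently of the codeword.

```python
import numpy as np

rng = np.random.default_rng(1)

def mod_interval(x, q=1.0):
    """x mod Λ for the one-dimensional lattice Λ = qZ (Voronoi region of width q)."""
    return x - q * np.round(x / q)

# Crypto lemma, empirically: for a fixed codeword t and a dither d uniform
# over the Voronoi region, x = (t + d) mod Λ is uniform over the Voronoi
# region regardless of t.
t = 0.37                                   # arbitrary fixed codeword (scalar toy)
d = rng.uniform(-0.5, 0.5, size=200_000)   # dither, uniform over [-1/2, 1/2)
x = mod_interval(t + d)

# The empirical mean power of x matches the second moment of Z, i.e. 1/12.
print(np.mean(x**2))
```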
Decoding: Upon receiving $y_r$, the relay node computes its estimate of the integer linear combination,
where (a) follows by adding and subtracting the dither terms, and (b) follows from the distributive law of the modulo operation [29, 1], i.e., $\left[x \bmod \Lambda + y\right] \bmod \Lambda = \left[x + y\right] \bmod \Lambda$. The effective noise is given by
and
Since the dithers are independent of each other, and also independent of the noise, the crypto lemma implies that the effective noise is independent of $t_1$ and $t_2$, and thus independent of the desired combination $v$. The relay aims to recover $v$ from this observation instead of recovering $t_1$ and $t_2$ individually. Due to the lattice chain, i.e., $\Lambda \subseteq \Lambda_2 \subseteq \Lambda_1$, $v$ is a point from $\Lambda_1$. To get an estimate of $v$, this vector is quantized onto $\Lambda_1$ modulo the lattice $\Lambda$:
where $Q_{\Lambda_1}(\cdot)$ denotes the nearest-neighbor lattice quantizer associated with $\Lambda_1$. Thus, the decoding error probability at the relay vanishes as $n \to \infty$ if
(4) 
By using Lemma 8 in [1], we know that the density of the effective noise can be upper bounded by the density of a zero-mean Gaussian vector whose variance approaches
as $n \to \infty$. From Definition 7, this means that decoding succeeds so long as the volume-to-noise ratio satisfies
Therefore, for the volume of each Voronoi region, we have:
(5) 
For the volume of the fundamental Voronoi region of the coarse lattice $\Lambda$, we have:
(6) 
Now, by using (5) and (6) and the definition of the rate of a nested lattice code, we can achieve the following rate for each node:
Since $\Lambda$ is Rogers-good, $G(\Lambda) \to \frac{1}{2\pi e}$ as $n \to \infty$. Thus,
(7) 
Now, we choose $\alpha$ as the minimum mean-square error (MMSE) coefficient that minimizes the variance of the effective noise. Thus, we get
(8) 
By inserting (7) into (8), we can achieve any rate satisfying
(9)
i.e., it is possible to decode with arbitrarily low error probability if the coding rates of the nested lattice codes associated with the lattice partitions satisfy (9). In this scheme, the relay can only obtain an integer linear combination of the messages. Since we want this combination to be a codeword, the combining coefficient must be an integer. On the other hand, we aim to obtain achievable rates as high as possible. To reach this goal, we choose the integer coefficient as the closest integer to the MMSE-scaled channel gain $\alpha h$, i.e., $a = \lfloor \alpha h \rceil$.
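The rate optimization above can be sketched numerically. The following is our illustration of the standard two-user compute-and-forward computation rate for the channel $y = x_1 + h x_2 + z$ with both users at power $P$ (the function names and the particular parameter values are ours, not the paper's):

```python
import numpy as np

def computation_rate(P, h, a1, a2, sigma2=1.0):
    """Achievable computation rate (bits) for decoding a1*t1 + a2*t2 over
    y = x1 + h*x2 + z, both users at power P (a sketch in our notation)."""
    # MMSE scaling alpha minimizing the effective-noise variance
    #   sigma_eff^2(alpha) = alpha^2*sigma2 + P*((alpha - a1)^2 + (alpha*h - a2)^2).
    alpha = P * (a1 + h * a2) / (sigma2 + P * (1 + h**2))
    sigma_eff2 = alpha**2 * sigma2 + P * ((alpha - a1)**2 + (alpha * h - a2)**2)
    return max(0.0, 0.5 * np.log2(P / sigma_eff2))

# Choosing the integer coefficient for user 2 as the closest integer to the
# MMSE-scaled gain alpha*h, as in the discussion above.
P, h, sigma2 = 10.0, 1.4, 1.0
alpha_mmse = P * (1 + h) / (sigma2 + P * (1 + h**2))
a2 = int(round(alpha_mmse * h))
print(a2, computation_rate(P, h, 1, a2))
```

The MMSE coefficient is obtained by setting the derivative of the effective-noise variance to zero, which gives $\alpha = P(a_1 + h a_2) / (\sigma^2 + P(1 + h^2))$.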
III-A2 Scheme 2: Our Proposed Scheme
Let us first consider a theorem that is key to our code construction.
Theorem 1.
[42] For any $n$, a sequence of $n$-dimensional lattice partition chains exists that satisfies the following properties:

Two of the lattices in the chain are simultaneously Rogers-good and Poltyrev-good, while the remaining one is Poltyrev-good.

For any $\delta > 0$, the stated property holds for sufficiently large $n$.

The coding rate of the nested lattice code associated with the lattice partition is
where the correction term vanishes as $n \to \infty$. The coding rate of the nested lattice code associated with the other partition is given by
where the analogous correction term also vanishes as $n \to \infty$.
Proof:
The proof of the theorem is given in [42]. ∎
In the following, by applying a lattice-based coding scheme, we obtain the achievable rate region at the relay. Suppose that there exist three lattices that are Rogers-good (i.e., good for MSE quantization) and Poltyrev-good, with the following second moments,
and a lattice that is Poltyrev-good with
Encoding: To transmit both messages, we construct the following codebooks:
Then node $i$ chooses the codeword $t_i$ associated with its message and sends
where the two dithers are independent and uniformly distributed over their respective Voronoi regions. The dithers are known at the source nodes and the relay. Due to the crypto lemma, the transmitted signal $x_i$ is uniformly distributed over its Voronoi region and independent of $t_i$. Thus, the average transmit power of node $i$ is equal to the corresponding second moment, and the power constraint is met.
Decoding: At the relay node, based on the channel output, which is given by
(10)
we estimate the linear combination of the codewords. Depending on the value of the channel gain $h$, we consider two cases:
Case (I):
Based on Theorem 1, we can find two lattices such that the required nesting relation holds. With this selection of lattices, the relay node performs the following operation:
where (c) follows from the dither construction and the distributive law of the modulo operation. The effective noise is given by
and
Due to the dithers, these vectors are independent of each other, and also independent of the codewords. Therefore, the effective noise is independent of $t_1$ and $t_2$. From the crypto lemma, it follows that it is uniformly distributed over the corresponding Voronoi region and independent of the desired combination [42].
The problem of finding the optimum value of $\alpha$ as the lattice dimension goes to infinity reduces to finding the value of $\alpha$ that minimizes the effective noise variance. Hence, by minimizing the variance of the effective noise, we obtain
(12) 
The relay attempts to recover the linear combination from its observation instead of recovering $t_1$ and $t_2$ individually. The decoding method is minimum Euclidean distance lattice decoding [31, 43, 40], which finds the lattice point closest to the observation. Thus, the estimate of the linear combination is given by
Then, for this type of decoding, the probability of decoding error is given by
Now, we have the following theorem, which bounds the error probability.
Theorem 2.
For the described lattice partition chain and any rate satisfying
the error probability under minimum Euclidean distance lattice decoding is bounded by
where $E_p(\mu)$ is the Poltyrev exponent, which is given by [40]
(13)
and $\mu$ denotes the volume-to-noise ratio.
Proof:
The proof of the theorem is similar to the proof of Theorem 3 in [42] and is omitted here. ∎
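For reference, the Poltyrev exponent of [40] has a standard piecewise form, sketched below in nats in the normalization where the exponent is positive exactly for volume-to-noise ratios $\mu > 1$ (this presentation is ours).

```python
import math

def poltyrev_exponent(mu):
    """Poltyrev exponent E_p(mu) in nats (standard piecewise form)."""
    if mu <= 1:
        return 0.0                               # no positive exponent
    if mu <= 2:
        return 0.5 * ((mu - 1) - math.log(mu))   # 1 < mu <= 2
    if mu <= 4:
        return 0.5 * math.log(math.e * mu / 4)   # 2 <= mu <= 4
    return mu / 8                                # mu >= 4

# The exponent is continuous and strictly positive for mu > 1, which is why
# the error probability decays exponentially in n in that regime.
print(poltyrev_exponent(1.5), poltyrev_exponent(3.0), poltyrev_exponent(8.0))
```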
Since the Poltyrev exponent is positive for volume-to-noise ratios greater than one, the error probability vanishes as $n \to \infty$. Thus, by Theorem 1 and Theorem 2, the error probability at the relay node vanishes if
(14)  
(15) 
Clearly, using a time-sharing argument, the following rates can be achieved:
where u.c.e. denotes the upper convex envelope with respect to the time-sharing parameter.
At low SNR, pure (infinite-dimensional) lattice strategies cannot achieve any positive rate for one of the users, as shown in Fig. 2. Hence, time sharing is required between the zero-rate point and the point that solves the following equation:
We also numerically evaluate the achievable rates of the lattice strategies for different values of the channel gain. As we observe, with increasing gain, the achievable rate of the lattice scheme decreases. As shown in Fig. 2, the maximum difference between the two extreme cases is 0.1218 bit.
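The time-sharing step above amounts to taking the upper convex envelope of the pure-strategy rate points. A minimal numerical sketch (our implementation, a standard monotone-chain upper hull):

```python
def upper_convex_envelope(x, y):
    """Vertices of the upper convex envelope of the points (x_i, y_i): the
    rate points achievable by time sharing between pure strategies."""
    pts = sorted(zip(x, y))
    hull = []
    for p in pts:
        # Pop the last hull point while it lies on or below the chord from
        # the hull point before it to the new point p (upper-hull test).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Time sharing between the zero-rate point and a good operating point
# dominates a pure strategy whose rate dips below the chord at low SNR.
pts = upper_convex_envelope([0.0, 0.5, 1.0], [0.0, 0.1, 1.0])
print(pts)
```

Here the middle point lies below the chord between its neighbors, so the envelope keeps only the two endpoints: time sharing between them beats the pure strategy at the middle operating point.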
Case (II):
By using Theorem 1, we can choose two lattices such that the required nesting relation holds. The relay computes its estimate from the channel output. The equivalent channel is given by
where (d) follows from the dither construction and the distributive law of the modulo operation. The effective noise is given by
and
Due to the dithers, these vectors are independent of each other, and also independent of the codewords. Therefore, the effective noise is independent of $t_1$ and $t_2$. From the crypto lemma, it follows that it is uniformly distributed over the corresponding Voronoi region and independent of the desired combination [42]. In order to achieve the maximal rate, the optimal MMSE factor is used, i.e.,
(17) 
Similar to Case (I), instead of recovering $t_1$ and $t_2$ separately, the relay recovers their linear combination. Again, the decoding method is minimum Euclidean distance lattice decoding, which finds the lattice point closest to the observation. Thus, the estimate is given by
Theorem 3.
For the described lattice partition chain, if
the error probability under minimum Euclidean distance lattice decoding is bounded by
where $E_p(\mu)$ is the Poltyrev exponent given in (13).
Proof:
The proof of the theorem is similar to the proof of Theorem 3 in [42] and is omitted here. ∎
Here also, the error probability vanishes as $n \to \infty$, since the Poltyrev exponent is positive in this regime. Thus, by Theorem 1 and Theorem 3, the error probability at the relay vanishes if
Clearly, using a time-sharing argument, the following rates can be achieved:
(18)  
(19) 
where u.c.e. denotes the upper convex envelope with respect to the time-sharing parameter.
In this regime, pure (infinite-dimensional) lattice strategies cannot achieve any positive rate for one of the users, as shown in Fig. 3. Hence, time sharing is required between the zero-rate point and the point that solves the following equation:
We also numerically evaluate the achievable rate of the lattice strategy for different values of the channel gain. As we see, with decreasing gain, the achievable rate of the lattice scheme decreases.
III-B Broadcast Phase
We assume that the relay recovers the linear combination of both messages correctly, i.e., there is no error in the MAC phase. The relay attempts to broadcast a message such that each node can recover the other node's message based on both the received signal from the relay node and the side information available at that node, i.e., its own message. For the decoding at node 1 and node 2, we can use jointly typical decoding or a lattice-based scheme; here, we apply jointly typical decoding. We consider scheme 2; decoding for scheme 1 is similar. We generate i.i.d. Gaussian codeword sequences, which form the relay codebook, and we assume a one-to-one correspondence between the possible linear combinations at the relay and the codewords.
Let us denote the relay codeword by $x_r^n$. Based on its received sequence, node 2 declares the relay message estimate if a unique codeword exists that is jointly typical with the received sequence, where
Now, by using the knowledge of the decoded relay message and its own message, node 2 estimates the message of node 1 as:
From the random coding argument and jointly typical decoding [37], we get
(20) 
Similarly, at node 1, we get
(21) 
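The constraints above have the standard shape of an end-to-end rate limited by both the MAC-phase rate and the relay's downlink AWGN rate. The following sketch (function names and parameter values are ours) illustrates that shape; each node knows its own message, so the relay's broadcast only needs to carry the combined message.

```python
import math

def broadcast_rate(Pr, sigma2=1.0):
    """Point-to-point AWGN rate 0.5*log2(1 + Pr/sigma2) of the relay's
    downlink to a node with noise variance sigma2 (our sketch)."""
    return 0.5 * math.log2(1 + Pr / sigma2)

def achievable_rate(mac_rate, Pr, sigma2=1.0):
    """End-to-end rate: the minimum of the MAC-phase rate at the relay and
    the BRC-phase broadcast rate to the destination node."""
    return min(mac_rate, broadcast_rate(Pr, sigma2))

# Example: the downlink is the bottleneck for the first operating point,
# the uplink for the second.
print(achievable_rate(1.8, Pr=10.0))
print(achievable_rate(1.0, Pr=10.0))
```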
Now, we summarize our results for both schemes in the following two theorems:
Theorem 4.
For the Gaussian two-way relay channel, the following rate region is achievable:
(22)  
(23) 
where $a$ is the closest integer to the MMSE-scaled channel gain $\alpha h$, i.e., $a = \lfloor \alpha h \rceil$.
Proof:
Theorem 5.
For the Gaussian two-way relay channel, the following rate region is achievable:
(24)  
(25) 
IV Outer Bound
By using the cut-set bound, an outer bound for the TRC can be derived. If a rate pair $(R_1, R_2)$ is achievable for a general TRC, then
(26)  