# Nested Lattice Codes for Gaussian Two-Way Relay Channels

\authorblockAShahab Ghasemi-Goojani and Hamid Behroozi
Information Systems and Security Lab (ISSL)
Department of Electrical Engineering
Sharif University of Technology
Tehran, Iran
Email: shahab_ghasemi@ee.sharif.edu, behroozi@sharif.edu
###### Abstract

In this paper, we consider a Gaussian two-way relay channel (GTRC), where two sources exchange messages with each other through a relay. We assume that there is no direct link between the sources, and all nodes operate in full-duplex mode. By utilizing nested lattice codes for the uplink (i.e., the MAC phase) and structured binning for the downlink (i.e., the broadcast phase), we propose two achievable schemes. Scheme 1 is based on the “compute-and-forward” scheme of [1], while scheme 2 utilizes two different lattices for the source nodes based on a three-stage lattice partition chain. We show that scheme 2 can achieve the capacity region at high signal-to-noise ratio (SNR). Regardless of all channel parameters, the achievable rate of scheme 2 is within 0.2654 bit of the cut-set outer bound for user 1. For user 2, the proposed scheme achieves within 0.167 bit of the outer bound if the channel coefficient is larger than one, and within 0.2658 bit of the outer bound if the channel coefficient is smaller than one. Moreover, the sum rate of the proposed scheme is within bits of the sum capacity. These gaps for the GTRC are the best gap-to-capacity results to date.

## I Introduction

In this paper, we study a two-way relay channel, where two nodes exchange their messages with each other via a relay. This can be considered as, e.g., two mobile users communicating with each other via the access point in a WLAN [2], or a satellite that enables data exchange between several earth stations when there is no direct link between the stations.

Two-way or bi-directional communication between two nodes without a relay was first proposed and studied by Shannon in [3]. In this setup, two nodes want to exchange messages with each other, and act as transmitters and receivers at the same time. For this setup, the capacity region is not known in general and only inner and outer bounds on the capacity region are obtained in the literature.

The two-way relay channel, also known as the bi-directional relay channel, consists of two nodes communicating with each other in a bi-directional manner via a relay. This setup was first introduced in [4] and later studied in [4, 5, 6], where an approximate characterization of the capacity region of the Gaussian case was derived.

The traditional relaying protocols require four channel uses to exchange the data of two nodes whereas the two-way relaying protocol in [4] only needs two phases to achieve bidirectional communication between the two nodes. The first phase is referred to as the multiple access (MAC) phase, and the second phase is referred to as the broadcast (BRC) phase. In the MAC phase, both nodes transmit their messages to the relay node which decodes them. In the BRC phase, the relay combines the data from both nodes and broadcasts the combined data back to both nodes. For this phase, there exist several strategies for the processing at the relay node, e.g., an amplify-and-forward (AF) strategy [4], a decode-and-forward (DF) strategy [4, 7], or a compress-and-forward (CF) strategy [8].

The AF protocol is a simple scheme which amplifies the signal transmitted from both nodes and retransmits it to them; unlike the DF protocol, no decoding at the relay is performed. In the two-way AF relaying strategy, the signals at the relay are combined on the symbol level. Due to the amplification of noise, its performance degrades at low signal-to-noise ratios (SNR). The two-way DF relaying strategy was proposed in [4], where the relay decodes the received data bits from both nodes. Since the decoded data at the relay can be combined on the symbol level or on the bit level, different data combining schemes at the relay have been proposed for the two-way DF relaying strategy: superposition coding, network coding, and lattice coding [2]. In the superposition coding scheme, applied in [4], the data from the two nodes are combined on the symbol level, where the relay sends the linear sum of the decoded symbols from both nodes. Shortly after the introduction of the two-way relay channel, its connection to network coding [9] was observed and investigated. The network coding schemes combine the data from the nodes on the bit level using the XOR operation, see e.g., [10, 11, 12, 13, 14, 15]. Lattice coding uses modulo addition in a multi-dimensional space and utilizes nonlinear operations for combining the data. Applying lattice coding in two-way relaying systems was considered in, e.g., [16, 17, 18]. In general, as in CF or partial DF relaying strategies, the relay node need not decode the source messages; it only needs to pass sufficient information to the destination nodes.
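The bit-level combining idea can be illustrated with a short sketch (the function names are ours, for illustration only): the relay broadcasts the XOR of the two decoded messages, and each node cancels its own message to recover the other's.

```python
def relay_combine(w1: int, w2: int) -> int:
    """Relay decodes both messages and combines them on the bit level (XOR)."""
    return w1 ^ w2

def node_recover(broadcast: int, own_message: int) -> int:
    """Each node XORs its own message out of the broadcast."""
    return broadcast ^ own_message

# node 1 sent w1, node 2 sent w2; the relay broadcasts w1 XOR w2
w1, w2 = 0b1011, 0b0110
b = relay_combine(w1, w2)
assert node_recover(b, w1) == w2  # node 1 recovers node 2's message
assert node_recover(b, w2) == w1  # node 2 recovers node 1's message
```

The same broadcast thus serves both directions, which is why the BRC phase needs only one channel use per exchanged symbol.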

In this paper, we focus on the Gaussian TRC (GTRC) as shown in Fig. 1, where nodes 1 and 2 want to communicate with each other at rates $R_1$ and $R_2$, respectively, while a relay node facilitates the communication between them. The TRC with a direct link between nodes 1 and 2 is studied in [19, 20, 21, 22, 23]. Here, we consider a GTRC without a direct link between nodes 1 and 2. Our model is an extension of [18] and is essentially the same as those considered in [5, 24, 17, 25, 26, 27]. Similar to [18, 17], we apply nested lattice coding. For a comprehensive review on lattices and their performance analysis, we refer the reader to [28, 29, 30]. Nested lattice codes have been shown to be capacity achieving for the AWGN channel [31, 32], the AWGN broadcast channel [32], and the AWGN multiple access channel [1]. Song and Devroye [33], using a list decoding technique, show that lattice codes can achieve the DF rate of single-source, single-destination Gaussian relay channels with one or more relays. It is also shown that the lattice CF scheme, which exploits lattice Wyner-Ziv binning, achieves the same rate for the Gaussian relay channel as the Cover-El Gamal CF achievable rate [33]. Nazer and Gastpar, by introducing the compute-and-forward scheme, obtain significantly higher rates between users over a relay network than previous results in the literature [1]. By utilizing a deterministic approach [34] and a simple AF or a particular superposition coding strategy at the relay, [6] achieves the full-duplex GTRC capacity region within 3 bits from the cut-set bound for all values of channel gains. A general GTRC is considered in [17], where it is shown that, regardless of all channel parameters, the capacity region for each user is achievable within bit and the sum rate is within bits from the sum capacity, which was the best gap-to-capacity result. Communication over a symmetric GTRC, where all source and relay nodes have the same transmit powers and noise variances, is considered in [35].
It is shown that lattice codes and lattice decoding at the relay node can achieve the capacity region at high SNR. Lim et al., using noisy network coding for the GTRC, show that the achievable rate of their scheme for each user is within bit from the cut-set bound [36]. The achievable sum rate in [36] is within 1 bit from the cut-set bound.

Here, we apply nested lattice codes at the relay to obtain a linear combination of the messages. By comparing with the cut-set outer bound, we show that the achievable rate region, regardless of all channel parameters, is within 0.2654 bit of the outer bound for user 1. For user 2, the proposed scheme achieves within 0.167 bit of the outer bound if the channel coefficient is larger than one, and within 0.2658 bit of the outer bound if the channel coefficient is smaller than one. The achievable sum rate of the proposed scheme is within bit of the cut-set bound. Thus, the proposed scheme outperforms [17, 6, 36], and the resulting gap is the best gap-to-capacity result to date.

The remainder of the paper is organized as follows. We present the preliminaries of lattice codes and the channel model in Section II. In Section III, we introduce two achievable coding schemes: scheme 1 is based on the compute-and-forward strategy, while scheme 2 utilizes two different lattices for the source nodes based on a three-stage lattice partition chain. In Section IV, we analyze the gap between the achievable rate region and the cut-set outer bound for each user. Numerical results and a performance comparison between the two schemes and the outer bound are presented in Section V. Section VI concludes the paper.

## II Preliminaries: Lattices and Channel Model

### II-A Notations and Channel Model

Throughout the paper, random variables and their realizations are denoted by capital and small letters, respectively. $X^n$ stands for a vector of length $n$, $\left(X^{(1)},\ldots,X^{(n)}\right)$. Also, $\|\cdot\|$ denotes the Euclidean norm, and all logarithms are with respect to base 2.

In this paper, we consider a Gaussian two-way relay channel (GTRC), with two sources that exchange messages through a relay. We assume that there is no direct link between the sources and all nodes operate in full-duplex mode. The system model is depicted in Fig. 1. The communication process takes place in two phases, the MAC phase and the BRC phase, which are described in the following:

• MAC phase: In this phase, the input messages to the two encoders, $W_1$ and $W_2$, are first mapped to

$$X_i^{(t)} = f_i\left(W_i, Y_i^{t-1}\right), \quad \text{for } i = 1, 2,$$

where $f_i$ is the encoder function at node $i$ and $Y_i^{t-1}$ is the set of past channel outputs at node $i$. Without loss of generality, we assume that both transmitted sequences $X_1^n$ and $X_2^n$ are average-power limited to $P$, i.e.,

$$\frac{1}{n}\sum_{t=1}^{n} E\left[\left|X_i^{(t)}\right|^2\right] \le P, \quad \text{for } i = 1, 2. \tag{1}$$

Both nodes send their signals to the relay. The received signal at the relay at time $t$ is specified by

$$Y_R^{(t)} = X_1^{(t)} + \sqrt{g}\,X_2^{(t)} + Z_R^{(t)},$$

where $X_1^{(t)}$ and $X_2^{(t)}$ are the signals transmitted from node 1 and node 2 at time $t$, respectively. $g$ denotes the channel gain between node 2 and the relay, and all other channel gains are assumed to be one. $Z_R^{(t)}$ represents an independent identically distributed (i.i.d.) Gaussian random variable with mean zero and variance $N_R$, which models the additive white Gaussian noise (AWGN) at the relay.

• BRC phase: During the broadcast phase, the relay node processes the received signal and retransmits the combined signals back to both nodes, i.e., the relay communicates a signal $X_R^{(t)}$ to both nodes 1 and 2. Since the relay has no message of its own, the relay signal at time $t$ is a function of the past relay inputs, i.e.,

$$X_R^{(t)} = f_R\left(Y_R^{t-1}\right),$$

where $f_R$ is an encoding function at the relay and $Y_R^{t-1}$ is the sequence of past relay inputs. The power constraint at the relay is given by

$$\frac{1}{n}\sum_{t=1}^{n} E\left[\left|X_R^{(t)}\right|^2\right] \le P_R.$$

The received signal at each node at time $t$ is given by

$$Y_i^{(t)} = X_R^{(t)} + Z_i^{(t)}, \quad \text{for } i = 1, 2,$$

where $X_R^{(t)}$ is the transmitted signal from the relay node at time $t$ and $Z_i^{(t)}$ represents an i.i.d. AWGN with zero mean and variance $N_i$.

Node 1, based on the received sequence $Y_1^n$ and its own message $W_1$, makes an estimate of the other message, $\hat{W}_2$, as

$$\hat{W}_2 = \psi_1\left(Y_1^n, W_1\right),$$

where $\psi_1$ is the decoding function for $W_2$ at node 1. Decoding at node 2 is performed in a similar way. The average probability of error is defined as

$$P_e = \Pr\left\{\hat{W}_1 \neq W_1 \ \text{or} \ \hat{W}_2 \neq W_2\right\}.$$

A rate pair $(R_1, R_2)$ of non-negative real values is achievable if there exist encoding and decoding functions with $P_e \to 0$ as $n \to \infty$ [37]. The capacity region of the GTRC is the convex closure of the set of achievable rate pairs $(R_1, R_2)$.

Note that the model depicted in Fig. 1 is referred to as the symmetric model if all source and relay nodes have the same transmit powers and noise variances, i.e., $P = P_R$ and $N_1 = N_2 = N_R$.

### Ii-B Lattice Definitions

Here, we provide some necessary definitions on lattices and nested lattice codes [31, 1, 38].

###### Definition 1.

(Lattice): An $n$-dimensional lattice $\Lambda$ is a set of points in Euclidean space $\mathbb{R}^n$ such that, if $x, y \in \Lambda$, then $x + y \in \Lambda$, and if $x \in \Lambda$, then $-x \in \Lambda$. A lattice can always be written in terms of a generator matrix $G \in \mathbb{R}^{n \times n}$ as

$$\Lambda = \left\{x = zG : z \in \mathbb{Z}^n\right\},$$

where $\mathbb{Z}$ represents the integers.

###### Definition 2.

(Quantizer): The nearest neighbor quantizer $Q_\Lambda(\cdot)$ associated with the lattice $\Lambda$ is

$$Q_\Lambda(x) = \arg\min_{l \in \Lambda} \|x - l\|.$$
###### Definition 3.

(Voronoi Region): The fundamental Voronoi region of a lattice $\Lambda$ is the set of points in $\mathbb{R}^n$ closest to the zero codeword, i.e.,

$$\mathcal{V}_0(\Lambda) = \left\{x \in \mathbb{R}^n : Q_\Lambda(x) = 0\right\}.$$
###### Definition 4.

(Moments): $\sigma^2(\Lambda)$, which is called the second moment of the lattice $\Lambda$, is given by

$$\sigma^2(\Lambda) = \frac{1}{n} \cdot \frac{\int_{\mathcal{V}(\Lambda)} \|x\|^2 \, dx}{\int_{\mathcal{V}(\Lambda)} dx}, \tag{2}$$

and the normalized second moment of the lattice is

$$G(\Lambda) = \frac{\sigma^2(\Lambda)}{\left[\int_{\mathcal{V}(\Lambda)} dx\right]^{2/n}} = \frac{\sigma^2(\Lambda)}{V^{2/n}},$$

where $V$ is the Voronoi region volume, i.e., $V = \int_{\mathcal{V}(\Lambda)} dx$.

###### Definition 5.

(Modulus): The modulo-$\Lambda$ operation with respect to the lattice $\Lambda$ is defined as

$$x \bmod \Lambda = x - Q_\Lambda(x),$$

which maps $x$ into a point in the fundamental Voronoi region.
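Definitions 2 and 5 can be made concrete for the scaled integer lattice $c\,\mathbb{Z}^n$, where the nearest-neighbor quantizer reduces to coordinate-wise rounding. This is only an illustrative special case of ours (a general lattice requires a closest-point search over its generator matrix):

```python
import numpy as np

def quantize(x: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Q_Lambda(x) for Lambda = c*Z^n: coordinate-wise rounding to the nearest lattice point."""
    return c * np.round(x / c)

def mod_lattice(x: np.ndarray, c: float = 1.0) -> np.ndarray:
    """x mod Lambda = x - Q_Lambda(x): the result lies in the fundamental Voronoi region."""
    return x - quantize(x, c)

x = np.array([2.7, -1.2, 0.4])
r = mod_lattice(x)
assert np.all(np.abs(r) <= 0.5)         # inside the Voronoi region of Z^3 (a cube of side 1)
assert np.allclose(x, quantize(x) + r)  # decomposition x = Q_Lambda(x) + (x mod Lambda)
```

The decomposition in the last assertion is exactly the identity behind Definition 5: every point splits into a lattice point plus a Voronoi-region remainder.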

###### Definition 6.

(Quantization Goodness or Rogers-good): A sequence of lattices $\Lambda^{(n)}$ is good for mean-squared error (MSE) quantization if

$$\lim_{n\to\infty} G\left(\Lambda^{(n)}\right) = \frac{1}{2\pi e}.$$

The sequence is indexed by the lattice dimension $n$. The existence of such lattices is shown in [39, 28].
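As a numerical point of reference (our own illustration, not from the paper): for the cubic lattice $\mathbb{Z}^n$ the Voronoi region is the unit cube, so $G(\mathbb{Z}^n) = 1/12$ in every dimension, which stays strictly above the limit $\frac{1}{2\pi e} \approx 0.0585$; only carefully constructed lattice sequences approach the limit.

```python
import math

# Normalized second moment of the cubic lattice Z^n: the Voronoi region
# is the unit cube, so sigma^2 = 1/12 and V = 1, giving G(Z^n) = 1/12
# regardless of the dimension n.
G_cubic = 1.0 / 12.0

# The quantization-goodness limit from Definition 6.
G_limit = 1.0 / (2.0 * math.pi * math.e)

# The cube never reaches the limit, i.e., Z^n is not Rogers-good.
assert G_cubic > G_limit
```

The gap between $1/12 \approx 0.0833$ and $1/(2\pi e)$ is the shaping loss that good lattice quantizers recover as $n$ grows.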

###### Definition 7.

(AWGN channel coding goodness or Poltyrev-good): Let $Z$ be a length-$n$ Gaussian vector, $Z \sim \mathcal{N}\left(0, \sigma_Z^2 I_n\right)$, where $I_n$ is an $n \times n$ identity matrix. The volume-to-noise ratio of a lattice $\Lambda$ is given by

$$\mu(\Lambda, \epsilon) = \frac{\left(\mathrm{Vol}(\mathcal{V})\right)^{2/n}}{\sigma_Z^2},$$

where $\sigma_Z^2$ is chosen such that $\Pr\{Z \notin \mathcal{V}\} = \epsilon$. A sequence of lattices $\Lambda^{(n)}$ is Poltyrev-good if

$$\lim_{n\to\infty} \mu\left(\Lambda^{(n)}, \epsilon\right) = 2\pi e, \quad \forall \epsilon \in (0,1),$$

and, for a fixed volume-to-noise ratio greater than $2\pi e$, $\Pr\{Z \notin \mathcal{V}\}$ decays exponentially in $n$.

Poltyrev showed that sequences of such lattices exist [40]. The existence of a sequence of lattices which are good in both senses (i.e., simultaneously are Poltyrev-good and Rogers-good) has been shown in [28].

###### Definition 8.

(Nested Lattices): A lattice $\Lambda$ is said to be nested in a lattice $\Lambda_1$ if $\Lambda \subseteq \Lambda_1$. $\Lambda$ is referred to as the coarse lattice and $\Lambda_1$ as the fine lattice.

(Nested Lattice Codes): A nested lattice code is the set of all points of a fine lattice $\Lambda_1$ that are within the fundamental Voronoi region $\mathcal{V}$ of a coarse lattice $\Lambda$,

$$\mathcal{C} = \left\{\Lambda_1 \cap \mathcal{V}\right\}.$$

The rate of a nested lattice code is

$$R = \frac{1}{n}\log|\mathcal{C}| = \frac{1}{n}\log\frac{\mathrm{Vol}(\mathcal{V})}{\mathrm{Vol}(\mathcal{V}_1)}.$$

The existence of nested lattices where the coarse lattice as well as the fine lattice are good in both senses has also been shown in [1, 41]. An interesting property of these codes is that any integer combinations of transmitted codewords are themselves codewords.

In the following, we present a key property of dithered nested lattice codes.

###### Lemma 1.

(The Crypto Lemma [30, 31]): Let $V$ be a random vector with an arbitrary distribution over $\mathbb{R}^n$. If $D$ is independent of $V$ and uniformly distributed over $\mathcal{V}$, then $X = [V + D] \bmod \Lambda$ is also independent of $V$ and uniformly distributed over $\mathcal{V}$.

###### Proof:

See Lemma 2 in [30].
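A Monte Carlo sketch of the lemma for the one-dimensional lattice $\mathbb{Z}$ (our own illustration): whatever the value of the "codeword" $v$, the dithered signal $[v + D] \bmod \mathbb{Z}$ has the statistics of a uniform random variable on the Voronoi region $[-1/2, 1/2)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mod_Z(x: np.ndarray) -> np.ndarray:
    """x mod Z: map each value into the Voronoi region [-1/2, 1/2]."""
    return x - np.round(x)

# D is uniform over the Voronoi region and independent of the codeword.
D = rng.uniform(-0.5, 0.5, size=200_000)

for v in (0.0, 0.37, 12.9):              # arbitrary codeword values
    X = mod_Z(v + D)
    assert abs(X.mean()) < 0.01          # uniform on [-1/2, 1/2): mean is approx. 0
    assert abs(X.var() - 1 / 12) < 0.005 # variance of that uniform is 1/12
```

The empirical mean and variance do not depend on $v$, which is the independence half of the lemma in action.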

## III Nested Lattice Codes

### III-A Relay Strategies

In this section, we introduce two strategies for processing and transmitting at the relay. In both schemes, the relay recovers a linear combination of the messages instead of recovering the messages separately.

#### III-A1 Scheme 1: Compute-and-Forward

The compute-and-forward strategy was proposed in [1]. In this scheme, the goal is to recover an integer linear combination of the codewords $V_1$ and $V_2$, i.e., we estimate

$$V_1 + aV_2, \tag{3}$$

where $a \in \mathbb{Z}$. Since the transmitted sequences are from lattice codes, it is guaranteed that any integer linear combination of the codewords is itself a codeword. However, the received signal, which is a linear combination of the transmitted codewords, is no longer an integer combination, since the channel coefficients are real (or complex). Also, the received signal is corrupted by noise. As a solution, Nazer and Gastpar [1] propose to scale the received signal by a factor $\alpha$ such that the obtained vector is as close as possible to an integer linear combination of the transmitted codewords.

To reach this goal, for $n$ large enough, we assume that there exist three lattices $\Lambda$, $\Lambda_1$ and $\Lambda_2$ such that $\Lambda \subseteq \Lambda_1$ and $\Lambda \subseteq \Lambda_2$. $\Lambda_1$ and $\Lambda_2$ are both Poltyrev-good and Rogers-good, and $\Lambda$ is Rogers-good with second moment

$$\sigma^2(\Lambda) = P.$$

We denote the Voronoi regions of $\Lambda$, $\Lambda_1$ and $\Lambda_2$ by $\mathcal{V}$, $\mathcal{V}_1$ and $\mathcal{V}_2$, respectively.

Encoding: We choose two codebooks $\mathcal{C}_1$ and $\mathcal{C}_2$ such that

$$\mathcal{C}_1 = \{\Lambda_1 \cap \mathcal{V}\}, \qquad \mathcal{C}_2 = \{\Lambda_2 \cap \mathcal{V}\}.$$

Now, for each input node $i$, the message set is arbitrarily one-to-one mapped onto $\mathcal{C}_i$. We also define two random dither vectors $D_i$, uniformly distributed over $\mathcal{V}$, for $i = 1, 2$. The dither vectors are independent of each other and also independent of the message of each node and of the noise. Each dither is known to both input nodes and to the relay node. To transmit a message, node $i$ chooses the codeword $V_i$ associated with the message and sends

$$X_i = [V_i - D_i] \bmod \Lambda.$$

Note that, by the crypto lemma, $X_i$ is uniformly distributed over $\mathcal{V}$ and independent of $V_i$. Thus, the average transmit power at node $i$ is equal to $\sigma^2(\Lambda) = P$, so that the power constraint is satisfied.

Decoding: Upon receiving $Y_R$, the relay node computes

$$\begin{aligned}
Y_R^d &= [\alpha Y_R + D_1 + aD_2] \bmod \Lambda \\
&= [\alpha X_1 + \alpha\sqrt{g}X_2 + \alpha Z_R + D_1 + aD_2] \bmod \Lambda \\
&\overset{(a)}{=} [V_1 + aV_2 + \alpha X_1 - (V_1 - D_1) + \alpha\sqrt{g}X_2 - a(V_2 - D_2) + \alpha Z_R] \bmod \Lambda \\
&\overset{(b)}{=} [V_1 + aV_2 + (\alpha - 1)X_1 + (\alpha\sqrt{g} - a)X_2 + \alpha Z_R] \bmod \Lambda \\
&= [T + Z_{\mathrm{eff}}] \bmod \Lambda,
\end{aligned}$$

where (a) follows by adding and subtracting the term $V_1 + aV_2$, and (b) follows from the distributive law of the modulo-$\Lambda$ operation [29, 1], i.e.,

$$\left[[X \bmod \Lambda] + Y\right] \bmod \Lambda = [X + Y] \bmod \Lambda.$$

The effective noise is given by

$$Z_{\mathrm{eff}} = (\alpha - 1)X_1 + (\alpha\sqrt{g} - a)X_2 + \alpha Z_R,$$

and

$$T = [V_1 + aV_2] \bmod \Lambda.$$

Since $X_1$ and $X_2$ are independent of each other and of $Z_R$, by the crypto lemma $Z_{\mathrm{eff}}$ is independent of $V_1$ and $V_2$, and thus independent of $T$. The relay aims to recover $T$ from $Y_R^d$ instead of recovering $V_1$ and $V_2$ individually. Due to the lattice chain, i.e., $\Lambda \subseteq \Lambda_1, \Lambda_2$, $T$ is a point from the fine lattice. To get an estimate of $T$, this vector is quantized onto $\Lambda_2$ modulo the lattice $\Lambda$:

$$\hat{T} = \left[Q_{\Lambda_2}\left(Y_R^d\right)\right] \bmod \Lambda = \left[Q_{\Lambda_2}\left(T + Z_{\mathrm{eff}}\right)\right] \bmod \Lambda,$$

where $Q_{\Lambda_2}(\cdot)$ denotes the nearest neighbor lattice quantizer associated with $\Lambda_2$. Thus, the decoding error probability at the relay vanishes as $n \to \infty$ if

$$\Pr\left(Z_{\mathrm{eff}} \notin \mathcal{V}_2\right) \to 0. \tag{4}$$

By using Lemma 8 in [1], we know that the density of $Z_{\mathrm{eff}}$ can be upper bounded by the density of an i.i.d. zero-mean Gaussian vector whose variance approaches

$$N_{\mathrm{eq}} = \left(\left(\alpha\sqrt{g} - a\right)^2 + (\alpha - 1)^2\right)P + \alpha^2 N_R$$

as $n \to \infty$. From Definition 7, the error probability vanishes so long as the volume-to-noise ratio with respect to this equivalent noise exceeds $2\pi e$. Therefore, for the volume of each Voronoi region, we have:

$$\mathrm{Vol}(\mathcal{V}_i) > \left(2\pi e N_{\mathrm{eq}}\right)^{n/2}, \quad i = 1, 2. \tag{5}$$

For the volume of the fundamental Voronoi region of $\Lambda$, we have:

$$\mathrm{Vol}(\mathcal{V}) = \left(\frac{P}{G(\Lambda)}\right)^{n/2}. \tag{6}$$

Now, by using (5) and (6) and the definition of the rate of a nested lattice code, we can achieve the following rate for each node:

$$R_i < \frac{1}{2}\log\left(\frac{P}{G(\Lambda)\, 2\pi e\, N_{\mathrm{eq}}}\right), \quad i = 1, 2.$$

Since $\Lambda$ is Rogers-good, $G(\Lambda) \to \frac{1}{2\pi e}$ as $n \to \infty$. Thus,

$$R_i < \frac{1}{2}\log\left(\frac{P}{\left(\left(\alpha\sqrt{g} - a\right)^2 + (\alpha - 1)^2\right)P + \alpha^2 N_R}\right), \quad i = 1, 2. \tag{7}$$

Now, we choose $\alpha$ as the minimum mean-square error (MMSE) coefficient that minimizes the variance of the effective noise $Z_{\mathrm{eff}}$. Thus, we get

$$\alpha_{\mathrm{MMSE}} = \frac{\left(a\sqrt{g} + 1\right)P}{(g+1)P + N_R}. \tag{8}$$

By inserting (8) into (7), we can achieve any rate satisfying

$$R_i < \frac{1}{2}\log\left(\frac{P}{\left(\left(\alpha_{\mathrm{MMSE}}\sqrt{g} - a\right)^2 + \left(\alpha_{\mathrm{MMSE}} - 1\right)^2\right)P + \alpha_{\mathrm{MMSE}}^2 N_R}\right). \tag{9}$$

That is, it is possible to decode with arbitrarily low error probability if the coding rates of the nested lattice codes associated with the lattice partitions satisfy (9). In this scheme, we only need to obtain an integer linear combination of the messages. Since we want an estimate of $V_1 + aV_2$, $a$ should be an integer. On the other hand, we aim for achievable rates as high as possible. To reach this goal, we choose $a$ as the closest integer to $\alpha_{\mathrm{MMSE}}\sqrt{g}$, i.e., $a = \lfloor \alpha_{\mathrm{MMSE}}\sqrt{g} \rceil$.
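As a numeric sanity check (our own sketch, not from the paper), the scheme-1 rate (9) can be evaluated by searching over small integer coefficients $a$, computing $\alpha_{\mathrm{MMSE}}$ from (8) for each candidate, and keeping the best rate. Parameter values below are illustrative.

```python
import math

def cf_rate(P: float, g: float, NR: float, a_max: int = 10) -> float:
    """Best compute-and-forward rate (9) over small integer coefficients a."""
    best = 0.0
    for a in range(1, a_max + 1):
        # MMSE scaling factor from (8) for this choice of a
        alpha = (a * math.sqrt(g) + 1) * P / ((g + 1) * P + NR)
        # effective-noise variance appearing in the denominator of (9)
        neq = ((alpha * math.sqrt(g) - a) ** 2 + (alpha - 1) ** 2) * P + alpha ** 2 * NR
        best = max(best, 0.5 * math.log2(P / neq))
    return best

r = cf_rate(P=10.0, g=1.0, NR=1.0)
# for g = 1 the best coefficient is a = 1, and the rate collapses to 0.5*log2(SNR + 1/2)
assert abs(r - 0.5 * math.log2(10.5)) < 1e-9
```

The search replaces the fixed-point choice "a is the closest integer to $\alpha_{\mathrm{MMSE}}\sqrt{g}$" with an exhaustive scan, which is a convenient way to sidestep the circular dependence between $a$ and $\alpha_{\mathrm{MMSE}}$ in a numeric experiment.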

#### III-A2 Scheme 2: Our Proposed Scheme

Let us first consider a theorem that is a key to our code construction.

###### Theorem 1.

[42] For any $P_2 \ge P_1 > 0$, a sequence of $n$-dimensional lattice partition chains exists that satisfies the following properties:

• The two coarse lattices are simultaneously Rogers-good and Poltyrev-good, while the fine lattice $\Lambda_C$ is Poltyrev-good.

• For any $\epsilon > 0$, the second moments of the coarse lattices are within $\epsilon$ of $P_1$ and $P_2$, respectively, for sufficiently large $n$.

• The coding rate of the nested lattice code $\mathcal{C}_1$ associated with the first lattice partition is $R_1 = \frac{1}{n}\log\left(|\mathcal{C}_1|\right)$. The coding rate of the nested lattice code associated with the second lattice partition is given by

$$R_2 = \frac{1}{n}\log\left(|\mathcal{C}_2|\right) = \frac{1}{n}\log\left(\frac{\mathrm{Vol}(\mathcal{V}_2)}{\mathrm{Vol}(\mathcal{V}_C)}\right) = R_1 + \frac{1}{2}\log\left(\frac{P_2}{P_1}\right).$$

###### Proof:

The proof of the theorem is given in [42]. ∎

In the following, by applying a lattice-based coding scheme, we obtain an achievable rate region at the relay. Suppose that there exist two lattices $\Lambda_1$ and $\Lambda_3$ which are Rogers-good (i.e., $G\left(\Lambda^{(n)}\right) \to \frac{1}{2\pi e}$) and Poltyrev-good, with the following second moments:

$$\sigma^2(\Lambda_1) = P, \quad \text{and} \quad \sigma^2(\Lambda_3) = gP,$$

and a lattice $\Lambda_C$ which is Poltyrev-good, with

$$\Lambda_1^{(n)} \subseteq \Lambda_C^{(n)}, \qquad \Lambda_3^{(n)} \subseteq \Lambda_C^{(n)}.$$

Encoding: To transmit both messages, we construct the following codebooks:

$$\mathcal{C}_1 = \left\{\Lambda_C \cap \mathcal{V}_1\right\}, \qquad \mathcal{C}_2 = \left\{\frac{\Lambda_C}{\sqrt{g}} \cap \mathcal{V}_2\right\}.$$

Then node $i$ chooses the codeword $V_i$ associated with its message and sends

$$X_i = [V_i - D_i] \bmod \Lambda_i,$$

where $D_1$ and $D_2$ are two independent dithers that are uniformly distributed over the Voronoi regions $\mathcal{V}_1$ and $\mathcal{V}_2$, respectively. The dithers are known at the source nodes and the relay. Due to the crypto lemma, $X_i$ is uniformly distributed over $\mathcal{V}_i$ and independent of $V_i$. Thus, the average transmit power of node $i$ is equal to $P$, and the power constraint is met.

Decoding: At the relay node, the channel output is given by

$$Y_R = X_1 + \sqrt{g}X_2 + Z_R, \tag{10}$$

from which we estimate $T$. Depending on the value of $g$, we consider two cases:

#### Case (I): g≥1

Based on Theorem 1, we can find two lattices, $\Lambda_1$ and $\Lambda_3$, such that $\Lambda_3 \subseteq \Lambda_1 \subseteq \Lambda_C$. With this selection of lattices, the relay node performs the following operation:

$$\begin{aligned}
Y_R^d &= [\alpha Y_R + D_1 + \sqrt{g}D_2] \bmod \Lambda_1 \\
&= [\alpha X_1 + \alpha\sqrt{g}X_2 + \alpha Z_R + D_1 + \sqrt{g}D_2] \bmod \Lambda_1 \\
&= [V_1 + \sqrt{g}V_2 + \alpha X_1 - (V_1 - D_1) + \alpha\sqrt{g}X_2 - \sqrt{g}(V_2 - D_2) + \alpha Z_R] \bmod \Lambda_1 \\
&\overset{(c)}{=} \left[[V_1 + \sqrt{g}V_2] \bmod \Lambda_3 + (\alpha - 1)X_1 + \sqrt{g}(\alpha - 1)X_2 + \alpha Z_R\right] \bmod \Lambda_1 \\
&= [T + Z_{\mathrm{eff}}] \bmod \Lambda_1,
\end{aligned}$$

where (c) follows from $X_1 = [V_1 - D_1] \bmod \Lambda_1$ and $\sqrt{g}X_2 = [\sqrt{g}V_2 - \sqrt{g}D_2] \bmod \Lambda_3$, together with the distributive law of the modulo-$\Lambda$ operation. The effective noise is given by

$$Z_{\mathrm{eff}} = \left[(\alpha - 1)X_1 + \sqrt{g}(\alpha - 1)X_2 + \alpha Z_R\right] \bmod \Lambda_1,$$

and

$$T = [V_1 + \sqrt{g}V_2] \bmod \Lambda_3.$$

Due to the dithers, the vectors $X_1$ and $X_2$ are independent of each other and also independent of $Z_R$. Therefore, $Z_{\mathrm{eff}}$ is independent of $V_1$ and $V_2$. From the crypto lemma, it follows that $T$ is uniformly distributed over $\Lambda_C \cap \mathcal{V}_3$ and independent of $Z_{\mathrm{eff}}$ [42].

The problem of finding the optimum value of $\alpha$ when the lattice dimension goes to infinity reduces to obtaining the value of $\alpha$ that minimizes the effective noise variance. Hence, by minimizing the variance of $Z_{\mathrm{eff}}$, we obtain

$$\alpha_{\mathrm{MMSE}} = \frac{(g+1)P}{(g+1)P + N_R}. \tag{12}$$

The relay attempts to recover $T$ from $Y_R^d$ instead of recovering $V_1$ and $V_2$ individually. The method of decoding is minimum Euclidean distance lattice decoding [31, 43, 40], which finds the closest point to $Y_R^d$ in $\Lambda_C$. Thus, the estimate of $T$ is given by

$$\hat{T} = Q_{\Lambda_C}\left(Y_R^d\right).$$

Then, from this type of decoding, the probability of decoding error is given by

$$P_e = \Pr\left\{\hat{T} \neq T\right\} = \Pr\left\{Z_{\mathrm{eff}} \notin \mathcal{V}_C\right\}.$$

Now, we have the following theorem which bounds the error probability.

###### Theorem 2.

For the described lattice partition chain and any rate satisfying $R_1 < R_1^*$, the error probability under minimum Euclidean distance lattice decoding is bounded by

$$P_e = e^{-n\left(E_P\left(2^{2\left(R_1^* - R_1\right)}\right) - o_n(1)\right)},$$

where $E_P(\cdot)$ is the Poltyrev exponent, which is given by [40]

$$E_P(x) = \begin{cases} \dfrac{1}{2}\left[(x-1) - \ln x\right], & 1 \le x \le 2 \\[2pt] \dfrac{1}{2}\left(1 + \ln\dfrac{x}{4}\right), & 2 \le x \le 4 \\[2pt] \dfrac{x}{8}, & x \ge 4 \end{cases} \tag{13}$$

and $R_1^* = \frac{1}{2}\log\left(\frac{1}{g+1} + \frac{P}{N_R}\right)$.

###### Proof:

The proof is similar to the proof of Theorem 3 in [42] and is omitted here. ∎
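The piecewise exponent (13) can be transcribed directly; the assertions below check that the branches agree at the breakpoints $x = 2$ and $x = 4$, so $E_P$ is continuous (this is a direct transcription of (13), nothing more).

```python
import math

def poltyrev_exponent(x: float) -> float:
    """Poltyrev exponent E_P(x) from (13), defined for x >= 1."""
    if x < 1:
        raise ValueError("E_P(x) is defined for x >= 1")
    if x <= 2:
        return 0.5 * ((x - 1) - math.log(x))
    if x <= 4:
        return 0.5 * (1 + math.log(x / 4))
    return x / 8

assert poltyrev_exponent(1.0) == 0.0   # the exponent vanishes at x = 1
# continuity across the branch boundaries at x = 2 and x = 4
assert abs(poltyrev_exponent(2 - 1e-12) - poltyrev_exponent(2 + 1e-12)) < 1e-9
assert abs(poltyrev_exponent(4 - 1e-12) - poltyrev_exponent(4 + 1e-12)) < 1e-9
```

Since $E_P(x) > 0$ for every $x > 1$, the bound in Theorem 2 decays exponentially in $n$ whenever $R_1 < R_1^*$.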

Since $E_P(x) > 0$ for $x > 1$, the error probability vanishes as $n \to \infty$ if $R_1 < R_1^*$. Thus, by Theorem 1 and Theorem 2, the error probability at the relay node vanishes if

$$R_1 \le \left[\frac{1}{2}\log\left(\frac{1}{g+1} + \frac{P}{N_R}\right)\right]^+, \tag{14}$$
$$R_2 \le \left[\frac{1}{2}\log\left(\frac{g}{g+1} + \frac{gP}{N_R}\right)\right]^+. \tag{15}$$

Clearly, using a time-sharing argument, the following rates can be achieved:

$$R_1 \le \mathrm{u.c.e}\left\{\left[\frac{1}{2}\log\left(\frac{1}{g+1} + \frac{P}{N_R}\right)\right]^+\right\},$$
$$R_2 \le \mathrm{u.c.e}\left\{\left[\frac{1}{2}\log\left(\frac{g}{g+1} + \frac{gP}{N_R}\right)\right]^+\right\},$$

where u.c.e denotes the upper convex envelope with respect to $\mathrm{SNR} = P/N_R$.

At low SNR, pure (infinite-dimensional) lattice strategies cannot achieve any positive rate for $R_1$, as shown in Fig. 2. Hence, time sharing is required between the point $\mathrm{SNR} = 0$ and the point $\mathrm{SNR}_0$, which is a solution of the following equation:

$$f(\mathrm{SNR}) = \frac{df(\mathrm{SNR})}{d\,\mathrm{SNR}}\,\mathrm{SNR},$$

where $f(\mathrm{SNR}) = \left[\frac{1}{2}\log\left(\frac{1}{g+1} + \mathrm{SNR}\right)\right]^+$. We also numerically evaluate the achievable rates of the lattice strategies for different values of $g$. As we observe, with increasing $g$, the achievable rate of the lattice scheme decreases. As shown in Fig. 2, the maximum difference between the two extreme cases of $g$ is 0.1218 bit.
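The time-sharing point can be found numerically. The sketch below (our own, with illustrative parameters) solves $f(\mathrm{SNR}) = f'(\mathrm{SNR})\cdot\mathrm{SNR}$ by bisection for $f(\mathrm{SNR}) = \frac{1}{2}\log_2\left(\frac{1}{g+1} + \mathrm{SNR}\right)$, i.e., it finds where the line through the origin is tangent to the rate curve.

```python
import math

def tangent_point(g: float, hi: float = 100.0) -> float:
    """Solve f(SNR) = f'(SNR) * SNR by bisection, for the case-(I) rate curve f."""
    c = 1.0 / (g + 1.0)
    f = lambda s: 0.5 * math.log2(c + s)
    df = lambda s: 0.5 / ((c + s) * math.log(2))
    h = lambda s: f(s) - df(s) * s   # h changes sign at the tangent point
    lo = (1.0 - c) + 1e-9            # f is zero at SNR = 1 - c, and h < 0 just above it
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

snr0 = tangent_point(g=1.0)
assert 1.5 < snr0 < 2.0   # for g = 1 the tangent point lies between these SNRs
```

Below $\mathrm{SNR}_0$ the straight time-sharing line dominates the lattice rate curve, which is exactly the region where the u.c.e operation modifies (14).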

#### Case (II): g<1

By using Theorem 1, we can choose two lattices $\Lambda_1$ and $\Lambda_3$ such that $\Lambda_1 \subseteq \Lambda_3 \subseteq \Lambda_C$. The relay calculates $[\alpha Y_R + D_1 + \sqrt{g}D_2] \bmod \Lambda_3$. The equivalent channel is given by

$$\begin{aligned}
Y_R^d &= [\alpha X_1 + \alpha\sqrt{g}X_2 + \alpha Z_R + D_1 + \sqrt{g}D_2] \bmod \Lambda_3 \\
&= [V_1 + \sqrt{g}V_2 + \alpha X_1 - (V_1 - D_1) + \alpha\sqrt{g}X_2 - \sqrt{g}(V_2 - D_2) + \alpha Z_R] \bmod \Lambda_3 \\
&\overset{(d)}{=} [V_1 + \sqrt{g}V_2 + (\alpha - 1)X_1 + \sqrt{g}(\alpha - 1)X_2 + \alpha Z_R] \bmod \Lambda_3 \\
&= [T + Z_{\mathrm{eff}}] \bmod \Lambda_3,
\end{aligned}$$

where (d) follows from $X_1 = [V_1 - D_1] \bmod \Lambda_1$ and $\sqrt{g}X_2 = [\sqrt{g}V_2 - \sqrt{g}D_2] \bmod \Lambda_3$, together with the distributive law of the modulo-$\Lambda$ operation. The effective noise is given by

$$Z_{\mathrm{eff}} = \left[(\alpha - 1)X_1 + \sqrt{g}(\alpha - 1)X_2 + \alpha Z_R\right] \bmod \Lambda_3,$$

and

$$T = [V_1 + \sqrt{g}V_2] \bmod \Lambda_1.$$

Due to the dithers, the vectors $X_1$ and $X_2$ are independent of each other and also independent of $Z_R$. Therefore, $Z_{\mathrm{eff}}$ is independent of $V_1$ and $V_2$. From the crypto lemma, it follows that $T$ is uniformly distributed over $\Lambda_C \cap \mathcal{V}_1$ and independent of $Z_{\mathrm{eff}}$ [42]. In order to achieve the maximal rate, the optimal MMSE factor is used, i.e.,

$$\alpha = \alpha_{\mathrm{MMSE}} = \frac{(g+1)P}{(g+1)P + N_R}. \tag{17}$$

Similar to Case (I), instead of recovering $V_1$ and $V_2$ separately, the relay recovers $T$. Again, the decoding method is minimum Euclidean distance lattice decoding, which finds the closest point to $Y_R^d$ in $\Lambda_C$. Thus, the estimate of $T$ is given by

$$\hat{T} = Q_{\Lambda_C}\left(Y_R^d\right).$$
###### Theorem 3.

For the described lattice partition chain, if $R_2 < R_2^*$, the error probability under minimum Euclidean distance lattice decoding is bounded by

$$P_e = e^{-n\left(E_P\left(2^{2\left(R_2^* - R_2\right)}\right) - o_n(1)\right)},$$

where $E_P(\cdot)$ is the Poltyrev exponent given in (13) and $R_2^* = \frac{1}{2}\log\left(\frac{g}{g+1} + \frac{gP}{N_R}\right)$.

###### Proof:

The proof is similar to the proof of Theorem 3 in [42] and is omitted here. ∎

Here also, since $E_P(x) > 0$ for $x > 1$, the error probability vanishes as $n \to \infty$ if $R_2 < R_2^*$. Thus, by Theorem 1 and Theorem 3, the error probability at the relay vanishes if

$$R_1 \le \left[\frac{1}{2}\log\left(\frac{1}{g+1} + \frac{P}{N_R}\right)\right]^+,$$
$$R_2 \le \left[\frac{1}{2}\log\left(\frac{g}{g+1} + \frac{gP}{N_R}\right)\right]^+.$$

Clearly, using a time-sharing argument, the following rates can be achieved:

$$R_1 \le \mathrm{u.c.e}\left\{\left[\frac{1}{2}\log\left(\frac{1}{g+1} + \frac{P}{N_R}\right)\right]^+\right\}, \tag{18}$$
$$R_2 \le \mathrm{u.c.e}\left\{\left[\frac{1}{2}\log\left(\frac{g}{g+1} + \frac{gP}{N_R}\right)\right]^+\right\}, \tag{19}$$

where u.c.e denotes the upper convex envelope with respect to $\mathrm{SNR} = P/N_R$.

At low SNR, pure (infinite-dimensional) lattice strategies cannot achieve any positive rate for $R_2$, as shown in Fig. 3. Hence, time sharing is required between the point $\mathrm{SNR} = 0$ and the point $\mathrm{SNR}_0$, which is a solution of the following equation:

$$f(\mathrm{SNR}) = \frac{df(\mathrm{SNR})}{d\,\mathrm{SNR}}\,\mathrm{SNR},$$

where $f(\mathrm{SNR}) = \left[\frac{1}{2}\log\left(\frac{g}{g+1} + g\,\mathrm{SNR}\right)\right]^+$. We also numerically evaluate the achievable rate of the lattice strategy for different values of $g$. As we see, with decreasing $g$, the achievable rate of the lattice scheme decreases.

We assume that the relay recovers the linear combination of both messages correctly, i.e., there is no error in the MAC phase. The relay attempts to broadcast a message such that each node can recover the other node's message based on the received signal from the relay node and the side information available at that node, i.e., its own message. For the decoding at node 1 and node 2, we can use jointly typical decoding or a lattice-based scheme. Here, we apply jointly typical decoding. We consider scheme 2; decoding for scheme 1 is similar. For scheme 2, we also assume that $g \ge 1$; under this assumption, we have $T = [V_1 + \sqrt{g}V_2] \bmod \Lambda_3$. Now, we generate i.i.d. sequences $X_R^n$, with each element generated according to $\mathcal{N}(0, P_R)$. These sequences form a codebook $\mathcal{C}_R$. We assume that there is a one-to-one correspondence between the possible values of $T$ and the codewords $X_R(T)$.

Let us denote the relay codeword by $X_R(T)$. Based on $Y_2^n$, node 2 estimates the relay message as $\hat{T}_2$ if a unique codeword exists such that $\left(X_R\left(\hat{T}_2\right), Y_2^n\right)$ are jointly typical, where

$$\mathcal{C}_{R,2} = \left\{X_R(T') : T' = [V_1 + \sqrt{g}\,v_2] \bmod \Lambda_3,\ v_2 \in \mathcal{C}_2\right\}.$$

Note that node 2 knows its own codeword $v_2$, so the ambiguity in $\mathcal{C}_{R,2}$ is only over node 1's message. Now, by using the knowledge of $\hat{T}_2$ and $v_2$, node 2 estimates the message of node 1 as

$$\hat{V}_1 = \left[\hat{T}_2 - \sqrt{g}\,v_2\right] \bmod \Lambda_3.$$

From the random coding argument and jointly typical decoding [37], we get

$$R_1 \le \frac{1}{2}\log\left(1 + \frac{P_R}{N_2}\right). \tag{20}$$

Similarly, at node 1, we get

$$R_2 \le \frac{1}{2}\log\left(1 + \frac{P_R}{N_1}\right). \tag{21}$$

Now, we summarize our results for both schemes in the following two theorems:

###### Theorem 4.

For the Gaussian two-way relay channel, the following rate region is achievable:

$$R_1 \le \min\left(\frac{1}{2}\log\left(\frac{P}{\left(\left(\alpha_{\mathrm{MMSE}}\sqrt{g} - a\right)^2 + \left(\alpha_{\mathrm{MMSE}} - 1\right)^2\right)P + \alpha_{\mathrm{MMSE}}^2 N_R}\right), \frac{1}{2}\log\left(1 + \frac{P_R}{N_2}\right)\right), \tag{22}$$
$$R_2 \le \min\left(\frac{1}{2}\log\left(\frac{P}{\left(\left(\alpha_{\mathrm{MMSE}}\sqrt{g} - a\right)^2 + \left(\alpha_{\mathrm{MMSE}} - 1\right)^2\right)P + \alpha_{\mathrm{MMSE}}^2 N_R}\right), \frac{1}{2}\log\left(1 + \frac{P_R}{N_1}\right)\right), \tag{23}$$

where $\alpha_{\mathrm{MMSE}} = \frac{\left(a\sqrt{g} + 1\right)P}{(g+1)P + N_R}$ and $a$ is the closest integer to $\alpha_{\mathrm{MMSE}}\sqrt{g}$, i.e., $a = \lfloor \alpha_{\mathrm{MMSE}}\sqrt{g} \rceil$.

###### Proof:

The proof follows from the achievable rate region of scheme 1 at the relay, (9), and the achievable rates at nodes 1 and 2, (20) and (21). ∎

###### Theorem 5.

For the Gaussian two-way relay channel, the following rate region is achievable:

$$R_1 \le \min\left(\mathrm{u.c.e}\left\{\left[\frac{1}{2}\log\left(\frac{1}{g+1} + \frac{P}{N_R}\right)\right]^+\right\}, \frac{1}{2}\log\left(1 + \frac{P_R}{N_2}\right)\right), \tag{24}$$
$$R_2 \le \min\left(\mathrm{u.c.e}\left\{\left[\frac{1}{2}\log\left(\frac{g}{g+1} + \frac{gP}{N_R}\right)\right]^+\right\}, \frac{1}{2}\log\left(1 + \frac{P_R}{N_1}\right)\right). \tag{25}$$
###### Proof:

The proof follows from the achievable rate region of scheme 2 at the relay, (18) and (19), and the achievable rates at nodes 1 and 2, (20) and (21). ∎
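For a quick numeric read of Theorem 5 (our own sketch; the u.c.e step is omitted and the parameter values are illustrative), each user's rate is the minimum of its MAC-phase term and its BRC-phase term:

```python
import math

def scheme2_rates(P, PR, g, NR, N1, N2):
    """Rate pair from (24)-(25), without the upper-convex-envelope step."""
    pos = lambda x: max(0.0, x)                 # the [.]^+ operation
    r1_mac = pos(0.5 * math.log2(1.0 / (g + 1) + P / NR))
    r2_mac = pos(0.5 * math.log2(g / (g + 1) + g * P / NR))
    r1_brc = 0.5 * math.log2(1 + PR / N2)       # downlink bound (20)
    r2_brc = 0.5 * math.log2(1 + PR / N1)       # downlink bound (21)
    return min(r1_mac, r1_brc), min(r2_mac, r2_brc)

R1, R2 = scheme2_rates(P=10, PR=10, g=2, NR=1, N1=1, N2=1)
# with these numbers, user 1 is uplink-limited while user 2 is downlink-limited
assert R1 < R2
```

Sweeping $g$ in such a script shows directly which phase limits each user, which is the comparison carried out in the numerical-results section.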

## IV Outer Bound

By using the cut-set bound, an outer bound for a TRC can be derived. If a rate pair $(R_1, R_2)$ is achievable for a general TRC, then

$$R_1 \le \min\left\{I(X_1; Y_R, Y_2 \,|\, X_R, X_2),\ I(X_1, X_R; Y_2 \,|\, X_2)\right\}, \tag{26}$$
$$R_2 \le \min\left\{I(X_2; Y_R, Y_1 \,|\, X_R, X_1),\ I(X_2, X_R; Y_1 \,|\, X_1)\right\},$$