The Three-User Finite-Field Multi-Way Relay Channel with Correlated Sources


Lawrence Ong, Gottfried Lechner, Sarah J. Johnson, and Christopher M. Kellett. Part of the material in this paper was presented at the IEEE International Symposium on Information Theory, Saint Petersburg, July 31–August 5, 2011. This research was supported under the Australian Research Council's (ARC) Discovery Projects funding schemes (DP1093114 and DP120102123). Lawrence Ong is the recipient of an ARC Discovery Early Career Researcher Award (DE120100246). Sarah Johnson and Christopher Kellett are recipients of ARC Future Fellowships (FT110100195 and FT110100746, respectively).

This paper studies the three-user finite-field multi-way relay channel, where the users exchange messages via a relay. The messages are arbitrarily correlated, and the finite-field channel is linear and is subject to additive noise of arbitrary distribution. The problem is to determine the minimum achievable source-channel rate, defined as channel uses per source symbol needed for reliable communication. We combine Slepian-Wolf source coding and functional-decode-forward channel coding to obtain the solution for two classes of source and channel combinations. Furthermore, for correlated sources that have their common information equal their mutual information, we propose a new coding scheme to achieve the minimum source-channel rate.

Bidirectional relaying, common information, correlated sources, linear block codes, finite-field channel, functional-decode-forward, multi-way relay channel

I Introduction

We study the three-user multi-way relay channel (MWRC) with correlated sources, where each user transmits its data to the other two users via a single relay, and where the users’ messages can be correlated. The MWRC is a canonical extension of the extensively studied two-way relay channel (TWRC), where two users exchange data via a relay [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Adding users to the TWRC can change the problem significantly [11, 12, 13]. The MWRC has been studied from the point of view of channel coding and source coding.

In channel coding problems, the sources are assumed to be independent, and the channel noisy. The problem is to find the capacity, defined as the region of all achievable channel rate triplets (bits per channel use at which the users can encode/send on average). For the Gaussian MWRC with independent sources, Gündüz et al. [13] obtained asymptotic capacity results for the high SNR and the low SNR regimes. For the finite-field MWRC with independent sources, Ong et al. [14, 15] constructed the functional-decode-forward coding scheme, and obtained the capacity region. For the general MWRC with independent sources, however, the problem remains open to date.

In source coding problems, the sources are assumed to be correlated, but the channel noiseless. The problem is to find the region of all achievable source rate triplets (bits per message symbol at which the users can encode/send on average). The source coding problem for the three-user MWRC was solved by Wyner et al. [16], using cascaded Slepian-Wolf source coding [17].

In this paper, we study both source and channel coding in the same network, i.e., transmitting correlated sources through noisy channels (cf. our recent work [18] on the MWRC with correlated sources and orthogonal uplinks). For most communication scenarios, the source correlation is fixed by the natural occurrence of the phenomena, and the channel is the part that engineers are “unwilling or unable to change” [19]. Given the source and channel models, we are interested in finding the limit of how fast we can feed the sources through the channel. To this end, define source-channel rate [20] (also known as bandwidth ratio [21]) as the average channel transmissions used per source tuple. Our aim is then to derive the minimum source-channel rate required such that each user can reliably and losslessly reconstruct the other two users’ messages.

In the multi-terminal network, it is well known that separating source and channel coding, i.e., designing them independently, is not always optimal (see, e.g., the multiple-access channel [22]). Designing good joint source-channel coding schemes is difficult, let alone finding an optimal one. Gündüz et al. [20] considered a few networks with two senders and two receivers, and showed that source-channel separation is optimal for certain classes of source structure. In this paper, we approach the MWRC in a similar direction. We show that source-channel separation is optimal for three classes of source/channel combinations, by constructing coding schemes that achieve the minimum source-channel rate.

Recently, Mohajer et al. [23] solved the problem of linear deterministic relay networks with correlated sources. They constructed an optimal coding scheme, where each relay injectively maps its received channel output to its transmitted channel input. While this scheme is optimal for deterministic networks, such a scheme (e.g., the amplify-forward scheme in the additive white Gaussian noise channel) suffers from noise propagation in noisy channels and has been shown to be suboptimal for the MWRC with independent sources [13].

II Main Results

II-A Source and Channel Models

Fig. 1: The three-user finite field MWRC with correlated sources: The uplink communications are represented by solid lines, and the downlink communications by dashed lines. The square blocks are nodes, and the circles represent finite field additions.

We consider the MWRC depicted in Figure 1, where three users (denoted by 1, 2, and 3) exchange messages through a noisy channel with the help of a relay (denoted by 0). For each node , we denote its source by , its input to the channel by , and its received channel output by . We let , as the relay has no source.

We consider correlated and discrete-memoryless sources for the users, where , , and are generated according to some joint probability mass function


The channel consists of a finite-field uplink from the users to the relay, which takes the form


and a finite-field downlink from the relay to each user , which takes the form


where , for all , for some finite field of cardinality with the associated addition . Here, can be any prime power. We assume that the noise is not uniformly distributed, i.e., its entropy ; otherwise, the noise would completely randomize the channel output, and no information could be sent through.
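As a concrete illustration, the uplink and downlink can be simulated as modulo-q addition with non-uniform additive noise. This is a sketch only: the function names, alphabet size, and noise distribution below are assumptions, and modulo-q arithmetic coincides with field addition only when q is prime (for prime powers one would need true GF(q) tables).

```python
import numpy as np

rng = np.random.default_rng(0)

def uplink(x1, x2, x3, q, noise_pmf):
    """Relay output: field sum of all user inputs plus additive noise."""
    n0 = rng.choice(q, size=len(x1), p=noise_pmf)
    return (x1 + x2 + x3 + n0) % q

def downlink(x0, q, noise_pmf):
    """User output: the relay's channel input plus that user's own noise."""
    ni = rng.choice(q, size=len(x0), p=noise_pmf)
    return (x0 + ni) % q

q = 5                                      # illustrative prime field size
noise_pmf = [0.8, 0.05, 0.05, 0.05, 0.05]  # non-uniform: entropy < log q
x1, x2, x3 = (rng.integers(0, q, 8) for _ in range(3))
y0 = uplink(x1, x2, x3, q, noise_pmf)      # what the relay observes
```

With a uniform `noise_pmf` the output would be uniform regardless of the inputs, which is why the model excludes that degenerate case.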

Each user sends source symbols to the other two users (simultaneously) in channel uses. We refer to the source symbols of user as its message, denoted by , where each symbol triplet for is generated independently according to (1). The channel is memoryless in the sense that the channel noise for all nodes and all channel uses are independent, and the distribution is fixed for all channel uses. The source-channel rate, i.e., the number of channel uses per source triplet, is denoted by .

We assume that each user has all its source symbols prior to the channel uses. (This assumption merely simplifies our analysis: even if the source generation and the channel uses occur simultaneously ( source triplets and channel uses per unit time), we can always transmit in blocks. We first wait for source triplets to be generated, then use the channel times to transmit these source symbols while waiting for the next triplets to be generated, and so on. Taking the number of blocks to be sufficiently large, the source-channel rate can be made as close to as desired.) We consider the following block code of source-channel rate :

  1. The -th transmitted channel symbol of each node depends on its message and its previously received channel symbols, i.e., , for all and for all .

  2. Each user estimates the messages of the other users from its own message and all its received channel symbols, i.e., user decodes the messages from users and as , for all distinct . We denote . (The length of a bold-faced vector, whether of source symbols or of channel symbols, is clear from context.)

Note that utilizing feedback is permitted in our system model. This is commonly referred to as the unrestricted MWRC (cf. the restricted MWRC [2, 24, 5, 13]). We will see later that for the classes of source/channel combinations for which we find the minimum source-channel rate, feedback is not used. This means that feedback provides no improvement to source-channel rate for these cases.

User makes a decoding error if . We define as the probability that one or more users make a decoding error, and say that source-channel rate is achievable if the following is true: for any , there exists at least one block code of source-channel rate with . The aim of this paper is to find the infimum of achievable source-channel rates, denoted by . For the rest of the paper, we refer to as the minimum source-channel rate.

Remark 1

Theoretical interest aside, the finite-field channel considered in this paper shares two important properties with the AWGN channel (commonly used to model wireless environments). Firstly, the channel is linear, i.e., the channel output is a function of the sum of all inputs. Secondly, the noise is additive. Sharing these two properties, optimal coding schemes derived for the finite-field channel shed light on how one would code in AWGN channels. For example, the optimal coding scheme derived for the finite-field MWRC with independent sources [14] is used to prove capacity results for the AWGN MWRC with independent sources [12].

II-B Main Results

We will now state the main result of this paper. The technical terms (in italics) in the theorem will be defined in Section II-C following the theorem.

Theorem 1

The minimum source-channel rate is given by


if the sources have any one of the following:

  1. almost-balanced conditional mutual information, or

  2. skewed conditional entropies (on any symmetrical finite-field channel), or

  3. their common information equals their mutual information.

For Cases 1 and 2, we derive the achievability (upper bound) of using existing (i) Slepian-Wolf source coding and (ii) functional-decode-forward channel coding for independent sources. We abbreviate this pair of source and channel coding schemes as SW/FDF-IS. We derive a lower bound using cut-set arguments. While the achievability for these two cases is rather straightforward, what we find interesting is that using the scheme for independent messages is actually optimal for two classes of source/channel combinations. Furthermore, although the source-channel rates achievable using SW/FDF-IS cannot be expressed in closed form, we are able to derive closed-form conditions for two classes of sources where the achievability of SW/FDF-IS matches the lower bound.

In SW/FDF-IS, the source coding—while compressing—destroys the correlation among the sources, and hence channel coding for independent sources is used. For Case 3, the sources have their common information equal their mutual information, meaning that each source is able to identify the parts of the messages it has in common with other source(s). For this case, we again use Slepian-Wolf source coding, but we conserve the parts that the sources have in common. We then design a new channel coding scheme that takes the common parts into account. Here, the challenge is to optimize the functions of different parts that the relay should decode. We show that the new coding scheme is able to achieve .

For all three cases, the coding schemes are derived based on the separate source-channel coding architecture. Also, for Cases 1 and 3, is found when only the sources satisfy certain conditions, and this is true independent of the underlying finite-field channel, i.e., any and any noise distribution.

II-C Definitions

In this section, we define the technical terms in Theorem 1.

II-C1 Symmetrical Channel

Definition 1

A finite-field MWRC is symmetrical if


Otherwise, we say that the channel is asymmetrical.

We can think of as the noise level on the downlink from the relay to user . So, a symmetrical channel requires that the downlinks from the relay to all the users are equally noisy. We do not impose any condition on the uplink noise level, .

II-C2 Almost-Balanced Conditional Mutual Information

Definition 2

The sources are said to have almost-balanced conditional mutual information (ABCMI) if


for all distinct . Otherwise, the sources are said to have unbalanced conditional mutual information.

Putting it another way, for unbalanced sources, we can always find a user , such that


for some and distinct .

II-C3 Skewed Conditional Entropies

Definition 3

Sources with unbalanced conditional mutual information are said to have skewed conditional entropies (SCE) if, in addition to (7),


for the same as in (7).

II-C4 Common Information Equals Mutual Information

Lastly, we define common information in the same spirit as Gács and Körner [25]. For two users, Gács and Körner defined common information as a value on which two users can agree (using the terminology of Witsenhausen [26]). The common information between two random variables can be as large as mutual information (in the Shannon sense), but no larger.

The concept of common information was extended to multiple users by Tyagi et al. [27], where they considered a value on which all users can agree. In this paper, we further extend common information to values on which different subsets of users can agree. We now formally define a class of sources, where their common information equals their mutual information.

Definition 4

Three correlated random variables are said to have their common information equal their mutual information if there exist four random variables , , , and such that


for some deterministic functions , and


We give graphical interpretations using information diagrams for sources that have ABCMI and SCE in Appendix A, and examples of sources that have ABCMI and their common information equal their mutual information in Section VI.

Definitions 2 and 3 are mutually exclusive, but Definitions 4 and 2 (or 4 and 3) are not. This means correlated sources that have their common information equal their mutual information must also have either ABCMI, SCE, or unbalanced conditional mutual information without SCE. This leads to the graphical summary of the results of Theorem 1 in Figure 2.

Fig. 2: Main results of this paper: shaded regions are the classes of source and channel combinations where the minimum source-channel rate is found

II-D Organization

The rest of this paper is organized as follows: We show a lower bound and an upper bound (achievability) to in Section III. In Section IV, we show that for Cases 1 and 2 in Theorem 1, the lower bound is achievable. In Section V, we propose a coding scheme that takes common information into account, and show that the source-channel rate achievable using this new scheme matches the lower bound. We conclude the paper with some discussion in Section VI.

III Lower and Upper Bounds to

Denote the RHS of (4) as


We first show that is a lower bound to . Using cut-set arguments [28, pp. 587–591], we can show that if source-channel rate is achievable, then


for all distinct . Here (18a) follows from Mohajer et al. [23, eqs. (11)–(12)] and (18b) follows from Ong et al. [15, Section III]. Rearranging gives the following lower bound on all achievable source-channel rates, and hence also on :

Lemma 1

For any three-user finite-field MWRC with correlated sources, the minimum source-channel rate is lower bounded as


We now present the result of the SW/FDF-IS coding scheme, which first uses Slepian-Wolf source coding for the noiseless MWRC with correlated sources [29], followed by functional-decode-forward for independent sources (FDF-IS) channel coding for the MWRC [15]. This scheme achieves the following source-channel rates:

Lemma 2

For any three-user finite-field MWRC with correlated sources, SW/FDF-IS achieves all source-channel rates in , where


where is the set of real numbers. So, the minimum source-channel rate is upper bounded as


The proof is based on random coding arguments and can be found in Appendix B.

Remark 2

The variables are actually the channel code rates, i.e., the number of message bits transmitted by user per channel use.

From Lemmas 1 and 2, we have the following result:

Corollary 1

For a three-user finite-field MWRC, if , then , meaning that the minimum source-channel rate is known and is achievable using SW/FDF-IS.

Remark 3

The collection of source/channel combinations that satisfy Corollary 1 forms a class where the minimum source-channel rate is found, in addition to Theorem 1. The challenge, however, is to characterize—in closed form—classes of source/channel combinations for which . For this, we need to guarantee the existence of three positive numbers and satisfying the inequalities in (20) for every .

Next, we will show that for Cases 1 and 2 in Theorem 1, SW/FDF-IS achieves all source-channel rates .

IV Proof of Cases 1 and 2 in Theorem 1

IV-A Proof of Case 1 in Theorem 1

In this subsection, we will show that if the sources have ABCMI, then . Since any relies on the existence of channel code rates , we first show the following proposition:

Proposition 1

Consider sources with ABCMI. Given any source-channel rate , and any positive number , we can always find positive and such that


It can be shown that choosing


for all distinct satisfies (22)–(27). The expression in the square brackets is non-negative due to the ABCMI condition (6). ∎

With this result, we now prove Case 1 of Theorem 1. We need to show that any source-channel rate is achievable, i.e., the source-channel rate


for any , lies in . Here, is independent of and .

For a source-channel rate in (29), we choose as in (28). Substituting (25)–(27) into (29), the second inequality in (20) is satisfied. Also, (22)–(27) imply the first inequality in (20). Hence, . This proves Case 1 in Theorem 1.

IV-B Proof of Case 2 in Theorem 1

We need to show that if the sources have SCE and the channel is symmetrical, then the source-channel rate in (29) is achievable for any . Recall that sources that have SCE must have unbalanced conditional mutual information, for which we can always re-index the users as , , and satisfying (7) for some fixed .

For achievability in Lemma 2, we first show the existence of satisfying the following conditions:

Proposition 2

Consider sources with unbalanced conditional mutual information. Given any source-channel rate , and any positive number , we can always find positive and such that


for defined in (7).


Constraint (7) implies the following:


First, we can always choose a positive number as in (30). In addition, we choose


Substituting (36) into (38), we get (31); substituting (37) into (39), we get (32). Summing different pairs from (30), (38), and (39), we get (33)–(35). ∎

Furthermore, for a symmetrical channel, we can define


So, (8) for SCE and (40) for symmetrical channels imply that the source-channel rate in (29) equals


Hence, we only need to show that the source-channel rate (41) is achievable for any .

We first choose and as in (30), (38), and (39), respectively. From (33)–(35), we get


where (42e) and (42h) follow from (8); (42b), (42f), and (42i) follow from (41). (Note that , which is determined by the sources’ correlation, is strictly greater than zero; see its definition in (7).) This means the second inequality in (20) is satisfied. From (30)–(35), we know that the first inequality in (20) is also satisfied. Hence, the source-channel rate (41) is indeed achievable for any .

IV-C A Numerical Example Showing that SW/FDF-IS is Not Always Optimal

In this section, we give an example showing that SW/FDF-IS can be suboptimal. Consider the following sources: , , and , where is uniformly distributed in , is uniformly distributed in , and are each uniformly distributed in . In addition, all and are mutually independent. Here, each represents common information between and .
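One way to realize sources of this pairwise-common-part form is sketched below. The structure (shared components U12, U13, U23 and private components K1, K2, K3) follows the description above, but the alphabet sizes are placeholders rather than the example's elided values.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_source_triplet(a=2, b=4):
    """One draw of (W1, W2, W3) with pairwise common parts.

    U12, U13, U23 are shared between the corresponding user pairs and
    K1, K2, K3 are private; alphabet sizes a and b are assumptions.
    """
    u12, u13, u23 = rng.integers(0, a, 3)
    k1, k2, k3 = rng.integers(0, b, 3)
    w1 = (u12, u13, k1)   # user 1's message
    w2 = (u12, u23, k2)   # user 2's message
    w3 = (u13, u23, k3)   # user 3's message
    return w1, w2, w3

w1, w2, w3 = draw_source_triplet()
```

Because every shared component appears verbatim in two messages, each user can identify exactly which part of its message is common with each other user, which is the defining feature of Case 3.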

For the channel, let the finite field be and be modulo- addition, i.e., . Furthermore, let for , and for ; let for , and for , for all .

For this source and channel combination, we have , , , , , , for all . One can verify that these sources have unbalanced conditional mutual information and do not have SCE.

In this example, . Suppose that is achievable using SW/FDF-IS. From Lemma 2, there must exist three positive real numbers , , and such that


From (43), we must have that and . These imply . Hence, (44) and (46) cannot be simultaneously true. This means the source-channel rate 1.05 is not achievable using SW/FDF-IS.

The sources described here have their common information equal their mutual information. We will next propose an alternative scheme that is optimal for this class of sources. The proposed scheme achieves all source-channel rates for this source/channel combination, meaning that the minimum source-channel rate for this example is . SW/FDF-IS is therefore strictly suboptimal for this source/channel combination.

V Proof of Case 3 in Theorem 1

While the achievability for Cases 1 and 2 uses existing source and channel coding schemes, for Case 3 (i.e., sources that have their common information equal their mutual information), we will use an existing source coding scheme and design a new channel coding scheme to achieve all source-channel rates .

Remark 4

While Case 3 in general requires a new achievability scheme, these sources may have ABCMI (as shown in Figure 2). For such cases, optimal codes can also be obtained using the coding scheme for Case 1.

In this section, without loss of generality (we can always re-index the users such that (47) is true), we let


This means we can re-write as follows:


As mentioned earlier, we will use a separate source-channel coding architecture, where we first perform source coding and then channel coding. We will again use random coding arguments. More specifically, we will use random linear block codes for channel coding.

V-A Source Coding

We encode each to , which is a length- finite-field (of size ) vector, for all (see Definition 4 for the definition of ). We also encode each to , which is a length- finite-field vector. So, each message is encoded into four subcodes, e.g., is encoded into . Some subcodes—the common parts—are shared among multiple sources.

Using the results of distributed source coding [17, 29], if is sufficiently large and if


then we can decode to , to , and to with an arbitrarily small error probability. We show the proof in Appendix C.
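The distributed source coding step rests on random binning in the Slepian-Wolf sense: an encoder sends only a bin index (here, a syndrome), and the decoder resolves the ambiguity using correlated side information. The toy binary sketch below is not the paper's construction; the block length, syndrome length, and correlation level are assumptions, and the brute-force decoder stands in for joint-typicality decoding.

```python
import itertools

import numpy as np

rng = np.random.default_rng(3)

n, m = 10, 6                      # block length and syndrome length (assumed)
H = rng.integers(0, 2, (m, n))    # random binning via a random parity matrix

def sw_encode(x):
    """Bin index of x: its syndrome H x (mod 2)."""
    return H @ x % 2

def sw_decode(s, y):
    """Return the sequence in bin s closest in Hamming distance to y."""
    best, best_dist = None, n + 1
    for bits in itertools.product([0, 1], repeat=n):
        cand = np.array(bits)
        if np.array_equal(H @ cand % 2, s):
            dist = int(np.sum(cand ^ y))
            if dist < best_dist:
                best, best_dist = cand, dist
    return best

x = rng.integers(0, 2, n)                  # source block at the encoder
e = (rng.random(n) < 0.05).astype(int)     # sparse correlation "noise"
y = (x + e) % 2                            # correlated side information
x_hat = sw_decode(sw_encode(x), y)         # succeeds when e is light enough
```

Only m < n bits cross the channel per block, which is the compression gain that the conditional-entropy conditions such as (49)-(51) quantify.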

After source coding, user 1 has . In order for it to decode , it must receive from the other users through the channel. Similarly, users 2 and 3 must each obtain subcodes that they do not already have through the channel.

In contrast to the source coding used for Cases 1 and 2, here, we have generated source codes where the users share some subcodes. So, instead of using existing FDF-IS channel codes (designed for independent sources), we will design channel codes that take the common subcodes into account.

Message length
Channel uses
Total channel uses
TABLE I: Uplink transmission using linear block codes when ,

V-B Channel Coding

After source coding, the users now send to the relay on the uplink. The common subcode known to all three users, i.e., , need not be transmitted. Similar to FDF-IS, we will design channel codes for the relay to decode functions of the transmitted messages. This can be realized using linear block codes of the following form:


where is the message vector, is the code generator matrix, is a random dither, and is the channel codeword. All elements are in , and is the multiplication in . Each element in and in is independently and uniformly chosen over , and is known to the relay.

We now state the following lemma as a direct result of using linear block codes [15]:

Lemma 3

Each user transmits using the linear block code of the form (52) with a common and independently generated for each user. The relay receives according to (2). If is sufficiently large and if


then the relay can reliably decode the finite-field sum of the messages . (We say that a node can reliably decode a message if it can decode the message with arbitrarily small error probability.)
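The key algebraic property behind Lemma 3 is that, with a common generator matrix and independent dithers, the superposition of the users' codewords is itself a codeword for the finite-field sum of their messages (with the summed dither). A minimal sketch of this property, assuming a prime field size so that modulo arithmetic is field arithmetic, and with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 5            # illustrative prime, so mod-q arithmetic is field arithmetic
k, n = 4, 8      # message length and codeword length (placeholders)

G = rng.integers(0, q, (k, n))                  # common generator matrix
W = [rng.integers(0, q, k) for _ in range(3)]   # the three users' messages
D = [rng.integers(0, q, n) for _ in range(3)]   # independent random dithers

# Each user encodes as x_i = w_i G + d_i (all arithmetic mod q).
X = [(w @ G + d) % q for w, d in zip(W, D)]

# The noiseless superposition equals the codeword of the summed message
# with the summed dither, which is why the relay can decode the sum.
superposition = (X[0] + X[1] + X[2]) % q
sum_codeword = ((W[0] + W[1] + W[2]) @ G + D[0] + D[1] + D[2]) % q
```

Since the relay knows the dithers, decoding the superposition against the common code yields the summed message directly, without decoding any individual message.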

From (47), (49)–(50), and noting we choose


We consider the following two cases, where the relay decodes different functions in each case:

V-B1 When (chosen when )

Uplink: We split the message into three different disjoint parts and , and the message into and . The uplink message transmission is arranged as shown in Table I.

The messages in each column are transmitted simultaneously using linear block codes, with the message length specified in the first row and the codeword length in the second-to-last row. From (54), we know that and , meaning that the message length for each column is non-negative. For each column, both messages use the same code generator matrix but different dithers. The relay decodes the finite-field addition of the messages in each column. Take the first column for example: and are transmitted by users 1 and 2, respectively. Note that the second codeword can also be transmitted by user 3, since it knows . Using Lemma 3, if is sufficiently large and if


where , then the relay can reliably decode . Using the same coding scheme for the other columns, we can show that if (55) holds, then the relay can decode the summation of the messages in every column.

Downlink: Assume that the relay has successfully decoded the functions for all columns . Note that is a finite-field vector of length . Generate codewords of length , where each codeletter is independently generated according to the uniform distribution . Index the codewords by . After decoding , the relay transmits on the downlink. By reducing the decoding space of each user—since each user has some side information about —we can show the following (see, e.g., Ong and Johnson [30]):

Lemma 4

If user knows (a priori) elements in , then it can reliably decode if is sufficiently large and if


Note that random codes are used on the downlink instead of linear codes.

Knowing of length , user 1 can reliably decode if


From and knowing its own subcodes and , user 1 can then obtain and .

Knowing of length , user 2 can reliably decode if


From and knowing its own subcodes and , user 2 can then obtain and .

Similarly, we can show that user 3 can reliably decode if


It can then proceed to obtain .

Recovering Other Users’ Messages: We have shown that from , each user can obtain all other users’ subcodes. If (49)–(51) are satisfied and if is sufficiently large, each user can reliably decode the messages of the other users, i.e., and from the subcodes.
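The recovery step above reduces to finite-field subtraction: from a decoded pairwise sum and the subcode a user already holds, the unknown subcode follows directly. A minimal sketch, with an assumed prime field size and illustrative subcode values:

```python
import numpy as np

q = 5                             # assumed prime field size
known = np.array([1, 4, 0, 2])    # subcode the user already holds
other = np.array([3, 3, 2, 4])    # the other users' subcode, to be recovered
s = (known + other) % q           # finite-field sum decoded via the relay
recovered = (s - known) % q       # subtract the known part to recover it
```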

Achievability: We now combine the above results. For any , we can choose