Cooperative Binning for Semi-deterministic Channels with Non-causal State Information


Ido B. Gattegno, Haim H. Permuter, Shlomo Shamai (Shitz), and Ayfer Özgür. The work of Ido B. Gattegno, Haim H. Permuter and Shlomo Shamai was supported by the Heron consortium via the Minister of Economy and Science, and by the ERC (European Research Council). The work of A. Özgür was supported in part by NSF grant #1514538.
Abstract

The capacity of the semi-deterministic relay channel (SD-RC) with non-causal channel state information (CSI) only at the encoder and decoder is characterized. The capacity is achieved by a scheme based on cooperative-bin-forward. This scheme allows cooperation between the transmitter and the relay without requiring the relay to decode part of the message. The transmission is divided into blocks, and each deterministic output of the channel (observed by the relay) is mapped to a bin. The bin index is used by the encoder and the relay to choose the cooperation codeword for the next transmission block. In causal settings the cooperation is independent of the state. In non-causal settings, dependency between the relay's transmission and the state can increase the transmission rates. The encoder implicitly conveys partial state information to the relay: it uses the states of the next block to select a cooperation codeword, and the relay's transmission depends on the cooperation codeword and therefore also on the states. We also consider the multiple access channel with partial cribbing as a semi-deterministic channel. The capacity region of this channel with non-causal CSI is achieved by the new scheme. Examining the result in several cases, we introduce a new problem of a point-to-point (PTP) channel where the state is provided to the transmitter by a state encoder. Interestingly, even though the CSI is also available at the receiver, we provide an example showing that the capacity with non-causal CSI at the state encoder is strictly larger than the capacity with causal CSI.

Cooperative-bin-forward, cooperation, cribbing, multiple-access channel, non-causal state information, random binning, relay channel, semi-deterministic channel, state encoder, wireless networks.

I Introduction

Semi-deterministic models describe a variety of communication problems in which there exists a deterministic link between a transmitter and a receiver. This work focuses on the semi-deterministic relay channel (SD-RC) and the multiple access channel (MAC) with partial cribbing encoders and non-causal channel state information (CSI) available only at the encoder and decoder. The state of a channel may be governed by physical phenomena or by an interfering transmission over the channel, and the deterministic link may also be a function of this state.

The capacity of the relay channel was first studied by van der Meulen [1]. In the relay channel, an encoder receives a message, denoted by , and sends it to a decoder over a channel with two outputs. A relay observes one of the channel outputs, denoted by , and uses past observations in order to help the encoder deliver the message. The decoder observes the other output, denoted by , and uses it to decode the message that was sent by the encoder. Cover and El-Gamal [2] established achievable rates for the general relay channel using a partial-decode-forward scheme. If the channel is semi-deterministic (i.e., the output observed by the relay is a function of the channel inputs), El-Gamal and Aref [3] showed that this scheme achieves the capacity. Partial-decode-forward operates as follows: first, the transmission is divided into blocks, each of length ; in each block we send a message , at rate , that is independent of the messages in the other blocks. The message is split; after each transmission block, the relay decodes a part of the message and forwards it to the decoder in the next block using its transmission sequence. Since the encoder also knows the message, it can cooperate with the relay in the next block. The capacity of the SD-RC is given by maximizing over the joint probability mass function (PMF) , where is the input from the encoder and is the input from the relay. The cooperation is expressed in the joint PMF, in which and are dependent. However, when the channel depends on a state that is unknown to the relay, the partial-decode-forward scheme is suboptimal [4], i.e., it does not achieve the capacity. The partial-decoding procedure at the relay is too restrictive since the relay is not aware of the channel state.
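For reference, in standard notation (assumed here) where $X$ denotes the encoder input, $X_1$ the relay input, $Y_1=f(X,X_1)$ the deterministic relay observation, and $Y$ the destination output, the capacity established by El-Gamal and Aref can be written as

% El-Gamal--Aref capacity of the (state-free) semi-deterministic relay channel.
\begin{equation*}
  C \;=\; \max_{p(x,x_1)} \min\bigl\{\, I(X,X_1;Y),\; H(Y_1 \mid X_1) + I(X;Y \mid X_1,Y_1) \,\bigr\}.
\end{equation*}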

Fig. 1: SD-RC with causal/non-causal CSI at encoder and decoder.

Focusing on the state-dependent SD-RC, depicted in Fig. 1, we consider two situations: the CSI is available in either a causal or a non-causal manner. State-dependent relay channels were studied in [5, 6, 7, 8, 9, 10, 11, 12, 4]; Kolte et al. [4] derived the capacity of the state-dependent SD-RC with causal CSI and introduced a cooperative-bin-forward coding scheme. It differs from partial-decode-forward as follows: the relay does not have to explicitly recover the message bits; instead, the encoder and relay agree on a map from the space of deterministic outputs to a bin index. This index is used by the relay to choose the next transmission sequence. Note that this cooperative-binning is independent of the state and, therefore, can be used by the relay. The encoder is also aware of this index (since the output is deterministic) and coordinates with the relay in the next block, despite the lack of state information at the relay. The capacity of this channel is given by maximizing over . Note that and are dependent, but and are not. When the state is known causally, a dependency between and is not feasible. At each time , the encoder can send to the relay information about the states up to time . The relay can use only strictly causal observations , which may contain information on but not on . Furthermore, since the states are drawn independently, the past state at the relay does not help to increase the achievable rate.

The main contribution of this paper is to develop a variation of the cooperative-bin-forward scheme that accounts for non-causal CSI. While the former scheme allows cooperation, the new scheme also allows dependency between the relay’s transmission and the state. When the CSI is available in a non-causal manner, knowledge of the state at the relay is feasible and may increase the transmission rate. The encoder can perform a look-ahead operation and transmit to the relay information about the upcoming states. The relay can still agree with the encoder on a map, and in each transmission the encoder can choose carefully which index it causes the relay to see. The encoder chooses an index such that it reveals compressed state information to the relay, using an auxiliary cooperation codeword. Incorporating look-ahead operations with cooperative-binning increases the transmission rate and achieves capacity. This scheme can be used in other semi-deterministic models, such as the multiple access channel (MAC) with strictly causal partial cribbing and non-causal CSI.

The MAC with cooperation can also be viewed as a semi-deterministic model, due to the deterministic cooperation link. The MAC with conferencing, introduced by Willems in [13], consists of a rate-limited private link between the two encoders. Permuter et al. [14] showed that for the state-dependent MAC with conferencing, the capacity can be achieved by superposition coding and rate-splitting. Cribbing is a different type of cooperation, also introduced by Willems [15], in which one transmitter has access to (is cribbing) the transmission of the other. In [16], Simeone et al. considered cooperative wireless cellular systems and analyzed their performance with cribbing (referred to as In-Band cooperation). The results show how cribbing can potentially increase the capacity. A generalization of cribbing is partial and controlled cribbing, introduced by Asnani and Permuter in [17], in which one encoder has limited access to the transmission sequence of the other. The cribbed information is a deterministic function of the transmission sequence. Kopetz et al. [18] characterized the capacity region of the MAC with combined partial cribbing and conferencing, without states. When the states are known causally at the first encoder (while the other is cribbing), Kolte et al. [4] derived the capacity, which is achieved by cooperative-bin-forward. We show that our variation of the cooperative-bin-forward scheme achieves the capacity when the states are known non-causally.

The results are examined for several special cases; the first is a point-to-point (PTP) channel where the CSI is available to the transmitter through a state encoder, and to the receiver. Prior work on limited CSI was done by Rosenzweig et al. [19], where the link from the state encoder to the transmitter is rate-limited. Steinberg [20] derived the capacity with rate-limited state information at the receiver. In our setting, the link between the state encoder and the transmitter is not a rate-limited bit pipe, but a communication channel whose output the transmitter observes in a causal fashion. We provide an example which illustrates that in this setting the capacity with non-causal CSI at the state encoder is strictly larger than the capacity with causal CSI at the state encoder, even though the receiver also has channel state information. This is somewhat surprising given that in a PTP channel with CSI at both the transmitter and the receiver, causal and non-causal state information lead to the same capacity.

The remainder of the paper is organized as follows. Problem definitions and capacity theorems are given in Section II. Special cases are given in Section III, and the new state-encoder problem and the example are given in Section IV. Proofs of the theorems are given in Sections V, VI, VII and VIII. In Section IX we offer conclusions and final remarks.

II Problem Definition and Main Results

II-A Notation

We use the following notation. Calligraphic letters denote discrete sets, e.g., . Lowercase letters, e.g., , represent variables. A vector of variables is denoted by . A substring of is denoted by , and includes the variables . Whenever the dimensions are clear from the context, the subscript is omitted. Let denote a probability space, where is the sample space, is the σ-algebra and is the probability measure. Roman typeface letters denote events in the σ-algebra, e.g., . is the probability assigned to , and is the indicator function, i.e., it indicates whether event has occurred. Random variables are denoted by uppercase letters, e.g., , and similar conventions apply to vectors. The probability mass function (PMF) of a random variable, , is denoted by . If , then . Whenever the random variable is clear from the context, we drop the subscript. Similarly, a joint distribution of and is denoted by and a conditional PMF by . Whenever is a deterministic function of , we denote and the conditional PMF by . If and are independent, we denote this by , which implies that , and a Markov chain is denoted by and implies that .

An empirical mass function (EMF) is denoted by . Sets of typical sequences are denoted by , which is a -strongly typical set with respect to PMF , and defined by

(1)

Jointly typical sets satisfy the same definition with respect to (w.r.t.) the joint distribution and are denoted by . Conditional typical sets are defined as

(2)
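For completeness, one standard strong-typicality convention (assumed here; cf. [23]), written with $\nu_{x^n}$ denoting the empirical mass function of $x^n$, is

% Assumed standard definitions of the (conditional) strongly typical sets.
\begin{align*}
  \mathcal{T}_{\epsilon}^{(n)}(P_X) &= \bigl\{ x^n :\; \lvert \nu_{x^n}(a) - P_X(a) \rvert \le \epsilon\, P_X(a) \;\;\forall a \in \mathcal{X} \bigr\}, \\
  \mathcal{T}_{\epsilon}^{(n)}(P_{XY} \mid y^n) &= \bigl\{ x^n :\; (x^n, y^n) \in \mathcal{T}_{\epsilon}^{(n)}(P_{XY}) \bigr\}.
\end{align*}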

II-B Semi-Deterministic Relay Channel

Fig. 2: SD-RC with non-causal CSI at encoder and decoder.


We begin with the state-dependent SD-RC, depicted in Fig. 2. This channel depends on a state , which is known non-causally to the encoder and decoder, but not to the relay. An encoder sends a message to the decoder through a channel with two outputs. The relay observes an output of the channel, which at time is a deterministic function of the channel inputs, and , and the state (i.e., ). Based on past observations, the relay transmits in order to assist the encoder. The decoder uses the state information and the channel output in order to estimate . The channel is memoryless and characterized by the joint PMF .

Definition 1 (Code for SD-RC)

A code for the SD-RC is defined by

Definition 2 (Achievable rate)

A rate is said to be achievable if there exists such that

(3)

for any and some sufficiently large .

The capacity is defined to be the supremum of all achievable rates.

Theorem 1

The capacity of the SD-RC with non-causal CSI, depicted in Figure 2, is given by

(4)

where the maximum is over such that , where and .

The proof for the theorem is given in Section V. Let us first investigate the capacity and the role of the auxiliary random variable . Here, the random variable is used to create empirical coordination between the encoder, the relay and the states, i.e., with high probability are jointly typical w.r.t. . Note that the PMF factorizes as ; the random variable , which represents the relay, depends on through the random variable . This dependency represents the state knowledge at the relay, using an auxiliary codeword .
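One factorization consistent with this description (an illustrative assumption, writing $S$ for the state, $U$ for the auxiliary cooperation variable, $X$ for the encoder input and $X_1$ for the relay input) is

% Illustrative factorization: the relay input X_1 depends on the state S only
% through the cooperation variable U, i.e., X_1 - U - S forms a Markov chain.
\begin{equation*}
  P_{S,U,X,X_1} \;=\; P_S \, P_{U \mid S} \, P_{X \mid U,S} \, P_{X_1 \mid U}.
\end{equation*}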

II-C Multiple Access Channel with Partial Cribbing

Consider a MAC with partial cribbing and non-causal state information, as depicted in Figure 3. This channel depends on the state sequence that is known to the decoder, and each encoder has non-causal access to one state component . Each encoder sends a message over the channel. Encoder 2 is cribbing Encoder 1; the cribbing is strictly causal, partial and controlled by . Namely, the cribbed signal at time , denoted by , is a deterministic function of and . The cribbed information is used by Encoder 2 to assist Encoder 1.

Fig. 3: State dependent MAC with two state components and one side cribbing. The cribbing is strictly causal – .

Fig. 4: State dependent MAC with two state components and one side cribbing. The cribbing is causal – .
Definition 3 (Code for MAC)

A code for the state-dependent MAC with strictly causal partial cribbing and two state components is defined by


Definition 4 (Achievable rate-pair)

A rate-pair is achievable if there exists a code such that

for any and some sufficiently large .

The capacity region of this channel is defined to be the set of all achievable rate-pairs. We note here that a setup with causal cribbing, depicted in Fig. 4, satisfies a similar definition with .

Theorem 2

The capacity region of the discrete memoryless MAC with non-causal CSI and strictly causal cribbing in Fig. 3 is given by the set of rate pairs that satisfy

(5a)
(5b)
(5c)
(5d)
for PMFs of the form , with , that satisfies
(5e)

and .

Theorem 3

The capacity region of the discrete memoryless MAC with non-causal CSI and causal cribbing in Fig. 4 is given by the set of rate pairs that satisfy the equations in (5) for PMFs of the form .

We note here that when is degenerate, i.e., there is only one state component, the capacity region in both theorems is obtained by degenerating . Note that the difference between Theorems 2 and 3 is the conditioning on in the PMF . Here, the auxiliary random variable plays a double role. The first role is similar to its role in the SD-RC; it creates dependency between and . This is done using a cooperation codeword ; Encoder 1 selects a codeword that is coordinated with the states. Encoder 2 uses this codeword in order to cooperate. Since the codeword depends on the state, so does . When there are two state components, the second component is used by Encoder to select the cooperation codeword from a collection. The second role is to generate a common message between the encoders.

In Section VI we provide a proof of Theorem 2 for the case of only one state component. The proof for the general case is given in Section VII and builds on the case with a single state component. The proof of Theorem 3 is given in Section VIII. In the following section we examine the results in cases which emphasize the role of .

III Special Cases

III-A Cases of State-Dependent SD-RC

Case 1: SD-RC without states: When the channel has no state, i.e., the channel is fixed throughout the transmission, the capacity of the SD-RC was given by El-Gamal and Aref [3] as

(6)

This case is captured by degenerating . Then, can be omitted from the information terms in Theorem 1 and the joint PMF is . Choosing recovers the capacity. Therefore, we see that here, plays the role of a common message between and .

Case 2: SD-RC with causal states: Consider a similar configuration to that in Fig. 2, and assume that the states are known to the encoder in a causal manner. Although this is not a special case of the non-causal configuration, it further emphasizes the role of . The capacity of this channel was characterized by Kolte et al. [4, Theorem 2] as

(7)

where . Let us compare this capacity to the one with non-causal states. In the causal case, we see that and are dependent, but and are not. In the non-causal case (eq. (4)), and are dependent. The random variable generates empirical coordination w.r.t. , which is then used as common side information at the encoder, relay and decoder. When the state is known causally, such dependency cannot be achieved since the states are drawn i.i.d. and the relay observes only past outputs of the channel. The capacity of the causal case is directly achievable from Theorem 1 by substituting and .

III-B Cases of State-Dependent MAC with Partial Cribbing

Let us investigate the role of the auxiliary random variable in the MAC configuration via special cases of Theorem 2. We consider here the naive case of one state component, i.e., is degenerate. We denote to emphasize this. Proofs for these cases are given in Appendix B.

Case A: Multiple Access Channel with states (without cribbing):

Fig. 5: Case A - MAC with CSI at one encoder.

Consider the case of a multiple access channel with CSI at Encoder 1 and the decoder, depicted in Fig. 5. It is a special case without cribbing (i.e. ). The capacity region, characterized by Jafar [21], is defined by all pairs that satisfy

(8a)
(8b)
(8c)

with PMFs that factorize as .
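For concreteness, bounds of this type typically read as follows (standard notation assumed, with the state $S$ available at Encoder 1 and at the decoder; cf. [21]):

% Sketch of the rate bounds for the MAC with CSI at one encoder and the decoder.
\begin{align*}
  R_1 &\le I(X_1; Y \mid X_2, S), \\
  R_2 &\le I(X_2; Y \mid X_1, S), \\
  R_1 + R_2 &\le I(X_1, X_2; Y \mid S),
\end{align*}
evaluated over input distributions in which $X_1$ may depend on $S$ while $X_2$ is independent of it.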

Case B: Multiple Access Channel with Conferencing:

Fig. 6: Case B - MAC with CSI at one encoder and conferencing.

Consider a case of MAC with conferencing, as depicted in Fig. 6. In this case, the channel depends only on part of , which we denote by . The other part of , denoted by , is known in a strictly causal manner to Encoder 2.

This setting is different from previous works, which considered rate-limited cooperation. Here we use a sequence with noiseless communication and a fixed alphabet . It turns out that the capacity region of the channel is the same for both a strictly causal and a non-causal cooperation link. The capacity of both cases when and is

(9a)
(9b)
(9c)
(9d)

for .

Case C: Point-to-point with non-causal CSI:

Fig. 7: Case C - PTP with non-causal CSI.

Consider a configuration of a PTP channel with non-causal CSI, depicted in Fig. 7. This is a special case of the MAC when and . The capacity of this channel was given by Wolfowitz [22, Theorem 4.6.1] as

(10)
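In standard notation, with the state $S$ known non-causally at both the encoder and the decoder, this capacity takes the familiar form

% PTP capacity with state known at both encoder and decoder.
\begin{equation*}
  C \;=\; \max_{p(x \mid s)} I(X; Y \mid S).
\end{equation*}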

IV Point-to-point with State Encoder and Causality Constraint

(a) Non-causal CSI, strictly-causal cribbing
(b) Causal CSI, causal cribbing
Fig. 8: Comparison between causal and non-causal CSI.

IV-A The State Encoder with a Causality Constraint

We introduce a new setting, depicted in Fig. 8, of a PTP channel with a state encoder (SE) and a causality constraint. The SE has non-causal access to the CSI and assists the encoder in increasing the transmission rate. The causality constraint forces the encoder to depend only on past observations of the SE. This setting is attractive since it is a special case of the MAC, and similar settings may be special cases of more complicated models.

The setting is defined for two cases: one with non-causal CSI and the other with causal CSI. Explicitly, the setting with non-causal CSI is defined by a state encoder (E1) , an encoder (E2) and a decoder (D). Note that the encoder depends on strictly causal information from the state encoder. The second setting, however, is defined slightly differently. First, the state encoder depends on causal CSI, i.e., . Second, the encoder can use causal, rather than strictly causal, information from the state encoder. Namely, . We first discuss the inclusion of the non-causal case in the MAC setting.

To apply the MAC with partial cribbing to this case, consider the following situation with only one state component. Encoder 1 has no access to the channel (i.e., ) and no message to send (). Its only job is to assist Encoder 2 by compressing the CSI and sending it via a private link. The private link is the partial cribbing with . When the link between the encoders is non-causal, i.e., when , using the characterization of Rosenzweig [19] with a rate limit of yields

(11)

When there is a causality constraint, the transmission at time can only depend on the strictly causal output of the state encoder, i.e., ; nonetheless, the capacity remains the same.

Briefly, the capacity is achieved as follows. The transmission is divided into blocks (block-Markov coding). In each block, Encoder 1, which serves as the state encoder, sends a compressed version of the states of the next block. After each transmission block, Encoder has a compressed version of the state of the current transmission block and uses it for coherent transmission.

IV-B An Example - Non-causal CSI Increases Capacity

Non-causal CSI in the MAC configuration does increase the capacity region in the general case. The following example proves this claim. Consider a model where the channel states are coded, as depicted in Fig. 8. Case (a) is the non-causal case, and (b) is the causal one. As we previously discussed, the channel in Fig. 8(a) is a special case of the non-causal state-dependent MAC with partial cribbing. Similarly, Fig. 8(b) is a special case of the causal state-dependent MAC with partial cribbing [4].

Since this is a point-to-point configuration, it is a bit surprising that the non-causal CSI increases capacity; when the states are perfectly provided to the encoder, the capacity with causal CSI and with non-causal CSI coincide. As we will next show, in the causal case, the size of can enforce lossy quantization on the state, while in the non-causal case, the states can be losslessly compressed.


Fig. 9: Example of a state dependent channel.

For every channel and state distribution ,

(12)

where and are the capacities of the non-causal and causal CSI configurations, respectively. Assume that the state distribution is

(13)

For each state there is a different channel; these channels are depicted in Fig. 9: a Z-channel for , an S-channel for , where both share the same parameter , and a noiseless channel for .

The idea is that when the CSI is known non-causally we can compress it, while in the causal case we cannot. Assume that is binary, and is small enough, for instance , such that

(14)

Therefore, taking satisfies and results in the non-causal capacity

(15)

where

(16)

On the other hand, the capacity for causal CSI is

(17)

The capacity can be achieved by one of several deterministic functions . Each function maps both and to one letter, and to the other letter. Note that this operation causes a lossy quantization of the CSI. For comparison, we also provide the capacity when there is no CSI at the encoder, which is

(18)
TABLE I: Capacity of PTP with coded CSI - numerical evaluations for . Columns: No-CSI, Causal CSI, Non-causal CSI.

The capacities of the three channels (non-causal, causal, no CSI) for are summarized in Table I. There are two points where the three configurations result in the same capacity. The first is when ; in this case, the channel is noiseless for and the capacity is . There is no need for CSI at the encoder and, therefore, the capacity is the same in all three cases. The second point is when ; the channel is stuck at and stuck at for and , respectively, and noiseless for . In this case we can set for every and achieve the capacity. Therefore, the encoder does not use the CSI in those cases. However, for every , the capacity of the non-causal case is strictly larger than that of the others, which confirms that non-causal CSI indeed increases the capacity region.

V Proof for Theorem 1

V-A Direct

Fig. 10: Indirect covering: choosing a sequence that points toward a coordinated sequence .

Before proving the achievability part, let us investigate important properties of the cooperative-bin-forward scheme. This scheme was derived by Kolte et al. [4] and is based on mapping the discrete finite space to a range of indices . We refer to this function as cooperative-binning for two reasons: 1) it randomly maps into bins, and 2) the random binning is independent of all other random variables, which makes it 'suitable' for cooperation. For instance, a sequence can be drawn given , but its bin index is drawn uniformly, i.e., , and is not a function of . Thus, if we observe we can find without knowing . This index is used to create cooperation between the encoder and the relay.
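To make the binning step concrete, the following toy Python sketch (with hypothetical block length, alphabet and number of bins, not taken from the paper) draws such a uniform bin map once per codebook and evaluates it at an observed relay-output sequence; because the map is independent of how the sequences were generated, the encoder and the relay compute the same index.

import itertools
import random

# A minimal sketch of the cooperative-binning map: every length-n relay-output
# sequence y1^n is assigned a bin index uniformly at random, independently of
# how the sequence itself was generated. All parameters below are toy values.
random.seed(0)
n = 4                      # block length (toy value)
Y1_ALPHABET = [0, 1]       # assumed binary relay-output alphabet
NUM_BINS = 8               # number of bins, i.e., 2^{nR'} for some toy rate R'

# Cooperative-binning function: bin(y1^n) ~ Uniform{0,...,NUM_BINS-1},
# drawn once when the codebook is generated.
bin_map = {seq: random.randrange(NUM_BINS)
           for seq in itertools.product(Y1_ALPHABET, repeat=n)}

# Because the relay output is a deterministic function of the inputs (and the
# state), the encoder can compute the same bin index that the relay observes,
# without the relay ever knowing the state.
y1_block = (0, 1, 1, 0)
print("bin index agreed upon by encoder and relay:", bin_map[y1_block])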

Lemma 1 (Indirect covering lemma)

Let be a collection of sequences, where each sequence is drawn i.i.d. according to . For every , let . For any , if

(19)
(20)

then,

(21)

where as ,

The proof of this lemma is given in Appendix A. Lemma 1 states that by choosing and , we can guarantee (with high probability) that we will see approximately different bin indices. Having these indices allows us to assign a sequence to each one of them or to treat them as bins (i.e., use the index to create a list). For instance, if we assign each index a sequence , we can perform covering [23, Lemma 3.3] in order to create coordination with another sequence , by choosing .

The coding scheme works as follows. Divide the transmission into blocks and choose a distribution . Draw a codebook for each block, which consists of the following: a cooperative-binning function (a map from to , drawn uniformly), a collection of codewords for each indexed by , a sequence for each , a cooperation codeword and a relay codeword for each .

To send a message , recall that the link from the encoder to the relay is deterministic. Therefore, the encoder can dictate which sequence the relay will observe during the block. Thus, it looks at the collection of sequences and searches for such that points toward a cooperation codeword that is coordinated (typical) with of the next block. This lookup is illustrated in Fig. 10, and we refer to it as indirect covering (for each there is a bin index, and for each bin index there is a sequence; therefore, the covering is called indirect). Lemma 1 guarantees that if we take and , then with high probability we will see at least one coordinated sequence . Afterwards, the transmission codeword is chosen according to . In the next block, the relay codeword is chosen given . Note that is coordinated with through .
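As a rough illustration of this indirect-covering lookup (a toy Python sketch with made-up alphabet sizes, target PMF and typicality test, not the paper's actual construction), the encoder follows each candidate relay sequence through the bin map to a cooperation codeword and keeps one that is empirically coordinated with the states of the next block.

import itertools
import random
from collections import Counter

# Toy indirect-covering search: follow each candidate relay sequence y1^n
# through the bin map to a cooperation codeword u^n(bin), and keep a candidate
# whose codeword is empirically coordinated with the NEXT block's states s^n.
random.seed(1)
n, NUM_BINS, EPS = 8, 64, 0.5
BINARY = [0, 1]            # assumed binary alphabets for S, U and Y1

bin_map = {seq: random.randrange(NUM_BINS)
           for seq in itertools.product(BINARY, repeat=n)}
u_codebook = {b: tuple(random.choice(BINARY) for _ in range(n))
              for b in range(NUM_BINS)}
P_SU = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # assumed target P_{SU}

def jointly_typical(s_seq, u_seq):
    """Crude strong-typicality check of (s^n, u^n) w.r.t. the target P_SU."""
    emp = Counter(zip(s_seq, u_seq))
    return all(abs(emp[k] / n - p) <= EPS * p for k, p in P_SU.items())

s_next = tuple(random.choice(BINARY) for _ in range(n))  # states of the next block
candidates = [y1 for y1 in bin_map
              if jointly_typical(s_next, u_codebook[bin_map[y1]])]
# At these toy sizes a coordinated candidate may or may not exist; Lemma 1 gives
# the rate conditions under which one exists with high probability.
print("coordinated candidate:", candidates[0] if candidates else None)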

The decoding procedure is performed forward using a sliding-window technique, derived by Carleial [24]. At each block , the decoder imitates the encoder's procedure for every possible and finds and . To ensure that the mapping from to is unique, we take and . Then, the decoder looks for such that: 1) all sequences in the current block are coordinated, and 2) are coordinated. Setting and ensures reliability of the decoding procedure.

We now give a formal proof of the achievability part. Fix a PMF and let be such that . We use block-Markov coding as follows. Divide the transmission into blocks, each of length . In each communication block , we transmit a message at rate . Each message is divided into and , with corresponding rates and , respectively.

Codebook: For each block , a codebook is generated as follows:
  • Binning: Partition the set into bins, by choosing uniformly and independently an index .

  • Cooperation codewords: Generate -codewords

    (22a)
  • Relay codewords: For each generate -codeword .

  • -codewords: For each , and , generate -codewords