Capacity Bounds for State-Dependent Broadcast Channels

K. G. Nagananda, Chandra R. Murthy, and Shalinee Kishore

K. G. Nagananda and Shalinee Kishore are with the Dept. of ECE, Lehigh University, Bethlehem, PA, U.S.A. E-mail: {kgn209,skishore}@lehigh.edu. Chandra R. Murthy is with the Dept. of ECE, Indian Institute of Science, Bangalore, India. E-mail: cmurthy@ece.iisc.ernet.in. Corresponding author: K. G. Nagananda.
Abstract

In this paper, we derive information-theoretic performance limits for three classes of two-user state-dependent discrete memoryless broadcast channels with noncausal side-information at the encoder. In the first class of channels, a sender broadcasts two independent messages to two non-cooperating receivers; in the second class, each receiver is given the message it need not decode; and in the third class, the sender is constrained to keep each message confidential from the unintended receiver. We derive inner bounds for all three classes of channels. For the first and second classes, we discuss the rate penalty on the achievable region incurred in dealing with side-information. For the third class, we characterize the rate penalties incurred in dealing not only with side-information but also with the confidentiality constraints. We then derive outer bounds, presenting an explicit characterization of the sum-rate bounds for the first and third classes of channels. For the second class, we show that our outer bound is within a fixed gap of the achievable rate region, where the gap is independent of the distribution characterizing this class of channels. The channel models presented in this paper are useful variants of the classical broadcast channel, and provide fundamental building blocks for cellular downlink communications with side-information at the encoder (such as knowledge of fading in the wireless medium or of interference caused by neighboring nodes in the network), for two-way relay communications, and for secure wireless broadcasting.

Keywords: State-dependent broadcast channels, side-information, rate regions, outer bounds.

1 Introduction

The information-theoretic study of broadcast channels (BC) was initiated by Cover in [1]. In the classical setting, the BC comprises a sender who wishes to transmit independent messages to noncooperating receivers. The largest known inner bound on the capacity region of the two-user BC was derived by Marton [2]. Recently, ideas conjectured to lead to a larger inner bound were discussed in [3]. Capacity outer bounds were presented by Sato in [4], utilizing the fact that the capacity region of the BC depends only on the marginal transition probabilities. Nair and El Gamal provided outer bounds for the two-user case [5], based on results for the more capable BC [6]. Liang et al. generalized the outer bounds of [5] by deriving the New-Jersey outer bound. Some properties of the New-Jersey outer bound were exposed in [7], where it was shown to be equivalent to the computable UVW-bound with bounded cardinalities of the auxiliary random variables.

Several variants of this classical setting have also received considerable attention. One of the most prominent variants is the state-dependent BC with side-information, where the probability distribution characterizing the channel depends on a state process, and the channel state is made available as side-information at the transmitter, at the receiver, or at both ends. Capacity inner bounds for the two-user BC with noncausal side-information at the transmitter were derived in [8], where Marton's achievability scheme was extended to state-dependent channels. In [9], inner and outer bounds were derived for the degraded BC with noncausal side-information at the transmitter; the capacity region was derived for the case where the side-information is made available to the encoder in a causal manner. The capacity region of the BC with receiver side-information was derived in [10], where a genie provides each receiver with the message it need not decode. To the best of the authors' knowledge, outer bounds for the two-user BC with noncausal side-information at the encoder have not appeared in the literature.

Yet another issue in wireless communications, owing to the broadcast nature of the wireless medium, is information security: the broadcast nature of wireless networks facilitates malicious or unauthorized access to confidential data, denial-of-service attacks, corruption of sensitive data, and so on. An information-theoretic approach to such security problems has gained rapid momentum, and is commonly referred to as information-theoretic confidentiality or wireless physical-layer security [11]. The information-theoretic approach to secure broadcasting was inspired by the pioneering work of Csiszár and Körner [12], who derived capacity bounds for the two-user BC in which the sender transmits a private message to one receiver and a common message to both receivers, while keeping the private message confidential from the other receiver. Secure broadcasting with a single transmitter and multiple receivers in the presence of an external eavesdropper was considered in [13], where the secrecy capacity region was obtained for several special classes of channels. In [14], capacity bounds were derived for the BC in which a sender broadcasts two independent messages to two receivers, while keeping each message confidential from the unintended receiver. Capacity results and bounds for the Gaussian BC with confidential messages were reported in [15]-[17]. The reader is referred to [18] for a comprehensive review of physical-layer security in BC. However, to the best of the authors' knowledge, the joint problem of side-information and confidentiality on the BC has not been addressed in the literature.

1.1 Main contributions

In this paper, we aim to provide useful insights into the effect of noncausal side-information at the encoder on the classical two-user BC, on the BC with genie-aided receiver side-information, and on the BC with confidentiality constraints on the messages. Towards this end, we define three classes of two-user discrete memoryless BC with noncausal side-information at the encoder. Of particular interest is the third class of channels (described below), which provides a fundamental building block for jointly addressing side-information and confidentiality in the BC.

  1. A sender broadcasts two independent messages to two non-cooperating receivers (see Fig. 1(a)). We derive an inner bound for this class of channels and characterize the rate penalty for dealing with noncausal side-information at the encoder. We are mainly concerned with outer bounds for this class of channels, where we present an explicit single-letter characterization of the sum-rate bound, along with bounds on the single-user rates. An example of this class of channels is a base station transmitting to two mobile receivers, with the base station having prior knowledge of the interference caused by a transmitter located in its vicinity, e.g., through a backhaul network.

  2. A sender broadcasts two independent messages to two receivers, with each receiver having a priori knowledge of the message it need not decode (see Fig. 1(b)). An example of this scenario is full-duplex communication between two nodes aided by a relay: the relay node broadcasts the messages to the two terminals, with each terminal knowing its own message. We devise an achievability scheme to derive an inner bound for this class of channels and show that the achievable rate for each user is, in fact, the maximum rate achievable on a single-user channel with states known a priori at the encoder. We also derive an outer bound which is within a fixed gap of the achievable region, where the gap is independent of the distribution characterizing this class of channels.

  3. A sender broadcasts two independent messages to two receivers, such that each message is kept confidential from the unintended receiver (see Fig. 1(c)). To the best of the authors' knowledge, this is the first study of the simultaneous impact of side-information and confidentiality constraints on the BC. An inner bound for this class of channels is derived by employing stochastic encoders to satisfy the confidentiality constraints; we characterize the rate penalties incurred in dealing not only with side-information but also with the confidentiality constraints. One of the outer bounds is derived by employing a genie which gives one of the receivers the message it need not decode, while the other receiver computes the equivocation rate treating this message as side-information. We also derive another outer bound with an explicit characterization of the sum-rate. As an example of this class of channels, we can extend the example considered for the first class of channels with the additional constraint of keeping each message confidential from the unintended receiver.

The remainder of the paper is organized as follows. In Section 2, we introduce the notation used and provide a mathematical model for the discrete memoryless version of the channels considered in this paper. In Section 3, we summarize the main results of this paper by describing inner and outer bounds for all the channel models, and provide related discussion. The proofs of the achievability theorems can be found in Section 4, while the proofs of the outer bounds are provided in Section 5. Finally, we conclude the paper in Section 6. The encoder error analysis is relegated to Appendix A.

2 System model and notation

We refer to the channels of the three classes described in Section 1.1 as channels of the first, second, and third class, respectively. Calligraphic letters denote finite sets, with a probability function defined on them; $n$ denotes the number of channel uses, and $i$ denotes the channel-use index. Uppercase letters denote random variables (RVs), while boldface uppercase letters denote sequences of RVs; for instance, $\mathbf{Y}_j = (Y_{j,1}, \ldots, Y_{j,n})$. Lowercase letters denote particular realizations of RVs, and boldface lowercase letters denote vectors. The receivers are indexed by $j$. The discrete RVs $X$ and $Y_j$ denote the channel input and the output at receiver $j$, respectively. The encoder is supplied with the state sequence $\mathbf{S} = (S_1, \ldots, S_n)$ as side-information in a noncausal manner. The channel is assumed to be memoryless and is characterized by the conditional distribution $p(y_1, y_2 \mid x, s)$. Unless otherwise stated, $j \in \{1, 2\}$.

To transmit its messages, the sender generates two RVs $W_1$ and $W_2$, where $\mathcal{W}_j$ denotes the set of message indices intended for receiver $j$. Without loss of generality, $|\mathcal{W}_j| = 2^{nR_j}$ is assumed to be an integer, with $R_j$ being the transmission rate intended for receiver $j$. $W_j$ denotes the message the sender intends to transmit to receiver $j$, and is assumed to be independently generated and uniformly distributed over the finite set $\mathcal{W}_j$. The integer $w_j \in \mathcal{W}_j$ denotes a particular realization of $W_j$ and is referred to as the message index.

Given the conditional distribution $p(y_1, y_2 \mid x, s)$ characterizing the channel, a code for channels of the first and second classes comprises an encoding function that maps each pair of message indices, together with the noncausally known state sequence $\mathbf{s}$, to a channel input sequence $\mathbf{x}$. For channels of the third class, the code comprises a stochastic encoder, defined by a matrix of conditional probabilities that specify, for each pair of message indices $(w_1, w_2)$ and each side-information sequence $\mathbf{s}$, the probability that the pair is encoded as $\mathbf{x}$. For all channel models, there are two decoders, one at each receiver, each of which maps its received sequence to an estimate of its intended message index.

The average probability of decoding error, averaged over all codes, is $P_e^{(n)} = \Pr[(\hat{W}_1, \hat{W}_2) \neq (W_1, W_2)]$, where $\hat{W}_j$ denotes the estimate of $W_j$ produced by the decoder at receiver $j$. A rate pair $(R_1, R_2)$ is said to be achievable if there exists a sequence of codes such that $P_e^{(n)}$ is arbitrarily small for $n$ sufficiently large. Furthermore, for channels of the third class, the following constraints [19] on the conditional entropy must be satisfied for $(R_1, R_2)$ to be considered achievable:

$\tfrac{1}{n} H(W_1 \mid \mathbf{Y}_2) \geq R_1 - \epsilon$,   (1)
$\tfrac{1}{n} H(W_2 \mid \mathbf{Y}_1) \geq R_2 - \epsilon$.   (2)

The capacity region is defined as the closure of the set of all achievable rate pairs $(R_1, R_2)$.
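For concreteness, these definitions can be collected in the following standard form; the maps $f$ and $g_j$ and the symbol $P_e^{(n)}$ are generic labels used here only as shorthand for the encoder, the decoders, and the average error probability, with $\mathcal{X}$, $\mathcal{S}$, and $\mathcal{Y}_j$ denoting the input, state, and output alphabets:

\begin{align}
  f &: \mathcal{W}_1 \times \mathcal{W}_2 \times \mathcal{S}^n \to \mathcal{X}^n , &
  g_j &: \mathcal{Y}_j^n \to \mathcal{W}_j , \quad j = 1, 2, \\
  \mathbf{X} &= f(W_1, W_2, \mathbf{S}) , &
  P_e^{(n)} &= \Pr\bigl[\, g_1(\mathbf{Y}_1) \neq W_1 \ \text{or} \ g_2(\mathbf{Y}_2) \neq W_2 \,\bigr] .
\end{align}

For channels of the third class, $f$ is replaced by the stochastic encoder $f(\mathbf{x} \mid w_1, w_2, \mathbf{s})$, and the equivocation constraints (1)-(2) must hold in addition to $P_e^{(n)} \to 0$.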

3 Main results

In this section, we state the achievability and converse theorems for all the channel models considered in this paper, and provide related discussion. We use auxiliary RVs defined on finite sets; the bounds below are stated in terms of the auxiliary RVs $U$ and $V$.

3.1 Channels of the first class

For channels of the first class, we consider the set of all joint probability distributions that can be factored as $p(s)\, p(u, v \mid s)\, p(x \mid u, v, s)\, p(y_1, y_2 \mid x, s)$. A lower bound on the capacity region is described by the set defined as the union, over all such distributions, of the convex hull of the set of all rate pairs $(R_1, R_2)$ that simultaneously satisfy (3)-(5):

$R_1 \leq I(U; Y_1) - I(U; S)$,   (3)
$R_2 \leq I(V; Y_2) - I(V; S)$,   (4)
$R_1 + R_2 \leq I(U; Y_1) + I(V; Y_2) - I(U; V) - I(U, V; S)$,   (5)

where $U$ and $V$ are constrained to satisfy the Markov chain $(U, V) \rightarrow (X, S) \rightarrow (Y_1, Y_2)$.

Theorem 3.1

The region described by (3)-(5) is achievable for channels of the first class; that is, it constitutes an inner bound on their capacity region.

For proof, see Section 4.1.

An outer bound for channels of the first class is described by the set defined as the union, over all admissible distributions, of the rate pairs $(R_1, R_2)$ that simultaneously satisfy (6)-(7):

(6)
(7)

where .

Theorem 3.2

The region described by (6)-(7) contains every achievable rate pair for channels of the first class; that is, it constitutes an outer bound on their capacity region.

The proof of Theorem 3.2 can be found in Section 5.1. This outer bound, however, does not include a bound on the sum-rate. To bound the sum-rate explicitly, we provide the following alternative outer bound for channels of the first class. We consider a set of joint probability distributions with a suitable factorization; the outer bound is described by the set defined as the union, over all such distributions, of the rate pairs $(R_1, R_2)$ that simultaneously satisfy (8)-(11):

(8)
(9)
(10)
(11)

where the following Markov chain is satisfied: .

Theorem 3.3

The region described by (8)-(11) constitutes an outer bound on the capacity region of channels of the first class.

Section 5.2 contains the proof of Theorem 3.3.

3.2 Channels of the second class

For channels of the second class, we consider the set of all joint probability distributions of the form $p(s)\, p(u, v \mid s)\, p(x \mid u, v, s)\, p(y_1, y_2 \mid x, s)$. A lower bound on the capacity region is described by the set defined as the union, over all such distributions, of the convex hull of the set of all rate pairs $(R_1, R_2)$ that simultaneously satisfy (12)-(13):

$R_1 \leq I(U; Y_1) - I(U; S)$,   (12)
$R_2 \leq I(V; Y_2) - I(V; S)$,   (13)

where the Markov chain $(U, V) \rightarrow (X, S) \rightarrow (Y_1, Y_2)$ holds.

Theorem 3.4

The region described by (12)-(13) is achievable for channels of the second class; that is, it constitutes an inner bound on their capacity region.

The proof of Theorem 3.4 is relegated to Section 4.2.

An outer bound for channels of the second class is described by the set defined as the union, over all admissible distributions, of the rate pairs $(R_1, R_2)$ that simultaneously satisfy (14)-(15):

(14)
(15)

with .

Theorem 3.5

The region described by (14)-(15) constitutes an outer bound on the capacity region of channels of the second class.

The proof of Theorem 3.5 can be found in Section 5.3.

3.3 Channels of the third class

For channels of the third class, we consider the set of all joint probability distributions that can be written as $p(s)\, p(u, v \mid s)\, p(x \mid u, v, s)\, p(y_1, y_2 \mid x, s)$. An inner bound on the capacity region is described by the set defined as the union, over all such distributions, of the convex hull of the set of all rate pairs $(R_1, R_2)$ that simultaneously satisfy (16)-(18):

(16)
(17)
(18)

where the Markov chain $(U, V) \rightarrow (X, S) \rightarrow (Y_1, Y_2)$ is satisfied.

Theorem 3.6

The region described by (16)-(18) is achievable for channels of the third class subject to the confidentiality constraints (1)-(2); that is, it constitutes an inner bound on their capacity region.

Section 4.3 contains the proof of Theorem 3.6.

An outer bound for channels of the third class is described by the set defined as the union, over all admissible distributions, of the rate pairs $(R_1, R_2)$ that simultaneously satisfy (19)-(20):

(19)
(20)

where the quantities appearing in (19) and (20) are given by (21)-(24), respectively.

(21)
(22)
(23)
(24)

The expressions (21)-(24) are obtained by letting a genie give one of the receivers the message it need not decode, while the other receiver computes the equivocation using this message as side-information.

Theorem 3.7

The region described by (19)-(24) constitutes an outer bound on the capacity region of channels of the third class.

The proof of Theorem 3.7 can be found in Section 5.4. We also provide the following outer bound for channels of the third class, which explicitly characterizes the sum-rate. Consider a set of joint probability distributions with a suitable factorization; the outer bound is described by the set defined as the union, over all such distributions, of the rate pairs $(R_1, R_2)$ that simultaneously satisfy (25)-(28):

(25)
(26)
(27)
(28)

where .

Theorem 3.8

The region described by (25)-(28) constitutes an outer bound on the capacity region of channels of the third class.

The proof of Theorem 3.8 can be found in Section 5.5.

3.4 Discussion

A pictorial representation of the rate region for channels of the first class is shown in Fig. 2. When $R_2 = 0$, the channel resembles a single-user channel with side-information (the Gel'fand-Pinsker (GP) channel [20]), and the sender can transmit to receiver 1 at the maximum rate achievable over the corresponding GP channel. At this operating point, the maximum achievable $R_2$ is likewise obtained by treating the link to receiver 2 as a single-user channel with side-information; the rectangle defined by these two rates is therefore achievable. By exchanging the roles of the two receivers and following similar arguments, a second pair of corner points, and hence a second achievable rectangle, is obtained. Since the two dominant corner points are achievable, the sum-rate bound in (5) is obtained by deriving a bound on the binning rates (see Appendix A), and, owing to the convexity of the rate region, any point on the line segment joining these corner points is also achievable. Therefore, an achievable rate region for channels of the first class is described by the pentagon shown in Fig. 2.
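For reference, the corner points just described are governed by the GP capacity formula of [20]; in terms of the auxiliary RV $U$, the state $S$, and the output $Y_1$ of the marginal channel to receiver 1, it reads

\begin{equation}
  C_{\mathrm{GP}} \;=\; \max_{p(u \mid s),\; x = f(u,s)} \bigl[\, I(U; Y_1) - I(U; S) \,\bigr],
\end{equation}

with the corresponding expression for receiver 2 obtained by replacing $Y_1$ with $Y_2$.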

In the absence of side-information, i.e., when there is no state $S$, the channel of the first class reduces to the classical two-user BC, whose rate region is described by the convex hull of the set of all rate pairs that satisfy the following inequalities:

$R_1 \leq I(U; Y_1)$,   (29)
$R_2 \leq I(V; Y_2)$,   (30)
$R_1 + R_2 \leq I(U; Y_1) + I(V; Y_2) - I(U; V)$.   (31)

For channels of the second class, each bound in (12)-(13) is the capacity of GP's single-user channel with noncausal side-information. In the absence of side-information, the region reduces to the capacity region of the BC in which each receiver is given the message it need not decode [10]. Furthermore, the outer bound (14)-(15) is within a fixed gap of the achievable region, where the gap is independent of the distribution characterizing this class of channels.
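As a purely illustrative aside, the single-user GP rate appearing in each of these bounds can be approximated numerically for toy alphabets by a brute-force search over the auxiliary distribution $p(u \mid s)$ and the deterministic maps $x = f(u, s)$. The Python sketch below is not part of the analysis above; the names gp_rate_lower_bound, p_s, and p_y_given_xs, as well as the toy channel at the end, are hypothetical choices made only for illustration.

import itertools
import numpy as np

def mutual_information(p_ab):
    # I(A;B) in bits, computed from a joint pmf matrix p_ab[a, b].
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a * p_b)[mask])))

def gp_rate_lower_bound(p_s, p_y_given_xs, grid=21):
    # Crude estimate of max_{p(u|s), x=f(u,s)} [I(U;Y) - I(U;S)] for a
    # single-user channel p(y|x,s) with state pmf p(s); the auxiliary
    # alphabet is fixed to |U| = 2 and p(u|s) is searched on a grid.
    x_size, s_size, y_size = p_y_given_xs.shape
    u_size = 2
    probs = np.linspace(0.0, 1.0, grid)
    best = 0.0
    for f in itertools.product(range(x_size), repeat=u_size * s_size):
        f = np.array(f).reshape(u_size, s_size)   # deterministic map x = f(u, s)
        for q in itertools.product(probs, repeat=s_size):
            p_u_given_s = np.stack([np.array(q), 1.0 - np.array(q)])   # shape (|U|, |S|)
            p_us = p_u_given_s * p_s[None, :]                          # joint pmf of (U, S)
            p_uy = np.zeros((u_size, y_size))
            for u in range(u_size):
                for s in range(s_size):
                    p_uy[u] += p_us[u, s] * p_y_given_xs[f[u, s], s]   # joint pmf of (U, Y)
            best = max(best, mutual_information(p_uy) - mutual_information(p_us))
    return best

# Hypothetical toy channel: binary X, S, Y with Y = X xor S, flipped with probability 0.1.
p_s = np.array([0.5, 0.5])
p_y_given_xs = np.zeros((2, 2, 2))
for x in range(2):
    for s in range(2):
        p_y_given_xs[x, s, x ^ s] = 0.9
        p_y_given_xs[x, s, 1 - (x ^ s)] = 0.1
print(round(gp_rate_lower_bound(p_s, p_y_given_xs), 3))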

For channels of the third class, some of the terms in the bounds (16)-(18) quantify the rate penalty incurred in satisfying the confidentiality constraints on the messages, while other terms quantify the rate penalty incurred in dealing with the side-information.

Using a combination of results for the GP channel and for wiretap channels with side-information [21], we obtain a pictorial representation of the rate region for channels of the third class, shown in Fig. 3. The arguments are similar to those used for channels of the first class, so we explain the construction of Fig. 3 only briefly. One corner point corresponds to the maximum achievable $R_1$ (attained when $R_2 = 0$); exchanging the roles of the two receivers gives the corresponding corner point for $R_2$. These corner points are achievable by treating the links to receivers 1 and 2, respectively, as wiretap channels with side-information. The line joining the dominant corner points corresponds to the sum-rate bound, and, owing to the convexity of the rate region, any point on this line segment is also achievable. Therefore, an achievable rate region for channels of the third class is described by the pentagon shown in Fig. 3.

If the confidentiality constraints (1)-(2) are relaxed, the channel of the third class reduces to the channel of the first class, whose rate region is described by (3)-(5). Further, in the absence of side-information, the channel reduces to the classical two-user BC, whose rate region is described by (29)-(31). Lastly, if the encoder satisfies the confidentiality constraints in the absence of side-information, the channel reduces to the BC with two independent confidential messages, whose rate region was first characterized by Liu et al. [14]; it is described by the convex hull of the set of all rate pairs that satisfy the following inequalities:

$R_1 \leq I(U; Y_1) - I(U; Y_2 \mid V) - I(U; V)$,   (32)
$R_2 \leq I(V; Y_2) - I(V; Y_1 \mid U) - I(U; V)$.   (33)

3.5 Relation to past work

For channels of the first class, an inner bound was presented in [8] by extending Marton's achievability scheme for the classical two-user BC to include noncausal side-information at the encoder. In this paper, we employ Marton's technique and use results from the second moment method [22] to derive an inner bound that matches the results presented in [8]. However, our method is simpler and generalizes well to other channel models, e.g., the channels of the third class considered in this paper. For the outer bound (specifically, for the sum-rate), we generalize the technique presented in [5] to handle side-information at the encoder. When the side-information constraint is relaxed, our result reduces to the one presented for the classical two-user BC in [5].
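For orientation, the binning-rate bookkeeping behind this approach can be sketched as follows (a sketch only, under the standard covering and packing conditions, with $2^{nr_1}$ and $2^{nr_2}$ denoting the bin sizes of the two auxiliary codebooks; the precise analysis is the subject of Appendix A):

\begin{align}
  r_1 &\geq I(U; S), \qquad r_2 \geq I(V; S), \qquad
  r_1 + r_2 \geq I(U; S) + I(V; S) + I(U; V \mid S), \\
  R_1 + r_1 &\leq I(U; Y_1), \qquad R_2 + r_2 \leq I(V; Y_2).
\end{align}

Eliminating the binning rates $r_1$ and $r_2$ from these constraints recovers the bounds (3)-(5).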

Channels of the second class were also addressed in [23], where an inner bound was derived by employing Marton's achievability scheme. An outer bound was also suggested in [23], but without a formal proof. In this paper, we derive an inner bound by generalizing the method of [10] to incorporate noncausal side-information at the encoder. Our inner bound coincides with the one presented in [23], but once again the proof technique is much simpler. Furthermore, for the outer bound, we explicitly address the problem of dealing with a two-dimensional rate region using a single auxiliary random variable.

For channels of the third class, we show that when the confidentiality constraints are relaxed, our achievable rate region reduces to the region presented for channels of the first class, and hence to the one presented in [8]. On the other hand, in the absence of side-information, our achievable region includes an explicit bound on the sum-rate for the two-user BC with confidentiality constraints (a model considered in [14]). This further underscores the generality of our proof technique.

4 Proofs of achievability theorems

In this section, we prove Theorems 3.1, 3.4, and 3.6. For any $\epsilon > 0$, we denote by $T_\epsilon^{(n)}$ the set of $\epsilon$-typical $n$-sequences with respect to the relevant distribution. For all channel models, the encoder is given an $\epsilon$-typical state sequence $\mathbf{s}$ in a noncausal manner.
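As an illustrative aside, the notion of typicality used throughout this section can be made concrete with a short numerical check. The Python sketch below (the helper name is_typical is a hypothetical choice) tests whether the empirical distribution of a sequence is within $\epsilon$ of a target pmf, which is one common way of defining a strongly typical set.

from collections import Counter

def is_typical(seq, pmf, eps):
    # Strong-typicality check: the empirical frequency of every symbol must be
    # within eps of its probability under pmf, and symbols outside the support
    # of pmf must not appear in the sequence.
    n = len(seq)
    counts = Counter(seq)
    if any(symbol not in pmf for symbol in counts):
        return False
    return all(abs(counts.get(symbol, 0) / n - prob) <= eps for symbol, prob in pmf.items())

# Example: a length-20 binary sequence tested against a Bernoulli(0.5) pmf.
print(is_typical([0, 1] * 10, {0: 0.5, 1: 0.5}, eps=0.1))   # True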

4.1 Proof of Theorem 3.1

For channels of the first class, generate $2^{n(R_1 + r_1)}$ independent typical sequences $\mathbf{u}$ according to $p(u)$ and $2^{n(R_2 + r_2)}$ independent typical sequences $\mathbf{v}$ according to $p(v)$. Uniformly distribute the $\mathbf{u}$ sequences into $2^{nR_1}$ bins and the $\mathbf{v}$ sequences into $2^{nR_2}$ bins, so that each bin, indexed by the corresponding message, comprises $2^{nr_1}$ (respectively, $2^{nr_2}$) sequences. To send the message pair $(w_1, w_2)$, the encoder looks for a pair $(\mathbf{u}, \mathbf{v})$, with $\mathbf{u}$ in bin $w_1$ and $\mathbf{v}$ in bin $w_2$, that is jointly typical with the state sequence $\mathbf{s}$. An error is declared at the encoder if it is not possible to find such a pair. The encoder error analysis can be found in Appendix A. The channel input sequence $\mathbf{x}$ is then generated from the chosen pair $(\mathbf{u}, \mathbf{v})$ and the state sequence $\mathbf{s}$, e.g., symbol-by-symbol according to $p(x \mid u, v, s)$.
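To make the bin-search step concrete, the following Python sketch illustrates the random-binning idea; it is an illustration only, not the construction analyzed in Appendix A, and the names marton_gp_encode and p_uvs, as well as the toy parameters at the end, are hypothetical. Codewords are drawn i.i.d. from the marginals of a target joint pmf of $(U, V, S)$, binned, and the two bins indicated by the message pair are searched for a pair of codewords whose empirical joint distribution with the state sequence is close to the target.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def empirical_joint(u, v, s, shape):
    # Empirical pmf of the symbol triples (u_i, v_i, s_i).
    counts = np.zeros(shape)
    for a, b, c in zip(u, v, s):
        counts[a, b, c] += 1
    return counts / len(u)

def jointly_typical(u, v, s, p_uvs, eps):
    # Every entry of the empirical joint pmf must be within eps of the target.
    return np.all(np.abs(empirical_joint(u, v, s, p_uvs.shape) - p_uvs) <= eps)

def marton_gp_encode(w1, w2, s, p_uvs, n_bins, bin_size, eps=0.05):
    # Draw binned codebooks of u- and v-sequences i.i.d. from the marginals of
    # p_uvs, then search bins w1 and w2 for a pair jointly typical with s.
    n = len(s)
    p_u = p_uvs.sum(axis=(1, 2))
    p_v = p_uvs.sum(axis=(0, 2))
    cb_u = rng.choice(len(p_u), size=(n_bins, bin_size, n), p=p_u)
    cb_v = rng.choice(len(p_v), size=(n_bins, bin_size, n), p=p_v)
    for i, j in itertools.product(range(bin_size), repeat=2):
        u, v = cb_u[w1, i], cb_v[w2, j]
        if jointly_typical(u, v, s, p_uvs, eps):
            return u, v      # success: this pair determines the channel input
    return None              # encoder error: no jointly typical pair found

# Hypothetical toy target pmf: S uniform, U agrees with S with probability 0.6,
# V independent and uniform.
p_uvs = np.zeros((2, 2, 2))
for u in range(2):
    for v in range(2):
        for s_sym in range(2):
            p_uvs[u, v, s_sym] = 0.5 * (0.6 if u == s_sym else 0.4) * 0.5
s = rng.choice(2, size=400, p=[0.5, 0.5])
# A jointly typical pair is found with high probability for these parameters.
print(marton_gp_encode(w1=0, w2=1, s=s, p_uvs=p_uvs, n_bins=2, bin_size=32) is not None)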

At receiver $j$, the decoder looks for a codeword that satisfies a joint typicality condition with the received sequence $\mathbf{y}_j$. An error is declared at the decoder of receiver $j$ if it is not possible to find a unique message index whose bin contains such a codeword. From the union of events bound, the probability of decoder error at receiver $j$ can be upper bounded by the sum of the probabilities of the individual error events. From the asymptotic equipartition property (AEP) [24], for any $\epsilon > 0$ and $n$ sufficiently large, the probability that the transmitted codeword fails the typicality test vanishes, while the probability that any given incorrect codeword passes it decays exponentially in $n$. Therefore, for any $\epsilon > 0$ and $n$ sufficiently large, the probability of decoding error at receiver $j$ can be made arbitrarily small if