Capacity Bounds for State-Dependent Broadcast Channels
Abstract
In this paper, we derive information-theoretic performance limits for three classes of two-user state-dependent discrete memoryless broadcast channels with noncausal side-information at the encoder. The first class comprises channels in which a sender broadcasts two independent messages to two non-cooperating receivers; in the second class, each receiver is given the message it need not decode; and the third class comprises channels in which the sender is constrained to keep each message confidential from the unintended receiver. We derive inner bounds for all three classes of channels. For the first and second classes, we discuss the rate penalty on the achievable region incurred in dealing with side-information. For the third class, we characterize the rate penalties incurred not only in dealing with side-information, but also in satisfying the confidentiality constraints. We then derive outer bounds, presenting an explicit characterization of the sum-rate bounds for the first and third classes. For the second class, we show that our outer bounds are within a fixed gap of the achievable rate region, where the gap is independent of the distribution characterizing this class of channels. The channel models presented in this paper are useful variants of the classical broadcast channel, and provide fundamental building blocks for cellular downlink communications with side-information at the encoder (e.g., fading in the wireless medium, or interference caused by neighboring nodes in the network), two-way relay communications, and secure wireless broadcasting.
Keywords: State-dependent broadcast channels, side-information, rate regions, outer bounds.
1 Introduction
The information-theoretic study of broadcast channels (BCs) was initiated by Cover in [1]. In the classical setting, the BC comprises a sender who wishes to transmit independent messages to non-cooperating receivers. The largest known inner bound on the capacity region was derived by Marton [2]. Recently, ideas conjectured to lead to a larger inner bound were discussed in [3]. Capacity outer bounds were presented by Sato in [4] by utilizing the fact that the capacity region of the BC depends only on the marginal transition probabilities. Nair and El Gamal provided outer bounds for the two-user case [5], based on results for the more capable BC [6]. Liang et al. generalized the outer bounds of [5] by deriving the New Jersey outer bound. Some properties of the New Jersey outer bound were established in [7], where it was shown to be equivalent to the computable UVW-bound with bounded cardinalities of the auxiliary random variables.
Several variants of this classical setting have also received considerable attention. One of the most prominent is the state-dependent BC with side-information, where the probability distribution characterizing the channel depends on a state process, and the channel state is made available as side-information at the transmitter, at the receiver, or at both ends. Capacity inner bounds for the two-user BC with noncausal side-information at the transmitter were derived in [8], where Marton's achievability scheme was extended to state-dependent channels. In [9], inner and outer bounds were derived for the degraded BC with noncausal side-information at the transmitter; the capacity region was derived for the case where side-information is made available to the encoder in a causal manner. The capacity region for the BC with receiver side-information was derived in [10], where a genie provides each receiver with the message it need not decode. To the best of the authors' knowledge, outer bounds for the two-user BC with noncausal side-information at the encoder have not appeared in the literature.
Yet another issue in wireless communications, owing to the broadcast nature of the wireless medium, is information security: the broadcast nature of wireless networks facilitates malicious or unauthorized access to confidential data, denial-of-service attacks, corruption of sensitive data, etc. The information-theoretic approach to such security problems has gained rapid momentum, and is commonly referred to as information-theoretic confidentiality or wireless physical-layer security [11]. The information-theoretic approach to secure broadcasting was inspired by the pioneering work of Csiszár and Körner [12], who derived capacity bounds for the two-user BC in which the sender transmits a private message to one receiver and a common message to both receivers, while keeping the private message confidential from the other receiver. Secure broadcasting with a single transmitter and multiple receivers in the presence of an external eavesdropper was considered in [13], where the secrecy capacity region was obtained for several special classes of channels. In [14], capacity bounds were derived for the BC in which a sender broadcasts two independent messages to two receivers, while keeping each message confidential from the unintended receiver. Capacity results and bounds for the Gaussian BC with confidential messages were reported in [15]–[17]. The reader is referred to [18] for a comprehensive review of physical-layer security in BCs. However, to the best of the authors' knowledge, the joint problem of side-information and confidentiality in the BC has not been addressed in the literature.
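For reference, the benchmark quantity underlying this line of work is the secrecy capacity of Csiszár and Körner's wiretap setting [12]; in the absence of a common message it takes the well-known single-letter form below, where $V$ is an auxiliary RV satisfying the Markov chain $V - X - (Y,Z)$, and $Y$ and $Z$ denote the outputs at the legitimate receiver and the eavesdropper, respectively:

```latex
C_s \;=\; \max_{p(v)\,p(x|v)} \bigl[\, I(V;Y) - I(V;Z) \,\bigr].
```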
1.1 Main contributions
In this paper, we aim to provide useful insights into the effect of noncausal side-information at the encoder on the classical two-user BC, on the BC with genie-aided receiver side-information, and on the BC with confidentiality constraints on the messages. Towards this end, we define three classes of two-user discrete memoryless BCs with noncausal side-information at the encoder. Of particular interest is the third class of channels (described below), which provides a fundamental building block for jointly addressing side-information and confidentiality in BCs.

First class: A sender broadcasts two independent messages to two non-cooperating receivers (see Fig. 1(a)). We derive an inner bound for this class of channels and characterize the rate penalty incurred in dealing with noncausal side-information at the encoder. We are mainly concerned with outer bounds for this class, for which we present an explicit single-letter characterization of the sum-rate bound, along with bounds on the single-user rates. An example of this class is a base station transmitting to two mobile receivers, where the base station has prior knowledge of the interference caused by a transmitter located in its vicinity, e.g., through a backhaul network.

Second class: A sender broadcasts two independent messages to two receivers, with each receiver having a priori knowledge of the message it need not decode (see Fig. 1(b)). An example of this scenario is full-duplex communication between two nodes, aided by a relay: the relay node broadcasts the messages to the terminals, with each terminal knowing its own message. We devise an achievability scheme to derive an inner bound for this class of channels, and show that the achievable rate for each user is in fact the maximum rate achievable for a single-user channel with states known a priori at the encoder. We also derive an outer bound which is within a fixed gap of the achievable region, where the gap is independent of the distribution characterizing this class of channels.

Third class: A sender broadcasts two independent messages to two receivers, such that each message is kept confidential from the unintended receiver (see Fig. 1(c)). To the best of the authors' knowledge, this is the first study of the simultaneous impact of side-information and confidentiality constraints on the BC. An inner bound for this class of channels is derived by employing stochastic encoders to satisfy the confidentiality constraints; we characterize the rate penalties incurred not only in dealing with side-information, but also in satisfying the confidentiality constraints. One of the outer bounds is derived by employing a genie, which gives one of the receivers the message it need not decode, while the other receiver computes the equivocation rate treating this message as side-information. We also derive another outer bound, with an explicit characterization of the sum-rate bounds. As an example of this class of channels, we can extend the example considered for the first class, with the additional constraint of keeping each message confidential from the unintended receiver.
The remainder of the paper is organized as follows. In Section 2, we introduce the notation used and provide a mathematical model for the discrete memoryless version of the channels considered in this paper. In Section 3, we summarize the main results of this paper by describing inner and outer bounds for all the channel models, and provide related discussion. The proofs of the achievability theorems can be found in Section 4, while the proofs of the outer bounds are provided in Section 5. Finally, we conclude the paper in Section 6. The encoder error analysis is relegated to Appendix A.
2 System model and notation
The channels belonging to the three classes are treated under a common notation. Calligraphic letters are used to denote finite sets, with a probability function defined on them. Uppercase letters denote random variables (RVs), while boldface uppercase letters denote sequences of RVs; lowercase letters denote particular realizations of RVs, and boldface lowercase letters denote vectors. The sender transmits over a fixed number of channel uses to two receivers, indexed by a receiver index. Discrete RVs denote the channel input and the outputs at the two receivers. The encoder is supplied with the state sequence as side-information in a noncausal manner. The channel is assumed to be memoryless and is characterized by its conditional transition distribution, which is abbreviated in the remainder of this paper for the sake of brevity.
To transmit its messages, the sender generates two message RVs, each taking values in a finite set of message indices. Without loss of generality, the size of each message set is assumed to be an integer, with the corresponding transmission rate defined as usual with respect to the block length. Each message is independently generated and uniformly distributed over its message set, and a particular realization of a message RV is referred to as a message index.
Given the conditional distribution characterizing the channel, a code for channels of the first and second classes comprises deterministic encoding functions mapping each message pair, together with the noncausal side-information, to a channel input sequence. For channels of the third class, the code comprises a stochastic encoder, defined by a matrix of conditional probabilities specifying the probability that a given pair of message indices is encoded as a given channel input sequence in the presence of the noncausal side-information. For all channel models, there are two decoders, one at each receiver.
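As a toy illustration of the two encoder types, the following sketch represents a stochastic encoder as a matrix of conditional probabilities and a deterministic encoder as its 0/1-valued special case (all alphabet sizes are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical alphabet sizes, for illustration only: two messages per user,
# four possible state sequences, eight possible channel input sequences.
M1, M2, NS, NX = 2, 2, 4, 8

# A stochastic encoder is a matrix of conditional probabilities
# P[m1, m2, s, x] = Pr{input x | message pair (m1, m2), side-information s}.
P = rng.random((M1, M2, NS, NX))
P /= P.sum(axis=-1, keepdims=True)       # normalize each conditional pmf

def encode(m1, m2, s):
    """Draw a channel input index according to the conditional pmf."""
    return int(rng.choice(NX, p=P[m1, m2, s]))

# A deterministic encoder is the special case in which each conditional pmf
# places all of its mass on a single input sequence.
f = np.argmax(P, axis=-1)                # one input per (m1, m2, s)
P_det = np.zeros_like(P)
for m1 in range(M1):
    for m2 in range(M2):
        for s in range(NS):
            P_det[m1, m2, s, f[m1, m2, s]] = 1.0

print(0 <= encode(0, 1, 2) < NX)         # True: a valid input index
```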
The average probability of decoding error for a code is defined in the usual manner, averaged over all messages. A rate pair is said to be achievable for a given channel if there exists a sequence of codes whose average probability of error can be made arbitrarily small as the block length grows. Furthermore, for channels of the third class, the following constraints [19] on the conditional entropy must be satisfied for a rate pair to be considered achievable:
(1) $\tfrac{1}{n} H(W_1 \mid \mathbf{Y}_2) \geq R_1 - \epsilon$
(2) $\tfrac{1}{n} H(W_2 \mid \mathbf{Y}_1) \geq R_2 - \epsilon$
where $W_k$ and $R_k$ denote the message and rate intended for receiver $k$, $\mathbf{Y}_k$ denotes the output sequence at receiver $k$, and $n$ is the block length.
The capacity region is defined as the closure of the set of all achievable rate pairs .
3 Main results
In this section, we state the achievability and converse theorems for all the channel models considered in this paper, and provide related discussion. The characterizations below make use of auxiliary RVs defined on finite sets.
3.1 Channels of the first class
For channels of the first class, we consider the set of all joint probability distributions that factor in the form prescribed by the coding scheme. For a given channel, a lower bound on the capacity region is described by the union, over all such distributions, of the convex hull of the set of all rate pairs that simultaneously satisfy (3)–(5):
(3)  
(4)  
(5) 
where the auxiliary RVs are constrained to satisfy a Markov chain condition.
Theorem 3.1
The region described by (3)–(5) is achievable; i.e., it constitutes an inner bound on the capacity region for channels of the first class.
For proof, see Section 4.1.
For a given channel of the first class, an outer bound on the capacity region is described by the union of all rate pairs that simultaneously satisfy (6) and (7):
(6)  
(7) 
where .
Theorem 3.2
The capacity region for channels of the first class is contained in the region described by (6) and (7); i.e., the latter constitutes an outer bound.
The proof of Theorem 3.2 can be found in Section 5.1. However, this outer bound does not include a bound on the sum-rate. To explicitly bound the sum-rate, we provide the following alternative outer bound for channels of the first class. We consider the set of all joint probability distributions that factor in a prescribed form. For a given channel, an outer bound on the capacity region is described by the union of all rate pairs that simultaneously satisfy (8)–(11):
(8)  
(9)  
(10)  
(11) 
where the following Markov chain is satisfied: .
Theorem 3.3
The capacity region for channels of the first class is contained in the region described by (8)–(11).
3.2 Channels of the second class
For channels of the second class, we consider the set of all joint probability distributions of a prescribed form. For a given channel, a lower bound on the capacity region is described by the union, over all such distributions, of the convex hull of the set of all rate pairs that simultaneously satisfy (12) and (13):
(12)  
(13) 
where the Markov chain holds.
Theorem 3.4
The region described by (12) and (13) is achievable; i.e., it constitutes an inner bound on the capacity region for channels of the second class.
For a given channel of the second class, an outer bound on the capacity region is described by the union of all rate pairs that simultaneously satisfy (14) and (15):
(14)  
(15) 
with .
Theorem 3.5
The capacity region for channels of the second class is contained in the region described by (14) and (15).
3.3 Channels of the third class
For channels of the third class, we consider the set of all joint probability distributions of a prescribed form. For a given channel, an inner bound on the capacity region is described by the union, over all such distributions, of the convex hull of the set of all rate pairs that simultaneously satisfy (16)–(18):
(16)  
(17)  
(18)  
where the following Markov chain is satisfied: .
Theorem 3.6
The region described by (16)–(18) is achievable; i.e., it constitutes an inner bound on the capacity region for channels of the third class.
For a given channel of the third class, an outer bound on the capacity region is described by the union of all rate pairs that simultaneously satisfy (19) and (20):
(19)  
(20) 
where the quantities appearing in (19) and (20) are given by (21)–(24), respectively:
(21)  
(22)  
(23)  
(24) 
The expressions (21)–(24) are obtained by letting a genie give one receiver the message it need not decode, while the other receiver computes the equivocation using this message as side-information.
Theorem 3.7
The capacity region for channels of the third class is contained in the region described by (19) and (20).
The proof of Theorem 3.7 can be found in Section 5.4. We also provide the following outer bound for channels of the third class, which explicitly characterizes the sum-rate. Consider the set of all joint probability distributions that factor in a prescribed form. For a given channel, an outer bound on the capacity region is described by the union of all rate pairs that simultaneously satisfy (25)–(28):
(25)  
(26)  
(27)  
(28)  
where .
Theorem 3.8
The capacity region for channels of the third class is contained in the region described by (25)–(28).
3.4 Discussion
A pictorial representation of the rate region for channels of the first class is shown in Fig. 2. When one of the users is silent, the channel resembles a single-user channel with side-information (the Gel'fand–Pinsker (GP) channel [20]), and the sender can transmit to the other user at the maximum achievable single-user rate; this yields a corner point of the region, together with the rectangle it dominates. By exchanging the roles of the two users and following similar arguments, the corresponding corner point and rectangle on the other axis are also achievable. Since both corner points are achievable, any point on the line connecting them can also be achieved by deriving a bound on the binning rates (see Appendix A); this yields the sum-rate bound. Finally, owing to the convexity of the rate region, any point in the interior of this line is also achievable. Therefore, an achievable rate region for this class is described by a pentagon.
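For reference, the maximum single-user rate invoked at these corner points is the Gel'fand–Pinsker capacity [20], where $U$ is an auxiliary RV, $S$ is the state known noncausally at the encoder, and $Y$ is the channel output:

```latex
C_{\mathrm{GP}} \;=\; \max_{p(u|s),\, x(u,s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr].
```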
In the absence of side-information, the channel reduces to the classical two-user BC, whose rate region is described by the convex hull of the set of all rate pairs that satisfy the following inequalities:
(29) $R_1 \leq I(U; Y_1)$
(30) $R_2 \leq I(V; Y_2)$
(31) $R_1 + R_2 \leq I(U; Y_1) + I(V; Y_2) - I(U; V)$
For channels of the second class, each bound in (12) and (13) is the capacity of GP's single-user channel with noncausal side-information. In the absence of side-information, we recover the capacity region of the BC when each receiver is given the message it need not decode [10]. Furthermore, the outer bound described by (14) and (15) is within a fixed gap of the achievable region, where the gap is independent of the distribution characterizing this class of channels.
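To make the GP rates concrete, the following sketch brute-forces the Gel'fand–Pinsker functional $\max\,[I(U;Y) - I(U;S)]$ for a hypothetical toy channel $Y = X \oplus S$ with a uniform binary state (the channel and alphabets are illustrative, not from this paper). Since the encoder knows $S$ noncausally, choosing $X = U \oplus S$ with a uniform $U$ independent of $S$ cancels the state entirely, and the search attains rate 1:

```python
import itertools
import numpy as np

def H(p):
    """Shannon entropy in bits of a pmf given as an array."""
    p = np.asarray(p).flatten()
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def I(pab):
    """Mutual information of a joint pmf pab[a, b]."""
    return H(pab.sum(axis=1)) + H(pab.sum(axis=0)) - H(pab)

ps = np.array([0.5, 0.5])               # uniform binary state
best = 0.0
grid = np.linspace(0.0, 1.0, 21)        # candidate values of p(u=0 | s)
for a, b in itertools.product(grid, repeat=2):
    pu_given_s = np.array([[a, 1 - a],  # row s=0
                           [b, 1 - b]]) # row s=1
    for f in itertools.product([0, 1], repeat=4):   # deterministic x = f(u, s)
        pus = np.zeros((2, 2))          # joint p(u, s)
        puy = np.zeros((2, 2))          # joint p(u, y)
        for s in range(2):
            for u in range(2):
                w = ps[s] * pu_given_s[s, u]
                pus[u, s] += w
                x = f[2 * u + s]
                y = x ^ s               # toy channel: y = x XOR s, noiseless
                puy[u, y] += w
        best = max(best, I(puy) - I(pus))
print(round(best, 3))                   # the search attains rate 1.0
```

The choice $x = u \oplus s$ corresponds to the map $f = (0, 1, 1, 0)$, which lies in the search grid, so the optimum is attained exactly.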
For channels of the third class, certain terms in (16)–(18) quantify the rate penalty for having to deal with the confidentiality constraints on the messages, while others quantify the rate penalty for having to deal with side-information.
Using a combination of results from the GP channel and wiretap channels with side-information [21], we obtain a pictorial representation of the rate region for channels of the third class, as shown in Fig. 3. The arguments used to obtain this schematic are similar to those used for the first class of channels; therefore, we only briefly explain the construction of Fig. 3. One corner point corresponds to the maximum rate achievable by the first user when the second user is silent; exchanging the roles of the users yields the corresponding corner point on the other axis. These corner points are achievable by treating the respective links as wiretap channels with side-information. The line connecting them corresponds to the sum-rate bound. Finally, owing to the convexity of the rate region, any point in the interior of this line is also achievable. Therefore, an achievable rate region for this class is again described by a pentagon.
If the confidentiality constraints (1) and (2) are relaxed, a channel of the third class reduces to one of the first class, whose rate region is described by (3)–(5). Further, in the absence of side-information, the channel reduces to the classical two-user BC, whose rate region is described by (29)–(31). Lastly, if the encoder satisfies confidentiality constraints in the absence of side-information, the channel reduces to the BC with two independent and confidential messages, whose rate region was first characterized by Liu et al. [14]; it is described by the convex hull of the set of all rate pairs that satisfy the following inequalities:
(32)  
(33) 
3.5 Relation to past work
For channels of the first class, an inner bound was presented in [8] by extending Marton's achievability scheme for the classical two-user BC to include noncausal side-information at the encoder. In this paper, we employ Marton's technique and use results from the second moment method [22] to derive an inner bound that matches the results presented in [8]. However, our method is simpler and generalizes well for obtaining inner bounds for the other channel models, e.g., for channels of the third class considered in this paper. For the outer bound (specifically, for the sum-rate), we generalize the technique presented in [5] to handle side-information at the encoder. When the side-information constraint is relaxed, our result reduces to the one presented for the classical two-user BC [5].
Channels of the second class were also addressed in [23], where an inner bound was derived by employing Marton's achievability scheme. An outer bound was also suggested in [23], but without a formal proof. In this paper, we derive an inner bound by generalizing the method suggested in [10] to incorporate noncausal side-information at the encoder. Our inner bound coincides with the one presented in [23], but once again our proof technique is much simpler. Furthermore, for the outer bounds, we explicitly address the problem of dealing with the two-dimensional rate region with a single auxiliary random variable.
For channels of the third class, we show that when the confidentiality constraints are relaxed, our achievable rate region reduces to the region presented for the first class of channels, and hence to the one presented in [8]. On the other hand, in the absence of side-information, our achievable region includes an explicit bound on the sum-rate for the two-user BC with confidentiality constraints (a model considered in [14]). This further demonstrates the generality of our proof technique.
4 Proofs of achievability theorems
In this section, we prove Theorem 3.1, Theorem 3.4, and Theorem 3.6. Throughout, a typical set comprises sequences that are typical with respect to the relevant distribution. For all the channel models, the encoder is given a typical state sequence in a noncausal manner.
4.1 Proof of Theorem 3.1
For channels of the first class, generate independent typical sequences for each auxiliary codebook. Uniformly distribute the sequences of each codebook into bins, so that each bin, indexed by a message, comprises an equal number of sequences. To send a message pair, the encoder looks in the corresponding pair of bins for a pair of sequences that is jointly typical with the state sequence. An error is declared at the encoder if no such pair can be found. The encoder error analysis can be found in Appendix A. The channel input sequence is then generated from the selected pair and the state sequence.
At each destination, the decoder looks for a codeword that is jointly typical with its received sequence. An error is declared at the decoder if it is not possible to find a unique bin index satisfying the joint typicality condition. From the union of events bound, the probability of decoder error can be upper bounded by a sum of probabilities of atypical events. From the asymptotic equipartition property (AEP) [24], each of these probabilities can be made arbitrarily small for sufficiently large block length, provided the rates satisfy the stated constraints.
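The AEP argument above can be illustrated numerically. The following sketch (with a hypothetical joint pmf, chosen only for illustration) checks weak joint typicality of an i.i.d. sample, which holds with probability approaching one as the block length grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint pmf p(x, y) on {0,1} x {0,1}, for illustration only.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])

def entropy(p):
    """Shannon entropy in bits of a pmf given as an array."""
    p = np.asarray(p).flatten()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def jointly_typical(xs, ys, pxy, eps):
    """Weak joint typicality: empirical per-symbol log-likelihoods lie within
    eps of the corresponding entropies."""
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    lx = -np.log2(px[xs]).mean()          # empirical -(1/n) log p(x^n)
    ly = -np.log2(py[ys]).mean()
    lxy = -np.log2(pxy[xs, ys]).mean()
    return (abs(lx - entropy(px)) < eps
            and abs(ly - entropy(py)) < eps
            and abs(lxy - entropy(pxy)) < eps)

# Draw n i.i.d. pairs from pxy; by the AEP, the sample is jointly typical
# with high probability for large n.
n = 5000
idx = rng.choice(4, size=n, p=pxy.flatten())   # flat index = 2*x + y
xs, ys = idx // 2, idx % 2
print(jointly_typical(xs, ys, pxy, eps=0.1))   # True with high probability
```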