# Approximate Characterizations for the Gaussian Source Broadcast Distortion Region

Chao Tian, Suhas Diggavi, and Shlomo Shamai (Shitz)

The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Seoul, Korea, June-July 2009. The work of S. Shamai was supported by the European Commission in the framework of the FP7 Network of Excellence in Wireless COMmunications, NEWCOM++.

C. Tian is with AT&T Labs-Research, Florham Park, NJ 07932, USA (email: tian@research.att.com). S. N. Diggavi is with the Department of Electrical Engineering, University of California, Los Angeles, CA 90095, USA (email: suhas@ee.ucla.edu). S. Shamai is with the Department of Electrical Engineering, Technion–Israel Institute of Technology, Haifa 32000, Israel (email: sshlomo@ee.technion.ac.il).
###### Abstract

We consider the joint source-channel coding problem of sending a Gaussian source on a $K$-user Gaussian broadcast channel with bandwidth mismatch. A new outer bound to the achievable distortion region is derived using the technique of introducing more than one additional auxiliary random variable, which was previously used to derive a sum-rate lower bound for the symmetric Gaussian multiple description problem. By combining this outer bound with the achievability result based on source-channel separation, we provide approximate characterizations of the achievable distortion region within constant multiplicative factors. Furthermore, we show that the results can be extended to general broadcast channels, and the performance of the source-channel separation based approach is also within the same constant multiplicative factors of the optimum.

**Keywords:** Gaussian source, joint source-channel coding, squared error distortion.

## I Introduction

Shannon’s source-channel separation theorem essentially states that, asymptotically, there is no loss of optimality in decoupling the source coding and channel coding components in a point-to-point communication system [1]. This separation result tremendously simplifies the design of communication systems, and it is also the main reason for the division between research in source coding and channel coding. However, it is also well known that in many multi-user settings such a separation incurs a performance loss; see, e.g., [2, 3, 4]. For this reason, joint source-channel coding has attracted an increasing amount of attention as communication systems become more complex.

One of the most intriguing problems in this area is joint source-channel coding of a Gaussian source on a Gaussian broadcast channel with $K$ users under an average power constraint. It was observed by Goblick [2] that when the source bandwidth and the channel bandwidth are matched, i.e., one channel use per source sample, directly sending the source samples on the channel after a simple scaling is in fact optimal, whereas the separation-based scheme suffers a performance loss [4]. However, when the source bandwidth and the channel bandwidth are not matched, such a simple scheme is no longer optimal. Many researchers have considered this problem, and significant progress has been made toward finding better coding schemes based on hybrid digital and analog signaling; see, e.g., [5, 6, 7, 8, 9, 10] and the references therein.
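To make the matched-bandwidth observation concrete, the following sketch (our own illustration, not from the paper) simulates the uncoded scheme: the unit-variance source is scaled to power $P$, and each receiver applies its scalar MMSE estimate, attaining (up to simulation noise) the point-to-point optimum $N_k/(P+N_k)$.

```python
import random, math

random.seed(0)
P, N = 10.0, [4.0, 2.0, 1.0]       # power constraint and per-user noise variances
m = 200_000
S = [random.gauss(0.0, 1.0) for _ in range(m)]
X = [math.sqrt(P) * s for s in S]  # uncoded: transmit a scaled source sample

dist = []
for Nk in N:
    Y = [x + random.gauss(0.0, math.sqrt(Nk)) for x in X]
    a = math.sqrt(P) / (P + Nk)    # scalar MMSE estimator coefficient
    dist.append(sum((s - a * y) ** 2 for s, y in zip(S, Y)) / m)

# Each user attains (up to simulation noise) the point-to-point optimum N_k/(P+N_k)
for d, Nk in zip(dist, N):
    assert abs(d - Nk / (P + Nk)) < 0.02
```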

In spite of the progress on achievability schemes, our overall understanding of this problem is still quite limited. As pointed out by Caire [11], the key difficulty appears to be finding meaningful outer bounds. Such outer bounds not only provide a concrete basis to evaluate various achievability schemes, but may also provide insights into the structure of good or even optimal codes, and may further suggest simplifications of the possibly quite complex optimal schemes in certain distortion regimes. In this regard, the result by Reznic et al. [8] is particularly important: they derived a non-trivial outer bound on the achievable distortion region of the two-user system. This outer bound relies on a technique previously used in the multiple description problem by Ozarow [12], where one additional random variable beyond those in the original problem is introduced. The bound given in [8] is however rather complicated, and was only shown to be asymptotically tight in a certain high signal-to-noise-ratio regime.

In this work, we derive an outer bound for the $K$-user problem using a technique similar to that used in [8]; however, more than one additional random variable is introduced. The technique used here also bears some similarity to that used in [13]. The outer bound has a more concise form than the one given in [8], but for the two-user case it can be shown that they are equivalent. This outer bound is in fact a set of outer bounds parametrized by $K-1$ non-negative variables. Though one can optimize over these variables to find the tightest bound, this optimization problem appears difficult. Thus we take an approach similar to the one taken in [13], and choose some specific values for the variables, which give specific outer bounds. Moreover, by combining these specific outer bounds with the simple achievability scheme based on source-channel separation, we provide approximate characterizations (we would like to thank David Tse for discussions at ITA 2008 on the formulation of the question, where he was motivated by his solution to a deterministic version of this problem) of the achievable distortion region within some universal constant multiplicative factors, independent of the signal-to-noise ratio and the bandwidth mismatch factor. In one of the approximations, the multiplicative factor is roughly of the form $2^k$ for the distortion at the $k$-th user, while in the other, a single common factor applies to all the distortions. Thus although Shannon’s source-channel separation result does not hold strictly in this problem, it does hold in an approximate manner. In fact, this set of results is quite flexible, and it can be applied in the case of an infinite number of users, provided the minimum achievable distortion is bounded away from zero, for which we can conclude that the source-channel separation based approach is also within certain finite constant multiplicative factors of the optimum. In this case, these constants can be upper bounded by factors related to the disparity between the best and worst distortions, which is not affected by the number of users being infinite.

Though the outer bound is derived using techniques that have some precedent in the information theory literature, the difficulty lies in determining which terms to bound. In contrast to pure source coding or pure channel coding problems, where we can usually meaningfully bound a linear combination of rates, in a joint source-channel coding problem the notion of rates does not exist. In [8], the lower bound on one distortion is given as a function of the other distortion in the two-user problem. It is clear that such a proof approach becomes unwieldy for the general $K$-user case. In this work, we instead derive bounds for a quantity which at first sight may even seem unrelated to the problem, but which eventually serves as an interface between the source and channel coding components, thus replacing the role of “rates” in traditional Shannon theory proofs.

Inspired by a recent work of Avestimehr, Caire and Tse [14], where source-channel separation in more general networks is considered, we further show that our technique can be conveniently extended to general broadcast channels, and the source-channel separation based scheme is within the same multiplicative constants of the optimum as for the Gaussian channel case.

The rest of the paper is organized as follows. Section II gives the necessary notation and reviews an important lemma useful in deriving the outer bound. The main results are presented in Section III, and the proofs for these results are given in Section IV. The extension to general broadcast channels is given in Section V, and Section VI concludes the paper.

## II Problem Definition and Review

In this section, we give a formal definition of the Gaussian source broadcast problem in the context of Gaussian broadcast channels; the notation will be generalized in Section V when other broadcast channels are considered.

Let $S$ be a stationary and memoryless Gaussian source with zero mean and unit variance. The vector $(S(1), S(2), \ldots, S(m))$ will be denoted as $S^m$. We use $\mathbb{R}$ to denote the domain of reals, and $\mathbb{R}_+$ to denote the domain of non-negative reals. The Gaussian memoryless broadcast channel is given by the model

$$Y_k = X + Z_k, \qquad k = 1, 2, \ldots, K, \tag{1}$$

where $Y_k$ is the channel output observed by the $k$-th receiver, and $Z_k$ is the zero-mean additive Gaussian noise on the channel input $X$. The channel is memoryless in the sense that each $Z_k$ is a stationary and memoryless process. The variance of $Z_k$ is denoted as $N_k$, and without loss of generality, we shall assume

$$N_1 \ge N_2 \ge \cdots \ge N_K. \tag{2}$$

The mean squared error distortion measure is used, which is given by $d(S^m, \hat{S}^m) = \frac{1}{m}\sum_{i=1}^{m}\big(S(i) - \hat{S}(i)\big)^2$. The encoder maps a source sample block of length $m$ into a channel input block of length $n$, and each decoder maps the corresponding channel output block of length $n$ into a source reconstruction block of length $m$. The bandwidth mismatch factor is thus defined as

$$b = \frac{n}{m}, \tag{3}$$

which is essentially the (possibly fractional) number of channel uses per source sample; see Fig. 1. The channel input is subject to an average power constraint $P$.

We can make the codes in consideration more precise by introducing the following definition.

###### Definition 1

An $(m, n)$ Gaussian source-channel broadcast code is given by an encoding function

$$f: \mathbb{R}^m \to \mathbb{R}^n, \tag{4}$$

such that

$$\frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\big(X(i)\big)^2 \le P, \tag{5}$$

and decoding functions

$$g_k: \mathbb{R}^n \to \mathbb{R}^m, \qquad k = 1, 2, \ldots, K, \tag{6}$$

and their induced distortions

$$d_k = \mathbb{E}\, d\big(S^m, g_k(f(S^m) + Z_k^n)\big), \qquad k = 1, 2, \ldots, K, \tag{7}$$

where $\mathbb{E}$ denotes expectation.

Note that there are two kinds of independent randomness in the system: the first is from the source, and the second is from the channel noises; the expectation in (7) is taken over both of them. In the definition, the addition in the expression $f(S^m) + Z_k^n$ is understood as length-$n$ vector addition.

From the above definition, it is clear that the performance of any Gaussian joint source-channel code depends only on the marginal distributions of the channel outputs given the input, but not on their joint distribution. This implies that physical degradedness does not differ from statistical degradedness in terms of system performance. Since the Gaussian broadcast channel is always statistically degraded, we shall assume physical degradedness from here on without loss of generality. The channel noises can thus be written as

$$Z_k = Z_{k+1} + \Delta Z_k, \qquad k = 1, 2, \ldots, K, \tag{8}$$

where $\Delta Z_k$ is a zero-mean Gaussian random variable with variance $\Delta N_k \triangleq N_k - N_{k+1}$, which is independent of everything else; for convenience, we define $Z_{K+1} = 0$, and it follows that $\Delta Z_K = Z_K$ and $\Delta N_K = N_K$.
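The degraded decomposition in (8) is easy to simulate; the following sketch (our own illustration) confirms that summing independent zero-mean increments of variance $\Delta N_k = N_k - N_{k+1}$, starting from $Z_{K+1} = 0$, reproduces noises with the variances $N_k$:

```python
import random

random.seed(0)
N = [4.0, 2.0, 1.0]                # noise variances, N_1 >= N_2 >= N_3
dN = [N[k] - (N[k + 1] if k + 1 < len(N) else 0.0) for k in range(len(N))]

m = 100_000
Z = [0.0] * m                      # Z_{K+1} = 0 by convention
var = []
# Build Z_k = Z_{k+1} + dZ_k from independent zero-mean increments of variance dN_k
for k in reversed(range(len(N))):
    Z = [z + random.gauss(0.0, dN[k] ** 0.5) for z in Z]
    var.append(sum(z * z for z in Z) / m)
var.reverse()                      # var[k] now estimates N_k

for est, true in zip(var, N):
    assert abs(est - true) < 0.1
```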

###### Definition 2

A distortion vector $(D_1, D_2, \ldots, D_K)$, where $D_1 \ge D_2 \ge \cdots \ge D_K$, is achievable under power constraint $P$ and bandwidth mismatch factor $b$ if, for any $\epsilon > 0$ and sufficiently large $m$, there exist an integer $n$ with $n/m = b$ and an $(m, n)$ Gaussian source-channel broadcast code such that

$$D_i + \epsilon \ge d_i, \qquad i = 1, 2, \ldots, K. \tag{9}$$

Note that the monotonicity constraint $D_1 \ge D_2 \ge \cdots \ge D_K$ is without loss of generality, because otherwise the problem can be reduced to an alternative one with fewer users, due to the assumed physical degradedness. The collection of all achievable distortion vectors under power constraint $P$ and bandwidth mismatch factor $b$ is denoted by $\mathcal{D}(P, b)$, and this is the region in which we are interested.

One important result we need in this work is the following lemma, which is a slightly different version of the one given in [13].

###### Lemma 1

Let $W$ be a random variable jointly distributed with the Gaussian source vector $S^m$, taking values in some alphabet $\mathcal{W}$, such that there exists a deterministic mapping $g: \mathcal{W} \to \mathbb{R}^m$ satisfying

$$\mathbb{E}\, d\big(S^m, g(W)\big) \le D. \tag{10}$$

Let $U^m = S^m + V^m$ and $U'^m = U^m + V'^m$, where $V^m$ and $V'^m$ are mutually independent i.i.d. Gaussian random vectors, independent of the Gaussian source $S^m$ and the random variable $W$, with per-component variances $\sigma^2$ and $\sigma'^2$, respectively. Then with $\tau \triangleq \sigma^2$ and $\tau' \triangleq \sigma^2 + \sigma'^2$, we have

1. Mutual information bound

$$I(W; U'^m) \ge \frac{m}{2}\log\frac{1+\tau'}{D+\tau'}, \tag{11}$$
2. Bound on mutual information difference

$$I(W; U^m) - I(W; U'^m) \ge \frac{m}{2}\log\frac{(1+\tau)(D+\tau')}{(1+\tau')(D+\tau)}. \tag{12}$$

The proof of this lemma is almost identical to the one given in [13]. The only difference between the two versions is that in [13] the random variable $W$ is in fact a deterministic function of $S^m$; however, it is rather straightforward to verify that this condition was never used in the proof given in [13]. We include the proof of this lemma in the Appendix for completeness.
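As a quick numerical sanity check of Lemma 1 (our own illustration), consider the scalar ($m=1$) jointly Gaussian case $W = S + N_q$ with noise variance $q$ and $g$ the MMSE estimator, so $D = q/(1+q)$. A direct Gaussian mutual-information computation shows that both bounds then hold, in fact with equality:

```python
import math

def gauss_mi(var_a, var_b, cov):
    """Mutual information (nats) between two jointly Gaussian scalars."""
    return 0.5 * math.log(var_a * var_b / (var_a * var_b - cov ** 2))

q = 0.5                      # variance of the observation noise in W = S + N_q
D = q / (1.0 + q)            # MMSE of estimating S from W
tau, tau_p = 0.2, 0.8        # tau' >= tau, as in U' = U + V'

# Actual mutual informations for unit-variance S, U = S + V, U' = S + V + V'
I_WU  = gauss_mi(1 + q, 1 + tau,   1.0)
I_WUp = gauss_mi(1 + q, 1 + tau_p, 1.0)

# Lemma 1 lower bounds (11) and (12), with m = 1
lb_11 = 0.5 * math.log((1 + tau_p) / (D + tau_p))
lb_12 = 0.5 * math.log((1 + tau) * (D + tau_p) / ((1 + tau_p) * (D + tau)))

assert I_WUp >= lb_11 - 1e-12           # (11) holds, here with equality
assert I_WU - I_WUp >= lb_12 - 1e-12    # (12) holds, here with equality
```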

## III Main Results for Gaussian Broadcast Channels

Our main results for Gaussian source broadcast on Gaussian broadcast channels are summarized in Theorem 1, Corollary 1, Proposition 1, Corollary 2 and Corollary 3, the proofs of which are given in the next section; extensions of these results to general broadcast channels are given in Section V.

Define the region $\hat{\mathcal{D}}(P,b)$ in (13) at the top of the next page, which is in fact an inner bound achieved via source-channel separation. Next define the regions $\underline{\mathcal{D}}^*(P,b)$ and $\underline{\mathcal{D}}(P,b)$ in (14) and (15), also at the top of the next page, which are in fact outer bounds to the achievable distortion region. We have the following theorem.

###### Theorem 1
$$\hat{\mathcal{D}}(P,b) \subseteq \mathcal{D}(P,b) \subseteq \underline{\mathcal{D}}^*(P,b) \cap \underline{\mathcal{D}}(P,b). \tag{16}$$

Theorem 1 is stated as inner and outer bounds to the achievable distortion region; however, it can be observed that the bounds have similar forms, and their difference, in terms of distortions, can be bounded by certain multiplicative constants. The following corollary follows directly from Theorem 1, by comparing (13) and (14).

###### Corollary 1

If $(D_1, D_2, \ldots, D_K) \in \mathcal{D}(P,b)$, and if $2^{k-1} D_{k-1} \ge 2^{k} D_k$ for $k = 2, 3, \ldots, K$, then $(2D_1, 2^2 D_2, \ldots, 2^K D_K) \in \hat{\mathcal{D}}(P,b)$.

The condition in Corollary 1 is to ensure that the scaled distortion vector satisfies the monotonicity requirement in Definition 2 and (13). This result has the following intuitive interpretation when the condition indeed holds for all $k$: if a genie helps the separation-based scheme by giving each individual user half a bit of information per source sample, and at the same time all the better users also receive this half bit of information for free, then the separation-based scheme is as good as the optimal scheme.

This approximation can in fact be refined, and for this purpose the following additional definition is needed. For any distortion vector $(D_1, D_2, \ldots, D_K)$, we associate with it a relaxed distortion vector $(D_1^*, D_2^*, \ldots, D_K^*)$ and a binary labeling vector $(B_1, B_2, \ldots, B_K)$ in a recursive manner:

$$(D_k^*, B_k) = \begin{cases} \big(D_{k-1}^*,\; 0\big) & \text{if } 2^{1+\sum_{j=1}^{k-1} B_j} D_k \big/ D_{k-1}^* \ge 1,\\[3pt] \big(2^{1+\sum_{j=1}^{k-1} B_j} D_k,\; 1\big) & \text{otherwise,} \end{cases} \tag{17}$$

for $k = 1, 2, \ldots, K$, where we have defined $D_0^* = 1$ for convenience. It is easily verified that $D_k^* \ge D_k$ for $k = 1, 2, \ldots, K$, and moreover $D_1^* \ge D_2^* \ge \cdots \ge D_K^*$.
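A direct implementation of the recursion (17) may help clarify it; the following sketch (our own, with an illustrative distortion vector) also checks the two properties just claimed:

```python
def relax(D):
    """Relaxed distortion vector and labels per (17); D must be non-increasing."""
    D_star, B, prev = [], [], 1.0   # D*_0 = 1 by convention
    exp = 1                         # running exponent 1 + sum_{j<k} B_j
    for d in D:
        cand = (2 ** exp) * d
        if cand >= prev:            # keep the previous relaxed value, label 0
            D_star.append(prev)
            B.append(0)
        else:                       # scale up d, label 1, grow the exponent
            D_star.append(cand)
            B.append(1)
            exp += 1
            prev = cand
    return D_star, B

# Example: each relaxed value dominates the original, and the vector is monotone
D = [0.5, 0.3, 0.04, 0.03]
Ds, B = relax(D)
assert all(ds >= d for ds, d in zip(Ds, D))
assert all(Ds[i] >= Ds[i + 1] for i in range(len(Ds) - 1))
```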

###### Proposition 1

Let $(D_1^*, D_2^*, \ldots, D_K^*)$ be the relaxed distortion vector of $(D_1, D_2, \ldots, D_K)$. If $(D_1, D_2, \ldots, D_K) \in \mathcal{D}(P,b)$, then $(D_1^*, D_2^*, \ldots, D_K^*) \in \hat{\mathcal{D}}(P,b)$.

The notion of the relaxed distortion vector essentially removes the rather artificial condition in Corollary 1. When this condition does not hold for some $k$, the relaxed distortion $D_k^*$ is introduced to replace the scaled value, which in this case does not satisfy the monotonicity requirement in Definition 2 and thus is not a valid choice of a distortion vector; nevertheless, in this case, the multiplicative gap between the original distortion vector and its relaxed version is in fact smaller, being $2^{1+\sum_{j=1}^{k-1} B_j}$ instead of $2^k$ for the $k$-th user as in the case already considered in Corollary 1.

Proposition 1 can be used in the situation where there are an infinite number of users, such as in a fading channel. Let the set of users be indexed by $x$ and their associated distortions be denoted as $\{D_x\}$, since there may be uncountably infinitely many of them. If we apply the construction given in (17), with the user index $k$ replaced by $x$, $D_x$ taking the role of $D_k$ and $B_x$ taking the role of $B_k$, then the following lemma is straightforward.

###### Lemma 2

The sequence specified by (17) satisfies $2^{\sum_x B_x} \le 2/\inf_x D_x^*$.

It is clear that the maximum multiplicative constant is less than $2^{1+\sum_x B_x}$ in the statement of Proposition 1. If there exists a lower bound on the achievable distortion for the best user, denoted as $d_{\min}$, which is strictly positive, i.e., $\inf_x D_x \ge d_{\min} > 0$, then since $D_x^* \ge D_x \ge d_{\min}$, the multiplicative factor can be bounded as

$$2^{1+\sum_x B_x} \le \frac{4}{d_{\min}}.$$

Thus even when the number of users is infinite, as long as the lower bound is bounded away from zero, the multiplicative factors are in fact finite. More formally, we have the following corollary. (Here we directly take the number of users to infinity in Proposition 1; a more rigorous approach is to derive the outer bounds for this case and show that the result still holds. This can indeed be done, either along the lines of the proof given in Section IV with careful replacement of summation by integration, or more directly along the lines of the proof given in Section V.)

###### Corollary 2

For an infinite number of users indexed by $x$ with $\inf_x D_x \ge d_{\min} > 0$, let $\{D_x^*\}$ be the relaxed distortion vector of $\{D_x\}$. If $\{D_x\}$ is achievable under power constraint $P$ and bandwidth mismatch factor $b$, then $\{D_x^*\}$ is achievable by the source-channel separation based scheme, and furthermore, $D_x^* \le (4/d_{\min}) D_x$.

The next corollary gives another version of the approximation, essentially stating that for any achievable distortion vector, a constant-fold multiple of it is achievable using the separation approach. In terms of the genie-aided interpretation, the genie only needs to provide a fixed number of bits of common information to the users in the separation-based scheme for it to be as good as the optimal scheme. More formally, the following corollary follows directly from Theorem 1.

###### Corollary 3

If $(D_1, D_2, \ldots, D_K) \in \mathcal{D}(P,b)$, then $(D_1', D_2', \ldots, D_K') \in \hat{\mathcal{D}}(P,b)$, where $(D_1', D_2', \ldots, D_K')$ is the correspondingly scaled distortion vector.

Theorem 1, Proposition 1 and the corollaries provide approximate characterizations of the achievable distortion region, essentially stating that the loss of the source-channel separation approach is bounded by constants. The bound on the gap is chosen to be (largely) independent of a specific distortion tuple on the boundary of $\mathcal{D}(P,b)$, but it will become clear in the next section that such a choice is not necessary.

The proofs of Theorem 1 and Proposition 1 rely heavily on the following outer bound, which is one of the main contributions of this work.

###### Theorem 2

Let $\tau_1 \ge \tau_2 \ge \cdots \ge \tau_{K-1} \ge 0$ be arbitrary non-negative real values, and define $\tau_K = 0$. If $(D_1, D_2, \ldots, D_K) \in \mathcal{D}(P,b)$, then

$$\sum_{k=1}^{K} \Delta N_k \left[\frac{(1+\tau_k)\prod_{j=2}^{k}(D_j+\tau_{j-1})}{\prod_{j=1}^{k}(D_j+\tau_j)}\right]^{\frac{1}{b}} \le P + N_1. \tag{18}$$

With the above theorem in mind, let us denote the set of distortion vectors satisfying (18) for a specific choice of $(\tau_1, \ldots, \tau_{K-1})$ as $\underline{\mathcal{D}}(P, b, \tau_1, \ldots, \tau_{K-1})$, i.e., (19) as given at the top of the next page.

Thus Theorem 2 essentially states that $\mathcal{D}(P,b) \subseteq \underline{\mathcal{D}}(P, b, \tau_1, \ldots, \tau_{K-1})$ for any valid choice of $(\tau_1, \ldots, \tau_{K-1})$. The following corollary is then immediate.

###### Corollary 4
$$\mathcal{D}(P,b) \subseteq \bigcap_{\tau_1 \ge \tau_2 \ge \cdots \ge \tau_{K-1} \ge 0} \underline{\mathcal{D}}(P, b, \tau_1, \ldots, \tau_{K-1}). \tag{20}$$

To illustrate Corollary 4, let us consider the two-user case, for which the bound involves only one parameter $\tau_1$. For this case, it can be shown through some algebra that this outer bound is equivalent to the one given in [8]. In Fig. 2, we illustrate the outer bounds for several specific choices of $\tau_1$. For comparison, the achievable region using the scheme proposed in [8] is also given. Note that although the inner bound given by this scheme is extremely close to the outer bound, it appears that they do not match exactly.

It is worth emphasizing that we view this outer bound differently from the authors of [8]: for each possible value of $(\tau_1, \ldots, \tau_{K-1})$, we view the condition (18) as specifying an outer bound for the distortion region $\mathcal{D}(P,b)$; in contrast, the authors of [8] viewed one distortion as being lower bounded by a function of the other, and the parameter $\tau_1$ was viewed as an additional variable subject to optimization, so that only the optimal choice of its value was of interest. These two views are complementary; however, the former appears more natural for the $K$-user problem, and it also readily leads to the approximate characterizations. In certain cases, the second view may be more convenient, such as when we are given a specific achievable distortion tuple and wish to determine how much further improvement is possible or impossible.

For $K = 2$, the properties of the outer bound were thoroughly investigated in [8]. In certain regimes, this outer bound in fact degenerates in the case of bandwidth compression, and it is looser than the trivial outer bound in which each user is optimal in the point-to-point setting (we would like to thank Dr. Zvi Reznic for clarifying this point in a private communication). Due to its non-linear form, the optimization of this bound is rather difficult, and it also appears difficult to determine whether it is always looser than the trivial outer bound in all distortion regimes with bandwidth compression. Nevertheless, it is clear that this outer bound always holds, whether the bandwidth is expanded or compressed, and the approximate characterizations are valid in either case.

A different and simpler approximate characterization may in fact be more useful for the bandwidth compression case (we would again like to thank David Tse, as well as an anonymous reviewer, for pointing out this different approximate characterization). Consider a different genie who helps the separation-based scheme by giving each individual user half a bit of information per channel use, with all the better users also receiving this half bit of information for free; then the genie-aided separation-based scheme is as good as the optimal scheme, and moreover each user can in fact achieve the optimal point-to-point distortion. To see that this approximation holds, first observe that the following broadcast channel rates are achievable by the Gaussian broadcast channel capacity region characterization [17] (this is particularly easy to verify using the alternative Gaussian broadcast channel capacity characterization given in (22)):

$$R_k = \max\left(\frac{1}{2}\log_2\Big(1+\frac{P}{N_k}\Big) - \frac{1}{2}\log_2\Big(1+\frac{P}{N_{k-1}}\Big) - \frac{1}{2},\; 0\right), \qquad k = 1, 2, \ldots, K. \tag{21}$$

The $k$-th user can thus utilize a total rate of $\sum_{j=1}^{k} R_j$ per channel use on this broadcast channel; together with the genie-provided rates, it will have at least a total rate of $\frac{1}{2}\log_2(1+P/N_k)$ per channel use, i.e., the optimal point-to-point channel rate. Since the Gaussian source is successively refinable [15], it is now clear that each user can achieve the optimal point-to-point distortion with this genie-aided separation-based scheme. Note that though this approximation is good for bandwidth compression, it can be rather loose when the bandwidth expansion factor is large. In contrast, the approximations given in Theorem 1 and Proposition 1 are independent of the bandwidth mismatch factor (the genie provides information in terms of bits per source sample); another difference is that the approximations given in Theorem 1 and Proposition 1 rely on the new outer bound, instead of the simple point-to-point distortion outer bound.
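The genie-aided argument around (21) is easy to verify numerically. The sketch below (our own illustration; it assumes the convention that the $k=1$ term in (21) has no predecessor rate, i.e., $N_0 = \infty$) computes the rates and confirms that $\sum_{j \le k} R_j + k/2$ reaches the point-to-point capacity $\frac{1}{2}\log_2(1+P/N_k)$:

```python
import math

P = 10.0
N = [4.0, 2.0, 1.0]            # N_1 >= ... >= N_K; N_0 = infinity is assumed

def c(snr):                    # point-to-point capacity, bits per channel use
    return 0.5 * math.log2(1.0 + snr)

R = []
for k, Nk in enumerate(N):
    prev = c(P / N[k - 1]) if k > 0 else 0.0   # the k=1 term has no predecessor
    R.append(max(c(P / Nk) - prev - 0.5, 0.0)) # rates per (21)

# With the genie's k/2 extra bits, user k reaches its point-to-point capacity
for k in range(len(N)):
    total = sum(R[: k + 1]) + 0.5 * (k + 1)
    assert total >= c(P / N[k]) - 1e-12
```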

It is clear from the above discussion that the outer bound in Theorem 1 may be further improved by taking its intersection with the trivial point-to-point outer bound. In the remainder of this paper, we do not pursue such possible improvements, but instead focus on the proofs for the results stated in Theorem 1 and Proposition 1.

## IV Proof of the Main Results for Gaussian Broadcast Channels

The proofs of the main results for Gaussian source broadcast on Gaussian broadcast channels are given in this section. We start by establishing a simple inner bound for the distortion region based on source-channel separation, and then focus on deriving an outer bound, or more precisely a set of outer bounds. The approximate characterizations are then rather straightforward by combining these two bounds. From here on, we shall use natural logarithm for concreteness, though choosing logarithm of a different base does not make any essential difference.

### IV-A A Simple Inner Bound

The source-channel separation based coding scheme we consider is extremely simple: the combination of a Gaussian successive refinement source code and a Gaussian broadcast channel code. This scheme was thoroughly investigated in [16], where a solution for the optimal power allocation was given to minimize the expected end-user distortion. Since the Gaussian broadcast channel is degraded, a better user can always completely decode the messages sent to the worse users, and thus a successive refinement source code is a perfect match for this channel. Note that such a source-channel separation approach is not optimal in general for this joint source-channel coding problem; see, for example, [4].

The Gaussian broadcast channel capacity region is well known [17]; it is usually given in a parametric form in terms of the power allocation. In this work, we will use an alternative representation, which first appeared in [18] and was instrumental in deriving the optimal power allocation solution in [16]. The Gaussian broadcast channel capacity region (per channel use) can be written in the form of (22), as given at the top of the next page.

The rate $R_k$ is the individual message rate intended only for the $k$-th user; due to the degradedness, however, all the better users can also decode this message. Since the Gaussian source is successively refinable [15], by combining an optimal Gaussian successive refinement source code with a Gaussian broadcast code that (asymptotically) achieves (22), we have the following theorem.

###### Theorem 3
$$\hat{\mathcal{D}}(P,b) \subseteq \mathcal{D}(P,b). \tag{23}$$
**Proof.**

We wish to show that any $(D_1, D_2, \ldots, D_K) \in \hat{\mathcal{D}}(P,b)$ is indeed achievable. Using the separation scheme, we only need to show that the channel rates specified by

$$D_k = \exp\Big(-2b\sum_{j=1}^{k} R_j\Big), \qquad k = 1, 2, \ldots, K, \tag{24}$$

are achievable on this Gaussian broadcast channel. The non-negative rate vector $(R_1, R_2, \ldots, R_K)$ is uniquely determined by $(D_1, D_2, \ldots, D_K)$, and it is straightforward to see that it indeed satisfies the inequality in (22). The proof is thus complete.
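This step can be checked numerically. The sketch below (our own illustration; it presumes the alternative representation (22) takes the exponential-sum form $\sum_k \Delta N_k \exp(2\sum_{j=1}^{k} R_j) \le P + N_1$ in nats, as suggested by (27) and (38)) verifies that the map (24) gives $D_k^{-1/b} = \exp(2\sum_{j \le k} R_j)$, so the same inequality that certifies the rates also certifies the distortion vector:

```python
import math

P, b = 10.0, 2.0
N = [4.0, 2.0, 1.0]                       # N_1 >= N_2 >= N_3
dN = [N[k] - (N[k + 1] if k + 1 < len(N) else 0.0) for k in range(len(N))]

R = [0.15, 0.25, 0.40]                    # candidate rates, nats per channel use
# Assumed form of (22): sum_k dN_k * exp(2 * sum_{j<=k} R_j) <= P + N_1
cum, lhs = 0.0, 0.0
for k in range(len(N)):
    cum += R[k]
    lhs += dN[k] * math.exp(2.0 * cum)
assert lhs <= P + N[0], "rates not achievable on this broadcast channel"

# Separation-based distortions (24): D_k = exp(-2 b * sum_{j<=k} R_j)
D, cum = [], 0.0
for k in range(len(N)):
    cum += R[k]
    D.append(math.exp(-2.0 * b * cum))

# D_k^{-1/b} recovers exp(2 * sum R_j), so the same inequality certifies D
assert abs(sum(dN[k] * D[k] ** (-1.0 / b) for k in range(len(N))) - lhs) < 1e-9
```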

### Iv-B An Outer Bound

Next we derive a set of conditions that any achievable distortion vector has to satisfy, i.e., Theorem 2.

**Proof of Theorem 2.** Let us first introduce a set of auxiliary random variables, defined as

$$U_k = S + V_k, \qquad k = 1, 2, \ldots, K-1, \tag{25}$$

where the $V_k$'s are zero-mean Gaussian random variables with variance $\tau_k$, and furthermore

$$V_k = V_{k+1} + \Delta V_k, \qquad k = 1, 2, \ldots, K-1, \tag{26}$$

where $\Delta V_k$ is a zero-mean Gaussian random variable, independent of everything else, with variance $\tau_k - \tau_{k+1}$. For convenience, we define $V_K = 0$, which implies $U_K = S$; furthermore, define $U_0$ to be a constant. This technique of introducing auxiliary random variables beyond those in the original problem was previously used in [12, 8, 13] to derive outer bounds; specifically, in [13] more than one random variable was introduced, whereas in [12, 8] only one was introduced.

For any encoding and decoding functions, we consider a quantity which bears some similarity to the expression for the Gaussian broadcast channel capacity (22); we denote this quantity as $E_{f,g}(\tau_1, \tau_2, \ldots, \tau_{K-1})$ due to its sum-exponential form:

$$E_{f,g}(\tau_1, \tau_2, \ldots, \tau_{K-1}) \triangleq \sum_{k=1}^{K} \Delta N_k \exp\left[\frac{2}{n}\sum_{j=1}^{k} I\big(U_j^m; Y_j^n \,\big|\, U_1^m, U_2^m, \ldots, U_{j-1}^m\big)\right]. \tag{27}$$

The subscript makes it clear that this quantity depends on the specific encoding and decoding functions. Next we shall derive universal upper and lower bounds on this quantity, regardless of the specific choice of the functions $f$ and $g_1, \ldots, g_K$, which eventually yield an outer bound for $\mathcal{D}(P,b)$.

Let $f$ and $g_1, \ldots, g_K$ be any encoding and decoding functions that (asymptotically) achieve the distortions $(D_1, D_2, \ldots, D_K)$. We first derive a lower bound for $E_{f,g}$. Observe that for $j = 2, 3, \ldots, K$,

$$\begin{aligned} I\big(U_j^m; Y_j^n \,\big|\, U_1^m, U_2^m, \ldots, U_{j-1}^m\big) &= I(U_j^m; Y_j^n) - I(U_{j-1}^m; Y_j^n) \\ &\ge \frac{m}{2}\log\frac{(1+\tau_j)(D_j+\tau_{j-1})}{(1+\tau_{j-1})(D_j+\tau_j)}, \end{aligned} \tag{28}$$

where the equality is due to the Markov string $U_1^m \leftrightarrow U_2^m \leftrightarrow \cdots \leftrightarrow U_j^m \leftrightarrow Y_j^n$, and the inequality is by Lemma 1. Moreover, also by Lemma 1, we have

$$I(U_1^m; Y_1^n) \ge \frac{m}{2}\log\frac{1+\tau_1}{D_1+\tau_1}. \tag{29}$$

It follows that

$$\begin{aligned} \sum_{j=1}^{k} I\big(U_j^m; Y_j^n \,\big|\, U_1^m, U_2^m, \ldots, U_{j-1}^m\big) &\ge \frac{m}{2}\log\frac{1+\tau_1}{D_1+\tau_1} + \frac{m}{2}\sum_{j=2}^{k}\log\frac{(1+\tau_j)(D_j+\tau_{j-1})}{(1+\tau_{j-1})(D_j+\tau_j)} \\ &= \frac{m}{2}\log\frac{1+\tau_k}{D_1+\tau_1} + \frac{m}{2}\sum_{j=2}^{k}\log\frac{D_j+\tau_{j-1}}{D_j+\tau_j}. \end{aligned} \tag{30}$$

Summarizing the above bounds, we have

$$E_{f,g}(\tau_1, \tau_2, \ldots, \tau_{K-1}) \ge \sum_{k=1}^{K} \Delta N_k \exp\left[\frac{1}{b}\log\frac{1+\tau_k}{D_1+\tau_1} + \frac{1}{b}\sum_{j=2}^{k}\log\frac{D_j+\tau_{j-1}}{D_j+\tau_j}\right]. \tag{31}$$

Next we turn to upper-bounding $E_{f,g}$, and first write the following:

$$\begin{aligned} \frac{2}{n}\sum_{j=1}^{k} I\big(U_j^m; Y_j^n \,\big|\, U_1^m, U_2^m, \ldots, U_{j-1}^m\big) &= \frac{2}{n}\sum_{j=1}^{k}\big[I(U_j^m; Y_j^n) - I(U_{j-1}^m; Y_j^n)\big] \\ &= \frac{2}{n}\sum_{j=1}^{k}\big[h(Y_j^n \mid U_{j-1}^m) - h(Y_j^n \mid U_j^m)\big] \\ &= \frac{2}{n}\sum_{j=1}^{k} h(Y_j^n \mid U_{j-1}^m) - \frac{2}{n}\sum_{j=1}^{k} h(Y_j^n \mid U_j^m). \end{aligned} \tag{32}$$

Applying the entropy power inequality [19] for $j = 1, 2, \ldots, K-1$, we have

$$\begin{aligned} \exp\left[\frac{2}{n} h(Y_j^n \mid U_j^m)\right] &\ge \exp\left[\frac{2}{n} h(Y_{j+1}^n \mid U_j^m)\right] + \exp\big[\log(2\pi e \Delta N_j)\big] \\ &= \exp\left[\frac{2}{n} h(Y_{j+1}^n \mid U_j^m)\right] + 2\pi e \Delta N_j. \end{aligned} \tag{33}$$

For $j = K$, it is clear that

$$\exp\left[\frac{2}{n} h(Y_K^n \mid U_K^m)\right] = \exp\left[\frac{2}{n} h(Y_K^n \mid S^m)\right] = 2\pi e N_K = 2\pi e \Delta N_K. \tag{34}$$

By defining $\exp\big[\frac{2}{n} h(Y_{K+1}^n \mid U_K^m)\big] \triangleq 0$, it now follows that

$$\begin{aligned} E_{f,g}(\tau_1, \tau_2, \ldots, \tau_{K-1}) &= \sum_{k=1}^{K} \Delta N_k \exp\left[\frac{2}{n}\sum_{j=1}^{k} I\big(U_j^m; Y_j^n \,\big|\, U_1^m, U_2^m, \ldots, U_{j-1}^m\big)\right] \\ &\le \sum_{k=1}^{K} \Delta N_k \frac{\exp\left[\frac{2}{n}\sum_{j=1}^{k} h(Y_j^n \mid U_{j-1}^m)\right]}{\prod_{j=1}^{k}\left[\exp\left(\frac{2}{n} h(Y_{j+1}^n \mid U_j^m)\right) + 2\pi e \Delta N_j\right]}. \end{aligned} \tag{35}$$

We bound this summation by considering the summands in reverse order, i.e., $k = K, K-1, \ldots, 1$. Starting with the summands for $k = K$ and $k = K-1$, we have (36) as given at the top of the next page.

Continuing this line of reduction, we finally arrive at (37), where the last inequality is by the concavity of the $\log(\cdot)$ function and the given power constraint.

Combining (31) and (37), it is clear that for any encoding and decoding functions,

$$P + N_1 \ge E_{f,g}(\tau_1, \tau_2, \ldots, \tau_{K-1}) \ge \sum_{k=1}^{K} \Delta N_k \exp\left[\frac{1}{b}\log\frac{1+\tau_k}{D_1+\tau_1} + \frac{1}{b}\sum_{j=2}^{k}\log\frac{D_j+\tau_{j-1}}{D_j+\tau_j}\right], \tag{38}$$

which completes the proof.

The newly introduced random variable $U_k$ can be roughly understood as the message meant for the $k$-th user. Under this interpretation, the term $I(U_k^m; Y_k^n \mid U_1^m, \ldots, U_{k-1}^m)$ in the quantity $E_{f,g}$ essentially represents the individual rate intended for the $k$-th user in the Gaussian broadcast channel; this informal understanding provides the rationale for bounding $E_{f,g}$. This interpretation is nevertheless not completely accurate, and thus the outer bound is likely not tight in general, but it suffices to provide the approximate characterizations.

### Iv-C The Approximate Characterizations

Now we are ready to prove Theorem 1 and Proposition 1.

**Proof of Theorem 1 and Proposition 1.** The first inclusion in Theorem 1 is simply Theorem 3, and thus we focus on the other inclusion, $\mathcal{D}(P,b) \subseteq \underline{\mathcal{D}}^*(P,b) \cap \underline{\mathcal{D}}(P,b)$, for which we prove $\mathcal{D}(P,b) \subseteq \underline{\mathcal{D}}^*(P,b)$ and $\mathcal{D}(P,b) \subseteq \underline{\mathcal{D}}(P,b)$ separately. From Theorem 2, it is clear that if $(D_1, D_2, \ldots, D_K) \in \mathcal{D}(P,b)$, then (18) holds for any valid $(\tau_1, \ldots, \tau_{K-1})$, and thus (18) holds when we choose $\tau_k = D_k$ for $k = 1, 2, \ldots, K-1$. It follows that the following condition has to be satisfied by any achievable distortion vector:

$$\sum_{k=1}^{K} \Delta N_k \left[\frac{(1+D_k)\prod_{j=2}^{k}(D_j+D_{j-1})}{\prod_{j=1}^{k}(D_j+D_j)}\right]^{\frac{1}{b}} \le P + N_1. \tag{39}$$

However, notice that

$$\sum_{k=1}^{K} \Delta N_k \left[\frac{(1+D_k)\prod_{j=2}^{k}(D_j+D_{j-1})}{\prod_{j=1}^{k}(D_j+D_j)}\right]^{\frac{1}{b}} \ge \sum_{k=1}^{K} \Delta N_k \left[\frac{\prod_{j=2}^{k} D_{j-1}}{\prod_{j=1}^{k} 2D_j}\right]^{\frac{1}{b}} = \sum_{k=1}^{K} \Delta N_k \big(2^k D_k\big)^{-\frac{1}{b}}. \tag{40}$$

It now follows straightforwardly that any achievable distortion vector has to satisfy

$$\sum_{k=1}^{K} \Delta N_k \big(2^k D_k\big)^{-\frac{1}{b}} \le P + N_1, \tag{41}$$

and $\mathcal{D}(P,b) \subseteq \underline{\mathcal{D}}^*(P,b)$ is proved.
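Condition (41) is easy to evaluate as a necessary test on candidate distortion tuples; the following sketch (our own illustration) shows how it rules some tuples out:

```python
import math

def outer_ok(D, N, P, b):
    """Necessary condition (41): sum_k dN_k * (2^k D_k)^(-1/b) <= P + N_1."""
    dN = [N[k] - (N[k + 1] if k + 1 < len(N) else 0.0) for k in range(len(N))]
    lhs = sum(dN[k] * (2 ** (k + 1) * D[k]) ** (-1.0 / b) for k in range(len(D)))
    return lhs <= P + N[0]

P, b, N = 10.0, 1.0, [4.0, 2.0, 1.0]
assert outer_ok([0.9, 0.8, 0.7], N, P, b)          # a loose tuple passes the test
assert not outer_ok([1e-4, 1e-5, 1e-6], N, P, b)   # an overly ambitious tuple is ruled out
```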

To prove