Capacity Pre-Log of Noncoherent SIMO Channels via Hironaka’s Theorem

Abstract

We find the capacity pre-log of a temporally correlated Rayleigh block-fading single-input multiple-output (SIMO) channel in the noncoherent setting. It is well known that for block-length $N$ and rank of the channel covariance matrix equal to $Q$, the capacity pre-log in the single-input single-output (SISO) case is given by $1 - Q/N$. Here, $Q/N$ can be interpreted as the pre-log penalty incurred by channel uncertainty. Our main result reveals that, by adding only one receive antenna, this penalty can be reduced to $1/N$ and can, hence, be made to vanish for the block-length $N \to \infty$, even if $Q/N$ remains constant as $N \to \infty$. Intuitively, even though the SISO channels between the transmit antenna and the two receive antennas are statistically independent, the transmit signal induces enough statistical dependence between the corresponding receive signals for the second receive antenna to be able to resolve the uncertainty associated with the first receive antenna’s channel and thereby make the overall system appear coherent. The proof of our main theorem is based on a deep result from algebraic geometry known as Hironaka’s Theorem on the Resolution of Singularities.


1 Introduction

It is well known that the capacity pre-log, i.e., the asymptotic ratio between capacity and the logarithm of the signal-to-noise ratio (SNR) as SNR goes to infinity, of a single-input multiple-output (SIMO) fading channel in the coherent setting (i.e., when the receiver has perfect channel state information (CSI)) is equal to 1 and is, hence, the same as that of a single-input single-output (SISO) fading channel [4]. This result holds under very general assumptions on the channel statistics. Hence, multiple antennas at the receiver only do not result in an increase of the capacity pre-log in the coherent setting [4]. In the noncoherent setting, where neither transmitter nor receiver have CSI, but both know the channel statistics, the effect of multiple antennas on the capacity pre-log is understood only for a specific simple channel model, namely, the Rayleigh constant block-fading model. In this model the channel is assumed to remain constant over a block of $N$ symbols and to change in an independent fashion from block to block [5]. The corresponding SIMO capacity pre-log is again equal to the SISO capacity pre-log, but, differently from the coherent setting, is given by $1 - 1/N$ [6, 7].

An alternative approach to capturing channel variations in time is to assume that the fading process is stationary. In this case, the capacity pre-log is known only in the SISO [8] and the multiple-input single-output (MISO) [9, Thm. 4.15] cases. The capacity bounds for the SIMO stationary-fading channel available in the literature [9, Thm. 4.13] do not allow one to determine whether the capacity pre-log in the SIMO case equals that in the SISO case. Resolving this question for stationary fading seems elusive at this point.

A widely used channel model that can be seen as lying in between the stationary-fading model considered in [8, 9] and the simpler constant block-fading model analyzed in [5, 7] is the correlated block-fading model, which assumes that the fading process is temporally correlated within blocks of length $N$ and independent across blocks. The channel covariance matrix of rank $Q$ is taken to be the same for each block. This channel model is relevant as it captures channel variations in time in an accurate yet simple fashion: the rank $Q$ of the covariance matrix corresponds to the minimum number of channel coefficients per block that need to be known at the receiver to perfectly reconstruct all channel coefficients within the same block. Therefore, larger $Q$ corresponds to faster channel variations.

The SISO capacity pre-log for correlated block-fading channels is given by $1 - Q/N$ [10]. In the SIMO and the multiple-input multiple-output (MIMO) cases the capacity pre-log is unknown. The main contribution of this paper is a full characterization of the capacity pre-log for SIMO correlated block-fading channels. Specifically, we prove that under a mild technical condition on the channel covariance matrix, the SIMO capacity pre-log, $\chi$, of a channel with $R$ receive antennas and independent identically distributed (i.i.d.) SISO subchannels is given by

$$\chi = \min\left\{1 - \frac{1}{N},\; R\left(1 - \frac{Q}{N}\right)\right\}. \tag{1}$$

This shows that even with $R = 2$ receive antennas a capacity pre-log of $1 - 1/N$ can be obtained in the SIMO case (provided that $2(N - Q) \ge N - 1$). This capacity pre-log is strictly larger than the capacity pre-log of the corresponding SISO channel (i.e., the capacity pre-log of one of the component channels), given by $1 - Q/N$. Here $Q/N$ can be interpreted as the pre-log penalty due to channel uncertainty. Our result reveals that, by adding at least one receive antenna, this penalty can be made to vanish in the large block-length limit, $N \to \infty$, even if the rank $Q$ scales linearly in the block-length.
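For concreteness, the pre-log formula \frefeq:prelogans is easy to evaluate numerically. The following sketch (in Python; the function name and the parameter values are ours, chosen purely for illustration) compares the SISO and SIMO pre-logs for the example $N = 3$, $Q = 2$ that is worked out in the intuitive analysis below:

```python
from fractions import Fraction

def simo_prelog(N, Q, R):
    """Capacity pre-log according to (1): min{1 - 1/N, R(1 - Q/N)}."""
    return min(Fraction(N - 1, N), R * Fraction(N - Q, N))

print(simo_prelog(3, 2, 1))  # 1/3: SISO pre-log, equals 1 - Q/N
print(simo_prelog(3, 2, 2))  # 2/3: adding one receive antenna yields 1 - 1/N
```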

A conjecture for the correlated block-fading channel model stated in [10] for the MIMO case, when particularized to the SIMO case, implies that the capacity pre-log in the SIMO case would be the same as that in the SISO case. As a consequence of \frefeq:prelogans this conjecture is disproved.

In terms of the technical aspects of our main result, we sandwich capacity between an upper and a lower bound that turn out to be asymptotically (in SNR) tight (in the sense of delivering the same capacity pre-log). The upper bound is established by proving that the capacity pre-log of a correlated block-fading channel with $R$ receive antennas can be upper-bounded by the capacity pre-log of a constant block-fading channel with $RQ$ receive antennas and the same SNR. The derivation of the capacity pre-log lower bound poses serious technical challenges. Specifically, after a change of variables argument applied to the integral expression for the differential entropy of the channel output signal, the main technical difficulty lies in showing that the expected logarithm of the Jacobian determinant corresponding to this change of variables is finite. As the Jacobian determinant takes on a very involved form, a per pedes approach appears infeasible. The problem is resolved by first distilling structural properties of the determinant through a suitable factorization and then invoking a powerful tool from algebraic geometry, namely [11, Th. 2.3], which is a consequence of Hironaka’s Theorem on the Resolution of Singularities [12, 13]. Roughly speaking, this result allows one to rewrite every real analytic function [14, Def. 1.1.5, Def. 2.2.1] locally as a product of a monomial and a nonvanishing real analytic function. This factorization is then used to show that the integral of the logarithm of the absolute value of a real analytic function over a compact set is finite, provided that the function is not identically zero. This method is quite general and may be of independent interest when one tries to show that integrals of certain functions with singularities are finite, in particular, functions involving logarithms. In information theory, such integrals often occur when analyzing differential entropy.

Notation

Sets are denoted by calligraphic letters. Roman letters $A$ and $a$ designate deterministic matrices and vectors, respectively. Boldface letters $\mathbf{A}$ and $\mathbf{a}$ denote random matrices and random vectors, respectively. We let $e_i$ be the vector (of appropriate dimension) that has the $i$th entry equal to one and all other entries equal to zero, and denote the $M \times M$ identity matrix as $I_M$. The element in the $i$th row and $j$th column of a deterministic matrix $A$ is $a_{ij}$ (italic letters), and the $i$th component of the deterministic vector $a$ is $a_i$ (italic letters); the element in the $i$th row and $j$th column of a random matrix $\mathbf{A}$ is $\mathsf{a}_{ij}$ (sans serif letters), and the $i$th component of the random vector $\mathbf{a}$ is $\mathsf{a}_i$ (sans serif letters). For a vector $a$, $\operatorname{diag}(a)$ stands for the diagonal matrix that has the entries of $a$ on its main diagonal. The linear subspace spanned by the vectors $a_1, \dots, a_k$ is denoted by $\operatorname{span}\{a_1, \dots, a_k\}$. The superscripts $^T$ and $^H$ stand for transposition and Hermitian transposition, respectively. For two matrices $A$ and $B$, we designate their Kronecker product as $A \otimes B$; to simplify notation, we use the convention that the ordinary matrix product precedes the Kronecker product, i.e., $AB \otimes C \triangleq (AB) \otimes C$. For a finite subset $\mathcal{I}$ of the set of natural numbers, $\mathbb{N}$, we write $|\mathcal{I}|$ for the cardinality of $\mathcal{I}$. For an $M \times N$ matrix $A$ and a set of indices $\mathcal{I} \subseteq \{1, \dots, M\}$, we use $A_{\mathcal{I}}$ to denote the submatrix of $A$ containing the rows of $A$ with indices in $\mathcal{I}$. For two matrices $A$ and $B$ of arbitrary size, $A \oplus B$ is the block-diagonal matrix that has $A$ in the upper left corner and $B$ in the lower right corner. For matrices $A_1, \dots, A_K$, we let $\bigoplus_{k=1}^{K} A_k \triangleq A_1 \oplus \cdots \oplus A_K$. The ordered eigenvalues of the $M \times M$ matrix $A$ are denoted by $\lambda_1(A) \ge \cdots \ge \lambda_M(A)$. For two functions $f$ and $g$, the notation $f(x) = \mathcal{O}(g(x))$ means that $f(x)/g(x)$ is bounded. For a function $f$, we say that $f$ is not identically zero and write $f \not\equiv 0$ if there exists at least one element $x$ in the domain of $f$ such that $f(x) \neq 0$. We say that a function $f$ is nonvanishing on a subset $\mathcal{A}$ of its domain if $f(x) \neq 0$ for all $x \in \mathcal{A}$. For two functions $f$ and $g$, $f \circ g$ denotes the composition $f(g(\cdot))$. We use $[N]$ to designate the set of natural numbers $\{1, \dots, N\}$. Let $f$ be a vector-valued function; then $J_f$ denotes the Jacobian matrix [15, Def. 3.8] of the function $f$, i.e., the matrix that contains the partial derivative $\partial f_i / \partial x_j$ in its $i$th row and $j$th column. The logarithm to the base 2 is written as $\log$. With $\mathcal{B}_\delta(x)$, we denote the open cube in $\mathbb{R}^M$ with side length $\delta$ centered at $x \in \mathbb{R}^M$. The set of natural numbers, including zero, is $\mathbb{N}_0$. If $\mathcal{A}$ is a subset of the image of a map $f$, then $f^{-1}(\mathcal{A})$ denotes the inverse image of $\mathcal{A}$. The expectation operator is designated by $\mathbb{E}[\cdot]$. For random matrices $\mathbf{A}$ and $\mathbf{B}$, we write $\mathbf{A} \overset{d}{=} \mathbf{B}$ to indicate that $\mathbf{A}$ and $\mathbf{B}$ have the same distribution. Finally, $\mathcal{CN}(\mu, \Sigma)$ stands for the distribution of a jointly proper Gaussian (JPG) random vector with mean $\mu$ and covariance matrix $\Sigma$.

2 System Model

We consider a SIMO channel with $R$ receive antennas. The fading in each SISO component channel follows the correlated block-fading model described in the previous section. The input-output (IO) relation within any block of length $N$ for the $r$th SISO component channel can be written as

$$\mathbf{y}_r = \sqrt{\rho}\,\operatorname{diag}(\mathbf{x})\,\mathbf{h}_r + \mathbf{w}_r, \qquad r = 1, \dots, R, \tag{2}$$

where $\mathbf{x} = [\mathsf{x}_1 \cdots \mathsf{x}_N]^T$ is the signal vector transmitted in the given block, and the vectors $\mathbf{y}_r$ and $\mathbf{w}_r$ are the corresponding received signal and additive noise, respectively, at the $r$th receive antenna. Finally, $\mathbf{h}_r$ contains the channel coefficients between the transmit antenna and the $r$th receive antenna. We assume that $\mathbf{h}_r \sim \mathcal{CN}(0, R_h)$, for all $r$, where $R_h$ (which is the same for all blocks and all component channels) has rank $Q$. The entries of the vectors $\mathbf{h}_r$ are taken to be of unit variance, which implies that the main diagonal entries of $R_h$ are equal to 1 and the average received power is constant across time slots. It will turn out convenient to write the channel coefficient vector in whitened form as $\mathbf{h}_r = Z\mathbf{s}_r$, where $\mathbf{s}_r \sim \mathcal{CN}(0, I_Q)$ and the $N \times Q$ matrix $Z$ satisfies $ZZ^H = R_h$. Further, we assume that $\mathbf{w}_r \sim \mathcal{CN}(0, I_N)$. As the noise vector has unit variance components, $\rho$ in \frefeq:model1 can be interpreted as the SNR. Finally, we assume that $\mathbf{s}_r$ and $\mathbf{w}_r$ are mutually independent, independent across $r$, and change in an independent fashion from block to block. Note that for $Q = 1$ the correlated block-fading model reduces to the constant block-fading model as used in [6, 7].
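The following sketch simulates one fading block of the model just described (Python/NumPy; all variable names are ours, and the row normalization of $Z$ enforces the unit-variance assumption on the channel entries):

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, R, rho = 6, 2, 2, 100.0   # block length, covariance rank, antennas, SNR

def crandn(*shape):
    """i.i.d. CN(0,1) samples (proper Gaussians with unit variance)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Whitening matrix Z with unit-norm rows, so that the diagonal of Z Z^H is all ones.
Z = crandn(N, Q)
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

x = crandn(N)                     # transmitted block
y = []
for r in range(R):
    s_r = crandn(Q)               # whitened channel, s_r ~ CN(0, I_Q)
    h_r = Z @ s_r                 # channel coefficients, h_r = Z s_r
    w_r = crandn(N)               # additive noise, w_r ~ CN(0, I_N)
    y.append(np.sqrt(rho) * x * h_r + w_r)   # per-antenna IO relation (2)
y = np.concatenate(y)             # stacked receive vector, cf. the compact form (3) below
```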

With $\mathbf{y} \triangleq [\mathbf{y}_1^T \cdots \mathbf{y}_R^T]^T$, $\mathbf{s} \triangleq [\mathbf{s}_1^T \cdots \mathbf{s}_R^T]^T$, $\mathbf{w} \triangleq [\mathbf{w}_1^T \cdots \mathbf{w}_R^T]^T$, and $X \triangleq \operatorname{diag}(\mathbf{x})$, we can write the IO relation \frefeq:model1 in the following—more compact—form

$$\mathbf{y} = \sqrt{\rho}\,(I_R \otimes XZ)\,\mathbf{s} + \mathbf{w}. \tag{3}$$

The capacity of the channel (3) is defined as

$$C(\rho) \triangleq \sup I(\mathbf{x}; \mathbf{y}) \tag{4}$$

where the supremum is taken over all input distributions that satisfy the average-power constraint

$$\mathbb{E}\big[\|\mathbf{x}\|^2\big] \le N. \tag{5}$$

The capacity pre-log, the central quantity of interest in this paper, is defined as

$$\chi \triangleq \lim_{\rho \to \infty} \frac{C(\rho)}{\log \rho}.$$

3 Intuitive Analysis

We start with a simple “back-of-the-envelope” calculation that allows us to develop some intuition on the main result in this paper, summarized in \frefeq:prelogans. The different steps in the intuitive analysis below will be seen to have rigorous counterparts in the formal proof of the capacity pre-log lower bound detailed in Section 6.

The capacity pre-log characterizes the channel capacity behavior in the regime where additive noise can “effectively” be ignored. To guess the capacity pre-log, it therefore appears prudent to consider the problem of identifying the transmit symbols from the noise-free (and rescaled) observation

$$\mathbf{u} \triangleq (I_R \otimes XZ)\,\mathbf{s}, \qquad \text{i.e.,}\quad \mathsf{u}_{r,n} = \mathsf{x}_n\,\tilde{z}_n^T\,\mathbf{s}_r, \tag{6}$$

where $\tilde{z}_n^T$ denotes the $n$th row of $Z$.

Specifically, we shall ask the question: “How many symbols can be identified uniquely from $\mathbf{u}$, given that the vector $\mathbf{s}$ of channel coefficients is unknown but the statistics of the channel, i.e., the matrix $Z$, are known?” The claim we make is that the capacity pre-log is given by the number of identifiable symbols divided by the block length $N$.

We start by noting that the unknown variables in (6) are $\mathbf{x}$ and $\mathbf{s}$, which means that we have a quadratic system of equations. It turns out, however, that the simple change of variables

$$\hat{\mathsf{x}}_n \triangleq 1/\mathsf{x}_n, \qquad n = 1, \dots, N \tag{7}$$

(we make the technical assumption $\mathsf{x}_n \neq 0$, $n = 1, \dots, N$, in the remainder of this section) transforms (6) into a system of equations that is linear in $\hat{\mathbf{x}} \triangleq [\hat{\mathsf{x}}_1 \cdots \hat{\mathsf{x}}_N]^T$ and $\mathbf{s}$. Since the transformation \frefeq:changevar is invertible for $\mathsf{x}_n \neq 0$, uniqueness of the solution of the linear system of equations in $(\hat{\mathbf{x}}, \mathbf{s})$ is equivalent to uniqueness of the solution of the quadratic system of equations in $(\mathbf{x}, \mathbf{s})$.
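Spelled out componentwise (a worked restatement of \frefeq:IOnonoise and \frefeq:changevar), dividing the $(r,n)$th equation in (6) by $\mathsf{x}_n$ gives

$$\hat{\mathsf{x}}_n\,\mathsf{u}_{r,n} - \tilde{z}_n^T\,\mathbf{s}_r = 0, \qquad r = 1, \dots, R, \quad n = 1, \dots, N,$$

which, for a given observation $\mathbf{u}$, is a system of $RN$ equations that is linear in the $N + RQ$ unknowns $(\hat{\mathbf{x}}, \mathbf{s}_1, \dots, \mathbf{s}_R)$.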

For concreteness and simplicity of exposition, we first consider the case $N = 3$, $Q = 2$, and $R = 2$ and assume that $Z$ satisfies the technical condition specified in Theorem 1, stated in \frefsec:charprelog. A direct computation reveals that upon change of variables according to \frefeq:changevar, the quadratic system (6) can be rewritten as the following linear system of equations:

$$\begin{bmatrix}
\mathsf{u}_{1,1} & 0 & 0 & -\tilde{z}_1^T & 0 \\
0 & \mathsf{u}_{1,2} & 0 & -\tilde{z}_2^T & 0 \\
0 & 0 & \mathsf{u}_{1,3} & -\tilde{z}_3^T & 0 \\
\mathsf{u}_{2,1} & 0 & 0 & 0 & -\tilde{z}_1^T \\
0 & \mathsf{u}_{2,2} & 0 & 0 & -\tilde{z}_2^T \\
0 & 0 & \mathsf{u}_{2,3} & 0 & -\tilde{z}_3^T
\end{bmatrix}
\begin{bmatrix} \hat{\mathsf{x}}_1 \\ \hat{\mathsf{x}}_2 \\ \hat{\mathsf{x}}_3 \\ \mathbf{s}_1 \\ \mathbf{s}_2 \end{bmatrix} = \mathbf{0}. \tag{8}$$

The solution of \frefeq:linearsystem1 cannot be unique, as we have 6 equations in 7 unknowns. The transmit symbols $\mathsf{x}_n$ can, therefore, not be determined uniquely from $\mathbf{u}$. We can, however, render the solution of \frefeq:linearsystem1 unique if we devote one of the data symbols to transmitting a pilot symbol (known to the receiver). Take, for concreteness, $\mathsf{x}_1 = 1$, i.e., $\hat{\mathsf{x}}_1 = 1$. Then (8) reduces to the following inhomogeneous system of 6 equations in 6 unknowns:

$$\underbrace{\begin{bmatrix}
0 & 0 & -\tilde{z}_1^T & 0 \\
\mathsf{u}_{1,2} & 0 & -\tilde{z}_2^T & 0 \\
0 & \mathsf{u}_{1,3} & -\tilde{z}_3^T & 0 \\
0 & 0 & 0 & -\tilde{z}_1^T \\
\mathsf{u}_{2,2} & 0 & 0 & -\tilde{z}_2^T \\
0 & \mathsf{u}_{2,3} & 0 & -\tilde{z}_3^T
\end{bmatrix}}_{\triangleq\, B(\mathbf{u})}
\begin{bmatrix} \hat{\mathsf{x}}_2 \\ \hat{\mathsf{x}}_3 \\ \mathbf{s}_1 \\ \mathbf{s}_2 \end{bmatrix}
= -\begin{bmatrix} \mathsf{u}_{1,1} \\ 0 \\ 0 \\ \mathsf{u}_{2,1} \\ 0 \\ 0 \end{bmatrix}. \tag{9}$$

This system of equations has a unique solution if $\det B(\mathbf{u}) \neq 0$. We prove in Appendix 10 that under the technical condition on $Z$ specified in Theorem 1, stated in \frefsec:charprelog, we, indeed, have that $\det B(\mathbf{u}) \neq 0$ for almost all $\mathbf{u}$. It, therefore, follows that for almost all $\mathbf{u}$, the linear system of equations (9) has a unique solution. As explained above, this implies uniqueness of the solution of the original quadratic system of equations \frefeq:IOnonoise. We can therefore recover $\hat{\mathbf{x}}$ and, hence, $\mathbf{x}$ and $\mathbf{s}$ from $\mathbf{u}$. Summarizing our findings, we expect that the capacity pre-log of the channel (3), for the special case $N = 3$, $Q = 2$, and $R = 2$, is equal to $2/3$, which is larger than the capacity pre-log of the corresponding SISO channel (i.e., one of the SISO component channels), given by $1 - Q/N = 1/3$ [10]. This answer, obtained through the back-of-the-envelope calculation above, coincides with the rigorous result in Theorem 1.
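The uniqueness argument is easy to check numerically. The following sketch (Python/NumPy; a generic $Z$ is drawn at random, which satisfies the technical condition of Theorem 1 with probability one, cf. Section 4.3) builds the linear system \frefeq:linearsystem2 from a noiseless observation and recovers $\mathbf{x}$ and $\mathbf{s}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Q, R = 3, 2, 2

Z = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x[0] = 1.0                                   # pilot symbol x_1 = 1
s = rng.standard_normal((R, Q)) + 1j * rng.standard_normal((R, Q))

u = np.concatenate([x * (Z @ s[r]) for r in range(R)])   # noiseless output (6)

# Row (r, n) encodes  xhat_n * u_{r,n} - z~_n^T s_r = 0  with xhat_1 = 1 known;
# the unknowns are (xhat_2, xhat_3, s_1, s_2).
B = np.zeros((R * N, N - 1 + R * Q), dtype=complex)
b = np.zeros(R * N, dtype=complex)
for r in range(R):
    for n in range(N):
        row = r * N + n
        if n == 0:
            b[row] = -u[row]                 # pilot term moved to the right-hand side
        else:
            B[row, n - 1] = u[row]
        B[row, N - 1 + r * Q : N - 1 + (r + 1) * Q] = -Z[n]
sol = np.linalg.solve(B, b)                  # unique since det B != 0 almost surely

print(np.allclose(sol[:N - 1], 1 / x[1:]))   # True: xhat, and hence x, recovered
print(np.allclose(sol[N - 1:], s.ravel()))   # True: s_1 and s_2 recovered
```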

We next generalize what we learned in the example above to arbitrary $N$, $Q$, and $R$, and start by noting that if $(\hat{\mathbf{x}}, \mathbf{s}_1, \dots, \mathbf{s}_R)$ is a solution of \frefeq:linearsystem1 for fixed $\mathbf{u}$, then $(a\hat{\mathbf{x}}, a\mathbf{s}_1, \dots, a\mathbf{s}_R)$ with $a \in \mathbb{C} \setminus \{0\}$ is also a solution of this system of equations. It is therefore immediately clear that at least one pilot symbol is needed to make this system of equations uniquely solvable.

To guess the capacity pre-log for general parameters $N$, $Q$, and $R$, we first note that the homogeneous linear system of equations corresponding to that in \frefeq:linearsystem1 has $RN$ equations in $N + RQ$ unknowns. As the example above indicates, we need to seek conditions under which this homogeneous linear system of equations can be converted into a linear system of equations that has a unique solution. Provided that $Z$ satisfies the technical condition specified in \frefthm:mainLB below, this entails meeting the following two requirements: (i) at least one symbol is used as a pilot symbol to resolve the scaling ambiguity described in the previous paragraph; (ii) the number of unknowns in the system of equations corresponding to that in \frefeq:linearsystem1 must be smaller than or equal to the number of equations. To maximize the capacity pre-log we want to use the minimum number of pilot symbols that guarantees (i) and (ii). In order to identify this minimum, we have to distinguish two cases:

  1. When $R(N - Q) \le N - 1$ [in this case $\chi = R(1 - Q/N)$] we will need at least $N - R(N - Q)$ pilot symbols to satisfy requirement (ii). Since $N - R(N - Q) \ge 1$, choosing exactly $N - R(N - Q)$ pilot symbols will satisfy both requirements. The number of symbols left for communication will, therefore, be $N - (N - R(N - Q)) = R(N - Q)$. Hence, we expect the capacity pre-log to be given by $R(N - Q)/N = R(1 - Q/N)$, which agrees with the result stated in \frefeq:prelogans.

  2. When $R(N - Q) \ge N - 1$ [in this case $\chi = 1 - 1/N$], we will need at least one pilot symbol to satisfy requirement (i). Since requirement (ii) is satisfied as a consequence of $R(N - Q) \ge N - 1$, it suffices to choose exactly one pilot symbol. The number of symbols left for communication will, therefore, be $N - 1$ and we hence expect the capacity pre-log to equal $(N - 1)/N = 1 - 1/N$, which again agrees with the result stated in \frefeq:prelogans. Note that the resulting inhomogeneous linear system of equations has $RN$ equations in $N - 1 + RQ$ unknowns. As there are more equations than unknowns, $R(N - Q) - (N - 1)$ equations are redundant and can be eliminated. (A numerical consistency check of this pilot-counting rule is sketched below.)
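The pilot-counting rule in the two cases above condenses into a short consistency check (Python; exact rational arithmetic is used to avoid floating-point artifacts, and the helper name is ours):

```python
from fractions import Fraction

def min_pilots(N, Q, R):
    """Minimum number of pilot symbols satisfying requirements (i) and (ii)."""
    return max(1, N - R * (N - Q))

# The fraction of symbols left for data reproduces the pre-log formula (1):
for N, Q, R in [(3, 2, 2), (10, 5, 1), (10, 5, 2), (10, 9, 3)]:
    P = min_pilots(N, Q, R)
    assert Fraction(N - P, N) == min(Fraction(N - 1, N), R * Fraction(N - Q, N))
```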

The proof of our main result, stated in the next section, will provide rigorous justification for the heuristic arguments put forward in this section.

4 The Capacity Pre-Log

The main result of this paper is the following theorem.

Theorem 1.

Suppose that $Z$ satisfies the following

Property (A): Every $Q$ rows of $Z$ are linearly independent.

Then, the capacity pre-log of the SIMO channel (3) is given by

$$\chi = \min\left\{1 - \frac{1}{N},\; R\left(1 - \frac{Q}{N}\right)\right\}. \tag{10}$$
Remark 1.

We will prove \frefthm:mainLB by showing, in \frefsec:ub, that the capacity pre-log of the SIMO channel (3) can be upper-bounded as

$$\chi \le \min\left\{1 - \frac{1}{N},\; R\left(1 - \frac{Q}{N}\right)\right\} \tag{11}$$

and by establishing, in \frefsec:flwSIMO, the lower bound

$$\chi \ge \min\left\{1 - \frac{1}{N},\; R\left(1 - \frac{Q}{N}\right)\right\}. \tag{12}$$

While the upper bound \frefeq:capUB can be shown to hold even if $Z$ does not satisfy Property (A), this property is crucial to establish the lower bound \frefeq:mainbound.

Remark 2.

The lower bound \frefeq:mainbound continues to hold if Property (A) is replaced by the following milder condition on $Z$.

Property (A’): There exists a subset of indices $\mathcal{J} \subseteq \{1, \dots, N\}$ of appropriate cardinality (specified in [2]) such that every $Q$ rows of the submatrix $Z_{\mathcal{J}}$ are linearly independent.

We decided, however, to state our main result under the stronger Property (A) as both Property (A) and Property (A’) are very mild and the proof of the lower bound \frefeq:mainbound under Property (A’) is significantly more cumbersome and does not contain any new conceptual aspects. A sketch of the proof of the stronger result (i.e., under Property (A’)) can be found in [2].

We proceed to discuss the significance of \frefthm:mainLB.

4.1 Eliminating the prediction penalty

According to \frefeq:capprelog the capacity pre-log of the SIMO channel \frefeq:IOstacked with $R = 2$ receive antennas is given by $1 - 1/N$, provided that Property (A) holds and $2(N - Q) \ge N - 1$. Comparing to the capacity pre-log $1 - Q/N$ in the SISO case [10] (this result also follows from \frefeq:capprelog with $R = 1$), we see that—under a mild condition on the channel covariance matrix—adding only one receive antenna yields a reduction of the channel uncertainty-induced pre-log penalty from $Q/N$ to $1/N$. How significant is this reduction? Recall that $Q$ is the number of uncertain channel parameters within each given block of length $N$. Hence, the ratio $Q/N$ between the rank of the covariance matrix and the block-length is a measure that can be seen as quantifying the amount of channel uncertainty relative to the number of degrees of freedom for communication. It often makes sense to consider $N \to \infty$ with the amount of channel uncertainty $Q/N$ held constant. For concreteness, consider $N = 2Q$ so that $Q/N = 1/2$. The capacity pre-log penalty due to channel uncertainty in the SISO case is then given by $1/2$. \frefthm:mainLB reveals that, by adding a second receive antenna, this penalty can be reduced to $1/N$ and, hence, be made to vanish in the limit $N \to \infty$. Intuitively, even though the SISO channels between the transmit antenna and the two receive antennas are statistically independent, the transmit signal induces enough statistical dependence between the corresponding receive signals for the second receive antenna to be able to resolve the channel uncertainty associated with the first receive antenna’s channel and thereby make the overall system appear coherent.
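To illustrate the scaling, the following snippet evaluates the penalty $1 - \chi$ for growing $N$ with $Q/N = 1/2$ held fixed (Python; the parameter values are ours, chosen for illustration):

```python
# Pre-log penalty 1 - chi for N = 2Q, i.e., channel uncertainty Q/N fixed at 1/2.
for N in [4, 8, 16, 64, 256]:
    Q = N // 2
    siso_penalty = Q / N                                  # = 1/2, does not vanish
    simo_penalty = 1 - min(1 - 1 / N, 2 * (1 - Q / N))    # = 1/N for R = 2
    print(N, siso_penalty, simo_penalty)
```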

4.2 Number of receive antennas

Note that for $Q < N$, we can rewrite \frefeq:capprelog as

$$\chi = \frac{1}{N}\min\{N - 1,\; R(N - Q)\}. \tag{13}$$
Figure 1: The capacity pre-log of the SIMO channel \frefeq:IOstacked.

As illustrated in \freffig:prelog, it follows from \frefeq:car2 that for fixed $N$ and $Q$ with $Q < N$ the capacity pre-log of the SIMO channel \frefeq:IOstacked grows linearly with $R$ as long as $R$ is smaller than the critical value $(N - 1)/(N - Q)$. Once $R$ reaches this critical value, further increasing the number of receive antennas does not increase the capacity pre-log.
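In code, the saturation point is immediate (a sketch; the name R_crit is ours):

```python
from math import ceil

N, Q = 10, 7
R_crit = ceil((N - 1) / (N - Q))          # smallest R with R(N - Q) >= N - 1; here 3
print([min(1 - 1 / N, R * (1 - Q / N)) for R in range(1, 6)])
# approx. [0.3, 0.6, 0.9, 0.9, 0.9]: linear growth up to R_crit, then saturation at 1 - 1/N
```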

4.3 Property (A) is mild

Property (A) is not very restrictive and is satisfied by many practically relevant channel covariance matrices. For example, removing an arbitrary set of $N - Q$ columns from an $N \times N$ discrete Fourier transform (DFT) matrix results in an $N \times Q$ matrix that satisfies Property (A) when $N$ is prime [16]. (Weighted) DFT covariance matrices arise naturally in so-called basis-expansion models for time-selective channels [10].

Property (A) can furthermore be shown to be satisfied by “generic” matrices $Z$. Specifically, if the entries of $Z$ are chosen randomly and independently from a continuous distribution [17, Sec. 2-3, Def. (2)] (i.e., a distribution with a well-defined probability density function (PDF)), then the resulting matrix will satisfy Property (A) with probability one. The proof of this statement follows from a union bound argument together with the fact that $Q$ vectors in $\mathbb{C}^Q$ drawn independently from a continuous distribution are linearly independent with probability one.
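For small dimensions, Property (A) can be tested by brute force: it holds if and only if every $Q \times Q$ matrix built from $Q$ distinct rows of $Z$ is nonsingular. A sketch (Python/NumPy; the helper name is ours):

```python
import numpy as np
from itertools import combinations

def satisfies_property_A(Z):
    """True iff every Q rows of the N x Q matrix Z are linearly independent."""
    N, Q = Z.shape
    return all(np.linalg.matrix_rank(Z[list(rows), :]) == Q
               for rows in combinations(range(N), Q))

rng = np.random.default_rng(0)
N, Q = 5, 3

# A "generic" matrix (i.i.d. entries from a continuous distribution): holds a.s.
Z = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
print(satisfies_property_A(Z))          # True

# Q columns of an N x N DFT matrix with N prime: holds by [16].
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
print(satisfies_property_A(F[:, :Q]))   # True

# Counterexample: a repeated row violates Property (A).
Z_bad = Z.copy(); Z_bad[1] = Z_bad[0]
print(satisfies_property_A(Z_bad))      # False
```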

5 Proof of the Upper Bound \frefeq:capUB

The proof of \frefeq:capUB consists of two parts. First, in \frefsec:firstpart, we prove that $\chi \le R(1 - Q/N)$. This will be accomplished by generalizing—to the SIMO case—the approach developed in [10, Prop. 4] for establishing an upper bound on the SISO capacity pre-log. Second, in \frefsec:secondpart, we prove that $\chi \le 1 - 1/N$ by showing that the capacity of a SIMO channel with $R$ receive antennas and channel covariance matrix of rank $Q$ can be upper-bounded by the capacity of a SIMO channel with $RQ$ receive antennas, the same SNR, and a rank-1 covariance matrix. The desired result, $\chi \le 1 - 1/N$, then follows by application of [7, Eq. (27)], [18, Eq. (7)] as detailed below.

5.1 First part: $\chi \le R(1 - Q/N)$

To simplify notation, we first rewrite \frefeq:IOstacked as

$$\mathbf{y} = \sqrt{\rho}\,\mathbf{A}\,\mathbf{s} + \mathbf{w} \tag{14}$$

where $\mathbf{A} \triangleq I_R \otimes XZ$, $X = \operatorname{diag}(\mathbf{x})$, and $\mathbf{y}$, $\mathbf{s}$, and $\mathbf{w}$ are the stacked vectors defined in Section 2.

Recall that $Z$ has rank $Q$. Without loss of generality, we assume, in what follows, that the first $Q$ rows of $Z$ are linearly independent. This can always be ensured by reordering the scalar IO relations in \frefeq:model1. With $\bar{\mathbf{y}}_r \triangleq [\mathsf{y}_{r,1} \cdots \mathsf{y}_{r,Q}]^T$ and $\tilde{\mathbf{y}}_r \triangleq [\mathsf{y}_{r,Q+1} \cdots \mathsf{y}_{r,N}]^T$ we can write

$$\begin{aligned}
I(\mathbf{x};\mathbf{y}) &\overset{(a)}{=} I(\mathbf{x};\bar{\mathbf{y}}_1,\dots,\bar{\mathbf{y}}_R) + I(\mathbf{x};\tilde{\mathbf{y}}_1,\dots,\tilde{\mathbf{y}}_R \mid \bar{\mathbf{y}}_1,\dots,\bar{\mathbf{y}}_R)\\
&\overset{(b)}{=} I(\mathbf{x};\bar{\mathbf{y}}_1,\dots,\bar{\mathbf{y}}_R) + \sum_{r=1}^{R} I(\mathbf{x};\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_1,\dots,\bar{\mathbf{y}}_R,\tilde{\mathbf{y}}_1,\dots,\tilde{\mathbf{y}}_{r-1})\\
&\overset{(c)}{\le} I(\mathbf{x};\bar{\mathbf{y}}_1,\dots,\bar{\mathbf{y}}_R) + \sum_{r=1}^{R} I(\mathbf{x};\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r)
\end{aligned}\tag{15}$$

where (a) and (b) follow by the chain rule for mutual information and in (c) we used that $(\bar{\mathbf{y}}_r, \tilde{\mathbf{y}}_r)$ and $\{(\bar{\mathbf{y}}_{r'}, \tilde{\mathbf{y}}_{r'})\}_{r' \neq r}$ are independent conditional on $\mathbf{x}$, together with the fact that conditioning reduces entropy. Next, we upper-bound each term in \frefeq:UBRsimo1 separately.

From [19, Thm. 4.2] we can conclude that the assumption of the first rows of being linearly independent implies that the first term on the RHS of \frefeq:UBRsimo1 grows at most double-logarithmically with SNR and hence does not contribute to the capacity pre-log. For the reader’s convenience, we repeat the corresponding brief calculation from [19, Thm. 4.2] in \frefapp:repeatamos and show that:

$$I(\mathbf{x};\bar{\mathbf{y}}_1,\dots,\bar{\mathbf{y}}_R) = \mathcal{O}(\log\log\rho). \tag{16}$$

Here and in what follows, $\mathcal{O}(\cdot)$ refers to the limit $\rho \to \infty$.

For the second term in \frefeq:UBRsimo1 we can write, for each $r$,

$$\begin{aligned}
I(\mathbf{x};\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r) &= h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r) - h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r, \mathbf{x})\\
&\overset{(a)}{\le} h(\tilde{\mathbf{y}}_r) - h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r, \mathbf{x})\\
&\overset{(b)}{\le} \sum_{n=Q+1}^{N} h(\mathsf{y}_{r,n}) - h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r, \mathbf{x})\\
&\overset{(c)}{\le} \sum_{n=Q+1}^{N} \log\big(\pi e\,(1 + \rho\,\mathbb{E}[|\mathsf{x}_n|^2])\big) - h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r, \mathbf{x})\\
&\overset{(d)}{\le} (N - Q)\log\Big(\pi e\Big(1 + \frac{\rho N}{N - Q}\Big)\Big) - h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r, \mathbf{x})\\
&\overset{(e)}{\le} (N - Q)\log\rho + \mathcal{O}(1)
\end{aligned}\tag{17}$$

where in (a) we used the fact that conditioning reduces entropy; (b) follows from the chain rule for differential entropy and the fact that conditioning reduces entropy; (c) follows because Gaussian random variables are differential-entropy-maximizers for fixed variance and because $\mathbf{x}$ and $\mathbf{h}_r$ are independent (recall that the entries of $\mathbf{h}_r$ have unit variance); (d) is a consequence of the power constraint \frefeq:apc and the concavity of the logarithm; and (e) follows because $h(\tilde{\mathbf{y}}_r \mid \bar{\mathbf{y}}_r, \mathbf{x}) \ge h(\tilde{\mathbf{w}}_r) = (N - Q)\log(\pi e)$, where $\tilde{\mathbf{w}}_r \triangleq [\mathsf{w}_{r,Q+1} \cdots \mathsf{w}_{r,N}]^T$.

Combining \frefeq:UBRsimo1, \frefeq:UB3, and \frefeq:UB4 yields

$$I(\mathbf{x};\mathbf{y}) \le R(N - Q)\log\rho + \mathcal{O}(\log\log\rho). \tag{18}$$

Since $\log\log\rho / \log\rho \to 0$ as $\rho \to \infty$, this completes the proof of the bound $\chi \le R(1 - Q/N)$.

It follows from \frefeq:UB1 that for $Q = N$, the capacity pre-log is zero and $C(\rho)$ can grow no faster than double-logarithmically in $\rho$.

Recall that $1 - Q/N$ is the capacity pre-log of the correlated block-fading SISO channel [10]. As the proof of the upper bound reveals, the capacity pre-log of the SIMO channel (3) cannot be larger than $R$ times the capacity pre-log of the corresponding SISO channel (i.e., the capacity pre-log of one of the SISO component channels). The upper bound may seem crude, but, surprisingly, it matches the lower bound for $R(N - Q) \le N - 1$.

5.2 Second part: $\chi \le 1 - 1/N$

The proof of $\chi \le 1 - 1/N$ will be accomplished in two steps. In the first step, we show that the capacity of a SIMO channel with $R$ receive antennas and rank-$Q$ channel covariance matrix is upper-bounded by the capacity of a SIMO channel with $RQ$ receive antennas, the same SNR, and rank-1 covariance matrix. In the second step, we exploit the fact that the channel \frefeq:model3 with rank-1 covariance matrix (under the assumption that the rows of $Z$ have unit norm) is a constant block-fading channel for which the capacity pre-log was shown in [7] to equal $1 - 1/N$. We now implement the proof program just outlined.

Let $z_1, \dots, z_Q$ denote the columns of the matrix $Z$ so that $Z = [z_1 \cdots z_Q]$. Let $\tilde{z}_1, \dots, \tilde{z}_N$ denote the transposed rows of the matrix $Z$ so that $Z = [\tilde{z}_1 \cdots \tilde{z}_N]^T$. We can rewrite the IO relation (14) in the following form that is more convenient for the ensuing analysis:

$$\mathbf{y}_r = \sqrt{\rho}\,\sum_{q=1}^{Q} \mathsf{s}_{r,q}\,\operatorname{diag}(z_q)\,\mathbf{x} + \mathbf{w}_r, \qquad r = 1, \dots, R.$$

Let $\mathbf{W}_1, \dots, \mathbf{W}_R$ be independent random matrices of dimension $N \times Q$, each with i.i.d. $\mathcal{CN}(0,1)$ entries. As, by assumption, the rows of $Z$ have unit norm, we have that

$$\mathbf{w}_r \overset{d}{=} \sum_{q=1}^{Q} \operatorname{diag}(a_q)\,\mathbf{W}_r\,e_q,$$

where $a_q$ denotes the vector of entrywise absolute values of $z_q$ (so that $\sum_{q=1}^{Q} a_{q,n}^2 = \|\tilde{z}_n\|^2 = 1$ for each $n$).

Hence, we can rewrite $\mathbf{y}_r$ as

$$\mathbf{y}_r \overset{d}{=} \sum_{q=1}^{Q} \mathbf{y}_{r,q}, \tag{19}$$

where

$$\mathbf{y}_{r,q} \triangleq \sqrt{\rho}\,\mathsf{s}_{r,q}\,\operatorname{diag}(z_q)\,\mathbf{x} + \operatorname{diag}(a_q)\,\mathbf{W}_r\,e_q. \tag{20}$$

Note now that, for each $q$, the collection $(\mathbf{y}_{1,q}, \dots, \mathbf{y}_{R,q})$ is the output of a SIMO channel with $R$ receive antennas, rank-1 channel covariance matrix $z_q z_q^H$, and SNR $\rho$. Realizing that, by \frefeq:ytoyq and \frefeq:yq, $\mathbf{x} \to \{\mathbf{y}_{r,q}\}_{r,q} \to \mathbf{y}$ forms a Markov chain, we conclude, by the data-processing inequality [20, Sec. 2.8], that

$$I(\mathbf{x};\mathbf{y}) \le I\big(\mathbf{x};\, \{\mathbf{y}_{r,q}\}_{r = 1, \dots, R,\; q = 1, \dots, Q}\big).$$

The claim now follows by noting that the matrix obtained by stacking the vectors $\mathbf{y}_{r,q}$ next to each other can be interpreted as the output of a SIMO channel with $RQ$ receive antennas, rank-1 covariance matrix, independent fading across receive antennas, and SNR $\rho$. The proof is completed by upper-bounding the capacity of this channel by means of the following lemma.

Lemma 2.

The capacity of the SIMO channel (14) with an arbitrary number of receive antennas, $Q = 1$, and power constraint \frefeq:apc can be upper-bounded according to

$$C(\rho) \le \left(1 - \frac{1}{N}\right)\log\rho + \mathcal{O}(\log\log\rho).$$

This result follows from [7, Eq. (27)]. A simpler and more detailed proof can be found in [18, Eq. (7)].

6 Proof of the Lower Bound \frefeq:mainbound

To help the reader navigate through the proof of the lower bound \frefeq:mainbound, we start by explaining the architecture of the proof.

6.1 Architecture of the proof

The proof consists of the following steps, each of which corresponds to a subsection in this section:

  1. Choose an input distribution; we will see that i.i.d. input symbols allow us to establish the capacity pre-log lower bound \frefeq:mainbound.

  2. Decompose the mutual information between the input and the output of the channel according to $I(\mathbf{x};\mathbf{y}) = h(\mathbf{y}) - h(\mathbf{y} \mid \mathbf{x})$.

  3. Using standard information-theoretic bounds, show that $h(\mathbf{y} \mid \mathbf{x})$ is upper-bounded by $RQ\log\rho + \mathcal{O}(1)$.

  4. Split $h(\mathbf{y})$ into three terms: a term that depends on SNR, a differential entropy term that depends on (a sub-vector, indexed by a set $\mathcal{I}$, of) the noiseless channel output $\mathbf{u}$ only, and a differential entropy term that depends on the noise vector only. Conclude that the last of these three terms is a finite constant.

  5. Conclude that the SNR-dependent term obtained in Step 4 scales (in SNR) as $|\mathcal{I}|\log\rho$. Together with the decomposition from Step 2 and the result from Step 3 this gives the desired lower bound \frefeq:mainbound provided that the $\mathbf{u}$-dependent differential entropy obtained in Step 4 can be lower-bounded by a finite constant.

  6. To show that the $\mathbf{u}$-dependent differential entropy obtained in Step 4 can be lower-bounded by a finite constant, apply the change of variables $\mathbf{u}_{\mathcal{I}} = \varphi(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ to rewrite the differential entropy as a sum of the differential entropy of $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ and the expected (w.r.t. $\mathbf{x}$ and $\mathbf{s}$) logarithm of the Jacobian determinant corresponding to the transformation $\varphi$. Conclude that the differential entropy of $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ is a finite constant. It remains to show that the expected logarithm of the Jacobian determinant is lower-bounded by a finite constant as well.

  7. Factor out the $\mathbf{x}$-dependent terms from the expected logarithm of the Jacobian determinant and conclude that these terms are finite constants. It remains to show that the expected logarithm of the $\mathbf{s}$-dependent factor in the Jacobian determinant is lower-bounded by a finite constant as well. This poses the greatest technical difficulties in the proof of the lower bound \frefeq:mainbound and is addressed in the remaining steps.

  8. Based on a deep result from algebraic geometry, known as Hironaka’s Theorem on the Resolution of Singularities, conclude that the expected logarithm of the $\mathbf{s}$-dependent factor in the Jacobian determinant is lower-bounded by a finite constant, provided that this factor is nonzero for at least one element in its domain.

  9. Prove by explicit construction that there exists at least one value of $\mathbf{s}$ for which the $\mathbf{s}$-dependent factor in the Jacobian determinant is nonzero.

We next implement the proof program outlined above.

6.2 Step 1: Choice of input distribution

First note that for $Q = N$ the lower bound in \frefeq:mainbound reduces to $\chi \ge 0$ and is hence trivially satisfied. In the remainder of the paper we shall therefore assume that $Q < N$.

We shall furthermore work under the assumption

$$R \le \left\lceil \frac{N - 1}{N - Q} \right\rceil \tag{21}$$

which trivially leads to a capacity pre-log lower bound for arbitrary $R$, as capacity is a nondecreasing function of the number of receive antennas (one can always switch off receive antennas).

A capacity lower bound is trivially obtained by evaluating the mutual information in (4) for an appropriate input distribution. Specifically, we take the entries of $\mathbf{x}$ i.i.d. $\mathcal{CN}(0,1)$. This implies that $\mathbf{x}$ satisfies the power constraint \frefeq:apc and, hence [19, Lem. 6.7],

$$C(\rho) \ge I(\mathbf{x};\mathbf{y}). \tag{22}$$

We point out that every input vector with i.i.d., zero mean, unit variance entries $\mathsf{x}_n$ that satisfy $\mathbb{E}[\log|\mathsf{x}_n|] > -\infty$ would allow us to prove \frefeq:mainbound. The choice $\mathsf{x}_n \sim \mathcal{CN}(0,1)$ is made for concreteness and convenience.

6.3 Step 2: Mutual information decomposition

Decompose

$$I(\mathbf{x};\mathbf{y}) = h(\mathbf{y}) - h(\mathbf{y} \mid \mathbf{x}) \tag{23}$$

and separately bound the two differential entropy terms for the input distribution chosen in Step 1.

6.4 Step 3: Analysis of $h(\mathbf{y} \mid \mathbf{x})$

As $\mathbf{y}$ conditioned on $\mathbf{x}$ is JPG, the conditional differential entropy $h(\mathbf{y} \mid \mathbf{x})$ can be upper-bounded in a straightforward manner as follows:

$$\begin{aligned}
h(\mathbf{y} \mid \mathbf{x}) &= \mathbb{E}\Big[\log\Big((\pi e)^{RN}\det\big(I_{RN} + \rho\,\mathbf{A}\mathbf{A}^H\big)\Big)\Big]\\
&\overset{(b)}{=} RN\log(\pi e) + R\,\mathbb{E}\bigg[\sum_{i=1}^{Q}\log\big(1 + \rho\,\lambda_i(XZZ^HX^H)\big)\bigg]\\
&\overset{(a)}{\le} RN\log(\pi e) + R\sum_{i=1}^{Q}\log\big(1 + \rho\,\mathbb{E}[\lambda_i(XZZ^HX^H)]\big)\\
&\le RN\log(\pi e) + RQ\log(1 + \rho N)\\
&= RQ\log\rho + \mathcal{O}(1)
\end{aligned}\tag{24}$$

Here, (a) follows from Jensen’s inequality, and (b) holds because $XZZ^HX^H$ has rank at most $Q$ and, therefore, $\lambda_i(XZZ^HX^H) = 0$ for all $i > Q$.

6.5 Step 4: Splitting $h(\mathbf{y})$ into three terms

Finding an asymptotically (in SNR) tight lower bound on $h(\mathbf{y})$ is the main technical challenge of the proof of \frefthm:mainLB. The back-of-the-envelope calculation presented in \frefsec:intuition suggests that the problem can be approached by splitting $h(\mathbf{y})$ into a term that depends on the noiseless channel output $\mathbf{u}$ only and a term that depends on noise only. This can be realized as follows.

Consider a set of indices $\mathcal{I} \subseteq \{1, \dots, RN\}$ (we shall later discuss how to choose $\mathcal{I}$) and denote by $\mathbf{y}_{\mathcal{I}}$, $\mathbf{u}_{\mathcal{I}}$, and $\mathbf{w}_{\mathcal{I}}$ the sub-vectors of $\mathbf{y}$, $\mathbf{u}$, and $\mathbf{w}$, respectively, containing the components indexed by $\mathcal{I}$.

We can lower-bound $h(\mathbf{y})$ according to

$$\begin{aligned}
h(\mathbf{y}) &\overset{(a)}{=} h(\mathbf{y}_{\mathcal{I}}) + h(\mathbf{y}_{\mathcal{I}^c} \mid \mathbf{y}_{\mathcal{I}})\\
&\overset{(b)}{\ge} h(\mathbf{y}_{\mathcal{I}} \mid \mathbf{w}_{\mathcal{I}}) + h(\mathbf{y}_{\mathcal{I}^c} \mid \mathbf{y}_{\mathcal{I}}, \mathbf{x}, \mathbf{s})\\
&\overset{(c)}{=} h(\sqrt{\rho}\,\mathbf{u}_{\mathcal{I}} \mid \mathbf{w}_{\mathcal{I}}) + h(\mathbf{w}_{\mathcal{I}^c} \mid \mathbf{y}_{\mathcal{I}}, \mathbf{x}, \mathbf{s})\\
&\overset{(d)}{=} h(\sqrt{\rho}\,\mathbf{u}_{\mathcal{I}}) + h(\mathbf{w}_{\mathcal{I}^c})\\
&\overset{(e)}{=} |\mathcal{I}|\,\log\rho + h(\mathbf{u}_{\mathcal{I}}) + c
\end{aligned}\tag{25}$$

Here, (a) follows by the chain rule for differential entropy; (b) follows from \frefeq:IOstacked, \frefeq:IOnonoise, and because conditioning reduces entropy; (c) follows because differential entropy is invariant under translations; (d) follows because $\mathbf{u}$ and $\mathbf{w}$ are independent and because $\mathbf{w}_{\mathcal{I}^c}$ is independent of $(\mathbf{y}_{\mathcal{I}}, \mathbf{x}, \mathbf{s})$; and in (e) we used the fact that $\mathbf{u}_{\mathcal{I}}$ is a $|\mathcal{I}|$-dimensional vector and $h(\mathbf{w}_{\mathcal{I}^c}) = (RN - |\mathcal{I}|)\log(\pi e)$, where here and in what follows $c$ denotes a constant that is independent of $\rho$ and can take a different value at each appearance.

Through this chain of inequalities, we disposed of the noise $\mathbf{w}$ and isolated the SNR dependence into a separate term. This corresponds to considering the noise-free IO relation (6) in the back-of-the-envelope calculation. Note further that we also rid ourselves of the components of $\mathbf{y}$ indexed by the complement set $\mathcal{I}^c$; this corresponds to eliminating unnecessary equations in the back-of-the-envelope calculation. The specific choice of the set $\mathcal{I}$ is crucial and will be discussed next.

6.6 Step 5: Analysis of the SNR-dependent term in \frefeq:lboundhy

If $h(\mathbf{u}_{\mathcal{I}}) > -\infty$, we can substitute \frefeq:lboundhy and \frefeq:UBcond into \frefeq:midecomp which then yields a capacity lower bound of the form

$$C(\rho) \ge (|\mathcal{I}| - RQ)\log\rho + c. \tag{26}$$

This bound needs to be tightened by choosing the set $\mathcal{I}$ such that $|\mathcal{I}|$ is as large as possible while guaranteeing $h(\mathbf{u}_{\mathcal{I}}) > -\infty$. Comparing the lower bound \frefeq:cap1bound to the upper bound \frefeq:capUB we see that the bounds match if

$$|\mathcal{I}| = RQ + \min\{N - 1,\; R(N - Q)\}. \tag{27}$$
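Indeed, with this choice the pre-log of the lower bound \frefeq:cap1bound works out as the one-line check

$$\frac{|\mathcal{I}| - RQ}{N} = \frac{\min\{N - 1,\; R(N - Q)\}}{N} = \min\left\{1 - \frac{1}{N},\; R\left(1 - \frac{Q}{N}\right)\right\},$$

which coincides with \frefeq:prelogans.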

Condition \frefeq:Icond dictates that for $R(N - Q) \le N - 1$ we must set $\mathcal{I} = \{1, \dots, RN\}$, which yields $\mathbf{u}_{\mathcal{I}} = \mathbf{u}$. When $R(N - Q) > N - 1$ the set $\mathcal{I}$ must be a proper subset of $\{1, \dots, RN\}$. Specifically, we shall choose $\mathcal{I}$ as follows. Set

$$K \triangleq R(N - Q) - (N - 1), \tag{28}$$

let $\mathcal{I}^c$ consist of the indices of the last $K$ components of $\mathbf{y}_R$ (i.e., of the $K$ redundant equations identified in \frefsec:intuition), and define $\mathcal{I} \triangleq \{1, \dots, RN\} \setminus \mathcal{I}^c$.

This choice can be verified to satisfy \frefeq:Icond. Obviously, this is not the only choice for $\mathcal{I}$ that satisfies \frefeq:Icond. The specific set chosen here will be seen to guarantee $h(\mathbf{u}_{\mathcal{I}}) > -\infty$ and at the same time simplify the calculations in \frefsec:analdet.

Substituting \frefeq:Icond into \frefeq:cap1bound, we obtain the desired result \frefeq:mainbound, provided that $h(\mathbf{u}_{\mathcal{I}}) > -\infty$. Establishing that $h(\mathbf{u}_{\mathcal{I}}) > -\infty$ is, as already mentioned, the major technical difficulty in the proof of \frefthm:mainLB and will be addressed next.

6.7 Step 6: Analysis of $h(\mathbf{u}_{\mathcal{I}})$ through change of variables

It is difficult to analyze $h(\mathbf{u}_{\mathcal{I}})$ directly since $\mathbf{u}_{\mathcal{I}}$ depends on the pair of variables $(\mathbf{x}, \mathbf{s})$ in a nonlinear fashion. We have seen, in \frefsec:intuition, that \frefeq:IOnonoise has a unique solution in $(\mathbf{x}, \mathbf{s})$, provided that the appropriate number of pilot symbols is used. This suggests that there must be a one-to-one correspondence between $\mathbf{u}_{\mathcal{I}}$ and the pair $(\mathbf{x}, \mathbf{s})$. The existence of such a one-to-one correspondence allows us to locally linearize the map $(\mathbf{x}, \mathbf{s}) \mapsto \mathbf{u}_{\mathcal{I}}$ and to relate $h(\mathbf{u}_{\mathcal{I}})$ to $h(\mathbf{x}, \mathbf{s})$. This idea is key to bringing $h(\mathbf{u}_{\mathcal{I}})$ into a form that eventually allows us to conclude that $h(\mathbf{u}_{\mathcal{I}}) > -\infty$.

Formally, it is possible to relate the differential entropies of two random vectors of the same dimension that are related by a deterministic one-to-one function (in the sense of [21, p.7]) according to the following lemma.

Lemma 3 (Transformation of differential entropy).

Assume that $\varphi: \mathcal{U} \to \mathbb{C}^M$, $\mathcal{U} \subseteq \mathbb{C}^M$, is a continuous vector-valued function that is one-to-one and differentiable almost everywhere (a.e.) on $\mathcal{U}$. Let $\mathbf{u}$ be a continuous [17, Sec. 2-3, Def. (2)] random vector (i.e., it has a well-defined PDF) taking values in $\mathcal{U}$ and let $\mathbf{v} \triangleq \varphi(\mathbf{u})$. Then

$$h(\mathbf{v}) = h(\mathbf{u}) + \mathbb{E}\big[\log\big|\det J_{\varphi}(\mathbf{u})\big|^2\big]$$

where $J_{\varphi}$ is the Jacobian of the function $\varphi$.

The proof follows from the change-of-variables theorem for integrals [21, Thm. 7.26] and is given in \frefapp:Entrchange for completeness since the version of the theorem for complex-valued functions does not seem to be well documented in the literature.
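As a sanity check of the lemma—and of the factor $|\det J_{\varphi}|^2$ that arises for complex-valued transformations—one can compare both sides in the linear Gaussian case, where everything is available in closed form. A minimal sketch (Python/NumPy; entropies in nats, the base of the logarithm being immaterial for the identity):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 3

def h_gauss(K):
    """Differential entropy (nats) of a CN(0, K) vector: log det(pi e K)."""
    return float(np.log(np.linalg.det(np.pi * np.e * K)).real)

A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))  # phi(u) = A u
K_u = np.eye(M)                        # u ~ CN(0, I_M)
K_v = A @ K_u @ A.conj().T             # v = phi(u) ~ CN(0, A A^H)

lhs = h_gauss(K_v)                                       # h(v)
rhs = h_gauss(K_u) + 2 * np.log(abs(np.linalg.det(A)))   # h(u) + E[log |det J_phi|^2]
print(np.isclose(lhs, rhs))            # True
```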

Note that $\mathbf{u}_{\mathcal{I}}$ is of dimension $|\mathcal{I}|$, with $\mathcal{I}$ given in \frefeq:Icond, whereas the pair $(\mathbf{x}, \mathbf{s})$ is of dimension $N + RQ$. Since $|\mathcal{I}| < N + RQ$ (see \frefeq:Icond), the vectors $\mathbf{u}_{\mathcal{I}}$ and $(\mathbf{x}, \mathbf{s})$ are of different dimensions and \freflem:Entrchange can therefore not be applied directly to relate $h(\mathbf{u}_{\mathcal{I}})$ to $h(\mathbf{x}, \mathbf{s})$. This problem can be resolved by conditioning on a subset $\mathbf{x}_{\mathcal{P}}$ (specified below) of components of $\mathbf{x}$ according to

$$h(\mathbf{u}_{\mathcal{I}}) \ge h(\mathbf{u}_{\mathcal{I}} \mid \mathbf{x}_{\mathcal{P}}). \tag{29}$$

The components $\mathbf{x}_{\mathcal{P}}$ correspond to the pilot symbols in the back-of-the-envelope calculation. The set $\mathcal{P}$ is chosen such that (i) the set of remaining components of $\mathbf{x}$, namely $\mathbf{x}_{\mathcal{D}}$, is of appropriate size ensuring that $\mathbf{u}_{\mathcal{I}}$ and $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ are of the same dimension, and (ii) $\mathbf{u}_{\mathcal{I}}$ and $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ are related by a deterministic bijection so that \freflem:Entrchange can be applied to relate $h(\mathbf{u}_{\mathcal{I}} \mid \mathbf{x}_{\mathcal{P}})$ to $h(\mathbf{x}_{\mathcal{D}}, \mathbf{s} \mid \mathbf{x}_{\mathcal{P}})$. Specifically, set

$$\mathcal{P} \triangleq \{1, \dots, N + RQ - |\mathcal{I}|\}, \tag{30}$$

let $\mathcal{D} \triangleq \{1, \dots, N\} \setminus \mathcal{P}$, which implies $|\mathcal{D}| + RQ = |\mathcal{I}|$. Observe that $\mathbf{u}_{\mathcal{I}}$ (conditioned on $\mathbf{x}_{\mathcal{P}}$) depends only on $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$, and due to our choice of $\mathcal{P}$ (it is actually the choice of $\mathcal{I}$ that is important here), the vectors $\mathbf{u}_{\mathcal{I}}$ and $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ are of the same dimension. Furthermore, these two vectors are related through a deterministic bijection: Consider the vector-valued function

$$\varphi: (\mathbf{x}_{\mathcal{D}}, \mathbf{s}) \mapsto \big[(I_R \otimes XZ)\,\mathbf{s}\big]_{\mathcal{I}}. \tag{31}$$

Here, and whenever we refer to the function $\varphi$ in the following, we use the convention that the parameter vector $\mathbf{x}_{\mathcal{P}}$ and the variable vector $(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$ are stacked into the vector $(\mathbf{x}, \mathbf{s})$ and we set $\mathbf{u}_{\mathcal{I}} = \varphi(\mathbf{x}_{\mathcal{D}}, \mathbf{s})$.

Lemma 4.

If $\mathbf{x}_{\mathcal{P}}$ has nonzero components only, i.e., $\mathsf{x}_n \neq 0$ for all $n \in \mathcal{P}$, then the function $\varphi$ is one-to-one a.e. on its domain.

The proof of \freflem:bijection is given in \frefapp:bijection and is based on the results obtained later in this section. We therefore invite the reader to first study the remainder of this section and to return to \frefapp:bijection afterwards.

Recall that and hence