# The SIMO Pre-Log Can Be Larger Than the SISO Pre-Log

## Abstract

We establish a lower bound on the noncoherent capacity pre-log of a temporally correlated Rayleigh block-fading single-input multiple-output (SIMO) channel. Surprisingly, when the covariance matrix of the channel satisfies a certain technical condition related to the cardinality of its smallest set of linearly dependent rows, this lower bound reveals that the capacity pre-log in the SIMO case is larger than that in the single-input single-output (SISO) case.


## 1 Introduction

It is well known that the coherent-capacity pre-log (i.e., the asymptotic ratio between capacity and the logarithm of SNR, as SNR goes to infinity) of a single-input multiple-output (SIMO) fading channel is equal to $1$ and is, hence, the same as that of a single-input single-output (SISO) fading channel [1]. In the practically more relevant noncoherent setting, where neither transmitter nor receiver have channel-state information, but both are aware of the channel statistics, the effect of multiple antennas on the capacity^1 pre-log is understood only for a specific simple channel model, namely, the constant block-fading model. In this model, the channel is assumed to remain constant over a block of $T$ symbols and to change in an independent fashion from block to block [2]. For this model, the SIMO capacity pre-log is again equal to the SISO capacity pre-log but, differently from the coherent case, is given by $1 - 1/T$ [3, 4].

A more general way of capturing channel variations in time is to assume that the fading process is stationary. In this case, the capacity pre-log is known only in the SISO [5] and the MISO [6, Thm. 4.15] cases. The capacity bounds for the SIMO stationary-fading channel available in the literature [6, Thm. 4.13] do not allow one to determine whether the capacity pre-log in the SIMO case can be larger than that in the SISO case.

In this paper, we focus on a channel model that can be seen as lying in between the general stationary-fading model considered in [5, 6] and the simpler constant block-fading model analyzed in [2, 4]. Specifically, we assume that the fading process is independent across blocks of length $T$ and temporally correlated within blocks, with the rank of the corresponding channel covariance matrix given by^2 $Q < T$. For this channel model, referred to as the correlated block-fading model in the following, the SISO capacity pre-log is equal to $1 - Q/T$ [8].^3 The SIMO and MIMO capacity pre-logs are not known in this case. A conjecture in [8] on the MIMO capacity pre-log implies that the capacity pre-log in the SIMO case would be the same as that in the SISO case. In this paper, we disprove the conjecture in [8] by showing that in the SIMO case a capacity pre-log of $1 - 1/T$ can be obtained when the number of receive antennas is equal to $Q$ and the channel covariance matrix satisfies a certain technical condition detailed in Theorem 1.

#### Notation

Uppercase boldface letters denote matrices, and lowercase boldface letters designate vectors. The all-zero matrix of appropriate size is written as $\mathbf{0}$. The element in the $i$th row and $j$th column of a matrix $\mathbf{A}$ is denoted as $a_{ij}$, and the $i$th component of the vector $\mathbf{a}$ is $a_i$. For a vector $\mathbf{a}$, $\mathrm{diag}(\mathbf{a})$ denotes the diagonal matrix that has the entries of $\mathbf{a}$ on its main diagonal. The superscripts $^T$ and $^H$ stand for transposition and Hermitian transposition, respectively. The expectation operator is denoted as $\mathbb{E}[\cdot]$. For two matrices $\mathbf{A}$ and $\mathbf{B}$, we designate the Kronecker product as $\mathbf{A}\otimes\mathbf{B}$; to simplify notation, we use the convention that the ordinary matrix product always precedes the Kronecker product, i.e., $\mathbf{A}\mathbf{B}\otimes\mathbf{C}\mathbf{D} \triangleq (\mathbf{A}\mathbf{B})\otimes(\mathbf{C}\mathbf{D})$. For two functions $f$ and $g$, the notation $f(x) = O(g(x))$ means that $|f(x)/g(x)|$ is bounded above by a constant. We use $[M:N]$ to designate the set of natural numbers $\{M, M+1, \dots, N\}$. Let $\mathbf{f}(\mathbf{u})$ be a vector-valued function; then $\partial\mathbf{f}/\partial\mathbf{u}$ denotes the Jacobian matrix of the function $\mathbf{f}$, i.e., the matrix that contains the partial derivative $\partial f_i/\partial u_j$ in its $i$th row and $j$th column. We write $|\mathcal{S}|$ to denote the cardinality of the set $\mathcal{S}$. For an $M \times N$ matrix $\mathbf{A}$ and two sets of indices $\mathcal{I} \subseteq [1:M]$ and $\mathcal{J} \subseteq [1:N]$, we use $[\mathbf{A}]_{\mathcal{I},\mathcal{J}}$ to denote the submatrix of $\mathbf{A}$ containing the elements $a_{ij}$, $i \in \mathcal{I}$, $j \in \mathcal{J}$. Similarly, for an $M$-dimensional vector $\mathbf{a}$ and a set $\mathcal{I} \subseteq [1:M]$, we define $[\mathbf{a}]_{\mathcal{I}}$ as the vector containing the components $a_i$, $i \in \mathcal{I}$. For an $M \times N$ matrix $\mathbf{A}$, we set $[\mathbf{A}]_{\mathcal{I},\diamond} \triangleq [\mathbf{A}]_{\mathcal{I},[1:N]}$ and $[\mathbf{A}]_{\diamond,\mathcal{J}} \triangleq [\mathbf{A}]_{[1:M],\mathcal{J}}$. Furthermore,

$$\mathbf{D}(\mathbf{A}) \triangleq \begin{bmatrix} \mathrm{diag}([a_{11}\,\dots\,a_{M1}]^T) \\ \vdots \\ \mathrm{diag}([a_{1N}\,\dots\,a_{MN}]^T) \end{bmatrix}. \qquad (1)$$

The eigenvalues of an $M \times M$ matrix $\mathbf{A}$ are denoted by $\lambda_i(\mathbf{A})$, $i \in [1:M]$. The logarithm to the base 2 is written as $\log(\cdot)$. Finally, $\mathcal{CN}(\boldsymbol{\mu},\boldsymbol{\Sigma})$ stands for the distribution of a jointly proper Gaussian (JPG) random vector with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$.
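For concreteness, the operator $\mathbf{D}(\cdot)$ in (1) can be sketched in NumPy as follows (an illustration only, not part of the formal development):

```python
import numpy as np

def D(A):
    # Stack the diagonal matrices diag([a_{1n} ... a_{Mn}]^T), n = 1..N,
    # as in (1); for an M x N input the result is an (N*M) x M matrix.
    return np.vstack([np.diag(A[:, n]) for n in range(A.shape[1])])

A = np.arange(1, 7).reshape(3, 2)   # M = 3, N = 2
print(D(A).shape)                   # (6, 3)
```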

## 2 System Model

We consider a SIMO channel with $Q$ receive antennas. The fading in each component channel follows the correlated block-fading model described in the previous section, namely, it is independent across blocks of length $T$ and correlated within blocks, with the rank of the corresponding channel covariance matrix given by $Q$. Note that we assume the rank of the channel covariance matrix to be equal to the number of receive antennas; our analysis relies heavily on this assumption. Across component channels, the fading is independent and identically distributed. The input-output (I/O) relation (within any block) for the $m$th component channel can be written as

$$\mathbf{y}_m = \sqrt{\rho}\,\mathrm{diag}(\mathbf{h}_m)\,\mathbf{x} + \mathbf{w}_m, \qquad m \in [1:Q]$$

where the vector $\mathbf{x} = [x_1 \cdots x_T]^T$ contains the $T$-dimensional signal transmitted within the block, $\rho$ denotes the SNR, and the vectors $\mathbf{y}_m$ and $\mathbf{w}_m$ contain the corresponding received signal and additive noise, respectively, at the $m$th antenna. Finally, $\mathbf{h}_m$ contains the channel coefficients between the transmit antenna and the $m$th receive antenna. We assume that $\mathbf{h}_m \sim \mathcal{CN}(\mathbf{0},\mathbf{R})$ and $\mathbf{w}_m \sim \mathcal{CN}(\mathbf{0},\mathbf{I}_T)$ are mutually independent (and independent across $m$) and that the covariance matrix $\mathbf{R}$ (which is the same for all blocks) has rank $Q$. It will turn out convenient to write the channel-coefficient vector in whitened form as $\mathbf{h}_m = \mathbf{P}\mathbf{s}_m$, where $\mathbf{s}_m \sim \mathcal{CN}(\mathbf{0},\mathbf{I}_Q)$ and the $T \times Q$ matrix $\mathbf{P}$ satisfies $\mathbf{P}\mathbf{P}^H = \mathbf{R}$. Finally, we assume that $\{\mathbf{h}_m\}$ and $\{\mathbf{w}_m\}$ change in an independent fashion from block to block.

If we define $\mathbf{y} \triangleq [\mathbf{y}_1^T \cdots \mathbf{y}_Q^T]^T$, $\mathbf{w} \triangleq [\mathbf{w}_1^T \cdots \mathbf{w}_Q^T]^T$, $\mathbf{s} \triangleq [\mathbf{s}_1^T \cdots \mathbf{s}_Q^T]^T$, and $\mathbf{X} \triangleq \mathrm{diag}(\mathbf{x})$, we can write the channel I/O relation in the following, more compact, form

$$\mathbf{y} = \sqrt{\rho}\,(\mathbf{I}_Q \otimes \mathbf{X}\mathbf{P})\,\mathbf{s} + \mathbf{w}. \qquad (2)$$

The capacity of the channel (2) is defined as

$$C(\rho) \triangleq \frac{1}{T}\sup_{f_{\mathbf{x}}(\cdot)} I(\mathbf{x};\mathbf{y}) \qquad (3)$$

where the supremum is taken over all input distributions $f_{\mathbf{x}}(\cdot)$ that satisfy the average-power constraint $\mathbb{E}[\|\mathbf{x}\|^2] \le T$.
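The system model can be sanity-checked numerically. The following NumPy sketch (with arbitrarily chosen dimensions $T = 6$, $Q = 2$) verifies that the compact form (2) agrees with the per-antenna I/O relation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, Q, rho = 6, 2, 100.0          # block length, covariance rank (= # rx antennas), SNR

# Whitened channel model: h_m = P s_m with P P^H = R, rank(R) = Q.
P = (rng.standard_normal((T, Q)) + 1j * rng.standard_normal((T, Q))) / np.sqrt(2)
x = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
s = (rng.standard_normal(Q * Q) + 1j * rng.standard_normal(Q * Q)) / np.sqrt(2)
w = (rng.standard_normal(Q * T) + 1j * rng.standard_normal(Q * T)) / np.sqrt(2)

X = np.diag(x)
y = np.sqrt(rho) * (np.kron(np.eye(Q), X @ P) @ s) + w   # compact form (2)

# Per-antenna form: y_m = sqrt(rho) * diag(h_m) x + w_m with h_m = P s_m
for m in range(Q):
    h_m = P @ s[m * Q:(m + 1) * Q]
    y_m = np.sqrt(rho) * np.diag(h_m) @ x + w[m * T:(m + 1) * T]
    assert np.allclose(y[m * T:(m + 1) * T], y_m)
print("per-antenna and Kronecker forms agree")
```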

## 3 Intuitive Analysis

In this section, we describe a “back-of-the-envelope” method for guessing the capacity pre-log. A formal justification of this procedure is provided in Section 4.

The capacity pre-log characterizes the asymptotic behavior of the fading-channel capacity at high SNR, i.e., in the regime where the additive noise can “effectively” be ignored. In order to guess the capacity pre-log, we therefore consider the problem of identifying the transmit symbols from the noise-free (and rescaled) observation

$$\hat{\mathbf{y}} \triangleq (\mathbf{I}_Q \otimes \mathbf{X}\mathbf{P})\,\mathbf{s}. \qquad (4)$$

Specifically, we shall ask the question: "How many symbols can be identified uniquely from $\hat{\mathbf{y}}$, given that the channel coefficients are unknown but the statistics of the channel, i.e., the matrix $\mathbf{P}$, are known?" The claim we make is that the capacity pre-log is given by the number of these symbols divided by the block length $T$.

We start by noting that the unknown variables in (4) are $\mathbf{s}$ and $\mathbf{x}$, which means that we have a quadratic system of equations. It turns out, however, that the simple change of variables $z_t \triangleq 1/x_t$ (we make the technical assumption $x_t \neq 0$, $t \in [1:T]$, in the remainder of this section) transforms (4) into a system of equations that is linear in $\mathbf{s}$ and $\mathbf{z} = [z_1 \cdots z_T]^T$. Since the transformation is invertible for $x_t \neq 0$, uniqueness of the solution of the linear system of equations in $(\mathbf{s},\mathbf{z})$ is equivalent to uniqueness of the solution of the quadratic system of equations in $(\mathbf{s},\mathbf{x})$. For simplicity of exposition and concreteness, we consider the special case $T = 3$ and $Q = 2$. A direct computation reveals that (4) is equivalent to

$$\begin{bmatrix}
p_{11} & p_{12} & 0 & 0 & \hat{y}_1 & 0 & 0 \\
p_{21} & p_{22} & 0 & 0 & 0 & \hat{y}_2 & 0 \\
p_{31} & p_{32} & 0 & 0 & 0 & 0 & \hat{y}_3 \\
0 & 0 & p_{11} & p_{12} & \hat{y}_4 & 0 & 0 \\
0 & 0 & p_{21} & p_{22} & 0 & \hat{y}_5 & 0 \\
0 & 0 & p_{31} & p_{32} & 0 & 0 & \hat{y}_6
\end{bmatrix}
\begin{bmatrix} s_1 \\ s_2 \\ s_3 \\ s_4 \\ -z_1 \\ -z_2 \\ -z_3 \end{bmatrix} = \mathbf{0}. \qquad (5)$$

The solution of this linear system of equations is not unique, as we have 6 equations in 7 unknowns. The $s_i$ and $z_j$ can, therefore, not be determined uniquely from $\hat{\mathbf{y}}$. However, if we transmit one pilot symbol and two data symbols, the system of equations becomes solvable. Take, for example, $x_1 = 1$ and let the receiver know the value of this (pilot) symbol, so that $z_1 = 1$ is known as well. Then (5) reduces to the following inhomogeneous system of 6 equations in 6 unknowns

$$\underbrace{\begin{bmatrix}
p_{11} & p_{12} & 0 & 0 & 0 & 0 \\
p_{21} & p_{22} & 0 & 0 & \hat{y}_2 & 0 \\
p_{31} & p_{32} & 0 & 0 & 0 & \hat{y}_3 \\
0 & 0 & p_{11} & p_{12} & 0 & 0 \\
0 & 0 & p_{21} & p_{22} & \hat{y}_5 & 0 \\
0 & 0 & p_{31} & p_{32} & 0 & \hat{y}_6
\end{bmatrix}}_{\triangleq\,\mathbf{B}}
\begin{bmatrix} s_1 \\ s_2 \\ s_3 \\ s_4 \\ -z_2 \\ -z_3 \end{bmatrix}
= \begin{bmatrix} \hat{y}_1 \\ 0 \\ 0 \\ \hat{y}_4 \\ 0 \\ 0 \end{bmatrix}. \qquad (6)$$

This system of equations has a unique solution if $\det\mathbf{B} \neq 0$. We prove in Appendix 6 that, under the technical condition on $\mathbf{P}$ specified in Theorem 1 below, we indeed have $\det\mathbf{B} \neq 0$ for almost all^4 $(\mathbf{s},\mathbf{x})$. It, therefore, follows that for almost all $(\mathbf{s},\mathbf{x})$, the system of equations (6) has a unique solution. Consequently, we can recover $\mathbf{s}$ and $\mathbf{z}$ and, hence, $\mathbf{s}$ and $\mathbf{x}$. Summarizing our findings, we expect that the capacity pre-log of the channel (2), for the special case $T = 3$ and $Q = 2$, is equal to $2/3$ (two data symbols per block of three symbols). This is larger than the capacity pre-log of the corresponding SISO channel (i.e., the capacity pre-log of one of the component channels), which is equal to $1 - Q/T = 1/3$ [8].
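The pilot-based recovery argument can be illustrated numerically. The following NumPy sketch builds the system (6) for a randomly drawn $\mathbf{P}$ (with the sign of the $z$-unknowns absorbed into the matrix) and verifies that $\mathbf{s}$ and $\mathbf{x}$ are recovered exactly from the noise-free output and the single pilot $x_1 = 1$; the generic invertibility of $\mathbf{B}$ is what Appendix 6 establishes:

```python
import numpy as np

rng = np.random.default_rng(1)
T, Q = 3, 2
P = rng.standard_normal((T, Q)) + 1j * rng.standard_normal((T, Q))
x = rng.standard_normal(T) + 1j * rng.standard_normal(T)
x[0] = 1.0                                   # pilot symbol known at the receiver
s = rng.standard_normal(Q * Q) + 1j * rng.standard_normal(Q * Q)

yh = np.kron(np.eye(Q), np.diag(x) @ P) @ s  # noise-free observation (4)

# Row (m*T + j) of (4) reads p_j^T s_m = yh[m*T + j] * z_j with z_j = 1/x_j.
# Since z_1 = 1/x_1 = 1 is known, those terms move to the right-hand side, as in (6).
B = np.zeros((Q * T, Q * T), dtype=complex)
rhs = np.zeros(Q * T, dtype=complex)
for m in range(Q):
    for j in range(T):
        r = m * T + j
        B[r, m * Q:(m + 1) * Q] = P[j, :]
        if j == 0:
            rhs[r] = yh[r]                   # yh[r] * z_1 with z_1 = 1
        else:
            B[r, Q * Q + j - 1] = -yh[r]     # column multiplying the unknown z_j

v = np.linalg.solve(B, rhs)
s_rec, z_rec = v[:Q * Q], v[Q * Q:]
x_rec = np.concatenate(([1.0], 1.0 / z_rec))
assert np.allclose(s_rec, s) and np.allclose(x_rec, x)
print("recovered s and x from the noise-free output and one pilot")
```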

In general, we expect that, under some technical conditions on $\mathbf{P}$, the capacity pre-log of the SIMO channel defined in Section 2 is equal to $1 - 1/T$. This is exactly what we intend to show rigorously in the next section.

## 4 A Lower Bound on the Capacity Pre-Log

The main result of this paper is the following theorem.

###### Theorem 1

Assume that there exists a subset of indices $\mathcal{J}' \subseteq [1:T]$ of cardinality $|\mathcal{J}'| = Q+1$ such that the $(Q+1) \times Q$ submatrix $\tilde{\mathbf{P}} \triangleq [\mathbf{P}]_{\mathcal{J}',\diamond}$ of the matrix $\mathbf{P}$ in (2) satisfies the following Property (A): Any set of $Q$ rows of $\tilde{\mathbf{P}}$ is linearly independent. Then, the capacity of the SIMO channel (2) can be lower-bounded as

$$C(\rho) \ge \left(1 - \frac{1}{T}\right)\log(\rho) + O(1), \qquad \rho \to \infty. \qquad (7)$$
###### Remark 1

For the special case $T = Q + 1$, (7) yields a lower bound on the capacity pre-log that is tight. A matching upper bound can be obtained through steps similar to those in the proof of [8, Prop. 4]. Establishing tight upper bounds on the capacity pre-log for general values of $T$, however, seems to be an open problem; different tools than those used in [8] are probably needed.

###### Remark 2

When $Q = 1$, the channel in (2) reduces to a SISO constant block-fading channel, and the lower bound (7) yields the correct capacity pre-log of $1 - 1/T$ [3, 4].

###### Remark 3

Property (A) is not very restrictive and is satisfied by many practically relevant matrices $\mathbf{P}$. For example, removing any set of $T - Q$ columns from a $T \times T$ discrete Fourier transform (DFT) matrix results in a matrix that satisfies Property (A) when $T$ is prime [9]. DFT covariance matrices occur naturally in basis-expansion models for time-varying channels [8].
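Property (A) for DFT-derived matrices can be checked directly for small parameters. The sketch below verifies, for $T = 5$ (prime) and $Q = 2$, that every $Q$ rows of a $(Q+1)$-row submatrix of a $T \times T$ DFT matrix with $T - Q$ columns removed are linearly independent:

```python
import numpy as np
from itertools import combinations

T, Q = 5, 2                      # T prime
F = np.exp(-2j * np.pi * np.outer(np.arange(T), np.arange(T)) / T)
P = F[:, :Q]                     # keep Q columns of the T x T DFT matrix
Ptil = P[:Q + 1, :]              # a set of Q+1 rows; here the first Q+1

# Property (A): every Q of the Q+1 rows of Ptil are linearly independent.
for rows in combinations(range(Q + 1), Q):
    sub = Ptil[list(rows), :]
    assert np.linalg.matrix_rank(sub) == Q
print("Property (A) holds for this DFT submatrix")
```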

{proof}

We choose an input distribution for which the entries $x_t$, $t \in [1:T]$, of $\mathbf{x}$ are independent and identically distributed (i.i.d.), have zero mean and unit variance, and satisfy $h(x_t) > -\infty$ and $\mathbb{E}[\log|x_t|] > -\infty$. For example, we can take $x_t \sim \mathcal{CN}(0,1)$. We then lower-bound $I(\mathbf{x};\mathbf{y})$ in (3), evaluated for this input distribution. More precisely, we use $I(\mathbf{x};\mathbf{y}) = h(\mathbf{y}) - h(\mathbf{y}\,|\,\mathbf{x})$ and bound the two differential entropy terms separately. Note that the class of input distributions for which (7) holds is large. This does not come as a surprise, as we are interested in the capacity pre-log only.

As $\mathbf{y}$ conditional on $\mathbf{x}$ is JPG, the conditional differential entropy $h(\mathbf{y}\,|\,\mathbf{x})$ can be upper-bounded in a straightforward fashion as follows:

$$h(\mathbf{y}\,|\,\mathbf{x}) = QT\log(\pi e) + \mathbb{E}_{\mathbf{x}}\!\left[\log\det\!\left(\mathbf{I}_{QT} + \rho\,(\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P})\,\mathbb{E}_{\mathbf{s}}[\mathbf{s}\mathbf{s}^H]\,(\mathbf{I}_Q\otimes\mathbf{P}^H\mathbf{X}^H)\right)\right] \le Q^2\log(\rho) + c \qquad (8)$$

where the inequality holds because $\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P}$ has rank $Q^2$.

Finding a tight lower bound on is the main difficulty of the proof. In fact, the differential entropy of is often intractable even for simple input distributions. The main technical contribution of this paper is presented in Section 4.1 below, where we show that if Property (A) is satisfied and if the input distribution satisfies the conditions specified at the beginning of this proof, we have

$$h(\mathbf{y}) \ge \left(T - 1 + Q^2\right)\log(\rho) + c \qquad (9)$$

where $c$, here and in the remainder of the paper, stands for a constant^5 that is independent of $\rho$. Combining (8) and (9) then yields the desired result. Note that in order to establish (8) it is sufficient to use that $\mathbf{P}$ has rank $Q$, whereas the more restrictive Property (A) is crucial to establish (9).

### 4.1 A Lower Bound on h(y)

The main idea of our approach is to relate $h(\mathbf{y})$ to $h(\mathbf{s},[\mathbf{x}]_{[2:T]})$, which is generally much simpler to compute than $h(\mathbf{y})$. It is possible to relate the entropies of two random vectors in a simple way if the vectors are of the same dimension and are connected by a deterministic one-to-one (in the sense of [10, p. 7]) function. This is not the case for $(\mathbf{s},\mathbf{x})$ and $\mathbf{y}$. It is, however, possible to show that if $x_1$ is a fixed parameter, then there is a deterministic one-to-one function between $(\mathbf{s},[\mathbf{x}]_{[2:T]})$ and a specific subset $\hat{\mathbf{y}}_{\mathcal{J}}$ of components of the noiseless version $\hat{\mathbf{y}}$ of the output vector. This allows us to relate $h(\mathbf{s},[\mathbf{x}]_{[2:T]}\,|\,x_1)$ to $h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1)$, which will turn out to be sufficient for our purposes, as $h(\mathbf{y})$ can be linked to $h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1)$ according to (12) below. We now describe the details of the proof program outlined above.

###### Lemma 2

Assume that the matrix $\mathbf{P}$ satisfies the conditions of Theorem 1 and take the submatrix $\tilde{\mathbf{P}}$ defined in Theorem 1 to consist of the first $Q+1$ rows of $\mathbf{P}$, for simplicity.^6 Let

$$\mathcal{J} \triangleq [1:T] \cup [T+1:T+Q+1] \cup [2T+1:2T+Q+1] \cup \dots \cup [(Q-1)T+1:(Q-1)T+Q+1] \qquad (10)$$

where $|\mathcal{J}| = T + (Q-1)(Q+1) = T - 1 + Q^2$, and consider the vector-valued function

$$\hat{\mathbf{y}}_{\mathcal{J}}(\mathbf{s},[\mathbf{x}]_{[2:T]}) = \left[(\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P})\,\mathbf{s}\right]_{\mathcal{J}} \qquad (11)$$

parametrized by $x_1$. To simplify the notation, we will not indicate this parametrization explicitly. The function $\hat{\mathbf{y}}_{\mathcal{J}}(\cdot)$ is one-to-one almost everywhere (a.e.) on its domain.

{proof}

See Appendix 6.

The following comments on Lemma 2 are in order. For $T = 3$ and $Q = 2$, as in the simple example in Section 3, $\mathcal{J} = [1:6]$, so that $\hat{\mathbf{y}}_{\mathcal{J}} = \hat{\mathbf{y}}$. Therefore, the one-to-one correspondence established in this lemma simply means that (4) has a unique solution for fixed $x_1$. For the proof of the lemma, it is crucial that $x_1$ is fixed. In fact, one can check that if none of the components of $\mathbf{x}$ is fixed, the resulting equivalent of the function $\hat{\mathbf{y}}_{\mathcal{J}}(\cdot)$ cannot be one-to-one, no matter how the set $\mathcal{J}$ is chosen. Fixing $x_1$ in order to make the function $\hat{\mathbf{y}}_{\mathcal{J}}(\cdot)$ one-to-one corresponds to transmitting a pilot symbol, as done in the simple example in Section 3 by setting $x_1 = 1$. The cardinality of the set $\mathcal{J}$, which determines the lower bound on the capacity pre-log, as we shall see below, is dictated by the requirement that $\hat{\mathbf{y}}_{\mathcal{J}}$ and $(\mathbf{s},[\mathbf{x}]_{[2:T]})$ be of the same dimension, which implies that $\mathcal{J}$ must contain $T - 1 + Q^2$ elements. The specific choice of $\mathcal{J}$ in (10) simplifies the proof of the lemma (see Appendix 7).

Lemma 2 can be used to relate the conditional differential entropy $h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1)$ to $h(\mathbf{s},[\mathbf{x}]_{[2:T]}\,|\,x_1)$. Before doing so, we establish a simple lower bound on $h(\mathbf{y})$ that is explicit in $h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1)$. Let $\mathcal{N} \triangleq [1:QT]\setminus\mathcal{J}$ be the complement of $\mathcal{J}$ in $[1:QT]$. Then

$$\begin{aligned}
h(\mathbf{y}) &= h([\mathbf{y}]_{\mathcal{J}}) + h([\mathbf{y}]_{\mathcal{N}} \mid [\mathbf{y}]_{\mathcal{J}}) \\
&\ge h(\sqrt{\rho}\,\hat{\mathbf{y}}_{\mathcal{J}} + [\mathbf{w}]_{\mathcal{J}}) + h([\mathbf{y}]_{\mathcal{N}} \mid \mathbf{s},\mathbf{x},[\mathbf{y}]_{\mathcal{J}}) \\
&\ge h(\sqrt{\rho}\,\hat{\mathbf{y}}_{\mathcal{J}} + [\mathbf{w}]_{\mathcal{J}} \mid [\mathbf{w}]_{\mathcal{J}},x_1) + h([\mathbf{w}]_{\mathcal{N}}) \\
&\ge |\mathcal{J}|\log(\rho) + h(\hat{\mathbf{y}}_{\mathcal{J}} \mid x_1) + c.
\end{aligned} \qquad (12)$$

Through this chain of inequalities, we got rid of the noise $\mathbf{w}$. This corresponds to considering the noise-free I/O relation (4), as in the intuitive analysis given in Section 3. Inserting $|\mathcal{J}| = T - 1 + Q^2$ into (12), we obtain the desired result (9), provided that $h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1)$ is finite. This will be proved by means of the following lemma.

###### Lemma 3 (Transformation of differential entropy)

Assume that $\mathbf{g}(\mathbf{u})$ is a continuous vector-valued function that is one-to-one a.e. on a set $\mathcal{U}$. Let $\mathbf{u}$ be a random vector taking values in $\mathcal{U}$ and set $\mathbf{v} \triangleq \mathbf{g}(\mathbf{u})$. Then

$$h(\mathbf{v}) = h(\mathbf{u}) + \mathbb{E}_{\mathbf{u}}\!\left[\log\left|\det\left(\partial\mathbf{g}/\partial\mathbf{u}\right)\right|\right].$$
{proof}

The proof follows from the change-of-variable theorem for integrals [10, Thm. 7.26].
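Lemma 3 can be sanity-checked in the simple case of a real-valued linear map $\mathbf{g}(\mathbf{u}) = \mathbf{A}\mathbf{u}$ applied to a Gaussian vector, where both differential entropies are available in closed form (entropies in bits; an illustration only, not the general proof):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))   # invertible a.s., so g(u) = A u is one-to-one
Sigma = np.eye(n)

# h(u) and h(v) for u ~ N(0, I), v = A u ~ N(0, A A^T), in bits
h_u = 0.5 * np.log2(np.linalg.det(2 * np.pi * np.e * Sigma))
h_v = 0.5 * np.log2(np.linalg.det(2 * np.pi * np.e * A @ Sigma @ A.T))

# Lemma 3: h(v) = h(u) + E[log |det dg/du|]; here the Jacobian is the constant A
assert np.isclose(h_v, h_u + np.log2(abs(np.linalg.det(A))))
print("h(v) = h(u) + log|det A| verified")
```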

Let $f_{x_1}(\cdot)$ denote the density of $x_1$. Then

$$h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1) = \int f_{x_1}(x)\,h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1 = x)\,dx = h(\mathbf{s},[\mathbf{x}]_{[2:T]}\,|\,x_1) + \mathbb{E}_{\mathbf{s},\mathbf{x}}\!\left[\log\left|\det\frac{\partial\hat{\mathbf{y}}_{\mathcal{J}}}{\partial(\mathbf{s},[\mathbf{x}]_{[2:T]})}\right|\right] \qquad (13)$$

where in the second equality we applied Lemma 3 to $h(\hat{\mathbf{y}}_{\mathcal{J}}\,|\,x_1 = x)$, using that the function in (11) is continuous and one-to-one a.e., as shown in Lemma 2. The first term on the RHS of (13) satisfies

$$h(\mathbf{s},[\mathbf{x}]_{[2:T]}\,|\,x_1) = h(\mathbf{s}) + h([\mathbf{x}]_{[2:T]}) > -\infty$$

where the inequality follows because the $x_t$ are i.i.d. and have finite differential entropy. It therefore remains to show that the second term on the RHS of (13) is finite as well. As the RHS of (11) is linear in $\mathbf{s}$, we have that

$$\partial\hat{\mathbf{y}}_{\mathcal{J}}/\partial\mathbf{s} = [\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P}]_{\mathcal{J},\diamond}.$$

Furthermore, using [11, Eq. (5), Sec. 7.2], the RHS of (11) can be rewritten as $[(\mathbf{S}\otimes\mathbf{I}_T)\,\mathbf{D}(\mathbf{P})\,\mathbf{x}]_{\mathcal{J}}$, where the operator $\mathbf{D}(\cdot)$ was defined in (1) and

$$\mathbf{S}^T \triangleq [\mathbf{s}_1 \cdots \mathbf{s}_Q]. \qquad (14)$$

Hence, we have that

$$\partial\hat{\mathbf{y}}_{\mathcal{J}}/\partial[\mathbf{x}]_{[2:T]} = [(\mathbf{S}\otimes\mathbf{I}_T)\,\mathbf{D}(\mathbf{P})]_{\mathcal{J},[2:T]}.$$

To summarize, the Jacobian matrix in (13) is given by

$$\frac{\partial\hat{\mathbf{y}}_{\mathcal{J}}}{\partial(\mathbf{s},[\mathbf{x}]_{[2:T]})} = \Big[\,[\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P}]_{\mathcal{J},\diamond}\;\;\;[(\mathbf{S}\otimes\mathbf{I}_T)\,\mathbf{D}(\mathbf{P})]_{\mathcal{J},[2:T]}\,\Big]. \qquad (15)$$

As shown in Appendix 7, the determinant of this Jacobian matrix can be factorized as follows

$$\left|\det\Big[\,[\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P}]_{\mathcal{J},\diamond}\;\;[(\mathbf{S}\otimes\mathbf{I}_T)\,\mathbf{D}(\mathbf{P})]_{\mathcal{J},[2:T]}\,\Big]\right| = \prod_{j\in[Q+2:T]}\left|\sum_{q=1}^{Q} s_{1q}\,p_{jq}\right|\,\left|\det\mathbf{M}_1(\mathbf{X})\right|\left|\det\mathbf{M}_2(\mathbf{S})\right|\left|\det\mathbf{M}_3(\mathbf{P})\right|\left|\det\mathbf{M}_4(\mathbf{S})\right|\left|\det\mathbf{M}_5(\mathbf{X})\right| \qquad (16)$$

where

$$\begin{aligned}
\mathbf{M}_1(\mathbf{X}) &\triangleq \mathbf{I}_Q\otimes[\mathbf{X}]_{[1:Q+1],[1:Q+1]} \\
\mathbf{M}_2(\mathbf{S}) &\triangleq \mathbf{S}\otimes\mathbf{I}_{Q+1} \\
\mathbf{M}_3(\mathbf{P}) &\triangleq \Big[\,(\mathbf{I}_Q\otimes\tilde{\mathbf{P}})\;\;[\mathbf{D}(\tilde{\mathbf{P}})]_{\diamond,[2:Q+1]}\,\Big] \\
\mathbf{M}_4(\mathbf{S}) &\triangleq \begin{bmatrix} \mathbf{S}^{-1}\otimes\mathbf{I}_Q & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_Q \end{bmatrix} \\
\mathbf{M}_5(\mathbf{X}) &\triangleq \begin{bmatrix} \mathbf{I}_{Q^2} & \mathbf{0} \\ \mathbf{0} & [\mathbf{X}]^{-1}_{[2:Q+1],[2:Q+1]} \end{bmatrix}.
\end{aligned}$$

Hence, we can rewrite the second term on the RHS of (13) as

$$\mathbb{E}_{\mathbf{s},\mathbf{x}}\!\left[\log\left|\det\frac{\partial\hat{\mathbf{y}}_{\mathcal{J}}}{\partial(\mathbf{s},[\mathbf{x}]_{[2:T]})}\right|\right] = \sum_{j\in[Q+2:T]}\mathbb{E}_{\mathbf{s}}\!\left[\log\left|\sum_{q=1}^{Q} s_{1q}\,p_{jq}\right|\right] + \mathbb{E}_{\mathbf{x}}\!\left[\log\left|\det\mathbf{M}_1(\mathbf{X})\right|\right] + \mathbb{E}_{\mathbf{s}}\!\left[\log\left|\det\mathbf{M}_2(\mathbf{S})\right|\right] + \log\left|\det\mathbf{M}_3(\mathbf{P})\right| + \mathbb{E}_{\mathbf{s}}\!\left[\log\left|\det\mathbf{M}_4(\mathbf{S})\right|\right] + \mathbb{E}_{\mathbf{x}}\!\left[\log\left|\det\mathbf{M}_5(\mathbf{X})\right|\right]. \qquad (17)$$

The first and the third term on the RHS of (17) are finite because $\mathbf{s}$ has i.i.d. Gaussian components. The fifth term is finite for the same reason, because $\det\mathbf{M}_4(\mathbf{S}) = \det(\mathbf{S})^{-Q}$. The second and the sixth term are finite because $\mathbb{E}[\log|x_1|] > -\infty$, by assumption. Finally, we show in Appendix 8 that the matrix $\mathbf{M}_3(\mathbf{P})$ has full rank if Property (A) is satisfied. This then implies that the fourth term on the RHS of (17) is also finite.

## 5 Conclusion and Further Work

In this paper, we analyzed the noncoherent-capacity pre-log of a temporally correlated block-fading channel. We showed that, surprisingly, the capacity pre-log in the SIMO case can be larger than that in the SISO case. This result was established for the special case of the number of receive antennas being equal to the rank of the channel covariance matrix. Interesting open issues include extending the lower bound in Theorem 1 to an arbitrary number of receive antennas and finding a tight upper bound on the capacity pre-log.


## 6

#### Proof of Lemma 2

We need to show that the function $\hat{\mathbf{y}}_{\mathcal{J}}(\cdot)$ is one-to-one a.e. Hence, we can exclude sets of measure zero from its domain. In particular, we shall consider the restriction of the function to the set of pairs $(\mathbf{s},[\mathbf{x}]_{[2:T]})$ that satisfy (i) $x_t \neq 0$ for all $t \in [2:T]$; (ii) the matrix $\mathbf{S}$ defined in (14) is invertible; (iii) the sum $\sum_{q=1}^{Q} s_{1q}\,p_{jq}$ is nonzero for all $j \in [Q+2:T]$.

To show that this restriction of the function [which, with slight abuse of notation, we still call $\hat{\mathbf{y}}_{\mathcal{J}}(\cdot)$] is one-to-one, we take an element $\tilde{\mathbf{y}}$ from its range and prove that the equation

$$\hat{\mathbf{y}}_{\mathcal{J}}(\mathbf{s}',[\mathbf{x}']_{[2:T]}) = \tilde{\mathbf{y}} \qquad (18)$$

has a unique solution in the set of pairs satisfying the constraints (i)–(iii). The element $\tilde{\mathbf{y}}$ can be represented as $\tilde{\mathbf{y}} = [(\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P})\,\mathbf{s}]_{\mathcal{J}}$, with $\mathbf{X} = \mathrm{diag}([x_1\,x_2 \cdots x_T]^T)$, where $(\mathbf{s},[\mathbf{x}]_{[2:T]})$ satisfies the constraints (i)–(iii). Hence, (18) can be rewritten in the following way

$$[(\mathbf{I}_Q\otimes\mathbf{X}'\mathbf{P})\,\mathbf{s}']_{\mathcal{J}} = [(\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P})\,\mathbf{s}]_{\mathcal{J}} \qquad (19)$$

with $\mathbf{X}' \triangleq \mathrm{diag}([x_1\,x'_2 \cdots x'_T]^T)$ (recall that the parameter $x_1$ is fixed, so $x'_1 = x_1$). To prove that (19) has a unique solution, we follow the approach described in Section 3 and convert (19) into a linear system of equations through a change of variables. In particular, thanks to constraint (i), we can divide the equation in row $(m-1)T + j$ of (19) by $x'_j$ and perform the substitution $z'_j \triangleq 1/x'_j$, $j \in [2:T]$. Then, we define $\mathbf{z}' \triangleq [z'_2 \cdots z'_T]^T$ and manipulate the equations such that all the unknowns are on one side and all the terms depending on the known quantity $x'_1 = x_1$ are on the other side. These steps yield the following inhomogeneous linear system of equations

$$\mathbf{G}\begin{bmatrix}\mathbf{s}' \\ -\mathbf{z}'\end{bmatrix} = \frac{1}{x'_1}\,\mathbf{u} \qquad (20)$$

where

$$\mathbf{G} \triangleq \Big[\,[\mathbf{I}_Q\otimes\mathbf{P}]_{\mathcal{J},\diamond}\;\;[\mathbf{D}(\tilde{\mathbf{Y}})]_{\mathcal{J},[2:T]}\,\Big], \qquad \mathbf{u} \triangleq \Big[\,\tilde{y}_1\;\underbrace{0\cdots0}_{T-1\text{ times}}\;\tilde{y}_{T+1}\;\underbrace{0\cdots0}_{Q\text{ times}}\;\cdots\;\tilde{y}_{(Q-1)T+1}\;\underbrace{0\cdots0}_{Q\text{ times}}\,\Big]^T = x'_1\Big[\,\mathbf{p}_1^T\mathbf{s}_1\;\underbrace{0\cdots0}_{T-1\text{ times}}\;\mathbf{p}_1^T\mathbf{s}_2\;\underbrace{0\cdots0}_{Q\text{ times}}\;\cdots\;\mathbf{p}_1^T\mathbf{s}_Q\;\underbrace{0\cdots0}_{Q\text{ times}}\,\Big]^T \qquad (21)$$

with $\tilde{\mathbf{Y}}$ denoting the $T \times Q$ matrix whose entry $[\tilde{\mathbf{Y}}]_{jm}$, $(m-1)T + j \in \mathcal{J}$, equals $\tilde{y}_{(m-1)T+j}$,

and $\mathbf{p}_1^T$ denotes the first row of $\mathbf{P}$. The solution of (20) is unique if and only if $\det\mathbf{G} \neq 0$. To establish this, it is useful to note that the matrix $\mathbf{G}$ has a structure similar to that of the Jacobian matrix on the RHS of (15) [the only difference is that the matrix $\mathbf{X}$ in (15) is absorbed into the entries $\tilde{y}_{(m-1)T+j}$ in $\mathbf{G}$]. Therefore, we can factorize $\det\mathbf{G}$ in exactly the same way as the determinant of the RHS of (15). Finally, we invoke the constraints (i), (ii), and (iii), together with Property (A), to conclude that each factor in the resulting factorization is nonzero. This completes the proof.

We point out that, for $T = 3$ and $Q = 2$, the matrix $\mathbf{B}$ defined in (6) coincides with $\mathbf{G}$ in (21), and therefore $\det\mathbf{B} \neq 0$ a.e., as claimed in Section 3.

## 7

#### Proof of (16)

As a consequence of the choice of $\mathcal{J}$ in (10), each of the last $T - Q - 1$ columns of the matrix on the RHS of (15) has exactly one nonzero element. This allows us to use the Laplace formula to expand the determinant along these columns iteratively to get

$$\left|\det\Big[\,[\mathbf{I}_Q\otimes\mathbf{X}\mathbf{P}]_{\mathcal{J},\diamond}\;\;[(\mathbf{S}\otimes\mathbf{I}_T)\,\mathbf{D}(\mathbf{P})]_{\mathcal{J},[2:T]}\,\Big]\right| = \prod_{j\in[Q+2:T]}\left|\sum_{q=1}^{Q} s_{1q}\,p_{jq}\right|\,\left|\det\mathbf{E}\right| \qquad (22)$$

where

$$\mathbf{E} \triangleq \Big[\,\mathbf{I}_Q\otimes\tilde{\mathbf{X}}\tilde{\mathbf{P}}\;\;(\mathbf{S}\otimes\mathbf{I}_{Q+1})\tilde{\mathbf{D}}\,\Big], \qquad \tilde{\mathbf{D}} \triangleq [\mathbf{D}(\tilde{\mathbf{P}})]_{\diamond,[2:Q+1]}, \qquad \tilde{\mathbf{X}} \triangleq [\mathbf{X}]_{[1:Q+1],[1:Q+1]}.$$

Next, using simple properties of the Kronecker product and exploiting the block-diagonal structure of $\mathbf{I}_Q\otimes\tilde{\mathbf{X}}$, we factorize $\mathbf{E}$ into a product of simple terms:

$$\mathbf{E} = (\mathbf{I}_Q\otimes\tilde{\mathbf{X}})(\mathbf{S}\otimes\mathbf{I}_{Q+1})\Big[\,(\mathbf{I}_Q\otimes\tilde{\mathbf{P}})\;\;\tilde{\mathbf{D}}\,\Big]\begin{bmatrix}\mathbf{S}^{-1}\otimes\mathbf{I}_Q & \mathbf{0} \\ \mathbf{0} & \mathbf{I}_Q\end{bmatrix}\begin{bmatrix}\mathbf{I}_{Q^2} & \mathbf{0} \\ \mathbf{0} & [\tilde{\mathbf{X}}]^{-1}_{[2:Q+1],[2:Q+1]}\end{bmatrix}. \qquad (23)$$

The proof is completed by inserting (23) into (22) and using the multiplicativity of the determinant.
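The factorization (23) can be verified numerically for generic matrices. The sketch below checks it for $Q = 2$, with $\tilde{\mathbf{X}}$, $\tilde{\mathbf{P}}$, and $\mathbf{S}$ drawn at random (an illustration of the identity, not a proof):

```python
import numpy as np

rng = np.random.default_rng(4)
Q = 2
cplx = lambda *sh: rng.standard_normal(sh) + 1j * rng.standard_normal(sh)
Xt = np.diag(cplx(Q + 1))          # Xtilde = [X]_{[1:Q+1],[1:Q+1]}, diagonal
Pt = cplx(Q + 1, Q)                # Ptilde, (Q+1) x Q
S = cplx(Q, Q)                     # matrix S from (14); generic => invertible

def D(A):
    # stack diag(column n of A), n = 1..N, as in (1)
    return np.vstack([np.diag(A[:, n]) for n in range(A.shape[1])])

Dt = D(Pt)[:, 1:]                  # [D(Ptilde)]_{.,[2:Q+1]}
E = np.hstack([np.kron(np.eye(Q), Xt @ Pt), np.kron(S, np.eye(Q + 1)) @ Dt])

M4 = np.block([[np.kron(np.linalg.inv(S), np.eye(Q)), np.zeros((Q * Q, Q))],
               [np.zeros((Q, Q * Q)), np.eye(Q)]])
M5 = np.block([[np.eye(Q * Q), np.zeros((Q * Q, Q))],
               [np.zeros((Q, Q * Q)), np.linalg.inv(Xt[1:, 1:])]])
rhs = (np.kron(np.eye(Q), Xt) @ np.kron(S, np.eye(Q + 1))
       @ np.hstack([np.kron(np.eye(Q), Pt), Dt]) @ M4 @ M5)
assert np.allclose(E, rhs)
print("factorization (23) verified numerically")
```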

## 8

###### Lemma 4

Let $\mathbf{A}$ be an $(N+1) \times N$ matrix. If any set of $N$ rows of $\mathbf{A}$ is linearly independent, then the matrix $\mathbf{B}$ defined as $\mathbf{B} \triangleq \big[\,(\mathbf{I}_N\otimes\mathbf{A})\;\;[\mathbf{D}(\mathbf{A})]_{\diamond,[2:N+1]}\,\big]$ has full rank.

{proof}

The proof is by contradiction. Assume that $\mathbf{B}$ does not have full rank. Then there exists an $N(N+1)$-dimensional nonzero vector $\mathbf{u} \triangleq [\mathbf{v}_1^T \cdots \mathbf{v}_N^T]^T$, where $\mathbf{v}_n \in \mathbb{C}^{N+1}$, $n \in [1:N]$, such that $\mathbf{u}^T\mathbf{B} = \mathbf{0}$. Because $\mathbf{u}^T\mathbf{B} = \mathbf{0}$, we have in particular that (i) $\mathbf{u}^T(\mathbf{I}_N\otimes\mathbf{A}) = \mathbf{0}$ and (ii) $\mathbf{u}^T[\mathbf{D}(\mathbf{A})]_{\diamond,[2:N+1]} = \mathbf{0}$. We next analyze these two equalities separately. Equality (i) can be restated as $\mathbf{v}_n^T\mathbf{A} = \mathbf{0}$ for all $n \in [1:N]$, which implies that all vectors $\mathbf{v}_n$ lie in the kernel of the matrix $\mathbf{A}^T$. Because $\mathbf{A}^T$ has rank $N$, its kernel must be of dimension $1$. Hence, all vectors $\mathbf{v}_n$ must be collinear, i.e., there exists a vector $\mathbf{v}$ and a set of constants $c_n$, $n \in [1:N]$, such that $\mathbf{v}_n = c_n\mathbf{v}$ for all $n$. The vector $\mathbf{v}$ and at least one of the constants $c_n$ must be nonzero because $\mathbf{u}$ is nonzero. Furthermore, because $\mathbf{v}^T\mathbf{A} = \mathbf{0}$, and because any set of $N$ rows of $\mathbf{A}$ is linearly independent by assumption, all components of $\mathbf{v}$ must be nonzero.

We now use this property of $\mathbf{v}$ to analyze equality (ii), which can be restated as

$$\mathbf{u}^T[\mathbf{D}(\mathbf{A})]_{\diamond,[2:N+1]} = [\,c_1\mathbf{v}^T \cdots c_N\mathbf{v}^T\,][\mathbf{D}(\mathbf{A})]_{\diamond,[2:N+1]} = \mathbf{0}$$

or, after straightforward manipulations, as

$$[\mathrm{diag}(\mathbf{v})]_{[2:N+1],[2:N+1]}\,[\mathbf{A}]_{[2:N+1],\diamond}\,[c_1 \cdots c_N]^T = \mathbf{0}.$$

Because all the components of $\mathbf{v}$ are nonzero, this last equality implies that $[\mathbf{A}]_{[2:N+1],\diamond}\,[c_1 \cdots c_N]^T = \mathbf{0}$. However, this contradicts the assumption that any set of $N$ rows of $\mathbf{A}$ is linearly independent (recall that at least one of the constants $c_n$ is nonzero). Hence, $\mathbf{B}$ must have full rank.
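Lemma 4 can also be checked numerically: for a generic $(N+1) \times N$ matrix $\mathbf{A}$ (whose $N$-row submatrices are linearly independent almost surely), the matrix $\mathbf{B}$ below is square of size $N(N+1)$ and turns out to have full rank (a sketch, with $N = 3$):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
A = rng.standard_normal((N + 1, N)) + 1j * rng.standard_normal((N + 1, N))
# A generic => any N of its N+1 rows are linearly independent (a.s.)

def D(A):
    # stack diag(column n of A), n = 1..N, as in (1)
    return np.vstack([np.diag(A[:, n]) for n in range(A.shape[1])])

B = np.hstack([np.kron(np.eye(N), A), D(A)[:, 1:]])   # [D(A)]_{.,[2:N+1]}
assert B.shape == (N * (N + 1), N * (N + 1))
assert np.linalg.matrix_rank(B) == N * (N + 1)
print("B has full rank, as claimed by Lemma 4")
```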

### Footnotes

1. In the remainder of the paper, we consider the noncoherent setting only. Consequently, we will refer to capacity in the noncoherent setting simply as capacity. Furthermore, we shall assume Rayleigh fading throughout.
2. When $Q = T$, capacity is known to grow double-logarithmically in SNR [7], and, hence, the capacity pre-log is zero.
3. The constant block-fading model is obviously a special case ($Q = 1$) of the correlated block-fading model.
4. Except for a set of measure zero.
5. The value of this constant can change at each appearance.
6. This assumption will be made in the remainder of the paper, without explicitly mentioning it again.

### References

1. İ. E. Telatar, “Capacity of multi-antenna Gaussian channels,” Eur. Trans. Telecommun., vol. 10, no. 6, pp. 585–595, Nov. 1999.
2. T. L. Marzetta and B. M. Hochwald, “Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading,” IEEE Trans. Inf. Theory, vol. 45, no. 1, pp. 139–157, Jan. 1999.
3. B. M. Hochwald and T. L. Marzetta, “Unitary space–time modulation for multiple-antenna communications in Rayleigh flat fading,” IEEE Trans. Inf. Theory, vol. 46, no. 2, pp. 543–564, Mar. 2000.
4. L. Zheng and D. N. C. Tse, “Communication on the Grassmann manifold: A geometric approach to the noncoherent multiple-antenna channel,” IEEE Trans. Inf. Theory, vol. 48, no. 2, pp. 359–383, Feb. 2002.
5. A. Lapidoth, “On the asymptotic capacity of stationary Gaussian fading channels,” IEEE Trans. Inf. Theory, vol. 51, no. 2, pp. 437–446, Feb. 2005.
6. T. Koch, On heating up and fading in communication channels, ser. Information Theory and its Applications, A. Lapidoth, Ed.   Konstanz, Germany: Hartung-Gorre Verlag, May 2009, vol. 5.
7. A. Lapidoth and S. M. Moser, “Capacity bounds via duality with applications to multiple-antenna systems on flat-fading channels,” IEEE Trans. Inf. Theory, vol. 49, no. 10, pp. 2426–2467, Oct. 2003.
8. Y. Liang and V. V. Veeravalli, “Capacity of noncoherent time-selective Rayleigh-fading channels,” IEEE Trans. Inf. Theory, vol. 50, no. 12, pp. 3095–3110, Dec. 2004.
9. T. Tao, “An uncertainty principle for cyclic groups of prime order,” Math. Res. Lett., vol. 12, no. 1, pp. 121–127, 2005.
10. W. Rudin, Real and Complex Analysis, 3rd ed.   New York, NY, USA: McGraw-Hill, 1987.
11. H. Lütkepohl, Handbook of Matrices.   Chichester, U.K.: Wiley, 1996.