Noncoherent SIMO PreLog via Resolution of Singularities
Abstract
We establish a lower bound on the noncoherent capacity prelog of a temporally correlated Rayleigh block-fading single-input multiple-output (SIMO) channel. Our result holds for arbitrary rank Q of the channel correlation matrix, arbitrary blocklength N, and arbitrary number of receive antennas R, and includes the result in Morgenshtern et al. (2010) as a special case. It is well known that the capacity prelog for this channel in the single-input single-output (SISO) case is given by 1 - Q/N, where Q/N is the penalty incurred by channel uncertainty. Our result reveals that this penalty can be reduced to 1/N by adding only one receive antenna, provided that Q is sufficiently small relative to N and the channel correlation matrix satisfies mild technical conditions. The main technical tool used to prove our result is Hironaka’s celebrated theorem on resolution of singularities in algebraic geometry.
1 Introduction
It was shown in [1] that the noncoherent capacity prelog of a SIMO channel can be strictly larger than that of the corresponding SISO channel.
The assumption made in [1] is very restrictive, and the proof technique used in [1] relies heavily on it. More precisely, the main result in [1] is based on a lower bound on the differential entropy of the channel output signal that is obtained by applying a change-of-variables argument [1, Lem. 3]. The proof is then completed by showing that the expected logarithm of the Jacobian determinant corresponding to this change of variables is finite. In the general case considered here, the Jacobian determinant takes a very involved form, making it difficult to say anything about its expected logarithm. The main contribution of this paper is to resolve this problem by introducing a new proof technique based on a result from algebraic geometry, namely [3, Th. 2.3], which is a consequence of Hironaka’s celebrated theorem on resolution of singularities [4, 5]. Roughly speaking, this result allows us to rewrite any real analytic function [6, Def. 1.6.1] locally as the product of a monomial and a nonvanishing real analytic function. The proof of our main result, a lower bound on the prelog for the correlated block-fading channel with an arbitrary number R of receive antennas, is then effected by using this factorization to show that the integral of the logarithm of the absolute value of a real analytic function over a compact set is finite, provided that the function is not identically zero. This method is very general and could be of independent interest whenever one needs to show that a certain differential entropy is finite.
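Roughly, the factorization argument can be sketched as follows; the symbols f, g, a, u, and k_i below are ours, introduced only for illustration:

```latex
% After the change of variables u -> g(u) provided by the resolution of
% singularities, f factors locally as a monomial times a nonvanishing factor:
%     f(g(u)) = a(u) u_1^{k_1} \cdots u_d^{k_d},  with  a(u) \neq 0.
% Consequently, over a compact set K in the new coordinates,
\int_K \log\bigl|f(g(u))\bigr| \,\mathrm{d}u
  = \int_K \log\lvert a(u)\rvert \,\mathrm{d}u
  + \sum_{i=1}^{d} k_i \int_K \log\lvert u_i\rvert \,\mathrm{d}u
  > -\infty ,
% since a is bounded away from zero on K and
% \int_0^1 \log u \,\mathrm{d}u = -1 is finite.
```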
We conclude by noting that the main result in this paper shows that the prelog penalty Q/N incurred in the SISO case, which is due to channel uncertainty, can be reduced to 1/N by adding only one receive antenna (i.e., by taking R = 2), provided that Q is sufficiently small relative to N and the channel correlation matrix satisfies mild technical conditions. In the limit N → ∞ with Q/N constant, the penalty 1/N in the SIMO case becomes arbitrarily small, whereas the penalty Q/N in the SISO case remains unchanged.
1.1 Notation
Finite subsets of the set of natural numbers are denoted by calligraphic letters, and we write for the cardinality of . We use to designate the set of natural numbers . Uppercase boldface letters denote matrices; lowercase boldface letters designate vectors. The superscripts and stand for transposition and Hermitian transposition, respectively. The all-zero matrix of appropriate size is written as . For a matrix , the entry in the th row and th column is denoted by , and we write for its th row. If , we denote by the submatrix of obtained by retaining all rows of with row index . Similarly, for a vector we denote its th entry by . If , we denote by the vector obtained by retaining the entries of with . We write for the th unit vector of appropriate size and for the identity matrix of size . For a vector , denotes the diagonal matrix that has the entries of on its main diagonal. For two matrices and of arbitrary size, is the block matrix that has the matrix as its upper left block, as its lower right block, and the all-zero matrix as its upper right and lower left blocks. For matrices , we define . We designate the Kronecker product of the matrices and as ; to simplify notation, we use the convention that the ordinary matrix product precedes the Kronecker product, i.e., . For a function , we write if there exists a vector in the domain of such that . For two functions and , the notation means that . If , and , with denoting the set of integers. The logarithm to the base 2 is written as . The expectation operator is denoted by . Finally, stands for the distribution of a jointly proper Gaussian random vector with mean and covariance matrix .
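The Kronecker-product convention above (the ordinary matrix product binds first) can be sanity-checked numerically; the matrix sizes below are arbitrary toy choices of ours:

```python
import numpy as np

# Check that I ⊗ AB, read as I ⊗ (AB), agrees with the mixed-product
# identity (I ⊗ A)(I ⊗ B) = I ⊗ (AB).
rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
I = np.eye(2)

lhs = np.kron(I, A @ B)                  # matrix product first, then Kronecker
rhs = np.kron(I, A) @ np.kron(I, B)      # mixed-product property
assert np.allclose(lhs, rhs)
```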
2 System model
We consider a SIMO channel with R receive antennas. The fading in each component SISO channel follows a correlated block-fading model [2], with input-output relation for a given block
(1) y_r = √ρ diag(x) h_r + w_r,  r = 1, …, R,
where
ρ denotes the signal-to-noise ratio (SNR),
x = [x_1 ⋯ x_N]^T is the transmitted signal vector,
y_r is the received signal vector corresponding to the r-th receive antenna, and
w_r ~ CN(0, I_N) is additive noise.
Finally,
h_r is the vector of channel coefficients
between the transmit antenna and the r-th receive antenna. Here, h_r ~ CN(0, Z Z^H), where Z is an N × Q matrix and Q is the rank of the channel correlation matrix Z Z^H.
Without loss of generality, we assume that the row vectors z_n of Z satisfy ‖z_n‖ = 1 (n = 1, …, N). The vectors h_r and w_r are assumed to be mutually independent and independent across r. It will turn out to be convenient to write the channel-coefficient vector in whitened form as h_r = Z s_r, where s_r ~ CN(0, I_Q). Finally, we assume that s_r and w_r change in an independent fashion from block to block for all r.
Setting y ≜ [y_1^T ⋯ y_R^T]^T, s ≜ [s_1^T ⋯ s_R^T]^T, w ≜ [w_1^T ⋯ w_R^T]^T, and X ≜ diag(x), we can combine the individual input-output relations in (1) into the overall input-output relation
(2) y = √ρ (I_R ⊗ X Z) s + w.
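As an illustrative numerical sketch (not part of the paper), the block model can be simulated with toy parameters; the specific form y_r = √ρ diag(x) Z s_r + w_r and the values N = 4, Q = 2, R = 2 are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, R = 4, 2, 2            # blocklength, correlation rank, receive antennas
snr = 10.0

# N x Q whitening matrix Z with unit-norm rows, so that E[h_r h_r^H] = Z Z^H
Z = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)

x = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # transmit block
X = np.diag(x)

# i.i.d. CN(0, I) whitened channel vectors and noise, independent across r
s = (rng.standard_normal((R, Q)) + 1j * rng.standard_normal((R, Q))) / np.sqrt(2)
w = (rng.standard_normal((R, N)) + 1j * rng.standard_normal((R, N))) / np.sqrt(2)

# per-antenna relation (1): y_r = sqrt(snr) * X Z s_r + w_r
y_per_antenna = np.stack([np.sqrt(snr) * X @ Z @ s[r] + w[r] for r in range(R)])

# stacked relation (2): y = sqrt(snr) * (I_R kron X Z) s + w
y_stacked = np.sqrt(snr) * np.kron(np.eye(R), X @ Z) @ s.reshape(-1) + w.reshape(-1)

# the two formulations agree
assert np.allclose(y_per_antenna.reshape(-1), y_stacked)
```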
3 Lower bound on the prelog
The capacity of the channel (2) is defined as
(3) C(ρ) ≜ (1/N) sup I(x; y),
where I(x; y) denotes mutual information [8, p. 251] and the supremum is taken over all input distributions on x that satisfy the average power constraint E[‖x‖²] ≤ N. The prelog is defined as χ ≜ lim_{ρ→∞} C(ρ)/log ρ.
The main result of this paper is the following theorem.
Theorem 1
Suppose that Z satisfies the following
Property (A): There exists a subset of indices with cardinality
(4) 
such that every Q row vectors of the corresponding row-submatrix of Z are linearly independent. Then, the capacity of the SIMO channel (2) can be lower-bounded as
(5) 
Remark 1
The SISO prelog is 1 - Q/N [2].
Remark 3
Remark 4
The prelog is 1 - 1/N, provided that Q ≤ (N + 1)/2, even if R = 2 only.
Remark 5
Property (A) in Theorem 1 is not restrictive and is satisfied for a broad class of correlation matrices.
Since we are interested in a capacity lower bound, we can evaluate the mutual information in (3) for an appropriate input distribution. Specifically, we take the input vector x to have entries x_n that are independent and identically distributed (i.i.d.), zero mean, unit variance, and satisfy E[|log |x_n||] < ∞. This implies that [7, Lem. 6.7]
(6) 
For example, we can take x_n ~ CN(0, 1). The mutual information in (3), evaluated for any input distribution satisfying these constraints, is then lower-bounded as follows. We first upper-bound it according to [1, Eq. (8)]
(7) 
and then lower-bound as in [1, Eq. (12)]
(8) 
where is a constant that is independent of , is independent of , with , and
(9) 
for sets satisfying
(10) 
The set can be interpreted as a set of pilot positions [1]. Combining (7) and (8), the capacity lower bound in (5) is established by choosing
provided that we can find sets such that . The remainder of the paper is devoted to identifying such a choice for and proving that the corresponding differential entropy is, indeed, finite. The main idea is to choose the sets such that can be related to with through a deterministic one-to-one mapping. The quantity is much easier to deal with than .
Condition (10) implies that the mapping
(11) 
is between two vector spaces of the same dimension , which is a necessary condition for this mapping to be one-to-one. Note that the RHS of (11) also depends on , which is, however, taken to be fixed, reflecting the fact that the pilot symbols are known to both transmitter and receiver. Any dependence on will henceforth implicitly mean a dependence on only. We set and shall choose as follows:

If , we set .

If , we let
(12) with and .
Now let
(13) 
be the Jacobian of the mapping in (11). If this mapping is one-to-one almost everywhere (a.e.), we can apply the change-of-variables theorem for integrals [9, Th. 7.26] in combination with [10, Th. 7.2] and find that
The proof is then concluded by establishing that the mapping in (11) is one-to-one a.e. and
(14) 
This requires an in-depth analysis of the Jacobian in (13), which will be carried out in the next section.
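The change-of-variables machinery can be sanity-checked numerically on a toy map; the polar-coordinates map below is our own example (the mappings in (11) are different), used only to illustrate how a Jacobian determinant controls the change of variables:

```python
import numpy as np

def polar(v):
    """Toy map (r, phi) -> (r cos phi, r sin phi); its Jacobian determinant is r."""
    r, phi = v
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def jacobian_det(f, v, h=1e-6):
    """Determinant of the Jacobian of f at v via central finite differences."""
    n = v.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(v + e) - f(v - e)) / (2.0 * h)
    return np.linalg.det(J)

v = np.array([2.0, 0.7])
# numerical Jacobian determinant matches the analytic value r
assert abs(jacobian_det(polar, v) - v[0]) < 1e-6
```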
4 Properties of the Jacobian
The following lemma provides important insights into the structure of the determinant of the Jacobian in (13).
Lemma 1
The lemma follows by noting that
Based on (6), we can conclude that
To conclude the proof of (14), it therefore remains to show that . Direct computation reveals that each vector in (17) with contains only one nonzero element, which is given by . Applying the Laplace formula [11, p. 7] to in (17) therefore yields the decomposition
(20) 
with
(21) 
and
The expectation of the logarithm of the second term on the RHS in (20) is finite because . It remains to show that . This is the most technical part of the proof of Theorem 1 and can be accomplished by applying methods from algebraic geometry, namely Theorem 2 in Appendix .1, which is a consequence of Hironaka’s celebrated theorem on resolution of singularities [4, 5]. A direct proof would require showing that the expected logarithm of the determinant of the (high-dimensional) matrix is finite, which seems exceedingly difficult. Hironaka’s theorem drastically simplifies the proof, as it tells us that implies that . We start by noting that is a homogeneous polynomial in of degree , i.e.,
(22) 
which allows us to apply the following proposition (for and defined above):
Proposition 1
Let be a homogeneous polynomial in of degree with . Then implies that
(23) 
Writing as and using the fact that is a homogeneous polynomial of degree , we can upper-bound the absolute value of the expectation in (23) by
Then [7, Lem. 6.7] together with implies that . Introducing polar coordinates [12, p. 55] and
for the complex vector , we can further upper-bound according to
where is obtained from by changing to polar coordinates . Note that is a real analytic function [6, Def. 1.6.1], implies that , and we are integrating over a compact set . We can therefore apply Theorem 2 in Appendix .1 to conclude that .
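A Monte Carlo sketch conveys the content of Proposition 1; the degree-2 polynomial below is a toy example of ours (not the determinant polynomial from the paper), chosen because it vanishes only on a measure-zero set, so the expected logarithm stays finite even though log|p| blows up near the zero set:

```python
import numpy as np

rng = np.random.default_rng(1)

def p(s):
    # homogeneous polynomial of degree 2 in 3 real variables, not identically zero
    return s[:, 0] * s[:, 1] - s[:, 2] ** 2

s = rng.standard_normal((200_000, 3))       # i.i.d. Gaussian samples
est = np.mean(np.log(np.abs(p(s))))         # Monte Carlo estimate of E[log|p(s)|]
assert np.isfinite(est)
```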
It remains to show that for our specific choice of sets , which will be proved in the following lemma:
Lemma 2
Property (A) in Theorem 1 implies that .
See Appendix .2.
In summary, we proved that (14) holds provided that Property (A) in Theorem 1 is satisfied. It turns out that also implies that the mapping in (11) is one-to-one a.e. on . The proof, which is along the lines of the proof of [1, Lem. 2], is omitted due to space limitations. This completes the proof of Theorem 1.
.1 Resolution of singularities
In this appendix, we show how Hironaka’s theorem on resolution of singularities can be used to prove that the integral of the logarithm of the absolute value of a real analytic function over a compact set is finite, provided that the function is not identically zero.
We start by defining notation that will be used in this appendix. Let denote the open cube with side length centered at . For and let . If is a subset of the image of a map then denotes the inverse image of .
The following lemma is an immediate consequence of a modified version of Hironaka’s theorem [3, Th. 2.3]. This modified version originally appeared in [13]. The main point of this lemma is that it allows us to rewrite any real analytic function [6, Def. 1.6.1] locally as a product of a monomial and a nonvanishing real analytic function.
Lemma 3
Let be a real analytic function from a neighborhood of to . Suppose that . Then, there exists a triple , where

is an open set in with ,

is a dimensional real analytic manifold [3, Def. 2.10],

is a real analytic map
that satisfies the following conditions:

The map is proper, i.e., the inverse image of any compact set is compact.

The map is a real analytic isomorphism between and .

For each point , there exists a coordinate chart such that , is a real analytic isomorphism for some with ,
where is a nonvanishing real analytic function on and , and the determinant of the Jacobian of the mapping satisfies
where is a nonvanishing real analytic function on and .
The main idea is to apply [3, Th. 2.3] to the function . We omit the details due to space limitations.
We are now in a position to state the theorem that is needed to prove in the proof of Proposition 1.
Theorem 2
Let f ≢ 0 be a real analytic function on an open set U ⊆ ℝ^m. Then
(24) ∫_K log |f(x)| dx > −∞
for all compact sets K ⊂ U.
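The phenomenon behind Theorem 2 can be illustrated numerically for a toy analytic function of our choosing, f(x) = x(1 − x²), which vanishes at three points inside the compact set [−1, 1]; the integral of log|f| is nevertheless finite (analytically it equals 4 log 2 − 6, since ∫ log|x| contributes −2 and ∫ log|1 − x²| contributes 4 log 2 − 4):

```python
import numpy as np

# Midpoint Riemann sum of log|f| on K = [-1, 1] for f(x) = x(1 - x^2).
n = 2_000_000
edges = np.linspace(-1.0, 1.0, n + 1)
mid = (edges[:-1] + edges[1:]) / 2.0       # midpoints avoid hitting the zeros of f
integral = np.sum(np.log(np.abs(mid * (1.0 - mid ** 2)))) * (2.0 / n)

# finite, and close to the analytic value 4 log 2 - 6
assert abs(integral - (4 * np.log(2) - 6)) < 1e-2
```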
Let . If then Lemma 3 implies that there exists a triple where is an open set containing , is a real analytic manifold, and is a proper real analytic map. Moreover, for each there exists a coordinate chart such that , , and
(25) 
for all , where and are nonvanishing real analytic functions on . We can choose sufficiently small so that and are bounded on . If the existence of a triple with the properties specified above is guaranteed by taking sufficiently small such that does not vanish on and by setting to be the identity map.
Now for each , we choose an open neighborhood and a compact neighborhood such that . Since is a compact set, there exists a finite set of vectors in such that
For each , set , , , and . Since the mapping is proper, each set is a compact set. Therefore, there exists a finite set of points in such that
(26) 
with . Since (26) holds for all , we can upper-bound the integral in (24) as follows:
where are positive real numbers, are bounded nonvanishing real analytic functions on , are vectors of nonnegative integers, and we changed variables according to (25).
.2 Proof of Lemma 2
We present a proof for and skip the (simpler) cases and .
Suppose that and . We can write in (17) as with and defined as
Property (A) in Theorem 1 implies that for arbitrary subsets
with , we can find vectors () such that

for all vectors with ;

for all vectors with .
This implies that for each choice of such sets (), there exists a set of vectors () such that the number of nonzero elements in each matrix satisfies
(27) 
Moreover, we have
which implies that we can choose the subsets and the vectors () such that each column of contains precisely one nonzero element. Applying the Laplace formula [11, p. 7] iteratively, we therefore get
where is a positive constant and we used Property (A) in Theorem 1 in the last step.
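The iterated Laplace (cofactor) expansion used above can be illustrated on a toy matrix of ours: if every column contains exactly one nonzero entry, expanding along the columns shows that the absolute determinant is simply the product of the nonzero entries.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
rows = rng.permutation(n)                 # row position of the nonzero in each column
vals = rng.uniform(1.0, 2.0, size=n)      # the nonzero entries themselves

# matrix with exactly one nonzero entry per column (a scaled permutation matrix)
A = np.zeros((n, n))
A[rows, np.arange(n)] = vals

# iterating the cofactor expansion gives |det A| = product of the nonzero entries
assert np.isclose(abs(np.linalg.det(A)), vals.prod())
```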
Footnotes
 Noncoherent capacity denotes capacity in the setting where transmitter and receiver know the channel statistics but neither of them is aware of the channel realizations.
When Q = N, capacity is known to grow double-logarithmically in SNR [7], and, hence, the prelog is equal to zero. If we can always achieve the same prelog as for by simply using only receive antennas.
 We assume that for all .
References
 V. I. Morgenshtern, G. Durisi, and H. Bölcskei, “The SIMO prelog can be larger than the SISO prelog,” in Proc. IEEE Int. Symp. Inf. Th. (ISIT 2010), Austin, TX, June 2010, pp. 320–324.
 Y. Liang and V. V. Veeravalli, “Capacity of noncoherent time-selective Rayleigh-fading channels,” IEEE Trans. Inf. Th., vol. 50, no. 12, pp. 3095–3110, Dec. 2004.
 S. Watanabe, Algebraic Geometry and Statistical Learning Theory. Cambridge, U.K.: Cambridge Univ. Press, 2009, vol. 25.
 H. Hironaka, “Resolution of singularities of an algebraic variety over a field of characteristic zero: I,” Ann. of Math., vol. 79, no. 1, pp. 109–203, Jan. 1964.
 ——, “Resolution of singularities of an algebraic variety over a field of characteristic zero: II,” Ann. of Math., vol. 79, no. 2, pp. 205–326, Mar. 1964.
 S. G. Krantz and H. R. Parks, A Primer of Real Analytic Functions. Basel, Switzerland: Birkhäuser, 1992, vol. 4.
 A. Lapidoth and S. M. Moser, “Capacity bounds via duality with applications to multiple-antenna systems on flat-fading channels,” IEEE Trans. Inf. Th., vol. 49, no. 10, pp. 2426–2467, Oct. 2003.
 T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. New York, NY: Wiley, 2006.
 W. Rudin, Real and Complex Analysis, 3rd ed. New York, NY: McGrawHill, 1987.
 K. Fritzsche and H. Grauert, From Holomorphic Functions to Complex Manifolds, 1st ed. New York, NY: Springer, 2002.
 R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1985.
 R. J. Muirhead, Aspects of Multivariate Statistical Theory. New York, NY: Wiley, 1982.
 M. F. Atiyah, “Resolution of singularities and division of distributions,” Comm. Pure Appl. Math., vol. 23, pp. 145–150, 1970.