Performance analysis of spatial smoothing schemes in the context of large arrays.
Abstract
This paper addresses the statistical behaviour of spatial smoothing subspace DoA estimation schemes using a sensor array in the case where the number of observations is significantly smaller than the number of sensors, and where the smoothing parameter is such that and are of the same order of magnitude. This context is modelled by an asymptotic regime in which and both converge towards at the same rate. As in recent works devoted to the study of (unsmoothed) subspace methods in the case where and are of the same order of magnitude, it is shown that it is still possible to derive improved DoA estimators, termed Generalized MUSIC with spatial smoothing (G-MUSIC SS). The key ingredient of this work is a technical result showing that the largest singular values and corresponding singular vectors of a low rank deterministic perturbation of certain Gaussian block-Hankel large random matrices behave as if the entries of the latter random matrices were independent and identically distributed. This allows us to conclude that when the number of sources and their DoAs do not scale with , a situation modelling widely spaced DoA scenarios, both the traditional and the Generalized spatial smoothing subspace methods provide consistent DoA estimators whose convergence speed is faster than . The case of DoAs separated by a distance of the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that the convergence speed of G-MUSIC SS estimates is unchanged, but that this is no longer the case for MUSIC SS estimates.
1 Introduction
The statistical analysis of subspace DoA estimation methods using an array of sensors is a topic that has received a lot of attention since the seventies. Most of the works were devoted to the case where the number of available samples of the observed signal is much larger than the number of sensors of the array (see e.g. [14] and the references therein). More recently, the case where and are large and of the same order of magnitude was addressed for the first time in [10] using large random matrix theory. [10] was followed by various works such as [7], [17], [6], [5]. The number of observations may also be much smaller than the number of sensors. In this context, it is well established that spatial smoothing schemes, originally developed to address coherent sources ([4], [13], [11]), can be used to artificially increase the number of snapshots (see e.g. [14] and the references therein, as well as the recent related contributions [15], [16] devoted to the case where ). Spatial smoothing consists in considering overlapping arrays with sensors, and makes it possible to artificially generate snapshots observed on a virtual array of sensors. The corresponding matrix, denoted , collecting the observations, is the sum of a low rank component generated by -dimensional steering vectors and a noise matrix having a block-Hankel structure. Subspace methods can still be developed, but the statistical analysis of the corresponding DoA estimators was addressed only in the standard regime where remains fixed while converges towards . This context is not the most relevant when is large, because must then be chosen in such a way that the number of virtual sensors is small enough w.r.t. , thus limiting the statistical performance of the estimates.
In this paper, we study the statistical performance of spatial smoothing subspace DoA estimators in asymptotic regimes where and both converge towards at the same rate, where in order not to affect the aperture of the virtual array, and where the number of sources does not scale with . For this, it is necessary to evaluate the behaviour of the largest eigenvalues and corresponding eigenvectors of the empirical covariance matrix . To address this issue, we prove that the above eigenvalues and eigenvectors have the same asymptotic behaviour as if the noise contribution to matrix , a block-Hankel random matrix, were a Gaussian random matrix with independent identically distributed entries. To establish this result, we rely on the recent work [8] addressing the behaviour of the singular values of large block-Hankel random matrices built from i.i.d. Gaussian sequences. [8] implies that the empirical eigenvalue distribution of matrix converges towards the Marchenko-Pastur distribution, and that its eigenvalues are almost surely located in a neighborhood of the support of this distribution. This allows us to generalize the results of [3] to our random matrix model, and to characterize the behaviour of the largest eigenvalues and eigenvectors of . From this we deduce improved subspace estimators, called G-MUSIC SS (spatial smoothing) DoA estimators, which are similar to those of [17] and [5]. We deduce from the results of [18] that when the DoAs do not scale with , i.e. if the DoAs are widely spaced compared to the array aperture, then both the G-MUSIC SS and the traditional MUSIC SS estimators are consistent and converge at a rate faster than . Moreover, when the DoAs are separated by a distance of the order of , the behaviour of the G-MUSIC SS estimates remains unchanged, but the convergence rate of the traditional subspace estimates is lower.
This paper is organized as follows. In section 2, we specify the signal models and the underlying assumptions, and formulate our main results. In section 3, we prove that the largest singular values and corresponding singular vectors of a low rank deterministic perturbation of certain Gaussian block-Hankel large random matrices behave as if the entries of the latter random matrices were independent identically distributed. In section 4, we apply the results of section 3 to matrix , and follow [5] in order to adapt a G-MUSIC algorithm to the spatial smoothing context of this paper. The consistency and the convergence speed of the G-MUSIC SS estimates and of the traditional MUSIC SS estimates are then deduced from the results of [18]. Finally, section 5 presents numerical experiments supporting our theoretical results.
Notations: For a complex matrix , we denote by its transpose and by its conjugate transpose, and by and its trace and spectral norm. The identity matrix will be , and will refer to the vector whose components are all equal to except the th one, which is equal to . For a sequence of random variables and a random variable , we write
when converges almost surely towards . Finally, will stand for the convergence of to in probability, and will stand for tightness (boundedness in probability).
2 Problem formulation and main results.
2.1 Problem formulation.
We assume that narrowband and far-field source signals are impinging on a uniform linear array of sensors, with . In this context, the –dimensional received signal can be written as
where

is the matrix of –dimensional steering vectors , with the DoAs of the source signals, and ;

contains the source signals received at time , considered as unknown and deterministic;

is a temporally and spatially white complex Gaussian noise with spatial covariance .
The received signal is observed between time and time , and we collect the available observations in the matrix defined by
(1) 
with and . We assume that for each greater than . The DoA estimation problem consists in estimating the DoAs from the matrix of samples .
When the number of observations is much less than the number of sensors , the standard subspace method fails. In this case, it is standard to use spatial smoothing schemes in order to artificially increase the number of observations. In particular, it is well established that spatial smoothing schemes allow to use subspace methods even in the single snapshot case, i.e. when (see e.g. [14] and the references therein). If , spatial smoothing consists in considering overlapping subarrays of dimension . At each time , snapshots of dimension are thus available, and the scheme provides observations of dimension . In order to be more specific, we introduce the following notations. If is an integer less than , we denote by the Hankel matrix defined by
(2) 
Column of matrix corresponds to the observation on subarray at time . Collecting all the observations on the various subarrays makes it possible to obtain snapshots, thus artificially increasing the number of observations. We define as the block-Hankel matrix given by
(3) 
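As an illustration of this construction, here is a minimal Python sketch. Since the symbols of the paper are not recoverable from this version of the text, the notation is ours and purely illustrative: `Y` is the M x N snapshot matrix, `L` is the number of overlapping subarrays, and each subarray has M' = M - L + 1 sensors. Each snapshot contributes one Hankel block of subarray observations, and the N Hankel blocks are collected into the smoothed observation matrix:

```python
import numpy as np

def spatial_smoothing_matrix(Y, L):
    """Build the spatially smoothed observation matrix.

    Y : (M, N) array of N snapshots received on a ULA with M sensors.
    L : number of overlapping subarrays; each subarray has M' = M - L + 1 sensors.
    Returns an (M', N * L) matrix whose columns are the subarray snapshots.
    """
    M, N = Y.shape
    Mp = M - L + 1          # subarray dimension M'
    blocks = []
    for n in range(N):
        # Hankel block: column l is the observation on subarray l at time n
        H = np.column_stack([Y[l:l + Mp, n] for l in range(L)])
        blocks.append(H)
    return np.hstack(blocks)  # M' x (N L) smoothed observation matrix
```

The scheme thus trades M - M' physical sensors for an L-fold increase in the number of available snapshots, which is the point of spatial smoothing when the snapshot count is small.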
In order to express , we consider the Hankel matrix defined from vector in the same way as . We remark that is rank 1, and can be written as
(4) 
We consider the matrix
(5) 
which, of course, is a rank matrix whose range coincides with the subspace generated by the dimensional vectors . can be written as
(6) 
where matrix is the block-Hankel matrix corresponding to the additive noise. As matrix is full rank, the extended observation matrix appears as a noisy version of a low rank component whose range is the –dimensional subspace generated by vectors . Moreover, it is easy to check that
Therefore, it is potentially possible to estimate the DoAs using a subspace approach based on the eigenvalue / eigenvector decomposition of matrix . The asymptotic behaviour of spatial smoothing subspace methods is standard in the regime where remains fixed while converges towards . This is due to the law of large numbers, which implies that the empirical covariance matrix has the same asymptotic behaviour as . In this context, the orthogonal projection matrix onto the eigenspace associated with the smallest eigenvalues of is a consistent estimate of the orthogonal projection matrix onto the noise subspace, i.e. the orthogonal complement of . In other words, it holds that
(7) 
The traditional pseudospectrum estimate defined by
thus verifies
(8) 
where is the MUSIC pseudospectrum. Moreover, the traditional MUSIC DoA estimates, defined formally, for , by
(9) 
where is a compact interval containing and such that for , are consistent, i.e.
(10) 
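For concreteness, the traditional spatial smoothing MUSIC pseudospectrum described above can be sketched as follows. This is a minimal illustration under our own assumptions: the half-wavelength ULA steering model, the unit normalization, and all variable names are ours, not notation from the paper; the input is a smoothed observation matrix such as the one produced by the spatial smoothing scheme.

```python
import numpy as np

def music_ss_pseudospectrum(S, K, grid):
    """Traditional MUSIC pseudospectrum computed from a smoothed matrix.

    S    : (M', NL) spatially smoothed observation matrix.
    K    : number of sources (assumed known).
    grid : candidate DoAs in radians (half-wavelength ULA assumed).
    """
    Mp, NL = S.shape
    R = S @ S.conj().T / NL                       # empirical covariance matrix
    w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    Pi = V[:, :Mp - K] @ V[:, :Mp - K].conj().T   # estimated noise-subspace projector
    m = np.arange(Mp)
    spec = np.empty(len(grid))
    for i, th in enumerate(grid):
        a = np.exp(1j * np.pi * m * np.sin(th)) / np.sqrt(Mp)  # unit-norm steering vector
        spec[i] = np.real(a.conj() @ Pi @ a)
    return spec  # the K deepest minima give the traditional MUSIC SS DoA estimates
```

The pseudospectrum vanishes asymptotically at the true DoAs because the corresponding steering vectors lie in the signal subspace, i.e. they are orthogonal to the estimated noise subspace.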
However, the regime where remains fixed while converges towards is not very interesting in practice, because the size of the subarrays may be much smaller than the number of antennas , thus reducing the resolution of the method. We therefore study spatial smoothing schemes in regimes where the dimensions and of matrix are of the same order of magnitude, and where in order to keep the aperture of the array unchanged. More precisely, we assume that the integers and depend on and that
(11) 
In regime (11), thus converges towards , but at a rate that may be much lower than , thus modelling contexts in which is much smaller than . As , it also holds that . Therefore, it is clear that , where verifies with . may thus converge towards (even faster than if ), but in such a way that . As in regime (11) depends on , it could be appropriate to index the various matrices and DoA estimators by the integer rather than by the integer , as in definitions (5) and (9). However, we prefer to use the index in the following in order to keep the notations unchanged. We also denote the projection matrix and the pseudospectrum by and , because they depend on . Moreover, in the following, the notation should be understood as regime (11) for some .
2.2 Main results.
In regime (11), (7) is no longer valid, and (10) is therefore questionable. In this paper, we show that it is possible to generalize the G-MUSIC estimators introduced in [5] in the case where to the context of spatial smoothing schemes in regime (11), and we establish the following results. Under the separation condition that the non zero eigenvalues of matrix are above the threshold for each large enough, we deduce from [18] that:

the traditional spatial smoothing MUSIC estimates and the G-MUSIC SS estimates, denoted , are consistent and verify
(12) (13)
(12) and (13) hold when the DoAs do not scale with . In practice, this assumption corresponds to situations where the DoAs are widely spaced, because when the DoAs are fixed, the ratio
converges towards . We deduce from [18] that:
As in the case , the above-mentioned separation condition ensures that the largest eigenvalues of the empirical covariance matrix correspond to the sources, and that the signal and noise subspaces can be separated. In order to obtain some insight into this condition, and into the potential benefit of the spatial smoothing, we study the separation condition when and converge towards at the same rate, i.e. when , or equivalently when and does not scale with . In this case, it is clear that coincides with . Under the assumption that converges towards a diagonal matrix when increases, we establish that the separation condition holds if
(14) 
for each large enough. If , the separation condition introduced in the context of (unsmoothed) GMUSIC algorithms ([5]) is of course recovered, i.e.
If is large and , matrix is close to , and the separation condition is nearly equivalent to
It is thus seen that the use of the spatial smoothing scheme reduces the threshold of the G-MUSIC method without spatial smoothing by a factor . Hence, if and are of the same order of magnitude, our asymptotic analysis predicts an improvement of the performance of the G-MUSIC SS methods when increases, provided . If becomes too large, the above rough analysis is no longer justified; the impact of the reduction of the number of antennas becomes dominant, and the performance tends to decrease.
3 Asymptotic behaviour of the largest singular values and corresponding singular vectors of finite rank perturbations of certain large random block-Hankel matrices.
In this section, still satisfy (11), while is a fixed integer that does not scale with . We consider the block-Hankel random matrix defined previously, and introduce the matrix defined by
in order to simplify the notations. The entries of have of course variance . In the following, represents a deterministic matrix verifying
(15) 
for each large enough. We denote by the non zero eigenvalues of matrix arranged in decreasing order, and by and the associated left and right singular vectors of . The singular value decomposition of is thus given by
Moreover, we assume that:
Assumption 1.
The non zero eigenvalues of matrix converge towards when .
Here, for ease of exposition, we assume that the eigenvalues have multiplicity 1 and that for . However, the forthcoming results can be easily adapted if some coincide.
We define matrix as
(16) 
can thus be interpreted as a rank perturbation of the block-Hankel matrix . The purpose of this section is to study the behaviour of the largest eigenvalues of matrix , as well as of their corresponding eigenvectors . It turns out that and behave as if the entries of matrix were i.i.d. To see this, we first have to specify the behaviour of the eigenvalues of matrix in the asymptotic regime (11).
3.1 Behaviour of the eigenvalues of matrix .
We first recall the definition of the Marchenko-Pastur distribution of parameters and (see e.g. [1]). is the probability distribution defined by
with and . Its Stieltjes transform defined by
is known to satisfy the fundamental equation
(17) 
or equivalently,
(18)  
(19) 
where is known to coincide with the Stieltjes transform of the Marchenko-Pastur distribution .
In order to simplify the notations, we denote by and the Stieltjes transforms of the Marchenko-Pastur distributions and . and verify Equations (18) and (19) for . We also denote by and the terms and . We recall that the function defined by
(20) 
is analytic on , verifies , and increases from to when increases from to (see [3], section 3.1). Moreover, if denotes function defined by
(21) 
then, increases from to when increases from to . Finally, it holds that
(22) 
for each .
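The Marchenko-Pastur limit and the fundamental equation of its Stieltjes transform can be checked numerically. The sketch below is ours: all dimensions are illustrative choices (a p x n complex Gaussian matrix with unit-variance entries, ratio c = p/n, noise variance sigma^2 = 1), and the quadratic c z m^2 + (z + c - 1) m + 1 = 0 is a standard equivalent form of the fixed-point equation satisfied by the Stieltjes transform, not a formula quoted from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 200, 800                       # dimensions; ratio c = p/n = 0.25
c = p / n
# complex Gaussian matrix with i.i.d. unit-variance entries
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
W = X @ X.conj().T / n                # empirical covariance matrix
eig = np.linalg.eigvalsh(W)

# eigenvalues concentrate on the Marchenko-Pastur support [(1-sqrt(c))^2, (1+sqrt(c))^2]
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2

# empirical Stieltjes transform at a real point z < 0
z = -1.0
m_hat = np.mean(1.0 / (eig - z))
# standard quadratic form of the Marchenko-Pastur fixed-point equation (sigma^2 = 1):
# c z m^2 + (z + c - 1) m + 1 = 0
residual = c * z * m_hat ** 2 + (z + c - 1) * m_hat + 1
```

For these moderate dimensions, all eigenvalues already fall within a small neighborhood of the support edges, and the residual of the fixed-point equation is of the order of the finite-size fluctuations.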
We denote by and the so-called resolvents of matrices and , defined by
Then, the results of [8] imply the following proposition.
Proposition 1.

(i) The eigenvalue distribution of matrix converges almost surely towards the Marchenko-Pastur distribution , or equivalently, for each ,
(23) 
(ii) For each , almost surely, for large enough, all the eigenvalues of belong to if , and to if .

(iii) Moreover, if are –dimensional deterministic vectors satisfying , then it holds that for each
(24) Similarly, if and are –dimensional deterministic vectors verifying , then for each , it holds that
(25) Moreover, for each , it holds that
(26) Finally, for each , convergence properties (24, 25, 26) hold uniformly w.r.t. on each compact subset of .
The proof is given in the Appendix.
Remark 1.
Proposition 1 implies that, in a certain sense, matrix behaves as if the entries of were i.i.d., because Proposition 1 is known to hold for i.i.d. matrices. In the i.i.d. case, (23) was established for the first time in [9], the almost sure location of the eigenvalues of can be found in [1] (see Theorem 5.11), while (24), (25) and (26) are trivial modifications of Lemma 5 of [5].
We notice that the convergence towards the Marchenko-Pastur distribution holds as soon as and . In particular, the convergence is still valid if for each , or equivalently if for each . can therefore converge towards much faster than . However, the hypothesis that , which is also equivalent to with , is necessary to establish item (ii).
3.2 The largest eigenvalues and eigenvectors of .
While matrix does not meet the conditions formulated in [3], Proposition 1 makes it possible to follow the approach of [3], and to prove that the largest eigenvalues and corresponding eigenvectors of behave as if the entries of were i.i.d. In particular, the following result holds.
Theorem 1.
We denote by , , the largest integer for which
(27) 
Then, for , it holds that
(28) 
Moreover, for , it holds that
(29) 
Finally, for all deterministic sequences of unit norm vectors , , we have for
(30) 
where function is defined by
(31) 
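Theorem 1 parallels the classical spiked-model results for i.i.d. matrices ([3]): the largest eigenvalues separate from the bulk and converge towards a deterministic function of the perturbation eigenvalues, with a deterministic limiting alignment of the eigenvectors. As a numerical illustration of the i.i.d. phenomenon that the theorem extends to block-Hankel noise, the sketch below uses our own notation (a p x n complex Gaussian matrix, ratio c = p/n, a rank-one perturbation with eigenvalue `lam`) and the well-known i.i.d. limits: above the threshold sqrt(c), the largest sample eigenvalue converges to (lam + 1)(lam + c)/lam, the rest of the spectrum stays near the Marchenko-Pastur edge, and the squared overlap of the top eigenvector with the true direction converges to (lam^2 - c)/(lam (lam + c)). These closed forms are the standard i.i.d. formulas, not expressions quoted from this version of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 300, 900                        # dimensions; ratio c = p/n = 1/3
c = p / n
lam = 2.0                              # perturbation eigenvalue, above the threshold sqrt(c)

# rank-one deterministic perturbation: (P P^*) / n has the single non-zero eigenvalue lam
u = np.zeros(p); u[0] = 1.0
v = np.zeros(n); v[0] = 1.0
P = np.sqrt(n * lam) * np.outer(u, v)

# i.i.d. complex Gaussian noise matrix with unit-variance entries
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
M = (P + X) / np.sqrt(n)

w, V = np.linalg.eigh(M @ M.conj().T)      # ascending eigenvalues and eigenvectors

phi = (lam + 1) * (lam + c) / lam          # predicted limit of the largest eigenvalue
edge = (1 + np.sqrt(c)) ** 2               # right edge of the Marchenko-Pastur support
align = np.abs(V[:, -1].conj() @ u) ** 2   # squared overlap with the true direction
pred_align = (lam ** 2 - c) / (lam * (lam + c))
```

With these dimensions, the largest eigenvalue is already close to its predicted limit, while the second one remains stuck at the bulk edge, which is the separation phenomenon exploited by the subspace estimators of section 4.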
4 Derivation of a consistent G-MUSIC method.
We now use the results of section 3 for matrix and . We recall that and represent the eigenvalues and eigenvectors of the empirical covariance matrix , and that and are the non zero eigenvalues and corresponding eigenvectors of . We recall that represents the orthogonal projection matrix onto the noise subspace, i.e. the orthogonal complement of the space generated by vectors and that is the corresponding MUSIC pseudospectrum
Theorem 1 allows us to immediately generalize the results of [5] and [18] concerning the consistency of the G-MUSIC and MUSIC DoA estimators in the case . More precisely:
Theorem 2.
Assume that the non zero eigenvalues converge towards deterministic terms and that
(32) 
Then, the estimator of the pseudospectrum defined by
(33) 
verifies
(34) 
This result can be proved in the same way as Proposition 1 in [5].
In order to obtain some insight into condition (32) and into the potential benefits of the spatial smoothing, we make the separation condition (32) explicit when and converge towards at the same rate, i.e. when , or equivalently when and does not scale with . In this case, it is clear that coincides with . It is easily seen that
(35) 
where represents the Hadamard (i.e. entry-wise) product of matrices, and where stands for entry-wise complex conjugation of the elements of matrix . If we assume that converges towards a diagonal matrix when increases, then converges towards the diagonal matrix . Therefore, when is large enough. Using that , we obtain that the separation condition is nearly equivalent to
or to
(36) 
for each large enough. If , the separation condition introduced in the context of (unsmoothed) G-MUSIC algorithms ([5]) is of course recovered, i.e.
for each large enough. If is large and , matrix is close to , and the separation condition is nearly equivalent to
Therefore, it is seen that the use of the spatial smoothing scheme reduces the threshold of the G-MUSIC method without spatial smoothing by a factor . Hence, if and are of the same order of magnitude, our asymptotic analysis predicts an improvement of the performance of the G-MUSIC methods based on spatial smoothing when increases, provided . If becomes too large, the above rough analysis is no longer justified; the impact of the reduction of the number of antennas becomes dominant, and the performance tends to decrease. This analysis is supported by the numerical simulations presented in section 5.
We define the G-MUSIC SS DoA estimates by
(37) 
where is a compact interval containing and such that for . As in [5], (34) as well as the particular structure of the directional vectors imply the following result, which can be proved in the same way as Theorem 3 of [5].
Theorem 3.
Remark 2.
We remark that, under the extra assumption that converges towards a diagonal matrix, [5] (see also [19] for more general matrices ) proved, when , that converges in distribution towards a Gaussian distribution. It would be interesting to generalize the results of [5] and [19] to the G-MUSIC estimators with spatial smoothing in the asymptotic regime (11). This is a difficult task that is beyond the scope of the present paper.
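In practice, estimates such as (37), defined as minimizers of an estimated pseudospectrum over disjoint compact intervals, are computed by locating the deepest well-separated minima of the pseudospectrum on a grid. The following minimal peak-picking sketch is ours, not a procedure taken from the paper:

```python
import numpy as np

def doa_from_pseudospectrum(grid, spec, K, min_sep):
    """Pick K DoA estimates as the deepest well-separated minima of a pseudospectrum.

    grid    : candidate DoAs.
    spec    : pseudospectrum values on the grid.
    K       : number of sources.
    min_sep : minimal separation enforced between two retained estimates,
              playing the role of the disjoint intervals in (37).
    """
    est = []
    for i in np.argsort(spec):                 # grid points, deepest first
        if all(abs(grid[i] - t) >= min_sep for t in est):
            est.append(grid[i])
        if len(est) == K:
            break
    return np.sort(np.array(est))
```

The separation parameter prevents two estimates from being picked inside the same valley of the pseudospectrum, mimicking the disjoint compact intervals of the formal definition.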