Performance analysis of spatial smoothing schemes in the context of large arrays
This paper addresses the statistical behaviour of spatial smoothing subspace DoA estimation schemes using a sensor array in the case where the number of observations is significantly smaller than the number of sensors , and where the smoothing parameter is such that and are of the same order of magnitude. This context is modelled by an asymptotic regime in which and both converge towards at the same rate. As in recent works devoted to the study of (unsmoothed) subspace methods in the case where and are of the same order of magnitude, it is shown that it is still possible to derive improved DoA estimators, termed Generalized MUSIC with spatial smoothing (G-MUSIC SS). The key ingredient of this work is a technical result showing that the largest singular values and corresponding singular vectors of low-rank deterministic perturbations of certain Gaussian block-Hankel large random matrices behave as if the entries of the latter random matrices were independent identically distributed. This allows us to conclude that when the number of sources and their DoAs do not scale with , a situation modelling widely spaced DoA scenarios, both the traditional and the Generalized spatial smoothing subspace methods provide consistent DoA estimators whose convergence speed is faster than . The case of DoAs that are spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that the convergence speed of the G-MUSIC SS estimates is unchanged, but that this is no longer the case for the MUSIC SS ones.
The statistical analysis of subspace DoA estimation methods using an array of sensors is a topic that has received a lot of attention since the seventies. Most of the works were devoted to the case where the number of available samples of the observed signal is much larger than the number of sensors of the array (see e.g.  and the references therein). More recently, the case where and are large and of the same order of magnitude was addressed for the first time in  using large random matrix theory.  was followed by various works such as , , , . The number of observations may also be much smaller than the number of sensors. In this context, it is well established that spatial smoothing schemes, originally developed to address coherent sources (, , ), can be used to artificially increase the number of snapshots (see e.g.  and the references therein, see also the recent related contributions ,  devoted to the case where ). Spatial smoothing consists in considering overlapping arrays with sensors, and makes it possible to artificially generate snapshots observed on a virtual array of sensors. The corresponding matrix, denoted , collecting the observations is the sum of a low-rank component generated by -dimensional steering vectors and a noise matrix having a block-Hankel structure. Subspace methods can still be developed, but the statistical analysis of the corresponding DoA estimators was addressed in the standard regime where remains fixed while converges towards . This context is not the most relevant when is large, because must then be chosen in such a way that the number of virtual sensors is small enough w.r.t. , thus limiting the statistical performance of the estimates. In this paper, we study the statistical performance of spatial smoothing subspace DoA estimators in asymptotic regimes where and both converge towards at the same rate, where in order not to affect the aperture of the virtual array, and where the number of sources does not scale with .
For this, it is necessary to evaluate the behaviour of the largest eigenvalues and corresponding eigenvectors of the empirical covariance matrix . To address this issue, we prove that the above eigenvalues and eigenvectors have the same asymptotic behaviour as if the noise contribution to matrix , a block-Hankel random matrix, were a Gaussian random matrix with independent identically distributed entries. To establish this result, we rely on the recent result  addressing the behaviour of the singular values of large block-Hankel random matrices built from i.i.d. Gaussian sequences.  implies that the empirical eigenvalue distribution of matrix converges towards the Marcenko-Pastur distribution, and that its eigenvalues are almost surely located in the neighborhood of the support of the above distribution. This allows us to generalize the results of  to our random matrix model, and to characterize the behaviour of the largest eigenvalues and eigenvectors of . We deduce from this improved subspace estimators, called G-MUSIC SS (spatial smoothing) DoA estimators, which are similar to those of  and . We deduce from the results of  that when the DoAs do not scale with , i.e. when the DoAs are widely spaced compared to the array aperture, both the G-MUSIC SS and the traditional MUSIC SS estimators are consistent and converge at a rate faster than . Moreover, when the DoAs are spaced on the order of , the behaviour of the G-MUSIC SS estimates remains unchanged, but the convergence rate of the traditional subspace estimates is lower.
This paper is organized as follows. In section 2, we specify the signal models and the underlying assumptions, and formulate our main results. In section 3, we prove that the largest singular values and corresponding singular vectors of low-rank deterministic perturbations of certain Gaussian block-Hankel large random matrices behave as if the entries of the latter random matrices were independent identically distributed. In section 4, we apply the results of section 3 to matrix , and follow  in order to propose a G-MUSIC algorithm adapted to the spatial smoothing context of this paper. The consistency and the convergence speed of the G-MUSIC SS estimates and of the traditional MUSIC SS estimates are then deduced from the results of . Finally, section 5 presents numerical experiments supporting our theoretical results.
Notations : For a complex matrix , we denote by its transpose and its conjugate transpose, and by and its trace and spectral norm. The identity matrix will be , and will refer to a vector whose components are all equal to except the -th one, which is equal to . For a sequence of random variables and a random variable , we write
when converges almost surely towards . Finally, will stand for the convergence of to in probability, and will stand for tightness (boundedness in probability).
2 Problem formulation and main results.
2.1 Problem formulation.
We assume that narrow-band and far-field source signals are impinging on a uniform linear array of sensors, with . In this context, the –dimensional received signal can be written as
is the matrix of –dimensional steering vectors , with the source signal DoAs, and ;
contains the source signals received at time , considered as unknown and deterministic;
is a temporally and spatially white complex Gaussian noise with spatial covariance .
The received signal is observed between time and time , and we collect the available observations in the matrix defined
with and . We assume that for each greater than . The DoA estimation problem consists in estimating the DoA from the matrix of samples .
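As an illustrative sketch (not taken from the paper), the observation model above can be simulated in a few lines of NumPy. The dimensions, the half-wavelength ULA spacing, the example DoAs and the noise variance below are all placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 40, 10, 2            # sensors, snapshots, sources (illustrative sizes)
doas = np.array([0.2, 0.5])    # example DoAs in radians

# ULA steering matrix (half-wavelength spacing assumed), unit-norm columns
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas))) / np.sqrt(M)

# deterministic-like source samples and temporally/spatially white noise
S = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
sigma2 = 0.1
V = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

Y = A @ S + V                  # M x N matrix collecting the N observations
print(Y.shape)                 # (40, 10)
```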
When the number of observations is much less than the number of sensors , the standard subspace method fails. In this case, it is standard to use spatial smoothing schemes in order to artificially increase the number of observations. In particular, it is well established that spatial smoothing schemes allow to use subspace methods even in the single snapshot case, i.e. when (see e.g.  and the references therein). If , spatial smoothing consists in considering overlapping subarrays of dimension . At each time , snapshots of dimension are thus available, and the scheme provides observations of dimension . In order to be more specific, we introduce the following notations. If is an integer less than , we denote by the Hankel matrix defined by
Column of matrix corresponds to the observation on subarray at time . Collecting all the observations on the various subarrays makes it possible to obtain snapshots, thus artificially increasing the number of observations. We define as the block-Hankel matrix given by
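The construction just described can be sketched as follows; the variable names (y for one snapshot, L for the smoothing parameter) are illustrative conventions rather than the paper's notation:

```python
import numpy as np

def hankel_snapshot(y, L):
    """Hankel matrix of size (M-L+1) x L built from one M-dimensional snapshot y.

    Column l contains the observation of the subarray made of sensors l..l+M-L."""
    M = y.shape[0]
    Mp = M - L + 1                       # dimension of each virtual subarray
    return np.column_stack([y[l:l + Mp] for l in range(L)])

def block_hankel(Y, L):
    """Stack the Hankel matrices of the N snapshots side by side:
    the result has M-L+1 rows and N*L columns (N*L artificial snapshots)."""
    return np.hstack([hankel_snapshot(Y[:, n], L) for n in range(Y.shape[1])])

# toy example: M = 6 sensors, N = 2 snapshots, L = 3 overlapping subarrays
Y = np.arange(12).reshape(6, 2)
W = block_hankel(Y, 3)
print(W.shape)   # (4, 6): M-L+1 = 4 virtual sensors, N*L = 6 snapshots
```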
In order to express , we consider the Hankel matrix defined from vector in the same way as . We remark that is rank 1, and can be written as
We consider the matrix
which, of course, is a rank matrix whose range coincides with the subspace generated by the -dimensional vectors . can be written as
where matrix is the block-Hankel matrix corresponding to the additive noise. As matrix is full rank, the extended observation matrix appears as a noisy version of a low-rank component whose range is the –dimensional subspace generated by vectors . Moreover, it is easy to check that
Therefore, it is potentially possible to estimate the DoAs using a subspace approach based on the eigenvalue / eigenvector decomposition of matrix . The asymptotic behaviour of spatial smoothing subspace methods is standard in the regimes where remains fixed while converges towards . This is due to the law of large numbers, which implies that the empirical covariance matrix has the same asymptotic behaviour as . In this context, the orthogonal projection matrix onto the eigenspace associated with the smallest eigenvalues of is a consistent estimate of the orthogonal projection matrix onto the noise subspace, i.e. the orthogonal complement of . In other words, it holds that
The traditional pseudo-spectrum estimate is defined by
where is the MUSIC pseudo-spectrum. Moreover, the traditional MUSIC DoA estimates, defined formally, for , by
where is a compact interval containing and such that for , are consistent, i.e.
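A minimal sketch of the traditional MUSIC procedure described above (noise projector built from the sample covariance, pseudo-spectrum, and minimization over a compact interval). The half-wavelength ULA steering model is an assumption, not the paper's definition:

```python
import numpy as np

def steering(M, theta, d=0.5):
    # unit-norm ULA steering vector (half-wavelength spacing assumed)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta)) / np.sqrt(M)

def music_doa(R, K, interval, grid_size=241, d=0.5):
    """Minimizer of the MUSIC pseudo-spectrum over a compact interval.

    R is a (sample) covariance matrix and K the known number of sources; the
    noise projector is built from the eigenvectors of the M-K smallest
    eigenvalues of R."""
    M = R.shape[0]
    w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    Pi = V[:, :M - K] @ V[:, :M - K].conj().T     # noise-subspace projector
    grid = np.linspace(*interval, grid_size)
    eta = [np.real(np.vdot(steering(M, t, d), Pi @ steering(M, t, d))) for t in grid]
    return grid[np.argmin(eta)]

# sanity check on the exact covariance of a single source at theta0 = 0.3
M, theta0 = 20, 0.3
a = steering(M, theta0)
R = np.outer(a, a.conj()) + 0.01 * np.eye(M)
print(music_doa(R, 1, (-0.6, 0.6)))   # close to 0.3
```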
However, the regime where remains fixed while converges towards is not very interesting in practice because the size of the subarrays may be much smaller than the number of antennas , thus reducing the resolution of the method. We therefore study spatial smoothing schemes in regimes where the dimensions and of matrix are of the same order of magnitude, and where in order to keep the aperture of the array unchanged. More precisely, we assume that the integers and depend on and that
In regime (11), thus converges towards but at a rate that may be much lower than thus modelling contexts in which is much smaller than . As , it also holds that . Therefore, it is clear that where verifies with . may thus converge towards (even faster than if ) but in such a way that . As in regime (11) depends on , it could be appropriate to index the various matrices and DoA estimators by integer rather than by integer as in definitions (5) and (9). However, we prefer to use the index in the following in order to keep the notations unchanged. We also denote projection matrix and pseudo-spectrum by and because they depend on . Moreover, in the following, the notation should be understood as regime (11) for some .
2.2 Main results.
In regime (11), (7) is no longer valid. Hence, (10) is questionable. In this paper, we show that it is possible to generalize the G-MUSIC estimators introduced in  in the case where to the context of spatial smoothing schemes in regime (11), and we establish the following results. Under the separation condition that the non zero eigenvalues of matrix are above the threshold for each large enough, we deduce from  that:
the spatial smoothing traditional MUSIC estimates and the G-MUSIC SS estimates, denoted are consistent and verify
converges towards . We deduce from  that:
As in the case , the above mentioned separation condition ensures that the largest eigenvalues of the empirical covariance matrix correspond to the sources, and that the signal and noise subspaces can be separated. In order to obtain some insights on this condition, and on the potential benefit of the spatial smoothing, we study the separation condition when and converge towards at the same rate, i.e. when , or equivalently when and does not scale with . In this case, it is clear that coincides with . Under the assumption that converges towards a diagonal matrix when increases, we establish that the separation condition holds if
for each large enough. If , the separation condition introduced in the context of (unsmoothed) G-MUSIC algorithms () is of course recovered, i.e.
If is large and , matrix is close to and the separation condition is nearly equivalent to
Therefore, it is seen that the use of the spatial smoothing scheme reduces the threshold corresponding to the G-MUSIC method without spatial smoothing by the factor . Hence, if and are of the same order of magnitude, our asymptotic analysis predicts an improvement of the performance of the G-MUSIC SS methods when increases, provided . If becomes too large, the above rough analysis is no longer justified, the impact of the reduction of the number of antennas becomes dominant, and the performance tends to decrease.
3 Asymptotic behaviour of the largest singular values and corresponding singular vectors of finite rank perturbations of certain large random block-Hankel matrices.
In this section, still satisfy (11) while is a fixed integer that does not scale with . We consider the block-Hankel random matrix defined previously, and introduce matrix defined
in order to simplify the notations. The entries of have of course variance . In the following, represents a deterministic matrix verifying
for each large enough. We denote by the non zero eigenvalues of matrix arranged in decreasing order, and by and the associated left and right singular vectors of . The singular value decomposition of is thus given by
Moreover, we assume that:
The non zero eigenvalues of matrix converge towards when .
Here, for ease of exposition, we assume that the eigenvalues have multiplicity 1 and that for . However, the forthcoming results can be easily adapted if some coincide.
We define matrix as
can thus be interpreted as a rank perturbation of the block-Hankel matrix . The purpose of this section is to study the behaviour of the largest eigenvalues of matrix as well as of their corresponding eigenvectors . It turns out that and behave as if the entries of matrix were i.i.d. To see this, we first have to specify the behaviour of the eigenvalues of matrix in the asymptotic regime (11).
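This equivalence can be illustrated numerically: with a generic rank-one deterministic perturbation, the largest singular value of the perturbed block-Hankel noise matrix nearly coincides with that of a perturbed genuinely i.i.d. matrix of the same dimensions and entry variance, and both detach from the noise bulk. The dimensions and the perturbation strength below are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L = 300, 150, 4
Mp, NL = M - L + 1, N * L            # 297 virtual sensors, 600 artificial snapshots

def block_hankel(V, L):
    Mp = V.shape[0] - L + 1
    return np.hstack([np.column_stack([V[l:l + Mp, n] for l in range(L)])
                      for n in range(V.shape[1])])

# generic rank-one deterministic perturbation theta * u v^H
theta = 2.0
u = rng.standard_normal(Mp)
u /= np.linalg.norm(u)
v = rng.standard_normal(NL)
v /= np.linalg.norm(v)
P = theta * np.outer(u, v)

# noise 1: block-Hankel matrix built from an i.i.d. complex Gaussian field
V = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
W = block_hankel(V, L) / np.sqrt(NL)

# noise 2: genuinely i.i.d. matrix with the same dimensions and entry variance
Z = (rng.standard_normal((Mp, NL)) + 1j * rng.standard_normal((Mp, NL))) / np.sqrt(2 * NL)

s_hankel = np.linalg.svd(P + W, compute_uv=False)[0]
s_iid = np.linalg.svd(P + Z, compute_uv=False)[0]
print(s_hankel, s_iid)   # the two largest singular values nearly coincide
```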
3.1 Behaviour of the eigenvalues of matrix .
We first recall the definition of the Marcenko-Pastur distribution of parameters and (see e.g. ). is the probability distribution defined by
with and . Its Stieltjes transform defined by
is known to satisfy the fundamental equation
where is known to coincide with the Stieltjes transform of the Marcenko-Pastur distribution .
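The support of the Marcenko-Pastur distribution can be checked against the eigenvalues of the sample covariance matrix of an i.i.d. complex Gaussian matrix; the dimensions below are illustrative, and unit entry variance is assumed:

```python
import numpy as np

def mp_support(c, sigma2=1.0):
    """Support edges of the Marcenko-Pastur distribution of parameters (sigma2, c)."""
    return sigma2 * (1 - np.sqrt(c)) ** 2, sigma2 * (1 + np.sqrt(c)) ** 2

# eigenvalues of (1/N) X X^H for an i.i.d. complex Gaussian M x N matrix X
rng = np.random.default_rng(2)
M, N = 200, 400
c = M / N
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
eigs = np.linalg.eigvalsh(X @ X.conj().T / N)

lo, hi = mp_support(c)
print(lo, hi, eigs.min(), eigs.max())   # eigenvalues concentrate on [lo, hi]
```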
In order to simplify the notations, we denote by and the Stieltjes transforms of Marcenko-Pastur distributions and . and verify Equations (18) and (19) for . We also denote by and the terms and . We recall that function defined by
is analytic on , verifies , and increases from to when increases from to (see , section 3.1). Moreover, if denotes function defined by
then, increases from to when increases from to . Finally, it holds that
for each .
We denote by and the so-called resolvent of matrices and defined by
Then, the results of  imply the following proposition.
(i) The eigenvalue distribution of matrix converges almost surely towards the Marcenko-Pastur distribution , or equivalently, for each ,
(ii) For each , almost surely, for large enough, all the eigenvalues of belong to if , and to if .
(iii) Moreover, if are –dimensional deterministic vectors satisfying , then it holds that for each
Similarly, if and are –dimensional deterministic vectors verifying , then for each , it holds that
Moreover, for each , it holds that
The proof is given in the Appendix.
Proposition 1 implies that, in a certain sense, matrix behaves as if the entries of were i.i.d., because Proposition 1 is known to hold for i.i.d. matrices. In the i.i.d. case, (23) was established for the first time in , the almost sure location of the eigenvalues of can be found in  (see Theorem 5-11), while (24), (25) and (26) are trivial modifications of Lemma 5 of .
We notice that the convergence towards the Marcenko-Pastur distribution holds as soon as and . In particular, the convergence is still valid if for each , or equivalently if for each . can therefore converge towards much faster than . However, the hypothesis that , which is also equivalent to with , is necessary to establish item (ii).
3.2 The largest eigenvalues and eigenvectors of .
While matrix does not meet the conditions formulated in , Proposition 1 allows us to use the approach of , and to prove that the largest eigenvalues and corresponding eigenvectors of behave as if the entries of were i.i.d. In particular, the following result holds.
We denote by , , the largest integer for which
Then, for , it holds that
Moreover, for , it holds that
Finally, for all deterministic sequences of unit norm vectors , , we have for
where function is defined by
4 Derivation of a consistent G-MUSIC method.
We now use the results of section 3 for matrix and . We recall that and represent the eigenvalues and eigenvectors of the empirical covariance matrix , and that and are the non zero eigenvalues and corresponding eigenvectors of . We recall that represents the orthogonal projection matrix onto the noise subspace, i.e. the orthogonal complement of the space generated by vectors and that is the corresponding MUSIC pseudo-spectrum
Assume that the non zero eigenvalues converge towards deterministic terms and that
Then, the estimator of the pseudo-spectrum defined by
This result can be proved as Proposition 1 in .
In order to obtain some insights on condition (32) and on the potential benefits of the spatial smoothing, we make the separation condition (32) explicit when and converge towards at the same rate, i.e. when , or equivalently when and does not scale with . In this case, it is clear that coincides with . It is easily seen that
where represents the Hadamard (i.e. entrywise) product of matrices, and where stands for the entrywise complex conjugate of matrix . If we assume that converges towards a diagonal matrix when increases, then converges towards the diagonal matrix . Therefore, when is large enough. Using that , we obtain that the separation condition is nearly equivalent to
for each large enough. If , the separation condition introduced in the context of (unsmoothed) G-MUSIC algorithms () is of course recovered, i.e.
for each large enough. If is large and , matrix is close to and the separation condition is nearly equivalent to
Therefore, it is seen that the use of the spatial smoothing scheme reduces the threshold corresponding to the G-MUSIC method without spatial smoothing by the factor . Hence, if and are of the same order of magnitude, our asymptotic analysis predicts an improvement of the performance of the G-MUSIC methods based on spatial smoothing when increases, provided . If becomes too large, the above rough analysis is no longer justified, the impact of the reduction of the number of antennas becomes dominant, and the performance tends to decrease. This analysis is supported by the numerical simulations presented in section 5.
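The diagonal-limit assumption used in this analysis can be checked numerically for a uniform linear array: for fixed, widely spaced DoAs, unit-norm steering vectors become asymptotically orthogonal as the number of sensors grows. Half-wavelength spacing and the two DoAs below are assumptions for the sake of illustration:

```python
import numpy as np

def steering(M, theta):
    # unit-norm ULA steering vector, half-wavelength spacing assumed
    return np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(theta)) / np.sqrt(M)

# inner product between steering vectors of two fixed DoAs, for growing arrays
overlaps = {M: abs(np.vdot(steering(M, 0.2), steering(M, 0.5)))
            for M in (20, 200, 2000)}
print(overlaps)   # the overlap shrinks as the array grows
```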
We define the DoA G-MUSIC SS estimates by
where is a compact interval containing and such that for . As in , (34) as well as the particular structure of the directional vectors imply the following result, which can be proved as Theorem 3 of .
Under condition (32), the DoA G-MUSIC SS estimates verify
for each .
We remark that, under the extra assumption that converges towards a diagonal matrix,  (see also  for more general matrices ) proved when that converges in distribution towards a Gaussian distribution. It would be interesting to generalize the results of  and  to the G-MUSIC estimators with spatial smoothing in the asymptotic regime (11). This is a difficult task that is not within the scope of the present paper.