Sample Size Cognizant Detection of Signals in White Noise


The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a computationally simple, sample eigenvalue based procedure for estimating the number of high-dimensional signals in white noise when there are relatively few samples. We highlight a fundamental asymptotic limit of sample eigenvalue based detection of weak high-dimensional signals from a limited sample size and discuss its implication for the detection of two closely spaced signals.

This motivates our heuristic definition of the effective number of identifiable signals. Numerical simulations are used to demonstrate the consistency of the algorithm with respect to the effective number of signals and the superior performance of the algorithm with respect to Wax and Kailath’s “asymptotically consistent” MDL based estimator.


Raj Rao Nadakuditi (This work was supported by NSF DMS-0411962 and ONR N00014-07-1-0269.)
Massachusetts Institute of Technology
Department of EECS
Cambridge, MA 02139.
Alan Edelman
Massachusetts Institute of Technology
Department of Mathematics
Cambridge, MA 02139.

Index Terms—  Signal detection, eigen-inference, random matrices

1 Introduction

The observation vector, in many signal processing applications, can be modelled as a superposition of a finite number of signals embedded in additive noise. Detecting the number of signals present becomes a key issue and is often the starting point for the signal parameter estimation problem. When the signals and the noise are assumed to be samples of a stationary, ergodic Gaussian vector process, the sample covariance matrix formed from observations has the Wishart distribution.

The proposed algorithm uses an information theoretic criterion, motivated by the approach taken by Wax and Kailath (henceforth WK) in [1], for determining the number of signals in white noise by performing inference on the eigenvalues of the resulting sample covariance matrix. The form of the estimator is motivated by the distributional properties of moments of the eigenvalues of large dimensional Wishart matrices [2].

The proposed estimator was derived by explicitly accounting for the blurring and fluctuations of the eigenvalues due to sample size constraints. Consequently, there is a greater theoretical justification for employing the proposed estimator in sample starved settings unlike the WK estimators which were derived assuming that the sample size greatly exceeds the number of sensors. This is reflected in the improved performance relative to the “asymptotically consistent” WK MDL based estimator.

Another important contribution of this paper is the description of a fundamental limit of eigen-inference, i.e., inference using the sample eigenvalues alone. The concept of effective number of identifiable signals, introduced herein, explains why, asymptotically, if the signal level is below a threshold that depends on the noise variance, sample size and the dimensionality of the system, then reliable detection is not possible.

This paper is organized as follows. The problem is formulated in Section 2. An estimator for the number of signals present that exploits results from random matrix theory is derived in Section 3. The fundamental limits of sample eigenvalue based detection and the concept of effective number of signals are discussed in Section 4. Simulation results are presented in Section 5 while some concluding remarks and directions for future research are presented in Section 6.

2 Problem Formulation

We observe m samples (“snapshots”) x_1, …, x_m of possibly signal bearing n-dimensional snapshot vectors, in which, for each i, the signal and noise components are mutually independent. The snapshot vectors are modelled as

    x_i = A s_i + z_i,   for i = 1, …, m,    (1)

where z_i ~ N_n(0, σ² I) denotes an n-dimensional (real or complex) Gaussian noise vector whose variance σ² is generically unknown, s_i ~ N_k(0, R_s) denotes a k-dimensional (real or complex) Gaussian signal vector with covariance R_s, and A is an unknown, non-random n × k matrix.
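As a concrete numerical illustration of model (1) in the real valued case, the sketch below generates synthetic snapshots; the sizes n, m, k, the mixing matrix A, and the signal covariance R_s are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

# Illustrative sizes (our choices): n sensors, m snapshots, k signals.
n, m, k, sigma2 = 8, 32, 2, 1.0
rng = np.random.default_rng(0)

A = rng.standard_normal((n, k))            # unknown, non-random n x k mixing matrix
R_s = np.diag([4.0, 2.0])                  # nonsingular k x k signal covariance
S = rng.multivariate_normal(np.zeros(k), R_s, size=m).T   # k x m signal vectors s_i
Z = np.sqrt(sigma2) * rng.standard_normal((n, m))         # white Gaussian noise z_i
X = A @ S + Z                              # snapshots x_i = A s_i + z_i, as columns
R_hat = X @ X.T / m                        # sample covariance matrix (real case)
```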

Since the signal and noise vectors are independent of each other, the covariance matrix of x_i can hence be decomposed as

    R = A R_s A^H + σ² I,

with ^H denoting the conjugate transpose. Assuming that the matrix A is of full column rank, i.e., the columns of A are linearly independent, and that the covariance matrix R_s of the signals is nonsingular, it follows that the rank of A R_s A^H is k. Equivalently, the n − k smallest eigenvalues of A R_s A^H are equal to zero.

If we denote the eigenvalues of R by λ_1 ≥ λ_2 ≥ … ≥ λ_n, then it follows that the n − k smallest eigenvalues of R are all equal to σ², so that

    λ_{k+1} = λ_{k+2} = … = λ_n = σ².

Thus, if the true covariance matrix R were known a priori, the dimension k of the signal vector could be determined from the multiplicity of the smallest eigenvalue of R. The problem in practice is that the covariance matrix R is unknown, so that such a straightforward algorithm cannot be used. The signal detection and estimation problem is hence posed in terms of an inference problem on m samples of n-dimensional multivariate real or complex Gaussian snapshot vectors.
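The idealized known-covariance procedure can be made concrete: given R itself, k is recovered by counting eigenvalues strictly above the (multiple) smallest one. The eigenvalues below are illustrative choices.

```python
import numpy as np

# If the true covariance R were known, k is the number of eigenvalues of R
# lying strictly above the smallest one. Toy example: n = 5, k = 2, sigma2 = 1.
sigma2 = 1.0
lam = np.array([6.0, 3.0, sigma2, sigma2, sigma2])   # eigenvalues of R
R = np.diag(lam)                                     # any R with these eigenvalues

eigs = np.sort(np.linalg.eigvalsh(R))                # ascending order
tol = 1e-8
k = int(np.sum(eigs > eigs[0] + tol))                # multiplicity-based count
print(k)   # -> 2
```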

A classical approach to this problem, developed by Bartlett [3] and Lawley [4], uses a sequence of hypothesis tests. Though this approach is sophisticated, the main problem is the subjective judgement needed by the practitioner in selecting the threshold levels for the different tests. This was overcome by Wax and Kailath in [1], wherein they propose an estimator for the number of signals (assuming m ≥ n) based on the eigenvalues l_1 ≥ l_2 ≥ … ≥ l_n of the sample covariance matrix (SCM) defined by

    R̂ = (1/m) Σ_{i=1}^m x_i x_i^H = (1/m) X X^H,

where X = [x_1 x_2 … x_m] is the n × m matrix of observations (samples). The Akaike Information Criterion (AIC) form of the estimator is given by

    k̂_AIC = argmin_{k ∈ {0, 1, …, n−1}} { −2(n − k) m log( g(k)/a(k) ) + 2k(2n − k) },

while the Minimum Description Length (MDL) criterion is given by

    k̂_MDL = argmin_{k ∈ {0, 1, …, n−1}} { −(n − k) m log( g(k)/a(k) ) + (1/2) k(2n − k) log m },    (7)

where g(k) = ( ∏_{i=k+1}^n l_i )^{1/(n−k)} is the geometric mean of the n − k smallest sample eigenvalues and a(k) = (1/(n−k)) Σ_{i=k+1}^n l_i is their arithmetic mean.

It is known [1] that the AIC form inconsistently estimates the number of signals, while the MDL form estimates the number of signals consistently in the large sample limit. The simplicity of the estimator and its large sample consistency are among the primary reasons why the Wax-Kailath MDL estimator continues to be employed in practice [5]. In the two decades since the publication of the WK paper, researchers have come up with many innovative solutions ([6, 7, 8], to list a few) for making the estimators more robust by exploiting some type of prior knowledge.

The most important deficiency of the WK and related estimators that remains unresolved occurs when the sample size is smaller than the number of sensors, i.e., when m < n. In this situation, the SCM is singular and the estimators become degenerate. Practitioners often overcome this in an ad hoc fashion by, for example, restricting k in (7) to integer values in the range {0, 1, …, min(n, m) − 1}. Since large sample, i.e., m → ∞, asymptotics were used to derive the estimators in [1], there is no rigorous theoretical justification for such a reformulation even if the simulation results suggest that the WK estimators are working “well enough.”
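For reference, a minimal sketch of the WK MDL rule with the ad hoc range restriction discussed above; the function name and the example eigenvalues are our illustrative choices, and the criterion follows the MDL formula of [1] applied to the sorted sample eigenvalues.

```python
import numpy as np

def mdl_wax_kailath(sample_eigs, m):
    """WK MDL estimate of the number of signals from sample eigenvalues,
    with the ad hoc restriction k < min(n, m) so the criterion stays
    finite when the SCM is singular (m < n)."""
    l = np.sort(np.asarray(sample_eigs, dtype=float))[::-1]   # descending
    n = l.size
    scores = []
    for k in range(min(n, m)):
        tail = l[k:]                               # n - k smallest eigenvalues
        g = np.exp(np.mean(np.log(tail)))          # geometric mean
        a = np.mean(tail)                          # arithmetic mean
        scores.append(-(n - k) * m * np.log(g / a)
                      + 0.5 * k * (2 * n - k) * np.log(m))
    return int(np.argmin(scores))

# Deterministic example: two well separated "signal" eigenvalues, n = 6, m = 200.
print(mdl_wax_kailath([20.0, 10.0, 1.0, 1.0, 1.0, 1.0], 200))   # -> 2
```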

Other sample eigenvalue based solutions found in the literature that exploit the sample eigenvalue order statistics [6], or employ a Bayesian framework by imposing priors on the number of signals [9], are computationally more intensive and do not address the sample size starved setting in their analysis or their simulations. Particle filter based techniques [8], while useful, require the practitioner to model the eigenvectors of the underlying population covariance matrix as well; this makes them especially sensitive to model mismatch errors that are endemic to high-dimensional settings. This motivates our development of a sample eigenvalue based estimator with a computational complexity comparable to that of the WK estimators.

3 Estimating the Number of Signals

Given an observation x and a family of models, or equivalently a parameterized family of probability densities f(x | θ) indexed by the parameter vector θ, we select the model which gives the minimum Akaike Information Criterion (AIC) [10] defined by

    AIC = −2 log f(x | θ̂) + 2η,    (8)

where θ̂ is the maximum likelihood estimate of θ, and η is the number of free parameters in θ. We derive an AIC based estimator for the number of signals by exploiting the following distributional properties of the moments of eigenvalues of the (signal-free) SCM.

Theorem 1

(Dumitriu-Edelman [2]) Assume R̂ is formed from m snapshots modelled as (1) with k = 0 and σ² = 1. Then, as n, m → ∞ with n/m → c ∈ (0, ∞), the linear statistics Σ_i l_i and Σ_i l_i² of the sample eigenvalues, suitably centered, converge jointly in distribution (denoted ⇒) to a bivariate Gaussian random vector whose mean and covariance depend only on c and on β, where β = 1 (or β = 2) when the snapshots are real (or complex) valued, respectively.

Proposition 2

Assume R̂ satisfies the hypotheses of Theorem 1 for some β. Then, as n, m → ∞ with n/m → c,

    t_n ⇒ N( 0, 4c²/β ),

and the test statistic t_n is given by

    t_n = n [ ( (1/n) Σ_{i=1}^n l_i² ) / ( (1/n) Σ_{i=1}^n l_i )² − ( 1 + n/m ) ] − ( 2/β − 1 ) (n/m).
Proof. This follows from applying the delta method [11] to the results in Theorem 1.     

When k signals are present and k ≪ n, the distributional properties of the n − k “noise” eigenvalues are closely approximated by the distributional properties of the eigenvalues of the signal-free SCM given by Theorem 1, i.e., R̂ with k = 0. Hence, evaluating the statistic of Proposition 2 over a sliding window of the n − k smallest sample eigenvalues,

    t_k = n [ ( (n − k) Σ_{i=k+1}^n l_i² ) / ( Σ_{i=k+1}^n l_i )² − ( 1 + n/m ) ] − ( 2/β − 1 ) (n/m),

and using the normal approximation for t_k from Proposition 2, with η = k + 1 free parameters in the AIC formulation in (8), results in the estimator:

    k̂_new = argmin_{k ∈ {0, …, min(n,m)−1}} { (β/4) (m/n)² t_k² + 2(k + 1) }.

Here β = 1 if the snapshots are real valued, and β = 2 if they are complex valued. When the measurement vectors represent quaternion valued narrowband signals, then we set β = 4. Quaternion valued vectors arise when the data collected from vector sensors is represented using quaternions as in [12].
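A compact numerical sketch of the proposed estimator as described above; the function name and the simulated test scenario are our illustrative choices, and β should be set to 1, 2, or 4 as appropriate for the data.

```python
import numpy as np

def detect_signals(sample_eigs, m, beta=2):
    """Sketch of the proposed sample-size-cognizant estimator.
    sample_eigs: eigenvalues of the SCM; m: number of snapshots;
    beta: 1 (real), 2 (complex), 4 (quaternion)."""
    l = np.sort(np.asarray(sample_eigs, dtype=float))[::-1]
    n = l.size
    scores = []
    for k in range(min(n, m)):
        tail = l[k:]                                   # n - k smallest eigenvalues
        t_k = ((n - k) * np.sum(tail**2) / np.sum(tail)**2 - (1 + n / m)) * n \
              - (2 / beta - 1) * (n / m)
        scores.append((beta / 4) * (m / n) ** 2 * t_k ** 2 + 2 * (k + 1))
    return int(np.argmin(scores))

# Complex-valued example: n = 100 sensors, m = 200 snapshots, one strong signal.
rng = np.random.default_rng(0)
n, m = 100, 200
X = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
X[0, :] *= np.sqrt(10.0)        # one population eigenvalue lifted to 10
l_hat = np.linalg.eigvalsh(X @ X.conj().T / m).real
k_hat = detect_signals(l_hat, m, beta=2)
print(k_hat)
```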

4 Fundamental Limit of Detection

The following result exposes when the “signal” eigenvalues are asymptotically distinguishable from the “noise” eigenvalues.

Proposition 3

Assume R̂ is formed from m snapshots modelled as (1), now with k ≥ 0 signals. Denote the eigenvalues of R by λ_1 ≥ λ_2 ≥ … ≥ λ_n. Let l_j denote the j-th largest eigenvalue of R̂. Then as n, m → ∞ with n/m → c ∈ (0, ∞),

    l_j → λ_j ( 1 + σ² c / (λ_j − σ²) )   if λ_j > σ² (1 + √c),
    l_j → σ² (1 + √c)²                    if λ_j ≤ σ² (1 + √c),

where the convergence is almost surely.

Proof. This result appears in [13] for very general settings. A matrix theoretic proof for the real valued case may be found in [14] and an interacting particle system interpretation appears in [15].

Motivated by Proposition 3, we define the effective number of identifiable signals as

    k_eff = # { j : λ_j > σ² ( 1 + √(n/m) ) }.    (12)

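A direct transcription of definition (12); the population eigenvalues, noise variance, and system sizes below are illustrative choices.

```python
import numpy as np

def effective_num_signals(pop_eigs, sigma2, n, m):
    """Effective number of identifiable signals: population eigenvalues of R
    exceeding the asymptotic detectability threshold sigma2 * (1 + sqrt(n/m))."""
    threshold = sigma2 * (1 + np.sqrt(n / m))
    return int(np.sum(np.asarray(pop_eigs) > threshold))

# Two signals, but only one is above the threshold when snapshots are scarce.
pop_eigs = [10.0, 1.5, 1.0, 1.0]                            # eigenvalues of R
print(effective_num_signals(pop_eigs, 1.0, n=100, m=100))   # threshold 2.0 -> 1
print(effective_num_signals(pop_eigs, 1.0, n=100, m=10000)) # threshold 1.1 -> 2
```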
4.1 Identifiability of closely spaced signals

Suppose there are two uncorrelated (hence, independent) signals, so that R_s = diag(σ_S², σ_I²). In (1), let A = [v_S  v_I]. In a sensor array processing application, we think of v_S ≡ v(θ_S) and v_I ≡ v(θ_I) as encoding the array manifold vectors for a source and an interferer with powers σ_S² and σ_I², located at θ_S and θ_I, respectively. The covariance matrix given by

    R = σ_S² v_S v_S^H + σ_I² v_I v_I^H + σ² I

has the n − 2 smallest eigenvalues λ_3 = … = λ_n = σ² and the two largest eigenvalues

    λ_1 = σ² + (σ_S² ||v_S||² + σ_I² ||v_I||²)/2 + √( (σ_S² ||v_S||² + σ_I² ||v_I||²)²/4 − σ_S² σ_I² ( ||v_S||² ||v_I||² − |⟨v_S, v_I⟩|² ) ),
    λ_2 = σ² + (σ_S² ||v_S||² + σ_I² ||v_I||²)/2 − √( (σ_S² ||v_S||² + σ_I² ||v_I||²)²/4 − σ_S² σ_I² ( ||v_S||² ||v_I||² − |⟨v_S, v_I⟩|² ) ),
respectively. Applying the result in Proposition 3 allows us to express the effective number of signals as

    k_eff = 2   if λ_2 > σ² (1 + √(n/m)),
    k_eff = 1   if λ_1 > σ² (1 + √(n/m)) ≥ λ_2,
    k_eff = 0   if λ_1 ≤ σ² (1 + √(n/m)).
In the special situation when ||v_S|| = ||v_I|| = ||v|| and σ_S² = σ_I² = σ_sig², we can (in an asymptotic sense) reliably detect the presence of both signals from the sample eigenvalues alone whenever

    σ_sig² ||v||² ( 1 − |cos θ| ) > σ² √(n/m),    (16)

where cos θ = ⟨v_S, v_I⟩ / ( ||v_S|| ||v_I|| ). Equation (16) captures the tradeoff between the identifiability of two closely spaced signals, the dimensionality of the system, the number of available snapshots and the cosine of the angle between the vectors v_S and v_I. It may prove to be a useful heuristic for experimental design.
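The two-signal calculation above can be checked numerically: the nonzero eigenvalues of the rank-two perturbation are those of its 2 × 2 restriction to span{v_S, v_I}, which yields λ_1 and λ_2 for comparison against the detectability threshold of Proposition 3. The unit-norm manifold vectors, power levels, and system sizes are illustrative choices.

```python
import numpy as np

def two_signal_eigs(vS, vI, pS, pI, sigma2):
    """Two largest eigenvalues of R = pS vS vS^H + pI vI vI^H + sigma2 I,
    via the 2x2 restriction of the rank-two perturbation to span{vS, vI}."""
    C = np.array([[pS * np.vdot(vS, vS), pS * np.vdot(vS, vI)],
                  [pI * np.vdot(vI, vS), pI * np.vdot(vI, vI)]])
    mu = np.sort(np.linalg.eigvals(C).real)[::-1]   # nonzero perturbation eigenvalues
    return sigma2 + mu[0], sigma2 + mu[1]

# Equal-power, unit-norm source and interferer separated by angle theta.
n, m, sigma2, p = 50, 100, 1.0, 5.0
theta = 0.4
vS = np.zeros(n); vS[0] = 1.0
vI = np.zeros(n); vI[0], vI[1] = np.cos(theta), np.sin(theta)

lam1, lam2 = two_signal_eigs(vS, vI, p, p, sigma2)
threshold = sigma2 * (1 + np.sqrt(n / m))
both_identifiable = lam2 > threshold       # here lam2 = 1.39 < 1.71: only one signal
```

Note that although both signals have power well above the noise floor, their closeness (θ = 0.4) pushes λ_2 below the threshold, so only one signal is identifiable from the sample eigenvalues.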

5 Simulations

Fig. 1: Comparison of the estimators over 20,000 trials: (a) empirical probability of correct detection versus the number of snapshots for fixed dimensionality; (b) empirical probability of correct detection versus dimensionality for large n and m.

Assume the covariance matrix has n − 2 “noise” eigenvalues equal to σ² and two “signal” eigenvalues λ_1 > λ_2 > σ². When sufficiently many samples are available, Figure 1(a) shows that the proposed estimator consistently detects two signals while the WK MDL estimator does not. However, when the sample size is reduced, Figure 1(a) suggests that neither estimator is able to detect both of the signals present. A closer examination of the empirical data presents a different picture. For the covariance matrix considered, when the sample size is small enough that λ_2 ≤ σ²(1 + √(n/m)), then from (12), k_eff = 1. Figure 1(b) shows that for large n and m, the new estimator consistently estimates one signal, as expected. The WK MDL estimator detects no signals. We conjecture that the new estimator consistently estimates k_eff in the n, m → ∞ sense.

6 Conclusions

An estimator for the number of signals in white noise was presented that exhibits robustness to high dimensionality and sample size constraints. The concept of the effective number of signals described herein provides insight into the (asymptotic) regime in which reliable detection with sample eigenvalue based methods, including the proposed method, is possible. This helps identify scenarios where algorithms that exploit any structure in the eigenvectors of the signals, such as the MUSIC and the Capon-MVDR [5] algorithms in sensor array processing, might be better able to tease out lower level signals from the background noise. It is worth noting that the proposed approach remains relevant in situations where the eigenvector structure has been identified. This is because eigen-inference methodologies are inherently robust to eigenvector modelling errors that are endemic to high-dimensional settings. Thus the practitioner may use the proposed estimator to complement and “robustify” the inference provided by algorithms that exploit the eigenvector structure.


We thank Arthur Baggeroer, William Ballance and the anonymous reviewers for their feedback and encouragement.


  • [1] Mati Wax and Thomas Kailath, “Detection of signals by information theoretic criteria,” IEEE Trans. Acoust. Speech Signal Process., vol. 33, no. 2, pp. 387–392, 1985.
  • [2] I. Dumitriu and A. Edelman, “Global spectrum fluctuations for the β-Hermite and β-Laguerre ensembles via matrix models,” J. Math. Phys., vol. 47, no. 6, p. 063302, 2006.
  • [3] M. S. Bartlett, “A note on the multiplying factors for various χ² approximations,” J. Roy. Stat. Soc., Ser. B, vol. 16, pp. 296–298, 1954.
  • [4] D. N. Lawley, “Tests of significance of the latent roots of the covariance and correlation matrices,” Biometrika, vol. 43, pp. 128–136, 1956.
  • [5] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part IV: Optimum Array Processing, John Wiley and Sons, Inc., New York, 2002.
  • [6] Eran Fishler and Hagit Messer, “On the use of order statistics for improved detection of signals by the MDL criterion,” IEEE Trans. Signal Process., vol. 48, no. 8, pp. 2242–2247, August 2000.
  • [7] Eran Fishler and H. Vincent Poor, “Estimation of the number of sources in unbalanced arrays via information theoretic criteria,” IEEE Trans. Signal Process., vol. 53, no. 9, pp. 3543–3553, September 2005.
  • [8] Jean-René Larocque, James P. Reilly, and William Ng, “Particle filters for tracking an unknown number of sources,” IEEE Trans. Signal Process., vol. 50, no. 12, pp. 2926–2937, December 2002.
  • [9] N. K. Bansal and M. Bhandary, “Bayes estimation of number of signals,” Ann. Inst. Statist. Math., vol. 43, no. 2, pp. 227–243, 1991.
  • [10] Hirotugu Akaike, “A new look at the statistical model identification,” IEEE Trans. Automatic Control, vol. AC-19, pp. 716–723, 1974.
  • [11] George Casella and Roger L. Berger, Statistical Inference, The Wadsworth & Brooks/Cole Statistics/Probability Series, Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1990.
  • [12] S. Miron, N. Le Bihan, and J. I. Mars, “Quaternion-MUSIC for vector-sensor array processing,” IEEE Trans. Signal Process., vol. 54, no. 4, pp. 1218–1229, April 2006.
  • [13] J. Baik and J. W. Silverstein, “Eigenvalues of large sample covariance matrices of spiked population models,” Journal of Multivariate Analysis, no. 6, pp. 1382–1408, 2006.
  • [14] D. Paul, “Asymptotics of sample eigenstructure for a large dimensional spiked covariance model,” Technical report, Stanford University, 2005.
  • [15] R. R. Nadakuditi, Applied Stochastic Eigen-Analysis, Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, February 2007.