# Sparse Bayesian Learning for DOA Estimation in Heteroscedastic Noise

## Abstract

The paper considers direction of arrival (DOA) estimation from long-term observations in a noisy environment. In such an environment the noise sources may evolve, causing stationary models to fail. Therefore a heteroscedastic Gaussian noise model is introduced, where the variance can vary across observations and sensors. The source amplitudes are assumed independent zero-mean complex Gaussian distributed with unknown variances (i.e., the source powers), motivating stochastic maximum likelihood DOA estimation. The DOAs of plane waves are estimated from multi-snapshot sensor array data using sparse Bayesian learning (SBL), where the noise is estimated across both sensors and snapshots. This SBL approach is more flexible and performs better than high-resolution methods, which cannot estimate the heteroscedastic noise process. An alternative to SBL is simple data normalization, whereby only the phase across the array is utilized. Simulations demonstrate that taking the heteroscedastic noise into account improves DOA estimation.

## 1 Introduction

With long observation times, weak signals can be extracted in a noisy environment. Most analytic treatments assume Gaussian noise with constant variance. For long observation times, however, the noise process is likely to change with time, causing the noise variance to evolve; such a process is called heteroscedastic. While the noise variance is a nuisance parameter that we are not interested in, it still needs to be estimated, or at least included in the processing, in order to obtain an accurate estimate of the weaker signals.

Accounting for the noise variation is certainly important for machine learning [1] and related to robust statistics [3]. Heteroscedastic noise models have been used in, e.g., finance [5] and image processing [6]. In statistical signal processing, the noise has been assumed to vary spatially [7], but spatiotemporally varying noise as considered here has not been studied. The proposed processing could be applied to spatial coherence loss [10] or to wavefront decorrelation, where turbulence causes the wavefront to be incoherent for certain observations (thus noisier). This has led to so-called lucky imaging in astronomy [13] and lucky ranging in ocean acoustics [14], where only the measurements giving good results are used. As a result, an involved hypothesis test is needed to determine which measurements to use. In contrast, we propose to use all measurements.

In applications, a simple way to account for noise power variations is to normalize the data magnitude so that it only contains phase information, as demonstrated for beamforming in seismology [15], noise cross correlation in seismology [17] and acoustics [22], source deconvolution in ocean acoustics [23], and speaker localization [25]. High-resolution beamformers such as MUSIC [27], which rely on a sample covariance matrix, are not likely to perform well for heteroscedastic noise, as the loud noise might dominate the sample covariance matrix. More advanced methods than an eigenvalue decomposition are needed to separate the signal and noise subspaces. We demonstrate that for well-separated sources, normalizing the data to only contain phase information works well.

When the sources are closely spaced, more advanced parametric methods are needed for DOA estimation when the noise power is varying in space and time and the sources are weak. We derive and demonstrate this for the application of multiple measurement vector (MMV, or multi-snapshot) compressive beamforming [28]. We solve the MMV problem using the sparse Bayesian learning (SBL) framework [30] and use the maximum-a-posteriori (MAP) estimate for DOA reconstruction. We assume the source signals to jointly follow a zero-mean multivariate complex normal distribution with unknown power levels. The noise across sensors and snapshots also follows a zero-mean multivariate normal distribution with unknown variances. These assumptions lead to a Gaussian likelihood function.

The corresponding posterior distribution is also Gaussian and already developed SBL approaches solve this well. We base our development on our fast SBL method [32] which we augment to estimate noise variances, potentially as many variances as observations. Standard techniques are based on minimization-majorization [34] and expectation maximization (EM) [30], though not all estimates work well. Instead, we estimate the unknown variances using stochastic maximum likelihood [41], modified to obtain noise estimates even for a single observation.

### 1.1 Heteroscedastic noise observation model

For the $l$-th observation snapshot, we assume the linear model

$$\mathbf{y}_l = \mathbf{A}\mathbf{x}_l + \mathbf{n}_l ,$$

where the dictionary $\mathbf{A}$ is constant and known, and the source vector $\mathbf{x}_l$ contains the physical information of interest. Further, $\mathbf{n}_l$ is additive zero-mean circularly symmetric complex Gaussian noise, which is generated from a heteroscedastic Gaussian process. Due to the circular symmetry of the noise, its phase is uniformly distributed.

We specialize to diagonal covariance matrices, parameterized as

$$\mathbf{\Sigma}_l = \operatorname{diag}(\sigma_{1l}^2, \ldots, \sigma_{Nl}^2) = \sum_{n=1}^{N} \sigma_{nl}^2 \, \mathbf{e}_n \mathbf{e}_n^{\mathsf{T}} ,$$

where $\mathbf{e}_n$ is the $n$-th standard basis vector. Note that the covariance matrices vary over the snapshot index $l$, and we introduce the matrix of all noise standard deviations as

$$\boldsymbol{\sigma} = [\sigma_{nl}] \in \mathbb{R}_{+}^{N \times L} ,$$

where $\mathbb{R}_{+}$ denotes the non-negative real numbers.

We consider three special cases for the a priori knowledge of the noise covariance model (Equation 2), which are expressed as constraints on the matrix of noise standard deviations $\boldsymbol{\sigma}$.

Case I

We assume wide-sense stationarity of the noise in space and time: $\sigma_{nl} = \sigma$ for all sensors $n$ and snapshots $l$. The model is homoscedastic.

Case II

We assume wide-sense stationarity of the noise in space only, i.e., the noise variance for all sensor elements is equal across the array, and it varies over snapshots: $\sigma_{nl} = \sigma_l$ for all $n$. The noise variance is heteroscedastic in time (across snapshots).

Case III

No additional constraints other than (Equation 3). The noise variance is heteroscedastic across both time and space (sensors and snapshots); every $\sigma_{nl}$ is an unknown parameter.

The relation between these noise cases is Case I $\subset$ Case II $\subset$ Case III: the feasible set of each case contains those of the simpler cases, and from these sets the corresponding noise covariance matrices can be constructed.

### 1.2 Array model

Let $x_{ml}$ be the complex source amplitudes, with $m = 1, \ldots, M$ and $l = 1, \ldots, L$, at $M$ hypothetical DOAs and $L$ snapshots for a given frequency. We observe narrowband waves on $N$ sensors for the snapshots $l = 1, \ldots, L$. A linear regression model relates the array data to the source amplitudes as

$$\mathbf{y}_l = \mathbf{A}\mathbf{x}_l + \mathbf{n}_l .$$

The dictionary $\mathbf{A} \in \mathbb{C}^{N \times M}$ contains the array steering vectors for all hypothetical DOAs as columns; the $n$-th element of a steering vector is determined by the sensor's distance to the reference element, the DOA, and the sound speed.

We assume $M \gg N$, and thus (Equation 4) is underdetermined. In the presence of only few stationary sources, the source vector $\mathbf{x}_l$ is $K$-sparse with $K \ll M$. We define the $l$-th active set

$$\mathcal{M}_l = \{ m \mid x_{ml} \neq 0 \} ,$$

and assume $\mathcal{M} = \mathcal{M}_l$ is constant across all snapshots $l$. Also, we define $\mathbf{A}_{\mathcal{M}}$, which contains only the “active” columns of $\mathbf{A}$. In the following, $\|\cdot\|_p$ denotes the vector $p$-norm and $\|\cdot\|_F$ the matrix Frobenius norm.

Similar to other DOA estimators, the number of sources $K$ can be estimated by model order selection criteria or by examining the angular spectrum. The parameter $K$ is required only for the noise power estimate in the SBL algorithm. An inaccurate estimate of $K$ influences the algorithm’s convergence.
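To make the array model concrete, a NumPy sketch follows. The half-wavelength element spacing, the one-degree grid, the array and snapshot counts, and all names (`steering_matrix`, `A`, `X`, `Y`) are assumptions of this illustration, not values fixed by the text.

```python
import numpy as np

def steering_matrix(theta_deg, n_sensors, spacing=0.5):
    """ULA steering matrix with element [n, m] = exp(j*2*pi*spacing*n*sin(theta_m)).

    spacing is in wavelengths (0.5 = half-wavelength array, an assumed geometry).
    """
    theta = np.deg2rad(np.asarray(theta_deg))
    n = np.arange(n_sensors)[:, None]                # sensor index as a column
    return np.exp(2j * np.pi * spacing * n * np.sin(theta)[None, :])

rng = np.random.default_rng(0)
grid = np.linspace(-90, 90, 181)                     # M-point DOA grid (1 degree)
A = steering_matrix(grid, n_sensors=20)              # N x M dictionary, N << M
active = [85, 100, 120]                              # indices of the K true DOAs
X = np.zeros((grid.size, 50), dtype=complex)         # K-sparse source matrix
X[active, :] = rng.standard_normal((3, 50)) + 1j * rng.standard_normal((3, 50))
Y = A @ X                                            # noise-free N x L data
```

Only three rows of `X` are nonzero, so the active set is constant across all snapshots, as assumed in the text.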

### 1.3 Prior on the sources

We assume that the complex source amplitudes are independent both across snapshots and across DOAs and follow a zero-mean circularly symmetric complex Gaussian distribution with DOA-dependent variance $\gamma_m$, $m = 1, \ldots, M$,

i.e., the source vector at each snapshot is multivariate Gaussian with the potentially singular covariance matrix

$$\mathbf{\Gamma} = \operatorname{diag}(\boldsymbol{\gamma}) = \operatorname{diag}(\gamma_1, \ldots, \gamma_M) ,$$

as $\operatorname{rank}(\mathbf{\Gamma}) = K$ (typically $K \ll M$). Note that the diagonal elements of $\mathbf{\Gamma}$, i.e., the $\gamma_m$, represent source powers and thus $\gamma_m \geq 0$. When the variance $\gamma_m = 0$, then $x_{ml} = 0$ with probability 1. The sparsity of the model is thus controlled by the parameter vector $\boldsymbol{\gamma}$, and the active set is equivalently

$$\mathcal{M} = \{ m \mid \gamma_m > 0 \} .$$

The SBL algorithm ultimately estimates $\boldsymbol{\gamma}$ rather than the complex source amplitudes $\mathbf{X}$. This amounts to a significant reduction in the degrees of freedom, resulting in a low variance of the DOA estimates.

### 1.4 Stochastic likelihood

We here derive the well-known stochastic maximum likelihood function [44]. Given the linear model with Gaussian source and noise, the array data $\mathbf{y}_l$ is Gaussian, with the covariance for each snapshot given by

$$\mathbf{\Sigma}_{\mathbf{y}_l} = \mathbf{A}\mathbf{\Gamma}\mathbf{A}^{\mathsf{H}} + \mathbf{\Sigma}_l .$$

The probability density function of $\mathbf{y}_l$ is thus given by

$$p(\mathbf{y}_l) = \frac{\exp\!\big(-\mathbf{y}_l^{\mathsf{H}} \mathbf{\Sigma}_{\mathbf{y}_l}^{-1} \mathbf{y}_l\big)}{\pi^{N} \det \mathbf{\Sigma}_{\mathbf{y}_l}} .$$

The $L$-snapshot log-likelihood for estimating $\boldsymbol{\gamma}$ and the noise variances is

$$\log p(\mathbf{Y}; \boldsymbol{\gamma}, \boldsymbol{\sigma}) = -\sum_{l=1}^{L} \Big( \log \det \mathbf{\Sigma}_{\mathbf{y}_l} + \mathbf{y}_l^{\mathsf{H}} \mathbf{\Sigma}_{\mathbf{y}_l}^{-1} \mathbf{y}_l \Big) + \text{const} .$$

This likelihood function is identical to the Type II likelihood function (evidence) in standard SBL [36], which is obtained by treating $\boldsymbol{\gamma}$ as a hyperparameter. The Type II likelihood is obtained by integrating the likelihood function over the complex source amplitudes, cf. (29) in [32]. The stochastic maximum likelihood approach is used here as it is more direct.
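A direct numerical evaluation of this stochastic log-likelihood can be sketched as follows, here for a per-snapshot noise variance (Noise Case II); the function name and interface are assumptions of this sketch.

```python
import numpy as np

def neg_log_likelihood(Y, A, gamma, sigma2):
    """Negative stochastic log-likelihood (up to a constant).

    Y: N x L data, A: N x M dictionary, gamma: M source powers,
    sigma2: length-L per-snapshot noise variances (Noise Case II).
    Per snapshot l: Sigma_l = A diag(gamma) A^H + sigma2[l] I, and the
    contribution is log det(Sigma_l) + y_l^H Sigma_l^{-1} y_l.
    """
    N, L = Y.shape
    S_signal = (A * gamma) @ A.conj().T              # A Gamma A^H, fixed over l
    nll = 0.0
    for l in range(L):
        Sigma = S_signal + sigma2[l] * np.eye(N)
        _, logdet = np.linalg.slogdet(Sigma)         # stable log-determinant
        y = Y[:, l]
        nll += logdet + np.real(y.conj() @ np.linalg.solve(Sigma, y))
    return nll
```

For `gamma = 0` and unit noise variance this reduces to the squared Frobenius norm of the data, a useful sanity check.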

The parameter estimates $\hat{\boldsymbol{\gamma}}$ and $\hat{\boldsymbol{\sigma}}$ are obtained by maximizing the likelihood,

$$(\hat{\boldsymbol{\gamma}}, \hat{\boldsymbol{\sigma}}) = \operatorname*{arg\,max}_{\boldsymbol{\gamma} \geq \mathbf{0},\ \boldsymbol{\sigma} \in \mathcal{F}} \ \log p(\mathbf{Y}; \boldsymbol{\gamma}, \boldsymbol{\sigma}) ,$$

where $\mathcal{F}$ is the feasible set of noise variances corresponding to the noise case (I, II, or III; see Section 1.1). The likelihood function is similar to the ones derived for SBL and LIKES [34]. If $\boldsymbol{\gamma}$ and $\boldsymbol{\sigma}$ are known, then the MAP estimate of the source amplitudes is the posterior mean, with associated posterior covariance [47],

$$\hat{\mathbf{x}}_l = \mathbf{\Gamma}\mathbf{A}^{\mathsf{H}} \mathbf{\Sigma}_{\mathbf{y}_l}^{-1} \mathbf{y}_l , \qquad \mathbf{\Sigma}_{\mathbf{x}_l} = \mathbf{\Gamma} - \mathbf{\Gamma}\mathbf{A}^{\mathsf{H}} \mathbf{\Sigma}_{\mathbf{y}_l}^{-1} \mathbf{A}\mathbf{\Gamma} .$$

The diagonal elements of $\mathbf{\Gamma}$, i.e., the $\gamma_m$, control the row-sparsity of $\mathbf{X}$: as $\gamma_m \to 0$, the corresponding $m$-th row of the posterior mean becomes zero.

### 1.5 LASSO versus SBL

Both LASSO and SBL use the linear model with complex zero-mean Gaussian random noise, but they differ in the modeling of the source matrix $\mathbf{X}$.

The LASSO approach assumes the entries of $\mathbf{X}$ to be a priori random, with i.i.d. uniformly distributed phases and Laplace-like prior amplitudes,

Thus only the summed amplitudes are Laplacian. The elements of $\mathbf{X}$ are unknown and must be estimated. The LASSO approach uses the conditional likelihood (Type I) and applies Bayes' rule with the prior, giving the MAP estimate

The LASSO approach estimates the realization of $\mathbf{X}$ for each snapshot. The number of parameters to be estimated for the LASSO approach thus grows linearly with the number of snapshots.

The SBL approach, on the other hand, assumes that every column of $\mathbf{X}$ is random with the same complex zero-mean Gaussian a priori distribution. The SBL approach uses the Type II likelihood and estimates $\boldsymbol{\gamma}$, but not the realization of $\mathbf{X}$. Therefore, the number of parameters to be estimated for the SBL approach is independent of the number of snapshots, and the quality of the estimate improves faster with the number of snapshots than for LASSO.

For the heteroscedastic noise model, a different noise covariance matrix could be included in the LASSO data fit for each snapshot. However, since the number of parameters to be estimated then grows with the number of snapshots, this seems less attractive than using SBL.

### 1.6 Pre-whitening

The purpose of this section is to motivate the empirical evidence [15][26] that phase-only processing might provide improved estimates over processing that also uses amplitudes. This is most likely to happen when sources are not closely spaced and at low SNR, as demonstrated in the examples in Section 5.

Let us introduce the factorization $\mathbf{\Sigma}_l = \mathbf{W}_l \mathbf{W}_l^{\mathsf{H}}$, where $\mathbf{W}_l$ is a square and non-singular matrix. For our particular diagonal setup, $\mathbf{W}_l = \operatorname{diag}(\sigma_{1l}, \ldots, \sigma_{Nl})$. The matrix $\mathbf{W}_l^{-1}$ is useful for pre-whitening the sensor data. The corresponding whitened sensor data and dictionary matrix are

$$\tilde{\mathbf{y}}_l = \mathbf{W}_l^{-1} \mathbf{y}_l , \qquad \tilde{\mathbf{A}}_l = \mathbf{W}_l^{-1} \mathbf{A} .$$

For known diagonal noise covariance, the above means we have to normalize each row of the data with the corresponding noise standard deviation $\sigma_{nl}$; the whitened noise then has unit variance, and all entries are identically distributed.

If the noise covariance matrices are not known, they can be estimated, as we will show later on. More specifically, at low SNR, Noise Cases I, II, and III lead to corresponding noise variance estimates, from which the pre-whitening matrices can then be computed.

Empirically, it has been found that applying pre-whitening to the data only (the dictionary is not whitened) is effective in finding the strongest DOA.

For Noise Case I, pre-whitening does not play a role, and the conventional beamformer power spectrum at DOA $\theta_m$ is

$$P(\theta_m) = \mathbf{w}_m^{\mathsf{H}} \mathbf{S}_{\mathbf{y}} \mathbf{w}_m , \qquad \mathbf{w}_m = \frac{\mathbf{a}_m}{\|\mathbf{a}_m\|_2} ,$$

where the sample covariance matrix is

$$\mathbf{S}_{\mathbf{y}} = \frac{1}{L} \sum_{l=1}^{L} \mathbf{y}_l \mathbf{y}_l^{\mathsf{H}} .$$

For Noise Cases II and III, on the other hand, we work with the pre-whitened data. In those cases, the conventional beamformer leads to the power spectra of CBF2 and CBF-Phase, respectively, which can be formulated as

where the whitened sample covariance matrix is

The weighting is given by the corresponding whitened steering vectors. For Case III, only the phase of the data is used, which results in phase-only processing. As demonstrated in Section 5.2, this simple pre-whitening can improve the DOA performance at low SNR.
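The pre-whitening step itself reduces to an element-wise division, which a short sketch makes explicit (all names assumed); note how Noise Case III with the data magnitudes as the standard-deviation estimate collapses to phase-only processing.

```python
import numpy as np

def prewhiten(Y, noise_std):
    """Pre-whiten the N x L data by estimated noise standard deviations.

    Broadcasting covers the three noise cases:
      Case I:   scalar        -> overall scaling (no effect on the beam shape)
      Case II:  shape (1, L)  -> each snapshot scaled by its own std
      Case III: shape (N, L)  -> entry-wise; with noise_std = |Y| this is
                                 phase-only processing, Y / |Y|.
    """
    return Y / noise_std

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
Yw = prewhiten(Y, np.abs(Y))       # Case III at low SNR: keep only the phase
```

After the Case III division every entry has unit magnitude, so only the phase across the array is retained.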

## 2 Source power estimation

Let us now focus on the SBL algorithm solving the likelihood maximization. The algorithm iterates between the source power estimates derived in this section and the noise variance estimates computed in Section 3. After detailing these estimation procedures, the full algorithm is summarized in Section 4.

We impose the diagonal structure $\mathbf{\Gamma} = \operatorname{diag}(\boldsymbol{\gamma})$, in agreement with the source prior, and form derivatives of the log-likelihood with respect to the diagonal elements $\gamma_m$, cf. [44]. Using

the derivative of the log-likelihood is formulated as

For the solution to (Equation 11), we impose the necessary condition that this derivative vanishes. To obtain an iterative equation in $\gamma_m$, we multiply the first term in the above equation with a convergence-control factor, whereby

Assuming the noise variances and the remaining parameters given (from previous iterations or initialization) and forcing the derivative to zero, we obtain the following fixed point iteration [48] for the $\gamma_m$:

We use a fixed value of the convergence-control exponent, but have not carefully tested for optimal values; the best choice depends on many factors, such as which noise estimate is used and the closeness of the DOAs. Particular values of the exponent recover the update equations used in [47] and in [32].
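For illustration only, a generic SBL-type fixed-point update of this form can be coded as below. The exact exponent and normalization of the paper's update are not reproduced here, so treat this as a sketch of the iteration structure, not the published algorithm.

```python
import numpy as np

def gamma_update(Y, A, gamma, sigma2, b=0.5):
    """One generic SBL-type fixed-point update of the source powers gamma.

    Sketch of the family of updates (cf. [32], [47]):
        gamma_m <- gamma_m * ( a_m^H S^-1 S_y S^-1 a_m / a_m^H S^-1 a_m )^b
    with S = A diag(gamma) A^H + sigma2 I, sample covariance S_y, and
    exponent b as the (assumed) convergence-control parameter.
    """
    N, L = Y.shape
    S_y = (Y @ Y.conj().T) / L
    Sigma = (A * gamma) @ A.conj().T + sigma2 * np.eye(N)
    B = np.linalg.solve(Sigma, A)                              # Sigma^-1 A
    num = np.real(np.einsum('nm,nk,km->m', B.conj(), S_y, B))  # a^H S^-1 S_y S^-1 a
    den = np.real(np.einsum('nm,nm->m', A.conj(), B))          # a^H S^-1 a
    return gamma * (num / np.maximum(den, 1e-12)) ** b
```

The multiplicative form keeps the powers non-negative, so sparsity arises as individual `gamma_m` are driven toward zero over the iterations.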

## 3 Noise variance estimation

While there has been more focus on estimating the DOAs, the noise is an important part of the physical system, and a correct estimate is needed for good convergence properties. In SBL, the noise variance controls the sharpness of the peaks in the spectrum, with higher noise levels giving broader peaks. Thus, as we optimize the DOAs, we expect the noise levels to decrease and the spectrum to become sharper.

In this section we estimate the noise variance for the three noise cases in Section 1.1. In Sections 3.1–3.3, we assume the support of the sources, i.e., the active set $\mathcal{M}$, is known.

### 3.1 Noise estimate, Case I

Under Noise Case I, where $\mathbf{\Sigma}_l = \sigma^2 \mathbf{I}_N$ with $\mathbf{I}_N$ the identity matrix of size $N$, stochastic maximum likelihood [38] can provide an asymptotically efficient estimate of $\sigma^2$ if the set of active DOAs is known.

Let $\mathbf{\Gamma}_{\mathcal{M}}$ be the covariance matrix of the active sources obtained above, with corresponding active steering matrix $\mathbf{A}_{\mathcal{M}}$, which maximizes (Equation 10). The corresponding data covariance matrix is

$$\mathbf{\Sigma}_{\mathbf{y}} = \mathbf{A}_{\mathcal{M}} \mathbf{\Gamma}_{\mathcal{M}} \mathbf{A}_{\mathcal{M}}^{\mathsf{H}} + \sigma^2 \mathbf{I}_N .$$

Note that for Noise Case I, the data covariance matrices $\mathbf{\Sigma}_{\mathbf{y}_l}$ are identical for all snapshots. Following [42], we continue from (Equation 21),

for all active sources. Eq. (Equation 25) then simplifies to

To obtain Jaffer’s condition below, we impose the following additional constraints

Together, the conditions (Equation 26) and (Equation 27) give Jaffer’s condition ([42], Eq. (6)), i.e.,

which we enforce at the optimal solution. Jaffer’s condition follows from allowing arbitrary correlations among the source signals, i.e., when the matrix $\mathbf{\Gamma}_{\mathcal{M}}$ is not restricted to be diagonal. Substituting (Equation 24) into (Equation 28) gives

Let us then define the projection matrix onto the subspace spanned by the active steering vectors,

$$\mathbf{P} = \mathbf{A}_{\mathcal{M}} \big( \mathbf{A}_{\mathcal{M}}^{\mathsf{H}} \mathbf{A}_{\mathcal{M}} \big)^{-1} \mathbf{A}_{\mathcal{M}}^{\mathsf{H}} .$$

Left-multiplying with $(\mathbf{I} - \mathbf{P})$ and right-multiplying with $(\mathbf{I} - \mathbf{P})$, we obtain

Evaluating the trace, using $\operatorname{tr}(\mathbf{I} - \mathbf{P}) = N - K$ and $\mathbf{\Sigma}_{\mathbf{y}} \approx \mathbf{S}_{\mathbf{y}}$, gives

$$\hat{\sigma}^2 = \frac{\operatorname{tr}\big( (\mathbf{I} - \mathbf{P}) \mathbf{S}_{\mathbf{y}} \big)}{N - K} ,$$

where we have defined the sample covariance matrix $\mathbf{S}_{\mathbf{y}} = \frac{1}{L} \sum_{l=1}^{L} \mathbf{y}_l \mathbf{y}_l^{\mathsf{H}}$.

The above approximation motivates the noise power estimate for Noise Case I, which is error-free if the sample covariance equals the ensemble covariance, unbiased, consistent (its variance also tends to zero as the number of snapshots increases [49]), and asymptotically efficient, as it approaches the CRLB [50]. Note that the Noise Case I estimate is valid for any number of snapshots, even for just one snapshot.
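The Case I estimate has a compact implementation: project out the active steering vectors and average the residual power over the remaining $N-K$ dimensions. Names and the call signature are assumptions of this sketch.

```python
import numpy as np

def noise_var_case1(Y, A_active):
    """Noise Case I: sigma2 = tr[(I - P) S_y] / (N - K).

    P is the orthogonal projector onto the span of the K active steering
    vectors; S_y is the sample covariance of the N x L data Y.
    """
    N, L = Y.shape
    K = A_active.shape[1]
    P = A_active @ np.linalg.pinv(A_active)   # projector onto span(A_active)
    S_y = (Y @ Y.conj().T) / L
    return np.real(np.trace((np.eye(N) - P) @ S_y)) / (N - K)
```

On noise-only data with unit per-entry variance, the estimate converges to 1 as the number of snapshots grows, in line with the unbiasedness and consistency claims above.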

### 3.2 Noise estimate, Case II

For Noise Case II, where $\mathbf{\Sigma}_l = \sigma_l^2 \mathbf{I}_N$, we apply the Case I estimate for each snapshot individually, leading to

$$\hat{\sigma}_l^2 = \frac{\big\| (\mathbf{I} - \mathbf{P}) \mathbf{y}_l \big\|_2^2}{N - K} .$$

Several alternative estimators for the noise variance have been proposed based on EM [36]. For a comparative illustration in Section 5.4, we will use the iterative noise estimate from [36], given by

where the posterior mean and covariance are as given above. Empirically, EM noise estimates of this form significantly underestimate the true noise variance in our applications.

### 3.3 Noise estimate, Case III

Let us start from the definition of the noise covariance,

$$\mathbf{\Sigma}_l = \mathrm{E}\big[ \mathbf{n}_l \mathbf{n}_l^{\mathsf{H}} \big] = \mathrm{E}\big[ (\mathbf{y}_l - \mathbf{A}\mathbf{x}_l)(\mathbf{y}_l - \mathbf{A}\mathbf{x}_l)^{\mathsf{H}} \big] .$$

This motivates plugging in the single-observation signal estimate for the active (non-zero) entries in $\mathbf{x}_l$. This estimate is based on the single observation $\mathbf{y}_l$ and the projection matrix $\mathbf{P}$, giving the rank-1 estimate

Since the signal estimate maximizes the estimated signal power, this noise covariance estimate is biased and the noise level is likely underestimated.

Since we assume the noise is independent across sensors, all off-diagonal elements of the noise covariance are known to be zero. With this constraint in mind, we modify the estimate by retaining only its diagonal,

The estimate is demanding, as we obtain an estimate of the noise variance for every one of the $NL$ complex-valued observations in $\mathbf{Y}$. Note that the rank-1 estimate is not invertible, whereas the diagonal constraint leads to a non-singular estimate with high probability (it is singular only if an element of the residual is zero). As a result, the data covariance matrix used in the estimation is likely invertible.

On the other hand, an overestimate of the noise is easily obtained by assuming no active sources, which is equivalent to setting $\mathbf{P} = \mathbf{0}$, resulting in

or

This can be shown to be the maximum likelihood estimate for no sources ($K = 0$) or very low power sources.
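A sketch of the Case III idea (all names assumed): per-entry standard deviations from the single-snapshot residual, which reduces to the magnitude-based overestimate when no sources are assumed active.

```python
import numpy as np

def noise_std_case3(Y, A_active):
    """Case III sketch: entry-wise noise std from single-snapshot residuals.

    The residual (I - P) y_l removes the estimated signal; its entry-wise
    magnitudes serve as (biased, likely low) std estimates. With no active
    sources (P = 0) this reduces to the overestimate sigma_nl = |y_nl|.
    """
    N = Y.shape[0]
    if A_active.shape[1] == 0:                 # no sources: overestimate |Y|
        return np.abs(Y)
    P = A_active @ np.linalg.pinv(A_active)    # projector onto span(A_active)
    return np.abs((np.eye(N) - P) @ Y)
```

For pure-signal data in the span of the active steering vectors the residual vanishes, illustrating why this estimate tends to be low when the signal estimate absorbs part of the noise.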

## 4 SBL Algorithm

The complete SBL algorithm is summarized in Table 1. The same algorithm is valid for Noise Cases I, II, and III (at high SNR). Given the observed data $\mathbf{Y}$, we iteratively update the posterior mean using the current source power and noise estimates. The data covariance matrix inverse is computed directly numerically. The source powers $\gamma_m$ are updated with the fixed point iteration from Section 2. For the initialization of $\boldsymbol{\gamma}$ we use the conventional beamformer (CBF) power.

Based on the corresponding noise case, the matching estimator from Section 3 is used to estimate the noise. The noise is initialized using the no-source estimate, which provides an overestimate of the noise variance.

The convergence measure is the relative change in the estimated total source power,

The algorithm stops when the convergence measure falls below a threshold, and the output is the active set, from which all source parameters are computed.
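The iteration-and-stopping logic can be sketched generically; the update function, the threshold, and the exact form of the convergence measure (here the relative $\ell_1$ change of the power vector) are placeholders of this sketch.

```python
import numpy as np

def run_until_converged(gamma0, step, eps_min=1e-3, max_iter=500):
    """Iterate a power update until the relative change drops below eps_min.

    'step' is any function gamma -> gamma_new. The convergence measure is
    assumed here to be ||gamma_new - gamma||_1 / ||gamma||_1; since the
    gammas are non-negative, the denominator is the total source power.
    """
    gamma = np.asarray(gamma0, dtype=float)
    for it in range(max_iter):
        gamma_new = step(gamma)
        eps = np.sum(np.abs(gamma_new - gamma)) / max(np.sum(np.abs(gamma)), 1e-30)
        gamma = gamma_new
        if eps < eps_min:
            break
    return gamma, it + 1
```

A contraction map with a known fixed point makes a quick sanity check of the stopping rule.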

## 5 Examples

When the noise process is stationary and only one source is active, the peak of the CBF spectrum provides the optimal estimate of the source location. With heteroscedastic noise this may not hold true. We propose to use a robust CBF (Section 5.2) or to find the DOA with SBL, which takes the heteroscedastic noise into account (Section 5.3). With multiple sources and heteroscedastic noise we use SBL (Section 5.4).

### 5.1 Simulation setup

In the analysis of seismic data, the noise for each snapshot was observed to be log-normal distributed [52]. Noise has also been modeled with extreme-value distributions [53]. In the simulations here, the noise follows a normal-uniform hierarchical model: the noise is complex zero-mean Gaussian, with a standard deviation that is uniformly distributed over two decades. Three noise cases are simulated: (a) Case I: constant noise standard deviation over snapshots and sensors, (b) Case II: the standard deviation changes across snapshots, and (c) Case III: the standard deviation changes across both snapshots and sensors.
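The simulated noise model can be sketched as follows; the exact two-decade range of the standard deviation (here assumed to be $10^{-1}$ to $10^{1}$) and all names are assumptions of this illustration.

```python
import numpy as np

def heteroscedastic_noise(N, L, case, rng):
    """Draw complex Gaussian noise under a normal-uniform hierarchical model.

    The std spans two decades (assumed sigma = 10**u with u ~ U(-1, 1)):
      Case I:   one sigma for all sensors and snapshots
      Case II:  one sigma per snapshot, constant across the array
      Case III: an independent sigma per sensor and snapshot
    """
    if case == 1:
        sigma = 10.0 ** rng.uniform(-1, 1) * np.ones((N, L))
    elif case == 2:
        sigma = np.tile(10.0 ** rng.uniform(-1, 1, size=(1, L)), (N, 1))
    else:
        sigma = 10.0 ** rng.uniform(-1, 1, size=(N, L))
    # unit-power complex Gaussian, scaled entry-wise by sigma
    noise = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
    return sigma * noise, sigma
```

Returning `sigma` alongside the noise makes it easy to compare the estimated standard deviations against the true ones, as done in the figures of this section.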

A realization of the noise standard deviation is shown for Noise Cases II (Fig. ?a) and III (Fig. ?b) for sensors and snapshots. The corresponding histogram is presented in Fig. ?c.

### 5.2 Single DOA with CBF and MUSIC

A single source is present at a fixed DOA. The complex source amplitude is stochastic, and there is additive heteroscedastic Gaussian noise, with the SNR varied over a wide range. The sensor array has half-wavelength element spacing, multiple snapshots are processed, and the angle space is divided into a fine grid. The single-snapshot array signal-to-noise ratio (SNR) is

We first compute the beam spectra using CBF, CBF2, and CBF-Phase, which share the beamformer form but use different weightings. When the noise is homoscedastic (constant standard deviation), the beam spectra for the three processors behave similarly (first row in Fig. ?). For heteroscedastic noise, CBF2 and CBF-Phase give much better discrimination between signal and noise; see Fig. ?, middle (Noise Case II) and bottom row (Noise Case III).

The root mean squared error (RMSE) of the DOA estimates over many runs with random noise realizations is used for evaluating the algorithms,

where $\theta_k$ is the true DOA and $\hat{\theta}_k$ is the estimated DOA for the $k$-th source when $K$ sources are present.

The SNR curves (Fig. ?) demonstrate the increased robustness of CBF2 and CBF-Phase. Due to the heteroscedastic noise, MUSIC performs worse than CBF for a single source. CBF-Phase diverges at an SNR 15–20 dB lower than CBF for Noise Cases II and III.

### 5.3 Single DOA estimation with SBL

We use the following SBL methods, which share the source power update but differ in the noise variance estimate:

SBL:

Solves Case I using standard SBL, with the noise variance estimated as in Section 3.1.

SBL2:

Solves Case II, with a noise variance estimated for each snapshot as in Section 3.2.

SBL3:

Solves Case III, with per-sensor, per-snapshot noise standard deviations estimated as in Section 3.3.

In addition to these methods we also use basis pursuit (BP) as implemented in [31].

For Noise Case I, all the methods fail near the same SNR, Figure 1a. When the noise is heteroscedastic across snapshots, CBF, BP, and SBL fail early. Since both SBL2 and SBL3 correctly model the noise, they perform best for Noise Case II, Figure 1b. CBF-Phase also performs well. For heteroscedastic Noise Case III, SBL3 fails last and, as before, CBF-Phase also performs well; SBL3 fails 15 dB later than any other method. This demonstrates the usefulness of accurate noise models in DOA processing.

The histograms (Figure 2) of the DOA estimates (location of the peak in the power spectrum) at a fixed SNR for Noise Case III demonstrate that, when the heteroscedastic noise is accounted for, the histograms concentrate near the true DOA.

An example statistic of the heteroscedastic noise standard deviation is shown in Fig. ?. The standard deviation for each sensor takes one of two values. SBL3 estimates the standard deviation for each sensor and snapshot. The average noise estimate in Fig. ?b resembles the true noise (Fig. ?a) well, whereas the sample estimate (Fig. ?c) has high variance. Fig. ?d plots the mean, across simulations and snapshots, of the estimated standard deviation. On average, the noise estimate is close to the true noise.

### 5.4 Three DOA estimation with SBL

Now consider three sources with differing powers, see Figure 3. Relative to the single-source case, CBF-Phase performs significantly worse than SBL3, as two of the sources fall within the same CBF main lobe.

The localization ability of the algorithms can also be gauged from the histogram of the top three peaks, see Figure 4. Since SBL3 accounts for the heteroscedastic Noise Case III, its histogram is concentrated near the true DOAs.

#### Noise estimate convergence

The performance of the SBL methods is strongly related to the accuracy of the noise estimates. For the $l$-th snapshot, the true noise power (Noise Case II) is

The estimated noise power deviates randomly from this true value. We also consider the noise estimate based on the EM method. Figure 5 compares the two noise estimates. The evolution of the histograms of the relative noise variance with iterations is shown in Figure 5b. The mean of the ratio of estimated to true noise variance is close to 1 for SBL2 but is much lower for the EM noise estimate; this likely causes SBL2 with the EM noise estimate to fail 5 dB earlier.

#### Number of snapshots

The RMSE versus the number of snapshots for Noise Case III (Figure 6) shows that SBL3 is most accurate, with CBF-Phase following.

#### Noise distribution

For Noise Case III, we use just one observation to estimate each standard deviation for SBL3; thus the individual estimates are noisy, see Figure 7. The true noise standard deviation distribution is shown in Figure 7a. The distribution of the deviation from the true standard deviation (Figure 7c) is well-centered (mean 0.007).

## 6 Conclusion

Likelihood-based methods for single and multiple DOA estimation from long-term observations corrupted by nonstationary additive noise are discussed. In such a setting, DOA estimators based on a stationary noise model perform poorly. Therefore a heteroscedastic Gaussian noise model is introduced, where the noise variance varies across sensors and snapshots. We develop sparse Bayesian learning (SBL) approaches to estimate source powers, source DOAs, and the noise variance parameters.

Simulations show that normalizing the array data magnitude (such that only the phase information is retained) is simple and useful for estimating a single source in heteroscedastic Gaussian noise. For the estimation of (multiple) closely spaced sources, a problem-specific SBL approach gives a much lower RMSE in the DOA estimates.

### References

1. Machine learning: a probabilistic perspective.
K. P. Murphy. MIT press, 2012.
2. Pattern recognition and machine learning.
C. M. Bishop. Springer, 2006.
3. Robust statistics.
P. J. Huber. Springer, 2011.
4. Robust statistics.
R. Maronna, D. Martin, and V. Yohai. John Wiley, Chichester., 2006.
5. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation.
R. F. Engle. Econometrica, pages 987–1007, 1982.
6. Camera model identification based on the heteroscedastic noise model.
T. H. Thai, R. Cogranne, and F. Retraint. IEEE Trans. Image Proc., 23(1):250–263, 2014.
7. Maximum likelihood array processing in spatially correlated noise fields using parameterized signals.
M. Viberg, P. Stoica, and B. Ottersten. IEEE Trans Signal Proc., 45(4):996–1004, 1997.
8. Stochastic maximum-likelihood doa estimation in the presence of unknown nonuniform noise.
C. E. Chen, F. Lorenzelli, R.E Hudson, and K. Yao. IEEE Trans. Signal Proc., 56(7):3038–3044, 2008.
9. Maximum likelihood direction finding in spatially colored noise fields using sparse sensor arrays.
T. Li and A. Nehorai. IEEE Trans. Signal Proc., 59(3):1048–1062, 2011.
10. Line array performance when the signal coherence is spatially dependent.
H. Cox. J Acoust. Soc. Am., 54(6):1743–1746, 1973.
11. Direction of arrival estimation by eigenstructure methods with imperfect spatial coherence of wave fronts.
A. Paulraj and T. Kailath. J Acoust. Soc. Am., 83(3):1034–1040, 1988.
12. Sub-antenna processing for coherence loss in underwater direction of arrival estimation.
R. Lefort, R. Emmetière, S. Bourmani, G. Real, and A. Drémeau. J Acoust. Soc. Am., 142(4):2143–2154, 2017.
13. Lucky imaging: high angular resolution imaging in the visible from the ground.
N. M. Law, C. D. Mackay, and J. E. Baldwin. Astronomy & Astrophysics, 446(2):739–745, 2006.
14. Lucky ranging with towed arrays in underwater environments subject to non-stationary spatial coherence loss.
H. Ge and I. P. Kirsteins. In IEEE Intl. Conf. Acoust., Speech and Signal Proc. (ICASSP), pages 3156–3160, 2016.
15. When katrina hit california.
P. Gerstoft, M. C. Fehler, and K. G. Sabra. Geophys. Res. Lett., 33(17), 2006.
“Weather bomb” induced seismic signals.
P. Gerstoft and P. D. Bromirski. Science, 353(6302):869–870, 2016.
17. P-waves from cross-correlation of seismic noise.
P. Roux, K. G. Sabra, P. Gerstoft, W. A. Kuperman, and M. C. Fehler. Geophys. Res. Lett., 32(19), 2005.
18. Phase velocities from seismic noise using beamforming and cross correlation in Costa Rica and Nicaragua.
N. Harmon, P. Gerstoft, C. A. Rychert, G. A. Abers, M. Salas de La Cruz, and K. M. Fischer. Geophy. Res. Lett., 35(19), 2008.
19. Origin of deep ocean microseisms by using teleseismic body waves.
M. Landès, F. Hubans, N. M Shapiro, A. Paul, and M. Campillo. J. Geophys Res: Solid Earth, 115(B5), 2010.
20. Retrieval of moho-reflected shear wave arrivals from ambient seismic noise.
Z. Zhan, S. Ni, D. V Helmberger, and R. W. Clayton. Geophys. J. Int., 182(1):408–420, 2010.
21. On estimating attenuation from the amplitude of the spectrally whitened ambient seismic field.
C. Weemstra, W. Westra, R. Snieder, and L. Boschi. Geophys. J. Int., 197(3):1770–1788, Apr 2014.
22. Passive fathometer processing.
P. Gerstoft, W. S. Hodgkiss, M. Siderius, C.-F. Huang, and C.H. Harrison. J. Acoust. Soc. Am., 123(3):1297–1305, 2008.
23. Blind deconvolution in ocean waveguides using artificial time reversal.
K.G. Sabra and D.R. Dowling. J. Acoust. Soc. Am., 116(1):262–271, 2004.
24. Ray-based blind deconvolution in ocean sound channels.
K.G. Sabra, H-C. Song, and D.R. Dowling. J. Acoust. Soc. Am., 127(2):EL42–EL47, 2010.
25. Speaker tracking using recursive em algorithms.
O. Schwartz and S. Gannot. IEEE Trans. Audio, Speech, Language Proc., 22(2):392–402, 2014.
26. Tree-based recursive expectation-maximization algorithm for localization of acoustic sources.
Y. Dorfan and S. Gannot. IEEE Trans. Audio, Speech and Lang. Proc., 23(10):1692–1703, 2015.
27. Optimum Array Processing, chapter 1–10.
H.L. Van Trees. Wiley-Interscience, New York, 2002.
28. A sparse signal reconstruction perspective for source localization with sensor arrays.
D. Malioutov, M. Çetin, and A. S. Willsky. IEEE Trans. Signal Process., 53(8):3010–3022, 2005.
29. Compressive beamforming.
A. Xenaki, P. Gerstoft, and K. Mosegaard. J. Acoust. Soc. Am., 136(1):260–271, 2014.
30. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem.
D. P. Wipf and B.D. Rao. IEEE Trans. Signal Proc., 55(7):3704–3716, 2007.
31. Multiple and single snapshot compressive beamforming.
P. Gerstoft, A. Xenaki, and C.F. Mecklenbräuker. J. Acoust. Soc. Am., 138(4):2003–2014, 2015.
32. Multisnapshot sparse Bayesian learning for DOA.
P. Gerstoft, C. F. Mecklenbräuker, A. Xenaki, and S. Nannuru. IEEE Signal Proc. Lett., 23(10):1469–1473, 2016.
33. Sparse Bayesian learning with uncertainty models and multiple dictionaries.
S. Nannuru, K.L Gemba, P. Gerstoft, W.S. Hodgkiss, and C.F. Mecklenbräuker. arXiv:1704.00436, 2017.
34. SPICE and LIKES: Two hyperparameter-free methods for sparse-parameter estimation.
P. Stoica and P. Babu. Signal Proc., 92(7):1580–1590, 2012.
35. Beamforming using the relevance vector machine.
D. P. Wipf and S. Nagarajan. In Proc. 24th Int. Conf. Machine Learning, New York, NY, USA, 2007.
36. Sparse Bayesian learning for basis selection.
D. P. Wipf and B.D. Rao. IEEE Trans. Signal Proc, 52(8):2153–2164, 2004.
37. Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning.
Z. Zhang and B. D Rao. IEEE J Sel. Topics Signal Proc.,, 5(5):912–926, 2011.
38. An efficient maximum likelihood method for direction-of-arrival estimation via sparse Bayesian learning.
Z.-M. Liu, Z.-T. Huang, and Y.-Y. Zhou. IEEE Trans. Wireless Comm., 11(10):1–11, Oct. 2012.
39. Multiple-measurement vector based implementation for single-measurement vector sparse Bayesian learning with reduced complexity.
J. A. Zhang, Z. Chen, P. Cheng, and X. Huang. Signal Proc., 118:153–158, 2016.
40. Type I and type II Bayesian methods for sparse signal recovery using scale mixtures.
R. Giri and B. Rao. IEEE Trans Signal Proc. Signal Proc., 64(13):3418–3428, July 2016.
41. Source-parameter estimation by approximate maximum likelihood and nonlinear regression.
J.F. Böhme. IEEE J. Oceanic Eng., 10(3):206–212, 1985.
42. Maximum likelihood direction finding of stochastic sources: A separable solution.
A.G. Jaffer. In IEEE Int. Conf. on Acoust., Speech, and Sig. Proc. (ICASSP-88), volume 5, pages 2893–2896, 1988.
43. On the concentrated stochastic likelihood function in array processing.
P. Stoica and A. Nehorai. Circuits Syst. Signal Proc., 14(5):669–674, 1995.
44. Estimation of spectral parameters of correlated signals in wavefields.
J. F. Böhme. Signal Proc., 11(4):329–337, 1986.
45. Two decades of array signal processing research: the parametric approach.
H. Krim and M. Viberg. IEEE Signal Proc. Mag., 13(4):67–94, 1996.
46. Maximum likelihood array processing for stochastic coherent sources.
P. Stoica, B. Ottersten, M. Viberg, and R.L. Moses. IEEE Trans. Signal Proc., 44(1):96–105, 1996.
47. Sparse Bayesian learning and the relevance vector machine.
M. E. Tipping. J. Machine Learning Research, 1:211–244, 2001.
48. Numerical analysis.
R.L. Burden, J.D. Faires, and A.M. Burden. Cengage Learning, 2016.
49. Approximative Maximum-Likelihood-Schätzung und verwandte Verfahren zur Ortung und Signalschätzung mit Sensorgruppen.
D. Kraus. Shaker Verlag, Aachen, Germany, 1993.
50. Optimality of high resolution array processing using the eigensystem approach.
G. Bienvenu and L. Kopp. IEEE Trans on Acous., Speech, Signal Proc., 31(5):1235–1248, Oct 1983.
51. Spatiotemporal sparse Bayesian learning with applications to compressed sensing of multichannel physiological signals.
Z. Zhang, T.-P. Jung, S. Makeig, Z. Pi, and B. D. Rao. IEEE Trans. Neural Syst. and Rehab. Eng., 22(6):1186–1197, Nov 2014.
52. The seismic traffic footprint: Tracking trains, aircraft, and cars seismically.
N. Riahi and P. Gerstoft. Geophys. Res. Lett., 42(8):2674–2681, 2015.
53. Multichannel sparse recovery of complex-valued signals using Huber’s criterion.
E. Ollila. In Int. Workshop on Compressed Sensing Theory and Appl. to Radar, Sonar, and Remote Sensing, Pisa, Italy, June 2015.