# Compressive Joint Angular-Frequency Power Spectrum Estimation

###### Abstract

We introduce a new compressive power spectrum estimation approach in both frequency and direction of arrival (DOA). Wide-sense stationary signals produced by multiple uncorrelated sources are compressed in both the time and spatial domains, where the latter compression is implemented by activating only some of the antennas in the underlying uniform linear array (ULA). We sample the received signal at every active antenna at sub-Nyquist rate, compute both the temporal and spatial correlation functions between the sub-Nyquist rate samples, and apply least squares to reconstruct the full-blown two-dimensional power spectrum matrix, where the rows and columns correspond to the angles and the frequencies, respectively. This is possible under a full column rank condition on the system matrices and without applying any sparsity constraint on the signal statistics. Further, we can estimate the DOAs of the sources by locating the peaks of the angular power spectrum. Using sub-Nyquist sampling, we can theoretically estimate the frequency bands and the DOAs of more uncorrelated sources than active sensors.


Dyonisius Dony Ariananda and Geert Leus

Faculty of EEMCS, Delft University of Technology, The Netherlands

{d.a.dyonisius, g.j.t.leus}@tudelft.nl

## 1 Introduction

Compressive sampling and multi-coset sampling have drawn a lot of interest from the signal processing community due to the possibility of reconstructing a signal sampled at sub-Nyquist rate with no or little information loss under the constraint that the signal is sparse in a particular basis [1, 2]. All these works on sub-Nyquist sampling are important especially when it is necessary to relax the requirements on the analog-to-digital converters (ADCs). For a wide-sense stationary (WSS) signal, it has also been shown that perfect reconstruction of its second-order statistics from sub-Nyquist rate samples is theoretically possible even without a sparsity constraint [3]. This result is important for applications, such as wideband spectrum sensing for cognitive radio, where only perfect reconstruction of the temporal auto-correlation function is required instead of the signal itself. The principle of reconstructing the temporal auto-correlation function of a signal from time-domain compressive measurements has, in a dual form, also been proposed in the spatial domain. Given a linear antenna array, [4] and [5] show that if the locations of the antennas are arranged according to a nested or coprime array, the spatial correlation values between the outputs of the antennas in the array can be used to generate the spatial correlation values between the outputs of the antennas in the virtual array or difference co-array (which is uniform in this case), which generally has more antennas and a larger aperture than the actual array. This enhances the degrees of freedom and allows [4] and [5] to estimate the direction of arrival (DOA) of more uncorrelated sources than sensors. The minimum redundancy array (MRA) of [6] can also be used to produce this feature, but in a more optimal way. This has been exploited by [7] to perform compressive angular power spectrum reconstruction.
The advantage offered by the nested and coprime arrays over the MRA, however, is the possibility to derive a closed-form expression for the array geometry and the achievable number of correlation values in the resulting uniform difference co-array. In the aforementioned concept, the spatial compression is performed in the sense that we select a subset of antennas from a uniform linear array (ULA).

In this paper, we jointly reconstruct both the frequency-domain and angular-domain power spectrum using compressive samples. We use a ULA as the underlying array and activate only some of its antennas, leading to a spatial-domain compression. The received signal at each active antenna is then sampled at sub-Nyquist rate using multi-coset sampling. Next, we compute all the correlation values between the resulting sub-Nyquist rate samples at all active antennas, both in the time domain and the spatial domain, and use them to reconstruct the two-dimensional (2D) power spectrum matrix, where each row gives the power spectrum in the frequency domain for a given angle and each column contains the power spectrum in the angular domain for a given frequency. Further, we can estimate the DOAs of the sources active at each frequency by locating the peaks in the angular power spectrum. This 2D power spectrum reconstruction can be done for more uncorrelated sources than active sensors without any sparsity constraint on the true power spectrum.

## 2 Preliminaries

First, consider a ULA with $N$ antennas receiving signals from uncorrelated WSS sources. We assume that the distance between the sources and the ULA is large compared to the length of the ULA, and thus the wave incident on the ULA can be assumed planar and the sources can be assumed to be point sources. We also assume that the inverse of the bandwidth of the aggregated incoming signals is larger than the propagation delay across the ULA, which allows us to represent the delay between the antennas as a phase shift. Based on these assumptions, we can write the ULA output as

$$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{n}(t), \qquad (1)$$

where $\mathbf{x}(t) = [x_1(t), x_2(t), \dots, x_N(t)]^T$ is the output vector containing the received signal at the $N$ antennas of the ULA, $\mathbf{n}(t)$ is the additive white Gaussian noise vector, $\mathbf{s}(t) = [s_1(t), s_2(t), \dots, s_K(t)]^T$ is the extended source vector with $s_k(t)$ the incoming signal from the investigated angle $\theta_k$, and $\mathbf{A} = [\mathbf{a}(\theta_1), \mathbf{a}(\theta_2), \dots, \mathbf{a}(\theta_K)]$ is the $N \times K$ extended array manifold matrix with the array response vector $\mathbf{a}(\theta_k)$ containing the phase shifts experienced by $s_k(t)$ at each element of the ULA. Note that the grid of investigated angles $\{\theta_k\}_{k=1}^{K}$ is known and might only approximately contain the actual DOAs of the sources. We generally assume that $\mathbf{s}(t)$ and $\mathbf{n}(t)$ are uncorrelated, that the impact of the wireless channel has been taken into account in $\mathbf{s}(t)$, and that the noises at different antennas are uncorrelated with variance $\sigma_n^2$, i.e., $E[\mathbf{n}(t)\mathbf{n}^H(t)] = \sigma_n^2\mathbf{I}_N$, with $\mathbf{I}_N$ the $N \times N$ identity matrix. We consider the first element of the ULA as a reference point and express the array response vector as $\mathbf{a}(\theta_k) = [1, e^{j2\pi d \sin\theta_k}, \dots, e^{j2\pi (N-1) d \sin\theta_k}]^T$, where $-\pi/2 \le \theta_k < \pi/2$ and $d$ is the distance between two consecutive antennas in wavelengths, which is set to $d = 1/2$ in order to prevent spatial aliasing.

In order to simplify the further analysis, we introduce $\mathbf{x}[n] = \mathbf{x}(nT)$, $\mathbf{s}[n] = \mathbf{s}(nT)$, and $\mathbf{n}[n] = \mathbf{n}(nT)$ as a digital representation of $\mathbf{x}(t)$, $\mathbf{s}(t)$, and $\mathbf{n}(t)$, respectively, where $1/T$ is the Nyquist sampling rate at every ADC associated with each antenna. We then collect the output vectors at $N_s$ consecutive sample indices into the matrix $\mathbf{X}_t = [\mathbf{x}[tN_s], \mathbf{x}[tN_s+1], \dots, \mathbf{x}[(t+1)N_s-1]]$, for $t = 0, 1, 2, \dots$, and write $\mathbf{X}_t$ as

$$\mathbf{X}_t = \mathbf{A}\,\mathbf{S}_t + \mathbf{N}_t, \qquad (2)$$

where $\mathbf{S}_t$ is similarly defined as $\mathbf{X}_t$ and the matrix $\mathbf{N}_t$ is given by $\mathbf{N}_t = [\mathbf{n}[tN_s], \mathbf{n}[tN_s+1], \dots, \mathbf{n}[(t+1)N_s-1]]$. Let us also write the vector $\mathbf{s}[n]$ in terms of its elements as $\mathbf{s}[n] = [s_1[n], s_2[n], \dots, s_K[n]]^T$, with $s_k[n] = s_k(nT)$ a digital representation of $s_k(t)$.
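As a concrete illustration, the data model in (1)-(2) can be simulated as follows; the array size, angular grid, and noise level below are illustrative values, not the settings used in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8    # antennas in the underlying ULA
K = 5    # investigated angles on the grid
Ns = 64  # Nyquist-rate samples per block
d = 0.5  # inter-antenna spacing in wavelengths

theta = np.linspace(-np.pi / 3, np.pi / 3, K)  # illustrative angular grid
# Extended array manifold: a(theta_k) = [1, e^{j*2*pi*d*sin(theta_k)}, ...]^T
A = np.exp(1j * 2 * np.pi * d * np.outer(np.arange(N), np.sin(theta)))

# Uncorrelated complex baseband sources and white noise, as in (2).
S = (rng.standard_normal((K, Ns)) + 1j * rng.standard_normal((K, Ns))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((N, Ns)) + 1j * rng.standard_normal((N, Ns)))
X = A @ S + noise  # one block X_t of the model in (2)
```

Note that the first row of the manifold is all ones, since the first antenna serves as the phase reference.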

## 3 Time-Domain and Spatial-Domain Compression

In this section, we introduce the compression operations on the output matrix $\mathbf{X}_t$ both in the spatial domain and the time domain. The spatial-domain compression is implemented by activating only $M$ out of the $N$ available antennas in the ULA, leading to a possibly non-uniform array of fewer active antennas than sources. Further, in the receiver branches associated with the active antennas, time-domain compression is performed by sampling the received analog signal at sub-Nyquist rate using the multi-coset sampling principle discussed in [2], which can be implemented using the practical sampling device proposed in [3]. Here, the multi-coset sampling process is represented by selecting only $M_s$ out of every $N_s$ time samples.

We first introduce the spatial-domain selection matrix $\mathbf{C}_s$, which is formed by selecting $M$ rows of the identity matrix $\mathbf{I}_N$. Here, the indices of the selected rows of $\mathbf{I}_N$ used to construct $\mathbf{C}_s$ correspond to the indices of the active antennas selected from the $N$ available antennas in the ULA. Based on (2), the matrix $\mathbf{X}_t$ is then compressed in the spatial domain by $\mathbf{C}_s$, leading to the matrix

$$\mathbf{Y}_t = \mathbf{C}_s\,\mathbf{X}_t = \mathbf{B}\,\mathbf{S}_t + \bar{\mathbf{N}}_t, \qquad (3)$$

where $\mathbf{B} = \mathbf{C}_s\mathbf{A} = [\mathbf{b}(\theta_1), \mathbf{b}(\theta_2), \dots, \mathbf{b}(\theta_K)]$ is the array response matrix with $\mathbf{b}(\theta_k) = \mathbf{C}_s\,\mathbf{a}(\theta_k)$ the array response vector associated with the activated antennas, and $\bar{\mathbf{N}}_t$ is the discrete noise matrix given by $\bar{\mathbf{N}}_t = \mathbf{C}_s\mathbf{N}_t$. Observe that the noise at the active antennas generally has correlation matrix $\sigma_n^2\mathbf{I}_M$. The next step is to introduce the time-domain selection matrix $\mathbf{C}_t$ formed by selecting $M_s$ rows of the identity matrix $\mathbf{I}_{N_s}$, and further compress $\mathbf{Y}_t$ in (3) in the time domain, leading to the matrix

$$\mathbf{Z}_t = \mathbf{Y}_t\,\mathbf{C}_t^T. \qquad (4)$$
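Both compressions in (3)-(4) are plain row/column selections, which can be sketched as follows; the antenna and coset index sets here are illustrative, not the configuration used in the paper:

```python
import numpy as np

N, Ns = 10, 12
active = [0, 1, 4, 7, 9]      # M = 5 active antennas out of N = 10
cosets = [0, 1, 3, 7, 8, 11]  # Ms = 6 retained samples out of Ns = 12

Cs = np.eye(N)[active]        # M  x N  spatial selection matrix
Ct = np.eye(Ns)[cosets]       # Ms x Ns temporal selection matrix

rng = np.random.default_rng(1)
X = rng.standard_normal((N, Ns))  # stand-in for one block X_t
Y = Cs @ X                        # equation (3): keep rows (antennas)
Z = Y @ Ct.T                      # equation (4): keep columns (time samples)
```

Applying both selection matrices is equivalent to directly indexing the kept antennas and time instants of $\mathbf{X}_t$.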

## 4 Power Spectrum Reconstruction

Denote the $i$-th row of $\mathbf{Z}_t$ and $\mathbf{Y}_t$ in (4) as $\mathbf{z}_{i,t}^T$ and $\mathbf{y}_{i,t}^T$, respectively, and write the vector $\mathbf{y}_{i,t}$ in terms of its elements as $\mathbf{y}_{i,t} = [y_{i,t}[0], y_{i,t}[1], \dots, y_{i,t}[N_s-1]]^T$ and the vector $\mathbf{z}_{i,t}$ as $\mathbf{z}_{i,t} = [z_{i,t}[0], z_{i,t}[1], \dots, z_{i,t}[M_s-1]]^T$. This allows us to rewrite the time-domain compression in (4) in terms of the row vectors of $\mathbf{Z}_t$ and $\mathbf{Y}_t$, i.e.,

$$\mathbf{z}_{i,t} = \mathbf{C}_t\,\mathbf{y}_{i,t}, \quad i = 1, 2, \dots, M. \qquad (5)$$

Using (5), our next step is to calculate the correlation matrix between $\mathbf{z}_{i,t}$ and $\mathbf{z}_{j,t}$, for all $i, j = 1, 2, \dots, M$, as

$$\mathbf{R}_{z_i z_j} = E[\mathbf{z}_{i,t}\,\mathbf{z}_{j,t}^H] = \mathbf{C}_t\,\mathbf{R}_{y_i y_j}\,\mathbf{C}_t^T. \qquad (6)$$

In practice, the expectation operator in (6) can be estimated by taking an average over the available matrices $\mathbf{Z}_t$. After cascading all columns of $\mathbf{R}_{z_i z_j}$ into the vector $\text{vec}(\mathbf{R}_{z_i z_j})$ and by taking into account the fact that $\mathbf{C}_t$ is a real matrix, we can express $\text{vec}(\mathbf{R}_{z_i z_j})$ based on (6) as

$$\text{vec}(\mathbf{R}_{z_i z_j}) = (\mathbf{C}_t \otimes \mathbf{C}_t)\,\text{vec}(\mathbf{R}_{y_i y_j}), \qquad (7)$$

where $\text{vec}(\cdot)$ is the operator that cascades all columns of a matrix into a single column vector and $\otimes$ represents the Kronecker product operation. Up to this stage, let us recall that the elements of $\mathbf{s}(t)$ in (1) are WSS processes since we have WSS sources. Based on this fact, as well as (3), it is obvious that the elements of $\mathbf{y}_{i,t}$ in (5) also form a WSS sequence. This means that the matrix $\mathbf{R}_{y_i y_j}$ in (7) has a Toeplitz structure, allowing us to condense $\text{vec}(\mathbf{R}_{y_i y_j})$ into the $(2N_s - 1) \times 1$ vector $\mathbf{r}_{y_i y_j}$ and write

$$\text{vec}(\mathbf{R}_{y_i y_j}) = \mathbf{T}\,\mathbf{r}_{y_i y_j}, \qquad (8)$$

where $\mathbf{T}$ is a special $N_s^2 \times (2N_s - 1)$ repetition matrix whose $((n-1)N_s + m)$-th row is given by the $(m - n + N_s)$-th row of the identity matrix $\mathbf{I}_{2N_s - 1}$. By combining (7) and (8), we obtain

$$\text{vec}(\mathbf{R}_{z_i z_j}) = (\mathbf{C}_t \otimes \mathbf{C}_t)\,\mathbf{T}\,\mathbf{r}_{y_i y_j} = \mathbf{\Psi}\,\mathbf{r}_{y_i y_j}, \qquad (9)$$

where $\mathbf{\Psi} = (\mathbf{C}_t \otimes \mathbf{C}_t)\mathbf{T}$ is an $M_s^2 \times (2N_s - 1)$ matrix. Observe that it is possible to reconstruct $\mathbf{r}_{y_i y_j}$ from $\text{vec}(\mathbf{R}_{z_i z_j})$ in (9), for all $i, j$, using least squares (LS) if $M_s^2 \ge 2N_s - 1$ and $\mathbf{\Psi}$ has full column rank.
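A minimal sketch of the LS step in (7)-(9), assuming an illustrative length-7 sparse ruler as the coset pattern (the values of $N_s$ and the cosets are not the paper's settings):

```python
import numpy as np

Ns = 8
cosets = [0, 1, 2, 5, 7]  # illustrative length-7 sparse ruler, Ms = 5
Ms = len(cosets)
Ct = np.eye(Ns)[cosets]

# Repetition matrix T of (8): under column-major vec, entry R[m, n] sits at
# row n*Ns + m and equals the lag (m - n), stored at index (m - n) + Ns - 1.
T = np.zeros((Ns * Ns, 2 * Ns - 1))
for n in range(Ns):
    for m in range(Ns):
        T[n * Ns + m, m - n + Ns - 1] = 1.0

Psi = np.kron(Ct, Ct) @ T  # Ms^2 x (2*Ns - 1) system matrix of (9)
assert np.linalg.matrix_rank(Psi) == 2 * Ns - 1  # full column rank

# LS recovery of a toy correlation sequence from compressed correlations.
r_y = np.exp(-0.5 * np.abs(np.arange(-(Ns - 1), Ns)))  # toy WSS lags
vec_Rz = np.kron(Ct, Ct) @ (T @ r_y)                   # simulated vec(R_z)
r_hat, *_ = np.linalg.lstsq(Psi, vec_Rz, rcond=None)
```

Because the differences of the coset set cover every lag, the rows of $\mathbf{\Psi}$ hit every column and the LS solution recovers the lags exactly in this noiseless sketch.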

The next step is to figure out the relationship between $\mathbf{r}_{y_i y_j}$ in (9) and the extended source matrix $\mathbf{S}_t$ in (3). By taking into account the fact that every row of $\mathbf{Y}_t$ and $\mathbf{S}_t$ is a WSS sequence and the assumption that the extended source vector $\mathbf{s}[n]$ and the noise vector $\bar{\mathbf{n}}[n]$ are uncorrelated, it is straightforward to find that the spatial correlation matrix between $\mathbf{y}[n] = \mathbf{C}_s\mathbf{x}[n]$ and $\mathbf{y}[n-m]$ is given by

$$\mathbf{R}_y[m] = E[\mathbf{y}[n]\,\mathbf{y}^H[n-m]] = \mathbf{B}\,\mathbf{R}_s[m]\,\mathbf{B}^H + \sigma_n^2\,\delta[m]\,\mathbf{I}_M, \qquad (10)$$

for $m = -(N_s - 1), \dots, N_s - 1$. Since the point sources are assumed to be uncorrelated, the elements of $\mathbf{s}[n]$ are also uncorrelated and thus the matrix $\mathbf{R}_s[m] = E[\mathbf{s}[n]\,\mathbf{s}^H[n-m]]$ is a diagonal matrix. By exploiting this fact and stacking all columns of the matrix $\mathbf{R}_y[m]$ in (10) into the vector $\text{vec}(\mathbf{R}_y[m])$, we obtain

$$\text{vec}(\mathbf{R}_y[m]) = (\mathbf{B}^* \circ \mathbf{B})\,\mathbf{r}_s[m] + \sigma_n^2\,\delta[m]\,\text{vec}(\mathbf{I}_M), \qquad (11)$$

where $\circ$ represents the Khatri-Rao product operation and $\mathbf{r}_s[m]$ contains the diagonal elements of $\mathbf{R}_s[m]$. Let us now investigate the relationship between the elements of $\mathbf{r}_{y_i y_j}$ in (9) and $\text{vec}(\mathbf{R}_y[m])$ in (11). We can find that $r_{y_i y_j}[m]$ is actually related to $\mathbf{R}_y[m]$ as $r_{y_i y_j}[m] = [\mathbf{R}_y[m]]_{i,j}$. Hence, we can use the elements of the reconstructed $\mathbf{r}_{y_i y_j}$ in (9) to form $\text{vec}(\mathbf{R}_y[m])$ in (11) and then use them to reconstruct $\mathbf{r}_s[m]$ in (11), which can be performed using LS if $M^2 \ge K$ and $\mathbf{B}^* \circ \mathbf{B}$ has full column rank.
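The Khatri-Rao LS step in (10)-(11) can be sketched as follows for a noise-free lag; the active-antenna positions and angular grid below are illustrative, and more grid points than active antennas are used on purpose:

```python
import numpy as np

rng = np.random.default_rng(2)
pos = np.array([0, 1, 4, 6])  # d_i in units of d = 1/2 wavelength (illustrative)
M, K = len(pos), 6            # more grid angles than active antennas
theta = np.arcsin(np.linspace(-0.9, 0.9, K))

B = np.exp(1j * np.pi * np.outer(pos, np.sin(theta)))  # M x K, d = 1/2
# Khatri-Rao (column-wise Kronecker) product B* o B: M^2 x K.
BkB = np.stack([np.kron(B[:, k].conj(), B[:, k]) for k in range(K)], axis=1)

r_s = rng.uniform(0.5, 2.0, K)  # true source powers at one lag (noise-free)
vec_Ry = BkB @ r_s              # equals vec(B diag(r_s) B^H)
r_hat, *_ = np.linalg.lstsq(BkB, vec_Ry, rcond=None)
```

The differences of the positions {0, 1, 4, 6} cover all lags 0 through 6, so the Khatri-Rao matrix has full column rank and the K source powers are recovered despite M < K.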

If we combine the vectors $\mathbf{r}_s[m]$ as $\mathbf{R}_s = [\mathbf{r}_s[-(N_s-1)], \mathbf{r}_s[-(N_s-2)], \dots, \mathbf{r}_s[N_s-1]]$, we can observe that the $k$-th row of $\mathbf{R}_s$ actually corresponds to the temporal auto-correlation of the incoming signal from the investigated angle $\theta_k$, which can be written as $\mathbf{r}_{s_k}^T = [r_{s_k}[-(N_s-1)], \dots, r_{s_k}[N_s-1]]$. By defining $\mathbf{F}$ as the $(2N_s-1) \times (2N_s-1)$ discrete Fourier transform (DFT) matrix, we can compute the power spectrum of $s_k[n]$ as $\mathbf{p}_k = \mathbf{F}\,\mathbf{r}_{s_k}$, where $\mathbf{p}_k$ is the power spectrum vector of the incoming signal from the investigated angle $\theta_k$. By combining the vectors $\mathbf{p}_k$ into the matrix $\mathbf{P} = [\mathbf{p}_1, \mathbf{p}_2, \dots, \mathbf{p}_K]^T$, we can write

$$\mathbf{P} = \mathbf{R}_s\,\mathbf{F}^T. \qquad (12)$$

Note that $\mathbf{P}$ in (12) can be perceived as a 2D power spectrum matrix, where every row of $\mathbf{P}$ gives the power spectrum in the frequency domain for a given investigated angle and every column of $\mathbf{P}$ provides the power spectrum information in the angular domain for a given frequency.
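The final step in (12) amounts to a row-wise DFT of the lag matrix; a toy sketch with made-up auto-correlation sequences (the values of $N_s$ and $K$ are illustrative):

```python
import numpy as np

Ns, K = 8, 3
L = 2 * Ns - 1  # number of lags per angle

# Toy auto-correlation sequences r_{s_k}[m] for m = -(Ns-1), ..., Ns-1.
lags = np.arange(-(Ns - 1), Ns)
Rs = np.stack([np.exp(-np.abs(lags) / (k + 1)) for k in range(K)])  # K x L

F = np.fft.fft(np.eye(L))  # L x L DFT matrix
P = Rs @ F.T               # equation (12): each row is one angle's spectrum
```

The DC bin of each row equals the sum of that angle's correlation lags, which is a quick sanity check on the transform.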

## 5 Construction of the Compression Matrices

Recall that the 2D power spectrum matrix $\mathbf{P}$ can be reconstructed from $\mathbf{r}_{y_i y_j}$ in (9), which contains the cross-correlations between the rows of the measurement matrix $\mathbf{Z}_t$ in (4), by solving (9) and (11) using LS and then applying the DFT on the rows of the resulting matrix $\mathbf{R}_s$. We now discuss the choice of the selection matrix $\mathbf{C}_t$ and the extended array response matrix $\mathbf{B}$ that ensure the uniqueness of the LS solutions of (9) and (11), respectively.

We first investigate the choice of $\mathbf{C}_t$ that results in a full column rank matrix $\mathbf{\Psi} = (\mathbf{C}_t \otimes \mathbf{C}_t)\mathbf{T}$. Since the rows of $\mathbf{C}_t$ in (9) are formed by selecting the rows of the identity matrix, it is clear that every row of both $\mathbf{C}_t$ and $\mathbf{C}_t \otimes \mathbf{C}_t$ only contains a single one and zeros elsewhere. This fact guarantees that each row of $\mathbf{\Psi}$ has only a single one and thus, in order to ensure the full column rank condition of $\mathbf{\Psi}$, we need to ensure that each of its columns has at least a single one. This problem has actually been encountered and solved in [3], where the solution is to construct $\mathbf{C}_t$ by selecting the rows of $\mathbf{I}_{N_s}$ based on the so-called minimal length-$(N_s-1)$ sparse ruler problem. In practice, this results in a multi-coset sampling procedure called minimal sparse ruler sampling [3].
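For small lengths, a minimal sparse ruler can be found by exhaustive search; the following brute-force routine is only an illustration and does not scale to large lengths:

```python
from itertools import combinations

def minimal_sparse_ruler(length):
    """Smallest mark set on [0, length] whose pairwise differences
    cover every lag 0, 1, ..., length (so every column of Psi gets a one)."""
    for size in range(2, length + 2):
        for marks in combinations(range(1, length), size - 2):
            ruler = (0,) + marks + (length,)
            diffs = {b - a for a in ruler for b in ruler if b >= a}
            if diffs == set(range(length + 1)):
                return ruler
    return None

print(minimal_sparse_ruler(6))  # -> (0, 1, 4, 6)
```

Selecting the rows of the identity matrix at the returned mark positions yields a valid coset pattern in the sense described above.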

Next, we examine the choice of $\mathbf{B}^* \circ \mathbf{B}$, which boils down to the selection of the $M$ activated antennas in the ULA and the investigated angles $\{\theta_k\}_{k=1}^{K}$. Let us write $\mathbf{B}$ in terms of the vectors $\mathbf{b}(\theta_k)$ as

$$\mathbf{B} = [\mathbf{b}(\theta_1), \mathbf{b}(\theta_2), \dots, \mathbf{b}(\theta_K)] \qquad (13)$$

and express $\mathbf{b}(\theta_k)$ in terms of its elements as

$$\mathbf{b}(\theta_k) = [e^{j2\pi d_1 \sin\theta_k}, e^{j2\pi d_2 \sin\theta_k}, \dots, e^{j2\pi d_M \sin\theta_k}]^T, \qquad (14)$$

where $d_i$ is the distance in wavelengths between the $i$-th active antenna and the reference antenna of the ULA defined in Section 2. It is clear from (13) and (14) that the $k$-th column of $\mathbf{B}^* \circ \mathbf{B}$ contains the elements $e^{j2\pi(d_i - d_{i'})\sin\theta_k}$, for $i, i' = 1, 2, \dots, M$. While our task to find general design conditions that guarantee the full column rank of $\mathbf{B}^* \circ \mathbf{B}$ is not trivial, the following theorem suggests one possible way to achieve a full column rank $\mathbf{B}^* \circ \mathbf{B}$.

Theorem 1: The matrix $\mathbf{B}^* \circ \mathbf{B}$ has full column rank if:
1) There exist $K$ distinct values of $\sin\theta_k$ satisfying $-1 \le \sin\theta_k < 1$, and
2) There exists an integer $\bar{M} \ge K$ such that the set of differences $\{d_i - d_{i'}\}_{i,i'=1}^{M}$ contains an arithmetic sequence of $\bar{M}$ terms having a difference of $d$ between each two consecutive terms.

The proof of Theorem 1 can be found in Appendix A. The second condition indicates that there exist $\bar{M}$ distinct rows of $\mathbf{B}^* \circ \mathbf{B}$ that form the array response matrix of a virtual ULA with $\bar{M}$ antennas, which can only be achieved for $\bar{M} \le M(M-1)+1$. This second condition also implies that we have more antennas in this virtual ULA than investigated angles. Some possible ways to satisfy Theorem 1 are to select the active antennas from the $N$ antennas in the ULA based on the MRA discussed in [6] (which also obeys the minimal sparse ruler problem [7]), the two-level nested array [4], or the coprime array [5]. For the MRA and the two-level nested array, Theorem 1 can be satisfied even for $K \ge M$. Note that although the different values of $\sin\theta_k$ can be chosen in an arbitrary fashion, they should not be too close to each other, since otherwise the resulting $\mathbf{B}^* \circ \mathbf{B}$ might be ill-conditioned. Theorem 1 also implies that the maximum number of detectable sources is upper bounded by $M(M-1)+1$ since we cannot detect more than $\bar{M}$ sources.
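The second condition of Theorem 1 can be checked numerically by measuring the contiguous run of lags in the difference set of the active-antenna positions; the position sets below (in units of the ULA spacing) are illustrative:

```python
def virtual_ula_size(pos):
    """Number of contiguous lags -(n-1)..(n-1) contained in the
    pairwise difference set of the antenna positions."""
    diffs = {b - a for a in pos for b in pos}  # all lags, negative included
    n = 0
    while n in diffs and -n in diffs:
        n += 1
    return 2 * n - 1

print(virtual_ula_size([0, 1, 4, 6]))  # sparse ruler of length 6 -> 13 lags
print(virtual_ula_size([0, 1, 2, 8]))  # missing lag 3 limits the run -> 5
```

A larger contiguous run means a longer virtual ULA and hence more grid angles that can be resolved with the same number of active antennas.
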
Apart from satisfying Theorem 1, another way to achieve a full column rank $\mathbf{B}^* \circ \mathbf{B}$ is suggested by Theorem 2.

Theorem 2: The matrix $\mathbf{B}^* \circ \mathbf{B}$ has full column rank if:

1) the set of differences $\{d_i - d_{i'}\}_{i,i'=1}^{M}$ has at least $K$ different values, and 2) the grid of investigated angles $\{\theta_k\}_{k=1}^{K}$ is designed based on the inverse sinusoidal angular grid where

$$\theta_k = \arcsin\!\left(-1 + \frac{2(k-1)}{K}\right), \quad k = 1, 2, \dots, K. \qquad (15)$$

The proof of this theorem can be found in Appendix B. Note that the first condition of Theorem 2 is less strict than the second condition of Theorem 1. A good option is to use a configuration satisfying Theorem 1 together with the grid in (15). This will not only ensure that the resulting matrix $\mathbf{B}^* \circ \mathbf{B}$ has full column rank but also that there exists a submatrix of $\mathbf{B}^* \circ \mathbf{B}$ that forms a row-permuted version of the inverse DFT matrix, meaning that $\mathbf{B}^* \circ \mathbf{B}$ is well-conditioned.
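The effect of the inverse sinusoidal grid (15) on the conditioning of $\mathbf{B}^* \circ \mathbf{B}$ can be verified numerically; the antenna positions below (in units of $d = 1/2$ wavelength) are an illustrative sparse ruler, not the paper's configuration:

```python
import numpy as np

K = 7
# Inverse sinusoidal grid (15): sin(theta_k) uniform on [-1, 1).
theta = np.arcsin(-1 + 2 * np.arange(K) / K)

pos = np.array([0, 1, 4, 6])  # active antennas, illustrative sparse ruler
B = np.exp(1j * np.pi * np.outer(pos, np.sin(theta)))  # d = 1/2 wavelength
BkB = np.stack([np.kron(B[:, k].conj(), B[:, k]) for k in range(K)], axis=1)

# On this grid the rows of B* o B become DFT-like, so the system matrix
# has full column rank and a small condition number.
rank = np.linalg.matrix_rank(BkB)
cond = np.linalg.cond(BkB)
```

With an arbitrary (e.g., uniform-in-angle) grid the matrix can become noticeably worse conditioned, which is the practical motivation for (15).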

## 6 Numerical Study

In this section, we examine the proposed approach with a numerical study. We consider a ULA having $N$ antennas as the underlying array and construct an MRA of $M$ active antennas by selecting the antenna indices based on the minimal length-$(N-1)$ sparse ruler problem discussed in [3, 7]. The set of investigated angles $\{\theta_k\}_{k=1}^{K}$ is set according to (15). In the receiver branch corresponding to each active antenna, time-domain compression is obtained by retaining $M_s$ out of every $N_s$ Nyquist-rate samples. We construct the selection matrix $\mathbf{C}_t$ by first solving the minimal length-$(N_s-1)$ sparse ruler problem, which gives the indices of the rows of $\mathbf{I}_{N_s}$ that have to be selected. The selection of these rows ensures that the resulting matrix $\mathbf{\Psi}$ in (9) has at least a single one in each column. The additional rows of $\mathbf{C}_t$ are then randomly selected from the remaining rows of $\mathbf{I}_{N_s}$ that have not yet been selected. We simulate the case where we have more sources than active antennas by generating uncorrelated sources having DOAs with 9 degrees of separation. The sources produce complex baseband signals whose frequency bands are given in Table 1 and which are generated by passing circular complex zero-mean Gaussian i.i.d. noise into a digital filter, with the unit-gain passband of the filter for each source set according to Table 1. This ensures that the true auto-correlation sequence of each source is limited in support by the filter length. We assume spatially and temporally white noise with variance $\sigma_n^2$ and estimate the expectations by averaging over the available measurement matrices $\mathbf{Z}_t$.

Table 1: The actual DOAs and occupied frequency bands of the sources.

| Source | Actual DOA | Occupied frequency band |
|---|---|---|

Fig. 1 illustrates the estimate of the power spectrum as a function of the frequency and the investigated angles. It is clear that the uncorrelated sources can generally be detected. We can find the DOA estimates by locating the peaks of this spectrum, though the actual DOAs might not fall on top of the defined investigated angles. For a given DOA estimate, we can locate the active frequency band of the corresponding source together with the value of the power spectrum estimate. The top view of Fig. 1, which is provided by Fig. 2, gives a much clearer picture of the quality of the estimate. We can easily compare this figure with the data provided in Table 1. Observe that the estimates of the DOA, the power spectrum, as well as the active frequency band of the sources are quite satisfactory except for two of the sources. For these two sources, it is apparent from Fig. 2 that the impact of the grid mismatch effect is quite significant and their power spectrum estimates seem to have been distributed among the two nearest grid points. Note that this 2D power spectrum estimate can be produced without applying any sparsity constraint on the true power spectrum, but can of course be improved if such a constraint is used.

## Appendix A Proof of Theorem 1

The second requirement of Theorem 1 implies that there exists a matrix $\bar{\mathbf{B}}$, a submatrix of $\mathbf{B}^* \circ \mathbf{B}$ built from $\mathbf{B}$ in (13), that forms the array response matrix of a virtual ULA of $\bar{M}$ antennas, with the $(m,k)$-th entry of $\bar{\mathbf{B}}$ given by $e^{j2\pi(\bar{d} + (m-1)d)\sin\theta_k}$, where $\bar{d}$ gives the distance between the first antenna in the virtual ULA and the reference antenna of the underlying ULA in Section 2. Hence, it is clear that $\bar{\mathbf{B}}$ is a column-wise Vandermonde matrix. From the well-known properties of a column-wise Vandermonde matrix, $\bar{\mathbf{B}}$ has full column rank due to the first requirement of Theorem 1 and since $\bar{M} \ge K$. It is then trivial to show that $\mathbf{B}^* \circ \mathbf{B}$ also has full column rank.

## Appendix B Proof of Theorem 2

Based on (13) and (14) and the fact that the inverse sinusoidal angular grid in (15) is used, we can write $\mathbf{B}^* \circ \mathbf{B}$ in terms of its row vectors, where the row associated with the difference $d_i - d_{i'}$ is given by $[e^{j2\pi(d_i - d_{i'})\sin\theta_1}, e^{j2\pi(d_i - d_{i'})\sin\theta_2}, \dots, e^{j2\pi(d_i - d_{i'})\sin\theta_K}]$. Observe that $\mathbf{B}^* \circ \mathbf{B}$ is a row-wise Vandermonde matrix since, on the grid (15), the elements of each row are ordered according to a geometric progression. In order to ensure that $\mathbf{B}^* \circ \mathbf{B}$ has full column rank, we need $K$ distinct values of $d_i - d_{i'}$ modulo $Kd$, which is guaranteed by the first requirement of Theorem 2.

## References

- [1] E.J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489-509, February 2006.
- [2] M. Mishali and Y. Eldar, “Blind multiband signal reconstruction: compressed sensing for analog signals,” IEEE Trans. Signal Process., vol. 57, no. 3, pp. 993-1009, March 2009.
- [3] D.D. Ariananda and G. Leus, “Compressive wideband power spectrum estimation,” IEEE Trans. Signal Process., vol. 60, no. 9, pp. 4775-4789, September 2012.
- [4] P. Pal and P.P. Vaidyanathan, “Nested arrays: a novel approach to array processing with enhanced degrees of freedom,” IEEE Trans. Signal Process., vol. 58, no. 8, pp. 4167-4181, August 2010.
- [5] P. Pal and P.P. Vaidyanathan, “Coprime sampling and the MUSIC algorithm,” Proc. IEEE Digital Signal Process. and Signal Process. Educ. Workshop, Sedona, Arizona, pp. 289-294, January 2011.
- [6] A. Moffet, “Minimum-redundancy linear arrays,” IEEE Trans. Antennas Propag., vol. 16, no. 2, pp. 172-175, March 1968.
- [7] S. Shakeri, D.D. Ariananda and G. Leus, “Direction of arrival estimation using sparse ruler array design,” Proc. IEEE Workshop Signal Process. Adv. Wireless Commun., Cesme, Turkey, June 2012.