Structure-Based Subspace Method for Multi-Channel Blind System Identification
In this work, a novel subspace-based method for blind identification of multichannel finite impulse response (FIR) systems is presented. We directly exploit the embedded Toeplitz channel structure of the signal linear model to build a quadratic form whose minimization leads to the desired channel estimate up to a scalar factor. The method can be extended to estimate any predefined linear structure, e.g. Hankel, that is commonly encountered in linear systems. Simulation results are provided to highlight the advantages of the new structure-based subspace (SSS) method over the standard subspace (SS) method in certain adverse identification scenarios.
I Introduction

Blind system identification (BSI) is a fundamental signal processing problem that was initiated more than three decades ago. BSI refers to the process of retrieving a channel's impulse response from the output sequence only. Because it has many applications, such as mobile communication, seismic exploration, image restoration and medical imaging, it has drawn researchers' attention and produced a plethora of methods. Among them, a class of subspace-based methods dedicated to BSI has been developed, including the standard subspace (SS) method [1, 2], the cross-relation (CR) method [3] and the two-step maximum likelihood (TSML) method [4]. According to the early comparative studies in [5] and [6], the SS method is considered the most powerful one.
In this paper, we introduce another subspace-based method in which the channel's Toeplitz structure is employed directly to formulate the cost function. The Toeplitz structure is inherent to most linear systems due to their convolutive nature.
The paper first develops the proposed structure-based subspace (SSS) method. Then, we highlight the improvement obtained by the SSS method over SS in the case of channels with closely spaced roots. The SSS method appears to be a promising technique, yet it has a higher computational complexity that needs to be addressed in future work.
Notation: vec(·) and mat(·) denote the invertible column vector-matrix mappings. ⊗ denotes the Kronecker product. (·)^T and (·)^H denote the transpose and Hermitian transpose, respectively.
II Problem Formulation
II-A Multi-channel model
A multichannel framework is considered in this work. It is obtained either by oversampling the received signal, by using an array of antennas, or by a combination of both . To develop the multichannel system model, consider the observed signal from a linear modulation over a linear channel with additive noise, given by
where is the FIR channel impulse response, are the transmitted symbols, and is the additive noise. If the received signal is oversampled or recorded with multiple sensors, the signal model in (1) becomes multivariate and is expressed as
where , , . Define the system transfer function with . Consider the noise to be additive, independent, white and circular with . Assuming reception of a window of samples and stacking the data into a vector/matrix representation, we get:
where , , is stacked in a similar way to as , and is an block Toeplitz matrix defined as
is the desired parameter vector containing all channels' taps, i.e. . Using the observation data in (3), our objective is to estimate the different channels' impulse responses, i.e., to recover up to a possible scalar ambiguity. In the following subsection, we briefly review the subspace method.
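To make the stacked linear model concrete, the block-Toeplitz (block-Sylvester) filtering matrix can be sketched as follows. The function name, the tap layout (one row per channel) and the output ordering are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def block_toeplitz_channel_matrix(h, N):
    """Build the block-Sylvester filtering matrix of a multichannel
    FIR system (illustrative layout, not the paper's exact notation).

    h : (M, L+1) array, row m holding the taps of channel m.
    N : processing window length; the matrix maps the N+L input
        symbols s(0), ..., s(N+L-1) to the M*N stacked outputs
        y(L), ..., y(N+L-1).
    """
    M, L = h.shape[0], h.shape[1] - 1
    T = np.zeros((M * N, N + L), dtype=h.dtype)
    for n in range(N):
        # block row n computes y(n+L) = sum_l h(l) s(n+L-l),
        # so the reversed taps occupy columns n .. n+L
        T[n * M:(n + 1) * M, n:n + L + 1] = h[:, ::-1]
    return T
```

Each block row is a shifted copy of the channel taps, which is exactly the Toeplitz redundancy that both the SS and SSS methods exploit.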
II-B Subspace method revisited
For consistency and the reader's convenience, the SS method , also referred to as the noise subspace method, is reviewed hereafter. The SS method implicitly exploits the Toeplitz structure of the filtering matrix . Let , where , be in the orthogonal complement of the range space of such that
Using the block Toeplitz structure of , the above linear equation can be written in terms of the channel parameter as
Assume that the components of have no common zeros, and . Let be a basis of the orthogonal complement of the column space of , then for any with we have
where is some scalar factor.
One common way to estimate the orthogonal complement of , i.e. the noise subspace, is the signal-noise subspace decomposition. From the multichannel model and the noise properties, the received signal covariance matrix is given as
The singular value decomposition of has the form
where , are the principal eigenvalues of the covariance matrix , and the columns of and span the so-called signal and noise subspaces (orthogonal complements), respectively. Once a basis of the noise subspace is available, channel identification can be performed via the following quadratic optimization criterion:
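The noise-subspace basis used in this decomposition step can be computed as sketched below; the function and its `signal_dim` argument (the number of principal eigenvalues retained) are hypothetical names, not the paper's notation:

```python
import numpy as np

def noise_subspace(R, signal_dim):
    """Orthonormal basis of the noise subspace of a Hermitian
    covariance matrix R: the eigenvectors associated with the
    smallest eigenvalues (np.linalg.eigh sorts them ascending)."""
    _, vecs = np.linalg.eigh(R)
    return vecs[:, : R.shape[0] - signal_dim]
```

In the noise-free or long-sample limit, this basis is orthogonal to the columns of the filtering matrix, which is the relation the SS orthogonality criterion exploits.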
In brief, the SS method achieves channel estimation by exploiting the subspace information (i.e., ideally, ) as well as the block Sylvester (block-Toeplitz) structure of the channel matrix. More precisely, it enforces the latter matrix structure through relations (5) and (6) and minimizes the subspace orthogonality error in (10). In the sequel, we propose a dual approach that enforces the subspace information (i.e. , where refers to the principal subspace of the sample covariance matrix) while minimizing a cost function representing the deviation of from the Sylvester structure, as indicated in Table I.
III Structure-Based SS Method (SSS)
In the proposed subspace method, one searches for the system matrix in the form so that the orthogonality criterion in (10) is exactly zero, i.e. , while is chosen in such a way that the resulting matrix is close to the desired block Toeplitz structure. This is done by minimizing w.r.t. the following structure-based cost function (informal Matlab notations are used):
where and refers to . The cost function in (11) is inspired by and matched to the Toeplitz structure introduced in (4). It is composed of three parts: forces the Toeplitz structure on the possibly non-zero entries, while and account for the zero entries in the first rows and first column, respectively.
Starting with , one can express it in a more compact way as follows:
is the left identity square matrix with the last diagonal entries set to zero.
is the right identity square matrix with the last diagonal entry set to zero.
is a square translation matrix with ones on the sub-diagonal and zeros elsewhere, i.e., .
is a square translation matrix with ones on the super-diagonal and zeros elsewhere, i.e., .
Now, using the Kronecker product property , one can write as follows:
where . In a similar way, can be expressed as
where is the sub-matrix of formed by its first rows, and is the square identity matrix with the first diagonal entries set to zero. Finally, can also be set up as
where is the sub-matrix of formed by its last rows, and is the square diagonal matrix with a one at the first diagonal entry and zeros elsewhere.
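To illustrate how such translation matrices encode the Toeplitz constraint, the sketch below (with hypothetical names) builds the sub-diagonal shift and checks that the shifted-minus-original residual, restricted to the overlapping entries, vanishes exactly when a matrix has constant diagonals:

```python
import numpy as np

def shift_down(n):
    """n x n translation matrix with ones on the sub-diagonal."""
    return np.eye(n, k=-1)

def toeplitz_deviation(A):
    """Frobenius norm of the constant-diagonal residual of a square
    matrix A: shifting A down and right by one position and
    subtracting A leaves zeros on the overlap iff every diagonal
    of A is constant, i.e. iff A is Toeplitz."""
    Z = shift_down(A.shape[0])
    return np.linalg.norm((Z @ A @ Z.T - A)[1:, 1:])
```

The cost function of the proposed method penalizes exactly this kind of residual, expressed through Kronecker products acting on the vectorized matrix.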
The optimal solution of (16), under the unit norm constraint on , is the eigenvector corresponding to the smallest eigenvalue of . The square matrix is constructed by reshaping the obtained solution from a vector into matrix format, such that . Once the matrix is obtained, the channel taps are estimated by averaging over its non-zero diagonal blocks.
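A minimal sketch of this final averaging step, assuming a layout in which block row n of the estimated matrix holds the reversed channel taps in columns n .. n+L (the names and layout are illustrative):

```python
import numpy as np

def average_diag_blocks(T_hat, M, L):
    """Estimate the (M, L+1) channel taps from a noisy block-Toeplitz
    matrix by averaging each non-zero block diagonal, assuming block
    row n stores the reversed taps in columns n .. n+L."""
    N = T_hat.shape[0] // M
    h = np.zeros((M, L + 1), dtype=T_hat.dtype)
    for l in range(L + 1):
        # tap l appears once per block row, at column n + L - l
        h[:, l] = np.mean(
            [T_hat[n * M:(n + 1) * M, n + L - l] for n in range(N)], axis=0)
    return h
```

Averaging over all repetitions of each tap reduces the residual structure error left after the eigenvector-based minimization.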
IV Discussion

In this section, we provide some insightful comments to highlight the advantages and drawbacks of the proposed subspace method.
As explained earlier, the proposed approach neglects the subspace error (i.e., Range() is considered perfect in the sense that one searches for the desired solution within that subspace) while minimizing the system matrix (Toeplitz) structure error. The motivation behind this choice is that the subspace error is null at first order, as shown in , and hence can be neglected in favor of more flexibility in searching for the appropriate channel matrix. This explains the observed gain of SSS over the SS method in certain difficult scenarios, including the case of closely spaced channel roots.
In the favorable cases where the channel matrix is well conditioned, the two subspace methods lead to similar performance as illustrated next by the simulation example of Fig. 1.
For the SS method to apply, the noise subspace vectors must generate a minimal polynomial basis of the rational subspace orthogonal to (see  for more details), and the condition is imposed to guarantee that this requirement holds. As SSS does not explicitly rely on the orthogonality relation in (10), the latter condition may be relaxed, as illustrated by the simulation example of Fig. 5.
The proposed subspace method has a higher numerical cost than the SS method. However, this cost might be reduced by taking into account the Kronecker products involved in building the matrix . This issue is still under investigation, together with an asymptotic statistical performance analysis of SSS.
V Simulation Results
In this section, the devised SSS method will be compared to the standard SS method as a benchmark. Three different experiments will be examined to illustrate the behavior of SSS in different contexts.
Two FIR channels are considered, each with a second-order impulse response given by:
where is the absolute phase of the channels' zeros and indicates the angular distance between the zeros of the two channels on the unit circle. A small angular distance results in an ill-conditioned system. In all simulations, the excitation signal is 4-QAM, each channel receives samples, and the noise is additive white Gaussian. Note that the SNR is defined as
The performance measure is the mean-square-error (MSE), given as
where refers to the number of Monte-Carlo runs and is the channel vector estimate at the -th run.
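Since blind identification recovers the channel only up to a scalar, a common way to compute such an MSE first fits a least-squares scale factor per run. The helper below is a sketch under that assumption (the excerpt does not show the paper's exact normalization):

```python
import numpy as np

def mse_up_to_scale(h_true, h_estimates):
    """Monte-Carlo MSE after removing the scalar ambiguity: each
    estimate is rescaled by alpha = <h_est, h_true> / ||h_est||^2
    (the least-squares fit) before comparison with the unit-norm
    true channel vector."""
    h = h_true / np.linalg.norm(h_true)
    errs = [np.linalg.norm((np.vdot(e, h) / np.vdot(e, e)) * e - h) ** 2
            for e in h_estimates]
    return float(np.mean(errs))
```

With this definition, a perfect estimate scaled by any non-zero factor yields zero error, consistent with identifiability up to a scalar.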
The first experiment, reported in Fig. 1, shows that for a well-conditioned system (), both methods have comparable performance. In the second, we consider ill-conditioned systems (i.e. poor channel diversity). In that case, the devised SSS method outperforms the SS method for a moderately ill-conditioned system (), and its performance gain becomes more obvious in the severely ill-conditioned case (), as shown in Figs. 2 and 3, respectively. When the system is ill-conditioned, the difference is more pronounced at low and moderate SNR values. Moreover, in the severely ill-conditioned case, the SS method becomes unresponsive to changes in the SNR, as revealed in Fig. 3, since the effect of ill-conditioning dominates at low SNR. Figure 4 depicts the effect of varying on the MSE for SNR = 10 dB.
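Ill-conditioned channel pairs of the kind used above can be generated, for example, as follows. This construction simply separates the two channels' unit-circle zeros by a given angle; the exact zero placement of the paper is not reproduced:

```python
import numpy as np

def closely_spaced_channels(theta, delta):
    """Two second-order FIR channels with conjugate zero pairs on
    the unit circle at angles +/- theta and +/- (theta + delta);
    a small delta yields nearly identical channels and hence an
    ill-conditioned system. Illustrative construction only."""
    h1 = np.real(np.poly(np.exp(1j * np.array([theta, -theta]))))
    h2 = np.real(np.poly(np.exp(1j * np.array([theta + delta,
                                               -(theta + delta)]))))
    return h1, h2
```

As delta shrinks, the two impulse responses converge, which is exactly the weak-diversity regime where SSS is reported to outperform SS.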
In the last experiment, the number of channels is and the number of taps per channel is ; the channel transfer functions are given in . Here, we are primarily interested in the impact of the processing window length on the estimation performance. As seen from the results reported in Fig. 5, the performance of the SS method degrades when the processing window length becomes smaller than the number of channel taps , while the proposed SSS is weakly affected by the window length condition, i.e. . This allows us to reduce the dimension of the channel matrix by using smaller window sizes, which is especially useful for large-dimensional systems where .
VI Conclusion

In this paper, we proposed a dual approach to the standard subspace method, whereby the channel matrix is forced to belong to the principal subspace of the data covariance matrix estimate while its deviation from the Toeplitz structure is minimized. By doing so, channel estimation is significantly improved in the difficult context of weak channel diversity (i.e., channels with closely spaced roots). Interestingly, the principle of the proposed approach can be applied to estimation problems with other matrix structures where the subspace method can be used.
- [1] E. Moulines, P. Duhamel, J.-F. Cardoso, and S. Mayrargue, “Subspace methods for the blind identification of multichannel FIR filters,” IEEE Transactions on Signal Processing, vol. 43, no. 2, pp. 516–525, 1995.
- [2] W. Kang and B. Champagne, “Subspace-based blind channel estimation: Generalization and performance analysis,” IEEE Transactions on Signal Processing, vol. 53, no. 3, pp. 1151–1162, 2005.
- [3] G. Xu, H. Liu, L. Tong, and T. Kailath, “A least-squares approach to blind channel identification,” IEEE Transactions on Signal Processing, vol. 43, no. 12, pp. 2982–2993, 1995.
- [4] Y. Hua, “Fast maximum likelihood for blind identification of multiple FIR channels,” IEEE Transactions on Signal Processing, vol. 44, no. 3, pp. 661–672, 1996.
- [5] W. Qiu and Y. Hua, “Performance comparison of three methods for blind channel identification,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, 1996, pp. 2423–2426.
- [6] W. Qiu and Y. Hua, “Performance analysis of the subspace method for blind channel identification,” Signal Processing, vol. 50, no. 1, pp. 71–81, 1996.
- [7] L. Tong and S. Perreau, “Multichannel blind identification: From subspace to maximum likelihood methods,” Proceedings of the IEEE, vol. 86, pp. 1951–1968, 1998.
- [8] Z. Xu, “Perturbation analysis for subspace decomposition with applications in subspace-based algorithms,” IEEE Transactions on Signal Processing, vol. 50, no. 11, pp. 2820–2830, 2002.
- [9] K. Abed-Meraim, J.-F. Cardoso, A. Y. Gorokhov, P. Loubaton, and E. Moulines, “On subspace methods for blind identification of single-input multiple-output FIR systems,” IEEE Transactions on Signal Processing, vol. 45, no. 1, pp. 42–55, 1997.