Structure-Based Subspace Method for Multi-Channel Blind System Identification


Qadri Mayyala, Karim Abed-Meraim, and Azzedine Zerguine. Q. Mayyala and A. Zerguine are with the Department of Electrical and Electronic Engineering, King Fahd University of Petroleum & Minerals, Saudi Arabia (e-mails: {qmayyala, azzedine}@kfupm.edu.sa). K. Abed-Meraim is with the PRISME Lab, University of Orléans, France (e-mail: karim.abed-meraim@univ-orleans.fr). The authors acknowledge the support provided by the Deanship of Scientific Research at KFUPM under Research Grant RG1414.
Abstract

In this work, a novel subspace-based method for blind identification of multichannel finite impulse response (FIR) systems is presented. Here, we directly exploit the Toeplitz channel structure embedded in the signal linear model to build a quadratic form whose minimization leads to the desired channel estimate up to a scalar factor. This method can be extended to estimate any predefined linear structure, e.g. Hankel, that is usually encountered in linear systems. Simulation findings are provided to highlight the appealing advantages of the new structure-based subspace (SSS) method over the standard subspace (SS) method in certain adverse identification scenarios.

Index Terms: Blind system identification, Toeplitz structure, subspace method.

I Introduction

Blind system identification (BSI) is one of the fundamental signal processing problems, initiated more than three decades ago. BSI refers to the process of retrieving the channel's impulse response based on the output sequence only. Since it has many different applications, such as mobile communications, seismic exploration, image restoration and medical applications, it has drawn researchers' attention and resulted in a plethora of methods. In particular, a class of subspace-based methods dedicated to BSI has been developed, including the standard subspace (SS) method [1, 2], the cross-relation (CR) method [3] and the two-step maximum likelihood (TSML) method [4]. According to the comparative studies carried out early on in [5] and [6], the SS method is claimed to be the most powerful one.

In this paper, we introduce another subspace-based method in which the channel's Toeplitz structure is employed directly to formulate the cost function. The Toeplitz structure is inherent to most linear systems due to their convolutive nature.

The paper first presents the development of the proposed structure-based subspace (SSS) method. Then, we highlight the improvement obtained by the SSS method over the SS method in the case of channels with closely spaced roots. The SSS method appears to be a promising technique, though its higher computational complexity needs to be addressed in future work.

Notation: The invertible column vector-matrix mappings are denoted by vec(·) and its inverse vec^{-1}(·). ⊗ is the Kronecker product. (·)^T and (·)^H denote the transpose and Hermitian transpose, respectively.

II Problem Formulation

II-A Multi-channel model

A multichannel framework is considered in this work. It is obtained either by oversampling the received signal, by using an array of antennas, or by a combination of both [7]. To develop the multichannel system model, consider first the observed signal resulting from a linear modulation transmitted over a linear channel with additive noise, given by

x(t) = Σ_{k=0}^{L} h(k) s(t-k) + w(t)    (1)

where h(·) is the FIR channel impulse response of order L, s(t) are the transmitted symbols and w(t) is the additive noise. If the received signal is oversampled by a factor q or recorded with q sensors, the signal model in (1) becomes q-variate and can be expressed as

x(t) = Σ_{k=0}^{L} h(k) s(t-k) + w(t)    (2)

where x(t) = [x_1(t), ..., x_q(t)]^T, h(k) = [h_1(k), ..., h_q(k)]^T, and w(t) = [w_1(t), ..., w_q(t)]^T. Define the system transfer function H(z) = Σ_{k=0}^{L} h(k) z^{-k}, with components H_1(z), ..., H_q(z). Consider the noise to be additive independent white circular noise with E[w(t) w(t)^H] = σ^2 I. Assuming the reception of a window of N samples and stacking the data into a vector/matrix representation, we get:

x_N(t) = T_N(h) s_N(t) + w_N(t)    (3)

where x_N(t) = [x(t)^T, ..., x(t-N+1)^T]^T, s_N(t) = [s(t), ..., s(t-N-L+1)]^T, w_N(t) is stacked in a similar way to x_N(t), and T_N(h) is a qN × (N+L) block Toeplitz matrix defined as

T_N(h) = [ h(0) h(1) ... h(L)   0   ...   0   ]
         [  0   h(0) h(1) ... h(L)  ...   0   ]
         [  :          .            .      :   ]
         [  0   ...    0   h(0) h(1) ... h(L) ]    (4)

Here, h is the desired parameter vector containing all the channel taps, i.e. the q(L+1) coefficients h_j(k), j = 1, ..., q, k = 0, ..., L, stacked into a single vector. Using the observation data in (3), our objective is to estimate the different channels' impulse responses, i.e., to recover h up to a possible scalar ambiguity. In the following subsection, we briefly review the subspace method.
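To make the stacking in (3)-(4) concrete, the following Python sketch builds the filtering matrix from the channel taps. It is an illustration only: the function name and the layout convention (block rows indexed by the delay, with the q taps of h(k) interleaved within each block row, as in the form of (4)) are assumptions and not the authors' code.

import numpy as np

def block_toeplitz(h, N):
    """Build the qN x (N+L) block Toeplitz filtering matrix T_N(h).

    h : array of shape (L+1, q) holding the taps h(0), ..., h(L) of the q sub-channels.
    N : processing window length.
    """
    Lp1, q = h.shape
    L = Lp1 - 1
    T = np.zeros((q * N, N + L), dtype=complex)
    for i in range(N):                      # block row index (delay)
        for k in range(Lp1):                # tap index
            T[i * q:(i + 1) * q, i + k] = h[k]
    return T

# Example: q = 2 sub-channels of order L = 2 and window N = 4 give an 8 x 6 matrix.
h = np.random.randn(3, 2) + 1j * np.random.randn(3, 2)
T = block_toeplitz(h, N=4)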

II-B Subspace method revisited

For consistency and the reader's convenience, the SS method [1], also referred to as the noise subspace method, is briefly reviewed hereafter. The SS method implicitly exploits the Toeplitz structure of the filtering matrix T_N(h). Let v_i, i = 1, 2, ..., be vectors in the orthogonal complement of the range space of T_N(h), such that

v_i^H T_N(h) = 0    (5)

Using the block Toeplitz structure of T_N(h), the above linear equation can be rewritten in terms of the channel parameter vector h as

G(v_i) h = 0    (6)

where G(v_i) is a matrix depending linearly on v_i, with a block structure analogous to that of (4).

The former equation can be used to estimate the channel vector provided that (6) has a unique solution. Moulines et al. [1] proposed the SS method which is based on the following theorem:

Theorem 1.

Assume that the components of H(z) have no common zeros and that the processing window length N is larger than the channel order L. Let {v_i} be a basis of the orthogonal complement of the column space of T_N(h); then for any vector h' satisfying v_i^H T_N(h') = 0 for all i, we have

h' = α h    (7)

where α is some scalar factor.

One common way to estimate the orthogonal complement of Range(T_N(h)), i.e. the noise subspace, is the signal/noise subspace decomposition. From the multichannel model and the noise properties, the received signal covariance matrix is given as

R = E[x_N(t) x_N(t)^H] = T_N(h) R_s T_N(h)^H + σ^2 I    (8)

where R_s = E[s_N(t) s_N(t)^H] is the source covariance matrix.

The singular value decomposition of R has the form

R = V_s Λ_s V_s^H + V_n Λ_n V_n^H    (9)

where the diagonal matrix Λ_s contains the N+L principal eigenvalues of the covariance matrix R. Also, the columns of V_s and V_n span the so-called signal and noise subspaces (orthogonal complement), respectively. Once a basis of the noise subspace is available, the channel identification can be performed based on the following quadratic optimization criterion:

min_{||h'|| = 1} || V_n^H T_N(h') ||^2    (10)
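To illustrate how (8)-(10) translate into an algorithm, here is a minimal Python sketch of the SS estimator. It reuses the hypothetical block_toeplitz helper sketched in Section II-A and builds the quadratic form of (10) by brute force over the canonical basis of h; the efficient closed-form construction of [1] is not reproduced here.

import numpy as np

def ss_channel_estimate(X, q, L, N):
    """Standard subspace (SS) estimate of the channel vector (up to a scalar).

    X : data matrix of shape (qN, T), whose columns are the stacked vectors x_N(t).
    """
    R = X @ X.conj().T / X.shape[1]            # sample covariance, cf. (8)
    eigval, eigvec = np.linalg.eigh(R)         # eigenvalues in ascending order, cf. (9)
    Vn = eigvec[:, : q * N - (N + L)]          # noise subspace (smallest eigenvalues)

    dim = q * (L + 1)
    cols = []
    for k in range(dim):                       # apply the criterion to each basis vector of h
        e = np.zeros(dim, dtype=complex)
        e[k] = 1.0
        cols.append((Vn.conj().T @ block_toeplitz(e.reshape(L + 1, q), N)).ravel())
    M = np.stack(cols, axis=1)                 # ||M h||^2 = ||Vn^H T_N(h)||^2
    Q = M.conj().T @ M
    w, V = np.linalg.eigh(Q)
    return V[:, 0]                             # unit-norm minimizer of (10)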

In brief, the SS method achieves the channel estimation by exploiting the subspace information (i.e., ideally, Range(T_N(h)) = Range(V_s)) as well as the block Sylvester (block Toeplitz) structure of the channel matrix. More precisely, it enforces the latter matrix structure through the use of relations (5) and (6) and minimizes the subspace orthogonality error in (10). In the sequel, we propose a dual approach which enforces the subspace information (i.e., the system matrix is constrained to the form V_s B, where V_s refers to the principal subspace of the sample covariance matrix) while minimizing a cost function representing its deviation from the Sylvester structure, as indicated in Table I.

TABLE I: Duality Table

Method | Toeplitz Structure | Orthogonality
SS     | forced             | minimized
SSS    | minimized          | forced

III Structure-Based SS method (SSS)

In the proposed subspace method, one searches for the system matrix in the form V_s B, so that the orthogonality criterion in (10) is identically zero (i.e. V_n^H V_s B = 0), while B is chosen in such a way that the resulting matrix is close to the desired block Toeplitz structure. This is done by minimizing w.r.t. B the following structure-based cost function (informal Matlab notation is used):

min_B J(B) = J_1(B) + J_2(B) + J_3(B)    (11)

where B is the (N+L) × (N+L) matrix of unknowns and V_s B denotes the candidate system matrix. The cost function in (11) is inspired by, and matched to, the Toeplitz structure introduced in (4). It is a composite of three parts: J_1 seeks to force the Toeplitz structure on the possibly non-zero entries, while J_2 and J_3 account for the zero entries in the first rows and the first column, respectively.

Starting with J_1, one can express it in a more compact way as follows:

(12)

where:
the first selection matrix is the left identity matrix with its last diagonal entries set to zero;
the second is the right identity matrix with its last diagonal entry set to zero;
the third is a square translation matrix with ones on the sub-diagonal and zeros elsewhere;
the fourth is a square translation matrix with ones on the super-diagonal and zeros elsewhere (see the illustrative sketch below).
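Since the compact expression (12) did not survive extraction, the exact sizes and roles of these matrices are not reproduced here; the short numpy lines below merely illustrate, for an arbitrary size n, what a truncated identity and the two translation (shift) matrices look like.

import numpy as np

n = 6                                           # illustrative size only
I_trunc = np.diag(np.r_[np.ones(n - 1), 0.0])   # identity with its last diagonal entry zeroed
Z_sub = np.eye(n, k=-1)                         # ones on the sub-diagonal
Z_sup = np.eye(n, k=1)                          # ones on the super-diagonal

x = np.arange(1.0, n + 1)
print(Z_sub @ x)                                # shifts the entries of x down by one position
print(Z_sup @ x)                                # shifts the entries of x up by one position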

Now, using the Kronecker product property vec(A X B) = (B^T ⊗ A) vec(X), one can write J_1 as follows:

J_1(B) = vec(B)^H Q_1 vec(B)    (13)

where Q_1 is the resulting positive semi-definite matrix, built from V_s and the selection/shift matrices above through Kronecker products. In a similar way, J_2 can be expressed as

J_2(B) = vec(B)^H Q_2 vec(B)    (14)

where the first matrix involved is the sub-matrix of V_s given by its first q rows, and the second is the square identity matrix with its first diagonal entries set to zero. Finally, J_3 can also be set up as

J_3(B) = vec(B)^H Q_3 vec(B)    (15)

where the first matrix involved is the sub-matrix of V_s given by its last q(N-1) rows, and the second is the square diagonal matrix with a one at the first diagonal entry and zeros elsewhere.
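Before combining the three terms, note that the vectorization identity used in (13)-(15) is the standard vec(A X B) = (B^T ⊗ A) vec(X), where vec(·) stacks the columns. The following quick numpy check (using column-major ravel to match that convention) confirms the identity:

import numpy as np

A = np.random.randn(3, 4)
X = np.random.randn(4, 5)
B = np.random.randn(5, 2)
vec = lambda M: M.ravel(order="F")              # column-stacking vec(.)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))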

As a result of (13), (14) and (15), the optimization problem in (11) reduces to the minimization of the following quadratic form:

min_{||vec(B)|| = 1} vec(B)^H Q vec(B)    (16)

where Q = Q_1 + Q_2 + Q_3.

The optimal solution of (16), under the unit-norm constraint on vec(B), is the eigenvector corresponding to the smallest eigenvalue of Q. The square matrix B is then constructed by reshaping the obtained solution from a vector into matrix form, and the system matrix estimate is formed as V_s B. Once this matrix is obtained, the channel taps are estimated by averaging over its non-zero diagonal blocks.
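For concreteness, here is a minimal end-to-end Python sketch of the SSS estimator. The structure cost implements one plausible reading of (11) consistent with the verbal description above (shift consistency between consecutive block rows/columns, plus zero patterns in the first block row and first column of V_s B); the matrix Q is built by brute force over the canonical basis of vec(B), ignoring the Kronecker shortcuts, and the block layout follows the same assumed convention as the earlier block_toeplitz sketch.

import numpy as np

def sss_channel_estimate(X, q, L, N):
    """Structure-based subspace (SSS) estimate of the channel vector (up to a scalar).

    X : data matrix of shape (qN, T), whose columns are the stacked vectors x_N(t).
    """
    R = X @ X.conj().T / X.shape[1]
    eigval, eigvec = np.linalg.eigh(R)
    Vs = eigvec[:, -(N + L):]                   # principal (signal) subspace

    def structure_residual(T):
        # Linear residual whose squared norm plays the role of J1 + J2 + J3.
        r1 = T[: q * (N - 1), : N + L - 1] - T[q:, 1:]   # block Toeplitz consistency
        r2 = T[:q, L + 1:]                               # zeros in the first block row
        r3 = T[q:, 0]                                    # zeros in the first column
        return np.concatenate([r1.ravel(), r2.ravel(), r3.ravel()])

    d = (N + L) ** 2
    cols = []
    for k in range(d):                          # apply the linear map to each basis matrix
        b = np.zeros(d, dtype=complex)
        b[k] = 1.0
        cols.append(structure_residual(Vs @ b.reshape(N + L, N + L)))
    M = np.stack(cols, axis=1)
    Q = M.conj().T @ M                          # quadratic form of (16)
    w, V = np.linalg.eigh(Q)
    B_opt = V[:, 0].reshape(N + L, N + L)       # eigenvector of the smallest eigenvalue
    T_hat = Vs @ B_opt

    # Average over the non-zero diagonal blocks of T_hat to recover h(0), ..., h(L).
    h = np.zeros((L + 1, q), dtype=complex)
    for k in range(L + 1):
        h[k] = np.mean([T_hat[i * q:(i + 1) * q, i + k] for i in range(N)], axis=0)
    return h.ravel()                            # ordered as h(0), ..., h(L)

In an actual implementation, Q would be assembled directly from the Kronecker expressions (13)-(15) rather than through this loop over all (N+L)^2 basis matrices.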

IV Discussion

In this section, we provide some comments to highlight the advantages and drawbacks of the proposed subspace method.

  • As explained earlier, the proposed approach consists of neglecting the subspace error (i.e. considering Range(V_s) as exact, in the sense that one searches for the desired solution within that subspace) while minimizing the system matrix (Toeplitz) structure error. The motivation behind this choice resides in the fact that the subspace error is null at the first order, as shown in [8], and hence it can be neglected in favor of more flexibility in searching for the appropriate channel matrix. This explains the observed gain of SSS over the SS method in certain difficult scenarios, including the case of closely spaced channel roots.

  • In the favorable cases where the channel matrix is well conditioned, the two subspace methods lead to similar performance, as illustrated next by the simulation example of Fig. 1.

  • For the SS method to apply, one needs the noise subspace vectors to generate a minimal polynomial basis of the rational subspace orthogonal to H(z) (see [1] for more details), and a condition on the processing window length is imposed to guarantee that this requirement holds. As the SSS does not explicitly rely on the orthogonality relation in (10), the latter condition might be relaxed, as illustrated by the simulation example of Fig. 5.

  • The proposed subspace method has a higher numerical cost compared to the SS method. However, the cost might be reduced by taking into account the structure of the Kronecker products involved in building the matrix Q. This issue is still under investigation, together with an asymptotic statistical performance analysis of SSS.

  • In the noise-free case, the solution of (16) can be shown to be unique (up to a constant) thanks to the identifiability result of Theorem 1. Indeed, if B' is another solution zeroing the criterion (11), then the FIR filter associated with the matrix V_s B' satisfies all conditions of Theorem 1, which leads to h' = α h or, equivalently, B' = α B.

V Simulation Results

In this section, the devised SSS method will be compared to the standard SS method as a benchmark. Three different experiments will be examined to illustrate the behavior of SSS in different contexts.

Two FIR channels are considered; each has a second-order impulse response given by [6]:

where the first parameter is the absolute phase value of the channels' zeros and the second indicates the angular distance between the zeros of the two channels on the unit circle. A small angular distance results in an ill-conditioned system. In all simulations, the excitation signal is 4-QAM, each channel receives a finite block of samples, and the noise is additive white Gaussian. Note that the SNR is defined as the ratio of the received signal power to the noise power.
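The exact channel expressions from [6] were lost in the extraction; the sketch below is only one plausible reconstruction of such a test pair, in which each second-order channel has a conjugate pair of unit-circle zeros and the pairs of the two channels are separated by an angle delta (a small delta giving closely spaced roots, hence an ill-conditioned system).

import numpy as np

def closely_spaced_channels(theta, delta):
    # Hypothetical test channels: conjugate zero pairs separated by the angle delta.
    z1 = np.exp(1j * theta)
    z2 = np.exp(1j * (theta + delta))
    h1 = np.real(np.convolve([1, -z1], [1, -np.conj(z1)]))   # [1, -2 cos(theta), 1]
    h2 = np.real(np.convolve([1, -z2], [1, -np.conj(z2)]))   # [1, -2 cos(theta + delta), 1]
    return h1, h2

h1, h2 = closely_spaced_channels(theta=np.pi / 4, delta=0.05)   # small delta: ill-conditioned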

The performance measure is the mean-square error (MSE), given as

MSE = (1/N_mc) Σ_{i=1}^{N_mc} || h_i - h ||^2

where N_mc refers to the number of Monte-Carlo runs and h_i is the channel vector estimate at the i-th run.
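Since the exact MSE expression was not recovered, the following helper computes one common variant: the average squared error after removing the scalar ambiguity by a least-squares fit of each estimate to the true channel (the exact normalization used by the authors may differ).

import numpy as np

def mse(h_true, h_estimates):
    # Average squared error over Monte-Carlo runs; the scalar ambiguity is removed by
    # the least-squares scale alpha = <h_hat, h> / ||h_hat||^2 (assumed convention).
    errs = []
    for h_hat in h_estimates:
        alpha = np.vdot(h_hat, h_true) / np.vdot(h_hat, h_hat)
        errs.append(np.linalg.norm(alpha * h_hat - h_true) ** 2)
    return np.mean(errs)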

In the first experiment, given in Fig. 1, we show that for a well-conditioned system both methods have comparable performance. In the second one, we consider ill-conditioned systems (i.e. poor channel diversity). In that case, the devised SSS method outperforms the SS method for a moderately ill-conditioned system, and its performance gain becomes more obvious in the severely ill-conditioned case, as shown in Figs. 2 and 3, respectively. When the system is ill-conditioned, the difference becomes more pronounced at low and moderate SNR values. Also, in the severely ill-conditioned case, the SS method becomes unresponsive to changes in the signal's SNR, as revealed in Fig. 3, since the effect of ill-conditioning is prominent at low SNR. Figure 4 depicts how the MSE varies with the zero separation parameter for SNR = 10 dB.

In the last experiment, the channel transfer functions are taken from [9]. Here, we are primarily interested in the impact of the processing window length on the estimation performance. As can be seen from the results reported in Fig. 5, the performance of the SS method degrades when the processing window length becomes smaller than the number of channel taps, while our proposed SSS is weakly affected by the window length condition. This allows one to reduce the dimension of the channel matrix by using smaller window sizes, which is especially useful for large dimensional systems.

Fig. 1: Well-conditioned channels.
Fig. 2: Ill-conditioned system.
Fig. 3: Severely ill-conditioned system.
Fig. 4: MSE versus the zero separation parameter (SNR = 10 dB).
Fig. 5: MSE versus SNR for different window sizes.

VI Conclusion

In this paper, we proposed a dual approach to the standard subspace method, whereby the channel matrix is forced to belong to the principal subspace of the data covariance matrix estimate while its deviation from the Toeplitz structure is minimized. By doing so, we showed that the channel estimation is significantly improved in the difficult context of weak channel diversity (i.e. channels with closely spaced roots). Interestingly, the principle of the proposed approach can be applied to estimation problems with other matrix structures where subspace methods can be used.

References

  • [1] E. Moulines, P. Duhamel, J.-F. Cardoso, and S. Mayrargue, "Subspace methods for the blind identification of multichannel FIR filters," IEEE Transactions on Signal Processing, vol. 43, no. 2, pp. 516–525, 1995.
  • [2] W. Kang and B. Champagne, "Subspace-based blind channel estimation: Generalization and performance analysis," IEEE Transactions on Signal Processing, vol. 53, no. 3, pp. 1151–1162, 2005.
  • [3] G. Xu, H. Liu, L. Tong, and T. Kailath, "A least-squares approach to blind channel identification," IEEE Transactions on Signal Processing, vol. 43, no. 12, pp. 2982–2993, 1995.
  • [4] Y. Hua, "Fast maximum likelihood for blind identification of multiple FIR channels," IEEE Transactions on Signal Processing, vol. 44, no. 3, pp. 661–672, 1996.
  • [5] W. Qiu and Y. Hua, "Performance comparison of three methods for blind channel identification," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, 1996, pp. 2423–2426.
  • [6] W. Qiu and Y. Hua, "Performance analysis of the subspace method for blind channel identification," Signal Processing, vol. 50, no. 1, pp. 71–81, 1996.
  • [7] L. Tong and S. Perreau, "Multichannel blind identification: From subspace to maximum likelihood methods," Proceedings of the IEEE, vol. 86, pp. 1951–1968, 1998.
  • [8] Z. Xu, "Perturbation analysis for subspace decomposition with applications in subspace-based algorithms," IEEE Transactions on Signal Processing, vol. 50, no. 11, pp. 2820–2830, 2002.
  • [9] K. Abed-Meraim, J.-F. Cardoso, A. Y. Gorokhov, P. Loubaton, and E. Moulines, "On subspace methods for blind identification of single-input multiple-output FIR systems," IEEE Transactions on Signal Processing, vol. 45, no. 1, pp. 42–55, 1997.