Abstract
This work studies the problem of simultaneously separating and reconstructing signals from compressively sensed linear mixtures. We assume that all source signals share a common sparse representation basis. The approach combines classical Compressive Sensing (CS) theory with a linear mixing model. It allows the mixtures to be sampled independently of each other. If samples are acquired in the time domain, this means that the sensors need not be synchronized. Since Blind Source Separation (BSS) from a linear mixture is only possible up to permutation and scaling, factoring out these ambiguities leads to a minimization problem on the so-called oblique manifold. To solve this problem, we develop a geometric conjugate subgradient method that scales to large systems. Numerical results demonstrate the promising performance of the proposed algorithm compared to several state-of-the-art methods.
Blind Source Separation with Compressively Sensed Linear Mixtures
Martin Kleinsteuber and Hao Shen
March 7, 2018
Index Terms
Compressed sensing, blind source separation, oblique manifold, conjugate subgradient method.
1 Introduction
Recovering source signals from their mixed observations alone, without prior information about either the sources or the mixing process, is often referred to as Blind Source Separation (BSS), cf. [2]. BSS methods are used in various challenging data analysis applications, such as functional Magnetic Resonance Imaging (fMRI) analysis and microarray analysis. In order to achieve reasonable performance, prominent methods, e.g. Independent Component Analysis (ICA), usually require a large number of observations [3]. Unfortunately, the availability of a large amount of data samples cannot be guaranteed in many real applications, due to either cost or time constraints.
The theory of compressed sensing (CS), cf. [4], shows that when a signal is sparse (or compressible) with respect to some basis, only a small number of samples suffices for exact (or approximate) recovery. Interestingly, the concept of sparsity has also been used as a separation criterion in the context of BSS. In [5], for example, a family of efficient algorithms in a probabilistic framework is proposed. In [6], the authors investigate sparse representations of signals in an overcomplete basis in conjunction with BSS and motivate the usage of the ℓ1-norm as a sparsity measure.
The approach presented in this work can be regarded as a generalization of Morphological Component Analysis (MCA), cf. [7] and [8]. Roughly speaking, while MCA exploits the sparse representations of the signals only for the separation task, we additionally employ the sparsity assumption for reconstruction. The resulting cost function is thus closely related to the ones that typically arise in MCA. Nevertheless, the present scenario with compressively sensed samples has not been studied before and thus differs from the existing MCA models. Although the approach has the potential to tackle the underdetermined BSS case, our numerical validation only considers the determined case, where the number of sources equals the number of mixtures.
In light of the classical result that the source signals can only be extracted up to a scaling ambiguity, we regularize our problem by restricting each column of the mixing matrix to have unit norm. In other words, the resulting optimization problem is defined on the so-called oblique manifold. Furthermore, as many potential applications of the present work lie in the area of image or audio signal processing, any promising algorithm has to be capable of scaling to large systems. It is well known that conjugate gradient-type methods provide a good trade-off between applicability to large-scale problems and convergence behavior. In this work, we propose an approach that lifts the conjugate subgradient method proposed by Wolfe in [9] to the manifold setting.
2 Problem Description
2.1 Notation and Setting
For convenience of presentation, we follow the notation of the compressive sensing literature and represent the signals as column vectors. The instantaneous linear BSS model is given by
(1)   W = S A,
where S = [s_1, …, s_n] ∈ R^{m×n} denotes the data matrix of n sources with m samples each (n ≤ m), A ∈ R^{n×n} is the mixing matrix of full rank, and W = [w_1, …, w_n] ∈ R^{m×n} represents the linear mixtures of S. The task of standard BSS is to estimate the sources S, given only the mixtures W.
We assume that for all sources s_i, there exists an (overcomplete) basis Φ = [φ_1, …, φ_d] ∈ R^{m×d}, d ≥ m, of R^m, referred to as the representation basis, such that s_i = Φ c_i with sparse c_i ∈ R^d. More compactly, this reads as
(2)   S = Φ C,
where C = [c_1, …, c_n] ∈ R^{d×n}.
2.2 Compressively sensed BSS model
Now let us take one step further and compressively sample each mixture individually. To this end, we denote the sampling matrix for the i-th mixture by M_i ∈ R^{p_i×m}, with p_i ≤ m, for i = 1, …, n. A compressively sensed observation of the i-th mixture is then given by
(3)   y_i := M_i w_i = M_i Φ C a_i ∈ R^{p_i},
where a_i denotes the i-th column of A. We refer to (3) as the Compressively Sensed BSS (CS-BSS) model. To summarize, the task considered in this work is formulated as follows: given the compressively sensed observations y_i, for i = 1, …, n, together with their corresponding sampling matrices M_i, estimate the mixing matrix A and the sparse representations C.
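As a concrete illustration, the CS-BSS model (1)-(3) can be simulated with synthetic data. The dimensions, the sparsity level, and the choice of an identity representation basis in the sketch below are illustrative assumptions, not values used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: n sources, m samples, dictionary size d.
n, m, d = 3, 128, 128
p = 48          # compressive samples per mixture (assumed)
k = 8           # nonzero entries per sparse representation (assumed)

# Sparse representations C = [c_1, ..., c_n] in R^{d x n}.
C = np.zeros((d, n))
for i in range(n):
    support = rng.choice(d, size=k, replace=False)
    C[support, i] = rng.laplace(size=k)

Phi = np.eye(m, d)              # representation basis (identity, for illustration)
S = Phi @ C                     # sources, S = Phi C  (eq. 2)
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, axis=0)  # unit-norm columns of the mixing matrix
W = S @ A                       # mixtures, W = S A  (eq. 1)

# Each mixture is sampled individually: y_i = M_i w_i = M_i Phi C a_i  (eq. 3).
Ms = [rng.standard_normal((p, m)) / np.sqrt(p) for _ in range(n)]
ys = [Ms[i] @ W[:, i] for i in range(n)]
```

Note that the sampling matrices M_i are drawn independently of each other, reflecting that the mixtures may be sensed without any coordination between sensors.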
There are various measures of sparsity available in the literature, cf. [10]. In this work, we confine ourselves to the ℓ1-norm as a measure of sparsity, because (i) it is well suited to numerical algorithms, in particular for large-scale problems; and (ii) it is an appropriate prior for many real signals, cf. [5]. This leads to the following optimization problem:
(4)   min_{A, C} Σ_{i=1}^n ||c_i||_1   subject to   M_i Φ C a_i = y_i,  i = 1, …, n.
In real applications, it is unavoidable that the observations are contaminated by noise. Hence, let η_i > 0 denote the error radius for the i-th observation. The optimization problem (4) then turns into
(5)   min_{A, C} Σ_{i=1}^n ||c_i||_1   subject to   ||M_i Φ C a_i − y_i||_2 ≤ η_i,  i = 1, …, n.
Following a standard approach, we formulate (5) in an unconstrained Lagrangian form, namely
(6)   f(A, C) := Σ_{i=1}^n (λ_i/2) ||M_i Φ C a_i − y_i||_2^2 + ||C||_1,
where ||C||_1 := Σ_{i,j} |c_{ij}|, and the scalars λ_i > 0 weigh the reconstruction error of each mixture individually according to η_i and balance these errors against the sparsity term ||C||_1. It is well known that, in compressive sensing, inappropriate regularization parameters may lead not only to slow convergence but also to poor local optima. To cope with this issue, we follow the adaptive updating strategy proposed in [11, 12]. A rigorous analysis of this updating strategy in our CS-BSS setting is beyond the scope of this work.
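With the notation above, the unconstrained cost (6) is straightforward to evaluate. The following sketch assumes NumPy arrays for all quantities and a fixed list of weights λ_i:

```python
import numpy as np

def cs_bss_cost(A, C, Phi, Ms, ys, lams):
    """Evaluate f(A, C) = sum_i (lam_i/2) ||M_i Phi C a_i - y_i||^2 + ||C||_1 (eq. 6)."""
    smooth = 0.0
    for i in range(A.shape[1]):
        r = Ms[i] @ Phi @ C @ A[:, i] - ys[i]   # residual of the i-th mixture
        smooth += 0.5 * lams[i] * (r @ r)
    return smooth + np.abs(C).sum()
```

The smooth part decomposes over the mixtures, which is what makes the per-mixture weighting by λ_i natural.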
2.3 Regularized CS-BSS problem
Obviously, problem (6) is ill-posed. Indeed, an optimization procedure could let the norm of A explode in order to drive ||C||_1 to zero, since scaling C by 1/t and A by t leaves the residuals in (6) unchanged. In order to regularize the problem, we therefore restrict to the case where ||a_i||_2 = 1 for all i = 1, …, n. Thus, together with the full-rank condition, we restrict the mixing matrix to the oblique manifold [13]
(7)   OB(n) := { A ∈ R^{n×n} | rk(A) = n, ddiag(A^T A) = I_n },
where I_n is the (n×n) identity matrix, and ddiag(Z) forms a diagonal matrix whose diagonal entries are those of Z. Note that this is a common approach in many BSS scenarios, since it is well known [14] that the mixing matrix is identifiable only up to column-wise scaling and permutation. The regularized problem that we consider is hence given by
(8)   min_{(A, C) ∈ OB(n) × R^{d×n}} f(A, C).
The optimization problem is thus defined on the product manifold M := OB(n) × R^{d×n}.
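For intuition, normalization onto the oblique manifold and the orthogonal projection onto its tangent space (which appears later in the Riemannian gradient) can be sketched as follows; `to_oblique` and `tangent_project` are hypothetical helper names:

```python
import numpy as np

def to_oblique(A):
    """Scale each column of A to unit norm, so that ddiag(A^T A) = I."""
    return A / np.linalg.norm(A, axis=0)

def tangent_project(A, X):
    """Orthogonal projection of X onto the tangent space at A in OB(n):
    X - A ddiag(A^T X). Each projected column is orthogonal to the
    corresponding (unit-norm) column of A."""
    return X - A * np.sum(A * X, axis=0)
```

The column-wise structure reflects that OB(n) is a product of unit spheres, one per column of the mixing matrix.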
3 A Conjugate Subgradient type method for the CS-BSS problem
3.1 Conjugate Subgradient on Riemannian manifolds
In order to tackle the non-smooth problem (8), we propose a conjugate subgradient type approach on manifolds by generalizing the method proposed in [9]. A generalization of subgradient methods to the Riemannian manifold case has been studied in [15]. We refer to [16] for an introduction to nonsmooth analysis on manifolds, where the required concepts are introduced. Generally, for minimizing a nonsmooth function f: M → R, where M is a Riemannian manifold and a subgradient of f exists at all x ∈ M, we propose the following scheme, further referred to as the Riemannian Conjugate Subgradient (CSG) method. Let T_x M be the tangent space at x, let ∂f(x) denote the Riemannian subdifferential of f at x, i.e. the set of all Riemannian subgradients of f at x, and let ||·||_x denote the norm on T_x M which is induced by the Riemannian metric ⟨·, ·⟩_x. Moreover, let
(9)   g*(x) := argmin_{g ∈ ∂f(x)} ||g||_x
denote the subgradient with smallest Riemannian norm. Let x_0 ∈ M be an initial point for the algorithm. Denote g_k := g*(x_k), and denote the descent direction at the k-th iteration by h_k ∈ T_{x_k} M. By initializing h_0 := −g_0, the algorithm now iteratively updates
(10)   x_{k+1} = exp_{x_k}(α_k h_k),
where exp denotes the Riemannian exponential map and the scalar α_k > 0 is the line-search parameter at the k-th iteration. Various line-search techniques for computing α_k exist, from which we choose backtracking line search [17], cf. also [18] for the Riemannian case, as it is conceptually simple and computationally cheap.
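A backtracking line search along the curve t ↦ exp_x(t h) can be written generically. The routine below is a minimal Armijo-type sketch in which the manifold enters only through a user-supplied `exp_map`; the constants are conventional defaults rather than values prescribed in this work:

```python
import numpy as np

def backtracking_step(f, x, h, slope, exp_map,
                      alpha0=1.0, rho=0.5, c=1e-4, max_iter=30):
    """Armijo backtracking along t -> exp_x(t * h).

    `slope` is the directional derivative <g, h>_x at x (negative for a
    descent direction); the step is halved until sufficient decrease holds."""
    fx = f(x)
    alpha = alpha0
    for _ in range(max_iter):
        if f(exp_map(x, alpha * h)) <= fx + c * alpha * slope:
            break
        alpha *= rho
    return alpha

# Sanity check in the Euclidean case (exp_x(v) = x + v): minimize ||x||^2.
f = lambda x: float(x @ x)
exp_map = lambda x, v: x + v
x = np.array([2.0, 0.0])
g = 2.0 * x                       # gradient at x
alpha = backtracking_step(f, x, -g, slope=-(g @ g), exp_map=exp_map)
```

On OB(n) × R^{d×n}, `exp_map` would be the Riemannian exponential assembled from the sphere geodesics of Section 3.3 and the identity on the Euclidean factor.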
There are several formulas available for updating the descent direction, e.g. Hestenes-Stiefel, Polak-Ribière, and Fletcher-Reeves, which can all be generalized straightforwardly to the manifold setting. Here, we restrict ourselves to a manifold adaption of Hestenes-Stiefel. Let y_{k+1} := g_{k+1} − τ g_k, where τ denotes the parallel transport along the geodesic from x_k to x_{k+1} into the tangent space T_{x_{k+1}} M. The direction update is then given by
(11)   h_{k+1} = −g_{k+1} + (⟨g_{k+1}, y_{k+1}⟩_{x_{k+1}} / ⟨τ h_k, y_{k+1}⟩_{x_{k+1}}) τ h_k.
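Once the previous subgradient and search direction have been parallel-transported into the current tangent space, the Hestenes-Stiefel update is a one-line formula. A minimal sketch, with the transport assumed to have been applied beforehand and `inner` standing for the Riemannian metric at the current point:

```python
import numpy as np

def hestenes_stiefel_direction(g_new, g_old_t, h_old_t, inner):
    """Manifold Hestenes-Stiefel update: g_old_t and h_old_t are the previous
    subgradient and direction, already transported into the current tangent
    space; returns the new search direction."""
    y = g_new - g_old_t                      # subgradient difference
    beta = inner(g_new, y) / inner(h_old_t, y)
    return -g_new + beta * h_old_t
```

In the Euclidean case, `inner` is simply the dot product and the transport is the identity, recovering the classical Hestenes-Stiefel rule.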
3.2 The Riemannian subgradient for CS-BSS
The manifold M = OB(n) × R^{d×n} is endowed with the Riemannian metric inherited from the surrounding Euclidean space, that is,
(12)   ⟨(ξ_1, Ξ_1), (ξ_2, Ξ_2)⟩ := tr(ξ_1^T ξ_2) + tr(Ξ_1^T Ξ_2).
Let r_i := M_i Φ C a_i − y_i for i = 1, …, n, and let
(13)   f(A, C) = Σ_{i=1}^n (λ_i/2) ||r_i||_2^2 + ||C||_1.
The Riemannian subdifferential of f at (A, C) with respect to the Riemannian metric (12) is given by
(14)   ∂f(A, C) = { (G_A, G_C) | G_C ∈ ∂_C f(A, C) },
where
(15)   ∂_C f(A, C) = ∂||C||_1 + Σ_{i=1}^n λ_i (M_i Φ)^T r_i a_i^T
and
(16)   G_A = Π_A([λ_1 (M_1 Φ C)^T r_1, …, λ_n (M_n Φ C)^T r_n]).
Here, ∂||C||_1 denotes the (entry-wise) subdifferential of the ℓ1-norm, and Π_A: R^{n×n} → T_A OB(n), X ↦ X − A ddiag(A^T X), is an orthogonal projection operator onto the tangent space.
Note that G_A is in fact a Riemannian gradient on the oblique manifold; hence we may write G_A = grad_A f(A, C). For the purpose of finding the subgradient with smallest Riemannian norm, each entry of G_C can be minimized independently. Let us denote the second summand on the right-hand side of (15) by
(17)   Q := Σ_{i=1}^n λ_i (M_i Φ)^T r_i a_i^T ∈ R^{d×n},
and define G_C* ∈ R^{d×n} as the matrix with (j, k)-entries
(18)   (G_C*)_{jk} := q_{jk} + sign(c_{jk})  if c_{jk} ≠ 0,   (G_C*)_{jk} := sign(q_{jk}) max(|q_{jk}| − 1, 0)  if c_{jk} = 0.
Then the subgradient with smallest Riemannian norm is given explicitly by
(19)   g*(A, C) = (G_A, G_C*).
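The entry-wise minimization behind the minimal-norm subgradient amounts to a soft-thresholding rule on those entries of Q where C vanishes. A sketch, with `min_norm_subgrad_C` a hypothetical helper name:

```python
import numpy as np

def min_norm_subgrad_C(Q, C):
    """Entry-wise minimal-norm element of  subdiff(||C||_1) + Q.

    Where c_jk != 0, the subdifferential of |c_jk| is the singleton
    {sign(c_jk)}; where c_jk == 0, it is the interval [-1, 1], and the
    norm-minimizing choice shrinks q_jk toward zero by 1 (a soft threshold)."""
    G = Q + np.sign(C)
    zero = (C == 0)
    G[zero] = np.sign(Q[zero]) * np.maximum(np.abs(Q[zero]) - 1.0, 0.0)
    return G
```

In particular, entries with c_jk = 0 and |q_jk| ≤ 1 contribute a zero subgradient entry, so the iteration does not leave the sparsity pattern unnecessarily.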
3.3 A Conjugate Subgradient Algorithm
The development of a Riemannian Conjugate Subgradient method requires the concept of parallel transport along a geodesic. Since we consider the oblique manifold as a Riemannian submanifold of a product of unit spheres, the formulas for the parallel transport as well as the exponential mapping read accordingly.
Let
(20)   τ_h(ξ) := ξ + (⟨h, ξ⟩ / ||h||^2) ((cos ||h|| − 1) h − ||h|| sin(||h||) a)
be the parallel transport of a tangent vector ξ ∈ T_a S^{n−1} along the great circle through a ∈ S^{n−1} in the direction h ∈ T_a S^{n−1} on the unit sphere S^{n−1} := { a ∈ R^n | ||a||_2 = 1 }, where the great circle is given by
(21)   γ(t) = cos(t ||h||) a + sin(t ||h||) h / ||h||.
Then the parallel transport of Ξ = [ξ_1, …, ξ_n] ∈ T_A OB(n) with respect to the Levi-Civita connection along the geodesic through A ∈ OB(n) in the direction H = [h_1, …, h_n] ∈ T_A OB(n), i.e.
(22)   Γ(t) = [γ_1(t), …, γ_n(t)],  with γ_i as in (21) for the pair (a_i, h_i),
is given by
(23)   τ_H(Ξ) = [τ_{h_1}(ξ_1), …, τ_{h_n}(ξ_n)].
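The sphere formulas translate directly into code; on OB(n) they are simply applied column-wise. A minimal sketch:

```python
import numpy as np

def sphere_exp(a, h):
    """Exponential map on the unit sphere: follow the great circle
    from a in the tangent direction h for unit time."""
    t = np.linalg.norm(h)
    if t < 1e-12:
        return a.copy()
    return np.cos(t) * a + np.sin(t) * h / t

def sphere_transport(a, h, xi):
    """Parallel transport of the tangent vector xi along the great circle
    from a in direction h: the component of xi along h rotates with the
    geodesic, while the orthogonal component is left untouched."""
    t = np.linalg.norm(h)
    if t < 1e-12:
        return xi.copy()
    u = h / t
    return xi + (u @ xi) * ((np.cos(t) - 1.0) * u - np.sin(t) * a)
```

Both operations preserve norms and map tangent vectors at the start point to tangent vectors at the end point, which is exactly what the conjugate subgradient iteration requires.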
A conjugate subgradient algorithm for the problem as defined in (8) is summarized as follows.
Algorithm 1: A conjugate subgradient CS-BSS algorithm

Step 1: Initialize A ∈ OB(n) and C ∈ R^{d×n}. Set x := (A, C).
Step 2: Set k = 0, let x_0 := x and g_0 := g*(x_0), and compute
        h_0 := −g_0 and x_1 = exp_{x_0}(α_0 h_0).
Step 3: For k = 1, 2, …:
  (a) Compute g_k := g*(x_k);
  (b) Update h_k = −g_k + β_{k−1} τ h_{k−1}, where β_{k−1} is chosen according to (11);
  (c) Update x_{k+1} = exp_{x_k}(α_k h_k), where α_k is determined by backtracking line search;
  (d) If ⟨−g_k, h_k⟩_{x_k} is small enough, i.e. h_k is no longer a reliable descent direction, set x := x_k and go to Step 2 (restart).
Step 4: If ||g_k||_{x_k} is small enough, stop. Otherwise, set x := x_k and go to Step 2.
4 Numerical Results
In our experiment, we investigate the performance of the proposed CSG algorithm in comparison with several state-of-the-art algorithms. Noting the similarity between the present work and the generalized MCA method in [8], we adapt the alternating iterative soft-thresholding method and the iterative hard-thresholding method [19] (where the ℓ0 quasi-norm is used as a sparsity measure) to our CS-BSS setting; we refer to them as Alt-IST and Alt-IHT, respectively. Similarly, we adapt the CSG algorithm in an alternating manner (referred to as Alt-CG-CSG). Finally, one common alternative for dealing with the non-differentiability of the cost function is to employ a smooth approximation; we refer to the standard CG algorithm applied to such a smooth approximation as smoothing CG.
In the experiment, we generate n = 3 source signals with m samples each. The nonzero entries of the sparse representations are generated according to a Laplacian distribution. We randomly pick p_i < m samples of each mixture as the compressively sensed observations. The performance is measured in terms of both separation quality, measured by the Amari error [20], and reconstruction quality in terms of signal-to-noise ratio (SNR). Figure 1 shows box plots of the results of 50 experiments for all methods. Among the five methods, Alt-CG-CSG performs worst on average in terms of both separation and reconstruction quality, while the proposed CSG algorithm provides consistent and satisfactory results. Finally, the small variances of all CG/CSG-type algorithms in terms of reconstruction also suggest their reliable performance compared with the thresholding-based approaches.
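The Amari error measures how far the product of the estimated unmixing matrix and the true mixing matrix is from a scaled permutation; it vanishes exactly on the unavoidable BSS ambiguities. Several normalizations exist in the literature; the sketch below uses one common variant and is not necessarily the exact formula of [20]:

```python
import numpy as np

def amari_error(A_est, A_true):
    """Amari index of P = inv(A_est) @ A_true: zero iff P is a permutation
    matrix up to column-wise scaling (one common normalization)."""
    P = np.abs(np.linalg.inv(A_est) @ A_true)
    n = P.shape[0]
    row = (P.sum(axis=1) / P.max(axis=1) - 1.0).sum()
    col = (P.sum(axis=0) / P.max(axis=0) - 1.0).sum()
    return (row + col) / (2 * n * (n - 1))
```

By construction, the index is invariant to exactly the scaling and permutation indeterminacies factored out by the oblique-manifold formulation.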
5 Conclusion
In this work, we have proposed a method for separating linearly mixed sparse signals from compressively sensed mixtures. The proposed approach allows each mixture to be sensed individually; if sampling takes place in the time domain, this means that the sensors do not have to be synchronized. The arising optimization problem is tackled with a conjugate subgradient type method, which is based on a new concept for optimizing non-differentiable functions under smooth constraints.
Footnotes
- M. Kleinsteuber and H. Shen are with the Department
of Electrical Engineering and Information Technology, Technische
Universität München, München, Germany.
e-mail: (see http://www.gol.ei.tum.de).
Authors are listed in alphabetical order due to equal contribution.
Parts of this work have been presented at the Workshop Signal Processing with Adaptive Sparse Structured Representations (SPARS’11), Edinburgh, June 2011 [1].
This work has partially been supported by the Cluster of Excellence CoTeSys - Cognition for Technical Systems, funded by the Deutsche Forschungsgemeinschaft (DFG).
References
- M. Kleinsteuber and H. Shen, “Blind source separation of compressively sensed signals,” in Proceedings of the Workshop on Signal Processing with Adaptive Sparse Structured Representations, 2011, p. 80.
- S. Haykin, Unsupervised Adaptive Filtering. Wiley-Interscience, 2000, vol. 1: Blind Source Separation.
- S. Bermejo, “Finite sample effects in higher order statistics contrast functions for sequential blind source separation,” IEEE Signal Processing Letters, vol. 12, no. 6, pp. 481–484, 2005.
- D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, pp. 1289–1306, 2006.
- M. Zibulevsky and B. A. Pearlmutter, “Blind source separation by sparse decomposition in a signal dictionary,” Neural Computation, vol. 13, no. 4, pp. 863–882, 2001.
- Y. Li, A. Cichocki, and S.-i. Amari, “Analysis of sparse representation and blind source separation,” Neural Computation, vol. 16, pp. 1193–1234, 2004.
- M. Elad, J. Starck, P. Querre, and D. L. Donoho, “Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA),” Applied and Computational Harmonic Analysis, vol. 19, no. 3, pp. 340–358, 2005.
- J. Bobin, J.-L. Starck, J. Fadili, and Y. Moudden, “Sparsity and morphological diversity in blind source separation,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2662–2674, 2007.
- P. Wolfe, “A method of conjugate subgradients for minimizing nondifferentiable functions,” in Nondifferentiable Optimization, ser. Mathematical Programming Studies, 1975, vol. 3, pp. 145–173.
- N. Hurley and S. Rickard, “Comparing measures of sparsity,” IEEE Transactions on Information Theory, vol. 55, no. 10, pp. 4723–4741, 2009.
- J. Bobin, J. Starck, J. M. Fadili, Y. Moudden, and D. L. Donoho, “Morphological component analysis: An adaptive thresholding strategy,” IEEE Transactions on Image Processing, vol. 16, no. 11, pp. 2675–2681, 2007.
- S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Transactions on Signal Processing, vol. 57, pp. 2479–2493, 2009.
- H. Shen and M. Kleinsteuber, “Complex blind source separation via simultaneous strong uncorrelating transform,” in Proceedings of the International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2010), ser. Lecture Notes in Computer Science, vol. 6365. Berlin/Heidelberg: Springer-Verlag, 2010, pp. 287–294.
- P. Comon, “Independent component analysis, a new concept?” Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
- O. P. Ferreira and P. R. Oliveira, “Subgradient algorithm on Riemannian manifolds,” Journal of Optimization Theory and Applications, vol. 97, pp. 93–104, 1998.
- Y. S. Ledyaev and Q. J. Zhu, “Nonsmooth analysis on smooth manifolds,” Transactions of the American Mathematical Society, vol. 359, no. 8, pp. 3687–3732, 2007.
- J. Nocedal and S. J. Wright, Numerical Optimization, 2nd Ed. New York: Springer, 2006.
- P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization Algorithms on Matrix Manifolds. Princeton, NJ: Princeton University Press, 2008.
- T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265–274, 2009.
- S. Amari, A. Cichocki, and H. H. Yang, “A new learning algorithm for blind signal separation,” Advances in Neural Information Processing Systems, vol. 8, pp. 757–763, 1996.