Lévy NMF for robust nonnegative source separation

Abstract

Source separation, which consists in decomposing data into meaningful structured components, is an active research topic in many areas, such as music and image signal processing, applied physics and text mining. In this paper, we introduce the Positive α-stable (PαS) distributions to model the latent sources; they form a subclass of the stable distributions family and notably permit us to model random variables that are both nonnegative and impulsive. Considering the Lévy distribution, the only PαS distribution whose density is tractable, we propose a mixture model called Lévy Nonnegative Matrix Factorization (Lévy NMF). This model accounts for low-rank structures in nonnegative data that possibly has high variability or is corrupted by very adverse noise. The model parameters are estimated in a maximum-likelihood sense. We also derive an estimator of the sources given the parameters, which extends the validity of generalized Wiener filtering to the PαS case. Experiments on synthetic data show that Lévy NMF compares favorably with state-of-the-art techniques in terms of robustness to impulsive noise. The analysis of two types of realistic signals is also considered: musical spectrograms and fluorescence spectra of chemical species. The results highlight the potential of the Lévy NMF model for decomposing nonnegative data.

Index Terms

Lévy distribution, positive α-stable distributions, nonnegative matrix factorization, source separation.

1 Introduction

Source separation consists in extracting underlying components called sources that add up to form an observable signal called mixture. This issue occurs in various fields such as data mining [1], face recognition [2] or applied physics [3].

A groundbreaking idea presented in [4] is to exploit the fact that the observations are often nonnegative, so that they should be decomposed as a sum of only nonnegative terms. This Nonnegative Matrix Factorization (NMF) has proven successful in many fields such as audio signal processing [5], computer vision [4], spectroscopy [6] and many others.

NMF, originally introduced as a rank-reduction method, approximates a nonnegative data matrix X as a product WH of two low-rank nonnegative matrices W and H. The factorization can be obtained by optimizing a cost function measuring the error between X and WH, such as the Euclidean, Kullback-Leibler (KL [4]) and Itakura-Saito (IS [7]) cost functions. This may often be framed in a probabilistic framework, where the cost function appears as the negative log-likelihood of the data, e.g. Gaussian [7, 8] or Poisson [9, 10]. Addressing the estimation problem in a Maximum A Posteriori (MAP) sense makes it possible to incorporate some prior distribution over the parameters W and H. This allows some regularization schemes enforcing desirable properties for the parameters such as harmonicity, temporal smoothness [11] or sparsity [12].

However, the above-mentioned distributions fail to provide good results when the data is very impulsive or contains outliers. This comes from their rapidly decaying tails, which cannot account for really unexpected observations. The family of heavy-tailed stable distributions [13] was thus found useful for robust signal processing [14, 15]. A subclass of this family, called the Symmetric α-stable (SαS) distributions, has been used in audio [16, 17] for modeling complex Short-Term Fourier Transforms (STFT). An estimation framework based on the Markov Chain Monte Carlo (MCMC) method has been proposed [18] to perform the separation of SαS mixtures. The common ground of these methods is to assume all observations as independent but to impose low-rank constraints on the nonnegative dispersion parameters of the sources, and not on their actual outcomes.

In this paper, we address the problem of modeling and separating nonnegative sources from their mixture, while still constraining their dispersion to follow an NMF model. To do so, we consider another subclass of the stable family which models nonnegative random variables: the positive α-stable (PαS) distributions. They also benefit from being heavy-tailed and are thus expected to yield robust estimates. Since the Probability Density Function (PDF) of PαS distributions does not admit a closed-form expression in general, we study more specifically the Lévy case, which is a particular analytically tractable member of the family. We introduce the Lévy NMF model, where the dispersion parameters of the sources are structured through an NMF model and where realizations are necessarily nonnegative. The parameters are then estimated in a Maximum Likelihood (ML) sense by means of a Majorize-Minimization approach. We also derive an estimator of the sources which extends the validity of the generalized Wiener filtering to the PαS case. Several experiments conducted on synthetic, audio and fluorescence spectroscopy signals show the potential of this model for a nonnegative source separation task and highlight its robustness to impulsive noise.

This paper is structured as follows. Section 2 introduces the Lévy NMF mixture model. Section 3 details the parameters estimation and presents an estimator of the sources. Section 4 experimentally demonstrates the denoising ability of the model and its potential in terms of source separation for both musical and chemometric applications.

2 Lévy NMF model

2.1 Positive α-stable distributions

Stable distributions, denoted S_α(μ, σ, β), are heavy-tailed distributions parametrized by four parameters: a shape parameter α ∈ (0, 2] which determines the tail thickness of the distribution (the smaller α, the heavier the tail of the PDF), a location parameter μ, a scale parameter σ measuring the dispersion of the distribution around its mode, and a skewness parameter β ∈ [−1, 1]. The symmetric α-stable (SαS) distributions, which are such that β = 0, are an important subclass of the stable family and a growing topic of interest, notably in audio [16, 18].

Such distributions are said to be "stable" because of their additive property: a sum of independent stable random variables with the same shape and skewness parameters is also a stable random variable: if X₁ ∼ S_α(μ₁, σ₁, β) and X₂ ∼ S_α(μ₂, σ₂, β) are independent, then X₁ + X₂ ∼ S_α(μ, σ, β), with μ = μ₁ + μ₂ and σ^α = σ₁^α + σ₂^α.

Stable distributions do not in general have a nonnegative support. However, it can be shown [19] that when α < 1 and β = 1, the support of the distribution is [μ, +∞). In this paper, we consider that μ = 0, thus the support is ℝ₊. The Positive α-stable (PαS) distributions are therefore such that X ∼ PαS(σ) ⟺ X ∼ S_α(0, σ, 1) with α < 1.

2.2 Lévy NMF mixture model

The only α for which the PDF of a PαS distribution can be expressed in closed form is the Lévy case α = 1/2. For x > 0:

f(x; σ) = √(σ/(2π)) exp(−σ/(2x)) / x^{3/2}.   (1)
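As a quick numerical check of the Lévy density, and a convenient way to simulate from the model, one can use the classical identity that if Z is standard normal then σ/Z² follows a Lévy distribution with scale σ (this identity is standard probability, not taken from the paper; the function name below is illustrative). A sketch in Python with NumPy:

```python
import numpy as np

def sample_levy(sigma, size, seed=None):
    """Draw Lévy(0, sigma) samples via X = sigma / Z**2 with Z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(size)
    return sigma / z**2  # nonnegative, very heavy right tail

# The CDF of Lévy(0, sigma) is erfc(sqrt(sigma / (2 x))); at x = sigma it
# equals erfc(1/sqrt(2)) ≈ 0.3173, which the empirical frequency should match.
x = sample_levy(2.0, size=200_000, seed=0)
print((x <= 2.0).mean())  # ≈ 0.317
```

The heavy tail is visible in practice: a few samples are orders of magnitude larger than the scale parameter, which is exactly the impulsive behavior the model targets.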

We model all the entries x_{fn} of a data matrix X ∈ ℝ₊^{F×N} as independent, and assume they are the sum of K independent Lévy-distributed components x_{k,fn} ∼ Lévy(σ_{k,fn}). Note that all entries are independent, which allows us to extend the standard notation to matrices. Then X ∼ Lévy(Λ), with Λ^{⊙1/2} = Σ_k Λ_k^{⊙1/2}, where ⊙ denotes the element-wise power.

The scale parameters are structured by means of an NMF [4], which preserves the additive property of the model as in [17]: Λ^{⊙1/2} = WH, with W ∈ ℝ₊^{F×K} and H ∈ ℝ₊^{K×N}. We then refer to this model as the Lévy NMF model.
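The additivity underlying this construction can be checked numerically: for α = 1/2 and β = 1, scale parameters add through their square roots, so the sum of Lévy(σ₁) and Lévy(σ₂) variables is Lévy(σ) with √σ = √σ₁ + √σ₂. A small sketch (NumPy assumed; the simulation uses the standard identity X = σ/Z² with Z standard normal):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300_000
# Independent Lévy components with scales 1 and 4: sqrt-scales 1 and 2.
x1 = 1.0 / rng.standard_normal(n) ** 2
x2 = 4.0 / rng.standard_normal(n) ** 2
x = x1 + x2  # should be Lévy with scale (1 + 2)**2 = 9

# Compare the empirical CDF at t = 9 with the analytic Lévy CDF
# P(X <= t) = erfc(sqrt(sigma / (2 t))), which at t = sigma = 9 is
# erfc(1/sqrt(2)) ≈ 0.3173.
print((x <= 9.0).mean())  # ≈ 0.317
```

This is the property that makes an NMF on the square-root scales consistent with summing Lévy components.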

3 Parameters estimation

The parameters W and H are estimated in a Maximum Likelihood (ML) sense, which is natural in a probabilistic framework. The log-likelihood of the data is given by:

log p(X; W, H) ≐ −(1/2) Σ_{fn} ( [WH]²_{fn}/x_{fn} − log [WH]²_{fn} ) ≐ −(1/2) D_IS( (WH)^{⊙2} | X ),

where ≐ denotes equality up to an additive constant which does not depend on the parameters and D_IS denotes the IS divergence [7]. We thus remark that maximizing the log-likelihood of the data in the Lévy NMF model is equivalent to minimizing the IS divergence between (WH)^{⊙2} and X, which boils down to minimizing the following cost function:

C(W, H) = Σ_{fn} ( [WH]²_{fn}/x_{fn} − 2 log [WH]_{fn} ).   (2)
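This equivalence is easy to verify numerically: with scales σ_{fn} = [WH]²_{fn}, the exact negative log-likelihood and half the IS divergence between (WH)^{⊙2} and X differ by a quantity that depends on X only, hence is constant during optimization. A sketch (NumPy assumed; the shapes and function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
F, N, K = 6, 8, 3
X = rng.random((F, N)) + 0.1  # positive data

def neg_loglik(X, W, H):
    """Exact negative log-likelihood of X under independent Lévy(0, [WH]^2) entries."""
    s = (W @ H) ** 2  # Lévy scale parameters
    return np.sum(-0.5 * np.log(s) + s / (2 * X)
                  + 1.5 * np.log(X) + 0.5 * np.log(2 * np.pi))

def d_is(A, B):
    """Itakura-Saito divergence between positive matrices A and B."""
    R = A / B
    return np.sum(R - np.log(R) - 1)

# The gap between the two criteria is the same additive constant for any
# parameter setting, so it is invisible to the optimizer.
W1, H1 = rng.random((F, K)) + 0.1, rng.random((K, N)) + 0.1
W2, H2 = rng.random((F, K)) + 0.1, rng.random((K, N)) + 0.1
gap1 = neg_loglik(X, W1, H1) - 0.5 * d_is((W1 @ H1) ** 2, X)
gap2 = neg_loglik(X, W2, H2) - 0.5 * d_is((W2 @ H2) ** 2, X)
print(abs(gap1 - gap2))  # ~ machine precision
```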

3.1 Naive multiplicative updates

The cost function (2) can be minimized with the same heuristic approach that was pioneered in [4] and used in many NMF-related papers in the literature. The gradient of C with respect to a parameter θ (W or H) is expressed as the difference between two nonnegative terms, ∇_θ C = ∇⁺_θ − ∇⁻_θ, which leads to the multiplicative update rules (MUR):

θ ← θ ⊙ (∇⁻_θ / ∇⁺_θ),   (3)

where ⊙ (resp. the fraction bar) denotes the element-wise matrix multiplication (resp. division). For Lévy NMF:

∇⁺_W = 2 ((WH) / X) Hᵀ  and  ∇⁻_W = 2 (1 / (WH)) Hᵀ,   (4)

and

∇⁺_H = 2 Wᵀ ((WH) / X)  and  ∇⁻_H = 2 Wᵀ (1 / (WH)).   (5)

Provided W and H have been initialized with nonnegative entries, they remain nonnegative throughout the iterations. However, these updates do not guarantee a non-increasing cost function C, which motivates the search for a novel optimization approach.

3.2 Majorize-Minimization updates

An alternative way to derive update rules for estimating the parameters is to adopt a Majorize-Minimization (MM) approach [20]. The core idea of this strategy is to find an auxiliary function G which majorizes the cost function C:

∀ θ, θ̃,  C(θ) ≤ G(θ, θ̃)  and  C(θ) = G(θ, θ).   (6)

Then, given some current parameter θ̃, we aim at minimizing G(·, θ̃) in order to obtain a new parameter θ. This approach guarantees that the cost function will be non-increasing over iterations, since C(θ) ≤ G(θ, θ̃) ≤ G(θ̃, θ̃) = C(θ̃). Such an auxiliary function is obtained in a similar fashion as in [21, 22]. We have:

[WH]_{fn} = Σ_k λ_{fkn} (w_{fk} h_{kn} / λ_{fkn}),   (7)

where

λ_{fkn} = w̃_{fk} h_{kn} / [W̃H]_{fn},   (8)

and W̃ (with entries w̃_{fk}) is an auxiliary parameter. Since Σ_k λ_{fkn} = 1, we can apply Jensen's inequality to the convex function x ↦ x²:

[WH]²_{fn} ≤ Σ_k (w_{fk} h_{kn})² / λ_{fkn},   (9)

which finally leads to the following majorization of the first term of C in (2):

Σ_{fn} [WH]²_{fn}/x_{fn} ≤ Σ_{fn} (1/x_{fn}) Σ_k (w_{fk} h_{kn})² / λ_{fkn}.   (10)

In a similar fashion, we majorize the second term in (2) (x ↦ −log x is also a convex function) and therefore we obtain the following auxiliary function G:

G(W, W̃) = Σ_{fn} ( (1/x_{fn}) Σ_k (w_{fk} h_{kn})²/λ_{fkn} − 2 Σ_k λ_{fkn} log( w_{fk} h_{kn} / λ_{fkn} ) ).   (11)

We then set the partial derivative of G with respect to w_{fk} to zero, which leads to an update rule on W. The update rule on H is obtained in exactly the same way. Thus, the updates are:

W ← W ⊙ ( ∇⁻_W / ∇⁺_W )^{⊙1/2}  and  H ← H ⊙ ( ∇⁻_H / ∇⁺_H )^{⊙1/2},   (12)

where ∇⁺ and ∇⁻ are given by (4) and (5). A MATLAB implementation of this algorithm can be found at [23].
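For illustration, the MM scheme can be sketched in Python with NumPy (a sketch, not the authors' MATLAB implementation [23]; the small epsilons guarding divisions are an implementation choice): each iteration multiplies W and H by the square root of the ratio between the negative and positive gradient terms, which keeps the IS cost between (WH)^{⊙2} and X non-increasing.

```python
import numpy as np

def levy_nmf(X, K, n_iter=100, eps=1e-12, seed=None):
    """Lévy NMF via Majorize-Minimization: fit scales (WH)^2 to nonnegative data X."""
    rng = np.random.default_rng(seed)
    F, N = X.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    for _ in range(n_iter):
        V = W @ H
        # MM update: multiply by sqrt(grad_neg / grad_pos); the factors 2 cancel.
        W *= np.sqrt(((1.0 / (V + eps)) @ H.T) / ((V / (X + eps)) @ H.T + eps))
        V = W @ H
        H *= np.sqrt((W.T @ (1.0 / (V + eps))) / (W.T @ (V / (X + eps)) + eps))
    return W, H

def is_cost(X, W, H, eps=1e-12):
    """IS divergence between (WH)^2 and X, i.e. the Lévy NMF cost up to constants."""
    R = (W @ H) ** 2 / (X + eps) + eps
    return float(np.sum(R - np.log(R) - 1))
```

In practice one would monitor is_cost over iterations; with the same initialization, running more iterations can only lower it, which is the point of the MM construction.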

3.3 Estimator of the components

For a source separation task, it can be useful, once the model parameters are estimated, to derive an estimator of the isolated components X_k. In a probabilistic framework, a natural estimator is given by the posterior expectation of the source given the mixture, E(x_{k,fn} | x_{fn}). It has been shown [16] that for SαS random variables, such an estimator is provided by a generalized Wiener filtering. One contribution of this paper is to prove that this result still holds for the PαS family, which extends the α-Wiener filter to the separation of nonnegative variables. Derivations may be found in the companion technical report for this paper [24]. For Lévy-distributed random variables (α = 1/2), we then have:

x̂_{k,fn} = ( w_{fk} h_{kn} / [WH]_{fn} ) x_{fn}.   (13)
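Given estimated factors, the filter in (13) amounts to masking the mixture with the rank-one contribution of each component; since the masks sum to one, the component estimates add back up to the mixture exactly. A sketch under these conventions (NumPy assumed, names illustrative):

```python
import numpy as np

def levy_wiener(X, W, H, eps=1e-12):
    """Posterior-mean component estimates: mask_k = outer(w_k, h_k) / (WH)."""
    V = W @ H + eps
    return [np.outer(W[:, k], H[k]) / V * X for k in range(W.shape[1])]

# The masks form a partition of unity, so the separation is conservative:
# sum_k X_k_hat == X (up to the numerical guard eps).
```

This conservativity is often desirable in practice: no energy is created or destroyed by the separation step itself.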

4 Experimental evaluation

4.1 Fitting impulsive noise

To test the ability of Lévy NMF to model impulsive noise, we generated pairs of nonnegative components (W, H) by raising random Gaussian noise to a power, in order to obtain sparse components. The entries of the product WH were then used as the scale parameters of independent PαS random observations, for various values of α: small values of α lead to very impulsive observations. We ran iterations of the Lévy, IS [7], KL [4] and Cauchy [17] NMFs, and RPCA [25, 26] algorithms with rank K. To measure the quality of the estimation, we computed the KL divergence and the α-dispersion, defined as a function of the data shape parameter α:

(14)

where σ contains the synthetic parameters and σ̂ contains the parameters estimated with the different algorithms. Results averaged over synthetic data runs are presented in Fig. 1. We observe that the Lévy NMF algorithm shows very similar results to those obtained using RPCA or Cauchy NMF, with slightly better results than these methods for very small values of α. The reconstruction quality is considerably better than with ISNMF and KLNMF. Those results demonstrate the potential of the Lévy NMF model for fitting data which may be very impulsive.

Figure 1: Fitting impulsive noise by different algorithms, measured with the -dispersion and KL divergence.

4.2 Music spectrogram inpainting

We propose to test the denoising ability of the Lévy NMF model when the data is corrupted by very impulsive noise. When audio spectrograms are corrupted by such noise, the retrieval of the lost information is known as an audio inpainting task. We consider guitar songs from the IDMT-SMT-GUITAR [27] database, sampled at Hz. The data is obtained by taking the magnitude spectrogram of the STFT of the mixture signals, computed with a ms-long Hann window and overlap. The spectrograms are then corrupted with synthetic impulsive noise that represents of the data. We then run iterations of the algorithms with rank in order to estimate the clean spectrograms.
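The robustness at play here can be reproduced on a toy example (a sketch only: the low-rank "spectrogram", the impulse amplitude, the corruption rate and the rank below are made up, and the updates are the MM rules of Section 3): fit (WH)² to a low-rank nonnegative matrix corrupted by a few huge impulses, and observe that the heavy-tailed Lévy likelihood largely ignores the outliers.

```python
import numpy as np

rng = np.random.default_rng(5)
F, N, K = 20, 30, 2

# Clean low-rank nonnegative "spectrogram" and sparse impulsive corruption.
W0, H0 = rng.random((F, K)) + 0.1, rng.random((K, N)) + 0.1
clean = (W0 @ H0) ** 2
mask = rng.random((F, N)) < 0.05   # ~5% of entries corrupted
X = clean + 1000.0 * mask          # impulses dwarf the clean values

# Lévy NMF via the MM multiplicative updates (sqrt of gradient ratio).
eps = 1e-12
W = rng.random((F, K)) + eps
H = rng.random((K, N)) + eps
for _ in range(200):
    V = W @ H
    W *= np.sqrt(((1.0 / (V + eps)) @ H.T) / ((V / (X + eps)) @ H.T + eps))
    V = W @ H
    H *= np.sqrt((W.T @ (1.0 / (V + eps))) / (W.T @ (V / (X + eps)) + eps))

fit = (W @ H) ** 2
# At the corrupted entries the fit stays near the clean low-rank values,
# far below the impulse amplitude: overshooting sigma is penalized linearly
# by the clean entries, while an outlier pulls it up only logarithmically.
print(np.median(fit[mask]))
```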

We present the obtained spectrograms in Fig. 2 (KLNMF and ISNMF lead to similar results). It appears that the traditional NMF techniques are not able to denoise the data: the estimation of the parameters is deteriorated by the presence of impulsive noise. Conversely, the noise has been entirely removed in the Lévy NMF estimate. This is confirmed in Table 1 which presents the quality of the estimation measured with the KL divergence between the original and estimated spectrograms averaged over the audio excerpts. The best results are obtained with Lévy and Cauchy NMFs.

Remarkably, none of the algorithms used above is informed with any prior knowledge about the location of the noise. As a complementary experiment, we informed ISNMF with this knowledge by incorporating a mask containing the position of the noise into the NMF, resulting in a weighted ISNMF [28]. The blind Lévy NMF still leads to better results than the informed ISNMF. It is thus promising for robust musical applications such as audio inpainting, since it does not require any additional noise modeling or detection technique.

Figure 2: Music spectrogram restoration.
Table 1: Average log-KL divergence between clean and estimated music spectrograms (ISNMF, KLNMF, Cauchy NMF, RPCA, Lévy NMF, weighted ISNMF).

4.3 Application to fluorescence spectroscopy

Fluorescence spectroscopy [6] consists in measuring the excitation-emission spectra of a mixture, which is modeled as the weighted sum of the spectra of the pure species. Performing source separation on these data allows us to identify the pure species composing the mixture and their concentrations. NMF-based methods have shown promising results for this task [29], since the data is nonnegative and the NMF model conforms to the underlying assumption of additivity of pure spectra. In this framework, W represents the spectra and H is the concentration matrix.

We propose to apply the Lévy NMF model to such a task. We compare it with NMF under the Euclidean distance (EuNMF) [30] and under the KL divergence [31]. Indeed, the other techniques (ISNMF, Cauchy NMF and RPCA) focus on the modeling of complex-valued data, or do not enforce nonnegativity of the parameters. Thus, it would not be theoretically justified to use those methods in this context.

Each algorithm uses iterations of MUR. The dataset is detailed in [29]. In a nutshell, it consists of emission spectra (with frequency channels) of mixtures of three components (bound ferulic acid, free ferulic acid and p-coumaric acid) with unknown concentrations. Both W and H are learned directly from the mixtures. We also have access to the pure spectra of the components. As shown in Fig. 3, the Lévy NMF model seems to be an appropriate tool to learn pure fluorescence spectra from their mixtures. Globally, the Euclidean, KL and Lévy NMFs all approximate the original pure spectra quite accurately, though the Lévy NMF estimate seems to approach the spectra from above.

Figure 3: Pure components’ spectra learned with several NMF methods.

Finally, we estimate the isolated sources by means of (13) for the different methods. As a comparison reference, we also learn the concentration matrix when assuming the pure spectra are known (Oracle case) by means of EuNMF (results obtained with other NMFs are similar). The similarity between the Oracle sources and the estimated sources is measured by means of the correlation, the values of which are presented in Table 2. The best performance is obtained with the Lévy NMF method for all sources, which confirms the potential of such a model in various areas of research involving source separation of nonnegative data.

Table 2: Correlation between Oracle and estimated sources (bound ferulic acid, free ferulic acid, p-coumaric acid), for EuNMF, KLNMF and Lévy NMF.

5 Conclusion

In this paper, we introduced the Lévy NMF model, which structures the dispersion parameters of PαS-distributed sources in the Lévy case α = 1/2. Experiments have shown the potential of this model for robustly decomposing realistic nonnegative data.

Such a model could be useful in many other fields where the source separation issue frequently occurs and where the Lévy distribution finds applications, such as optics [32] or geomagnetic field analysis [33]. Future work could focus on novel estimation techniques for the Lévy NMF model, using for instance a MAP estimator, which would permit us to incorporate some prior knowledge about the parameters [11]. Besides, drawing on [18], the family of techniques based on MCMC could be useful to estimate the parameters of any PαS distribution. Alternatively, one could extend the Lévy NMF model to the inverse-gamma (IG) family, of which the Lévy distribution is a special case. Although this would sacrifice additivity, and thus the theoretical foundation for α-Wiener filtering, it would allow for convenient analytical derivations thanks to tractable likelihood functions. This strategy is reminiscent of recent work [34] where the tractable Student's t distribution is used instead of the symmetric α-stable one.

Acknowledgment

The authors would like to thank Cyril Gobinet from University of Reims Champagne-Ardenne, France, for providing them the fluorescence spectroscopy data used in this study.

References

  1. V. P. Pauca, F. Shahnaz, M. W. Berry, and R. J. Plemmons, “Text mining using non-negative matrix factorizations,” in Proc. SIAM international conference on data mining, Lake Buena Vista, Florida, USA, January 2004, pp. 452–456.
  2. D. Guillamet and J. Vitria, “Classifying faces with nonnegative matrix factorization,” in Proc. the Catalan conference for artificial intelligence, Castellón, Spain, October 2002, pp. 24–31.
  3. P. Sajda, S. Du, T. R. Brown, R. Stoyanova, D. C. Shungu, X. Mao, and L. C. Parra, “Nonnegative matrix factorization for rapid recovery of constituent spectra in magnetic resonance chemical shift imaging of the brain,” IEEE Transactions on Medical Imaging, vol. 23, no. 12, pp. 1453–1465, December 2004.
  4. D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature, vol. 401, no. 6755, pp. 788–791, 1999.
  5. P. Smaragdis and J. C. Brown, “Non-negative matrix factorization for polyphonic music transcription,” in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA, October 2003.
  6. P. Liu, X. Zhou, Y. Li, M. Li, D. Yu, and J. Liu, “The application of principal component analysis and non-negative matrix factorization to analyze time-resolved optical waveguide absorption spectroscopy data,” Analytical Methods, vol. 5, pp. 4454–4459, 2013.
  7. C. Févotte, N. Bertin, and J.-L. Durrieu, “Nonnegative matrix factorization with the Itakura-Saito divergence: With application to music analysis,” Neural computation, vol. 21, no. 3, pp. 793–830, March 2009.
  8. M. N. Schmidt and H. Laurberg, “Nonnegative Matrix Factorization with Gaussian Process Priors,” Computational Intelligence and Neuroscience, vol. 2008, no. 3, pp. 3:1–3:10, January 2008.
  9. T. Virtanen, A. T. Cemgil, and S. Godsill, “Bayesian extensions to non-negative matrix factorisation for audio signal modelling,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   Las Vegas, NV, USA: IEEE, May 2008, pp. 1825–1828.
  10. A. T. Cemgil, “Bayesian inference for nonnegative matrix factorisation models,” Computational Intelligence and Neuroscience, vol. 2009, 2009, article ID 785152, 17 pages.
  11. N. Bertin, R. Badeau, and E. Vincent, “Enforcing harmonicity and smoothness in Bayesian non-negative matrix factorization applied to polyphonic music transcription,” IEEE Transactions on Audio, Speech and Language Processing, vol. 18, no. 3, pp. 538–549, March 2010.
  12. T. Virtanen, “Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 3, pp. 1066–1074, March 2007.
  13. G. Samoradnitsky and M. S. Taqqu, Stable non-Gaussian random processes: stochastic models with infinite variance.   CRC Press, 1994.
  14. S. Godsill and E. E. Kuruoglu, “Bayesian inference for time series with heavy-tailed symmetric α-stable noise processes,” Applications of Heavy Tailed Distributions in Economics, Engineering and Statistics (Heavy Tails 99), pp. 3–5, 1999.
  15. N. Bassiou, C. Kotropoulos, and E. Koliopoulou, “Symmetric α-stable sparse linear regression for musical audio denoising,” in Proc. International Symposium on Image and Signal Processing and Analysis (ISPA).   Trieste, Italy: IEEE, September 2013, pp. 382–387.
  16. A. Liutkus and R. Badeau, “Generalized Wiener filtering with fractional power spectrograms,” in Proc. the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, April 2015.
  17. A. Liutkus, D. Fitzgerald, and R. Badeau, “Cauchy Nonnegative Matrix Factorization,” in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA, October 2015.
  18. U. Simsekli, A. Liutkus, and A. T. Cemgil, “Alpha-stable matrix factorization,” IEEE Signal Processing Letters, vol. 22, no. 12, pp. 2289–2293, 2015.
  19. J. P. Nolan, Stable Distributions - Models for Heavy Tailed Data.   Boston: Birkhauser, 2015, in progress, Chapter 1 online at academic2.american.edu/jpnolan.
  20. D. R. Hunter and K. Lange, “A tutorial on MM algorithms,” The American Statistician, vol. 58, no. 1, pp. 30–37, 2004.
  21. C. Févotte and J. Idier, “Algorithms for nonnegative matrix factorization with the beta-divergence,” Neural Computation, vol. 23, no. 9, pp. 2421–2456, September 2011.
  22. C. Févotte, “Majorization-minimization algorithm for smooth Itakura-Saito nonnegative matrix factorization,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   Prague, Czech Republic: IEEE, May 2011, pp. 1980–1983.
  23. “Webpage,” http://www.perso.telecom-paristech.fr/magron/ressources/levyNMF.m.
  24. P. Magron, R. Badeau, and A. Liutkus, “Generalized Wiener filtering for positive alpha-stable random variables,” Télécom ParisTech, Tech. Rep. D004, 2016.
  25. E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM (JACM), vol. 58, no. 3, p. 11, 2011.
  26. P.-S. Huang, S. D. Chen, P. Smaragdis, and M. Hasegawa-Johnson, “Singing-voice separation from monaural recordings using robust principal component analysis,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, March 2012, pp. 57–60.
  27. C. Kehling, A. Jakob, D. Christian, and S. Gerald, “Automatic tablature transcription of electric guitar recordings by estimation of score- and instrument-related parameters,” in Proc. the International Conference on Digital Audio Effects (DAFx), Erlangen, Germany, September 2014, p. 8.
  28. A. Limem, G. Delmaire, M. Puigt, G. Roussel, and D. Courcot, “Non-negative matrix factorization using weighted beta divergence and equality constraints for industrial source apportionment,” in Proc. the IEEE International Workshop on Machine Learning for Signal Processing (MLSP).   Southampton, United Kingdom: IEEE, September 2013, pp. 1–6.
  29. C. Gobinet, E. Perrin, and R. Huez, “Application of non-negative matrix factorization to fluorescence spectroscopy,” in Proc. European Signal Processing Conference (EUSIPCO), Vienna, Austria, September 2004.
  30. A.-S. Montcuquet, L. Herve, L. Guyon, J.-M. Dinten, and J. I. Mars, “Non-negative matrix factorization: A blind sources separation method to unmix fluorescence spectra,” in Proc. the Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS).   Grenoble, France: IEEE, August 2009, pp. 1–4.
  31. C. Gobinet, A. Elhafid, V. Vrabie, R. Huez, and D. Nuzillard, “About importance of positivity constraint for source separation in fluorescence spectroscopy,” in Proc. European Signal Processing Conference (EUSIPCO).   Antalya, Turkey: IEEE, September 2005, pp. 1–4.
  32. G. L. Rogers, “Multiple path analysis of reflectance from turbid media,” Journal of the Optical Society of America, vol. 25, no. 11, pp. 2879–2883, 2008.
  33. V. Carbone, L. Sorriso-Valvo, A. Vecchio, F. Lepreti, P. Veltri, P. Harabaglia, and I. Guerra, “Clustering of polarity reversals of the geomagnetic field,” Physical review letters, vol. 96, no. 12, March 2006, article no. 128501.
  34. K. Yoshii, K. Itoyama, and M. Goto, “Student’s t nonnegative matrix factorization and positive semidefinite tensor factorization for single-channel audio source separation,” in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).   IEEE, 2016.