Effective Blind Source Separation Based on the Adam Algorithm

Michele Scarpiniti, Simone Scardapane, Danilo Comminiello, Raffaele Parisi and Aurelio Uncini
Department of Information Engineering, Electronics and Telecommunications (DIET),
“Sapienza” University of Rome, Via Eudossiana 18, 00184, Rome.
Email: {michele.scarpiniti, simone.scardapane, danilo.comminiello, raffaele.parisi, aurelio.uncini}@uniroma1.it
Abstract

In this paper, we derive a modified InfoMax algorithm for the solution of Blind Source Separation (BSS) problems by using advanced stochastic methods. The proposed approach is based on a novel stochastic optimization method known as the Adaptive Moment Estimation (Adam) algorithm, and can therefore benefit from its excellent properties. In order to derive the new learning rule, the Adam algorithm is introduced into the maximization of the cost function of the standard InfoMax algorithm. The natural gradient adaptation is also considered. Finally, some experimental results show the effectiveness of the proposed approach.

Keywords:
Blind Source Separation, Stochastic Optimization, Adam algorithm, InfoMax algorithm, Natural gradient.

1 Introduction

Blind Source Separation (BSS) is a well-known and well-studied field in adaptive signal processing and machine learning [6, 7, 10, 5, 16, 20]. The problem is to recover the original and unknown sources from a set of mixtures recorded in an unknown environment. The term blind refers to the fact that both the sources and the mixing environment are unknown.

Several well-performing approaches exist when the mixing environment is instantaneous [7, 3], while some problems still arise in convolutive environments [2, 4, 17]. Different approaches have been proposed to solve BSS in a linear and instantaneous environment. Some of these approaches perform separation by using higher-order statistics (HOS), while others exploit information-theoretic (IT) measures [6]. One of the best-known algorithms in this latter class is the InfoMax algorithm proposed by Bell and Sejnowski in [3]. The InfoMax algorithm is based on the maximization of the joint entropy of the output of a single-layer neural network; it is very efficient and easy to implement, since the gradient of the joint entropy can be evaluated simply in closed form. Moreover, in order to avoid numerical instability, a natural gradient modification of the InfoMax algorithm has also been proposed [1, 6].

Unfortunately, all these solutions perform slowly when the number of original sources to be separated is large and/or the sources are badly scaled; the separation may even become impossible when the number of sources is equal to or greater than ten. In addition, the convergence speed problem worsens in the presence of additive sensor noise on the mixtures or when the mixing matrix is close to being ill-conditioned. However, especially when working with speech and audio signals, fast convergence is an important requirement. Many authors have tried to overcome this problem: some solutions consist in incorporating a momentum term in the learning rule [13], in using a self-adjusting variable step size [18], or in a scaled natural gradient algorithm [8].

Recently, a novel algorithm for gradient-based optimization of stochastic cost functions has been proposed by Kingma and Ba in [12]. This algorithm is based on adaptive estimates of the first- and second-order moments of the gradient, and for this reason it has been called the Adaptive Moment Estimation (Adam) algorithm. The authors have demonstrated in [12] that Adam is easy to implement, computationally efficient, invariant to diagonal rescaling of the gradients, and well suited to problems with large amounts of data and/or many parameters.

The Adam algorithm combines the advantages of other state-of-the-art optimization algorithms, such as AdaGrad [9] and RMSProp [19], while overcoming their limitations. In addition, Adam can be related to natural gradient (NG) adaptation [1], since it employs a preconditioning that adapts to the geometry of the data.

In this paper, we propose a modified InfoMax algorithm based on the Adam algorithm [12] for the solution of BSS problems. We derive the proposed algorithm by using Adam instead of the standard stochastic gradient ascent rule. It is shown that the novel algorithm has a faster convergence speed with respect to the standard InfoMax algorithm and usually also reaches a better separation. Some experimental results, evaluated in terms of the Amari Performance Index (PI) [6], show the effectiveness of the proposed idea.

The rest of the paper is organized as follows. In Section 2 we briefly introduce the BSS problem. Then, we give some details on the Adam algorithm in Section 3. The main novelty of this paper, the extension of the InfoMax algorithm with Adam, is provided in Section 4. Finally, we validate our approach in Section 5 and conclude with some final remarks in Section 6.

2 The Blind Source Separation Problem

Let us consider a set of $N$ unknown and statistically independent sources, denoted as $\mathbf{s}[n] = \left[s_1[n], \ldots, s_N[n]\right]^T$, such that the components $s_i[n]$ are zero-mean and mutually independent. The signals received by an array of $M$ sensors are denoted by $\mathbf{x}[n] = \left[x_1[n], \ldots, x_M[n]\right]^T$ and are called mixtures. For simplicity, here we consider the case of $M = N$.

In the case of a linear and instantaneous mixing environment, the mixture $\mathbf{x}[n]$ can be described in matrix form as

$$\mathbf{x}[n] = \mathbf{A}\,\mathbf{s}[n] + \mathbf{v}[n], \qquad (1)$$

where the $N \times N$ matrix $\mathbf{A}$ collects the mixing coefficients $a_{ij}$, and $\mathbf{v}[n]$ is an additive noise vector, with correlation matrix $\mathbf{R}_v = \sigma_v^2 \mathbf{I}$ and noise variance $\sigma_v^2$.

The separated signals $\mathbf{u}[n]$ are obtained from the mixtures by a separating matrix $\mathbf{W}$, as described by the following equation:

$$\mathbf{u}[n] = \mathbf{W}\,\mathbf{x}[n]. \qquad (2)$$

The transformation in (2) is such that $\mathbf{u}[n]$ has components $u_k[n]$, $k = 1, 2, \ldots, N$, that are as independent as possible.

Moreover, due to the well-known permutation and scaling ambiguities of the BSS problem, the output signals can be expressed as

$$\mathbf{u}[n] = \mathbf{P}\,\mathbf{D}\,\mathbf{s}[n], \qquad (3)$$

where $\mathbf{P}$ is an $N \times N$ permutation matrix and $\mathbf{D}$ is an $N \times N$ diagonal scaling matrix.
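
For concreteness, the following minimal NumPy sketch instantiates the mixing model (1) and the separation (2) on toy data; the particular sources, dimensions, and noise level are illustrative assumptions and do not correspond to the experiments reported later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 10_000                            # illustrative: 3 sources/sensors, 10000 samples

# Zero-mean, mutually independent toy sources
s = np.vstack([
    np.sign(np.sin(0.05 * np.arange(T))),   # square-like wave
    rng.uniform(-1.0, 1.0, T),              # uniform noise
    np.sin(0.01 * np.arange(T)),            # sinusoid
])
s -= s.mean(axis=1, keepdims=True)          # enforce zero mean

A = rng.standard_normal((N, N))             # unknown mixing matrix
sigma_v = 0.01                              # noise standard deviation
x = A @ s + sigma_v * rng.standard_normal((N, T))   # Eq. (1): mixtures

W = np.linalg.inv(A)                        # oracle separating matrix, only for illustration
u = W @ x                                   # Eq. (2): separated signals
```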

Figure 1: Unknown mixing environment and the InfoMax network.

The weights $\mathbf{W}$ can be adapted by maximizing or minimizing some suitable cost function [6, 5]. A particularly good approach is to maximize the joint entropy of the output of a single-layer neural network [3], as shown in Figure 1, leading to the Bell and Sejnowski InfoMax algorithm. In this network, each output $y_i[n]$ is a nonlinear transformation of the corresponding signal $u_i[n]$:

$$y_i[n] = h_i\left(u_i[n]\right), \quad i = 1, 2, \ldots, N. \qquad (4)$$

Each function $h_i(\cdot)$ is known as an activation function (AF).

With reference to Figure 1, using the equation relating the probability density function (pdf) of a random variable to that of a nonlinear transformation of it [14], the joint entropy of the network output can be evaluated as

$$H(\mathbf{y}) = H(\mathbf{x}) + \ln\left|\det \mathbf{W}\right| + \sum_{i=1}^{N} E\left\{\ln h_i'(u_i)\right\}, \qquad (5)$$

where $h_i'(u_i)$ is the first derivative of the $i$-th AF with respect to its input $u_i$.
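
As a simple illustration, the empirical value of the objective in (5), up to the constant term $H(\mathbf{x})$ that does not depend on $\mathbf{W}$, can be estimated on a block of mixtures. The sketch below is our own; the default derivative corresponds to the tanh AF adopted later in the paper.

```python
import numpy as np

def infomax_objective(W, x, h_prime=lambda u: 1.0 - np.tanh(u) ** 2):
    """Empirical estimate of the joint entropy (5), up to the constant H(x).

    h_prime is the first derivative of the activation function; the default
    corresponds to the tanh AF, for which h'(u) = 1 - tanh^2(u)."""
    u = W @ x                                  # separated signals, one column per sample
    _, log_det = np.linalg.slogdet(W)          # ln|det W|, computed in a numerically stable way
    return log_det + np.mean(np.sum(np.log(np.abs(h_prime(u)) + 1e-12), axis=0))
```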

Evaluating the gradient of (5) with respect to the separating parameters $\mathbf{W}$, after some not too complicated manipulations, leads to

$$\nabla_{\mathbf{W}} H(\mathbf{y}) = \mathbf{W}^{-T} + \boldsymbol{\Psi}\,\mathbf{x}^T, \qquad (6)$$

where $\mathbf{W}^{-T}$ denotes the transpose of the inverse of $\mathbf{W}$ and $\boldsymbol{\Psi} = \left[\psi_1, \ldots, \psi_N\right]^T$ is a vector collecting the terms $\psi_i = h_i''(u_i)/h_i'(u_i)$, defined as the ratio of the second and the first derivatives of the AFs.

In order to avoid the numerical problems related to the matrix inversion in (6) and the possibility of remaining trapped in a local minimum, Amari introduced the Natural Gradient (NG) adaptation [1], which overcomes such problems. The NG adaptation rule can be obtained simply by right-multiplying the stochastic gradient by the term $\mathbf{W}^T\mathbf{W}$. Hence, after multiplying (6) by this term, the NG InfoMax gradient is simply

$$\nabla_{\mathbf{W}} H(\mathbf{y})\,\mathbf{W}^T\mathbf{W} = \left(\mathbf{I} + \boldsymbol{\Psi}\,\mathbf{u}^T\right)\mathbf{W}. \qquad (7)$$

Regarding the selection of the AF shape, there are several alternatives. However, especially in the case of audio and speech signals, a good nonlinearity is represented by the $\tanh(\cdot)$ function. With this choice, the vector $\boldsymbol{\Psi}$ in (6) and (7) is simply evaluated as $\boldsymbol{\Psi} = -2\mathbf{y}$.

In summary, using the $\tanh(\cdot)$ AF, the InfoMax and natural gradient InfoMax algorithms are described by the following learning rules:

$$\mathbf{W}_{t+1} = \mathbf{W}_t + \mu\left(\mathbf{W}_t^{-T} - 2\,\mathbf{y}_t\mathbf{x}_t^T\right), \qquad (8)$$
$$\mathbf{W}_{t+1} = \mathbf{W}_t + \mu\left(\mathbf{I} - 2\,\mathbf{y}_t\mathbf{u}_t^T\right)\mathbf{W}_t, \qquad (9)$$

where $\mu$ is the learning rate or step size.
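
A minimal NumPy sketch of the two learning rules (8) and (9) with the tanh AF might look as follows; averaging the gradient over the samples of a block is an implementation choice of ours, not something prescribed above.

```python
import numpy as np

def infomax_step(W, x_blk, mu):
    """Standard InfoMax update, Eq. (8): W <- W + mu * (W^{-T} - 2 y x^T)."""
    T = x_blk.shape[1]
    y = np.tanh(W @ x_blk)
    grad = np.linalg.inv(W).T - 2.0 * (y @ x_blk.T) / T    # Eq. (6) with Psi = -2y
    return W + mu * grad

def ng_infomax_step(W, x_blk, mu):
    """Natural gradient InfoMax update, Eq. (9): W <- W + mu * (I - 2 y u^T) W."""
    T = x_blk.shape[1]
    u = W @ x_blk
    y = np.tanh(u)
    grad = (np.eye(W.shape[0]) - 2.0 * (y @ u.T) / T) @ W  # Eq. (7)
    return W + mu * grad
```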

3 The Adam Algorithm

Let us denote with $J(\boldsymbol{\theta})$ a noisy cost function to be minimized (or maximized) with respect to the parameter vector $\boldsymbol{\theta}$. The problem is considered stochastic because of the random nature of the data samples or because of inherent function noise. In the following, the noisy gradient vector of the cost function with respect to the parameters $\boldsymbol{\theta}$ at time $t$ will be denoted by $\mathbf{g}_t = \nabla_{\boldsymbol{\theta}} J_t(\boldsymbol{\theta})$.

The Adam algorithm performs the gradient descent (or ascent) optimization by evaluating moving averages of the noisy gradient $\mathbf{g}_t$ and of the squared gradient $\mathbf{g}_t \odot \mathbf{g}_t$ [12]. These moment vectors are updated by using two scalar coefficients $\beta_1$ and $\beta_2$ that control the exponential decay rates:

$$\mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1-\beta_1)\,\mathbf{g}_t, \qquad (10)$$
$$\mathbf{v}_t = \beta_2 \mathbf{v}_{t-1} + (1-\beta_2)\,\mathbf{g}_t \odot \mathbf{g}_t, \qquad (11)$$

where $\beta_1, \beta_2 \in [0,1)$ and $\odot$ denotes the element-wise multiplication, while $\mathbf{m}_0$ and $\mathbf{v}_0$ are initialized as zero vectors. These vectors represent estimates of the mean and the uncentered variance of the gradient vector $\mathbf{g}_t$. Since the estimates of $\mathbf{m}_t$ and $\mathbf{v}_t$ are biased towards zero, due to their initialization, a bias correction is computed on these moments:

$$\hat{\mathbf{m}}_t = \frac{\mathbf{m}_t}{1-\beta_1^t}, \qquad (12)$$
$$\hat{\mathbf{v}}_t = \frac{\mathbf{v}_t}{1-\beta_2^t}. \qquad (13)$$

The vector $\hat{\mathbf{v}}_t$ represents an approximation of the diagonal of the Fisher information matrix [15]. Hence, Adam can be related to the natural gradient algorithm [1].

Finally, the parameter vector $\boldsymbol{\theta}_t$ at time $t$ is updated by the following rule:

$$\boldsymbol{\theta}_t = \boldsymbol{\theta}_{t-1} - \eta\,\frac{\hat{\mathbf{m}}_t}{\sqrt{\hat{\mathbf{v}}_t} + \epsilon}, \qquad (14)$$

where $\eta$ is the step size and $\epsilon$ is a small positive constant used to avoid division by zero. In the case of gradient ascent, the minus sign in (14) is replaced with a plus sign.
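
For reference, one Adam iteration, Eqs. (10)-(14), can be sketched as follows; the default hyperparameters are the ones suggested in [12], and the `maximize` flag switches between descent and ascent.

```python
import numpy as np

def adam_step(theta, g, m, v, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8,
              maximize=False):
    """One Adam update. t is the iteration counter, starting from 1."""
    m = beta1 * m + (1.0 - beta1) * g             # Eq. (10): first-moment estimate
    v = beta2 * v + (1.0 - beta2) * g * g         # Eq. (11): second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # Eq. (12): bias-corrected mean
    v_hat = v / (1.0 - beta2 ** t)                # Eq. (13): bias-corrected variance
    step = eta * m_hat / (np.sqrt(v_hat) + eps)   # Eq. (14)
    theta = theta + step if maximize else theta - step
    return theta, m, v
```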

4 Modified InfoMax Algorithm

In this section, we introduce the modified Bell and Sejnowski InfoMax algorithm, based on the Adam optimization method. Since the Adam algorithm works on a vector of parameters, we perform a vectorization of the gradient (6) or (7):

$$\mathbf{g}_t = \mathrm{vec}\left(\nabla_{\mathbf{W}} H(\mathbf{y})\right), \qquad (15)$$

where $\mathrm{vec}(\cdot)$ is the vectorization operator, which forms a vector by stacking the columns of the matrix below one another. The gradient vector $\mathbf{g}_t$ is evaluated on a number of signal blocks extracted from the mixtures.

At this point, the mean and variance vectors are evaluated from the knowledge of the gradient $\mathbf{g}_t$ at time $t$ by using equations (10)–(13). Then, using (14), the vectorized separating matrix is updated for the maximization of the joint entropy by

$$\mathbf{w}_t = \mathbf{w}_{t-1} + \eta\,\frac{\hat{\mathbf{m}}_t}{\sqrt{\hat{\mathbf{v}}_t} + \epsilon}, \qquad (16)$$

where $\mathbf{w}_t = \mathrm{vec}\left(\mathbf{W}_t\right)$.

Finally, the vector $\mathbf{w}_t$ is reshaped into matrix form by

$$\mathbf{W}_t = \mathrm{mat}\left(\mathbf{w}_t\right), \qquad (17)$$

where $\mathrm{mat}(\cdot)$ reconstructs the $N \times N$ matrix by unstacking the columns from the vector. If needed, the whole procedure is repeated for a certain number of epochs.

The pseudo-code of the modified InfoMax algorithm with Adam, called here Adam InfoMax, is described in Algorithm 1.

Data: Mixture signals $\mathbf{x}[n]$; step size $\eta$; decay rates $\beta_1$, $\beta_2$; constant $\epsilon$; block length; number of epochs.
Initialization: $\mathbf{W}_0$, $\mathbf{m}_0 = \mathbf{0}$, $\mathbf{v}_0 = \mathbf{0}$
for $t = 1, \ldots, P$ do
      Extract the $t$-th block from the mixtures;
      Evaluate the gradient (6) or (7) on the block and vectorize it as in (15);
      Update the moment vectors by (10) and (11);
      Compute the bias-corrected moments by (12) and (13);
      Update the parameter vector by (16);
      Reshape it into the separating matrix by (17);
end for
Result: Separated signals: $\mathbf{u}[n] = \mathbf{W}\,\mathbf{x}[n]$
Algorithm 1: Pseudo-code for the Adam InfoMax algorithm.
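
A compact NumPy sketch of Algorithm 1 is given below. It follows Eqs. (6)–(7) and (10)–(17); the initialization $\mathbf{W}_0 = \mathbf{I}$, the sequential block extraction, and the default hyperparameter values are illustrative assumptions of ours, since they are not fixed by the pseudo-code.

```python
import numpy as np

def adam_infomax(x, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8,
                 block_len=128, n_epochs=10, natural_gradient=True):
    """Adam InfoMax with tanh AFs. x has shape (N, T): N mixtures, T samples."""
    N, T = x.shape
    W = np.eye(N)                                   # assumed initialization of W_0
    m = np.zeros(N * N)                             # first-moment vector
    v = np.zeros(N * N)                             # second-moment vector
    t = 0
    for _ in range(n_epochs):
        for start in range(0, T - block_len + 1, block_len):
            t += 1
            x_blk = x[:, start:start + block_len]   # extract the t-th block
            u = W @ x_blk
            y = np.tanh(u)
            if natural_gradient:                    # Eq. (7)
                G = (np.eye(N) - 2.0 * (y @ u.T) / block_len) @ W
            else:                                   # Eq. (6)
                G = np.linalg.inv(W).T - 2.0 * (y @ x_blk.T) / block_len
            g = G.reshape(-1, order="F")            # Eq. (15): vec(.) stacks columns
            m = beta1 * m + (1.0 - beta1) * g       # Eq. (10)
            v = beta2 * v + (1.0 - beta2) * g * g   # Eq. (11)
            m_hat = m / (1.0 - beta1 ** t)          # Eq. (12)
            v_hat = v / (1.0 - beta2 ** t)          # Eq. (13)
            w = W.reshape(-1, order="F") + eta * m_hat / (np.sqrt(v_hat) + eps)  # Eq. (16), ascent
            W = w.reshape((N, N), order="F")        # Eq. (17): mat(.)
    return W, W @ x                                 # separating matrix and separated signals
```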

5 Experimental Results

In this section, we present some experimental results that demonstrate the effectiveness of the proposed idea. We perform the separation of mixtures of both synthetic and real-world data. The results are evaluated in terms of the Amari Performance Index (PI) [6], defined as

$$\mathrm{PI} = \frac{1}{N(N-1)} \sum_{i=1}^{N} \left[ \left( \sum_{j=1}^{N} \frac{|q_{ij}|}{\max_k |q_{ik}|} - 1 \right) + \left( \sum_{j=1}^{N} \frac{|q_{ji}|}{\max_k |q_{ki}|} - 1 \right) \right], \qquad (18)$$

where $q_{ij}$ are the elements of the matrix $\mathbf{Q} = \mathbf{W}\mathbf{A}$. This index is close to zero if the matrix $\mathbf{Q}$ is close to the product of a permutation matrix and a diagonal scaling matrix.
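
A direct NumPy implementation of the index in (18) may be written as follows; the $1/(N(N-1))$ normalization matches the form given above.

```python
import numpy as np

def amari_performance_index(W, A):
    """Amari Performance Index, Eq. (18), evaluated on Q = W A.

    The index approaches zero when Q tends to a permutation of a diagonal matrix."""
    Q = np.abs(W @ A)
    N = Q.shape[0]
    row_term = (Q / Q.max(axis=1, keepdims=True)).sum(axis=1) - 1.0   # sum_j |q_ij| / max_k |q_ik| - 1
    col_term = (Q / Q.max(axis=0, keepdims=True)).sum(axis=0) - 1.0   # sum_i |q_ij| / max_k |q_kj| - 1
    return (row_term.sum() + col_term.sum()) / (N * (N - 1))
```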

The performance of the proposed algorithm was also compared with the standard InfoMax algorithm [3] and the Momentum InfoMax described in [13]. In this last algorithm, the momentum parameter is set to 0.5 in all experiments.

In a first experiment, we perform the separation of five mixtures obtained as linear combinations of badly scaled independent sources; the source set includes, among others, a triangular waveform and a uniform noise. The mixing matrix $\mathbf{A}$ is a $5 \times 5$ Hilbert matrix, which is extremely ill-conditioned. All simulations have been performed in MATLAB 2015a, on a 64-bit Intel Core i7 3.10 GHz processor with 8 GB of RAM. The parameters of the algorithms have been found heuristically.
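
To give an idea of how demanding this mixing is, the conditioning of a $5 \times 5$ Hilbert matrix can be checked with the short snippet below; this check relies on SciPy's hilbert helper and is our own illustration, not part of the original experiments.

```python
import numpy as np
from scipy.linalg import hilbert

A = hilbert(5)                 # 5x5 Hilbert matrix: a_ij = 1 / (i + j - 1)
print(np.linalg.cond(A))       # roughly 4.8e5, i.e. severely ill-conditioned
```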

We perform separation by the Adam modification of the standard InfoMax algorithm, with the gradient in (6). The mixtures are processed in blocks of fixed length, while the remaining parameters ($\eta$, $\beta_1$, $\beta_2$, $\epsilon$) and the learning rate of the standard InfoMax and the Momentum InfoMax are selected heuristically, as stated above. The performance in terms of the PI in (18) is reported in Figure 2, which clearly shows the effectiveness of the proposed idea.

Figure 2: Performance Index (PI) of the first proposed experiment.

A second and a third experiment are performed on speech signals sampled at 8 kHz. In the second experiment, a male and a female speech signal are mixed with a random matrix with uniformly distributed entries, while in the third one, two male and two female speech signals are mixed with an ill-conditioned Hilbert matrix. In addition, an additive white noise with an SNR of 30 dB is added to the mixtures in both cases.

We perform separation by the Adam modification of the NG InfoMax algorithm, with the gradient in (7). The mixtures are again processed in blocks of fixed length, with the remaining parameters ($\eta$, $\beta_1$, $\beta_2$, $\epsilon$) and the learning rate of the standard InfoMax and the Momentum InfoMax set heuristically. The performance in terms of the PI, for the second and third experiments, is reported in Figures 3(a) and 3(b), respectively, which clearly show the effectiveness of the proposed idea also in these cases. In particular, Figure 3(b) confirms that the separation obtained by the Adam InfoMax algorithm in the third experiment is quite satisfactory, while the standard and the Momentum InfoMax give worse solutions.

Figure 3: Performance Index (PI) of the second (a) and third (b) proposed experiments.

Finally, a last experiment is performed on real data. We used an EEG recording acquired according to the 10-20 system, consisting of 19 signals contaminated by artifacts. ICA is a common approach to the problem of artifact removal from EEG [11]. The mixtures are again processed in blocks of fixed length, with the remaining parameters ($\eta$, $\beta_1$, $\beta_2$, $\epsilon$) and the learning rate of the standard InfoMax and the Momentum InfoMax set heuristically. Since we used real data and the mixing matrix is not available, the PI cannot be evaluated. Hence, we decided to evaluate the performance by the norm of the gradient of the cost function. As can be seen from Figure 4, also in this case the Adam InfoMax algorithm achieves better results in a smaller number of iterations with respect to the compared algorithms.

Figure 4: Norm of the gradient of the cost function in the fourth proposed experiment.

6 Conclusions

In this paper, a modified InfoMax algorithm for the blind separation of independent sources in a linear and instantaneous environment has been introduced. The proposed approach is based on a novel and advanced stochastic optimization method known as Adam, and it can benefit from its excellent properties. In particular, it is easy to implement and computationally efficient, and it is well suited to cases where the number of sources is large, the sources are badly scaled, the mixing matrix is close to being ill-conditioned, and some additive noise is present. Some experimental results, evaluated in terms of the Amari Performance Index and compared with other state-of-the-art approaches, have shown the effectiveness of the proposed approach.

References

  • [1] Amari, S.: Natural gradient works efficiently in learning. Neural Computation 10(2), 251–276 (1998)
  • [2] Araki, S., Mukai, R., Makino, S., Nishikawa, T., Saruwatari, H.: The fundamental limitation of frequency domain blind source separation for convolutive mixtures of speech. IEEE Transactions on Speech and Audio Processing 11(2), 109–116 (March 2003)
  • [3] Bell, A.J., Sejnowski, T.J.: An information-maximisation approach to blind separation and blind deconvolution. Neural Computation 7(6), 1129–1159 (November 1995)
  • [4] Boulmezaoud, T.Z., El Rhabi, M., Fenniri, H., Moreau, E.: On convolutive blind source separation in a noisy context and a total variation regularization. In: Proc. of IEEE Eleventh International Workshop on Signal Processing Advances in Wireless Communications (SPAWC2010). pp. 1–5. Marrakech (June 20-23 2010)
  • [5] Choi, S., Cichocki, A., Park, H.M., Lee, S.Y.: Blind source separation and independent component analysis: a review. Neural Information Processing - Letters and Reviews 6(1), 1–57 (January 2005)
  • [6] Cichocki, A., Amari, S.: Adaptive Blind Signal and Image Processing. John Wiley (2002)
  • [7] Comon, P., Jutten, C. (eds.): Handbook of Blind Source Separation. Springer (2010)
  • [8] Douglas, S.C., Gupta, M.: Scaled natural gradient algorithm for instantaneous and convolutive blind source separation. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP2007). vol. 2, pp. 637–640 (2007)
  • [9] Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research 12, 2121–2159 (July 2011)
  • [10] Haykin, S. (ed.): Unsupervised Adaptive Filtering, vol. 2: Blind Source Separation. Wiley (2000)
  • [11] Inuso, G., La Foresta, F., Mammone, N., Morabito, F.C.: Wavelet-ICA methodology for efficient artifact removal from electroencephalographic recordings. In: Proc. of International Joint Conference on Neural Networks (IJCNN2007).
  • [12] Kingma, D.P., Ba, J.L.: Adam: a method for stochastic optimization. In: International Conference on Learning Representations (ICLR2015). pp. 1–13 (2015), http://arxiv.org/abs/1412.6980
  • [13] Liu, J.Q., Feng, D.Z., Zhang, W.W.: Adaptive improved natural gradient algorithm for blind source separation. Neural Computation 21(3), 872–889 (March 2009)
  • [14] Papoulis, A.: Probability, Random Variables and Stochastic Processes. McGraw-Hill (1991)
  • [15] Pascanu, R., Bengio, Y.: Revisiting natural gradient for deep networks. In: International Conference on Learning Representations (April 2014)
  • [16] Scarpiniti, M., Vigliano, D., Parisi, R., Uncini, A.: Generalized splitting functions for blind separation of complex signals. Neurocomputing 71(10-12), 2245–2270 (June 2008)
  • [17] Smaragdis, P.: Blind separation of convolved mixtures in the frequency domain. Neurocomputing 22(1–3), 21–34 (1998)
  • [18] Thomas, P., Allen, G., August, N.: Step-size control in blind source separation. In: International Workshop on Independent Component Analysis and Blind Source Separation. pp. 509–514 (2000)
  • [19] Tieleman, T., Hinton, G.: Lecture 6.5 – RMSProp. Tech. rep., COURSERA: Neural Networks for Machine Learning (2012)
  • [20] Vigliano, D., Scarpiniti, M., Parisi, R., Uncini, A.: Flexible nonlinear blind signal separation in the complex domain. International Journal of Neural Systems 18(2), 105–122 (April 2008)