On the Behavior of the Expectation-Maximization Algorithm for Mixture Models


Abstract

Finite mixture models are among the most popular statistical models used in different data science disciplines. Despite their broad applicability, inference under these models typically leads to computationally challenging non-convex problems. While the Expectation-Maximization (EM) algorithm is the most popular approach for solving these non-convex problems, its behavior is not well understood. In this work, we focus on the case of mixtures of Laplacian (or Gaussian) distributions. We start by analyzing a simple equally weighted mixture of two one-dimensional Laplacian distributions and show that every local optimum of the population maximum likelihood estimation problem is globally optimal. Then, we prove that the EM algorithm with random initialization converges to the ground truth parameters almost surely. Our result extends the existing results for the Gaussian distribution to the Laplacian distribution. We then numerically study the behavior of mixture models with more than two components. Motivated by our extensive numerical experiments, we propose a novel stochastic method for estimating the means of the components of a mixture model. Our numerical experiments show that our algorithm outperforms the naïve EM algorithm in almost all scenarios.


Babak Barazandeh   Meisam Razaviyayn
University of Southern California
Email: {barazand, razaviya}@usc.edu


Index Terms—  Finite mixture model, Gaussian/Laplacian mixture model, EM algorithm, non-convex optimization

I Introduction

The ability of finite mixture distributions [pearson1894contributions] to model the presence of subpopulations within an overall population has made them popular across almost all engineering and scientific disciplines [melnykov2010finite, zhang2015finite, titterington1985statistical, mclachlan2004finite]. While statistical identifiability for various mixture models has been widely studied [teicher1963identifiability, allman2009identifiability], the Gaussian mixture model (GMM) has drawn more attention due to its wide applicability [day1969estimating, wolfe1970pattern]. Starting with Dasgupta [dasgupta1999learning], there have been multiple efforts to find algorithms with polynomial sample/time complexity for estimating GMM parameters [vempala2004spectral, arora2005learning, chaudhuri2008learning, dasgupta2007probabilistic, moitra2010settling, hsu2013learning, belkin2010polynomial]. Despite their statistical guarantees, these methods are not computationally efficient enough for many large-scale problems. Moreover, these results assume that the data is generated from an exact generative model, which never happens in reality. In contrast, methods based on solving the maximum likelihood estimation (MLE) problem are very popular due to their computational efficiency and the robustness of the MLE against perturbations of the generative model [donoho1988automatic]. Although MLE-based methods are popular in practice, the theory behind their optimization algorithms (such as the EM method) is little understood. Most existing algorithms with theoretical performance guarantees are not scalable to modern applications of massive size. This is mainly due to the combinatorial and non-convex nature of the underlying optimization problems.

Recent advances in the field of non-convex optimization have led to a better understanding of mixture model inference algorithms such as the EM algorithm. For example, [balakrishnan2017statistical] proves that, under proper initialization, the EM algorithm converges exponentially fast to the ground truth parameters; however, no computationally efficient initialization approach is provided. [xu2016global] provides a global analysis of the EM algorithm applied to the mixture of two equally weighted Gaussian distributions. While [daskalakis2016ten] provides global convergence guarantees for the EM algorithm, [jin2016local] studies the landscape of the GMM likelihood function with three or more components and shows that there might be spurious local optima even for the simple case of the equally weighted GMM.

In this work, we revisit the EM algorithm under the Laplacian mixture model and the Gaussian mixture model. We first show that, similar to the Gaussian case, the maximum likelihood estimation objective has no spurious local optima in the symmetric Laplacian mixture model (LMM) with $K = 2$ components. This Laplacian mixture structure has a wide range of applications in medical image denoising, video retrieval, and blind source separation [bhowmick2006laplace, klein2014fisher, mitianoudis2005overcomplete, amin2007application, rabbani2009wavelet]. For the case of mixture models with $K \ge 3$ components, we propose a stochastic algorithm which utilizes the likelihood function as well as moment information of the mixture distribution. Our numerical experiments show that this algorithm outperforms the naïve EM algorithm in almost all scenarios.

II Problem Formulation

The general mixture model distribution is defined as

$p(x; \mathbf{w}, \boldsymbol{\theta}) = \sum_{k=1}^{K} w_k f(x; \theta_k),$

where $K$ is the number of mixture components, $w_k$ is the non-negative mixing weight with $\sum_{k=1}^{K} w_k = 1$, and $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_K)$ is the distribution's parameter vector. Estimating the parameters of mixture models is central in many applications. This estimation is typically done by solving the MLE problem due to its intuitive justification and its robust behavior [donoho1988automatic].
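As a concrete illustration of this definition (not part of the original paper), the following Python sketch evaluates an equally weighted one-dimensional mixture density and draws samples from it; the function names and the choice between Gaussian and Laplacian components are illustrative assumptions.

```python
import numpy as np

def mixture_pdf(x, means, scale=1.0, family="gaussian"):
    """Equally weighted 1-D mixture density evaluated at points x."""
    x = np.atleast_1d(x)[:, None]          # shape (n, 1)
    means = np.asarray(means)[None, :]     # shape (1, K)
    if family == "gaussian":
        comp = np.exp(-(x - means) ** 2 / (2 * scale ** 2)) / np.sqrt(2 * np.pi * scale ** 2)
    else:                                  # Laplacian components with scale b
        comp = np.exp(-np.abs(x - means) / scale) / (2 * scale)
    return comp.mean(axis=1)               # equal weights w_k = 1/K

def sample_mixture(n, means, scale=1.0, family="gaussian", rng=None):
    """Draw n samples from the equally weighted mixture."""
    rng = np.random.default_rng() if rng is None else rng
    centers = rng.choice(means, size=n)    # pick a component uniformly at random
    if family == "gaussian":
        return centers + scale * rng.standard_normal(n)
    return centers + rng.laplace(scale=scale, size=n)
```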

The focus of our work is on the population likelihood maximization, i.e., the regime where the number of samples is very large. When the mixing weights $\mathbf{w}$ and the component variances are known, the law of large numbers implies that the MLE problem leads to the following population risk optimization problem [xu2016global, daskalakis2016ten, jin2016local]:

$\max_{\boldsymbol{\mu}}\;\; \mathbb{E}_{x \sim p(\cdot\,;\, \mathbf{w}, \boldsymbol{\mu}^*)}\big[\log p(x; \mathbf{w}, \boldsymbol{\mu})\big],$ (1)

where $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_K)$ collects the component means and $\boldsymbol{\mu}^*$ denotes the ground truth means.

In this paper, we focus on the case of equally weighted mixture components, i.e., $w_k = 1/K$ for all $k$ [xu2016global, daskalakis2016ten, jin2016local, srebro2007there]. We also restrict ourselves to the two widely used families of Gaussian mixture models and Laplacian mixture models [bhowmick2006laplace, klein2014fisher, mitianoudis2005overcomplete, amin2007application, rabbani2009wavelet, vempala2004spectral, arora2005learning, chaudhuri2008learning, dasgupta2007probabilistic]. It is worth mentioning that even in these restricted scenarios, the above MLE problem is non-convex and highly challenging to solve.

III EM for the case of $K = 2$

Recently, it has been shown that the EM algorithm recovers the ground truth distribution for the equally weighted Gaussian mixture model with $K = 2$ components [xu2016global, daskalakis2016ten]. Here, we extend this result to one-dimensional Laplacian mixture models.

Define the Laplacian distribution with the probability density function

$f(x; \mu, b) = \frac{1}{2b}\, e^{-\frac{|x - \mu|}{b}},$

where $\mu$ and $b$ control the mean and variance of the distribution. Thus, the equally weighted Laplacian mixture model with two components has probability density function:

$p(x; \mu_1, \mu_2, b) = \frac{1}{4b}\left(e^{-\frac{|x - \mu_1|}{b}} + e^{-\frac{|x - \mu_2|}{b}}\right).$

In the population-level estimation, the overall mean of the data, i.e., $(\mu_1 + \mu_2)/2$, can be estimated accurately. Hence, without loss of generality, we only need to estimate the normalized difference of the two means, i.e., $(\mu_1 - \mu_2)/2$. Under this generic assumption, our observations are drawn from the distribution

$p(x; \lambda^*, b) = \frac{1}{4b}\left(e^{-\frac{|x - \lambda^*|}{b}} + e^{-\frac{|x + \lambda^*|}{b}}\right),$

where $\lambda^* \triangleq (\mu_1 - \mu_2)/2$ denotes the ground truth parameter.
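For concreteness, here is a minimal Python sketch of this symmetric two-component Laplacian density, assuming the centered parameterization above (means $\pm\lambda$ and a common scale $b$); the function name is an illustrative choice.

```python
import numpy as np

def symmetric_lmm_pdf(x, lam, b=1.0):
    """Density of the equally weighted two-component Laplacian mixture
    with means +lam and -lam and common scale b."""
    return (np.exp(-np.abs(x - lam) / b) + np.exp(-np.abs(x + lam) / b)) / (4.0 * b)
```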

Our goal is to estimate the parameter $\lambda^*$ from observations at the population level. Without loss of generality, and for simplicity of the presentation, we set $b = 1$. Thus, the $t$-th step of the EM algorithm for estimating the ground truth parameter $\lambda^*$ is:

(2)

where $\lambda_t$ is the estimate of $\lambda^*$ at the $t$-th iteration; see [daskalakis2016ten, xu2016global, jin2016local] for the similar Gaussian case. In the rest of the paper, without loss of generality, we assume that $\lambda^* > 0$. Further, to simplify our analysis, we define the mapping $M(\lambda, \lambda^*)$ so that the EM iteration (2) can be written as $\lambda_{t+1} = M(\lambda_t, \lambda^*)$.

It is easy to verify that $M(-\lambda, \lambda^*) = -M(\lambda, \lambda^*)$, $M(0, \lambda^*) = 0$, $M(\lambda^*, \lambda^*) = \lambda^*$, and $M(-\lambda^*, \lambda^*) = -\lambda^*$. In other words, $0$ and $\pm\lambda^*$ are fixed points of the EM algorithm. Using symmetry, we can simplify $M$ as

(3)
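To make the iteration tangible, the following is a minimal finite-sample sketch of one EM step for this model (equal weights, scale $b = 1$, means tied to $\pm\lambda$); its population analogue is the mapping $M$ discussed above, whose closed form is not reproduced here. The sketch is an assumption-laden illustration: it uses the standard E-step responsibilities and the weighted-median M-step that the Laplacian location model admits.

```python
import numpy as np

def weighted_median(values, weights):
    """Return a weighted median: the smallest value whose cumulative
    weight reaches half of the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def em_step_symmetric_lmm(lam, x):
    """One finite-sample EM iteration for the mixture
    0.5*Laplace(+lam, 1) + 0.5*Laplace(-lam, 1)."""
    # E-step: posterior probability that each sample came from the +lam component
    log_p_plus = -np.abs(x - lam)
    log_p_minus = -np.abs(x + lam)
    r = 1.0 / (1.0 + np.exp(log_p_minus - log_p_plus))
    # M-step: weighted L1 (Laplacian) location estimate over the augmented sample
    # {x_i with weight r_i} U {-x_i with weight 1 - r_i}
    values = np.concatenate([x, -x])
    weights = np.concatenate([r, 1.0 - r])
    return weighted_median(values, weights)
```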

Let us first establish a few lemmas on the behavior of the mapping $M$.

Lemma

The derivative of the mapping $M$ with respect to $\lambda$ is positive, i.e., $\frac{\partial M(\lambda, \lambda^*)}{\partial \lambda} > 0$.

Proof

First notice that $\frac{\partial M(\lambda, \lambda^*)}{\partial \lambda}$ is equal to

We prove the lemma for the following two different cases separately:

Case 1) :

Case 2)

Lemma

For , we have

Proof

When , it is not hard to show that Hence,

where the last two inequalities are due to the facts that and .

Theorem

Without loss of generality, assume that $\lambda_0 > 0$. Then the EM iterate defined in (2) is a contraction toward $\lambda^*$, i.e., $|\lambda_{t+1} - \lambda^*| \le \kappa\, |\lambda_t - \lambda^*|$, where $\kappa \in (0, 1)$ is a contraction factor that depends on $\lambda_0$ and $\lambda^*$.

Theorem III shows that the EM iterates converge to the ground truth parameter $\lambda^*$, which is the global optimum of the MLE problem.

Proof

First of all, according to the Mean Value Theorem, there exists a point between $\lambda_t$ and $\lambda^*$ such that:

where the inequality is due to Lemma III. Thus, $\lambda_t - \lambda^*$ does not change sign during the algorithm. Consider two different regions: $\lambda_t \le \lambda^*$ and $\lambda_t > \lambda^*$. When $\lambda_t \le \lambda^*$, Case 1 in Lemma III implies that

The last two inequalities are due to the fact that $\lambda_t \le \lambda^*$ and the fact that the relevant term is a positive and decreasing function in $\lambda$. On the other hand, when $\lambda_t > \lambda^*$, the Mean Value Theorem implies that

where the last two inequalities are due to Lemma III and the facts that 1) $\lambda_t - \lambda^*$ does not change sign and 2) the relevant function is positive and decreasing in $\lambda$. Combining the above two cases completes the proof.
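As an informal numerical check of the theorem (not from the paper), one can iterate the finite-sample EM step sketched in Section III on a large sample from the model and observe convergence toward $\lambda^*$ from a random positive initialization; the constants below are arbitrary illustrative values, and em_step_symmetric_lmm refers to the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_star = 1.5
n = 200_000
signs = rng.choice([-1.0, 1.0], size=n)                 # equally weighted components
x = signs * lam_star + rng.laplace(scale=1.0, size=n)   # samples from the symmetric LMM

lam = rng.uniform(0.1, 5.0)                             # random positive initialization
for _ in range(50):
    lam = em_step_symmetric_lmm(lam, x)
print(abs(lam - lam_star))   # should be small, up to finite-sample error
```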

IV Modified EM for the case of $K \ge 3$

In [srebro2007there], it is conjectured that every local optimum of the population-level MLE problem for any equally weighted GMM is globally optimal. Recently, [jin2016local] has refuted this conjecture by providing a counterexample with three components. Moreover, they have shown that the local optima can be arbitrarily far from the ground truth parameters and that there is a positive probability that the EM algorithm with random initialization converges to these spurious local optima. Motivated by [jin2016local], we numerically study the performance of the EM algorithm for both GMMs and LMMs.

Fig. 1: Naïve EM fails to recover the ground truth parameter.

Numerical Experiment 1: Figure 1 presents the convergence plots of the EM algorithm with four different initializations. Two of these initializations converge to the global optimum, while the other two fail to recover the ground truth parameters and are trapped in spurious local optima. To understand the performance of the EM algorithm with random initialization, we ran the EM algorithm for different numbers of components $K$ and dimensions $d$. First, we generate the $d$-dimensional mean vectors $\mu_1, \ldots, \mu_K$ of the Gaussian components. The variance of each Gaussian component is fixed, so the mean vectors completely characterize the distribution of the GMM. Then, samples are randomly drawn from the generated GMM and the EM algorithm is run with 1000 different initializations, each for a fixed number of iterations. The table in Figure 1 shows the percentage of times that the EM algorithm converges to the ground truth parameters (the globally optimal point) for different values of $K$ and $d$. As can be seen in this table, EM fails dramatically, especially for larger values of $K$.
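To make the experimental protocol concrete, here is a condensed, illustrative Python sketch (not the paper's code): component means are drawn at random, data are sampled from an equally weighted spherical GMM with unit variance, EM is run from many random initializations, and a run is counted as successful when the estimated means match the ground truth up to permutation. The separation scale, sample size, tolerance, and trial count are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.special import logsumexp

def em_gmm_means(x, mu0, n_iter=100):
    """EM for an equally weighted spherical GMM with unit variance;
    only the component means are updated."""
    mu = mu0.copy()
    for _ in range(n_iter):
        # E-step: responsibilities under equal weights and identity covariance
        log_r = -0.5 * np.sum((x[:, None, :] - mu[None, :, :]) ** 2, axis=2)
        log_r -= logsumexp(log_r, axis=1, keepdims=True)
        r = np.exp(log_r)
        # M-step: weighted means (small epsilon guards against empty components)
        mu = (r[:, :, None] * x[:, None, :]).sum(axis=0) / (r.sum(axis=0)[:, None] + 1e-12)
    return mu

def recovers_truth(mu_hat, mu_true, tol=0.5):
    """Match estimated to true means with a permutation and check distances."""
    cost = np.linalg.norm(mu_hat[:, None, :] - mu_true[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return bool(np.all(cost[rows, cols] < tol))

# Illustrative settings (not the paper's exact ones)
rng = np.random.default_rng(0)
K, d, n = 3, 2, 5000
mu_true = 5.0 * rng.standard_normal((K, d))              # randomly placed means
labels = rng.integers(K, size=n)
x = mu_true[labels] + rng.standard_normal((n, d))        # unit-variance components

successes, trials = 0, 100
for _ in range(trials):
    mu0 = 5.0 * rng.standard_normal((K, d))              # random initialization
    successes += recovers_truth(em_gmm_means(x, mu0), mu_true)
print(f"recovered ground truth in {successes}/{trials} random initializations")
```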

By examining the spurious local optima in the previous numerical experiment, we have noticed that many of these local optima fail to satisfy the first moment condition. More specifically, we know that any global optimum of the MLE problem (1) should recover the ground truth parameters up to permutation [teicher1963identifiability, allman2009identifiability]. Hence, any global optimum $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_K)$ has to satisfy the first moment condition $\frac{1}{K}\sum_{k=1}^{K}\mu_k = \mathbb{E}[x]$. Without loss of generality, and by shifting all data points, we can assume that $\mathbb{E}[x] = 0$. Thus, $\boldsymbol{\mu}$ must satisfy the condition

$\sum_{k=1}^{K} \mu_k = 0.$ (4)
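As a small illustrative diagnostic (our own addition, not from the paper), condition (4) can be checked numerically for any candidate solution returned by EM: after centering the data, the average of the estimated component means should be close to zero. The function name and tolerance are assumptions.

```python
import numpy as np

def violates_first_moment(mu_hat, tol=1e-1):
    """Flag an estimate whose (equally weighted) component means
    do not average to approximately zero after the data is centered."""
    return np.linalg.norm(np.asarray(mu_hat).mean(axis=0)) > tol
```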

However, according to our numerical experiments, many spurious local optima fail to satisfy (4). To enforce condition (4), one can regularize the MLE cost function with the first order moment condition and solve

(5)

where the regularization coefficient controls how strongly the first moment condition is enforced. To solve (5), we propose the following iterative algorithm:

(6)

This algorithm is based on the successive upper-bound minimization framework [razaviyayn2014parallel, razaviyayn2013unified, hong2016unified]. Notice that if we set the regularization coefficient to zero in (6), we obtain the naïve EM algorithm. The following theorem establishes the convergence of the iterates in (6).

Theorem

Any limit point of the iterates generated by (6) is a stationary point of (5).

Proof Sketch: Let $g$ be the objective function of (5). Using the Cauchy-Schwarz and Jensen inequalities, one can construct a surrogate function that upper-bounds the negative of $g$ and is tight at the current iterate. Thus, the assumptions of [razaviyayn2013unified, Proposition 1] are satisfied. Furthermore, notice that the iterate (6) is obtained by minimizing this surrogate at each step. Therefore, [razaviyayn2013unified, Theorem 1] implies that every limit point of the algorithm is a stationary point.

Numerical Experiment 2: To evaluate the performance of the algorithm defined in (6), we repeat Numerical Experiment 1 with the proposed iterative method (6). Figure 2 shows the performance of the proposed method. As can be seen from the table, the regularized method still fails to recover the ground truth parameters in many scenarios. More specifically, although the regularization term enforces (4), it changes the likelihood landscape and hence introduces new spurious local optima.

Fig. 2: Modified EM based on regularized MLE (6).

In Numerical Experiment 2, we observed that many of the spurious local optima are tied to a fixed value of the regularization coefficient. In other words, after getting stuck in a spurious local optimum, changing the value of the coefficient helps us escape from that local optimum. Notice that the globally optimal parameter is a solution of (5) for any value of the coefficient. Motivated by this observation, we consider the following objective function:

(7)

where the expectation is taken with respect to some continuous distribution over the regularization coefficient. The idea behind this objective is that each sampled value of the coefficient leads to a different set of spurious local optima. However, if a point is a fixed point of the EM-type update for every value of the coefficient, it must be a stationary point of the MLE objective and it should also satisfy the first moment condition (4). Based on this objective function, we propose Algorithm 1 for estimating the ground truth parameters.

1 Input: Numbers of outer and inner iterations, a continuous distribution for the regularization coefficient, initial estimate of the component means
2 Output: Final estimate of the component means
3 for each outer iteration do
4       Sample a regularization coefficient from the given distribution
5       for each inner iteration do
6              Update the current estimate using the regularized iterate (6) with the sampled coefficient
7       end for
8 end for
return the final estimate of the component means
Algorithm 1 Stochastic multi-objective EM
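For readers who prefer code, the following Python sketch mirrors the structure of Algorithm 1 under explicit assumptions: components are spherical unit-variance Gaussians with equal weights, the regularization coefficient is resampled from a user-supplied distribution at every outer iteration, and the inner update is a gradient-style step on a sample estimate of the penalized objective; the paper's exact iterate (6) is not reproduced here, and the names stochastic_multiobjective_em, coeff_dist, and the step size are illustrative choices.

```python
import numpy as np
from scipy.special import logsumexp

def responsibilities(x, mu):
    """Posterior component probabilities for an equally weighted,
    unit-variance spherical GMM."""
    log_r = -0.5 * np.sum((x[:, None, :] - mu[None, :, :]) ** 2, axis=2)
    return np.exp(log_r - logsumexp(log_r, axis=1, keepdims=True))

def stochastic_multiobjective_em(x, mu0, coeff_dist, n_outer=50, n_inner=10, step=0.5):
    """Structural sketch of Algorithm 1: resample the regularization
    coefficient each outer iteration, then take a few penalized update
    steps on the component means (the gradient-style inner update is an
    assumption, not the paper's exact iterate (6))."""
    mu = mu0.copy()
    K = mu.shape[0]
    for _ in range(n_outer):
        rho = coeff_dist()                          # sample the regularization coefficient
        for _ in range(n_inner):
            r = responsibilities(x, mu)             # E-step-like weighting
            grad_ll = (r[:, :, None] * (x[:, None, :] - mu[None, :, :])).mean(axis=0)
            mu_bar = mu.mean(axis=0)                # the penalty pushes this toward zero
            mu += step * (grad_ll - (rho / K) * mu_bar)
    return mu

# Example usage with a hypothetical coefficient distribution (uniform on [0, 2]):
# rng = np.random.default_rng(0)
# mu_hat = stochastic_multiobjective_em(x, mu0, lambda: rng.uniform(0.0, 2.0))
```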

Numerical Experiment 3: To evaluate the performance of Algorithm 1, we repeat the data generating procedure of Numerical Experiment 1 and run Algorithm 1 on the generated data. Figure 3 shows the performance of this algorithm. As can be seen from this figure, the proposed method significantly improves the percentage of times that a random initialization converges to the ground truth parameters; it recovers the globally optimal parameters for a much larger fraction of the initializations than the naïve EM algorithm (compare Fig. 1 and Fig. 3).

Remark: While the results in this section are presented only for the GMM, we have observed similar behavior for the LMM. These results are omitted due to space limitations.

Fig. 3: Performance of Stochastic multi-objective EM.

V Conclusion

In this paper, we first studied the convergence behavior of the EM algorithm for the equally weighted Laplacian mixture model with two components and showed that the EM algorithm with random initialization converges to the ground truth distribution with probability one. Moreover, we revisited the landscape of equally weighted mixture models with more than two components. Based on our numerical experiments, we proposed a modified EM approach which significantly improves the probability of recovering the ground truth parameters.

References
