Accelerated Flow for Probability Distributions

Amirhossein Taghvaei & Prashant G. Mehta
Coordinated Science Laboratory, University of Illinois at Urbana-Champaign
Abstract

This paper presents a methodology and numerical algorithms for constructing accelerated gradient flows on the space of probability distributions. In particular, we extend the recent variational formulation of accelerated gradient methods in [23] from vector-valued variables to probability distributions. The variational problem is modeled as a mean-field optimal control problem. The maximum principle of optimal control theory is used to derive Hamilton's equations for the optimal gradient flow. The Hamilton's equations are shown to achieve the accelerated form of density transport from any initial probability distribution ρ_0 to a target probability distribution ρ_∞. A quantitative estimate on the asymptotic convergence rate is provided based on a Lyapunov function construction, when the objective functional is displacement convex. Two numerical approximations are presented to implement the Hamilton's equations as a system of N interacting particles. The continuous limit of Nesterov's algorithm is shown to be a special case with N = 1. The algorithm is illustrated with numerical examples.

1 Introduction

Optimization on the space of probability distributions is important to a number of machine learning models including variational inference [4], generative models [13, 2], and policy optimization in reinforcement learning [22]. A number of recent studies have considered solution approaches to these problems based upon a construction of gradient flow on the space of probability distributions [24, 17, 12, 8, 20, 6]. Such constructions are useful for convergence analysis as well as development of numerical algorithms.

In this paper, we propose a methodology and numerical algorithms that achieve accelerated gradient flows on the space of probability distributions. The proposed numerical algorithms are related to, yet distinct from, the accelerated stochastic gradient descent [15] and Hamiltonian Markov chain Monte-Carlo (MCMC) algorithms [19, 7]. The proposed methodology extends the variational formulation of [23] from vector-valued variables to probability distributions. The original formulation of [23] was used to derive and analyze the convergence properties of a large class of accelerated optimization algorithms, the most significant of which is the continuous-time limit of Nesterov's algorithm [21]. In this paper, this limit is referred to as the Nesterov ordinary differential equation (ODE).

The extension proposed in our work is based upon a generalization of the formula for the Lagrangian in [23]: (i) the kinetic energy term is replaced with the expected value of kinetic energy; and (ii) the potential energy term is replaced with a suitably defined functional on the space of probability distributions. The variational problem is to obtain a trajectory in the space of probability distributions that minimizes the action integral of the Lagrangian.

The variational problem is modeled as a mean-field optimal control problem. The maximum principle of optimal control theory is used to derive Hamilton's equations, which represent the first order optimality conditions. The Hamilton's equations provide a generalization of the Nesterov ODE to the space of probability distributions. A candidate Lyapunov function is proposed for the convergence analysis of the solution of the Hamilton's equations. In this way, quantitative estimates on the convergence rate are obtained for the case when the objective functional is displacement convex [18]. Table 1 provides a summary of the relationship between the original variational formulation in [23] and the extension proposed in this paper.

We also consider the important special case when the objective functional is the relative entropy functional defined with respect to a target probability distribution ρ_∞. In this case, the accelerated gradient flow is shown to be related to the continuous limit of the Hamiltonian Monte-Carlo algorithm [7] (Remark 1). The Hamilton's equations are finite-dimensional for the special case when the initial and the target probability distributions are both Gaussian. In this case, the mean evolves according to the Nesterov ODE. For the general case, the Lyapunov function-based convergence analysis applies when the target distribution is log-concave.

As a final contribution, the proposed methodology is used to obtain a numerical algorithm. The algorithm is an interacting particle system that empirically approximates the distribution with a finite but large number of particles. The difficult part of this construction is the approximation of the interaction term between particles. For this purpose, two types of approximations are described: (i) a Gaussian approximation which is asymptotically (as N → ∞) exact in Gaussian settings; and (ii) a diffusion map approximation which is computationally more demanding but asymptotically exact for a general class of distributions.

The outline of the remainder of this paper is as follows: Sec. 2 provides a brief review of the variational formulation in [23]. The proposed extension to the space of probability distributions appears in Sec. 3, where the main result is also described. The numerical algorithm, along with the results of numerical experiments, appears in Sec. 4. Comparisons with MCMC and Hamiltonian MCMC are also described. The conclusions appear in Sec. 5.

Notation: The gradient and divergence operators are denoted as ∇ and ∇· respectively. With multiple variables, ∇_x denotes the gradient with respect to the variable x. Therefore, the divergence of a vector field U with respect to x is denoted as ∇·U. The space of absolutely continuous probability measures on ℝ^d with finite second moments is denoted by P_2(ℝ^d). The Wasserstein gradient and the Gâteaux derivative of a functional F(ρ) are denoted as ∇_ρ F and ∂F/∂ρ respectively (see Appendix C for definitions). The probability distribution of a random variable Z is denoted as Law(Z).

                     Vector                                             Probability distribution
State-space          ℝ^d                                                P_2(ℝ^d)
Objective function   f(x)                                               F(ρ)
Lagrangian           e^{γ_t+α_t}((1/2)|e^{−α_t}u|² − e^{β_t}f(x))       e^{γ_t+α_t}(E[(1/2)|e^{−α_t}U|²] − e^{β_t}F(ρ))
Lyapunov funct.      (1/2)|x + e^{−γ_t}y − x̄|² + e^{β_t}(f(x) − f(x̄))   (1/2)E[|X + e^{−γ_t}Y − T(X)|²] + e^{β_t}(F(ρ) − F(ρ̄))
Table 1: Summary of the variational formulations for vectors and probability distributions.

2 Review of the variational formulation of [23]

The basic problem is to minimize a smooth convex function f on ℝ^d. The standard form of the gradient descent algorithm for this problem is an ODE:

dX_t/dt = −∇f(X_t),  X_0 = x_0   (1)

Accelerated forms of this algorithm are obtained based on a variational formulation due to [23]. The formulation is briefly reviewed here using an optimal control formalism. The Lagrangian is defined as

L(x, u, t) := e^{γ_t+α_t} ( (1/2)|e^{−α_t} u|² − e^{β_t} f(x) )   (2)

where t is the time, x ∈ ℝ^d is the state, u ∈ ℝ^d is the velocity or control input, and the time-varying parameters satisfy the following scaling conditions: α_t = log p − log t, β_t = p log t + log C, and γ_t = p log t, where p > 0 and C > 0 are constants.

The variational problem is

Minimize over u:  ∫_0^∞ L(X_t, u_t, t) dt   (3)
Subject to:  dX_t/dt = u_t,  X_0 = x_0

The Hamiltonian function is

H(x, y, t) := max_u { y·u − L(x, u, t) } = (1/2) e^{α_t−γ_t} |y|² + e^{α_t+β_t+γ_t} f(x)   (4)

where y ∈ ℝ^d is the dual variable and y·u denotes the dot product between the vectors y and u.
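For the reader's convenience, the Legendre-transform computation behind (4) is spelled out below; this is an editorial reconstruction of a routine step, using only the definitions above.

```latex
% Editorial reconstruction of the Legendre transform yielding (4).
\begin{align*}
H(x,y,t) &= \max_{u \in \mathbb{R}^d} \Big\{ y \cdot u
   - e^{\alpha_t+\gamma_t}\Big( \tfrac{1}{2}\,\big|e^{-\alpha_t} u\big|^2
   - e^{\beta_t} f(x) \Big) \Big\} \\
&= \tfrac{1}{2}\, e^{\alpha_t-\gamma_t}\, |y|^2
   + e^{\alpha_t+\beta_t+\gamma_t}\, f(x),
\qquad \text{attained at } u^\ast = e^{\alpha_t-\gamma_t}\, y .
\end{align*}
```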

According to the Pontryagin Maximum Principle, the optimal control u*_t = e^{α_t−γ_t} Y_t. The resulting Hamilton's equations are

dX_t/dt = +∇_y H(X_t, Y_t, t) = e^{α_t−γ_t} Y_t,  X_0 = x_0   (5a)
dY_t/dt = −∇_x H(X_t, Y_t, t) = −e^{α_t+β_t+γ_t} ∇f(X_t),  Y_0 = y_0   (5b)

The system (5) is an example of an accelerated gradient descent algorithm. Specifically, if the parameters are defined using p = 2 and C = 1/4, one obtains the continuous-time limit of Nesterov's accelerated algorithm, d²X_t/dt² + (3/t) dX_t/dt + ∇f(X_t) = 0. It is referred to as the Nesterov ODE in this paper.

For this system, a Lyapunov function is as follows:

V(t) := (1/2)|X_t + e^{−γ_t} Y_t − x̄|² + e^{β_t}(f(X_t) − f(x̄))   (6)

where x̄ := argmin_x f(x). It is shown in [23] that, upon differentiating along the solution trajectory, dV/dt(t) ≤ 0. This yields the following convergence rate:

f(X_t) − f(x̄) ≤ V(0) e^{−β_t} = V(0)/(C t^p)  for all t > 0   (7)
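As a quick sanity check of (5) and (7), the following self-contained Python sketch integrates the Hamilton's equations with p = 2, C = 1/4; the quadratic test function, step size, and horizon are editorial choices, and a symplectic-Euler step is used in place of a more sophisticated integrator.

```python
import numpy as np

# Integrates Hamilton's equations (5) with p = 2, C = 1/4, for which
# e^{alpha_t - gamma_t} = p * t**(-p-1) and
# e^{alpha_t + beta_t + gamma_t} = C * p * t**(2*p - 1).
p, C = 2.0, 0.25
f      = lambda x: 0.5 * float(np.sum(x ** 2))   # test function, minimized at x_bar = 0
grad_f = lambda x: x

dt = 1e-4
x, y = np.array([3.0, -2.0]), np.zeros(2)
t = 1.0                                # start at t > 0; the coefficients are singular at t = 0
for k in range(200_000):
    x = x + dt * p * t ** (-p - 1) * y                      # (5a)
    y = y - dt * C * p * t ** (2 * p - 1) * grad_f(x)       # (5b), using updated x (symplectic Euler)
    t += dt
    if (k + 1) % 50_000 == 0:
        print(f"t = {t:5.1f}   f(X_t) - f(x_bar) = {f(x):.3e}")
# Consistent with (7), the error envelope decays like O(1/t^p) = O(1/t^2).
```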

3 Variational formulation for probability distributions

3.1 Motivation and background

Let F : P_2(ℝ^d) → ℝ be a functional on the space of probability distributions. Consider the problem of minimizing F(ρ). The (Wasserstein) gradient flow with respect to F is

∂ρ_t/∂t = ∇·(ρ_t ∇_ρ F(ρ_t))   (8)

where ∇_ρ F is the Wasserstein gradient of F.

An important example is the relative entropy functional F(ρ) = D(ρ‖ρ_∞) := ∫ ρ(x) log(ρ(x)/ρ_∞(x)) dx, where ρ_∞ is referred to as the target distribution. The Wasserstein gradient of the relative entropy is given by ∇_ρ F(ρ) = ∇log(ρ/ρ_∞). The gradient flow

∂ρ_t/∂t = ∇·(ρ_t ∇f) + Δρ_t,  where f := −log ρ_∞   (9)

is the Fokker-Planck equation [16]. The gradient flow achieves the density transport from an initial probability distribution ρ_0 to the target (here, also equilibrium) probability distribution ρ_∞; and underlies the construction and the analysis of Markov chain Monte-Carlo (MCMC) algorithms. The simplest MCMC algorithm is the Langevin stochastic differential equation (SDE):

dX_t = −∇f(X_t) dt + √2 dB_t

where {B_t} is the standard Brownian motion in ℝ^d.
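For reference, a minimal Euler-Maruyama discretization of the Langevin SDE is sketched below; the target distribution, step size, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

# Euler-Maruyama discretization of dX_t = -grad f(X_t) dt + sqrt(2) dB_t.
# Target rho_inf = N(0, I_d), i.e. f(x) = 0.5*|x|^2 (illustrative choice).
rng = np.random.default_rng(0)
grad_f = lambda x: x

d, dt, n_steps = 2, 1e-2, 10_000
X = rng.normal(loc=5.0, scale=1.0, size=d)   # initial condition far from the target
for _ in range(n_steps):
    X = X - dt * grad_f(X) + np.sqrt(2 * dt) * rng.normal(size=d)
print("one approximate sample from rho_inf:", X)
```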

The main problem of this paper is to construct an accelerated form of the gradient flow (8). The proposed solution is based upon a variational formulation. As tabulated in Table 1, the solution represents a generalization of [23] from its original deterministic, finite-dimensional setting to the probabilistic, infinite-dimensional setting considered here.

The variational problem can be expressed in two equivalent forms: (i) The probabilistic form is described next in the main body of the paper; and (ii) The partial differential equation (PDE) form appears in the Appendix. The probabilistic form is stressed here because it represents a direct generalization of the Nesterov’s ODE and because it is closer to the numerical algorithm.

3.2 Probabilistic form of the variational problem

Consider the stochastic process {X_t}_{t≥0} that takes values in ℝ^d and evolves according to:

dX_t/dt = U_t,  X_0 ∼ ρ_0

where the control input U_t also takes values in ℝ^d, and ρ_0 ∈ P_2(ℝ^d) is the probability distribution of the initial condition X_0. It is noted that the randomness here comes only from the random initial condition.

Suppose the objective functional is of the form F(ρ) = E[F̂(X, ρ)] with X ∼ ρ, for a function F̂ : ℝ^d × P_2(ℝ^d) → ℝ; for the relative entropy functional of Sec. 3.4, F̂(x, ρ) = log(ρ(x)/ρ_∞(x)). The Lagrangian is defined as

L(ρ_t, U_t, t) := e^{γ_t+α_t} ( E[(1/2)|e^{−α_t} U_t|²] − e^{β_t} F(ρ_t) )   (10)

This formula is a natural generalization of the Lagrangian (2), and the parameters α_t, β_t, γ_t are defined exactly as in the finite-dimensional case. The stochastic optimal control problem is:

Minimize over U:  ∫_0^∞ L(ρ_t, U_t, t) dt   (11)
Subject to:  dX_t/dt = U_t,  X_0 ∼ ρ_0

where ρ_t is the probability density function of the random variable X_t.

The Hamiltonian function for this problem is given by [5, Sec. 6.2.3]:

H(x, ρ, y, u, t) := y·u − e^{γ_t+α_t} ( (1/2)|e^{−α_t}u|² − e^{β_t} F̂(x, ρ) )   (12)

where y ∈ ℝ^d is the dual variable.

3.3 Main result

Theorem 1.

Consider the variational problem (11).

  • The optimal control U_t = e^{α_t−γ_t} Y_t, where the optimal trajectory {(X_t, Y_t)} evolves according to the Hamilton's equations:

    dX_t/dt = e^{α_t−γ_t} Y_t,  X_0 ∼ ρ_0   (13a)
    dY_t/dt = −e^{α_t+β_t+γ_t} ∇_ρ F(ρ_t)(X_t),  Y_0 = ∇φ(X_0)   (13b)

    where φ is any convex function and ρ_t = Law(X_t).

  • Suppose also that the functional F is displacement convex and ρ̄ is its minimizer. Define the energy along the optimal trajectory

    E(t) := (1/2) E[|X_t + e^{−γ_t} Y_t − T_t(X_t)|²] + e^{β_t}(F(ρ_t) − F(ρ̄))   (14)

    where the map T_t is the optimal transport map from ρ_t to ρ̄. Suppose also that a technical assumption concerning the time-derivative ∂_t T_t of the optimal transport map holds; it is needed to control the corresponding term in dE/dt (see the proof in the Appendix). Then dE/dt(t) ≤ 0. Consequently, the following rate of convergence is obtained along the optimal trajectory:

    F(ρ_t) − F(ρ̄) ≤ E(0) e^{−β_t} = E(0)/(C t^p)  for all t > 0   (15)
Proof sketch.

The Hamilton's equations are derived using standard mean-field optimal control theory [5]. The Lyapunov function argument is based upon the variational inequality characterization of a displacement convex functional [1, Eq. 10.1.7]. The detailed proof appears in the Appendix. We expect that the technical assumption is not necessary; this is the subject of continuing work. ∎

3.4 Relative entropy as the functional

In the remainder of this paper, we assume that the functional is the relative entropy F(ρ) = D(ρ‖ρ_∞), where ρ_∞ is a given target probability distribution. In this case the Hamilton's equations are given by

dX_t/dt = e^{α_t−γ_t} Y_t,  X_0 ∼ ρ_0   (16a)
dY_t/dt = −e^{α_t+β_t+γ_t} ( ∇f(X_t) + ∇log ρ_t(X_t) ),  Y_0 = ∇φ(X_0)   (16b)

where f := −log ρ_∞ and ρ_t = Law(X_t). Moreover, if f is convex (or equivalently, ρ_∞ is log-concave), then F is displacement convex with the unique minimizer at ρ̄ = ρ_∞, and the convergence estimate (15) is given by D(ρ_t‖ρ_∞) ≤ E(0)/(C t^p).
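The passage from (13b) to (16b) uses the Wasserstein gradient of the relative entropy; the standard computation is spelled out below (an editorial reconstruction, with ρ_∞ ∝ e^{−f}).

```latex
% Wasserstein gradient of F(rho) = D(rho || rho_inf), with rho_inf \propto e^{-f}.
\begin{align*}
\frac{\partial F}{\partial \rho}(\rho) &= \log\frac{\rho}{\rho_\infty} + 1, \\
\nabla_\rho F(\rho) \;=\; \nabla\,\frac{\partial F}{\partial \rho}(\rho)
  &= \nabla \log \rho - \nabla \log \rho_\infty
   \;=\; \nabla \log \rho + \nabla f .
\end{align*}
% Substituting this expression into (13b) yields (16b).
```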

Remark 1.

The Hamilton's equations (16) with the relative entropy functional are related to the under-damped Langevin equation [7]. The difference is that the deterministic interaction term ∇log ρ_t(X_t) in (16) is replaced with a random Brownian motion term in the under-damped Langevin equation. A more detailed comparison appears in Appendix D.

3.5 Quadratic Gaussian case

Suppose the initial distribution ρ_0 and the target distribution ρ_∞ are both Gaussian, denoted as N(m_0, Σ_0) and N(m̄, Σ̄), respectively. This is equivalent to the objective function f being quadratic, of the form f(x) = (1/2)(x − m̄)ᵀ Σ̄⁻¹ (x − m̄). Therefore, this problem is referred to as the quadratic Gaussian case. The following Proposition shows that the mean of the stochastic process evolves according to the Nesterov ODE (5):

Proposition 1.

(Quadratic Gaussian case) Consider the variational problem (11) for the quadratic Gaussian case. Then

  • The stochastic process {X_t} is a Gaussian process. The Hamilton's equations are given by:

    dX_t/dt = e^{α_t−γ_t} Y_t
    dY_t/dt = −e^{α_t+β_t+γ_t} ( Σ̄⁻¹(X_t − m̄) − Σ_t⁻¹(X_t − m_t) )

    where m_t and Σ_t are the mean and the covariance of X_t.

  • Upon taking the expectation of both sides, and denoting n_t := E[Y_t],

    dm_t/dt = e^{α_t−γ_t} n_t
    dn_t/dt = −e^{α_t+β_t+γ_t} Σ̄⁻¹ (m_t − m̄)

    which is identical to Nesterov ODE (5) with the quadratic objective function f.
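The following sketch (our construction; all numerical choices are illustrative) integrates the scalar quadratic Gaussian dynamics with the exact Gaussian interaction term and confirms numerically that the particle mean tracks the Nesterov ODE, as asserted by Proposition 1.

```python
import numpy as np

# Scalar quadratic Gaussian case: target N(mbar, sbar), f(x) = (x - mbar)^2 / (2*sbar).
# Particles follow (16) with the exact Gaussian interaction -(x - m_t)/Sigma_t; by
# Proposition 1 their empirical mean should track the Nesterov ODE for f.
rng = np.random.default_rng(1)
p, C, mbar, sbar = 2.0, 0.25, 1.0, 0.5
N, dt = 1000, 1e-4
X = rng.normal(-2.0, 1.0, size=N)      # X_0 ~ N(-2, 1)
Y = np.zeros(N)
m, n = X.mean(), 0.0                   # mean dynamics (Nesterov ODE in Hamiltonian form)
t = 1.0
for _ in range(150_000):
    a = p * t ** (-p - 1)                                # e^{alpha_t - gamma_t}
    b = C * p * t ** (2 * p - 1)                         # e^{alpha_t + beta_t + gamma_t}
    X = X + dt * a * Y
    grad  = (X - mbar) / sbar                            # grad f
    inter = -(X - X.mean()) / X.var()                    # -Sigma_t^{-1}(x - m_t)
    Y = Y - dt * b * (grad + inter)
    m = m + dt * a * n
    n = n - dt * b * (m - mbar) / sbar
    t += dt
print("particle mean:", X.mean(), "  Nesterov ODE mean:", m)   # the two agree
```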

4 Numerical algorithm

The proposed numerical algorithm is based upon an interacting particle implementation of the Hamilton's equations (16). Consider a system of N particles {(X_t^i, Y_t^i)}_{i=1}^N that evolve according to:

dX_t^i/dt = e^{α_t−γ_t} Y_t^i,  X_0^i ∼ ρ_0   (17a)
dY_t^i/dt = −e^{α_t+β_t+γ_t} ( ∇f(X_t^i) + I_t^{(N)}(X_t^i) )   (17b)

The interaction term I_t^{(N)} is an empirical approximation of the term ∇log ρ_t in (16). We propose two types of empirical approximations as follows:

1. Gaussian approximation: Suppose the density ρ_t is approximated as a Gaussian N(m_t, Σ_t). In this case, ∇log ρ_t(x) = −Σ_t⁻¹(x − m_t). This motivates the following empirical approximation of the interaction term:

I_t^{(N)}(x) = −(Σ_t^{(N)})⁻¹ (x − m_t^{(N)})   (18)

where m_t^{(N)} := (1/N) Σ_{i=1}^N X_t^i is the empirical mean and Σ_t^{(N)} := (1/(N−1)) Σ_{i=1}^N (X_t^i − m_t^{(N)})(X_t^i − m_t^{(N)})ᵀ is the empirical covariance.

Even though the approximation is asymptotically (as N → ∞) exact only under the Gaussian assumption, it may be used in more general settings, particularly when the density ρ_t is unimodal. The situation is analogous to the (Bayesian) filtering problem, where an ensemble Kalman filter is used as an approximate solution for non-Gaussian distributions [11].
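A direct implementation of the Gaussian approximation (18) is sketched below; the function name and array conventions are ours.

```python
import numpy as np

def gaussian_interaction(X):
    """Empirical Gaussian approximation (18) of grad log rho at the particles.

    X: (N, d) array of particles. Returns the (N, d) array whose i-th row is
    I(X^i) = -Sigma_N^{-1} (X^i - m_N), with empirical mean m_N and covariance Sigma_N.
    """
    m = X.mean(axis=0)
    Xc = X - m                                  # centered particles
    Sigma = Xc.T @ Xc / (X.shape[0] - 1)        # empirical covariance, (d, d)
    return -np.linalg.solve(Sigma, Xc.T).T      # -Sigma^{-1}(X^i - m), row-wise
```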

2. Diffusion map approximation: This is based upon the diffusion map approximation of the weighted Laplacian operator [9, 14]. For a function φ, the weighted Laplacian is defined as Δ_ρ φ := (1/ρ) ∇·(ρ ∇φ). Denote by e(x) = x the (vector-valued) coordinate function on ℝ^d. It is a straightforward calculation to show that Δ_ρ e = ∇log ρ. This allows one to use the diffusion map approximation of the weighted Laplacian to approximate the interaction term as follows:

I_t^{(N)}(x) = (1/ε) ( Σ_{j=1}^N k_ε(x, X_t^j) X_t^j / Σ_{j=1}^N k_ε(x, X_t^j) − x )   (19)

where the kernel k_ε is constructed empirically in terms of the Gaussian kernel g_ε(x, y) := exp(−|x − y|²/(4ε)). The parameter ε is referred to as the kernel bandwidth. The approximation is asymptotically exact as N → ∞ and ε → 0. The approximation error is of order O(ε) + O(N^{−1/2} ε^{−(2+d)/4}), where the first term is referred to as the bias error and the second term is referred to as the variance error [14]. The variance error is the dominant term in the error for small values of ε, whereas the bias error is the dominant term for large values of ε (see Figure 3).
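The sketch below implements the diffusion map approximation (19); the kernel normalization follows the standard diffusion-map construction [9], and the constant factors are a best-effort reading of (19) rather than a verbatim transcription.

```python
import numpy as np

def diffusion_map_interaction(X, eps):
    """Diffusion map approximation (19) of grad log rho at the particles.

    X: (N, d) particles; eps: kernel bandwidth.
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise |x_i - x_j|^2
    g = np.exp(-sq / (4 * eps))                                  # Gaussian kernel g_eps
    norm = np.sqrt(g.sum(axis=1))
    k = g / (norm[:, None] * norm[None, :])                      # data-dependent kernel k_eps
    w = k / k.sum(axis=1, keepdims=True)                         # Markov (row) normalization
    return (w @ X - X) / eps                                     # (weighted mean - x) / eps
```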

The resulting interacting particle algorithm is tabulated in Algorithm 1. The symplectic method proposed in [3] is used to carry out the numerical integration. The algorithm is applied to two examples as described in the following sections.

Input: ρ_0, ∇f, N, time step Δt, number of steps K, kernel bandwidth ε, parameters p, C
Output: particles {X_k^i : i = 1, …, N, k = 0, …, K}
  Initialize X_0^i ∼ ρ_0, Y_0^i = 0, for i = 1, …, N
  Compute I_0^{(N)} with (18) or (19)
  for k = 0 to K − 1 do
     Propagate {(X_k^i, Y_k^i)} → {(X_{k+1}^i, Y_{k+1}^i)} by one step of size Δt of the symplectic integrator of [3] applied to (17)
     Compute I_{k+1}^{(N)} with (18) or (19)
  end for
Algorithm 1 Interacting particle implementation of the accelerated gradient flow
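Putting the pieces together, a runnable sketch of Algorithm 1 follows. We substitute a standard leapfrog-style symplectic step for the integrator of [3] (whose precise scheme is not reproduced here), reuse `gaussian_interaction` from the sketch after (18), and all parameter values are illustrative.

```python
import numpy as np

def accelerated_flow(grad_f, X0, interaction, dt=1e-3, K=5000, p=2.0, C=0.25, t0=1.0):
    """Sketch of Algorithm 1: interacting-particle discretization of (17).

    grad_f: gradient of f = -log rho_inf; X0: (N, d) initial particles;
    interaction: callable approximating grad log rho_t, e.g. (18) or (19)
    (for (19), pass e.g. lambda X: diffusion_map_interaction(X, eps=0.1)).
    A leapfrog-style step stands in for the symplectic integrator of [3].
    """
    X, Y = X0.copy(), np.zeros_like(X0)
    t, I = t0, interaction(X0)
    for _ in range(K):
        b = C * p * t ** (2 * p - 1)           # e^{alpha_t + beta_t + gamma_t}
        a = p * t ** (-p - 1)                  # e^{alpha_t - gamma_t}
        Y -= 0.5 * dt * b * (grad_f(X) + I)    # half-step in Y
        X += dt * a * Y                        # full step in X
        I = interaction(X)                     # refresh interaction term, (18) or (19)
        Y -= 0.5 * dt * b * (grad_f(X) + I)    # half-step in Y
        t += dt
    return X

# Example: bimodal target rho_inf = 0.5*N(-2,1) + 0.5*N(+2,1) (cf. Sec. 4.2).
rng = np.random.default_rng(2)
def grad_f(X):  # -grad log rho_inf for the equal-weight mixture above
    g1 = np.exp(-0.5 * (X + 2) ** 2)
    g2 = np.exp(-0.5 * (X - 2) ** 2)
    return ((X + 2) * g1 + (X - 2) * g2) / (g1 + g2)

X_final = accelerated_flow(grad_f, rng.normal(0, 1, size=(500, 1)), gaussian_interaction)
```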
Remark 2.

For the case where there is only one particle (N = 1), the interaction term is zero and the system (17) reduces to the Nesterov ODE (5).

Remark 3.

(Comparison with density estimation) The diffusion map approximation algorithm is conceptually different from an explicit density estimation-based approach. A basic density estimation is to approximate ρ_t(x) ≈ ρ_t^{(N)}(x) := (1/N) Σ_{j=1}^N g_ε(x − X_t^j), where g_ε is the Gaussian kernel. Using such an approximation, the interaction term ∇log ρ_t is approximated as

I_t^{(N)}(x) = ∇log ρ_t^{(N)}(x) = Σ_j ∇g_ε(x − X_t^j) / Σ_j g_ε(x − X_t^j)   (20)

Despite the apparent similarity of the two formulae, (19) for the diffusion map approximation and (20) for density estimation, the nature of the two approximations is different. The difference arises because the kernel k_ε in (19) is data-dependent whereas the kernel g_ε in (20) is not. While both approximations are exact in the asymptotic limit as N → ∞ and ε → 0, they exhibit different convergence rates. Numerical experiments presented in Figure 3-(d) show that the diffusion map approximation has a much smaller variance for intermediate values of ε. A theoretical understanding of the difference is the subject of continuing work.
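For comparison, the density-estimation counterpart (20) is implemented below (our transcription, same array conventions as the earlier sketches); note that only the kernel normalization differs from the diffusion map code.

```python
import numpy as np

def density_estimation_interaction(X, eps):
    """Kernel density estimation-based approximation (20) of grad log rho.

    With rho^(N)(x) = (1/N) sum_j g_eps(x - X^j) and Gaussian g_eps of bandwidth eps,
    grad log rho^(N) at x reduces to a kernel-weighted mean shift divided by 2*eps.
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    g = np.exp(-sq / (4 * eps))                 # Gaussian kernel (data-independent)
    w = g / g.sum(axis=1, keepdims=True)
    return (w @ X - X) / (2 * eps)              # grad log rho^(N) at the particles
```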

4.1 Gaussian Example

Consider the Gaussian example as described in Sec. 3.5. The simulation results for the scalar (d = 1) case, with a Gaussian initial distribution ρ_0 and a Gaussian target distribution ρ_∞, are depicted in Figure 1-(a)-(b). The result numerically verifies the convergence rate derived in Theorem 1 for the case where the target distribution is Gaussian.

Figure 1: Simulation result for the Gaussian case (Example 4.1): (a) The time traces of the particles; (b) The KL-divergence as a function of time.
Figure 2: Simulation result for the non-Gaussian case (Example 4.2): (a) The time traces of the particles; (b) The KL-divergence as a function of time.

4.2 Non-Gaussian example

This example involves a non-Gaussian target distribution which is a mixture of two one-dimensional Gaussians. The simulation results are depicted in Figure 2-(a)-(b). The numerical parameters are the same as in Example 4.1. The interaction term is approximated using the diffusion map approximation (19). The numerical result depicted in Figure 2-(a) shows that the diffusion map algorithm converges to the mixture of Gaussians target distribution. The result depicted in Figure 2-(b) suggests that the convergence rate of Theorem 1 also appears to hold for this non-log-concave target distribution. Theoretical justification of this is the subject of continuing work.

4.3 Comparison with MCMC and HMCMC

This section contains numerical experiments comparing the performance of the accelerated Algorithm 1, using the diffusion map (DM) approximation (19) and the density estimation (DE)-based approximation (20), with the Markov chain Monte-Carlo (MCMC) algorithm studied in [10] and the Hamiltonian MCMC (HMCMC) algorithm studied in [7].

We consider the problem setting of the mixture of Gaussians as in Example 4.2. All algorithms are simulated with a fixed step-size for a fixed number of iterations. The performance is measured by computing the mean-squared error (m.s.e.) in estimating the expectation of a test function ψ. The mean-squared error at the k-th iteration is computed by averaging the error over M independent runs:

m.s.e._k = (1/M) Σ_{m=1}^M ( (1/N) Σ_{i=1}^N ψ(X_k^{i,m}) − E_{ρ_∞}[ψ] )²   (21)

where X_k^{i,m} denotes the i-th particle at the k-th iteration in the m-th run.
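A Monte Carlo estimate of (21) can be computed as below; it reuses `accelerated_flow`, `grad_f`, and `gaussian_interaction` from the Algorithm 1 sketch, and the test function ψ, the number of runs, and the exact expectation are our illustrative choices.

```python
import numpy as np

# Estimate the m.s.e. (21): average, over M independent runs, of the squared error
# of the particle estimate of E_{rho_inf}[psi].
psi = lambda x: x        # test function (our choice)
exact = 0.0              # E_{rho_inf}[psi] = 0 for the symmetric mixture of Sec. 4.2
M = 20
errs = []
for m in range(M):
    rng = np.random.default_rng(m)
    X = accelerated_flow(grad_f, rng.normal(0, 1, size=(500, 1)), gaussian_interaction)
    errs.append((psi(X).mean() - exact) ** 2)
print("m.s.e. =", np.mean(errs))
```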

The numerical results are depicted in Figure 3. Figure 3-(a) depicts the m.s.e as a function of the number of particles N. It is observed that the accelerated Algorithm 1 with the diffusion map approximation admits an order of magnitude better m.s.e for the same number of particles. It is also observed that the m.s.e decreases rapidly for intermediate values of N before saturating for large values of N, where the bias term dominates (see the discussion following Eq. (19)).

Figure 3-(b) depicts the m.s.e as a function of the number of iterations for a fixed number of particles. It is observed that the accelerated Algorithm 1 displays the quickest convergence amongst the algorithms tested.

Figure 3-(c) depicts the average computational time per iteration as a function of the number of samples N. The computational time of the diffusion map approximation scales as O(N²), because it involves computing the N × N kernel matrix, while the computational cost of the MCMC and HMCMC algorithms scales as O(N). The computational complexity may be improved by (i) exploiting the sparsity structure of the kernel matrix; (ii) sub-sampling the particles in computing the empirical averages; and (iii) adaptively updating the kernel matrix according to a certain error criterion.

Finally, we provide a comparison between the diffusion map approximation (19) and the density estimation-based approximation (20): Figure 3-(d) depicts the m.s.e for these two approximations as a function of the kernel bandwidth ε for a fixed number of particles. For very large and for very small values of ε, where the bias and the variance dominate the error, respectively, the two algorithms have similar m.s.e. However, for intermediate values of ε, the diffusion map approximation has smaller variance, and thus lower m.s.e.

5 Conclusion and directions for future work

The main contribution of this paper is to extend the variational formulation of [23] to obtain theoretical results and numerical algorithms for accelerated gradient flow in the space of probability distributions. In continuous-time settings, bounds on the convergence rate are derived based on a Lyapunov function argument. Two numerical algorithms based upon an interacting particle representation are presented and illustrated with examples. As has been the case in finite-dimensional settings, the theoretical framework is expected to be useful for the analysis and design of such algorithms. Some directions for future work include: (i) removing the technical assumption in the proof of Theorem 1; (ii) analysis of the convergence under the weaker assumption that the target distribution satisfies only a spectral gap condition; and (iii) analysis of the numerical algorithms in the finite-N and finite-ε cases.

Figure 3: Simulation-based comparison of the performance of the accelerated Algorithm 1, using the diffusion map (DM) approximation (19) and the density estimation (DE)-based approximation (20), with the MCMC and HMCMC algorithms: (a) the mean-squared error (m.s.e) (21) as a function of the number of samples N; (b) the m.s.e as a function of the number of iterations; (c) the average computational time per iteration as a function of the number of samples N; (d) m.s.e comparison between the diffusion map and the density estimation-based approaches as a function of the kernel bandwidth ε.

References

  • [1] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. Springer Science & Business Media, 2008.
  • [2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
  • [3] Michael Betancourt, Michael I. Jordan, and Ashia C. Wilson. On symplectic optimization. arXiv preprint arXiv:1802.03653, 2018.
  • [4] David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
  • [5] René Carmona and François Delarue. Probabilistic Theory of Mean Field Games with Applications I-II. Springer, 2017.
  • [6] Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, and Liqun Chen. A unified particle-optimization framework for scalable Bayesian sampling. arXiv preprint arXiv:1805.11659, 2018.
  • [7] Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, and Michael I. Jordan. Underdamped Langevin MCMC: A non-asymptotic analysis. arXiv preprint arXiv:1707.03663, 2017.
  • [8] Lénaïc Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. arXiv preprint arXiv:1805.09545, 2018.
  • [9] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
  • [10] Alain Durmus and Éric Moulines. High-dimensional Bayesian inference via the unadjusted Langevin algorithm. arXiv preprint arXiv:1605.01559, 2016.
  • [11] Geir Evensen. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dynamics, 53(4):343–367, 2003.
  • [12] Charlie Frogner and Tomaso Poggio. Approximate inference with Wasserstein gradient flows. arXiv preprint arXiv:1806.04542, 2018.
  • [13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
  • [14] Matthias Hein, Jean-Yves Audibert, and Ulrike von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8(Jun):1325–1368, 2007.
  • [15] Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, and Aaron Sidford. Accelerating stochastic gradient descent. arXiv preprint arXiv:1704.08227, 2017.
  • [16] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker-Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17, 1998.
  • [17] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems, pages 2378–2386, 2016.
  • [18] Robert J. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128(1):153–179, 1997.
  • [19] Radford M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11):2, 2011.
  • [20] Pierre H. Richemond and Brendan Maginnis. On Wasserstein reinforcement learning and the Fokker-Planck equation. arXiv preprint arXiv:1712.07185, 2017.
  • [21] Weijie Su, Stephen Boyd, and Emmanuel Candès. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems, pages 2510–2518, 2014.
  • [22] Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pages 1057–1063, 2000.
  • [23] Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. A variational perspective on accelerated methods in optimization. Proceedings of the National Academy of Sciences, page 201614734, 2016.
  • [24] Ruiyi Zhang, Changyou Chen, Chunyuan Li, and Lawrence Carin. Policy optimization as Wasserstein gradient flows. arXiv preprint arXiv:1808.03030, 2018.

Appendix A PDE formulation of the variational problem

An equivalent pde formulation is obtained by considering the stochastic optimal control problem (11) as a deterministic optimal control problem on the space of probability distributions. Specifically, the process {ρ_t} is a deterministic process that takes values in P_2(ℝ^d) and evolves according to the continuity equation

∂ρ_t/∂t + ∇·(ρ_t u_t) = 0

where u(·, t) is now a time-varying vector field. The Lagrangian is defined as:

L(ρ, u, t) := e^{γ_t+α_t} ( ∫ (1/2)|e^{−α_t} u(x)|² ρ(x) dx − e^{β_t} F(ρ) )   (22)

The optimal control problem is:

Minimize over u:  ∫_0^∞ L(ρ_t, u_t, t) dt   (23)
Subject to:  ∂ρ_t/∂t + ∇·(ρ_t u_t) = 0,  ρ_0 ∈ P_2(ℝ^d) given

The Hamiltonian function is

H(ρ, ψ, t) := ∫ (1/2) e^{α_t−γ_t} |∇ψ(x)|² ρ(x) dx + e^{α_t+β_t+γ_t} F(ρ)   (24)

where ψ is the dual variable and ⟨φ, ψ⟩ := ∫ φ(x) ψ(x) dx denotes the inner-product.

Appendix B Restatement of the main result and its proof

We restate Theorem 1 below which now includes the pde formulation as well.

Theorem 2.

Consider the variational problem (11)-(23).

  • For the probabilistic form (11) of the variational problem, the optimal control U_t = e^{α_t−γ_t} Y_t, where the optimal trajectory {(X_t, Y_t)} evolves according to the Hamilton's odes:

    dX_t/dt = e^{α_t−γ_t} Y_t,  X_0 ∼ ρ_0   (25a)
    dY_t/dt = −e^{α_t+β_t+γ_t} ∇_ρ F(ρ_t)(X_t),  Y_0 = ∇φ(X_0)   (25b)

    where φ is a convex function, and ρ_t = Law(X_t).

  • For the pde form (23) of the variational problem, the optimal control is u_t = e^{α_t−γ_t} ∇ψ_t, where the optimal trajectory {ρ_t} evolves according to the Hamilton's pdes:

    ∂ρ_t/∂t = −∇·(ρ_t e^{α_t−γ_t} ∇ψ_t)   (26a)
    ∂ψ_t/∂t = −(1/2) e^{α_t−γ_t} |∇ψ_t|² − e^{α_t+β_t+γ_t} (∂F/∂ρ)(ρ_t)   (26b)
  • The solutions of the two forms are equivalent in the following sense: ρ_t = Law(X_t) and Y_t = ∇ψ_t(X_t).

  • Suppose additionally that the functional F is displacement convex and ρ̄ is its minimizer. Define

    E(t) := (1/2) E[|X_t + e^{−γ_t} Y_t − T_t(X_t)|²] + e^{β_t}(F(ρ_t) − F(ρ̄))   (27)

    where the map T_t is the optimal transport map from ρ_t to ρ̄. Suppose also that the technical assumption of Theorem 1 concerning the time-derivative ∂_t T_t holds. Then dE/dt(t) ≤ 0. Consequently, the following rate of convergence is obtained along the optimal trajectory: F(ρ_t) − F(ρ̄) ≤ E(0) e^{−β_t}.

Proof.
  • The Hamiltonian function defined in (12) is equal to

    H(x, ρ, y, u, t) = y·u − e^{γ_t+α_t} ( (1/2)|e^{−α_t}u|² − e^{β_t} F̂(x, ρ) )

    after inserting the formula for the Lagrangian. According to the maximum principle in probabilistic form for (mean-field) optimal control problems (see [5, Sec. 6.2.3]), the optimal control law u* = argmax_u H = e^{α_t−γ_t} y and the Hamilton's equations are

    dX_t/dt = ∇_y H,  dY_t/dt = −∇_x H − E'[∂_μ H(X'_t, ρ_t, Y'_t, U'_t, t)(X_t)]

    where (X'_t, Y'_t, U'_t) are independent copies of (X_t, Y_t, U_t), E' denotes the expectation over these copies, and ∂_μ is the derivative with respect to the measure argument [5]. The derivatives

    ∇_x H = e^{α_t+β_t+γ_t} ∂_x F̂(x, ρ),  ∂_μ H(x', ρ, y', u', t)(x) = e^{α_t+β_t+γ_t} ∂_μ F̂(x', ρ)(x)

    It follows that

    dY_t/dt = −e^{α_t+β_t+γ_t} ( ∂_x F̂(X_t, ρ_t) + E'[∂_μ F̂(X'_t, ρ_t)(X_t)] ) = −e^{α_t+β_t+γ_t} ∇_ρ F(ρ_t)(X_t)

    where we used the definition of the Wasserstein gradient and the identity [5, Sec. 5.2.2 Example 3]

    ∇_ρ F(ρ)(x) = ∂_x F̂(x, ρ) + E'[∂_μ F̂(X', ρ)(x)],  X' ∼ ρ

  • The Hamiltonian function defined in (24) is equal to

    H(ρ, ψ, t) = ∫ (1/2) e^{α_t−γ_t} |∇ψ(x)|² ρ(x) dx + e^{α_t+β_t+γ_t} F(ρ)

    after inserting the formula for the Lagrangian. According to the maximum principle for the pde formulation of mean-field optimal control problems (see [5, Sec. 6.2.4]), the optimal control vector field is u_t = e^{α_t−γ_t} ∇ψ_t and the Hamilton's equations are:

    ∂ρ_t/∂t = −∇·(ρ_t e^{α_t−γ_t} ∇ψ_t),  ∂ψ_t/∂t = −(1/2) e^{α_t−γ_t} |∇ψ_t|² − e^{α_t+β_t+γ_t} (∂F/∂ρ)(ρ_t)

    Inserting the formula for the optimal vector field u_t concludes the result.

  • Consider the pair (ρ_t, ψ_t) defined from (26). The distribution ρ_t is identified with a stochastic process X_t such that Law(X_t) = ρ_t and dX_t/dt = e^{α_t−γ_t} ∇ψ_t(X_t). Then define Y_t := ∇ψ_t(X_t). Taking the time derivative shows that

    dY_t/dt = ∇(∂ψ_t/∂t)(X_t) + e^{α_t−γ_t} ∇²ψ_t(X_t) ∇ψ_t(X_t) = −e^{α_t+β_t+γ_t} ∇(∂F/∂ρ)(ρ_t)(X_t)

    where the second equality follows from (26b). Thus (X_t, Y_t) satisfies the Hamilton's odes (25), which establishes the equivalence.