Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning Both authors were supported by the NKFIH (National Research, Development and Innovation Office, Hungary) grant KH 126505 and the “Lendület” grant LP 2015-6 of the Hungarian Academy of Sciences. The authors thank Minh-Ngoc Tran for helpful discussions.

Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Learning 1

Abstract

Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) is a momentum version of stochastic gradient descent with properly injected Gaussian noise to find a global minimum. In this paper, non-asymptotic convergence analysis of SGHMC is given in the context of non-convex optimization, where subsampling techniques are used over an i.i.d dataset for gradient updates. Our results complement those of [RRT17] and improve on those of [GGZ18].

1 Introduction

Let be a probability space where all the random objects of this paper will be defined. The expectation of a random variable with values in a Euclidean space will be denoted by .

We consider the following optimization problem

(1)

and is a random element in some measurable space with an unknown probability law . The function is assumed continuously differentiable (for each ) but it can possibly be non-convex. Suppose that one has access to i.i.d samples drawn from , where is fixed. Our goal is to compute an approximate minimizer such that the population risk

is minimized, where the expectation is taken with respect to the training data and additional randomness generating .

Since the distribution of is unknown, we consider the empirical risk minimization problem

(2)

using the dataset

Stochastic gradient algorithms based on Langevin Monte Carlo have gained more attention in recent years. Two popular algorithms are Stochastic Gradient Langevin Dynamics (SGLD) and Stochastic Gradient Hamiltonian Monte Carlo (SGHMC). First, we summarize the use of SGLD in optimization, as presented in [RRT17]. Consider the overdamped Langevin stochastic differential equation

(3)

where is the standard Brownian motion in and is the inverse temperature parameter. Under suitable assumptions on , the SDE (3) admits the Gibbs measure as its unique invariant distribution. In addition, it is known that for sufficiently big , the Gibbs distribution concentrates around global minimizers of . Therefore, one can use the value of from (3), (or from its discretized counterpart SGLD), as an approximate solution to the empirical risk problem, provided that is large and temperature is low.

In this paper, we consider the underdamped (second-order) Langevin diffusion

(4)
(5)

where model the position and the momentum of a particle moving in a field of force with random force given by Gaussian noise. It is shown that under some suitable conditions for , the Markov process is ergodic and has a unique stationary distribution

where is the normalizing constant

It is easy to observe that the -marginal distribution of is the invariant distribution of (3). We consider the first order Euler discretization of (4), (5), also called Stochastic Gradient Hamiltonian Monte Carlo (SGHMC), given as follows

(6)
(7)

where is a step size parameter and is a sequence of i.i.d standard Gaussian random vectors in . The initial condition may be random, but independent of .

In certain contexts, the full knowledge of the gradient is not available, however, using the dataset , one can construct its unbiased estimates. In what follows, we adopt the general setting given by [RRT17]. Let be a measurable space, and such that for any ,

(8)

where is a random element in with probability law . Conditionally on , the SGHMC algorithm is defined by

(9)
(10)

where is a sequence of i.i.d. random elements in with law . We also assume from now on that are independent.

Our ultimate goal is to find approximate global minimizers to the problem (1). Let be the output of the algorithm (9),(10) after iterations, and be such that . The excess risk is decomposed as follows, see also [RRT17],

(11)

The remaining part of the present paper is about finding bounds for these errors. Section 2 summarizes technical conditions and the main results. Comparison of our contributions to previous studies is discussed in Section 3. Proofs are given in Section 4.

Notation and conventions. For , scalar product in is denoted by . We use to denote the Euclidean norm (where the dimension of the space may vary). denotes the Borel - field of . For any -valued random variable and for any , let us set . We denote by the set of with . The Wasserstein distance of order between two probability measures and on is defined by

(12)

where is the set of couplings of , see e.g. [Vil08]. For two -valued random variables and , we denote , where is the law of . We do not indicate in the notation and it may vary.

2 Asumptions and main results

The following conditions are required throughout the paper.

Assumption 2.1.

The function is continuously differentiable, takes non-negative values, and there are constants such that for any ,

Assumption 2.2.

There is such that, for each ,

Assumption 2.3 (Dissipative).

There exist constants such that

Assumption 2.4.

For each , it holds that and

Assumption 2.5.

There exists a constant such that for every ,

Assumption 2.6.

The law of the initial state satisfies

where is the Lyapunov function defined in (17) below.

Remark 2.7.

If the set of global minimizers is bounded, we can always redefine the function to be quadratic outside a compact set containing the origin while maintaining its minimizers. Hence, Assumption 2.3 can be satisfied in practice. Assumption 2.4 means that the estimated gradient is also Lipschitz when using the same training dataset. For example, at each iteration of SGHMC, we may sample uniformly with replacement a random minibatch of size . Then we can choose where are i.i.d random variables having distribution . The gradient estimate is thus

which is clearly unbiased and Assumption 2.4 will be satisfied whenever Assumptions 2.2 and 2.1 are in force. Assumption 2.5 controls the variance of the gradient estimate.

An auxiliary continuous time process is needed in the subsequent analysis. For a step size , denote by the scaled Brownian motion. Let be the solutions of

(13)
(14)

with initial condition where may be random but independent of .

Our first result tracks the discrepancy between the SGHMC algorithm (9), (10) and the auxiliary processes (13), (14).

Theorem 2.8.

Let . There exists a constant such that for all ,

(15)
Proof.

The proof of this theorem is given in Section 4.2. ∎

The following is the main result of the paper.

Theorem 2.9.

Let . Suppose that the SGHMC iterates are defined by (9), (10). The expected population risk can be bounded as

where

where are appropriate constants and is the metric defined in (20) below.

Proof.

The proof of this theorem is given in Section 4.3. ∎

Corollary 2.10.

Let We have

whenever

Proof.

From the proof of Theorem 2.9, or more precisely from (46), we need to choose and such that

First, we choose and so that and then

will hold for large enough. ∎

3 Related work and our contributions

Non-asymptotic convergence rate Langevin dynamics based algorithms for approximate sampling log-concave distributions are intensively studied in recent years. For example, overdamped Langevin dynamics are discussed in [WT11], [Dal17b], [DM16], [DK17], [DM17] and others. Recently, [BCM18] treats the case of non-i.i.d. data streams with a certain mixing property. Underdamped Langevin dynamics are examined in [CFG14], [Nea11], [CCBJ17], etc. Further analysis on HMC are discussed on [BBLG17], [Bet17]. Subsampling methods are applied to speed up HMC for large datasets, see [DQK17], [QKVT18].

The use of momentum to accelerate optimization methods are discussed intensively in literature, for example [AP16]. In particular, performance of SGHMC is experimentally proved better than SGLD in many applications, see [CDC15], [CFG14]. An important advantage of the underdamped SDE is that convergence to its stationary distribution is faster than that of the overdamped SDE in the -Wasserstein distance, as shown in [EGZ17].

Finding an approximate minimizer is similar to sampling distributions concentrate around the true minimizer. This well known connection gives rise to the study of simulated annealing algorithms, see [Hwa80], [Gid85], [Haj85], [CHS87], [HKS89], [GM91], [GM93]. Recently, there are many studies further investigate this connection by means of non asymptotic convergence of Langevin based algorithms and in stochastic non-convex optimization and large-scale data analysis, [CCG16], [Dal17a].

Relaxing convexity is a more challenging issue. In [CCAY18], the problem of sampling from a target distribution where is L-smooth everywhere and -strongly convex outside a ball of finite radius is considered. They provide upper bounds for the number of steps to be within a given precision level of the 1-Wasserstein distance between the HMC algorithm and the equilibrium distribution. In a similar setting, [MMS18] obtains bounds in both the and distances for overdamped Langevin dynamics with stochastic gradients. [XCZG18] studies the convergence of the SGLD algorithm and the variance reduced SGLD to global minima of nonconvex functions satisfying the dissipativity condition.

Our work continues these lines of research, the most similar setting to ours is the recent paper [GGZ18]. We summarize our contributions below:

  • Diffusion approximation. In Lemma 10 of [GGZ18], the upper bound for the 2-Wasserstein distance between the SGHMC algorithm at step and underdamped SDE at time is (up to constants) given by

    which depends on the number of iteration . Therefore obtaining a precision requires a careful choice of and even . By introducing the auxiliary SDEs (13), (14), we are able to achieve the rate

    see Theorem 2.8 for the case . This upper bound is better in the number of iterations and hence, improves Lemma 10 of [GGZ18]. Our analysis for variance of the algorithm is also different. The iteration does not accumulate mean squared errors, as the number of step goes to infinity.

  • Our proof for Theorem 2.8 is relatively simple and we do not need to adopt the techniques of [RRT17] which involve heavy functional analysis, e.g. the weighted Csiszár - Kullback - Pinsker inequalities in [BV05] is not needed.

  • If we consider the -Wasserstein distance for , in particular, when , Theorem 2.9 gives tighter bounds, compared to Theorem 2 of [GGZ18].

  • Dependence structure of the dataset in the sampling mechanism, can be arbitrary, see the proof of Theorem 2.8. The i.i.d assumption on dataset is used only for the generalization error. We could also incorporate non-i.i.d data in our analysis, see Remark 4.5, but this is left for further research.

4 Proofs

4.1 A contraction result

In this section, we recall a contraction result of [EGZ17]. First, it should be noticed that the constant and the function in their paper are and in the present paper, respectively. Here, the subscript stands for “contraction”. Using the upper bound of Lemma 5.1 for below, there exist constants small enough and such that

(16)

Therefore, Assumption 2.1 of [EGZ17] is satisfied, noting that and

We define the Lyapunov function

(17)

For any , we set

(18)
(19)

where are suitable positive constants to be fixed later and is continuous, non-decreasing concave function such that , is on for some constant with right-sided derivative and left-sided derivative and is constant on . For any two probability measures on , we define

(20)

Note that and are semimetrics but not necessarily metrics. A result from [EGZ17] is recalled below.

For a probability measure on , we denote by the law of when .

Theorem 4.1.

There exists a continuous non-decreasing concave function with such that for all probability measures on , and , we have

(21)

where the following relations hold:

The function is constant on , on with

and satisfies .

Proof.

From (5.15) of [EGZ17], we get

Furthermore, from the proof of Corollary 2.6 of [EGZ17], if ,

and if then

These bounds and Theorem 2.3 of [EGZ17] imply that

The proof is complete. ∎

It should be emphasized that , and consequently, contracts at the rate .

4.2 Proof of Theorem 2.8

Here, we summarize our approach. For a given step size , we divide the time axis into intervals of length . For each time step , we compare the SGHMC to the version with exact gradients relying on the Doob inequality, and then compare the later to the auxiliary continuous-time diffusion with the scaled Brownian motion. At this stage we reply on the contraction result from [EGZ17] and uniform boundedness of the Langevin diffusion and its discrete time versions. Since the auxiliary dynamics evolves slower than the original Langevin dynamics, or more precisely at the same speed as that of the SGHCM, our upper bounds do not accumulate errors and are independent from the number of iterations.

Proof.

For each , we define

Let be -valued random variables satisfying Assumption 2.6. For , we recursively define , and

(22)
(23)

Let . For each , and for each , we set

(24)

For each , it holds by definition that and the triangle inequality implies for ,

and

(25)

Denote . By Assumption 2.4, the estimation continues as follows

(26)

Using (25), one obtains

(27)

noting that Therefore, the estimation in (26) continues as

Applying the discrete-time version of Grönwall’s lemma and taking squares, noting also that yield

where

(28)

Taking conditional expectation with respect to , the estimation becomes

Since the random variables