A Stochastic Line Search Method with Convergence Rate Analysis

Courtney Paquette Department of Industrial and Systems Engineering, Lehigh University, Harold S. Mohler Laboratory, 200 West Packer Avenue, Bethlehem, PA 18015-1582, USA. cop318@lehigh.edu. The work of this author was partially supported by NSF TRIPODS Grant 17-40796 and DMS 18-03289.    Katya Scheinberg Department of Industrial and Systems Engineering, Lehigh University, Harold S. Mohler Laboratory, 200 West Packer Avenue, Bethlehem, PA 18015-1582, USA. katyas@lehigh.edu. The work of this author was partially supported by NSF Grants CCF 16-18717 and TRIPODS 17-40796, and DARPA Lagrange award HR-001117S0039.
Abstract

For deterministic optimization, line-search methods augment algorithms by providing stability and improved efficiency. We adapt a classical backtracking Armijo line-search to the stochastic optimization setting. While traditional line-search relies on exact computations of the gradient and of the objective function values, our method assumes that these values are available up to some dynamically adjusted accuracy that holds with some sufficiently large, but fixed, probability. We show that the expected number of iterations to reach a near-stationary point matches the worst-case efficiency of typical first-order methods, while for convex and strongly convex objectives it achieves the rates of deterministic gradient descent in function values.

1 Introduction

In this paper we consider the classical stochastic optimization problem

(1.1)   $\min_{x \in \mathbb{R}^n} \ \phi(x) = \mathbb{E}_{\xi}\big[ f(x, \xi) \big],$

where $\xi$ is a random variable obeying some distribution. In the case of empirical risk minimization with a finite training set, $\xi$ is a random variable defined by a single random sample drawn uniformly from the training set. More generally, $\xi$ may represent a sample or a set of samples drawn from the data distribution.

The most widely used method for solving (1.1) is stochastic gradient descent (SGD) [16]. Due to its low iteration cost, SGD is often preferred to the standard gradient descent (GD) method for empirical risk minimization. Despite the prevalent use of SGD, it has known challenges and inefficiencies. First, the stochastic direction may not be a descent direction, and second, the method is sensitive to the step-size (learning rate), which is often poorly chosen. Various authors have attempted to address this last issue, see [8, 10, 12, 13]. Motivated by these facts, we turn to the classical deterministic approach for adaptively selecting step sizes: GD with Armijo back-tracking line-search.

Related work.

In general, GD with back-tracking requires computing a full gradient and a full function evaluation - operations that are too expensive for the general problem (1.1). On the other hand, the per-iteration convergence rate of GD is superior to that of SGD, making it an attractive alternative. Several works have attempted to transfer ideas from deterministic GD to the stochastic setting, with the intent of diminishing the gradient computation, by using dynamic gradient sampling, e.g. [5, 9, 11]. However, these works address only the convex setting. Moreover, for them to obtain convergence rates matching those of GD in expectation, a small constant step-size must be known in advance, and the sample size needs to be increased at a prescribed rate, thus decreasing the variance of the gradient estimates. Recently, in [4] an adaptive sample size selection strategy was proposed, where the sample size is selected based on the reduction of the gradient (and is not prescribed). For convergence rates to be derived, however, an assumption has to be made that these sample sizes can be selected based on the size of the true gradient, which is, of course, unknown. In [18] a second-order method that subsamples the gradient and Hessian is proposed; however, the sample size is simply assumed to be sufficiently large, so that, essentially, the method behaves as a deterministic inexact method with high probability.

In [4] and [9] a practical back-tracking line search is proposed, combined with their sample size selection. In both cases the backtracking is based on an Armijo condition applied to function estimates that are computed on the same batch as the gradient estimates, and is essentially a heuristic. A very different type of line search, based on a probabilistic Wolfe condition, is proposed in [14]; however, it aims at improving step size selection for SGD and has no theoretical guarantees.

Our contribution.

In this work we propose an adaptive backtracking line-search method, where the sample sizes for the gradient and function estimates are chosen adaptively, using knowable quantities, along with the step-size. We show that this method converges to the optimal solution with probability one and derive strong convergence rates that match those of deterministic gradient descent methods in the nonconvex, convex, and strongly convex cases. This paper offers the first stochastic line search method with convergence rate analysis, and is the first to provide convergence rate analysis for adaptive sample size selection based on knowable quantities.

Background.

There are many types of (deterministic) line-search methods, see [15, Chapter 3], but all share a common philosophy. First, at each iteration, the method computes a search direction $d_k$, e.g. the negative gradient or a (quasi-)Newton direction. Next, it determines how far to move along this direction by examining the univariate function $\alpha \mapsto \phi(x_k + \alpha d_k)$ to find the stepsize $\alpha_k$. Typical line-searches try out a sequence of potential values for the stepsize, accepting one once some verifiable criterion becomes satisfied. One popular line-search criterion specifies that an acceptable step length should give sufficient decrease in the objective function $\phi$:

(1.2)   $\phi(x_k + \alpha d_k) \le \phi(x_k) + \theta \alpha \nabla\phi(x_k)^{T} d_k,$

where the constant $\theta \in (0,1)$ is chosen by the user and $d_k$ is a descent direction, i.e. $\nabla\phi(x_k)^{T} d_k < 0$. Larger step sizes imply larger gains towards optimality and lead to fewer overall iterations. When step sizes become too small, little or no progress is made and the algorithm stagnates. A popular way to systematically search over stepsizes, while simultaneously preventing them from becoming too small, is backtracking. Backtracking starts with an overestimate of the stepsize and decreases it until (1.2) becomes true. Our exposition concerns a stochastic version of backtracking that uses a stochastic gradient estimate as the search direction and stochastic function estimates in (1.2). In the remainder of the paper, all random quantities will be denoted by capitalized letters and their respective realizations by the corresponding lower case letters.
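For reference, the deterministic backtracking procedure can be sketched as follows (a minimal Python sketch, using the negative gradient as the search direction; the function names and constants are our illustrative choices, not part of the paper):

```python
import numpy as np

def backtracking_armijo(phi, grad_phi, x, alpha0=1.0, theta=0.1, gamma=0.5, max_backtracks=50):
    """Deterministic Armijo backtracking along the negative gradient direction.

    Shrinks alpha by the factor gamma until the sufficient-decrease
    condition (1.2) with d = -grad_phi(x) holds.
    """
    g = grad_phi(x)
    alpha = alpha0
    for _ in range(max_backtracks):
        # Armijo condition (1.2) with d = -g: phi(x - alpha*g) <= phi(x) - theta*alpha*||g||^2
        if phi(x - alpha * g) <= phi(x) - theta * alpha * np.dot(g, g):
            break
        alpha *= gamma
    return x - alpha * g, alpha

# Example usage on a simple quadratic
phi = lambda x: 0.5 * np.dot(x, x)
grad_phi = lambda x: x
x_new, alpha = backtracking_armijo(phi, grad_phi, np.array([3.0, -4.0]))
```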

2 Stochastic back-tracking line search method

We present here our main algorithm for GD with back-tracking line search. We impose the standard assumption on the objective function.

Assumption 2.1.

We assume that all iterates $x_k$ of Algorithm 1 satisfy $x_k \in \Omega$, where $\Omega$ is a set in $\mathbb{R}^n$. Moreover, the gradient of $\phi$ is $L$-Lipschitz continuous for all $x \in \Omega$, and $\phi$ is bounded below on $\Omega$.
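A standard consequence of the $L$-Lipschitz continuity of the gradient, used repeatedly in the analysis of Section 4 (stated here for reference, for points on segments where the Lipschitz property holds), is the descent inequality

\[
\phi(y) \;\le\; \phi(x) + \nabla\phi(x)^{T}(y - x) + \frac{L}{2}\,\|y - x\|^{2}.
\]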

2.1 Outline of method

At each iteration, our scheme computes a random direction $g_k$ via, e.g., a minibatch stochastic gradient estimate, or by sampling the function itself and using finite differences. Then, we compute stochastic function estimates $f_k^0$ and $f_k^s$ at the current iterate $x_k$ and the prospective new iterate $x_k - \alpha_k g_k$, respectively. We check the Armijo condition [1] using these stochastic estimates:

(2.1)   $f_k^{s} \le f_k^{0} - \alpha_k \theta \|g_k\|^{2}.$

If (2.1) holds, the next iterate becomes $x_{k+1} = x_k - \alpha_k g_k$ and the stepsize is increased; otherwise $x_{k+1} = x_k$ and the stepsize is decreased, as is typical in (deterministic) back-tracking line searches.

Algorithm 1 describes our method (we state the algorithm using lower case notation to represent a realization of the algorithm). Unlike classical back-tracking line search, there is an additional control parameter, which serves as a guess of the true function decrease and controls the accuracy of the function estimates. We discuss this further next.

Initialization: Choose constants $\gamma > 1$, $\theta \in (0,1)$ and $\alpha_{\max} > 0$. Pick an initial point $x_0$, an initial stepsize $\alpha_0 \le \alpha_{\max}$, and an initial value of the control parameter.
Repeat for $k = 0, 1, 2, \ldots$
  1. Compute a gradient estimate. Based on $x_k$, compute a gradient estimate $g_k$. Set the trial step $s_k = -\alpha_k g_k$.

  2. Compute function estimates. Based on $x_k$, $\alpha_k$ and $g_k$, obtain estimates $f_k^0$ and $f_k^s$ of $\phi(x_k)$ and $\phi(x_k + s_k)$, respectively.

  3. Check sufficient decrease:

     (2.2)   $f_k^{s} \le f_k^{0} - \alpha_k \theta \|g_k\|^{2}.$

  • Successful step

    If (2.2) holds, set $x_{k+1} = x_k + s_k$ and $\alpha_{k+1} = \min\{\gamma \alpha_k, \alpha_{\max}\}$.
    • Reliable step: If the predicted decrease $\alpha_k \|g_k\|^{2}$ is at least as large as the control parameter, then increase the control parameter.

    • Unreliable step: If the predicted decrease is smaller than the control parameter, then decrease the control parameter.

  • Unsuccessful step

    Otherwise, set $x_{k+1} = x_k$, $\alpha_{k+1} = \gamma^{-1}\alpha_k$, and decrease the control parameter.

    Algorithm 1 Line search method
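The following is a minimal Python sketch of one possible instantiation of Algorithm 1 with fixed-size minibatch estimates (in practice the sample sizes should be chosen adaptively, as discussed in Section 2.3; the constants, the control-parameter update by the factor gamma, and all names are our illustrative assumptions, not the paper's prescription):

```python
import numpy as np

def stochastic_line_search(sample_loss, sample_grad, x0, n_iters=200,
                           alpha0=1.0, alpha_max=10.0, theta=0.1, gamma=2.0,
                           delta0=1.0, batch_size=32):
    """Sketch of Algorithm 1 with fixed-size minibatch estimates.

    sample_loss(x, n): average of n i.i.d. samples of f(x, xi).
    sample_grad(x, n): average of n i.i.d. samples of grad_x f(x, xi).
    `delta` plays the role of the control parameter (our naming).
    """
    x, alpha, delta = np.asarray(x0, dtype=float), alpha0, delta0
    for _ in range(n_iters):
        g = sample_grad(x, batch_size)              # step 1: gradient estimate
        s = -alpha * g                              # trial step
        f0 = sample_loss(x, batch_size)             # step 2: function estimates
        fs = sample_loss(x + s, batch_size)
        pred = alpha * np.dot(g, g)                 # predicted decrease alpha*||g||^2
        if fs <= f0 - theta * pred:                 # step 3: stochastic Armijo check (2.2)
            x, alpha = x + s, min(gamma * alpha, alpha_max)             # successful step
            delta = gamma * delta if pred >= delta else delta / gamma   # reliable / unreliable
        else:
            alpha, delta = alpha / gamma, delta / gamma                 # unsuccessful step
    return x

# Example: noisy quadratic phi(x) = 0.5*||x||^2
rng = np.random.default_rng(0)
sample_loss = lambda x, n: 0.5 * float(x @ x) + rng.normal(scale=0.01, size=n).mean()
sample_grad = lambda x, n: x + rng.normal(scale=0.01, size=(n, x.size)).mean(axis=0)
x_final = stochastic_line_search(sample_loss, sample_grad, x0=[5.0, -3.0])
```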

    Challenges with randomized line-search.

    Due to the stochasticity of the gradient and/or function values, two major challenges result:

    • a series of erroneous unsuccessful steps may cause the stepsize to become arbitrarily small;

    • steps may falsely satisfy (2.1), leading to an objective value at the next iterate that is arbitrarily larger than at the current iterate.

    Convergence proofs for deterministic line searches rely on the fact that neither of the above problems arises. Our approach controls the probability with which the random gradients and function values are representative of their true counterparts. When this probability is large enough, the method tends to make successful steps whenever the stepsize is sufficiently small; hence the stepsize behaves like a random walk with an upward drift, thus staying away from zero.

    Yet, even when the probability of good gradient and function estimates is near 1, a decrease of the objective is not guaranteed to hold at each iteration, due to the second issue - a possible arbitrary increase of the objective. Since the random gradient may not be representative of the true gradient, the function estimate accuracy, and thus the expected improvement, needs to be controlled by a different quantity - the control parameter. When the predicted decrease in the true function matches the expected function estimate accuracy, we call the step reliable and increase the control parameter for the next iteration; otherwise our prediction does not match the expectation and we decrease the control parameter.

    Moreover, unlike the typical stochastic convergence rate analysis, which bounds the expected improvement in either the gradient norm or the optimality gap after a given number of iterations, our convergence rate analysis bounds the total expected number of steps that the algorithm takes before the gradient norm or the optimality gap becomes small. Our results rely on a stochastic process framework introduced and analyzed in [3] to provide convergence rates for a stochastic trust region method.
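    Concretely, in the nonconvex case the quantity we bound is the expected value of a hitting time of the form (written in our notation; the convex and strongly convex cases use the optimality gap in place of the gradient norm)

\[
T_{\varepsilon} \;=\; \min\{\, k \ge 0 \;:\; \|\nabla\phi(x_k)\| \le \varepsilon \,\},
\]

and the analysis bounds $\mathbb{E}[T_{\varepsilon}]$.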

    2.2 Random gradient and function estimates

    Overview.

    At each iteration, we compute a stochastic gradient and stochastic function values. With a prescribed probability, the random direction is close to the true gradient. We measure the closeness, or accuracy, of the random direction using the current step length, which is a known quantity. This procedure naturally adapts the required accuracy as the algorithm progresses. As the steps get shorter (i.e. either the gradient estimate gets smaller or the step-size parameter does), we require the accuracy to increase, but the probability of encountering a good gradient at any iteration stays the same.

    A similar procedure applies to the function estimates $f_k^0$ and $f_k^s$. The accuracy of the function estimates, relative to the true function values at the points $x_k$ and $x_k + s_k$, is tied to the size of the step. At each iteration, there is a fixed probability of obtaining good function estimates. By choosing the probabilities of good gradient and function estimates to be large enough, we show that Algorithm 1 converges. To formalize this procedure, we introduce the following.

    Notation and definitions.

    Algorithm 1 generates a random process; in what follows we denote all random quantities by capital letters and their realizations by the corresponding lower case letters. Hence the random gradient estimate is denoted by $G_k$ and its realization by $g_k$. Similarly, capital letters denote the random iterates, stepsizes, control parameters and steps, with realizations $x_k$, $\alpha_k$ and $s_k$ for the iterates, stepsizes and steps. Likewise, capital letters denote the random estimates of $\phi(x_k)$ and $\phi(x_k + s_k)$, with realizations $f_k^0$ and $f_k^s$. Our goal is to show that, under some assumptions on the gradient and function estimates, the resulting stochastic process converges with probability one and at an appropriate rate. In particular, we assume that the estimates are sufficiently accurate with sufficiently high probability, conditioned on the past.

    To formalize the conditioning on the past, we use two $\sigma$-algebras per iteration: one generated by the gradient and function estimates computed up to and including iteration $k-1$, and one generated, in addition, by the gradient estimate of iteration $k$. For completeness, the initial $\sigma$-algebra is taken to be trivial. These $\sigma$-algebras form a filtration. By construction of the random variables in Algorithm 1, the iterate and the stepsize at iteration $k$ are measurable with respect to the $\sigma$-algebra generated by the estimates of the first $k-1$ iterations.

    We measure the accuracy of the gradient estimates and of the function estimates using the following definitions.

    Definition 2.2.

    We say that a sequence of random directions is -probabilistically -sufficiently accurate for Algorithm 1 for the corresponding sequence , if there exists a constant , such that the events

    satisfy the following conditions (given a measurable set $A$, we use $\mathbb{1}_A$ to denote the indicator function of $A$; $\mathbb{1}_A(\omega) = 1$ if $\omega \in A$ and $\mathbb{1}_A(\omega) = 0$ otherwise).

    In addition to sufficiently accurate gradients, we require the estimates of the function values at the current and prospective iterates to also be sufficiently accurate.

    Definition 2.3.

    A sequence of random estimates is said to be -probabilistically -accurate with respect to the corresponding sequence if the events

    satisfy the condition

    We note here that the conditioning $\sigma$-algebra includes the current iterate, stepsize and gradient estimate; hence the accuracy of the function estimates is measured with respect to fixed quantities. Next, we state the key assumption on the nature of the stochastic information in Algorithm 1.
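    For concreteness, a representative way in which such accuracy events can be instantiated (our reconstruction, based on the step-length-scaled accuracy described above and on similar definitions in [2, 6, 7]; here $\mathcal{A}_k$ denotes the random stepsize, $S_k$ the random step, and the constants $\kappa_g$, $\epsilon_f$ and probabilities $p_g$, $p_f$ are illustrative) is

\[
\mathbb{P}\Big(\|G_k - \nabla\phi(X_k)\| \le \kappa_g\, \mathcal{A}_k \|G_k\| \;\Big|\; \text{past}\Big) \;\ge\; p_g,
\]
\[
\mathbb{P}\Big(|F_k^0 - \phi(X_k)| \le \epsilon_f\, \mathcal{A}_k^2 \|G_k\|^2 \ \text{and}\ |F_k^s - \phi(X_k + S_k)| \le \epsilon_f\, \mathcal{A}_k^2 \|G_k\|^2 \;\Big|\; \text{past}\Big) \;\ge\; p_f.
\]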

    Assumption 2.4.

    The following hold for the quantities in the algorithm:

    1. The sequence of random gradients generated by Algorithm 1 is -probabilistically -sufficiently accurate for some sufficiently large .

    2. The sequence of estimates generated by Algorithm 1 is -probabilistically -accurate estimates for some and sufficiently large .

    3. The sequence of estimates generated by Algorithm 1 satisfies a variance condition for all $k \ge 0$ (we implicitly assume that the function estimates are integrable for all $k$; it is then straightforward to deduce that the corresponding estimation errors are integrable as well),

      (2.3)
      and

    A simple calculation shows that under Assumption 2.4 the following hold

    Remark 1.

    We are interested in deriving convergence results for the case when the gradient-accuracy constant may be large. For the rest of the exposition we assume, without loss of generality, that it is at least one; it is clear that if it happens to be smaller, somewhat better bounds than the ones we derive here will result, since the gradient estimates give tighter approximations of the true gradient. We are also interested in deriving bounds for the case when the variance of the function estimates is large. Equation (2.3) includes the maximum of two terms, one of which is unknown. When one possesses external knowledge of this unknown quantity, one could use its value; this is particularly useful when it is big, since it then allows large variance in the function estimates, implying that this variance does not have to be driven to zero before the algorithm reaches a desired accuracy. Yet, for convergence, and since a useful lower bound on this quantity may be unknown, we include the control parameter as a way to adaptively control the variance. As such it should be small; in fact, it can be set equal to the target accuracy. The analysis can be performed for other values of the above constants - the choices here are for simplicity and convenience.

    This assumption on the accuracy of the gradient and function estimates is key to our convergence rate analysis. We derive specific bounds on the probabilities of accurate estimates under which these rates hold. We note here that under a stronger accuracy condition on the function estimates, Assumption 2.4(iii) is not needed and that condition alone is sufficient for the convergence results; this case can be considered an extension of the results in [6]. Before concluding this section, we state a result showing the relationship between the variance assumption on the function values and the probability of inaccurate estimates.

    Lemma 2.5.

    Let Assumption 2.4 hold. Suppose the random process is generated by Algorithm 1 and the function estimates are probabilistically accurate. Then for every iteration $k$ we have

    and
    Proof.

    We show the result for the estimate at the current iterate; the proof for the estimate at the prospective iterate is the same. Using Hölder's inequality for conditional expectations, we deduce

    The result follows after bounding the last factor using the variance condition (2.3). ∎
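    To illustrate the argument (in our notation, with $E_k$ denoting the estimation error, $I_k$ the event that the estimate is inaccurate, and $\mathcal{F}$ the conditioning $\sigma$-algebra), the key step is of Cauchy-Schwarz type:

\[
\mathbb{E}\big[\mathbb{1}_{I_k}\, |E_k| \mid \mathcal{F}\big]
\;\le\; \sqrt{\mathbb{E}\big[\mathbb{1}_{I_k} \mid \mathcal{F}\big]}\,\sqrt{\mathbb{E}\big[|E_k|^2 \mid \mathcal{F}\big]}
\;\le\; \sqrt{1 - p_f}\,\sqrt{\mathbb{E}\big[|E_k|^2 \mid \mathcal{F}\big]},
\]

after which the last factor is bounded by the variance condition (2.3).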

    2.3 Computing the gradient and function estimates to satisfy Assumption 2.4

    Assuming that the variance of the random function and gradient realizations is bounded, i.e., for all $x$, $\mathbb{E}_{\xi}\big[\|\nabla f(x,\xi) - \nabla\phi(x)\|^{2}\big] \le V_g$ and $\mathbb{E}_{\xi}\big[|f(x,\xi) - \phi(x)|^{2}\big] \le V_f$,

    Assumption 2.4 can be made to hold if the estimates $g_k$, $f_k^0$ and $f_k^s$ are computed using a sufficient number of samples. In particular, let $\mathcal{S}_k$ be a sample of realizations of $\xi$, and let $g_k$ be the corresponding minibatch gradient estimate. By using results, e.g., in [18, 19], we can show that if

    (2.4)   $|\mathcal{S}_k| \ \ge\ \tilde{O}\!\left( \dfrac{V_g}{\alpha_k^{2}\,\|g_k\|^{2}} \right)$

    (where $\tilde{O}(\cdot)$ hides the constants and a logarithmic factor in the required probability), then Assumption 2.4(i) is satisfied. While $\|g_k\|$ is not known when $\mathcal{S}_k$ is chosen, one can design a simple loop by guessing the value of $\|g_k\|$ and increasing the number of samples until (2.4) is satisfied; this procedure is discussed in [6]. Similarly, to satisfy Assumption 2.4(ii), it is sufficient to compute $f_k^0$ with a number of samples proportional to $V_f$ divided by the square of the required function-estimate accuracy

    (where, again, a logarithmic factor is hidden), and to obtain $f_k^s$ analogously. Finally, it is easy to see that Assumption 2.4(iii) is satisfied with a similar number of samples, by standard properties of the variance of an average of i.i.d. samples.
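    A sketch of how such adaptive sample sizes could be implemented via the guess-and-check loop mentioned above (our illustration; the constant c_g, the doubling schedule and the function names are assumptions in the spirit of the discussion in [6], not a prescription from the paper):

```python
import numpy as np

def estimate_gradient_adaptive(grad_sample, x, alpha, Vg, c_g=1.0, n0=8, max_n=100_000):
    """Guess-and-check loop: grow the sample until its size is consistent with
    the accuracy requirement ||g - grad phi(x)|| ~ alpha*||g||, cf. (2.4).

    grad_sample(x, n) returns an (n, dim) array of i.i.d. gradient samples.
    """
    n = n0
    while True:
        g = grad_sample(x, n).mean(axis=0)
        # required sample size given the *current* estimate of ||g||
        n_required = int(np.ceil(c_g * Vg / max(alpha**2 * np.dot(g, g), 1e-12)))
        if n >= min(n_required, max_n):
            return g, n
        n = min(2 * n, max_n)   # increase the sample and re-estimate

# Example with a noisy gradient field grad phi(x) = x
grad_sample = lambda x, n: x + np.random.default_rng(1).normal(size=(n, np.size(x)))
g, n_used = estimate_gradient_adaptive(grad_sample, np.array([2.0, -1.0]), alpha=0.5, Vg=2.0)
```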

    We observe that:

    • unlike [5, 9], the number of samples for gradient and function estimation does not increase at any pre-defined rate, but is closely related to the progress of the algorithm. In particular, if the stepsize and the norm of the gradient estimate increase, then the sample set sizes can decrease.

    • Also, unlike [18], where the number of samples is simply chosen large enough a priori for all iterations, so that the right-hand side in Assumption 2.4(i) is bounded by a predefined accuracy, our algorithm can be applied without knowledge of that accuracy.

    • Finally, unlike [4], where the theoretical results require sample sizes that depend on the norm of the true gradient, which is unknown, our bounds on the sample set sizes all use knowable quantities, such as a bound on the variance and quantities computed by the algorithm.

    We also point out that the gradient-accuracy constant can be arbitrarily big, and that the required probability of accurate gradients depends only on the backtracking factor and does not have to be close to 1; hence the number of samples needed to satisfy Assumption 2.4(i) is moderate. On the other hand, the function-estimate accuracy will have to depend on the gradient-accuracy constant; hence a looser control of the gradient estimates results in a tighter control, i.e. larger sample sets, for the function estimates.

    Our last comment is that the gradient estimate does not have to be an unbiased estimate of the true gradient and does not need to be computed via gradient samples. Instead, it can be computed via stochastic finite differences, as is discussed, for example, in [7].
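    A minimal sketch of such a stochastic finite-difference gradient estimate (our illustration; the central-difference scheme, the step h and the sample sizes are assumptions, not the specific construction of [7]):

```python
import numpy as np

def finite_difference_gradient(sample_loss, x, h=1e-3, n_samples=64):
    """Estimate the gradient of phi(x) = E[f(x, xi)] by central differences,
    averaging n_samples function samples per evaluation point.

    sample_loss(x, n) returns the average of n i.i.d. samples of f(x, xi).
    """
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (sample_loss(x + e, n_samples) - sample_loss(x - e, n_samples)) / (2 * h)
    return g

# Example: noisy quadratic phi(x) = 0.5*||x||^2
sample_loss = lambda x, n: 0.5 * float(np.dot(x, x)) + np.random.default_rng(2).normal(scale=1e-3, size=n).mean()
g = finite_difference_gradient(sample_loss, [1.0, 2.0, -1.0])
```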

    3 Renewal-Reward Process

    In this section, we define a general random process introduced in [3] and its stopping time, which serve as a general framework for analyzing the behavior of the stochastic trust region method in [3] and of the stochastic line search in this paper. We state the relevant definitions, assumptions, and theorems and refer the reader to [3] for the proofs.

    Definition 3.1.

    Given a discrete-time stochastic process $\{Z_k\}$, a random variable $T$ is a stopping time for $\{Z_k\}$ if, for each $k$, the event $\{T = k\}$ belongs to the $\sigma$-algebra generated by $Z_0, \ldots, Z_k$.

    Let $\{\Phi_k, \mathcal{A}_k\}$ be a random process such that $\Phi_k \ge 0$ and $\mathcal{A}_k > 0$ for all $k$. Let us also define a biased random walk process $\{W_k\}$, defined on the same probability space as $\{\Phi_k, \mathcal{A}_k\}$. We denote by $\mathcal{F}_k$ the $\sigma$-algebra generated by the process up to and including iteration $k$. In addition, $W_k$ obeys the following dynamics:

    (3.1)   $\mathbb{P}\big(W_{k+1} = W_k + 1 \mid \mathcal{F}_k\big) = p, \qquad \mathbb{P}\big(W_{k+1} = W_k - 1 \mid \mathcal{F}_k\big) = 1 - p.$

    We define $\{T_{\varepsilon}\}$ to be a family of stopping times parameterized by $\varepsilon$. In [3] a bound on $\mathbb{E}[T_{\varepsilon}]$ is derived under the following assumption on the process.

    Assumption 3.2.

    The following hold for the process .

    1. is a constant. There exists a constant and (for some ) such that for all .

    2. There exists a constant for some and , such that, the following holds for all ,

      where satisfies (3.1) with .

    3. There exists a nondecreasing function and a constant such that

    Assumption 3.2 (iii) states that, conditioned on the event that the stopping time has not yet occurred and on the past, the random variable $\Phi_k$ decreases by a fixed amount at each iteration. Assumption 3.2 (ii) says that once $\mathcal{A}_k$ falls below a fixed constant, the sequence has a tendency to increase. Assumptions 3.2 (i) and (ii) together also ensure that this constant belongs to the sequence of values taken by $\mathcal{A}_k$. As we will see, this is a simple technical assumption that can be satisfied without loss of generality.

    Remark 2.

    Computational complexity (in deterministic methods) measures the number of iterations until an event such as the gradient norm or the optimality gap becoming small occurs, or, equivalently, the rate at which the gradient/function values decrease as a function of the iteration counter $k$. For randomized or stochastic methods, previous works tended to focus on the second notion, i.e. showing that the expected size of the gradient or of the function values decreases at a certain rate in $k$. Instead, here we bound the expected number of iterations until the size of the gradient or of the function values is small, which is the same as bounding the corresponding stopping times for a fixed accuracy $\varepsilon$.

    Remark 3.

    In the context of deterministic line search, when the stepsize falls below a constant inversely proportional to $L$, the Lipschitz constant of the gradient, the iterate always satisfies the sufficient decrease condition. Thus the stepsize never falls much below this constant. To match the dynamics behind deterministic line search, we expect the stepsize to stay above a comparable constant threshold. However, in the stochastic setting there is a positive probability of the stepsize being arbitrarily small. Theorem 3.3, below, is derived by observing that, on average, the stepsize is above this threshold frequently, due to the upward drift in the random walk process. Consequently, the expected change in $\Phi_k$ can be bounded by a negative fixed value (dependent on $\varepsilon$) frequently; thus we can derive a bound on $\mathbb{E}[T_{\varepsilon}]$.
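    The upward-drift intuition can be illustrated by a small simulation (purely illustrative; the multiplicative stepsize walk below mimics only the increase/decrease dynamics of the stepsize, not Algorithm 1 itself):

```python
import numpy as np

def simulate_stepsize_walk(p=0.7, gamma=2.0, alpha0=1.0, alpha_max=10.0, n_steps=10_000, seed=0):
    """Simulate a stepsize multiplied by gamma with probability p
    (a 'successful' iteration) and divided by gamma otherwise, capped at alpha_max.

    With p > 1/2 the walk has an upward drift, so the fraction of iterations
    spent below any fixed threshold stays small.
    """
    rng = np.random.default_rng(seed)
    alpha, below, threshold = alpha0, 0, 0.1
    for _ in range(n_steps):
        alpha = min(alpha * gamma, alpha_max) if rng.random() < p else alpha / gamma
        below += alpha < threshold
    return below / n_steps

print(simulate_stepsize_walk())   # small fraction of iterations below the threshold
```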

    The following theorem (Theorem 2.2 in [3]) bounds $\mathbb{E}[T_{\varepsilon}]$ in terms of the quantities appearing in Assumption 3.2.

    Theorem 3.3.

    Under Assumption 3.2, the expected stopping time $\mathbb{E}[T_{\varepsilon}]$ is bounded above by a quantity proportional to $\Phi_0 / h(\varepsilon)$, where $h$ is the nondecreasing function from Assumption 3.2(iii) and $\Phi_0$ is the initial value of the process, with a constant factor depending on $p$ and on the constants in Assumption 3.2.

    4 Convergence of Stochastic Line Search

    Our primary goal is to prove convergence of Algorithm 1 by showing a lim-inf convergence result, a.s. We note that typical convergence results for stochastic algorithms prove either high-probability results or that the expected gradient at an averaged point converges. Our result is slightly stronger than these, since we show that a subsequence of the true gradient norms converges to zero a.s. With this convergence result, stopping times based on the gradient norm and/or the optimality gap are finite almost surely. Our approach for the liminf proof is twofold: (1) we construct a function (defined in (4.6)) whose expected decrease per iteration is proportional to the stepsize times the squared norm of the true gradient, and (2) we show that the liminf of the step sizes is strictly larger than zero a.s.

    4.1 Useful results

    Before delving into the convergence statement and proof, we state some lemmas similar to those derived in [6, 2, 7].

    Lemma 4.1 (Accurate gradients lower bound on ).

    Suppose is -sufficiently accurate. Then

    Proof.

    The fact that the gradient estimate is sufficiently accurate, together with the triangle inequality, implies

    Lemma 4.2 (Accurate gradients and estimates imply a successful iteration).

    Suppose is -sufficiently accurate and are -accurate estimates. If

    then the trial step is successful. In particular, this means

    Proof.

    The $L$-smoothness of $\phi$ and the sufficiently accurate gradient estimate immediately yield

    Since the estimates are -accurate, we obtain

    The result follows by noting . ∎

    Lemma 4.3 (Good estimates imply decrease in function).

    Suppose and are -accurate estimates. If the trial step is successful, then the improvement in function value is

    (4.1)

    If, in addition, the step is reliable, then the improvement in function value is

    (4.2)
    Proof.

    The iterate is successful and the estimates are accurate so we conclude

    where the last inequality follows because . The condition immediately implies (4.1). By noticing holds for reliable steps, we deduce (4.2). ∎

    Lemma 4.4.

    Suppose the iterate is successful. Then

    In particular, the following inequality holds

    Proof.

    An immediate consequence of the $L$-smoothness of $\phi$ is that the gradient norm at the trial point exceeds the gradient norm at the current point by at most $L$ times the length of the step. The result follows from squaring both sides and applying the bound $(a+b)^2 \le 2a^2 + 2b^2$. To obtain the second inequality, we note that when the iterate is successful, $x_{k+1} = x_k + s_k$. ∎
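    Written out in our notation, with $s_k = -\alpha_k g_k$ the trial step, the two steps of this argument are

\[
\|\nabla\phi(x_k + s_k)\| \le \|\nabla\phi(x_k)\| + L\|s_k\|
\quad\Longrightarrow\quad
\|\nabla\phi(x_k + s_k)\|^2 \le 2\|\nabla\phi(x_k)\|^2 + 2L^2\alpha_k^2\|g_k\|^2,
\]

using $(a+b)^2 \le 2a^2 + 2b^2$ and $\|s_k\| = \alpha_k\|g_k\|$.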

    Lemma 4.5 (Accurate gradients and estimates imply decrease in function).

    Suppose is -sufficiently accurate and are -accurate estimates where . If the trial step is successful, then

    (4.3)

    In addition, if the trial step is reliable, then

    (4.4)
    Proof.

    Lemma 4.1 implies

    (4.5)

    We combine this result with Lemma 4.3 to conclude the first result. For the second result, since the step is reliable, equation (4.5) improves to

    and again the result follows from Lemma 4.3. ∎

    4.2 Definition and analysis of process for Algorithm 1

    We base our proof of convergence on properties of the random function

    (4.6)

    for some deterministic constant in $(0,1)$ and for all $k \ge 0$. The goal is to show that this function satisfies Assumption 3.2, in particular, that it is expected to decrease on each iteration. Due to inaccuracy in the function estimates and gradients, the algorithm may take a step that increases the objective and thus (4.6). We will show that such an increase is bounded by a value proportional to the stepsize times the squared norm of the gradient. On the other hand, as we will show, on a successful iteration with accurate function estimates the objective decreases proportionally to the same quantity, while on unsuccessful steps the change in (4.6) is always negative, because both quantities appearing alongside the objective in (4.6) are decreased. The constant is chosen to balance the potential increases and decreases in the objective against the changes inflicted by unsuccessful steps.

    Theorem 4.6.

    Let Assumptions 2.1 and 2.4 hold, and suppose the random process is generated by Algorithm 1. Then there exist probabilities and a constant such that the expected decrease in the function defined in (4.6) satisfies

    (4.7)

    In particular, the constant and probabilities satisfy

    (4.8)
    (4.9)
    (4.10)
    Upper bound on the change in (4.6), by case (each case occurring with the corresponding probability):

                          Accurate gradients,    Bad gradients,         Bad function
                          accurate functions     accurate functions     estimates
      Successful step     decrease               increase               increase
      Unsuccessful step   decrease               decrease               decrease