Improved Dynamic Regret for Non-degenerate Functions


Lijun Zhang (zhanglj@lamda.nju.edu.cn)
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China

Tianbao Yang (tianbao-yang@uiowa.edu)
Department of Computer Science, the University of Iowa, Iowa City, IA 52242, USA

Jinfeng Yi (jinfengy@us.ibm.com)
IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598, USA

Rong Jin (rongjin@cse.msu.edu)
Alibaba Group, Seattle, USA

Zhi-Hua Zhou (zhouzh@lamda.nju.edu.cn)
National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Abstract

Recently, there has been a growing research interest in the analysis of dynamic regret, which measures the performance of an online learner against a sequence of local minimizers. By exploiting the strong convexity, previous studies have shown that the dynamic regret can be upper bounded by the path-length of the comparator sequence. In this paper, we illustrate that the dynamic regret can be further improved by allowing the learner to query the gradient of the function multiple times, and meanwhile the strong convexity can be weakened to other non-degenerate conditions. Specifically, we introduce the squared path-length, which could be much smaller than the path-length, as a new regularity of the comparator sequence. When multiple gradients are accessible to the learner, we first demonstrate that the dynamic regret of strongly convex functions can be upper bounded by the minimum of the path-length and the squared path-length. We then extend our theoretical guarantee to functions that are semi-strongly convex or self-concordant. To the best of our knowledge, this is the first time that semi-strong convexity and self-concordance are utilized to tighten the dynamic regret.


Keywords: Dynamic regret, Gradient descent, Damped Newton method

1 Introduction

Online convex optimization is a fundamental tool for solving a wide variety of machine learning problems (Shalev-Shwartz, 2011). It can be formulated as a repeated game between a learner and an adversary. On the $t$-th round of the game, the learner selects a point $\mathbf{x}_t$ from a convex set $\Omega$ and the adversary chooses a convex function $f_t: \Omega \mapsto \mathbb{R}$. Then, the function is revealed to the learner, who incurs loss $f_t(\mathbf{x}_t)$. The standard performance measure is the regret, defined as the difference between the learner's cumulative loss and the cumulative loss of the optimal fixed vector in hindsight:

$$\text{Regret} = \sum_{t=1}^T f_t(\mathbf{x}_t) - \min_{\mathbf{x} \in \Omega} \sum_{t=1}^T f_t(\mathbf{x}). \qquad (1)$$

Over the past decades, various online algorithms, such as the online gradient descent (Zinkevich, 2003), have been proposed to yield sub-linear regret under different scenarios (Hazan et al., 2007; Shalev-Shwartz et al., 2007).

Though equipped with rich theories, the notion of regret fails to illustrate the performance of online algorithms in a dynamic setting, as a static comparator is used in (1). To overcome this limitation, there has been a recent surge of interest in analyzing a more stringent metric—dynamic regret (Hall and Willett, 2013; Besbes et al., 2015; Jadbabaie et al., 2015; Mokhtari et al., 2016; Yang et al., 2016), in which the cumulative loss of the learner is compared against a sequence of local minimizers, i.e.,

$$R(\mathbf{x}_1^*, \ldots, \mathbf{x}_T^*) = \sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \qquad (2)$$

where $\mathbf{x}_t^* \in \operatorname*{argmin}_{\mathbf{x} \in \Omega} f_t(\mathbf{x})$ is a minimizer of $f_t$ over $\Omega$. A more general definition of dynamic regret evaluates the difference of the cumulative loss with respect to any sequence of comparators $\mathbf{u}_1, \ldots, \mathbf{u}_T \in \Omega$ (Zinkevich, 2003).

It is well known that in the worst case, it is impossible to achieve a sub-linear dynamic regret bound, due to the arbitrary fluctuation in the functions. However, it is possible to upper bound the dynamic regret in terms of certain regularity of the comparator sequence or the function sequence. A natural regularity is the path-length of the comparator sequence, defined as

$$\mathcal{P}_T^* := \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|, \qquad (3)$$

which captures the cumulative Euclidean norm of the differences between successive comparators. For convex functions, the dynamic regret of online gradient descent can be upper bounded by $O(\sqrt{T}(1 + \mathcal{P}_T^*))$ (Zinkevich, 2003). And when all the functions are strongly convex and smooth, the upper bound can be improved to $O(\mathcal{P}_T^*)$ (Mokhtari et al., 2016).

In the aforementioned results, the learner uses the gradient of each function only once, and performs one step of gradient descent to update the intermediate solution. In this paper, we examine an interesting question: is it possible to improve the dynamic regret when the learner is allowed to query the gradient multiple times? Note that the answer to this question is no if one aims to improve the static regret in (1), according to the results on the minimax regret bound (Abernethy et al., 2008a). We show, however, that when it comes to the dynamic regret, multiple gradients can reduce the upper bound significantly. To this end, we introduce a new regularity—the squared path-length:

$$\mathcal{S}_T^* := \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|^2, \qquad (4)$$

which could be much smaller than $\mathcal{P}_T^*$ when the local variations are small. For example, when $\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\| = \Theta(1/\sqrt{T})$ for all $t$, we have $\mathcal{P}_T^* = \Theta(\sqrt{T})$ but $\mathcal{S}_T^* = \Theta(1)$. We advance the analysis of dynamic regret in the following aspects.
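To make the comparison concrete, here is a small illustration (ours, not from the paper) that computes the two regularities for a synthetic comparator sequence whose successive minimizers drift by $\Theta(1/\sqrt{T})$:

    import numpy as np

    def path_length(xs):
        # P_T^* = sum_{t=2}^T ||x_t^* - x_{t-1}^*||
        return sum(np.linalg.norm(xs[t] - xs[t - 1]) for t in range(1, len(xs)))

    def squared_path_length(xs):
        # S_T^* = sum_{t=2}^T ||x_t^* - x_{t-1}^*||^2
        return sum(np.linalg.norm(xs[t] - xs[t - 1]) ** 2 for t in range(1, len(xs)))

    T, d = 10000, 5
    rng = np.random.default_rng(0)
    steps = rng.normal(size=(T, d))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)  # unit drift directions
    minimizers = np.cumsum(steps / np.sqrt(T), axis=0)     # each move has norm 1/sqrt(T)

    print(path_length(minimizers))          # ~ (T-1)/sqrt(T) = Theta(sqrt(T))
    print(squared_path_length(minimizers))  # ~ (T-1)/T     = Theta(1)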

  • When all the functions are strongly convex and smooth, we propose to apply gradient descent multiple times in each round, and demonstrate that the dynamic regret is reduced from $O(\mathcal{P}_T^*)$ to $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$, provided the gradients of the minimizers are small. We further present a matching lower bound, which implies our result cannot be improved in general.

  • When all the functions are semi-strongly convex and smooth, we show that the standard online gradient descent still achieves the $O(\mathcal{P}_T^*)$ dynamic regret. And if we apply gradient descent multiple times in each round, the upper bound can also be improved to $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$, under the same condition as for strongly convex functions.

  • When all the functions are self-concordant, we establish a similar guarantee if both the gradient and Hessian of each function can be queried multiple times. Specifically, we propose to apply the damped Newton method multiple times in each round, and prove an $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$ bound of the dynamic regret under appropriate conditions. (Here $\mathcal{P}_T^*$ and $\mathcal{S}_T^*$ are modified slightly when functions are semi-strongly convex or self-concordant.)

Application to Statistical Learning

Most studies of dynamic regret, including this paper, do not make stochastic assumptions on the function sequence. In the following, we discuss how to interpret our results when facing the problem of statistical learning. In this case, the learner receives a sequence of losses $f_1, f_2, \ldots$, where $f_t$ measures the prediction error on instance-label pairs sampled from an unknown distribution. To avoid the random fluctuation caused by sampling, we can set $f_t$ as the loss averaged over a mini-batch of instance-label pairs. As a result, when the underlying distribution is stationary or drifts slowly, successive functions will be close to each other, and thus the path-length and the squared path-length are expected to be small.
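For concreteness, the sketch below (our own illustration; the helper name and the choice of logistic loss are assumptions, not the paper's prescription) builds such a mini-batch-averaged loss, which is strongly convex thanks to the regularizer:

    import numpy as np

    def make_round_loss(batch_X, batch_y, lam=0.01):
        # f_t: lam-strongly-convex regularized logistic loss averaged over
        # one mini-batch (labels in {-1, +1}); averaging damps sampling noise
        def f_t(w):
            margins = batch_y * (batch_X @ w)
            return np.mean(np.log1p(np.exp(-margins))) + 0.5 * lam * (w @ w)
        return f_t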

2 Related Work

The static regret in (1) has been extensively studied in the literature (Shalev-Shwartz, 2011). It has been established that the static regret can be upper bounded by $O(\sqrt{T})$, $O(\log T)$, and $O(d \log T)$ for convex functions, strongly convex functions, and exponentially concave functions, respectively, where $d$ is the dimensionality (Zinkevich, 2003; Hazan et al., 2007). Furthermore, those upper bounds are proved to be minimax optimal (Abernethy et al., 2008a; Hazan and Kale, 2011).

The notion of dynamic regret is introduced by Zinkevich (2003). If we choose online gradient descent as the learner, the dynamic regret with respect to any comparator sequence $\mathbf{u}_1, \ldots, \mathbf{u}_T$, i.e., $R(\mathbf{u}_1, \ldots, \mathbf{u}_T)$, is on the order of $\sqrt{T}(1 + \mathcal{P}_T)$, where $\mathcal{P}_T = \sum_{t=2}^T \|\mathbf{u}_t - \mathbf{u}_{t-1}\|$ is the path-length of the comparators. When a prior knowledge of $\mathcal{P}_T$ is available, the dynamic regret can be upper bounded by $O(\sqrt{T(1 + \mathcal{P}_T)})$ (Yang et al., 2016). If all the functions are strongly convex and smooth, the upper bound of $R(\mathbf{x}_1^*, \ldots, \mathbf{x}_T^*)$ can be improved to $O(\mathcal{P}_T^*)$ (Mokhtari et al., 2016). The $O(\mathcal{P}_T^*)$ rate is also achievable when all the functions are convex and smooth, and all the minimizers $\mathbf{x}_t^*$'s lie in the interior of $\Omega$ (Yang et al., 2016).

Another regularity of the comparator sequence, which is similar to the path-length, is defined as
$$\mathcal{P}'_T := \sum_{t=1}^{T-1} \|\mathbf{u}_{t+1} - \Phi_t(\mathbf{u}_t)\|,$$
where $\Phi_t(\cdot)$ is a dynamical model that predicts a reference point for the $(t+1)$-th round. The advantage of this measure is that when the comparator sequence follows the dynamical model closely, it can be much smaller than the path-length $\mathcal{P}_T$. A novel algorithm named dynamic mirror descent is proposed to take $\Phi_t$ into account, and its dynamic regret is on the order of $\sqrt{T}(1 + \mathcal{P}'_T)$ (Hall and Willett, 2013). There are also some regularities defined in terms of the function sequence, such as the functional variation (Besbes et al., 2015)

$$\mathcal{F}_T := \sum_{t=2}^T \max_{\mathbf{x} \in \Omega} |f_t(\mathbf{x}) - f_{t-1}(\mathbf{x})| \qquad (5)$$

or the gradient variation (Chiang et al., 2012)

$$\mathcal{G}_T := \sum_{t=2}^T \max_{\mathbf{x} \in \Omega} \|\nabla f_t(\mathbf{x}) - \nabla f_{t-1}(\mathbf{x})\|^2. \qquad (6)$$

Under the condition that $\mathcal{F}_T \le F_T$ for some budget $F_T$ that is given beforehand, a restarted online gradient descent is developed by Besbes et al. (2015), and the dynamic regret is upper bounded by $O(T^{2/3} (F_T + 1)^{1/3})$ and $O(\log T \sqrt{T (F_T + 1)})$ for convex functions and strongly convex functions, respectively.

The regularities mentioned above reflect different aspects of the learning problem, and are not directly comparable in general. Thus, it is appealing to develop an algorithm that adapts to the smaller regularity of the problem. Jadbabaie et al. (2015) propose an adaptive algorithm based on optimistic mirror descent (Rakhlin and Sridharan, 2013), such that the dynamic regret is given in terms of all three regularities ($\mathcal{P}'_T$, $\mathcal{F}_T$, and $\mathcal{G}_T$). However, it relies on the assumption that the learner can calculate each regularity incrementally.

In the setting of prediction with expert advice, the dynamic regret is also referred to as tracking regret or shifting regret (Herbster and Warmuth, 1998; Cesa-Bianchi et al., 2012). The path-length of the comparator sequence is named the shift, which is just the number of times the expert changes. Another related performance measure is the adaptive regret, which aims to minimize the static regret over any interval (Hazan and Seshadhri, 2007; Daniely et al., 2015). Finally, we note that the study of dynamic regret is similar to competitive analysis in the sense that both compete against an optimal offline policy, but with significant differences in their assumptions and techniques (Buchbinder et al., 2012).

3 Online Learning with Multiple Gradients

In this section, we discuss how to improve the dynamic regret by allowing the learner to query the gradient multiple times. We start with strongly convex functions, and then proceed to semi-strongly convex functions, and finally investigate self-concordant functions.

3.1 Strongly Convex and Smooth Functions

To be self-contained, we provide the definitions of strong convexity and smoothness.

Definition 1

A function $f: \Omega \mapsto \mathbb{R}$ is $\lambda$-strongly convex if
$$f(\mathbf{y}) \ge f(\mathbf{x}) + \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle + \frac{\lambda}{2} \|\mathbf{y} - \mathbf{x}\|^2, \quad \forall \mathbf{x}, \mathbf{y} \in \Omega.$$

Definition 2

A function $f: \Omega \mapsto \mathbb{R}$ is $L$-smooth if
$$f(\mathbf{y}) \le f(\mathbf{x}) + \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle + \frac{L}{2} \|\mathbf{y} - \mathbf{x}\|^2, \quad \forall \mathbf{x}, \mathbf{y} \in \Omega.$$

Example 1

The following functions are both strongly convex and smooth.

  1. A quadratic form $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x} + \mathbf{b}^\top \mathbf{x} + c$, where $\lambda I \preceq 2A \preceq L I$ (a numerical check follows this list);

  2. The regularized logistic loss $f(\mathbf{x}) = \log\bigl(1 + \exp(-\mathbf{b}^\top \mathbf{x})\bigr) + \frac{\lambda}{2} \|\mathbf{x}\|^2$, which is $\lambda$-strongly convex and $L$-smooth with $L = \lambda + \|\mathbf{b}\|^2 / 4$.
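The check below (ours) confirms the moduli for item 1 numerically: the Hessian of $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x} + \mathbf{b}^\top \mathbf{x} + c$ is the constant matrix $2A$, so $\lambda$ and $L$ are its extreme eigenvalues:

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.normal(size=(5, 5))
    A = M @ M.T + np.eye(5)          # symmetric positive definite
    hessian = 2 * A                  # Hessian of x^T A x + b^T x + c
    eigs = np.linalg.eigvalsh(hessian)
    lam, L = eigs.min(), eigs.max()  # f is lam-strongly convex and L-smooth
    print(lam, L)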

Following previous studies (Mokhtari et al., 2016), we make the following assumptions.

Assumption 1

Suppose the following conditions hold for each $f_t$.

  1. $f_t$ is $\lambda$-strongly convex and $L$-smooth over $\Omega$;

  2. $\|\nabla f_t(\mathbf{x})\| \le G$, $\forall \mathbf{x} \in \Omega$.

When the learner can query the gradient of each function only once, the most popular learning algorithm is online gradient descent:
$$\mathbf{x}_{t+1} = \Pi_\Omega\bigl(\mathbf{x}_t - \eta \nabla f_t(\mathbf{x}_t)\bigr),$$
where $\Pi_\Omega(\cdot)$ denotes the projection onto the nearest point in $\Omega$. Mokhtari et al. (2016) have established an $O(\mathcal{P}_T^*)$ bound of the dynamic regret, as stated below.
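A minimal sketch of this update (ours), instantiated for a Euclidean-ball domain so that the projection has a closed form:

    import numpy as np

    def project_ball(y, radius=1.0):
        # Pi_Omega for Omega = {x : ||x|| <= radius}: the nearest point in Omega
        n = np.linalg.norm(y)
        return y if n <= radius else (radius / n) * y

    def ogd_step(x_t, grad_t, eta):
        # x_{t+1} = Pi_Omega(x_t - eta * grad f_t(x_t))
        return project_ball(x_t - eta * grad_t)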

Theorem 2

Suppose Assumption 1 is true. By setting $\eta \le 1/L$ in online gradient descent, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \frac{G}{1 - \gamma}\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\| + \mathcal{P}_T^*\Bigr),$$
where $\gamma = \sqrt{1 - 2\eta\lambda + \eta^2 \lambda L} < 1$.

0:  The number of inner iterations $K$ and the step size $\eta$
1:  Let $\mathbf{x}_1$ be any point in $\Omega$
2:  for $t = 1, \ldots, T$ do
3:     Submit $\mathbf{x}_t$ and receive the loss $f_t(\cdot)$
4:     $\mathbf{z}_t^1 = \mathbf{x}_t$
5:     for $j = 1, \ldots, K$ do
6:        $\mathbf{z}_t^{j+1} = \Pi_\Omega\bigl(\mathbf{z}_t^j - \eta \nabla f_t(\mathbf{z}_t^j)\bigr)$
7:     end for
8:     $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$
9:  end for
Algorithm 1 Online Multiple Gradient Descent (OMGD)

We now consider the setting where the learner can access the gradient of each function multiple times. The algorithm is a natural extension of online gradient descent that performs gradient descent multiple times in each round. Specifically, in the $t$-th round, given the current solution $\mathbf{x}_t$, we generate a sequence of solutions $\mathbf{z}_t^1, \ldots, \mathbf{z}_t^{K+1}$, where $K$ is a constant independent of $T$, as follows:
$$\mathbf{z}_t^1 = \mathbf{x}_t, \qquad \mathbf{z}_t^{j+1} = \Pi_\Omega\bigl(\mathbf{z}_t^j - \eta \nabla f_t(\mathbf{z}_t^j)\bigr), \quad j = 1, \ldots, K.$$
Then, we set $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$. The procedure is named Online Multiple Gradient Descent (OMGD) and is summarized in Algorithm 1.
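The following compact Python sketch of Algorithm 1 is our own; grad_f (a list of gradient oracles) and project (an implementation of $\Pi_\Omega$) are assumed to be supplied by the caller:

    def omgd(grad_f, project, x1, eta, K, T):
        # Online Multiple Gradient Descent (Algorithm 1).
        # grad_f[t](x) returns grad f_t(x); project(y) returns Pi_Omega(y).
        x = x1
        played = []
        for t in range(T):
            played.append(x)      # submit x_t; f_t is then revealed
            z = x                 # z_t^1 = x_t
            for _ in range(K):    # K projected gradient steps on f_t
                z = project(z - eta * grad_f[t](z))
            x = z                 # x_{t+1} = z_t^{K+1}
        return played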

By applying gradient descent multiple times, we are able to extract more information from each function and therefore are more likely to obtain a tight bound for the dynamic regret. The following theorem shows that the multiple accesses of the gradient indeed help improve the dynamic regret.

Theorem 3

Suppose Assumption 1 is true. By setting $\eta \le 1/L$ and $K = \lceil \ln 2 / \ln(1/\gamma) \rceil$ in Algorithm 1, for any constant $\alpha > 0$, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \min\left\{ \frac{G}{1 - \gamma^K}\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\| + \mathcal{P}_T^*\Bigr),\; \frac{1}{2\alpha}\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2 + (\alpha + L)\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\|^2 + 2\mathcal{S}_T^*\Bigr) \right\},$$
where $\gamma$ is the contraction factor defined in Lemma 5.

When $\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2$ is small, Theorem 3 can be simplified as follows.

Corollary 4

Suppose $\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2 = O\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*)\bigr)$; from Theorem 3, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) = O\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*)\bigr).$$
In particular, if $\mathbf{x}_t^*$ belongs to the relative interior of $\Omega$ (i.e., $\nabla f_t(\mathbf{x}_t^*) = 0$) for all $t$, Theorem 3, as $\alpha \to 0$, implies
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \min\left\{ \frac{G}{1 - \gamma^K}\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\| + \mathcal{P}_T^*\Bigr),\; L\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\|^2 + 2\mathcal{S}_T^*\Bigr) \right\}.$$

Compared to Theorem 2, the proposed OMGD improves the dynamic regret from $O(\mathcal{P}_T^*)$ to $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$ when the gradients of the minimizers are small. Recall the definitions of $\mathcal{P}_T^*$ and $\mathcal{S}_T^*$ in (3) and (4), respectively. We can see that $\mathcal{S}_T^*$ introduces a square when measuring the difference between $\mathbf{x}_t^*$ and $\mathbf{x}_{t-1}^*$. In this way, if the local variations ($\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|$'s) are small, $\mathcal{S}_T^*$ can be significantly smaller than $\mathcal{P}_T^*$, as indicated below.

Example 2

Suppose $\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\| = T^{-\tau}$ for all $t \ge 2$ and some constant $\tau \ge 0$; we have
$$\mathcal{P}_T^* = \Theta(T^{1-\tau}) \quad \text{and} \quad \mathcal{S}_T^* = \Theta(T^{1-2\tau}).$$
In particular, when $\tau = 1/2$, we have $\mathcal{P}_T^* = \Theta(\sqrt{T})$ but $\mathcal{S}_T^* = \Theta(1)$.

$\mathcal{S}_T^*$ is also closely related to the gradient variation in (6). When all the $\mathbf{x}_t^*$'s belong to the relative interior of $\Omega$, we have $\nabla f_t(\mathbf{x}_t^*) = 0$ for all $t$ and therefore
$$\mathcal{G}_T \ge \sum_{t=2}^T \bigl\|\nabla f_t(\mathbf{x}_{t-1}^*) - \nabla f_{t-1}(\mathbf{x}_{t-1}^*)\bigr\|^2 = \sum_{t=2}^T \bigl\|\nabla f_t(\mathbf{x}_{t-1}^*) - \nabla f_t(\mathbf{x}_t^*)\bigr\|^2 \ge \lambda^2 \sum_{t=2}^T \|\mathbf{x}_{t-1}^* - \mathbf{x}_t^*\|^2 = \lambda^2 \mathcal{S}_T^*, \qquad (7)$$
where the last inequality follows from the property of strongly convex functions (Nesterov, 2004). The following corollary is an immediate consequence of Theorem 3 and the inequality in (7).

Corollary 5

Suppose Assumption 1 is true, and further assume all the $\mathbf{x}_t^*$'s belong to the relative interior of $\Omega$. By setting $\eta \le 1/L$ and $K = \lceil \ln 2 / \ln(1/\gamma) \rceil$ in Algorithm 1, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) = O\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*, \mathcal{G}_T)\bigr).$$

In Theorem 3, the number of accesses of gradients $K$ is set to be a constant depending on the condition number $L/\lambda$ of the function. One may ask whether we can obtain a tighter bound by using a larger $K$. Unfortunately, according to our analysis, even if we take $K = \infty$, which means $f_t$ is minimized exactly, the upper bound can only be improved by a constant factor and the order remains the same. A related question is whether we can reduce the value of $K$ by adopting more advanced optimization techniques, such as the accelerated gradient descent (Nesterov, 2004). This is an open problem to us, and will be investigated in future work.
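Under our reconstruction of the contraction factor $\gamma = \sqrt{1 - 2\eta\lambda + \eta^2\lambda L}$ from Lemma 5 (the paper's exact constant may differ), the constant number of inner iterations that halves the distance to the minimizer each round can be computed as follows:

    import math

    def inner_iterations(lam, L, eta=None):
        # Smallest K with gamma^K <= 1/2, gamma being the per-step contraction
        # factor of projected gradient descent (our reconstruction of Lemma 5)
        eta = 1.0 / L if eta is None else eta
        gamma = math.sqrt(1 - 2 * eta * lam + eta ** 2 * lam * L)
        return math.ceil(math.log(2) / math.log(1 / gamma))

    print(inner_iterations(lam=0.1, L=1.0))  # grows with the condition number L/lam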

Finally, we prove that the $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$ bound is optimal for strongly convex and smooth functions.

Theorem 6

For any online learning algorithm $A$, there always exists a sequence of strongly convex and smooth functions $f_1, \ldots, f_T$ such that
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) = \Omega\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*)\bigr),$$
where $\mathbf{x}_1, \ldots, \mathbf{x}_T$ is the sequence of solutions generated by $A$.

Thus, the upper bound in Theorem 3 cannot be improved in general.

3.2 Semi-strongly Convex and Smooth Functions

During the analysis of Theorems 2 and 3, we realize that the proof is built upon the fact that “when the function is strongly convex and smooth, gradient descent can reduce the distance to the optimal solution by a constant factor” (Mokhtari et al., 2016, Proposition 2). From the recent developments in convex optimization, we know that a similar behavior also happens when the function is semi-strongly convex and smooth (Necoara et al., 2015, Theorem 5.2), which motivates the study in this section.

We first introduce the definition of semi-strong convexity (Gong and Ye, 2014).

Definition 3

A function $f$ is semi-strongly convex over $\Omega$ if there exists a constant $\beta > 0$ such that for any $\mathbf{x} \in \Omega$
$$f(\mathbf{x}) - \min_{\mathbf{x}' \in \Omega} f(\mathbf{x}') \ge \frac{\beta}{2} \bigl\|\mathbf{x} - \Pi_{\Omega^*}(\mathbf{x})\bigr\|^2, \qquad (8)$$
where $\Omega^* = \operatorname*{argmin}_{\mathbf{x} \in \Omega} f(\mathbf{x})$ is the set of minimizers of $f$ over $\Omega$.

The semi-strong convexity generalizes several non-strongly convex conditions, such as the quadratic approximation property and the error bound property (Wang and Lin, 2014; Necoara et al., 2015). A class of functions that satisfies semi-strong convexity is provided below (Gong and Ye, 2014).

Example 3

Consider the following constrained optimization problem:
$$\min_{\mathbf{x} \in \Omega} f(\mathbf{x}) = g(E\mathbf{x}) + \mathbf{b}^\top \mathbf{x},$$
where $g(\cdot)$ is strongly convex and smooth, $E$ is a matrix, and $\Omega$ is either $\mathbb{R}^d$ or a polyhedral set. Then, $f$ is semi-strongly convex over $\Omega$ with some constant $\beta > 0$.
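A small numerical illustration of this phenomenon (our own construction under the structure above, with $g(\mathbf{u}) = \frac{1}{2}\|\mathbf{u}\|^2$ and $\Omega = \mathbb{R}^{10}$): the Hessian $E^\top E$ is rank-deficient, so $f$ is not strongly convex, yet gradient descent still converges linearly in objective value, as semi-strong convexity predicts:

    import numpy as np

    rng = np.random.default_rng(2)
    E = rng.normal(size=(3, 10))   # rank 3 << 10, so f is NOT strongly convex
    c = rng.normal(size=3)
    b = E.T @ c                    # keeps the minimum attainable
    fstar = -0.5 * (c @ c)         # min of 0.5||u||^2 + c^T u over u = Ex

    f = lambda x: 0.5 * np.linalg.norm(E @ x) ** 2 + b @ x
    grad = lambda x: E.T @ (E @ x) + b

    eta = 1.0 / np.linalg.eigvalsh(E @ E.T).max()  # step size 1/L
    x = rng.normal(size=10)
    gaps = [f(x) - fstar]
    for _ in range(20):
        x = x - eta * grad(x)
        gaps.append(f(x) - fstar)
    print([gaps[i + 1] / gaps[i] for i in range(5)])  # steady ratio < 1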

Based on the semi-strong convexity, we assume the functions satisfy the following conditions.

Assumption 7

Suppose the following conditions hold for each $f_t$.

  1. $f_t$ is semi-strongly convex over $\Omega$ with parameter $\beta > 0$, and $L$-smooth;

  2. $\|\nabla f_t(\mathbf{x})\| \le G$, $\forall \mathbf{x} \in \Omega$.

When the function is semi-strongly convex, the optimal solution may not be unique. Thus, we need to redefine $\mathcal{P}_T^*$ and $\mathcal{S}_T^*$ to account for this freedom. We define
$$\mathbf{x}_t^* := \Pi_{\Omega_t^*}(\mathbf{x}_t),$$
the minimizer of $f_t$ nearest to the learner's solution, and keep the definitions in (3) and (4) with this choice, where $\Omega_t^* = \operatorname*{argmin}_{\mathbf{x} \in \Omega} f_t(\mathbf{x})$ is the set of minimizers of $f_t$ over $\Omega$.

In this case, we will use the standard online gradient descent when the learner can query the gradient only once, and apply the online multiple gradient descent (OMGD) in Algorithm 1 when the learner can access the gradient multiple times. Using an analysis similar to that of Theorems 2 and 3, we obtain the following dynamic regret bounds for functions that are semi-strongly convex and smooth.

Theorem 8

Suppose Assumption 7 is true. By setting $\eta \le 1/L$ in online gradient descent, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \frac{G}{1 - \gamma}\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\| + \mathcal{P}_T^*\Bigr),$$
where $\mathbf{x}_t^* = \Pi_{\Omega_t^*}(\mathbf{x}_t)$, and $\gamma < 1$ is the contraction factor of one projected gradient step for semi-strongly convex and smooth functions (Necoara et al., 2015, Theorem 5.2).

Thus, online gradient descent still achieves an $O(\mathcal{P}_T^*)$ bound of the dynamic regret.

Theorem 9

Suppose Assumption 7 is true. By setting $\eta \le 1/L$ and $K = \lceil \ln 2 / \ln(1/\gamma) \rceil$ in Algorithm 1, for any constant $\alpha > 0$, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \min\left\{ \frac{G}{1 - \gamma^K}\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\| + \mathcal{P}_T^*\Bigr),\; \frac{1}{2\alpha}\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2 + (\alpha + L)\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\|^2 + 2\mathcal{S}_T^*\Bigr) \right\},$$
where $\mathbf{x}_t^* = \Pi_{\Omega_t^*}(\mathbf{x}_t)$, and $\gamma$ is the same contraction factor as in Theorem 8.

Again, when the gradients of the minimizers are small, in other words, $\sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2 = O\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*)\bigr)$, the proposed OMGD improves the dynamic regret from $O(\mathcal{P}_T^*)$ to $O\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*)\bigr)$.

3.3 Self-concordant Functions

We extend our previous results to self-concordant functions, which could be non-strongly convex and even non-smooth. Self-concordant functions play an important role in interior-point methods for solving convex optimization problems. We note that in the study of bandit linear optimization (Abernethy et al., 2008b), self-concordant functions have been used as barriers for constraints. However, to the best of our knowledge, this is the first time that the losses themselves are taken to be self-concordant.

The definition of self-concordant functions is given below (Nemirovski, 2004).

Definition 4

Let $\mathcal{X}$ be a nonempty open convex set in $\mathbb{R}^d$, and let $f$ be a three-times continuously differentiable convex function defined on $\mathcal{X}$. $f$ is called self-concordant on $\mathcal{X}$ if it possesses the following two properties:

  1. $f(\mathbf{x}_i) \to \infty$ along every sequence $\{\mathbf{x}_i \in \mathcal{X}\}$ converging, as $i \to \infty$, to a boundary point of $\mathcal{X}$;

  2. $f$ satisfies the differential inequality
$$\bigl|D^3 f(\mathbf{x})[\mathbf{h}, \mathbf{h}, \mathbf{h}]\bigr| \le 2\bigl(\mathbf{h}^\top \nabla^2 f(\mathbf{x})\, \mathbf{h}\bigr)^{3/2}$$
     for all $\mathbf{x} \in \mathcal{X}$ and all $\mathbf{h} \in \mathbb{R}^d$, where
$$D^3 f(\mathbf{x})[\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3] = \left.\frac{\partial^3}{\partial t_1 \partial t_2 \partial t_3}\right|_{t_1 = t_2 = t_3 = 0} f(\mathbf{x} + t_1 \mathbf{h}_1 + t_2 \mathbf{h}_2 + t_3 \mathbf{h}_3).$$

Example 4

We provide some examples of self-concordant functions below (Boyd and Vandenberghe, 2004; Nemirovski, 2004).

  1. The function $f(x) = -\ln x$ is self-concordant on $(0, \infty)$.

  2. A convex quadratic form $f(\mathbf{x}) = \mathbf{x}^\top A \mathbf{x} + \mathbf{b}^\top \mathbf{x} + c$, where $A \succeq 0$, $\mathbf{b} \in \mathbb{R}^d$, and $c \in \mathbb{R}$, is self-concordant on $\mathbb{R}^d$, since its third derivative vanishes.

  3. If $f$ is self-concordant on $\mathcal{X}$, and $A \in \mathbb{R}^{d \times k}$, $\mathbf{b} \in \mathbb{R}^d$, then $f(A\mathbf{x} + \mathbf{b})$ is self-concordant on $\{\mathbf{x} \in \mathbb{R}^k : A\mathbf{x} + \mathbf{b} \in \mathcal{X}\}$.

Using the concept of self-concordance, we make the following assumptions.

Assumption 10

Suppose the following conditions hold for each $f_t$.

  1. $f_t$ is self-concordant on its domain $\mathcal{X}_t$;

  2. $f_t$ is non-degenerate on $\mathcal{X}_t$, i.e., $\nabla^2 f_t(\mathbf{x}) \succ 0$, $\forall \mathbf{x} \in \mathcal{X}_t$;

  3. $f_t$ attains its minimum on $\mathcal{X}_t$, and we denote $\mathbf{x}_t^* = \operatorname*{argmin}_{\mathbf{x} \in \mathcal{X}_t} f_t(\mathbf{x})$.

Our approach is similar to the previous cases except for the updating rule of the inner solutions. Since we do not assume the functions are strongly convex, we need to take the second-order structure into account when updating the current solution. Thus, we assume the learner can query both the gradient and Hessian of each function multiple times. Specifically, we apply the damped Newton method (Nemirovski, 2004) to update the inner sequence:
$$\mathbf{z}_t^{j+1} = \mathbf{z}_t^j - \frac{1}{1 + \lambda_t(\mathbf{z}_t^j)}\bigl[\nabla^2 f_t(\mathbf{z}_t^j)\bigr]^{-1} \nabla f_t(\mathbf{z}_t^j),$$
where
$$\lambda_t(\mathbf{x}) = \sqrt{\nabla f_t(\mathbf{x})^\top \bigl[\nabla^2 f_t(\mathbf{x})\bigr]^{-1} \nabla f_t(\mathbf{x})} \qquad (9)$$
is the Newton decrement. Then, we set $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$. Since the damped Newton method needs to calculate the inverse of the Hessian matrix, its complexity is higher than that of gradient descent. The procedure is named Online Multiple Newton Update (OMNU) and is summarized in Algorithm 2.

0:  The number of inner iterations $K$ in each round
1:  Let $\mathbf{x}_1$ be any point in $\mathcal{X}_1$
2:  for $t = 1, \ldots, T$ do
3:     Submit $\mathbf{x}_t$ and receive the loss $f_t(\cdot)$
4:     $\mathbf{z}_t^1 = \mathbf{x}_t$
5:     for $j = 1, \ldots, K$ do
6:        $\mathbf{z}_t^{j+1} = \mathbf{z}_t^j - \frac{1}{1 + \lambda_t(\mathbf{z}_t^j)}\bigl[\nabla^2 f_t(\mathbf{z}_t^j)\bigr]^{-1} \nabla f_t(\mathbf{z}_t^j)$, where $\lambda_t(\cdot)$ is given in (9)
7:     end for
8:     $\mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}$
9:  end for
Algorithm 2 Online Multiple Newton Update (OMNU)
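A sketch of the inner update of OMNU (ours); grad_f and hess_f are assumed oracles for $\nabla f_t$ and $\nabla^2 f_t$:

    import numpy as np

    def damped_newton_step(grad_f, hess_f, z):
        # z+ = z - [1/(1 + lambda(z))] H(z)^{-1} g(z), where
        # lambda(z) = sqrt(g(z)^T H(z)^{-1} g(z)) is the Newton decrement in (9)
        g, H = grad_f(z), hess_f(z)
        direction = np.linalg.solve(H, g)   # H^{-1} g without forming the inverse
        decrement = np.sqrt(g @ direction)  # Newton decrement lambda(z)
        return z - direction / (1.0 + decrement)

Solving the linear system costs $O(d^3)$ time per step in general, which is the complexity gap relative to gradient descent mentioned above.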

To analyze the dynamic regret of OMNU, we redefine the two regularities $\mathcal{P}_T^*$ and $\mathcal{S}_T^*$ as follows:
$$\mathcal{P}_T^* := \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_t \quad \text{and} \quad \mathcal{S}_T^* := \sum_{t=2}^T \|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_t^2,$$
where $\|\mathbf{x}\|_t = \sqrt{\mathbf{x}^\top \nabla^2 f_t(\mathbf{x}_t^*)\, \mathbf{x}}$. Compared to the definitions in (3) and (4), we introduce the Hessian $\nabla^2 f_t(\mathbf{x}_t^*)$ when measuring the distance between $\mathbf{x}_t^*$ and $\mathbf{x}_{t-1}^*$. When the functions are strongly convex and smooth, these definitions are equivalent up to constant factors. We then define a quantity to compare the second-order structure of consecutive functions:
$$\mu = \max_{t \ge 2}\ \lambda_{\max}\Bigl(\bigl[\nabla^2 f_t(\mathbf{x}_t^*)\bigr]^{1/2} \bigl[\nabla^2 f_{t-1}(\mathbf{x}_{t-1}^*)\bigr]^{-1} \bigl[\nabla^2 f_t(\mathbf{x}_t^*)\bigr]^{1/2}\Bigr), \qquad (10)$$
where $\lambda_{\max}(\cdot)$ computes the maximum eigenvalue of its argument. When all the functions are $\lambda$-strongly convex and $L$-smooth, $\mu \le L/\lambda$. Then, we have the following theorem regarding the dynamic regret of the proposed OMNU algorithm.
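Under our reading of (10) (the paper's exact expression may differ), $\mu$ can be evaluated from consecutive Hessians as a largest generalized eigenvalue, since $\lambda_{\max}(H_t^{1/2} H_{t-1}^{-1} H_t^{1/2})$ equals the largest $\nu$ with $H_t \mathbf{v} = \nu H_{t-1} \mathbf{v}$:

    import numpy as np
    from scipy.linalg import eigh

    def hessian_ratio(H_curr, H_prev):
        # lambda_max(H_curr^{1/2} H_prev^{-1} H_curr^{1/2}), computed as the
        # largest generalized eigenvalue of the pencil (H_curr, H_prev)
        return eigh(H_curr, H_prev, eigvals_only=True).max()

    # sanity check: if lam*I <= H <= L*I on both rounds, the ratio is at most L/lam
    lam, L = 0.5, 4.0
    H_prev = np.diag([lam, 1.0, L])
    H_curr = np.diag([L, 1.0, lam])
    print(hessian_ratio(H_curr, H_prev) <= L / lam)  # True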

Theorem 11

Suppose Assumption 10 is true, and further assume that successive minimizers are close, i.e.,
$$\|\mathbf{x}_t^* - \mathbf{x}_{t-1}^*\|_t \le \frac{1}{8\sqrt{\mu}}, \quad \forall t \ge 2. \qquad (11)$$
When $t = 1$, we choose the number of inner iterations in OMNU large enough that
$$\lambda_1(\mathbf{x}_2) \le \frac{1}{8\sqrt{\mu}}. \qquad (12)$$
For $t \ge 2$, we set $K$ to a constant (independent of $T$) in OMNU; then
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) = O\bigl(\min(\mathcal{P}_T^*, \mathcal{S}_T^*)\bigr).$$

The above theorem again implies that the dynamic regret can be upper bounded by $O(\min(\mathcal{P}_T^*, \mathcal{S}_T^*))$ when the learner can access the gradient and Hessian multiple times. From the first property of self-concordant functions in Definition 4, we know that $\mathbf{x}_t^*$ must lie in the interior of $\mathcal{X}_t$, and thus $\nabla f_t(\mathbf{x}_t^*) = 0$ for all $t$. As a result, we do not need the additional assumption that the gradients of the minimizers are small, which was used before to simplify Theorems 3 and 9.

Compared to Theorems 3 and 9, Theorem 11 introduces an additional condition in (11). This condition is required to ensure that $\mathbf{x}_t$ lies in the feasible region of $f_t$; otherwise, $f_t(\mathbf{x}_t)$ can be infinite and it is impossible to bound the dynamic regret. The multiple applications of the damped Newton method enforce $\mathbf{x}_{t+1}$ to be close to $\mathbf{x}_t^*$. Combined with (11), we conclude that $\mathbf{x}_{t+1}$ is also close to $\mathbf{x}_{t+1}^*$. Then, based on the property of the Dikin ellipsoid of self-concordant functions (Nemirovski, 2004), we can guarantee that $\mathbf{x}_{t+1}$ is feasible for $f_{t+1}$.

4 Analysis

In this section, we present proofs of all the theoretical results.

4.1 Proof of Theorem 2

For the sake of completeness, we include the proof of Theorem 2, which was proved by Mokhtari et al. (2016). We need the following property of gradient descent.

Lemma 5

Assume that $f$ is $\lambda$-strongly convex and $L$-smooth over $\Omega$, and $\mathbf{x}^* = \operatorname*{argmin}_{\mathbf{x} \in \Omega} f(\mathbf{x})$. Let $\mathbf{u} = \Pi_\Omega\bigl(\mathbf{v} - \eta \nabla f(\mathbf{v})\bigr)$, where $\eta \le 1/L$. We have
$$\|\mathbf{u} - \mathbf{x}^*\| \le \gamma\, \|\mathbf{v} - \mathbf{x}^*\|, \quad \text{where } \gamma = \sqrt{1 - 2\eta\lambda + \eta^2 \lambda L} < 1.$$

The constant $\gamma$ in the above lemma is better than the corresponding constant in Proposition 2 of Mokhtari et al. (2016).
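A quick numerical sanity check of the lemma (ours), on an unconstrained strongly convex quadratic so that the projection is the identity; gamma is the factor from our reconstruction:

    import numpy as np

    rng = np.random.default_rng(3)
    d = 4
    Q = np.diag(rng.uniform(0.5, 4.0, size=d))  # Hessian eigenvalues in [0.5, 4]
    xstar = rng.normal(size=d)
    grad = lambda x: Q @ (x - xstar)            # f(x) = 0.5 (x-x*)^T Q (x-x*)

    lam, L = 0.5, 4.0
    eta = 1.0 / L
    gamma = np.sqrt(1 - 2 * eta * lam + eta ** 2 * lam * L)

    v = rng.normal(size=d)
    u = v - eta * grad(v)                       # Omega = R^d: projection is identity
    print(np.linalg.norm(u - xstar) <= gamma * np.linalg.norm(v - xstar))  # True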

Since $\|\nabla f_t(\mathbf{x})\| \le G$ for any $\mathbf{x} \in \Omega$ and any $t$, we have
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \sum_{t=1}^T \langle \nabla f_t(\mathbf{x}_t), \mathbf{x}_t - \mathbf{x}_t^* \rangle \le G \sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}_t^*\|. \qquad (13)$$

We now proceed to bound $\sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}_t^*\|$. By the triangle inequality, we have
$$\|\mathbf{x}_t - \mathbf{x}_t^*\| \le \|\mathbf{x}_t - \mathbf{x}_{t-1}^*\| + \|\mathbf{x}_{t-1}^* - \mathbf{x}_t^*\|. \qquad (14)$$

Since $\mathbf{x}_t = \Pi_\Omega\bigl(\mathbf{x}_{t-1} - \eta \nabla f_{t-1}(\mathbf{x}_{t-1})\bigr)$, using Lemma 5, we have
$$\|\mathbf{x}_t - \mathbf{x}_{t-1}^*\| \le \gamma\, \|\mathbf{x}_{t-1} - \mathbf{x}_{t-1}^*\|. \qquad (15)$$

From (14) and (15), we have
$$\sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}_t^*\| \le \|\mathbf{x}_1 - \mathbf{x}_1^*\| + \gamma \sum_{t=2}^T \|\mathbf{x}_{t-1} - \mathbf{x}_{t-1}^*\| + \mathcal{P}_T^*,$$
implying
$$\sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}_t^*\| \le \frac{1}{1 - \gamma}\Bigl(\|\mathbf{x}_1 - \mathbf{x}_1^*\| + \mathcal{P}_T^*\Bigr). \qquad (16)$$

We complete the proof by substituting (16) into (13).

4.2 Proof of Lemma 5

We first introduce the following property of strongly convex functions (Hazan and Kale, 2011).

Lemma 6

Assume that $f$ is $\lambda$-strongly convex over $\Omega$, and $\mathbf{x}^* = \operatorname*{argmin}_{\mathbf{x} \in \Omega} f(\mathbf{x})$. Then, we have
$$f(\mathbf{x}) - f(\mathbf{x}^*) \ge \frac{\lambda}{2} \|\mathbf{x} - \mathbf{x}^*\|^2, \quad \forall \mathbf{x} \in \Omega. \qquad (17)$$

From the updating rule and the nonexpansiveness of the projection operator, we have
$$\|\mathbf{u} - \mathbf{x}^*\|^2 \le \bigl\|\mathbf{v} - \eta \nabla f(\mathbf{v}) - \mathbf{x}^*\bigr\|^2.$$
Expanding the square on the right-hand side, we have
$$\|\mathbf{u} - \mathbf{x}^*\|^2 \le \|\mathbf{v} - \mathbf{x}^*\|^2 - 2\eta \langle \nabla f(\mathbf{v}), \mathbf{v} - \mathbf{x}^* \rangle + \eta^2 \|\nabla f(\mathbf{v})\|^2. \qquad (18)$$

Since $f$ is $\lambda$-strongly convex, we have
$$\langle \nabla f(\mathbf{v}), \mathbf{v} - \mathbf{x}^* \rangle \ge f(\mathbf{v}) - f(\mathbf{x}^*) + \frac{\lambda}{2} \|\mathbf{v} - \mathbf{x}^*\|^2. \qquad (19)$$

On the other hand, the smoothness assumption implies
$$\|\nabla f(\mathbf{v})\|^2 \le 2L\bigl(f(\mathbf{v}) - f(\mathbf{x}^*)\bigr). \qquad (20)$$

Combining (18), (19), and (20), we obtain
$$\|\mathbf{u} - \mathbf{x}^*\|^2 \le (1 - \eta\lambda)\|\mathbf{v} - \mathbf{x}^*\|^2 - 2\eta(1 - \eta L)\bigl(f(\mathbf{v}) - f(\mathbf{x}^*)\bigr). \qquad (21)$$

Applying Lemma 6, we have
$$f(\mathbf{v}) - f(\mathbf{x}^*) \ge \frac{\lambda}{2} \|\mathbf{v} - \mathbf{x}^*\|^2. \qquad (22)$$

We complete the proof by substituting (22) into (21) and rearranging: since $\eta \le 1/L$, this yields
$$\|\mathbf{u} - \mathbf{x}^*\|^2 \le \bigl(1 - 2\eta\lambda + \eta^2 \lambda L\bigr)\|\mathbf{v} - \mathbf{x}^*\|^2 = \gamma^2\, \|\mathbf{v} - \mathbf{x}^*\|^2.$$

4.3 Proof of Theorem 3

Since $f_t$ is $L$-smooth, we have
$$f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \le \langle \nabla f_t(\mathbf{x}_t^*), \mathbf{x}_t - \mathbf{x}_t^* \rangle + \frac{L}{2} \|\mathbf{x}_t - \mathbf{x}_t^*\|^2.$$

Combining with the fact
$$\langle \mathbf{a}, \mathbf{b} \rangle \le \frac{1}{2\alpha} \|\mathbf{a}\|^2 + \frac{\alpha}{2} \|\mathbf{b}\|^2$$

for any $\alpha > 0$, we obtain
$$f_t(\mathbf{x}_t) - f_t(\mathbf{x}_t^*) \le \frac{1}{2\alpha} \|\nabla f_t(\mathbf{x}_t^*)\|^2 + \frac{\alpha + L}{2} \|\mathbf{x}_t - \mathbf{x}_t^*\|^2.$$

Summing the above inequality over $t$, we get
$$\sum_{t=1}^T f_t(\mathbf{x}_t) - \sum_{t=1}^T f_t(\mathbf{x}_t^*) \le \frac{1}{2\alpha} \sum_{t=1}^T \|\nabla f_t(\mathbf{x}_t^*)\|^2 + \frac{\alpha + L}{2} \sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}_t^*\|^2. \qquad (23)$$

We now proceed to bound $\sum_{t=1}^T \|\mathbf{x}_t - \mathbf{x}_t^*\|^2$. We have
$$\|\mathbf{x}_t - \mathbf{x}_t^*\|^2 \le 2\|\mathbf{x}_t - \mathbf{x}_{t-1}^*\|^2 + 2\|\mathbf{x}_{t-1}^* - \mathbf{x}_t^*\|^2. \qquad (24)$$

Recall the updating rule
$$\mathbf{z}_t^{j+1} = \Pi_\Omega\bigl(\mathbf{z}_t^j - \eta \nabla f_t(\mathbf{z}_t^j)\bigr), \qquad \mathbf{x}_{t+1} = \mathbf{z}_t^{K+1}.$$
From Lemma 5, we have
$$\|\mathbf{x}_{t+1} - \mathbf{x}_t^*\| \le \gamma^K\, \|\mathbf{x}_t - \mathbf{x}_t^*\|,$$
which implies