
# A Nearly-Linear Bound for Chasing Nested Convex Bodies

*This work was done while Michael B. Cohen and Yin Tat Lee were at Microsoft Research. Part of this research was done at the Simons Institute for the Theory of Computing.*

C.J. Argue    Sébastien Bubeck    Michael B. Cohen    Anupam Gupta    Yin Tat Lee
###### Abstract

Friedman and Linial [FL93] introduced the convex body chasing problem to explore the interplay between geometry and competitive ratio in metrical task systems. In convex body chasing, at each time step t, the online algorithm receives a request in the form of a convex body K_t ⊆ ℝ^d and must output a point x_t ∈ K_t. The goal is to minimize the total movement between consecutive output points, where the distance is measured in some given norm.

This problem is still far from being understood. Recently Bansal et al. [BBE17] gave an f(d)-competitive algorithm for the nested version, where each convex body is contained within the previous one, for a function f(d) that grows exponentially in the dimension d. We propose a different strategy which is O(d log d)-competitive for this nested convex body chasing problem. Our algorithm works for any norm. This result is almost tight, given an Ω(d) lower bound for the ℓ∞ norm [FL93].

## 1 Introduction

We consider the convex body chasing problem. This is a problem in online algorithms: the input is a starting point x₀ ∈ ℝ^d, and the request sequence consists of convex bodies K₁, K₂, …, K_T ⊆ ℝ^d. Upon seeing request K_t (but before seeing K_{t+1}) we must choose a feasible point x_t ∈ K_t within this body. The cost at step t is ∥x_t − x_{t−1}∥, the distance moved at this time step, where ∥·∥ is some fixed norm. The objective is to minimize the total distance moved by the algorithm:

 \sum_{t=1}^{T} \| x_t - x_{t-1} \|.
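As a concrete reading of this objective, the following sketch (illustrative only; the function name and example path are ours, not the paper's) computes the total movement of an output sequence under the Euclidean norm:

```python
import math

def total_movement(points):
    """Sum of distances between consecutive output points x_0, x_1, ..., x_T."""
    return sum(math.dist(points[t], points[t + 1]) for t in range(len(points) - 1))

# A path that moves 5 units, stays put, then moves 5 more: total cost 10.
path = [(0.0, 0.0), (3.0, 4.0), (3.0, 4.0), (6.0, 8.0)]
print(total_movement(path))  # 10.0
```

Any other norm could be substituted for the Euclidean distance in `math.dist`.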

We consider the model of competitive analysis, i.e., we want to bound the worst-case ratio between the algorithm’s cost and the optimal cost to serve the request sequence.

This problem was introduced by Friedman and Linial [FL93], with the goal of understanding the interplay between geometry and competitive ratio in online problems on metric spaces, many of which can be modeled using the metrical task systems framework (MTS) [BLS92]. In MTS one considers an arbitrary metric space (X, dist), and the request sequence is given by functions f_t : X → ℝ₊ ∪ {∞} which specify the cost to serve the request at each point in X. The goal is to minimize the movement plus service costs. If X = ℝ^d and the functions f_t are zero within the convex body K_t (and infinite outside), one gets the convex body chasing problem.

The role of the metric geometry is poorly understood both for the general MTS problem, as well as for the special case of convex body chasing. The latter problem is trivial for d = 1, and Friedman and Linial gave a competitive algorithm for d = 2. However, currently there is no known algorithm with a finite competitive ratio for d ≥ 3. Friedman and Linial gave a lower bound of Ω(√d) when ∥·∥ = ∥·∥₂ (the Euclidean norm) based on chasing faces of the hypercube; and for the ℓ∞ norm a lower bound of Ω(d) follows from [BN07] (both results hold in the nested version of the problem). We also know positive results for general d when the convex bodies are lower-dimensional objects (e.g., lines or planes or affine subspaces) [FL93, Sit14, ABN16], but these ideas do not seem to generalize to chasing full-dimensional objects.

In this paper we restrict to nested instances of convex body chasing, where the bodies are contained within each other, i.e., K₁ ⊇ K₂ ⊇ ⋯ ⊇ K_T. In this case, the optimal offline algorithm is to move at time t = 1 to some point x* in the final body K_T, and hence the optimal value is OPT = min_{x ∈ K_T} ∥x − x₀∥.

Bansal et al. [BBE17] gave an f(d)-competitive algorithm for chasing nested convex bodies in Euclidean space, for a function f(d) that depends only on (and grows exponentially in) the dimension d. Our main theorem improves on their result:

###### Theorem 1.1 (Main Theorem).

For any norm, there is an O(d log d)-competitive algorithm for nested convex body chasing.

This result is almost tight unless we make further assumptions on the norm, since there is an Ω(d) lower bound for the ℓ∞-norm [FL93].

#### Our Technique.

The high-level idea behind our algorithm is simple: we would like to stay “deep” inside the feasible region so that when this point becomes infeasible, the feasible region shrinks considerably. One natural candidate for such a “deep” point is the centroid, i.e., the center of mass of the feasible region. Indeed, it is known by a theorem of Grünbaum that any hyperplane passing through the centroid of a convex body splits it into two pieces each containing a constant fraction of the volume. At first, this seems promising: after t steps the volume would have dropped by a factor of (1 − 1/e)^t, so if the feasible region were well-rounded its diameter would halve and we would have made progress. The problem is that the feasible region may have some skinny directions and other fat ones, so that the diameter may not have shrunk even though the volume has dropped. Indeed, Bansal et al. [BBE17] give examples showing that this naïve centroid algorithm, as well as a related Ellipsoid-based algorithm, is not competitive. While our algorithm is also based on the centroid, it avoids the pitfalls illustrated by these examples.

Our main idea is that if we have a very skinny dimension—say the body started off looking like a sphere and now looks like a pancake—we have essentially lost a dimension! The body can be thought of as lying in a space with one fewer dimension, and we should act accordingly. Slightly more precisely, we restrict to the skinny directions and solve the problem in that subspace recursively. Our cost is tiny because these directions are skinny. Once the feasible region no longer intersects this subspace, we can find a hyperplane that cuts along the fat directions (i.e., parallel to the skinny directions), which makes progress towards reducing the diameter.

#### The Greedy Algorithm.

We also bound the competitive ratio of the simplest algorithm for this problem, namely the greedy algorithm. This algorithm, at time t, outputs the point x_t := argmin_{x ∈ K_t} ∥x − x_{t−1}∥ obtained by moving to the closest feasible point at each step. We observe that a slightly better bound than that of Bansal et al. can be obtained for this algorithm as well.
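To make the greedy rule concrete, here is a minimal sketch on nested axis-aligned boxes, a special case chosen because the Euclidean projection can be computed coordinate-wise (the box representation and function names are ours, for illustration only):

```python
import math

def project_onto_box(x, lo, hi):
    """Euclidean projection of x onto the box [lo_1, hi_1] x ... x [lo_d, hi_d]."""
    return tuple(min(max(xi, l), h) for xi, l, h in zip(x, lo, hi))

def greedy_chase(start, boxes):
    """Move to the closest feasible point of each successive (nested) body."""
    x, cost = start, 0.0
    for lo, hi in boxes:
        y = project_onto_box(x, lo, hi)
        cost += math.dist(x, y)
        x = y
    return x, cost

# Three nested boxes; greedy pays 1 + 1 + 0.5 = 2.5 starting from (-2, 0).
boxes = [((-1, -1), (1, 1)), ((0, -1), (1, 1)), ((0, 0.5), (1, 1))]
point, cost = greedy_chase((-2.0, 0.0), boxes)
```

For general convex bodies the projection step requires a convex-optimization oracle, but the structure of the algorithm is the same.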

The analysis of the greedy algorithm follows from a result of Manselli and Pucci [MP91] on self-contracted curves. A rectifiable curve γ : [0, L] → ℝ^d is self-contracted if for every t such that γ has a tangent vector γ′(t) at t, the sub-curve γ([t, L]) is contained in the half-space {x : γ′(t)^⊺(x − γ(t)) ≥ 0}.

###### Theorem 1.2 (Manselli and Pucci [MP91]).

Let γ be a self-contracted curve in ℝ^d and Ω be a bounded convex set containing γ. Then the length of γ is at most c_d ⋅ diam(Ω), where c_d is a dimension-dependent constant expressed in terms of S_{d−1}, the (d−1)-dimensional surface area of the unit sphere in ℝ^d.

###### Theorem 1.3 (Greedy).

The greedy algorithm is O(c_d)-competitive for nested convex body chasing with the Euclidean norm, where c_d is the constant of Theorem 1.2.

###### Proof.

Let γ be the piecewise affine extension of the greedy algorithm’s path (x₀, x₁, …, x_T). Essentially by definition, γ is a self-contracted curve. For each point p in K_T, the distance ∥x_t − p∥ is decreasing in t. In particular, if x* ∈ K_T is the optimal solution with cost OPT = ∥x₀ − x*∥, γ is contained in the ball of radius OPT centered at x*. Theorem 1.2 implies a competitiveness of c_d ⋅ diam(Ω)/OPT. Now using that the last ratio is at most 2 gives the claim. ∎

In upcoming joint work by the second author with O. Angel and F. Nazarov, we also show that greedy’s competitive ratio is at least 2^{cd} for some constant c > 0 (and at most 2^{Cd} for some constant C). Thus Theorem 1.1 gives a provably exponential improvement over the greedy algorithm.

### 1.1 Other Related Work

Motivated by problems in power management in data servers, Andrew et al. [LWR12, ABL13] studied the smoothed online convex optimization (SOCO) problem (not to be confused with online convex optimization in online learning). SOCO (also called chasing convex functions) is a special case of MTS, where the metric space is ℝ^d equipped with some norm, and the cost functions are convex; hence it generalizes convex body chasing. A 2-competitive algorithm is known for d = 1 [ABL13, BGK15]. Antoniadis et al. [ABN16] gave an intuitive algorithm for chasing lines and affine subspaces, and they also showed reductions between convex body chasing and SOCO. Finally, the online primal-dual framework [BN07] can also be viewed as chasing nested covering constraints, i.e., bodies of the form K_t = {x ≥ 0 : aᵢ^⊺x ≥ 1 for all i ≤ t} for nonnegative vectors aᵢ, with the ℓ₁ metric (i.e., ∥·∥ = ∥·∥₁).

### 1.2 Reductions

We recall some simple reductions between convex body chasing and two other closely related problems, which allow for a guess-and-double approach. This allows us to move between the original convex body chasing problem and its variants where one plays in a convex body until its diameter falls by a constant factor.

###### Claim 1.4.

For some function f(d), the following three propositions are equivalent (up to constant factors):

• (General) There exists an f(d)-competitive algorithm for nested convex body chasing.

• (r-Bounded) For any r > 0, and assuming that K₁ ⊆ B(x₀, r), there exists an algorithm for nested convex body chasing with total movement O(f(d) ⋅ r).

• (r-Tightening) For any r > 0, and assuming that K₁ ⊆ B(x₀, r), there exists an algorithm for nested convex body chasing that incurs total movement cost O(f(d) ⋅ r) until the first time t at which K_t is contained in some ball of radius r/2.

###### Proof.

The implications (1) ⇒ (2) and (2) ⇒ (3) are clear. To get (2) ⇒ (1), we iteratively run the r-Bounded algorithm starting at r = r₁ and doubling r in each run. Specifically, let r₁ := dist(x₀, K₁) be the distance from the starting point x₀ to the first convex body K₁; this will be our initial guess for the optimal cost. (Without loss of generality, we can assume that x₀ ∉ K₁ and hence r₁ > 0, else we can drop K₁ from the sequence.) In the k-th run, we execute the r-Bounded problem algorithm with parameter r_k := 2^{k−1} r₁ on the truncated sets K_t ∩ B(x₀, r_k). If K_t ∩ B(x₀, r_k) = ∅ for some t, then we know that K_t ⊄ B(x₀, r_k), and hence the optimal cost is strictly more than r_k. In that case we move back to x₀, and begin the (k+1)-st run of our algorithm (i.e., start playing the r_{k+1}-Bounded problem on the current truncated set K_t ∩ B(x₀, r_{k+1})).

By our assumption, the algorithm’s cost for run k is O(f(d) ⋅ r_k), plus r_k for the final cost of moving back to x₀ at the end of the iteration. If the algorithm requires T iterations, the optimal cost is at least r_{T−1} = r_T/2 (because the (T−1)-st run ended) whereas our total cost is at most

 \sum_{k=1}^{T} O(f(d)+1)\, r_k = \sum_{k=1}^{T} O(f(d)+1)\, 2^{k-T} r_T < 2 \cdot O(f(d)+1)\, r_T,

which implies a competitive ratio of O(f(d) + 1). (This reduction also appears as [BBE17, Lemma 6].)

Finally, to show (3) ⇒ (2), we first run the r-Tightening algorithm until K_t is contained in some ball of radius r/2, then move to the center of the smaller ball and use the (r/2)-Tightening algorithm on that ball, and so on. The new center is at distance O(r) from the old center, and hence the cost for the first ball is O(f(d) ⋅ r). Now the cost is reduced by a factor of 2 for each successive iteration, thus the total cost is at most O(f(d) ⋅ r) ⋅ ∑_{i ≥ 0} 2^{−i} = O(f(d) ⋅ r). ∎
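The guess-and-double driver from the (2) ⇒ (1) direction can be sketched schematically as follows; `run_bounded` stands in for a hypothetical r-Bounded subroutine that reports its cost and whether some truncated body became empty (all names and the toy instance are ours, not the paper's):

```python
def guess_and_double(r1, run_bounded, max_runs=64):
    """run_bounded(r) -> (cost, opt_exceeds_r); doubles the guess r until it wins."""
    r, total = r1, 0.0
    for _ in range(max_runs):
        cost, exceeded = run_bounded(r)
        total += cost
        if not exceeded:
            return total, r
        total += r  # move back to the starting point before the next run
        r *= 2      # the optimal cost is > r, so double the guess
    return total, r

# Toy instance: pretend OPT = 5 and each bounded run costs exactly its radius.
def toy_bounded(r, opt=5.0):
    return (r, r < opt)

total, final_r = guess_and_double(1.0, toy_bounded)  # stops once r = 8 >= OPT
```

The geometric growth of the guesses is what keeps the total cost within a constant factor of the final (successful) run.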

## 2 Preliminaries

We gather here notation and classical convex geometry results that will be useful in our analysis.

### 2.1 Notation

Given a convex body K ⊆ ℝ^d, its centroid (also called its center of mass/gravity) is

 \mu(K) := \frac{1}{\mathrm{Vol}(K)} \int_{x \in K} x \, dx = \mathbb{E}_{X \sim K}[X].

Given a unit vector v ∈ ℝ^d, the directional width of a set K in the direction v is

 w(K, v) = \max_{x, y \in K} v^{\intercal}(x - y). \qquad (1)

We denote δ(K) := min_{∥v∥₂ = 1} w(K, v) for the minimum directional width of K over all unit vectors v. We define Π_L X to be the projection of the set X on the subspace L, that is

 \Pi_L X := \{ x \in L : x + y \in X \text{ for some } y \in L^{\perp} \}.
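These quantities are easy to evaluate for the convex hull of a finite point set, since the maxima in (1) are attained at vertices; the following small check (our own illustration, not from the paper) computes two directional widths of the unit square:

```python
import math

def directional_width(points, v):
    """w(K, v) = max_{x,y in K} v^T (x - y), for K = conv(points)."""
    dots = [sum(vi * pi for vi, pi in zip(v, p)) for p in points]
    return max(dots) - min(dots)

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
w_axis = directional_width(square, (1, 0))   # width 1 along e1
diag = (1 / math.sqrt(2), 1 / math.sqrt(2))
w_diag = directional_width(square, diag)     # width sqrt(2) along the diagonal
```

The minimum width δ(K) would be obtained by minimizing this quantity over all unit vectors v.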

In the following we fix a norm ∥·∥ in ℝ^d, and use B(x, r) to denote a ball of radius r centered at x. Furthermore we will assume that the norm satisfies, for all x ∈ ℝ^d:

 \|x\|_2 \le \|x\| \le \sqrt{d} \, \|x\|_2. \qquad (2)

Indeed, given a full-dimensional convex body B which is symmetric about the origin, John’s theorem guarantees the existence of an ellipsoid E such that E/√d ⊆ B ⊆ E (see, e.g., [Bal92]). Take B to be the unit ball of ∥·∥, and by applying a linear transformation we may assume that E is the unit Euclidean ball. We see that (2) holds true, so we make this assumption without loss of generality.

### 2.2 Convex geometry reminders

We use the following theorems from convex geometry in our analysis. Let K denote a general convex body in ℝ^d. Some definitions used here were given in Section 2.1.

###### Theorem 2.1 (Grünbaum’s Theorem [Grü60]).

For any half-space H containing the centroid μ(K) of K one has

 \mathrm{Vol}(K \cap H) \ge \frac{1}{e}\, \mathrm{Vol}(K).
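As a quick numeric sanity check (ours, not from the paper), one can Monte Carlo estimate the retained fraction on a 2-D triangle; for a simplex a centroid cut keeps a (d/(d+1))^d = 4/9 fraction in the worst case, comfortably above 1/e:

```python
import random

random.seed(0)

def sample_triangle():
    """Uniform sample from the triangle with vertices (0,0), (1,0), (0,1)."""
    a, b = random.random(), random.random()
    if a + b > 1:
        a, b = 1 - a, 1 - b
    return a, b

# Halfspace H = {x : x_1 >= 1/3}; its boundary passes through the centroid (1/3, 1/3).
n = 200_000
kept = sum(1 for _ in range(n) if sample_triangle()[0] >= 1 / 3)
fraction = kept / n  # exact answer is (2/3)^2 = 4/9 ~ 0.444 > 1/e ~ 0.368
```

As d grows, the simplex shows that the constant 1/e in Grünbaum’s theorem cannot be improved.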
###### Theorem 2.2 ([LLV17, Lemma 6.1]).

Recall that δ(K) is the minimum directional width of K over all directions. For any subspace L one has

 \mathrm{Vol}(\Pi_L K) \le \left( \frac{d(d+1)}{\delta(K)} \right)^{d - \dim L} \cdot \mathrm{Vol}(K).
###### Theorem 2.3 ([KLS95, Theorem 4.1]).

Let μ denote the centroid of K. Let A be the covariance matrix of (the uniform distribution on) K, and let E be the ellipsoid defined by E := {x : x^⊺A^{−1}x ≤ 1}. Then one has

 \mu + \sqrt{\tfrac{d+1}{d}}\, E \subseteq K \subseteq \mu + \sqrt{d(d+1)}\, E.
###### Lemma 2.4.

Any convex body K contains a Euclidean ball of radius δ(K)/(2d). Furthermore, for any halfspace H containing the centroid μ(K) on its boundary (i.e., H = {x : v^⊺(x − μ(K)) ≥ 0} for some v), one has

 \delta(H \cap K) \ge \delta(K)/(2d).
###### Proof.

Let E be the ellipsoid defined in Theorem 2.3. Since K ⊆ μ + √(d(d+1)) E, we have that the minimum width of the scaled ellipsoid √(d(d+1)) E is larger than δ(K), which in turn implies that μ + √((d+1)/d) E contains a Euclidean ball of radius δ(K)/(2d) centered at μ. Thus using that μ + √((d+1)/d) E ⊆ K we get that K contains a Euclidean ball of radius δ(K)/(2d) centered at μ. The second statement follows since a half-ball of radius r contains a ball of radius r/2. ∎

## 3 A Centroid-Based Algorithm

We present a centroid-based algorithm and prove that it is O(d log d)-competitive. By the reductions in Claim 1.4 and by scaling, it suffices to give an algorithm for the 1-Tightening version of the problem, which starts off with the convex body being contained in a unit ball, terminates with the final body lying in a ball of radius 1/2, and pays at most O(d log d).

### 3.1 Overview of the Bounded and Tighten Algorithms

To simplify notation we ignore the time indexing t, and we denote X for the current requested convex body. Thus at the start of the algorithm one has X = K₁, and every time the algorithm moves to a new point x, the adversary updates X to the next convex body in the input sequence that does not contain x.

Both the Bounded and Tighten algorithms take as input an affine subspace L. They each output points until their respective end conditions are met. The two algorithms essentially differ only in their end conditions: Bounded terminates when X ∩ L is empty, while Tighten terminates when X ∩ L is “skinny” in every direction.

In the next section we will define the Tighten algorithm. Recall that using the reductions of Section 1.2, we then also get an algorithm for the Bounded version of the problem. The algorithm Tighten makes calls to Bounded only in lower-dimensional subspaces, so there is no circular reference.

### 3.2 The Tighten Algorithm

Due to the recursive nature of our algorithm we will consider solving the Tighten problem in an affine subspace L. More precisely, while X is a convex body in ℝ^d, we are only interested in its “shadow” X ∩ L. Given the guarantee that at the start X ∩ L is contained in a unit ball (in L), the goal is to output points within X until X ∩ L lies inside some ball of radius at most 1/2.

Algorithm Tighten(L):

1. Let d′ be the dimension of L; if d′ ≤ 1, run the greedy algorithm.

2. Let k := 0 and δ := c/(d² log d) for some small enough constant c > 0.

3. Let S be a subspace of “skinny” directions, obtained by choosing a maximal set of orthogonal directions v (parallel to L) such that the directional width of X ∩ L is smaller than δ, i.e., w(X ∩ L, v) < δ.

4. While dim S < d′:

1. Let μ be the centroid of Π_{S⊥}(X ∩ L). Move to any point x ∈ X ∩ L such that Π_{S⊥} x = μ.

2. Call the procedure Bounded(x + S).

3. While there is a direction v parallel to L and orthogonal to S such that w(X ∩ L, v) < δ, update S ← S ⊕ span(v).

4. k ← k + 1, and record X_k := X, S_k := S.

Observe that when the loop stops, the body X ∩ L has directional width at most δ in each direction of some orthonormal basis of L (since dim S = d′), and thus it is contained in a Euclidean ball of radius O(dδ). This means that the diameter in the norm ∥·∥ is at most O(d^{3/2}δ) ≤ 1/2 (recall (2) and the choice of δ). In particular this is a valid stopping time for the Tighten problem. Thus we only have to analyze the movement cost of Tighten.

#### An Example.

Suppose at the beginning of some iteration k, X is a unit-radius pancake with height δ centered at the origin, with the major axes in the x-y plane and the short dimension along the z-axis. The subspace S is such that X is skinny along directions in this subspace; in this case suppose S is the z-direction. The algorithm takes the projection Π_{S⊥}X of X onto the non-skinny directions (the x-y plane), finds its centroid μ and chooses x ∈ X that projects to this centroid: Π_{S⊥}x = μ. In this case the centroid of the projection is the origin, and then x is any point in the pancake with Π_{S⊥}x = 0.

The algorithm then recurses with the affine subspace x + S, i.e., the z-axis through x. When this recursive call terminates, X has an empty intersection with this affine subspace (i.e., with that copy of the z-axis). This means there exists a hyperplane that separates x + S from the new X—in particular, the new X lies within some “half-pancake”. This operation not only reduces X’s volume, but also makes a substantial reduction in its width along some direction in the x-y plane.
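Step 3's collection of skinny directions can be illustrated on the pancake example; for simplicity this sketch (ours, not the paper's) scans only the coordinate directions of an axis-aligned box, which automatically keeps the chosen set orthogonal, whereas the algorithm itself allows arbitrary orthogonal unit vectors:

```python
def skinny_coordinate_directions(lo, hi, delta):
    """Indices i whose directional width hi[i] - lo[i] is below delta."""
    return [i for i, (l, h) in enumerate(zip(lo, hi)) if h - l < delta]

# A "pancake": unit size in x and y, height 0.01 along z.
lo, hi = (0.0, 0.0, 0.0), (1.0, 1.0, 0.01)
S = skinny_coordinate_directions(lo, hi, delta=0.1)  # only the z-direction is skinny
```

For general bodies, finding a maximal orthogonal set of skinny directions requires minimizing the width over the subspace orthogonal to the directions already chosen.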

### 3.3 Cost Analysis

Each iteration of the loop induces a movement of at most O(1) when we move to the centroid (recall that by assumption X ∩ L is of diameter at most O(1)) plus the movement in the recursive call. The latter movement is tiny, since in the recursion the body has a small diameter (O(dδ)). So the main part of the analysis is to bound the number of iterations; the bound on the total movement is then proved in Theorem 3.2.

In the following we denote X_k and S_k for the values of X and S at the beginning of iteration k.

###### Lemma 3.1.

The procedure Tighten terminates in at most O(d log d) iterations.

###### Proof.

We bound the number of iterations of the algorithm via the potential

 \mathrm{Vol}(\Pi_{S_k^{\perp}} X_k).

At a high level, the proof shows that the hyperplane cuts cause this projected volume to decrease rapidly (since we are making cuts along the non-skinny directions). And when S_k grows, we show that the projected volume does not increase too much.

The key observation is that Π_{S_k⊥}X_{k+1} is contained in the intersection of Π_{S_k⊥}X_k with a halfspace passing through its centroid μ. Indeed, after the recursive call to Bounded in iteration k, one has that X ∩ (x + S_k) = ∅; recall that Π_{S_k⊥}x = μ at that time. But X ∩ (x + S_k) = ∅ implies that Π_{S_k⊥}X and μ can be separated by some hyperplane.

Let Y_k denote the intersection of Π_{S_k⊥}X_k with this halfspace. By Grünbaum’s theorem (Theorem 2.1),

 \mathrm{Vol}(Y_k) \le \left(1 - \frac{1}{e}\right) \mathrm{Vol}(\Pi_{S_k^{\perp}} X_k). \qquad (3)

Now the construction of S_k ensures that there are no directions orthogonal to S_k that are skinny. Hence the minimum width of Π_{S_k⊥}X_k is at least δ. By Lemma 2.4, the minimum width of Y_k is at least δ/(2d). Now, we apply Theorem 2.2 to Y_k and use δ(Y_k) ≥ δ/(2d) to get:

 \mathrm{Vol}(\Pi_{S_{k+1}^{\perp}} Y_k) \le \left( \frac{2d^2(d+1)}{\delta} \right)^{\dim S_{k+1} - \dim S_k} \mathrm{Vol}(Y_k). \qquad (4)

The containments X_{k+1} ⊆ X_k and Π_{S_k⊥}X_{k+1} ⊆ Y_k imply:

 \mathrm{Vol}(\Pi_{S_{k+1}^{\perp}} X_{k+1}) \le \mathrm{Vol}(\Pi_{S_{k+1}^{\perp}} Y_k). \qquad (5)

Combining (3), (4), and (5), we have that after T steps,

 \mathrm{Vol}(\Pi_{S_T^{\perp}} X_T) \le \left( \frac{2d^2(d+1)}{\delta} \right)^{\dim S_T} \cdot \left(1 - \frac{1}{e}\right)^{T} \cdot \mathrm{Vol}(X_0) \le \left( \frac{2d^2(d+1)}{\delta} \right)^{d} \cdot \left(1 - \frac{1}{e}\right)^{T} \cdot O(1)^d \qquad (6)

where we used that X₀ is contained in the unit ball in ℝ^d and hence has volume at most O(1)^d.

Now let T be the last iteration where S_T^⊥ ∩ L has a non-zero number of dimensions (i.e., the step just before the procedure ends). At this point Π_{S_T⊥}X_T has minimum width at least δ. Lemma 2.4 shows that it contains a ball of radius δ/(2d), and hence has volume at least (δ/O(d))^d. This gives a lower bound on the potential.

Combining this lower bound with (6), which upper bounds the potential after T steps, we have that T = O(d log d) using our choice of δ = c/(d² log d). ∎

###### Theorem 3.2.

The total movement of the Tighten procedure is f_T(d) = O(d log d), assuming that at the start X ∩ L is contained in a unit ball.

###### Proof.

We induct on the number of dimensions d: the base case is when d = 0 or d = 1, in which case the claim is immediate. Hence consider d ≥ 2. Let f_T(d) be the total movement of the algorithm Tighten and f_B(d) be the total movement of the algorithm Bounded. By the reduction in Claim 1.4,

 f_B(d) \le O(f_T(d)).

Next, in each iteration, the diameter of X ∩ (x + S) is at most O(dδ) in ∥·∥ (by the triangle inequality). Therefore, the cost of each iteration is at most

 O(1) + O(d\delta \cdot f_B(d-1)) = O(1 + d\delta \cdot f_T(d-1)),

where the first term O(1) comes from the fact that X is shrinking and hence is always contained in a unit ball. The second term comes from the recursion on a body of diameter O(dδ) in at most d − 1 dimensions. In the second term, we bound f_B(d−1) by O(f_T(d−1)).

Since there are at most O(d log d) iterations from Lemma 3.1,

 f_T(d) \le O(d \log d) \cdot O(1 + d\delta f_T(d-1)) = O(d \log d + d^2 \log d \cdot \delta \cdot f_T(d-1)).

We choose δ = c/(d² log d) for a small enough constant c > 0, and use f_T(d−1) = O((d−1) log(d−1)) from the inductive hypothesis. This gives f_T(d) = O(d log d), which proves the theorem. ∎
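With this choice of δ the coefficient of f_T(d−1) is a small constant, and the recursion stays nearly linear. A quick numeric unrolling (with illustrative constants A = 1 and coefficient 1/2, not the paper's) shows that a recursion of the form f(d) = A·d·log d + f(d−1)/2 remains within a constant factor of d log d:

```python
import math

def f(d, A=1.0):
    """Unroll f(k) = A*k*log(k) + f(k-1)/2 from the base case f(1) = 1."""
    val = 1.0
    for k in range(2, d + 1):
        val = A * k * math.log(k) + 0.5 * val
    return val

ratios = [f(d) / (d * math.log(d)) for d in (10, 100, 1000)]  # stays bounded (~2)
```

Each unrolled term is at most A·d·log d, so the geometric weights sum the ratio to a constant below 2.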

Acknowledgments. We thank Nikhil Bansal, Niv Buchbinder, Guru Guruganesh, and Kirk Pruhs for useful conversations. C.J.A. and A.G. thank Sunny Gakhar for discussions in the initial stages of this work. This work was supported in part by NSF awards CCF-1536002, CCF-1540541, CCF-1617790, and CCF-1740551, and the Indo-US Virtual Networked Joint Center on Algorithms under Uncertainty.

## References

• [ABL13] Lachlan L. H. Andrew, Siddharth Barman, Katrina Ligett, Minghong Lin, Adam Meyerson, Alan Roytman, and Adam Wierman, A tale of two metrics: Simultaneous bounds on competitiveness and regret, COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA, 2013, pp. 741–763.
• [ABN16] Antonios Antoniadis, Neal Barcelo, Michael Nugent, Kirk Pruhs, Kevin Schewior, and Michele Scquizzato, Chasing convex bodies and functions, LATIN 2016: theoretical informatics, Lecture Notes in Comput. Sci., vol. 9644, Springer, Berlin, 2016, pp. 68–81. MR 3492519
• [Bal92] Keith Ball, Ellipsoids of maximal volume in convex bodies, Geom. Dedicata 41 (1992), no. 2, 241–250. MR 1153987
• [BBE17] Nikhil Bansal, Martin Böhm, Marek Eliáš, Grigorios Koumoutsos, and Seeun William Umboh, Nested convex bodies are chaseable, Proceedings of the ACM-SIAM symposium on Discrete algorithms (SODA 2018) (2017).
• [BGK15] Nikhil Bansal, Anupam Gupta, Ravishankar Krishnaswamy, Kirk Pruhs, Kevin Schewior, and Cliff Stein, A 2-competitive algorithm for online convex optimization with switching costs, APPROX, vol. 40, Schloss Dagstuhl., 2015, pp. 96–109. MR 3441958
• [BLS92] Allan Borodin, Nathan Linial, and Michael E. Saks, An optimal on-line algorithm for metrical task system, J. Assoc. Comput. Mach. 39 (1992), no. 4, 745–763. MR 1187210
• [BN07] Niv Buchbinder and Joseph (Seffi) Naor, The design of competitive online algorithms via a primal-dual approach, Found. Trends Theor. Comput. Sci. 3 (2007), no. 2-3, 93–263. MR 2506496 (2010h:68239)
• [FL93] Joel Friedman and Nathan Linial, On convex body chasing, Discrete Comput. Geom. 9 (1993), no. 3, 293–321. MR 1204785
• [Grü60] B. Grünbaum, Partitions of mass-distributions and of convex bodies by hyperplanes, Pacific J. Math. 10 (1960), 1257–1261. MR 0124818
• [KLS95] Ravi Kannan, László Lovász, and Miklós Simonovits, Isoperimetric problems for convex bodies and a localization lemma, Discrete & Computational Geometry 13 (1995), no. 1, 541–559.
• [LLV17] Ilan Lobel, Renato Paes Leme, and Adrian Vladu, Multidimensional binary search for contextual decision-making, Proceedings of the 2017 ACM Conference on Economics and Computation, EC ’17, Cambridge, MA, USA, June 26-30, 2017, 2017, p. 585.
• [LWR12] Minghong Lin, Adam Wierman, Alan Roytman, Adam Meyerson, and Lachlan L. H. Andrew, Online optimization with switching cost, SIGMETRICS Performance Evaluation Review 40 (2012), no. 3, 98–100.
• [MP91] Paolo Manselli and Carlo Pucci, Maximum length of steepest descent curves for quasi-convex functions, Geom. Dedicata 38 (1991), no. 2, 211–227. MR 1104346
• [Sit14] René Sitters, The generalized work function algorithm is competitive for the generalized 2-server problem, SIAM J. Comput. 43 (2014), no. 1, 96–125. MR 3158795