# Oracle-Based Robust Optimization via Online Learning

Aharon Ben-Tal
Technion
abental@ie.technion.ac.il
Elad Hazan
Technion
ehazan@ie.technion.ac.il
Tomer Koren
Technion
tomerk@technion.ac.il
Shie Mannor
Technion
shie@ee.technion.ac.il
###### Abstract

Robust optimization is a common framework in optimization under uncertainty when the problem parameters are not known, but it is known that the parameters belong to some given uncertainty set. In the robust optimization framework the problem solved is a min-max problem where a solution is judged according to its performance on the worst possible realization of the parameters. In many cases, a straightforward solution of the robust optimization problem of a certain type requires solving an optimization problem of a more complicated type, and is in some cases even NP-hard. For example, solving a robust conic quadratic program, such as those arising in robust SVM, with ellipsoidal uncertainty leads in general to a semidefinite program. In this paper we develop a method for approximately solving a robust optimization problem using tools from online convex optimization, where in every stage a standard (non-robust) optimization program is solved. Our algorithms find an approximate robust solution using a number of calls to an oracle that solves the original (non-robust) problem that is inversely proportional to the square of the target accuracy.

## 1 Introduction

The Robust Optimization (RO; see Ben-Tal and Nemirovski (2002); Ben-Tal et al. (2009); Bertsimas et al. (2011)) framework addresses a fundamental problem of many convex optimization problems: slight inaccuracies in the data give rise to significant fluctuations in the solution. While there are different approaches to handling uncertainty in the parameters of an optimization problem, the RO approach chooses a solution that performs best against the worst possible parameter. When the objective function is convex in the optimization variables and concave in the uncertain parameters, and the uncertainty set is convex, the overall optimization problem is convex.

Despite its theoretical and empirical success, a significant hindrance to adopting RO in large-scale problems is the increased computational complexity. In particular, the robust counterpart of an optimization problem is often a more difficult, albeit usually convex, mathematical problem. For example, the robust counterpart of conic quadratic programming with ellipsoidal uncertainty constraints becomes a semidefinite program, for which we currently have significantly slower solvers.

RO has recently gained traction as a tool for analyzing machine learning algorithms and for devising new ones. In a sequence of papers, Xu, Caramanis and Mannor show that several standard machine learning algorithms such as Lasso and norm regularized support vector machines have a RO interpretation (Xu et al., 2009, 2010). Beyond these works, robustness is a desired property for many learning algorithms. Indeed, making standard algorithms robust to outliers or to perturbation in the data has been proposed in several works; see Lanckriet et al. (2003); Bhattacharyya et al. (2004a, b); Shivaswamy et al. (2006); Trafalis and Gilbert (2007). However in these cases, the problem eventually solved is more complicated than the original problem. For example, in Trafalis and Gilbert (2007) the original problem is a standard support vector machine, but when robustifying it to input uncertainty, one has to solve a second-order conic program (in the non-separable case). Another example is Shivaswamy et al. (2006) where the uncertainty is a probability distribution over inputs. In that case, the original SVM becomes a second-order conic program as well.

The following question arises: can we (approximately) solve a robust counterpart of a given optimization problem using only an algorithm for the original optimization formulation? In this paper we answer this question in the affirmative: we give two meta-algorithms that receive as input an oracle to the original mathematical problem and approximate the robust counterpart by invoking the oracle a polynomial number of times. In both approaches, the number of iterations to obtain an approximate robust solution is a function of the approximation guarantee and the complexity of the uncertainty set, and does not directly depend on the dimension of the problem. Our methods differ in the assumptions regarding the uncertainty set and the dependence of the constraints on the uncertainty. The first method allows any concave function of the noise terms but is limited to convex uncertainty sets. The second method allows arbitrary uncertainty sets as long as a “pessimization oracle” (as termed by Mutapcic and Boyd 2009) exists — an oracle that finds the worst-case noise for a given feasible solution. Our methods are formally described as templates, or meta-algorithms, and are general enough to be applied even if the robust counterpart is NP-hard (recall that we are only providing an approximate solution, and thus our algorithms formally constitute a “polynomial-time approximation scheme”, PTAS).

Our approach for achieving efficient oracle-based RO is to reduce the robust formulation to a zero-sum game, which we solve by a primal-dual technique based on tools from online learning. Such primal-dual methods originated from the study of approximation algorithms for linear programs (Plotkin et al., 1995) and were recently proved invaluable in understanding Lagrangian relaxation methods (see e.g. Arora et al. 2012) and in sublinear-time optimization techniques (Clarkson et al., 2012; Hazan et al., 2011; Garber and Hazan, 2011). We show how to apply this methodology to oracle-based RO. Along the way, we contribute some extensions to the existing online learning literature itself, notably giving a new Follow-the-Perturbed-Leader algorithm for regret minimization that works with (additive) approximate linear oracles.

Finally, we demonstrate examples and applications of our methods to various RO formulations including linear, semidefinite and quadratic programs. The latter application builds on recently developed efficient linear-time algorithms for the trust region problem (Hazan and Koren, 2014).

#### Related work.

Robust optimization is by now a field of study by itself, and the reader is referred to Ben-Tal et al. (2009); Bertsimas et al. (2011) for further information and references. The computational bottleneck associated with robust optimization has been addressed in several papers. Calafiore and Campi (2004) propose to sample constraints from the uncertainty set, and show that with enough samples one obtains an “almost-robust” solution with high probability. The main problem with their approach is that the number of samples can become large for a high-dimensional problem.

For certain types of discrete robust optimization problems, Bertsimas and Sim (2003) propose solving the robust version of the problem via $n$ (the dimension) solutions of the original problem. Mutapcic and Boyd (2009) give an iterative cutting-plane procedure for attaining essentially the same goal as us, and demonstrate impressive practical performance. However, the overall running time of their method can be exponential in the dimension.

#### Organization.

The rest of the paper is organized as follows. In Section 2 we present the model and set the notation for the rest of the paper. In Section 3.1 we describe the simpler of our two meta-algorithms: a meta-algorithm for approximately solving RO problems that employs dual subgradient steps, under the assumption that the robust problem is concave with respect to the noise variables. In Section 3.2 we remove the latter concavity assumption and only assume that the problem of finding the worst-case noise assignment can be solved by invoking a “pessimization oracle”. This approach is more general than the subgradient-based method, and we exhibit perhaps our strongest example of solving robust quadratic programs using this technique in Section 4. Section 4 also contains examples of applications of our technique to robust linear and semidefinite programming. We conclude in Section 5.

## 2 Preliminaries

We start this section with the standard formulation of RO. We then recall some basic results from online learning.

### 2.1 Robust Optimization

Consider a convex mathematical program in the following general formulation:

$$\begin{aligned} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x, u_i) \le 0\,, \qquad i=1,\dots,m\,, \\ & x \in \mathcal{D}\,. \end{aligned} \tag{1}$$

Here $f_0, f_1, \dots, f_m$ are convex functions of $x$, $\mathcal{D}$ is a convex set in Euclidean space, and $u_1,\dots,u_m$ are fixed parameter vectors. The robust counterpart of this formulation is given by

$$\begin{aligned} \text{minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x, u_i) \le 0\,, \qquad \forall\, u_i \in \mathcal{U},\;\; i=1,\dots,m\,, \\ & x \in \mathcal{D}\,, \end{aligned} \tag{2}$$

where each parameter vector $u_i$ is constrained to be in a set $\mathcal{U}$ called the uncertainty set. It is without loss of generality to assume that the overall uncertainty set has this specific form of a Cartesian product; see e.g. Ben-Tal and Nemirovski (2002). Here we also assume that the uncertainty set is symmetric (that is, its projection onto each dimension is the same set). This assumption is made only to simplify notation and can be relaxed easily.

The following observation is standard: we can reduce the above formulation to a feasibility problem via a binary search over the optimal value of (1), replacing the objective with the constraint $f_0(x) \le \gamma$, with $\gamma$ being our current guess of the optimal value (of course, assuming the range of feasible values is known a priori). For ease of notation, we rename $f_0$ by shifting it by $\gamma$, and can write the first constraint as simply $f_0(x) \le 0$. With these observations, we can reduce the robust counterpart to the feasibility problem

$$\exists?\; x\in\mathcal{D} \;\;:\;\; f_i(x, u_i) \le 0\,, \qquad \forall\, u_i\in\mathcal{U},\;\; i=1,\dots,m\,. \tag{3}$$

We say that $x\in\mathcal{D}$ is an $\epsilon$-approximate solution to this problem if $x$ meets each constraint up to $\epsilon$, that is, if it satisfies $f_i(x,u_i) \le \epsilon$ for all $u_i\in\mathcal{U}$ ($i=1,\dots,m$).
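As a concrete illustration of the reduction above, the following sketch (function and variable names are ours, for illustration) performs the binary search over the guess $\gamma$, given a hypothetical feasibility oracle that answers whether some $x$ attains objective value at most $\gamma$:

```python
# Hypothetical sketch of reducing "minimize f0(x)" to repeated feasibility
# checks via binary search over the guess gamma. `feasible(gamma)` stands in
# for a solver of problem (3) with the extra constraint f0(x) - gamma <= 0.

def binary_search_min(feasible, lo, hi, tol=1e-6):
    """Estimate the optimal value of f0, assuming the optimum lies in
    [lo, hi] a priori and `feasible` is monotone in gamma."""
    while hi - lo > tol:
        gamma = (lo + hi) / 2.0
        if feasible(gamma):   # a solution with f0(x) <= gamma exists
            hi = gamma
        else:
            lo = gamma
    return hi

# Toy usage: minimize f0(x) = (x - 3)^2 over x in [0, 10]; here
# "feasible(gamma)" holds iff some x in [0, 10] has (x - 3)^2 <= gamma.
value = binary_search_min(lambda g: g >= 0.0, 0.0, 100.0)
print(round(value, 3))  # close to 0, the true optimum
```

Each feasibility query here corresponds to one call to a (robust) feasibility solver of the form (3).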

### 2.2 Online Convex Optimization and Regret minimization

Our derivations below use tools from online learning, namely algorithms for minimizing regret in the general prediction framework of Online Convex Optimization (OCO). (Here we present OCO as the problem of online maximization of concave reward functions rather than online minimization of convex cost functions; while the latter is more common, both formulations are equivalent.) In OCO, the online predictor iteratively produces a decision $x_t$ from a convex decision set $K$. After a decision is generated, a concave reward function $f_t$ is revealed, and the decision maker earns a reward of $f_t(x_t)$. The standard performance metric in online learning is called regret, given by

$$R_T \;=\; \max_{x^*\in K}\sum_{t=1}^T f_t(x^*) \;-\; \sum_{t=1}^T f_t(x_t)\,.$$

The reward function $f_t$ is not known to the decision maker before selecting $x_t$, and it is, in principle, arbitrary and even possibly chosen by an adversary. We henceforth make crucial use of this robustness to adversarial choice of reward functions: the reward functions we shall use will be chosen by a dual optimization problem, thereby directing the entire algorithm towards a correct solution. We refer the reader to (Cesa-Bianchi and Lugosi, 2006; Hazan, 2011; Shalev-Shwartz, 2012) for more details on online learning and online convex optimization.

Two regret-minimization algorithms that we shall use henceforth (at least in spirit) are Online Gradient Descent (Zinkevich, 2003) and Follow the Perturbed Leader (Kalai and Vempala, 2005).

In OGD, the decision maker predicts according to the rule

$$x_{t+1} \;\leftarrow\; \Pi_K\!\big[x_t + \eta\,\nabla f_t(x_t)\big]\,, \tag{4}$$

where $\Pi_K$ is the Euclidean projection operator onto the set $K$. Hence, the OGD algorithm takes a projected step in the direction of the gradient of the current reward function. Even though the next reward function can be arbitrary, it can be shown that this algorithm achieves sublinear regret.

###### Lemma 1 (Zinkevich 2003).

For any sequence of reward functions $f_1,\dots,f_T$, let $x_1,\dots,x_T$ be the sequence generated by (4). Then, setting $\eta = D/(G\sqrt{T})$, we obtain

$$\max_{x^*\in K}\sum_{t=1}^T f_t(x^*) \;-\; \sum_{t=1}^T f_t(x_t) \;\le\; GD\sqrt{T}\,,$$

where $G$ is an upper bound on the norm of the gradients of the reward functions, and $D$ is an upper bound on the diameter of $K$.
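As an illustration, here is a minimal OGD loop for a one-dimensional decision set (a toy sketch under our own naming; projection onto an interval is simple clipping):

```python
import math

# A minimal sketch of Online Gradient Descent (4) for concave rewards,
# maximizing over the interval K = [-1, 1]. Step size eta = D / (G * sqrt(T)),
# matching Lemma 1; all names and constants here are illustrative.

def ogd(grads, eta, lo=-1.0, hi=1.0):
    """grads[t](x) returns the gradient of reward f_t at the point x."""
    x, xs = 0.0, []
    for g in grads:
        xs.append(x)
        x = min(hi, max(lo, x + eta * g(x)))  # projected ascent step
    return xs

# Rewards f_t(x) = -(x - 1)^2: gradient is -2(x - 1), maximizer x* = 1.
T = 1000
D, G = 2.0, 4.0   # diameter of [-1, 1]; gradient bound on that interval
xs = ogd([lambda x: -2.0 * (x - 1.0)] * T, D / (G * math.sqrt(T)))
print(round(xs[-1], 2))  # the iterates approach the maximizer 1
```

Of course, the regret guarantee of Lemma 1 holds even when the sequence of rewards is adversarial rather than fixed as in this toy run.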

The FPL algorithm works in a similar setting as OGD, but with two crucial differences:

1. The set $K$ does not need to be convex. This is a significant advantage of the FPL approach, which we make use of in our application to robust quadratic programming (see Section 4.2).

2. FPL assumes that the reward functions are linear, i.e. $f_t(x) = f_t\cdot x$ with $f_t\in\mathbb{R}^n$.

Kalai and Vempala (2005) suggest the following method for online decision making that relies on a linear optimization procedure $M$ over the set $K$, computing $M(f) \in \arg\max_{x\in K} f\cdot x$ for all $f$. FPL chooses $x_{t+1}$ by first drawing a perturbation $p_t$ uniformly at random from the cube $[0,1/\eta]^n$, and computing:

$$x_{t+1} \;=\; M\Big(\sum_{\tau=1}^{t} f_\tau + p_t\Big)\,. \tag{5}$$

The regret of this algorithm is bounded as follows.

###### Lemma 2 (Kalai and Vempala 2005).

For any sequence of reward vectors $f_1,\dots,f_T$, let $x_1,\dots,x_T$ be the sequence of decisions generated by (5) with parameter $\eta = \sqrt{D/(RAT)}$. Then

$$\max_{x^*\in K}\sum_{t=1}^T f_t\cdot x^* \;-\; \mathbf{E}\Big[\sum_{t=1}^T f_t\cdot x_t\Big] \;\le\; 2\sqrt{DRAT}\,,$$

where $R$ is an upper bound on the magnitude of the rewards, $A$ is an upper bound on the $\ell_1$-norm of the reward vectors, and $D$ is an upper bound on the $\ell_1$-diameter of $K$.
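A toy instance of the FPL rule (5) over a finite (hence non-convex) decision set; all names, constants and the two-point set here are our own illustration, with an exact linear maximizer playing the role of $M$:

```python
import random

# Follow the Perturbed Leader on a finite decision set K in R^2, with exact
# linear optimization M implemented by brute force over K (a toy sketch).

def fpl(K, rewards, eta, seed=0):
    rng = random.Random(seed)
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    cum, xs = [0.0, 0.0], []
    for f in rewards:
        # perturbation drawn uniformly from the cube [0, 1/eta]^2
        p = [rng.uniform(0, 1 / eta) for _ in range(2)]
        # M: exact linear maximization over the finite set K
        xs.append(max(K, key=lambda x: dot(x, (cum[0] + p[0], cum[1] + p[1]))))
        cum[0] += f[0]; cum[1] += f[1]
    return xs

K = [(1.0, 0.0), (0.0, 1.0)]    # two "corner" decisions
rewards = [(1.0, 0.2)] * 200    # the first coordinate is consistently better
xs = fpl(K, rewards, eta=0.1)
print(xs[-1])  # the stable leader (1.0, 0.0) wins eventually
```

The random perturbation is what keeps the predictions stable between rounds, which is the key to FPL's regret bound.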

For our purposes, and in order to be able to work with an approximate optimization oracle to the original mathematical program, we need to adapt the original FPL algorithm to work with noisy oracles. This adaptation is made precise in Section 3.2.

## 3 Oracle-Based Robust Optimization

In this section we formally state and prove our first (and simpler) result: an oracle-based approximate robust optimization algorithm that is based on subgradient descent.

Throughout the section we assume the availability of an optimization oracle for the original optimization problem of the form given in Figure 1, which we denote by $\mathcal{O}_\epsilon$. Such an optimization oracle approximately solves formulation (3) for any fixed noise vectors $u_1,\dots,u_m$, in the sense that it either returns an $\epsilon$-feasible solution (one that meets each constraint up to $\epsilon$) or correctly declares that the problem is infeasible.

### 3.1 Dual-Subgradient Meta-Algorithm

In this section we assume that for all $i\in[m]$:

1. For all $x\in\mathcal{D}$, the function $f_i(x,u_i)$ is concave in $u_i$;

2. The set $\mathcal{U}$ is convex.

Under these assumptions, the robust formulation is in fact a convex-concave saddle-point problem that can be solved in polynomial time using interior-point methods. However, recall that our goal is to solve the robust problem by invoking a solver of the original (non-robust) optimization problem.

In the setting of this section, we shall make use of the following definitions. Let $D$ be an upper bound on the diameter of $\mathcal{U}$, that is, $\max_{u,u'\in\mathcal{U}}\|u-u'\|_2 \le D$. Let $G$ be a constant such that $\|\nabla_u f_i(x,u)\|_2 \le G$ for all $x\in\mathcal{D}$ and $u\in\mathcal{U}$.

With the above assumptions and definitions, we can now present an oracle-based robust optimization algorithm, given in Algorithm 1. The algorithm consists of primal-dual iterations, where the dual part of the algorithm updates the noise terms according to the current primal solution via a low-regret update. For this algorithm, we prove:
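The primal-dual loop just described can be sketched in a few lines of Python. This is a schematic toy, not the paper's Algorithm 1 verbatim: the single robust constraint, the closed-form `oracle`, and all numeric choices below are assumptions made for illustration only.

```python
import math

# Schematic sketch of the dual-subgradient meta-algorithm on a toy robust
# feasibility problem with a single constraint
#     f(x, u) = (1 + u) * x - 1 <= 0   for all u in U = [-0.5, 0.5],
# over the domain [0, 2]. The hypothetical `oracle` below returns, for a
# fixed noise u, an exactly feasible x (here available in closed form).

def oracle(u):
    return 1.0 / (1.0 + u)           # satisfies (1 + u) * x - 1 <= 0

def algorithm1(T):
    diam, G = 1.0, 2.0               # diameter of U; bound on |df/du| = |x|
    eta = diam / (G * math.sqrt(T))  # OGD step size, as in Lemma 1
    u, xbar = 0.0, 0.0
    for _ in range(T):
        x = oracle(u)                # primal step: call the non-robust solver
        xbar += x / T                # accumulate the average solution
        grad = x                     # df/du = x for this constraint
        u = min(0.5, max(-0.5, u + eta * grad))  # projected ascent on u
    return xbar

xbar = algorithm1(500)
worst = max((1 + u) * xbar - 1 for u in (-0.5, 0.5))  # worst-case violation
print(worst < 0.1)  # True: xbar is approximately robust feasible
```

Note how the dual ascent drives the noise toward its worst-case value ($u = 0.5$), so the averaged primal iterate ends up nearly robust feasible even though each oracle call only sees a fixed noise.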

###### Theorem 3.

Algorithm 1 either returns a $2\epsilon$-approximate solution to the robust program (3) or correctly concludes that it is infeasible. The algorithm terminates after at most $T = \lceil G^2D^2/\epsilon^2\rceil$ calls to the oracle $\mathcal{O}_\epsilon$.

###### Proof.

First, suppose that the algorithm returns “infeasible”. By the definition of the oracle $\mathcal{O}_\epsilon$, this happens only if for some $t$, there does not exist $x\in\mathcal{D}$ such that

$$f_i(x, u_i^t) \;\le\; 0\,, \qquad i=1,\dots,m\,.$$

This implies that the robust counterpart (3) cannot be feasible, as there exists an admissible perturbation that makes the original problem infeasible.

Next, suppose that a solution $\bar{x} = \frac{1}{T}\sum_{t=1}^T x_t$ is returned. The premise of the oracle implies that $f_i(x_t, u_i^t) \le \epsilon$ for all $i$ and $t$ (otherwise, the algorithm would have returned “infeasible”), whence

$$\forall\, i\in[m]\,, \qquad \frac{1}{T}\sum_{t=1}^T f_i(x_t, u_i^t) \;\le\; \epsilon\,. \tag{6}$$

On the other hand, from the regret guarantee of the Online Gradient Descent algorithm we have

$$\forall\, i\in[m]\,, \qquad \max_{u_i\in\mathcal{U}} \frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i) \;-\; \frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i^t) \;\le\; \frac{GD}{\sqrt{T}} \;\le\; \epsilon\,. \tag{7}$$

Combining (6) and (7), we conclude that for all $i\in[m]$,

$$\epsilon \;\ge\; \frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i^t) \;\ge\; \max_{u_i\in\mathcal{U}}\frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i) - \epsilon \;\ge\; \max_{u_i\in\mathcal{U}} f_i(\bar{x},u_i) - \epsilon\,,$$

where the final inequality follows from the convexity of the functions $f_i$ with respect to $x$ (via Jensen's inequality applied to the average $\bar{x}$). Hence, for every $i\in[m]$ we have

$$f_i(\bar{x}, u_i) \;\le\; 2\epsilon\,, \qquad \forall\, u_i\in\mathcal{U}\,,$$

implying that $\bar{x}$ is a $2\epsilon$-approximate robust solution. ∎

### 3.2 Dual-Perturbation Meta-Algorithm

We now give our more general and intricate oracle-based approximation algorithm for RO. In contrast to the previous simple subgradient-based method, in this section we do not need the uncertainty structure to be convex. Instead, in addition to an oracle that solves the original mathematical program, we also assume the existence of an efficient “pessimization oracle” (as termed by Mutapcic and Boyd 2009), namely an oracle that approximates the worst-case noise for any given feasible solution $x$. Formally, we assume that for all $i\in[m]$ the following hold:

1. For all $x\in\mathcal{D}$, the function $f_i(x,u_i)$ is linear in $u_i$, i.e. it can be written as $f_i(x,u_i) = g_i(x)\cdot u_i + h_i(x)$ for some functions $g_i$ and $h_i$;

2. There exists a linear optimization procedure $M_\epsilon$ that, given a vector $f$, computes a vector $M_\epsilon(f)\in\mathcal{U}$ such that $f\cdot M_\epsilon(f) \ge \max_{u\in\mathcal{U}} f\cdot u - \epsilon$.

On the surface, the linearity assumption seems very strong. However, note that we do not assume the convexity of the set $\mathcal{U}$. This means that the dual subproblem (which amounts to finding the worst-case noise for a given $x$) is not necessarily a convex program. Nevertheless, our approach can still approximate the robust formulation as long as a procedure $M_\epsilon$ is available.

In the rest of the section we use the following notation. Let $D$ be an upper bound on the $\ell_1$-diameter of $\mathcal{U}$, that is, $\max_{u,u'\in\mathcal{U}}\|u-u'\|_1 \le D$. Let $F$ and $G$ be constants such that $|g_i(x)\cdot u| \le F$ and $\|g_i(x)\|_1 \le G$ for all $x\in\mathcal{D}$ and $u\in\mathcal{U}$.

We can now present our second oracle-based meta-algorithm, described in Algorithm 2. Similarly to our dual-subgradient method, the algorithm is based on primal-dual iterations. However, in the dual part we now rely on the approximate pessimization oracle for updating the noise terms. This algorithm provides the following convergence guarantee.
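As with Algorithm 1, the structure of the method can be sketched schematically. The toy constraint, the closed-form `oracle`, the `pessimize` routine and the parameter choices below are all our own illustrative assumptions; the point is only to show the FPL-style dual update replacing the gradient step.

```python
import math, random

# Schematic sketch of the dual-perturbation meta-algorithm on the toy
# constraint f(x, u) = x * u + (x - 1), i.e. g(x) = x and h(x) = x - 1,
# with noise set U = [-0.5, 0.5].

def pessimize(w):
    # pessimization oracle M: maximizes w * u over u in [-0.5, 0.5]
    return 0.5 if w >= 0 else -0.5

def oracle(u):
    return 1.0 / (1.0 + u)       # feasible x for the fixed noise u

def algorithm2(T, seed=0):
    rng = random.Random(seed)
    eta = 1.0 / math.sqrt(T)     # FPL parameter (schematic choice)
    cum, u, xbar = 0.0, 0.0, 0.0
    for _ in range(T):
        x = oracle(u)            # primal step: call the non-robust solver
        xbar += x / T
        cum += x                 # cumulative dual reward g(x_1) + ... + g(x_t)
        p = rng.uniform(0, 1.0 / eta)
        u = pessimize(cum + p)   # FPL step (13) via the pessimization oracle
    return xbar

xbar = algorithm2(500)
print(max((1 + u) * xbar - 1 for u in (-0.5, 0.5)) < 0.1)
```

Unlike Algorithm 1, nothing here requires the noise set to be convex: `pessimize` may optimize over an arbitrary (e.g. discrete) set.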

###### Theorem 4.

With probability at least $1-\delta$, Algorithm 2 either returns a $4\epsilon$-approximate solution to the robust program (3) or correctly concludes that it is infeasible. The algorithm terminates after at most $T = O\big((DFG + F^2\log(m/\delta))/\epsilon^2\big)$ calls to the oracle $\mathcal{O}_\epsilon$.

We begin by analyzing the dual part of the algorithm, namely, the rule by which the noise variables $u_i^t$ are updated. While this rule is essentially an FPL-like update, we cannot apply Lemma 2 directly, for two crucial reasons. First, the update uses an approximate linear optimization procedure instead of an exact one as required by FPL. Second, the reward vectors observed by the dual algorithm are random variables that depend on its internal randomization (i.e., on the random perturbations $p_t$). Nevertheless, by analyzing a noisy version of the FPL algorithm (in Section 3.3 below) we can prove the following bound.

###### Lemma 5.

For each $i\in[m]$, with probability at least $1-\delta$ we have that

$$\max_{u_i\in\mathcal{U}}\sum_{t=1}^T g_i(x_t)\cdot u_i \;-\; \sum_{t=1}^T g_i(x_t)\cdot u_i^t \;\le\; 2\sqrt{DFGT} + 2F\sqrt{T\log\tfrac{1}{\delta}} + 2\epsilon T\,.$$
###### Proof.

Fix some $i\in[m]$. Note that the distribution from which the dual algorithm draws $u_i^t$ is a deterministic function of the primal variables $x_1,\dots,x_t$. Hence, we can apply Lemma 4.1 of Cesa-Bianchi and Lugosi (2006), together with the regret bound of Theorem 6 (see Section 3.3 below), and obtain that

$$\max_{u_i\in\mathcal{U}}\sum_{t=1}^T g_i(x_t)\cdot u_i \;-\; \sum_{t=1}^T \mathbf{E}_t\big[g_i(x_t)\cdot u_i^t\big] \;\le\; 2\sqrt{DFGT} + 2\epsilon T\,, \tag{8}$$

where $\mathbf{E}_t$ denotes the expectation conditioned on $x_1,\dots,x_t$. Next, note that the random variables $Z_t = \mathbf{E}_t[g_i(x_t)\cdot u_i^t] - g_i(x_t)\cdot u_i^t$ for $t=1,\dots,T$ form a martingale difference sequence with respect to $x_1,\dots,x_T$, and

$$|Z_t| \;\le\; \big|g_i(x_t)\cdot u_i^t\big| + \mathbf{E}_t\big[\big|g_i(x_t)\cdot u_i^t\big|\big] \;\le\; 2F\,.$$

Hence, by Azuma's inequality (see e.g. Lemma A.7 in Cesa-Bianchi and Lugosi 2006), with probability at least $1-\delta$,

$$\sum_{t=1}^T \mathbf{E}_t\big[g_i(x_t)\cdot u_i^t\big] \;-\; \sum_{t=1}^T g_i(x_t)\cdot u_i^t \;\le\; 2F\sqrt{T\log\tfrac{1}{\delta}}\,. \tag{9}$$

Summing inequalities (8) and (9), we obtain the lemma. ∎

Equipped with the above lemma, we can now prove Theorem 4.

###### Proof of Theorem 4.

First, suppose that the algorithm returns “infeasible”. By the definition of the oracle $\mathcal{O}_\epsilon$, this happens only if for some $t$, there does not exist $x\in\mathcal{D}$ such that

$$f_i(x, u_i^t) \;\le\; 0\,, \qquad i=1,\dots,m\,.$$

This implies that the robust counterpart (3) cannot be feasible.

Next, suppose that a solution $\bar{x} = \frac{1}{T}\sum_{t=1}^T x_t$ is returned (note that $\bar{x}$ must lie in the set $\mathcal{D}$, as we assume that $\mathcal{D}$ is convex). This ensures that $f_i(x_t, u_i^t) \le \epsilon$ for all $i$ and $t$ (otherwise, the algorithm would have returned “infeasible”), whence

$$\forall\, i\in[m]\,, \qquad \frac{1}{T}\sum_{t=1}^T f_i(x_t, u_i^t) \;\le\; \epsilon\,. \tag{10}$$

On the other hand, Lemma 5 (applied with confidence parameter $\delta/m$) implies that for each $i\in[m]$ we have

$$\max_{u_i\in\mathcal{U}}\frac{1}{T}\sum_{t=1}^T g_i(x_t)\cdot u_i \;-\; \frac{1}{T}\sum_{t=1}^T g_i(x_t)\cdot u_i^t \;\le\; 2\sqrt{\frac{DFG}{T}} + 2F\sqrt{\frac{\log(m/\delta)}{T}} + 2\epsilon$$

with probability at least $1-\delta/m$. Recalling that $f_i(x,u_i) = g_i(x)\cdot u_i + h_i(x)$ for all $i$ and applying a union bound, we obtain that with probability at least $1-\delta$,

$$\forall\, i\in[m]\,,\qquad \max_{u_i\in\mathcal{U}}\frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i) \;-\; \frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i^t) \;\le\; 2\sqrt{\frac{DFG}{T}} + 2F\sqrt{\frac{\log(m/\delta)}{T}} + 2\epsilon\,. \tag{11}$$

Using our choice of $T$ now gives that with probability at least $1-\delta$,

$$\forall\, i\in[m]\,,\qquad \max_{u_i\in\mathcal{U}}\frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i) \;-\; \frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i^t) \;\le\; 3\epsilon\,. \tag{12}$$

Combining (10) and (12), we conclude that with probability at least $1-\delta$, for all $i\in[m]$,

$$\epsilon \;\ge\; \frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i^t) \;\ge\; \max_{u_i\in\mathcal{U}}\frac{1}{T}\sum_{t=1}^T f_i(x_t,u_i) - 3\epsilon \;\ge\; \max_{u_i\in\mathcal{U}} f_i(\bar{x},u_i) - 3\epsilon\,,$$

where the final inequality follows from the convexity of the functions $f_i$ with respect to $x$. Hence, with probability at least $1-\delta$, for every $i\in[m]$ we have

$$f_i(\bar{x}, u_i) \;\le\; 4\epsilon\,, \qquad \forall\, u_i\in\mathcal{U}\,,$$

implying that $\bar{x}$ is a $4\epsilon$-approximate robust solution. ∎

### 3.3 FPL with Approximate Oracles

As mentioned above, our analysis requires a noisy version of the FPL algorithm, namely a variant capable of using an approximate linear optimization procedure over the decision domain rather than an exact one. Here we analyze such a variant and prove Theorem 6, which was used in the proof of Lemma 5 above.

Assume we have a procedure $M_\epsilon$ for $\epsilon$-approximating linear programs over a (not necessarily convex) domain $K$, that is, for all $f$ the output of $M_\epsilon$ satisfies

$$f\cdot M_\epsilon(f) \;\ge\; \max_{x\in K} f\cdot x \;-\; \epsilon$$

for some constant $\epsilon > 0$. We analyze the following version of the FPL algorithm: at round $t$, choose $x_{t+1}$ by first drawing a perturbation $p_t$ uniformly at random from the cube $[0,1/\eta]^n$, and computing:

$$x_{t+1} \;=\; M_\epsilon\Big(\sum_{\tau=1}^{t} f_\tau + p_t\Big)\,. \tag{13}$$

We show that the error introduced by the noisy optimization procedure does not harm the regret too much. Formally, we prove:

###### Theorem 6.

For any sequence of reward vectors $f_1,\dots,f_T$, let $x_1,\dots,x_T$ be the sequence of decisions produced by (13) with parameter $\eta = \sqrt{D/(RAT)}$. Then

$$\max_{x^*\in K}\sum_{t=1}^T f_t\cdot x^* \;-\; \mathbf{E}\Big[\sum_{t=1}^T f_t\cdot x_t\Big] \;\le\; 2\sqrt{DRAT} + 2\epsilon T\,,$$

where $R$ is an upper bound on the magnitude of the rewards, $A$ is an upper bound on the $\ell_1$-norm of the reward vectors, and $D$ is an upper bound on the $\ell_1$-diameter of $K$.

Throughout this section we use the notation $f_{1:t}$ as shorthand for the sum $\sum_{\tau=1}^t f_\tau$. Following the analysis of Kalai and Vempala (2005), we first prove that being the (approximate) leader yields approximately zero regret.

###### Lemma 7.

For any sequence of vectors $f_1,\dots,f_T$,

$$\sum_{t=1}^T M_\epsilon(f_{1:t})\cdot f_t \;\ge\; M_\epsilon(f_{1:T})\cdot f_{1:T} - \epsilon T\,.$$
###### Proof.

The proof is by induction on $T$. For $T=1$ the claim is trivial. Next, assuming correctness for some value of $T$, we have

$$\begin{aligned} \sum_{t=1}^{T+1} M_\epsilon(f_{1:t})\cdot f_t \;&\ge\; M_\epsilon(f_{1:T})\cdot f_{1:T} + M_\epsilon(f_{1:T+1})\cdot f_{T+1} - \epsilon T \\ &\ge\; M_\epsilon(f_{1:T+1})\cdot f_{1:T} - \epsilon + M_\epsilon(f_{1:T+1})\cdot f_{T+1} - \epsilon T \\ &=\; M_\epsilon(f_{1:T+1})\cdot f_{1:T+1} - \epsilon(T+1)\,, \end{aligned}$$

which completes the proof. ∎
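Lemma 7 can be sanity-checked numerically. The sketch below (setup and names are ours, for illustration) builds a deliberately $\epsilon$-approximate maximizer `M_eps` over a random finite set, by returning the *worst* point within $\epsilon$ of optimal, and verifies the be-the-approximate-leader inequality:

```python
import random

# Numerical check of Lemma 7 on a finite set K in R^2, using a deliberately
# epsilon-approximate linear maximizer M_eps.

def check_lemma7(eps=0.1, T=50, seed=1):
    rng = random.Random(seed)
    K = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(10)]
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

    def M_eps(f):
        best = max(dot(f, x) for x in K)
        # any point within eps of optimal is a legal answer; pick the worst
        cands = [x for x in K if dot(f, x) >= best - eps]
        return min(cands, key=lambda x: dot(f, x))

    fs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(T)]
    cum, lhs = [0.0, 0.0], 0.0
    for f in fs:
        cum[0] += f[0]; cum[1] += f[1]
        lhs += dot(M_eps(cum), f)          # sum_t M_eps(f_{1:t}) . f_t
    rhs = dot(M_eps(cum), cum) - eps * T   # M_eps(f_{1:T}) . f_{1:T} - eps*T
    return lhs >= rhs

print(check_lemma7())  # the be-the-approximate-leader bound holds
```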

Next, we bound the regret of a hypothetical algorithm that on round $t$ uses the unobserved function $f_t$ for predicting $x_t$.

###### Lemma 8.

For any vectors $f_1,\dots,f_T$ and $p\in\mathbb{R}^n$ with $\|p\|_\infty \le 1/\eta$, it holds that

$$\sum_{t=1}^T M_\epsilon(f_{1:t}+p)\cdot f_t \;\ge\; \max_{x\in K} f_{1:T}\cdot x - \frac{D}{\eta} - 2\epsilon T\,,$$

where $D$ is an upper bound on the $\ell_1$-diameter of $K$.

###### Proof.

Imagine a fictitious round $t=0$ in which a reward vector $f_0 = p$ is observed. Then, using Lemma 7 we can write

$$\sum_{t=1}^T M_\epsilon(f_{1:t}+p)\cdot f_t \;=\; \sum_{t=0}^T M_\epsilon(f_{0:t})\cdot f_t - M_\epsilon(f_0)\cdot f_0 \;\ge\; M_\epsilon(f_{0:T})\cdot f_{0:T} - M_\epsilon(f_0)\cdot f_0 - \epsilon T\,.$$

Using the guarantee of $M_\epsilon$, we can bound the first term on the right-hand side as

$$\begin{aligned} M_\epsilon(f_{0:T})\cdot f_{0:T} \;&\ge\; M_\epsilon(f_{1:T})\cdot f_{0:T} - \epsilon \\ &=\; M_\epsilon(f_{1:T})\cdot f_{1:T} + M_\epsilon(f_{1:T})\cdot f_0 - \epsilon \\ &\ge\; \max_{x\in K} f_{1:T}\cdot x + M_\epsilon(f_{1:T})\cdot f_0 - 2\epsilon\,. \end{aligned}$$

Putting things together, for $f_0 = p$ we have

$$\begin{aligned} \sum_{t=1}^T M_\epsilon(f_{1:t}+p)\cdot f_t \;&\ge\; \max_{x\in K} f_{1:T}\cdot x + \big(M_\epsilon(f_{1:T}) - M_\epsilon(f_0)\big)\cdot f_0 - (T+2)\epsilon \\ &\ge\; \max_{x\in K} f_{1:T}\cdot x - \frac{D}{\eta} - 2\epsilon T\,, \end{aligned}$$

where the final inequality follows from Hölder's inequality, since $\|f_0\|_\infty = \|p\|_\infty \le 1/\eta$ and $\|M_\epsilon(f_{1:T}) - M_\epsilon(f_0)\|_1 \le D$ (and since $(T+2)\epsilon \le 2\epsilon T$ for $T\ge 2$). ∎

Our final lemma bounds the expected difference in quality between the prediction made by the hypothetical algorithm and the one made by the approximate FPL algorithm.

###### Lemma 9.

For any sequence of reward vectors $f_1,\dots,f_T$ and $p$ distributed uniformly in the cube $[0,1/\eta]^n$, we have

$$\mathbf{E}\big[M_\epsilon(f_{1:t-1}+p)\cdot f_t\big] \;-\; \mathbf{E}\big[M_\epsilon(f_{1:t}+p)\cdot f_t\big] \;\ge\; -\eta RA\,,$$

where $R$ is an upper bound on $|f_t\cdot x|$ and $A$ is an upper bound on $\|f_t\|_1$.

###### Proof.

Lemma 3.2 in Kalai and Vempala (2005) shows that the cubes $f_{1:t-1} + [0,1/\eta]^n$ and $f_{1:t} + [0,1/\eta]^n$ overlap in at least a $(1-\eta\|f_t\|_1)$-fraction of their volume. On this intersection, the random variables $M_\epsilon(f_{1:t-1}+p)\cdot f_t$ and $M_\epsilon(f_{1:t}+p)\cdot f_t$ are identically distributed. Otherwise, they can differ by at most $R$. This gives the claim. ∎

We can now prove our regret bound.

###### Proof of Theorem 6.

Since we are bounding the expected regret, we can simply assume that $p_t = p$ for all $t$, with $p$ uniformly distributed in the cube $[0,1/\eta]^n$. Combining the above lemmas, we see that

$$\begin{aligned} \mathbf{E}\Big[\sum_{t=1}^T M_\epsilon(f_{1:t-1}+p_t)\cdot f_t\Big] \;&=\; \mathbf{E}\Big[\sum_{t=1}^T M_\epsilon(f_{1:t-1}+p)\cdot f_t\Big] \\ &\ge\; \mathbf{E}\Big[\sum_{t=1}^T M_\epsilon(f_{1:t}+p)\cdot f_t\Big] - \eta RAT \\ &\ge\; \max_{x\in K} f_{1:T}\cdot x - \frac{D}{\eta} - \eta RAT - 2\epsilon T\,. \end{aligned}$$

The claimed regret bound now follows from our choice of $\eta = \sqrt{D/(RAT)}$. ∎

## 4 Examples and Applications

In this section we provide several examples of the applicability of our results. All the problems we consider are stated as feasibility problems. For concreteness, we focus on ellipsoidal uncertainty sets, the most common model of data uncertainty.

### 4.1 Robust Linear Programming

A linear program (LP) in the standard form is given by

$$\exists?\; x\in\mathbb{R}^n \;\;\text{s.t.}\;\; a_i^\top x - b_i \;\le\; 0\,,\qquad i=1,\dots,m\,.$$

The robust counterpart of this optimization problem is a second-order conic program (SOCP) that can be solved efficiently, see e.g. Ben-Tal et al. (2009); Bertsimas et al. (2011). In many cases of interest there exist highly efficient solvers for the original LP problem, as in the important case of network flow problems where the special combinatorial structure allows for algorithms that are much more efficient than generic LP solvers. However, this combinatorial structure is lost for its corresponding robust network flow problem. Hence, solving the robust problem using an oracle-based approach might be favorable in these cases. For the same reason, our technique is relevant even in the case of polyhedral uncertainty, where the robust counterpart remains an LP but possibly without the special structure of the original formulation.

In the discussion below, we assume that the feasible domain of the LP is inscribed in the Euclidean unit ball (this can be ensured via standard scaling techniques). Notice this also implies that the feasible domain of the corresponding robust formulation is inscribed in the same ball.

A robust linear program with ellipsoidal noise is given by:

$$\exists?\; x\in\mathbb{R}^n \;\;\text{s.t.}\;\; (a_i + Pu_i)^\top x - b_i \;\le\; 0\,,\qquad \forall\, u_i\in\mathcal{U},\;\; i=1,\dots,m\,, \tag{14}$$

where $P\in\mathbb{R}^{n\times K}$ is a matrix controlling the shape of the ellipsoidal uncertainty, $a_1,\dots,a_m$ are the nominal parameter vectors, and $\mathcal{U}$ is the $K$-dimensional Euclidean unit ball.

The robust linear program (14) is amenable to our OGD-based meta-algorithm (Algorithm 1), as the constraints are linear with respect to the noise terms $u_i$. In this case we have $\nabla_{u_i} f_i(x,u_i) = P^\top x$, so that in each iteration of the algorithm, the update of the noise variables takes the simple form

$$u_i^t \;\leftarrow\; \frac{u_i^{t-1} + \eta P^\top x_t}{\max\big\{\|u_i^{t-1} + \eta P^\top x_t\|_2,\; 1\big\}}\,.$$
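The update above can be transcribed directly. This is a pure-Python sketch under our own naming conventions (`lp_noise_update`, with $P$ represented as a list of rows): a gradient step in the direction $P^\top x$, followed by projection onto the Euclidean unit ball.

```python
import math

# Noise update for the robust LP: gradient ascent step in direction P^T x,
# then Euclidean projection onto the unit ball (rescale only if norm > 1).

def lp_noise_update(u, P, x, eta):
    K = len(P[0])
    # compute the gradient g = P^T x  (P has n rows of length K)
    g = [sum(P[i][k] * x[i] for i in range(len(x))) for k in range(K)]
    v = [u[k] + eta * g[k] for k in range(K)]   # gradient step
    norm = math.sqrt(sum(c * c for c in v))
    scale = max(norm, 1.0)                       # project onto the unit ball
    return [c / scale for c in v]

# Toy usage with P the 2x2 identity: the step lands at (3, 4), which the
# projection rescales onto the unit ball.
u = lp_noise_update([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [3.0, 4.0], eta=1.0)
print(u)  # [0.6, 0.8]
```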

Specializing Theorem 3 to the case of robust LPs, we obtain the following.

###### Corollary 10.

Algorithm 1 returns an $\epsilon$-approximate robust solution to (14) after at most $O(\sigma^2/\epsilon^2)$ calls to the LP oracle, where $\sigma = \|P\|_F$ bounds the maximal magnitude of the noise.

###### Proof.

Note that for all $x$ with $\|x\|_2 \le 1$ and $u\in\mathcal{U}$,

$$\|\nabla_u f_i(x,u)\|_2^2 \;=\; x^\top PP^\top x \;\le\; \|P\|_F^2\cdot\|x\|_2^2 \;\le\; \sigma^2\,.$$

Setting $G = \sigma$ and $D = 2$ in Theorem 3, we obtain the statement. ∎

#### Dual-Perturbation Algorithm.

Since the constraints of the robust LP are linear in the uncertainties $u_i$, we can also apply our FPL-based meta-algorithm to the problem (14). Using the notations of Section 3.2, we have $g_i(x) = P^\top x$. Hence, the computation of the noise variables can be done in closed form, as follows:

$$u_i^t \;\leftarrow\; \arg\max_{\|u\|_2\le 1}\; u^\top\Big(P^\top\sum_{\tau=1}^{t-1}x_\tau + p_t\Big) \;=\; \frac{P^\top\sum_{\tau=1}^{t-1}x_\tau + p_t}{\big\|P^\top\sum_{\tau=1}^{t-1}x_\tau + p_t\big\|_2}\,.$$

In this case, Theorem 4 implies:

###### Corollary 11.

With high probability, Algorithm 2 returns an $\epsilon$-approximate robust solution to (14) after at most $O\big(\sigma^2(K + \log(m/\delta))/\epsilon^2\big)$ calls to the LP oracle, where $\sigma = \|P\|_F$ bounds the maximal magnitude of the noise.

###### Proof.

Using the notations of Section 3.2 with $g_i(x) = P^\top x$, we have

$$\begin{aligned} D \;&=\; \max_{u,v\in\mathcal{U}}\|u-v\|_1 \;\le\; 2\sqrt{K}\,, \\ G \;&=\; \max_{\|x\|_2\le 1}\|P^\top x\|_1 \;\le\; \sqrt{K}\max_{\|x\|_2\le 1}\|P^\top x\|_2 \;\le\; \sqrt{K}\sigma\,, \\ F \;&=\; \max_{\|x\|_2,\|u\|_2\le 1}|x^\top Pu| \;\le\; \|x\|_2\cdot\|P\|_F\cdot\|u\|_2 \;\le\; \sigma\,. \end{aligned}$$

Making the substitutions into the guarantees in Theorem 4 completes the proof. ∎

We see that the asymptotic performance of Algorithm 2 is a factor $K$ worse than that of Algorithm 1 in the case of robust LP problems.

### 4.2 Robust Quadratic Programming

A (convex) quadratically constrained program in feasibility form is given by

$$\exists?\; x\in\mathbb{R}^n \;\;\text{s.t.}\;\; \|A_ix\|_2^2 - b_i^\top x - c_i \;\le\; 0\,,\qquad i=1,\dots,m\,,$$

with matrices $A_i$, vectors $b_i$, and scalars $c_i$. As in the case of LPs, we assume that the feasible domain of the above program is inscribed in the Euclidean unit ball. The robust counterpart of this optimization problem is a semidefinite program (Ben-Tal et al., 2009; Bertsimas et al., 2011). Current state-of-the-art QP solvers can handle QPs two to three orders of magnitude larger than the SDPs that SDP solvers can handle, motivating our results. Indeed, our approach avoids reducing the robust program to an SDP and approximates it using a QP solver.

A robust QP with ellipsoidal uncertainties is given by (for simplicity, the uncertainties we consider here are only in the matrices $A_i$ and not in the vectors $b_i$; in a similar, albeit more technical, way we can also analyze our algorithm with general ellipsoidal uncertainties):

$$\exists?\; x\in\mathbb{R}^n \;\;\text{s.t.}\;\; \Big\|\Big(A_i + \sum_{k=1}^K u_{i,k}P_k\Big)x\Big\|_2^2 - b_i^\top x - c_i \;\le\; 0\,,\qquad \forall\, u_i\in\mathcal{U},\;\; i=1,\dots,m\,, \tag{15}$$

where $P_1,\dots,P_K$ are fixed matrices and $\mathcal{U}$ is the $K$-dimensional Euclidean unit ball. Here $u_{i,k}$ denotes the $k$'th entry of the noise vector $u_i$.

Notice that Algorithm 1 does not apply to formulation (15), as the constraints are certainly not concave with respect to the noise terms $u_i$ (in fact, they are convex in $u_i$, as we show below). This motivates the need for our FPL-based meta-algorithm.

#### Dual-Perturbation Algorithm.

We now show that the problem (15) falls within the scope of Section 3.2, and that the assumptions required there hold for this program. We let $\sigma$ denote the total magnitude of the admissible noise, and assume that the Frobenius norms of the nominal matrices $A_i$ are upper bounded by a constant.

The following lemma shows that the $i$'th constraint is in fact a convex quadratic function of $u_i$.

###### Lemma 12.

The $i$'th constraint in (15) can be written as

$$u_i^\top Q_x u_i + 2r_x^\top u_i + s_x \;\le\; 0\,,$$

where $Q_x\in\mathbb{R}^{K\times K}$, $r_x\in\mathbb{R}^K$, and $s_x\in\mathbb{R}$ do not depend on $u_i$. The matrix $Q_x$ is positive semidefinite, with the norms of $Q_x$ and $r_x$ bounded in terms of $\sigma$ and the norms of the nominal data.

###### Proof.

Define $y_0 = A_ix$ and $y_k = P_kx$ for $k=1,\dots,K$. We have

$$\Big\|\Big(A_i + \sum_{k=1}^K u_{i,k}P_k\Big)x\Big\|_2^2 \;=\; x^\top\Big(A_i^\top A_i + 2\sum_{k=1}^K u_{i,k}A_i^\top P_k + \sum_{k,l=1}^K u_{i,k}u_{i,l}P_k^\top P_l\Big)x \;=\; y_0^\top y_0 + 2\sum_{k=1}^K u_{i,k}\, y_0^\top y_k + \sum_{k,l=1}^K u_{i,k}u_{i,l}\, y_k^\top y_l\,.$$