Improved Complexities of Conditional Gradient-Type Methods with Applications to Robust Matrix Recovery Problems
Abstract
Motivated by robust matrix recovery problems such as Robust Principal Component Analysis, we consider a general optimization problem of minimizing a smooth and strongly convex loss function applied to the sum of two blocks of variables, where each block of variables is constrained or regularized individually. We study a Conditional Gradient-Type method which is able to leverage the special structure of the problem to obtain faster convergence rates than those attainable via standard methods, under a variety of assumptions. In particular, our method is appealing for matrix problems in which one of the blocks corresponds to a low-rank matrix, since it avoids the prohibitive full-rank singular value decompositions required by most standard methods. While our initial motivation comes from problems which originated in statistics, our analysis does not impose any statistical assumptions on the data.
1 Introduction
In this paper we consider the following general convex optimization problem
(1)   $\min_{\mathbf{x}, \mathbf{y} \in \mathbb{E}} \; f(\mathbf{x} + \mathbf{y}) + g(\mathbf{x}) + h(\mathbf{y}),$
where $\mathbb{E}$ is a finite-dimensional normed vector space over the reals, $f$ is assumed to be continuously differentiable and strongly convex, while $g$ and $h$ are proper, lower semicontinuous and convex functions which can be thought of either as regularization functions, or indicator functions^{1} (^{1}An indicator function of a set is defined to be $0$ in the set and $+\infty$ outside) of certain closed and convex feasible sets $\mathcal{X}$ and $\mathcal{Y}$.
Problem (1) captures several important problems of interest; perhaps the most well-studied of these is Robust Principal Component Analysis (PCA) [3, 14, 11], in which the goal is to (approximately) decompose an input matrix $M$ into the sum of a low-rank matrix $X$ and a sparse matrix $Y$. The underlying optimization problem for Robust PCA can be written as (see for instance [11])
(2)   $\min_{X, Y} \; \frac{1}{2}\|X + Y - M\|_F^2 \quad \text{s.t.} \quad \|X\|_* \le \tau, \;\; \|Y\|_1 \le s,$
where $\|\cdot\|_F$ denotes the Frobenius norm, $\|\cdot\|_*$ denotes the nuclear norm, i.e., the sum of singular values, which is a highly popular convex surrogate for a low-rank penalty, and $\|\cdot\|_1$ is the entrywise $\ell_1$ norm, which is a well-known convex surrogate for entrywise sparsity.
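As a concrete, purely illustrative numerical check of these three norms on a small hypothetical matrix:

```python
import numpy as np

# Illustrative only: evaluate the three norms appearing in Problem (2)
# on a small hypothetical matrix with singular values 3, 2 and 0.
M = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])

fro = np.linalg.norm(M, 'fro')  # Frobenius norm: sqrt(sum of squared entries)
nuc = np.linalg.norm(M, 'nuc')  # nuclear norm: sum of singular values
l1 = np.abs(M).sum()            # entrywise l1 norm

print(fro, nuc, l1)  # approx 3.6056 5.0 5.0
```

For a diagonal matrix the nuclear norm and the entrywise $\ell_1$ norm coincide; in general the nuclear norm is the $\ell_1$ norm of the spectrum, which is exactly why it promotes low rank in the same way the $\ell_1$ norm promotes sparsity.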
Other variants of interest of Problem (2) arise when the data matrix $M$ is a corrupted covariance matrix, in which case it is reasonable to further constrain $X$ to be positive semidefinite, i.e., to use the constraints $X \succeq 0$, $\|X\|_* \le \tau$ and $\|Y\|_1 \le s$. In the case that $M$ is assumed to have several fully corrupted rows or columns, a popular alternative to the $\ell_1$ norm regularizer on the variable $Y$ is to use either the sum of $\ell_2$ norms of the rows (in case of corrupted rows) or the sum of $\ell_2$ norms of the columns (in case of corrupted columns) as a regularizer/constraint [15]. Finally, moving beyond Robust PCA, a different choice of interest for the loss could be $f(Z) = \frac{1}{2}\|\mathcal{A}(Z) - b\|_2^2$, where $\mathcal{A}$ is a linear sensing operator such that $\mathcal{A}^*\mathcal{A}$ is positive definite (so that $f$ is strongly convex).
In this paper we present an algorithm and analyses that build on the special structure of Problem (1) and improve upon state-of-the-art complexity bounds under several different assumptions. A common key to all of our results is the ability to exploit the strong convexity of $f$ to obtain improved complexity bounds. Here it should be noted that while $f$ is assumed to be strongly convex, Problem (1) is in general not strongly convex in $(\mathbf{x}, \mathbf{y})$. This can already be observed when choosing $f(\mathbf{z}) = \|\mathbf{z}\|_2^2$ and $g \equiv h \equiv 0$, where $\mathbb{E} = \mathbb{R}^n$. In this case, denoting the overall objective as $F(\mathbf{x}, \mathbf{y}) = f(\mathbf{x} + \mathbf{y})$, it is easily observed that the Hessian matrix of $F$ is given by $2\left(\begin{smallmatrix} I & I \\ I & I \end{smallmatrix}\right)$, which is not full-rank.
The fastest known convergence rate for first-order methods applicable to Problem (1) is achieved by accelerated gradient methods such as Nesterov's optimal method [12] and FISTA [2], which converge at a rate of $O(1/t^2)$. However, in the context of low-rank matrix optimization problems such as Robust PCA, these methods require computing a full-rank singular value decomposition on each iteration to update the low-rank component, which is often prohibitive for large-scale instances. A different type of first-order method is the Conditional Gradient (CG) method (a.k.a. the Frank-Wolfe algorithm) and its variants [6, 7, 8, 9, 10, 17]. In the context of low-rank matrix optimization, the CG method only requires computing an approximate leading singular vector pair of the negative gradient at each iteration, i.e., a rank-one SVD. Hence, in this case, the CG method is much more scalable than projection/proximal based methods. However, its rate of convergence is slower: e.g., if both $g$ and $h$ are indicator functions of certain closed and convex sets $\mathcal{X}$ and $\mathcal{Y}$, then the convergence rate of the conditional gradient method is of the form $O\!\left((D_{\mathcal{X}}^2 + D_{\mathcal{Y}}^2)/t\right)$, where $D_{\mathcal{X}}$ and $D_{\mathcal{Y}}$ denote the Euclidean diameters of the corresponding feasible sets $\mathcal{X}$ and $\mathcal{Y}$, and where the diameter of a subset $\mathcal{K}$ of $\mathbb{E}$ is defined by $D_{\mathcal{K}} = \max_{\mathbf{u}, \mathbf{v} \in \mathcal{K}} \|\mathbf{u} - \mathbf{v}\|_2$.
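To make the scalability gap concrete, a rank-one SVD (leading singular vector pair) can be approximated by simple power iteration; the sketch below is a generic illustration under the assumption that the input matrix is nonzero, not code from the paper:

```python
import numpy as np

def leading_singular_pair(G, iters=100, seed=0):
    """Approximate the leading singular triplet (u, s, v) of a nonzero
    matrix G via power iteration on G^T G. This is the only spectral
    computation a conditional gradient step needs, as opposed to the
    full SVD required by projected/proximal methods."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(G.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = G.T @ (G @ v)       # one power iteration step
        v /= np.linalg.norm(v)
    u = G @ v
    s = np.linalg.norm(u)       # leading singular value
    return u / s, s, v
```

Each iteration costs only two matrix-vector products, versus the cubic cost of a full SVD of a dense matrix.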
Recently, two variants of the conditional gradient method for low-rank matrix optimization were suggested which enjoy faster convergence rates when the optimal solution has low rank (which is indeed a key implicit assumption in such problems), while requiring only a single low-rank SVD computation on each iteration [5, 1]. However, both of these new methods require the objective function to be strongly convex, which, as we discussed above, does not hold in our case. Nevertheless, both our algorithm and our analysis are inspired by these two works. In particular, we generalize the low-rank SVD approach of [1] to non-strongly-convex problems of the form of Problem (1), which include arbitrary regularizers or constraints.
In another recent related work [11], which also serves as a motivation for the current work, the authors considered a variant of the conditional gradient method tailored to low-rank and robust matrix recovery problems such as Problem (2), which combines standard conditional gradient updates of the low-rank variable (i.e., rank-one SVDs) with proximal gradient updates for the sparse noisy component. However, neither the worst-case convergence rate nor the running time improves over the standard conditional gradient method. Combining conditional-gradient and proximal-gradient updates for low-rank models was also considered in [4] for solving a convex optimization problem related to temporal recommendation systems.
Finally, it should be noted that while developing efficient algorithms for such problems is an active subject (see e.g., [13, 16]), such works fall short in two aspects: (a) they are not as flexible as the general model (1), which allows, for instance, imposing a PSD constraint on the low-rank component or considering various sparsity-promoting regularizers for the sparse component, and (b) all provable guarantees are heavily based on assumptions on the input matrix (such as incoherence of the singular value decomposition of the low-rank component, or certain patterns of the sparse component), which can be quite limiting in practice. This work, on the other hand, is completely free of such assumptions.
To overcome the shortcomings of previous methods applicable to Problem (1), in this paper we present a first-order method which combines two well-known ideas for tackling Problem (1). In particular, we show that under several assumptions of interest, despite the fact that the objective in Problem (1) is in general not strongly convex, it is possible to leverage the strong convexity of $f$ to obtain better complexity results, while applying update steps that are scalable to large-scale problems. Informally speaking, our main improved complexity bounds are as follows:

- In the case that both $g$ and $h$ are indicators of compact and convex sets (as in Problem (2)), we obtain a convergence rate of $O(1/t)$ whose constant depends on the diameter of only one of the two feasible sets. In particular, when the low-rank component is constrained, for example, via a low-rank-promoting constraint such as the nuclear norm, our method requires on each iteration only an SVD computation of rank $r$, where $r$ is the rank of the low-rank component of a certain optimal solution. This result improves (in terms of running time), in a wide regime of parameters, mainly when one of the diameters is much smaller than the other, over the conditional gradient method, which converges with rate $O((D_{\mathcal{X}}^2 + D_{\mathcal{Y}}^2)/t)$, and over accelerated gradient methods, which require, in the context of low-rank matrix optimization problems, a full-rank SVD computation on each iteration.

- In the case that one of the two functions is an indicator of a strongly convex set (e.g., an $\ell_p$ norm ball for $p \in (1, 2]$), our method achieves a fast convergence rate of $O(1/t^2)$. As in the previous case, if the other variable is constrained/regularized via the nuclear norm, then our method only requires an SVD computation of rank $r$. To the best of our knowledge, this is the first result that combines an $O(1/t^2)$ convergence rate with low-rank SVD computations in this setting. In particular, in the context of Robust PCA, such a result allows us to replace a traditional sparsity-promoting constraint of the form $\|Y\|_1 \le s$ with $\|Y\|_p \le s$ for $p = 1 + \delta$, for some small constant $\delta > 0$. Using the $\ell_p$ norm instead of the $\ell_1$ norm gives rise to a strongly convex feasible set and, as we demonstrate empirically in Section 3.2, may provide a satisfactory approximation to the $\ell_1$ norm constraint in terms of sparsity.

- In the case that either $g$ or $h$ is strongly convex (though not necessarily differentiable), our method achieves a linear convergence rate. In fact, we show that even if only one of the variables is regularized by a strongly convex function, then the entire objective of Problem (1) becomes strongly convex in $(\mathbf{x}, \mathbf{y})$. Here also, in the case of a nuclear norm constraint/regularization on one of the variables, we are able to leverage the use of only low-rank SVD computations. In the context of Robust PCA, such a natural strongly convex regularizer may arise by replacing the $\ell_1$ norm regularization on the sparse component with the elastic net regularizer, which combines the $\ell_1$ norm and the squared $\ell_2$ norm and serves as a popular alternative to the $\ell_1$ norm regularizer in LASSO.
A quick summary of the above results in the context of Robust PCA problems, such as Problem (2), is given in Table 1. See Section 3.2 in the sequel for a detailed discussion.
[Table 1: for each of the Conditional Gradient method [10], FISTA [2], and Algorithm 1, the table lists the convergence rate and the rank of the SVD computation required per iteration, in both the "high SNR regime" and the "low SNR regime".]
2 Preliminaries
Throughout the paper we let $\mathbb{E}$ denote an arbitrary finite-dimensional normed vector space over $\mathbb{R}$, where $\|\cdot\|$ and $\|\cdot\|_*$ denote the primal and dual norms over $\mathbb{E}$, respectively.
2.1 Smoothness and strong convexity of functions and sets
Definition 1 (smooth function).
Let $f$ be a continuously differentiable function over a convex set $\mathcal{K} \subseteq \mathbb{E}$. We say that $f$ is $\beta$-smooth over $\mathcal{K}$ with respect to $\|\cdot\|$ if for all $\mathbf{x}, \mathbf{y} \in \mathcal{K}$ it holds that $\|\nabla f(\mathbf{x}) - \nabla f(\mathbf{y})\|_* \le \beta \|\mathbf{x} - \mathbf{y}\|$.
Definition 2 (strongly convex function).
Let $f$ be a continuously differentiable function over a convex set $\mathcal{K} \subseteq \mathbb{E}$. We say that $f$ is $\alpha$-strongly convex over $\mathcal{K}$ with respect to $\|\cdot\|$ if it satisfies for all $\mathbf{x}, \mathbf{y} \in \mathcal{K}$ that $f(\mathbf{y}) \ge f(\mathbf{x}) + \langle \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle + \frac{\alpha}{2}\|\mathbf{y} - \mathbf{x}\|^2$.
The above definition, combined with the first-order optimality condition, implies that for a continuously differentiable and $\alpha$-strongly convex function $f$, if $\mathbf{x}^* = \arg\min_{\mathbf{x} \in \mathcal{K}} f(\mathbf{x})$, then for any $\mathbf{x} \in \mathcal{K}$ it holds that $f(\mathbf{x}) - f(\mathbf{x}^*) \ge \frac{\alpha}{2}\|\mathbf{x} - \mathbf{x}^*\|^2$.
This last inequality further implies that the magnitude of the gradient of $f$ at a point $\mathbf{x}$ is at least of the order of the square root of the objective value approximation error at $\mathbf{x}$, that is, of $f(\mathbf{x}) - f(\mathbf{x}^*)$. Indeed, this follows since
$f(\mathbf{x}) - f(\mathbf{x}^*) \le \langle \nabla f(\mathbf{x}), \mathbf{x} - \mathbf{x}^* \rangle \le \|\nabla f(\mathbf{x})\|_* \|\mathbf{x} - \mathbf{x}^*\| \le \|\nabla f(\mathbf{x})\|_* \sqrt{\tfrac{2}{\alpha}\big(f(\mathbf{x}) - f(\mathbf{x}^*)\big)},$
where the first inequality follows from the convexity of $f$, the second from Hölder's inequality, and the third from the inequality above. Thus, at any point $\mathbf{x} \in \mathcal{K}$, it holds that
(3)   $\|\nabla f(\mathbf{x})\|_* \ge \sqrt{\tfrac{\alpha}{2}\big(f(\mathbf{x}) - f(\mathbf{x}^*)\big)}.$
Definition 3 (strongly convex set).
We say that a convex set $\mathcal{K} \subseteq \mathbb{E}$ is $\alpha$-strongly convex with respect to $\|\cdot\|$ if for any $\mathbf{x}, \mathbf{y} \in \mathcal{K}$, any $\gamma \in [0, 1]$ and any vector $\mathbf{z} \in \mathbb{E}$ such that $\|\mathbf{z}\| \le 1$, it holds that $\gamma \mathbf{x} + (1 - \gamma)\mathbf{y} + \gamma(1 - \gamma)\frac{\alpha}{2}\|\mathbf{x} - \mathbf{y}\|^2 \mathbf{z} \in \mathcal{K}$. That is, $\mathcal{K}$ contains a ball of radius $\gamma(1 - \gamma)\frac{\alpha}{2}\|\mathbf{x} - \mathbf{y}\|^2$ induced by the norm $\|\cdot\|$, centered at the point $\gamma \mathbf{x} + (1 - \gamma)\mathbf{y}$.
For more details on strongly convex sets, examples and connections to optimization, we refer the reader to [6].
3 Algorithm and Results
As discussed in the introduction, in this paper we study efficient algorithms for the minimization model (1), where, throughout the paper, our blanket assumption is as follows
Assumption 1.

- $f$ is $\beta$-smooth and $\alpha$-strongly convex.

- $g$ and $h$ are proper, lower semicontinuous and convex functions.
It should be noted that since $g$ (and similarly $h$) is assumed to be an extended-valued function, it allows the inclusion of constraints through the indicator function of the corresponding constraint set. Indeed, in this case one would take $g = \delta_{\mathcal{X}}$, the indicator function of a set $\mathcal{X} \subseteq \mathbb{E}$ which is nonempty, closed and convex.
We now present the main algorithmic framework, which will be used to derive all of our results.
Algorithm 1 is based on three well-known cornerstones of continuous optimization: alternating minimization, conditional gradient, and proximal gradient. Since Problem (1) involves two variables $\mathbf{x}$ and $\mathbf{y}$, we update each of them separately and differently, in an alternating fashion. Indeed, one of the variables is first updated using a conditional gradient step (see step (4)), and then the alternating idea comes into play and we use the updated information to update the other variable using a proximal gradient step (see step (5))^{2}. ^{2}We note that a practical implementation of Algorithm 1 for a specific problem, such as Problem (2), may need to account for approximation errors in the computation of the conditional gradient and proximal steps, since exact computation is not always practically feasible. Such considerations, which can easily be incorporated both into Algorithm 1 and into our corresponding analyses (see examples in [10, 5, 1]), are beyond the scope of the current paper; for simplicity and clarity of presentation, we assume all such computations are exact.
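Purely as an illustration of this structure (the oracle names `grad_f`, `lmo_x` and `prox_y`, and the single stepsize `eta`, are hypothetical placeholders rather than the paper's notation), one iteration of such an alternating scheme can be sketched as:

```python
import numpy as np

def cg_prox_step(x, y, grad_f, lmo_x, prox_y, eta):
    """One alternating iteration: a conditional gradient update of the
    block x, followed by a proximal gradient update of the block y that
    already uses the new x. All oracles are user-supplied callables."""
    g = grad_f(x + y)                  # gradient of f at the current sum
    p = lmo_x(g)                       # linear minimization over the x-domain
    x_new = (1.0 - eta) * x + eta * p  # convex-combination (CG) update
    g = grad_f(x_new + y)              # re-evaluate gradient with updated x
    y_new = prox_y(y - eta * g, eta)   # proximal gradient update of y
    return x_new, y_new
```

For instance, with $f(\mathbf{z}) = \frac{1}{2}\|\mathbf{z} - \mathbf{b}\|_2^2$, an $\ell_1$-ball linear minimization oracle for $\mathbf{x}$ and a box projection as the proximal mapping for $\mathbf{y}$, a handful of such iterations already drives the objective close to zero.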
3.1 Outline of the main results
Let us denote by $F^*$ the optimal value of the optimization Problem (1). In the sequel we prove the following three theorems on the performance of Algorithm 1. For clarity, below we present concise and simplified versions of the results. In Section 4, in which we provide complete proofs for these theorems, we also restate them in complete detail. In all three theorems we assume that Assumption 1 holds true, and we bound the convergence rate of the sequence $\{(\mathbf{x}_t, \mathbf{y}_t)\}_{t \ge 0}$ produced by Algorithm 1 with a suitable choice of stepsizes $\{\eta_t\}_{t \ge 0}$.
Theorem 1.
Assume that $g = \delta_{\mathcal{X}}$, where $\mathcal{X}$ is a nonempty, closed and convex subset of $\mathbb{E}$. There exists a choice of stepsizes such that Algorithm 1 converges with rate $O(1/t)$.
Remark 1.
Note that since $\mathbf{x}$ and $\mathbf{y}$ are in principle interchangeable, Theorem 1 implies a rate of $O(\min\{D_{\mathcal{X}}^2, D_{\mathcal{Y}}^2\}/t)$. This improves over the rate of $O((D_{\mathcal{X}}^2 + D_{\mathcal{Y}}^2)/t)$ achieved by standard analyses of projected/proximal gradient methods and of the conditional gradient method.
Theorem 2.
Assume that $g = \delta_{\mathcal{X}}$ and $h = \delta_{\mathcal{Y}}$, where $\mathcal{X}$ is a nonempty, closed and convex subset of $\mathbb{E}$ and $\mathcal{Y}$ is a strongly convex and closed subset of $\mathbb{E}$. There exists a choice of stepsizes such that Algorithm 1 converges with rate $O(1/t^2)$.
Remark 2.
While a rate of $O(1/t^2)$ for the conditional gradient method over strongly convex sets was recently shown to hold in [6], it should be noted that this result does not apply in the case of Theorem 2, since only the set $\mathcal{Y}$ is assumed to be strongly convex. In particular, neither the Minkowski sum $\mathcal{X} + \mathcal{Y}$ nor the product set $\mathcal{X} \times \mathcal{Y}$ need be strongly convex.
Theorem 3.
Assume that $h$ is strongly convex. Then, there exists a fixed stepsize such that Algorithm 1 converges with a linear rate.
3.2 Putting our results in the context of Robust PCA problems
As discussed in the Introduction, this work is mostly motivated by low-rank matrix optimization problems such as Robust PCA (see Problem (2)). Thus, towards a better understanding of our results in this setting, we now briefly detail their application to Problem (2). As is standard for such problems, we assume that there exists an optimal solution $(X^*, Y^*)$ such that the signal matrix $X^*$ is of rank at most $r$, where $r$ is much smaller than the dimensions of the matrix^{3}. ^{3}Our results could easily be extended to the case in which $X^*$ is only nearly of rank $r$, i.e., within distance much smaller than the required approximation accuracy from a rank-$r$ matrix; however, for the sake of clarity we simply assume that $X^*$ is of low rank.
3.2.1 Using lowrank SVD computations
Note that the update of the low-rank component in Algorithm 1 amounts to a Euclidean projection onto the nuclear-norm ball, i.e., to solving
(4)   $\min_{\|W\|_* \le \tau} \|W - G_t\|_F^2,$
where we use the short notation $G_t$ for the point to be projected (the previous low-rank iterate minus a multiple of the corresponding gradient). Since $X^*$ is assumed to have rank at most $r$, it follows that the minimization can be restricted to matrices of rank at most $r$:
(5)   $\min_{\|W\|_* \le \tau} \|W - G_t\|_F^2 = \min_{\|W\|_* \le \tau, \ \mathrm{rank}(W) \le r} \|W - G_t\|_F^2.$
The solution to the minimization problem on the RHS of (5) is given simply by computing the rank-$r$ singular value decomposition of the matrix $G_t$ and projecting the resulting vector of singular values onto the $\ell_1$ ball of radius $\tau$ (which can be done in $O(r \log r)$ time). Thus, the time to compute the update of the low-rank component on each iteration of Algorithm 1 is indeed dominated by a single rank-$r$ SVD computation. This observation holds for all the following discussions in this section as well. This low-rank SVD approach was already suggested in the recent work [1], which studied smooth and strongly convex minimization over the nuclear-norm ball (a setting which differs from ours).
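This rank-$r$ update can be sketched as follows. This is an illustration only: the helper names `project_l1` and `low_rank_nuclear_step` are hypothetical, and for brevity a full SVD routine is truncated to rank $r$, whereas at scale one would use an iterative rank-$r$ solver:

```python
import numpy as np

def project_l1(s, tau):
    """Euclidean projection of a nonnegative vector s onto the l1 ball of
    radius tau (standard sorting-based simplex-type projection)."""
    if s.sum() <= tau:
        return s
    u = np.sort(s)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1)
    return np.maximum(s - theta, 0.0)

def low_rank_nuclear_step(G, r, tau):
    """Sketch of the update (4)-(5): take the top-r SVD of the matrix G and
    project its top-r singular values onto the nuclear-norm ball of radius
    tau. A full SVD is used here purely for brevity."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_proj = project_l1(s[:r], tau)
    return (U[:, :r] * s_proj) @ Vt[:r]
```

The projection of the singular values is the only nonlinear step; everything else is a truncated spectral decomposition of the single matrix $G_t$.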
3.2.2 Improved complexities for low/high SNR regimes
In the case that $(X^*, Y^*)$ is the (say, unique) optimal solution to Problem (2) and the noise component is much smaller than the signal component, i.e., a high signal-to-noise ratio regime, we expect that $D_1 \ll D_*$, where $D_*$ and $D_1$ are the Euclidean diameters of the nuclear norm ball and the $\ell_1$ norm ball, respectively. In this case, the result of Theorem 1 is appealing, since the convergence rate depends only on $D_1$ and not on $D_*$, as opposed to standard algorithms/analyses. In the opposite case, $D_* \ll D_1$, which naturally corresponds to a low signal-to-noise ratio regime, since the two variables are interchangeable in our setting, we can reverse their roles in the optimization and obtain via Theorem 1 a dependency only on $D_*$. Moreover, now the nuclear-norm constrained variable (assuming the role of the conditional-gradient-updated variable in Algorithm 1) is only updated via a conditional gradient update, i.e., it requires only a rank-one SVD computation on each iteration. In particular, statistical recovery results such as the seminal work [3] show that, under suitable assumptions on the data, exact recovery is possible in both of these regimes.
3.2.3 Replacing the $\ell_1$ constraint with an $\ell_p$ constraint
The $\ell_1$ norm is traditionally used in Robust PCA to constrain/regularize the sparse noisy component. The standard geometric intuition is that since the boundary of the $\ell_1$ norm ball becomes sharp near the axes, this choice promotes sparse solutions. This property also holds for an $\ell_p$ norm ball where $p$ is sufficiently close to 1. Thus, it might be reasonable to replace the $\ell_1$ norm constraint on the sparse component with an $\ell_p$ norm constraint for $p = 1 + \delta$, for some small constant $\delta > 0$, which results in a strongly convex feasible set for that variable (see [6]). Using Theorem 2, we then obtain an improved convergence rate of $O(1/t^2)$ instead of $O(1/t)$, practically without increasing the computational complexity per iteration (since the sparse component is updated via a conditional gradient update, and linear optimization over an $\ell_p$ norm ball can be carried out in linear time [6]).
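As a sketch of the latter point, linear optimization over an $\ell_p$ ball admits a simple closed-form solution obtained from the equality case of Hölder's inequality. The following is a generic illustration (the name `lmo_lp_ball` is hypothetical, and a nonzero input `g` is assumed):

```python
import numpy as np

def lmo_lp_ball(g, tau, p):
    """Minimize <g, y> over the lp ball of radius tau (p > 1) in closed
    form, in time linear in the size of g. The minimizer is obtained from
    the equality case of Holder's inequality with the dual exponent q."""
    q = p / (p - 1.0)                 # dual exponent, 1/p + 1/q = 1
    w = np.abs(g) ** (q - 1)
    return -tau * np.sign(g) * w / (np.linalg.norm(w.ravel(), p) or 1.0)
```

By construction the output has $\ell_p$ norm exactly $\tau$, and the attained inner product equals $-\tau \|g\|_q$, matching the Hölder lower bound.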
In order to demonstrate the plausibility of using the $\ell_p$ norm instead of the $\ell_1$ norm, in Table 2 we present results on synthetic data (similar to the experiments in [3]), which show that already for a moderate value of $p$ we obtain quite satisfactory recovery results.
3.2.4 Replacing the $\ell_1$ norm regularizer with an elastic net regularizer
In certain cases it may be beneficial to replace the $\ell_1$ norm constraint (or regularizer) on the sparse component in Problem (2) with an elastic net regularizer, i.e., to use the regularizer $\lambda_1 \|\cdot\|_1 + \frac{\lambda_2}{2}\|\cdot\|_F^2$ for some $\lambda_1, \lambda_2 > 0$. The elastic net is a popular alternative to the standard $\ell_1$ norm regularizer for problems such as LASSO (see, for instance, [18]). As opposed to the $\ell_1$ norm regularizer, the elastic net is strongly convex (though not differentiable). Thus, with such a choice of regularizer, by invoking Theorem 3, Algorithm 1 guarantees a linear convergence rate. We note that when using the elastic net regularizer, the corresponding proximal update on each iteration of Algorithm 1 requires solving the optimization problem:
$\min_{W} \; \lambda_1 \|W\|_1 + \frac{\lambda_2}{2}\|W\|_F^2 + \frac{\beta}{2}\|W - G_t\|_F^2,$
where we again use the short notation $G_t$ for the current gradient-mapping point. The optimization problem above admits a well-known closed-form solution given by the shrinkage/soft-thresholding operator, which can be computed in linear time (i.e., in time proportional to the number of entries of the matrix); see for instance [2].
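For illustration (with hypothetical parameter names `lam` and `mu` for the two elastic net weights and `t` for the proximal stepsize), the resulting proximal step is just entrywise soft-thresholding followed by a uniform shrinkage:

```python
import numpy as np

def prox_elastic_net(v, t, lam, mu):
    """Proximal operator of g(y) = lam*||y||_1 + (mu/2)*||y||_2^2 with
    stepsize t: entrywise soft-thresholding by t*lam, then division by
    (1 + t*mu). Cost is linear in the number of entries of v."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0) / (1.0 + t * mu)
```

The formula follows by minimizing the separable per-entry objective $t\lambda|w| + \frac{t\mu}{2}w^2 + \frac{1}{2}(w - v)^2$, whose stationarity condition gives exactly the shrunken soft-threshold.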
4 Rate of Convergence Analysis
In this section we provide the proofs of Theorems 1, 2, and 3. Throughout this section, and for the simplicity of the developments to come, we denote, for all $t \ge 0$, $\mathbf{z}_t = \mathbf{x}_t + \mathbf{y}_t$, and we denote the objective of Problem (1) by $F(\mathbf{x}, \mathbf{y}) = f(\mathbf{x} + \mathbf{y}) + g(\mathbf{x}) + h(\mathbf{y})$. Note that, using these notations, we obviously have that $F(\mathbf{x}_t, \mathbf{y}_t) = f(\mathbf{z}_t) + g(\mathbf{x}_t) + h(\mathbf{y}_t)$. Similarly, for an optimal solution $(\mathbf{x}^*, \mathbf{y}^*)$ of Problem (1) we denote $\mathbf{z}^* = \mathbf{x}^* + \mathbf{y}^*$.
We will need the following technical result which forms the basis for the proofs of all stated theorems.
Proposition 1.
Let $\{(\mathbf{x}_t, \mathbf{y}_t)\}_{t \ge 0}$ be a sequence generated by Algorithm 1. Then, for all $t \ge 0$, we have that
(6) 
Proof.
Fix $t \ge 0$. Observe that by the update rule of Algorithm 1 (see step 6 of the algorithm), it holds that
Thus, since $f$ is $\beta$-smooth, it follows that
Using the above inequality we can write,
(7) 
where (a) follows from the convexity of $g$ and $h$, while (b) follows from the choice of the conditional gradient direction. Using the inequality $ab \le \frac{a^2}{2} + \frac{b^2}{2}$, which holds true for all $a, b \in \mathbb{R}$, we obtain
(8) 
where the last equality follows from the definitions of $\mathbf{z}_t$ and $\mathbf{z}^*$.
We now prove Theorem 1. For convenience, we first state the theorem in full detail.
Theorem 4.
Assume that $g = \delta_{\mathcal{X}}$, where $\mathcal{X}$ is a nonempty, closed and convex subset of $\mathbb{E}$. Let $\{(\mathbf{x}_t, \mathbf{y}_t)\}_{t \ge 0}$ be a sequence generated by Algorithm 1 with the following stepsizes:
(9) 
where , for satisfying . Then, for all it holds that
Proof.
From the choice of the stepsize $\eta_t$ we have that
(10) 
Now, using this in Proposition 1, we get, for all $t \ge 0$, that
(11) 
where the last inequality follows from the fact that $\eta_t \in [0, 1]$. On the other hand, from the strong convexity of $f$ we obtain that
Therefore, by combining these two inequalities we derive that
Subtracting the optimal value $F(\mathbf{x}^*, \mathbf{y}^*)$ from both sides of the inequality, and denoting the approximation error by $v_t = F(\mathbf{x}_t, \mathbf{y}_t) - F(\mathbf{x}^*, \mathbf{y}^*)$, we obtain a recursion that holds true for all $t \ge 0$. The result now follows from simple induction arguments and the choice of stepsizes detailed in the theorem (for details see Lemma 1 in the appendix below). ∎
Before proving Theorem 2, we would like to comment on the constant used in the result above, which appears in the stepsize.
Remark 3.
This constant, even though it appears in the stepsize of the algorithm, can easily be bounded from above, as we now describe. Suppose we set the two points to be used in our algorithm as follows:
and
For these choices we obviously have (using the optimality conditions) that
Hence, using the gradient inequality for the function $f$, it follows that
The obtained bound does not depend on the optimal solution and can therefore be computed explicitly.
It should be noted that in the case of Robust PCA (e.g., Problem (2)), computing these two points is computationally very efficient, since it requires computing only a single leading singular vector pair and solving a single linear minimization problem over an $\ell_1$ ball, respectively.
Now we turn to prove Theorem 2. Again, we first state the theorem in full detail.
Theorem 5.
Assume that $g = \delta_{\mathcal{X}}$, where $\mathcal{X}$ is a nonempty, closed and convex subset of $\mathbb{E}$, and $h = \delta_{\mathcal{Y}}$, where $\mathcal{Y}$ is an $\alpha_{\mathcal{Y}}$-strongly convex and closed subset of $\mathbb{E}$. Let $\{(\mathbf{x}_t, \mathbf{y}_t)\}_{t \ge 0}$ be a sequence produced by Algorithm 1 using a suitable choice of stepsizes $\{\eta_t\}_{t \ge 0}$. Then, for all $t \ge 1$ it holds that
Moreover, if there exists a constant $c > 0$ such that the dual norm of the gradient of $f$ is bounded from below by $c$ over the feasible set, then using a suitable fixed stepsize for all $t \ge 0$ guarantees that
Proof.
Fix some iteration $t \ge 0$ and define an auxiliary point as in Definition 3. Note that since $\mathcal{Y}$ is an $\alpha_{\mathcal{Y}}$-strongly convex set, it follows from Definition 3 that this auxiliary point is feasible. Moreover, from the optimal choice of the conditional gradient direction, this direction achieves an inner product with the gradient that is at least as small. Thus, we have that