A variational approach to stable principal component pursuit
We introduce a new convex formulation for stable principal component pursuit (SPCP) to decompose noisy signals into low-rank and sparse representations. For numerical solutions of our SPCP formulation, we first develop a convex variational framework and then accelerate it with quasi-Newton methods. We show, via synthetic and real data experiments, that our approach offers advantages over the classical SPCP formulations in scalability and practical parameter selection.
Aleksandr Aravkin, T. J. Watson Center, IBM Research, Yorktown Heights, NY; Stephen Becker, T. J. Watson Center, IBM Research, Yorktown Heights, NY; Volkan Cevher, LIONS, EPFL, Lausanne, Switzerland (author's work is supported in part by the European Commission under grants MIRG-268398 and ERC Future Proof, and by the Swiss Science Foundation under grants SNF 200021-132548, SNF 200021-146750 and SNF CRSII2-147633); Peder Olsen, T. J. Watson Center, IBM Research, Yorktown Heights, NY
1 Introduction

Linear superposition is a useful model for many applications, including nonlinear mixing problems. Surprisingly, we can perfectly distinguish multiple elements in a given signal using convex optimization as long as they are concise and look sufficiently different from one another. Popular examples include robust principal component analysis (RPCA), where we decompose a signal into low-rank and sparse components, and stable principal component pursuit (SPCP), where we also seek an explicit noise component within the RPCA decomposition. Applications include alignment of occluded images (Peng et al., 2012), scene triangulation (Zhang et al., 2011), model selection (Chandrasekaran et al., 2012), face recognition, and document indexing (Candès et al., 2011).
The SPCP formulation can be stated as follows. Given a noisy matrix $Y$, we decompose it as a sum of a low-rank matrix $L$ and a sparse matrix $S$ via the convex program

$$\min_{L,S}\; \|L\|_* + \lambda_{\mathrm{sum}}\|S\|_1 \quad \text{subject to} \quad \|L + S - Y\|_F \le \varepsilon,$$

where the 1-norm and nuclear norm are given by $\|S\|_1 = \sum_{ij} |S_{ij}|$ and $\|L\|_* = \sum_i \sigma_i(L)$, where $\sigma(L)$ is the vector of singular values of $L$. In this program, the parameter $\lambda_{\mathrm{sum}}$ controls the relative importance of the low-rank term $L$ vs. the sparse term $S$, and the parameter $\varepsilon$ accounts for the unknown perturbations in the data not explained by $L$ and $S$.
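For concreteness, these two norms can be computed in a few lines of NumPy (the matrix names here are hypothetical stand-ins, not the paper's variables):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))             # stand-in for the sparse term
u, v = rng.standard_normal(5), rng.standard_normal(5)
L = np.outer(u, v)                          # rank-1 stand-in for the low-rank term

one_norm = np.abs(S).sum()                                # ||S||_1: sum of absolute entries
nuclear_norm = np.linalg.svd(L, compute_uv=False).sum()   # ||L||_*: sum of singular values
```

For the rank-1 matrix above, the nuclear norm reduces to the product of the factor norms, which gives a quick sanity check.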
When , () is the “robust PCA” problem as analyzed by Chandrasekaran et al. (2009); Candès et al. (2011), and it has perfect recovery guarantees under stylized incoherence assumptions. There is even theoretical guidance for selecting a minimax optimal regularization parameter (Candès et al., 2011). Unfortunately, many practical problems only approximately satisfy the idealized assumptions, and hence, we typically tune RPCA via cross-validation techniques. SPCP further complicates the practical tuning due to the additional parameter .
To cope with practical tuning issues of SPCP, we propose the following new variant, called "max-SPCP":

$$\min_{L,S}\; \max\bigl(\|L\|_*,\; \lambda_{\max}\|S\|_1\bigr) \quad \text{subject to} \quad \|L + S - Y\|_F \le \varepsilon.$$
Cross-validation with the max formulation to estimate its regularization parameter is significantly easier than estimating the corresponding parameter in the sum formulation. For example, given an oracle that provides an ideal separation, we can use the same noise level in both cases; however, while the max parameter can be estimated from such an oracle, it is not clear how to choose the sum parameter from data. Such cross-validation can be performed on a similar dataset, or the estimate could be obtained from a probabilistic model.
Our convex approach for solving () generalizes to other source separation problems (Baldassarre et al., 2013) beyond SPCP. Both () and () are challenging to solve when the dimensions are large. We show in this paper that these problems can be solved more efficiently by solving a few (typically 5 to 10) subproblems of a different functional form. While the efficiency of the solution algorithms for () relies heavily on the efficiency of the 1-norm and nuclear norm projections, the efficiency of our solution algorithm for () is preserved for arbitrary norms. Moreover, () allows a faster algorithm in the standard case, discussed in Section 6.
2 A Primer on SPCP
The theoretical and algorithmic research on SPCP formulations (and source separation in general) is rapidly evolving. Hence, it is important to set the stage first in terms of the available formulations to highlight our contributions.
The solutions of () and () are related to those of () and () via the Pareto frontier, as shown by Aravkin et al. (2013a, Theorem 2.1). If the constraint is tight at the solution, then there exist corresponding parameters for which () and () attain the same optimal value, and the corresponding optimal solutions are also optimal for () and ().
For completeness, we also include the Lagrangian formulation, which is covered by our new algorithm:
Problems () and () can be solved using projected gradient and accelerated gradient methods. The disadvantage of some of these formulations is that it may not be clear how to tune the parameters. Surprisingly, the algorithm we propose in this paper can solve () and () using a sequence of flipped problems that specifically exploits the structured relationship described in Remark 2.1. In practice, we will see that better tuning also leads to faster algorithms; e.g., fixing the noise parameter ahead of time to an estimated 'noise floor' greatly reduces the amount of computation required when parameters are selected via cross-validation.
Finally, we note that in some cases it is useful to compose the misfit term with a linear operator. For example, if we only observe the data restricted to a subset of matrix indices, we choose the corresponding restriction operator. Most existing RPCA/SPCP algorithms adapt to this case, but only because of the strong properties of the projection onto observed indices. The advantage of our approach is that it seamlessly handles arbitrary linear operators. In fact, it also generalizes to smooth misfit penalties that are more robust than the Frobenius norm, including the Huber loss. Our results also generalize to some other penalties on the sparse term besides the 1-norm.
The paper proceeds as follows. In Section 3, we describe previous work and algorithms for SPCP and RPCA. In Section 4, we cast the relationships between pairs of problems (), () and (), () into a general variational framework, and highlight the product-space regularization structure that enables us to solve the formulations of interest using corresponding flipped problems. We discuss computationally efficient projections as optimization workhorses in Section 5, and develop new accelerated projected quasi-Newton methods for the flipped and Lagrangian formulations in Section 6. Finally, we demonstrate the efficacy of the new solvers and the overall formulation on synthetic problems and a real cloud removal example in Section 7, and follow with conclusions in Section 8.
3 Prior Art
While problem () with has several solvers (e.g., it can be solved by applying the widely known Alternating Direction Method of Multipliers (ADMM)/Douglas–Rachford method (Combettes & Pesquet, 2007)), the formulation assumes the data are noise free. Unfortunately, the noise we consider in this paper introduces a third term into the ADMM framework, a setting in which the direct multi-block extension of ADMM is not guaranteed to converge (Chen et al., 2013). Interestingly, only a handful of methods can handle this case. Those using smoothing techniques no longer promote exactly sparse and/or exactly low-rank solutions (Aybat et al., 2013). Those using dual decomposition techniques require high iteration counts. Because each step requires a partial singular value decomposition (SVD) of a large matrix, it is critical that a method take only a few iterations.
As a rough comparison, we start with related solvers that solve () for . Wright et al. (2009a) solve an instance of () with and a system in hours. By switching to the (Lag-SPCP) formulation, Ganesh et al. (2009) use the accelerated proximal gradient method (Beck & Teboulle, 2009) to solve a matrix in under one hour. This is improved further by Lin et al. (2010), who again solve () with using augmented Lagrangian and ADMM methods, solving a system in about a minute. As a prelude to our results, our method can solve some systems of this size in a matter of seconds (cf. Fig. 1).
In the case of () with , Tao & Yuan (2011) propose the alternating splitting augmented Lagrangian method (ASALM), which exploits separability of the objective in the splitting scheme, and can solve a system in about five minutes.
The partially smooth proximal gradient (PSPG) approach of Aybat et al. (2013) smooths just the nuclear norm term and then applies the well-known FISTA algorithm (Beck & Teboulle, 2009). Aybat et al. (2013) show that the proximity step can be solved efficiently in closed form, and the dominant cost at every iteration is that of the partial SVD. They include examples on video data and on lopsided matrices, solving these formulations in about a minute or less.
The nonsmooth adaptive Lagrangian (NSA) algorithm of Aybat & Iyengar (2014) is a variant of ADMM for (), and makes use of the insight of Aybat et al. (2013). The ADMM variant is interesting in that it splits the low-rank variable, rather than the sum or residual. Their experiments solve 1500 × 1500 synthetic problems in between 16 and 50 seconds (depending on accuracy).
Shen et al. (2014) develop a method exploiting a low-rank matrix factorization scheme, maintaining an explicit factorization of the low-rank term. This technique has also been used effectively in practice for matrix completion (Lee et al., 2010; Recht & Ré, 2011; Aravkin et al., 2013b), but lacks a full convergence theory in either context. The method of Shen et al. (2014) was an order of magnitude faster than ASALM, but encountered difficulties in some experiments where the sparse component dominated the low-rank component in some sense. Mansour & Vetro (2014) attack the formulation using a factorized approach, together with alternating solves between the low-rank and sparse terms. Non-convex techniques also include hard-thresholding approaches, e.g., the approach of Kyrillidis & Cevher (2014). While the factorization technique may potentially speed up some of the methods presented here, we leave this to future work and consider only convex formulations.
4 Variational Framework
Classic formulations take the misfit penalty to be the Frobenius norm; however, this restriction is not necessary, and we require only that it be smooth and convex. In particular, it can be taken to be the robust Huber penalty (Huber, 2004). Even more importantly, this formulation allows pre-composition of a smooth convex penalty with an arbitrary linear operator, which extends the proposed approach to a much more general class of problems. Note that a simple operator is already embedded in both formulations of interest:
Projection onto a set of observed indices is also a simple linear operator that can be included in . Operators may include different transforms (e.g., Fourier) applied to either or .
The problem class (4.1) was studied by van den Berg & Friedlander (2008, 2011) for least-squares misfits and by Aravkin et al. (2013a) for arbitrary convex misfits. Making use of this framework, we can define a value function
and use Newton’s method to find a solution to . The approach is agnostic to the linear operator (it can be of the simple form (4.2); include restriction in the missing data case, etc.).
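The root-finding step can be sketched generically: given oracles for the value function and its derivative (available in closed form via the polar gauge, per the framework above), Newton's method drives the value function to the target noise level. The function names and the toy value function below are hypothetical illustrations, not the paper's implementation:

```python
def newton_on_value_function(v, vprime, sigma, tau0=0.0, tol=1e-8, max_iter=50):
    """Find tau with v(tau) = sigma, for a convex, decreasing value function v."""
    tau = tau0
    for _ in range(max_iter):
        gap = v(tau) - sigma
        if abs(gap) <= tol:
            break
        tau = tau - gap / vprime(tau)   # Newton update on the scalar root problem
    return tau

# toy value function: v(tau) = max(1 - tau, 0), with derivative -1 on [0, 1)
tau_star = newton_on_value_function(lambda t: max(1.0 - t, 0.0),
                                    lambda t: -1.0, sigma=0.25)
```

For this affine toy function, a single Newton step lands on the root; for the actual SPCP value function, each evaluation of `v` requires solving one flipped subproblem, which is why only a handful of (warm-started) subproblems are needed.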
For both formulations of interest, is a norm defined on a product space , since we can write
In particular, both and are gauges. For a convex set containing the origin, the gauge is defined by
For any norm, the set defining it as a gauge is simply the unit ball. We introduce gauges for two reasons. First, they are more general: a gauge is a norm only if its defining set is bounded, has nonempty interior, and is symmetric about the origin. For example, gauges trivially allow the inclusion of non-negativity constraints. Second, definition (4.6) and the explicit set simplify the exposition of the following results.
Given , can be written in closed form using (Aravkin et al., 2013a, Theorem 5.2), which simplifies to
with denoting the polar gauge to . The polar gauge is precisely , with
The key to RPCA is that the regularization functional is a gauge over the product space used to decompose into summands and . This makes it straightforward to compute polar results for both and .
Theorem 4.1 (Max-Sum Duality for Gauges on Product Spaces).
Let and be gauges on and , and consider the function
Then is a gauge, and its polar is given by
Let and denote the canonical sets corresponding to gauges and . It immediately follows that is a gauge for the set , since
By (Rockafellar, 1970, Corollary 15.1.2), the polar of the gauge of is the support function of , which is given by
This theorem allows us to easily compute the polars for and in terms of the polars of and , which are the dual norms, the spectral norm and infinity norm, respectively.
where denotes the spectral norm (the largest singular value).
This result was also obtained by (van den Berg & Friedlander, 2011, Section 9), but is stated only for norms. Theorem 4.1 applies to gauges, and in particular now allows asymmetric gauges, so non-negativity constraints can be easily modeled.
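Theorem 4.1 can be sanity-checked numerically. The sketch below (hypothetical scalar gauges, support functions computed by brute force over a grid) verifies that the polar of the max-gauge is the sum of the individual polars:

```python
import numpy as np

# gauges gamma1(x) = |x| and gamma2(y) = 2|y|; rho(x, y) = max(gamma1(x), gamma2(y)).
# Theorem 4.1 predicts rho_polar(z, w) = gamma1_polar(z) + gamma2_polar(w) = |z| + |w|/2.
grid = np.linspace(-1.0, 1.0, 2001)
ball_x = grid[np.abs(grid) <= 1.0]          # {x : gamma1(x) <= 1}
ball_y = grid[2.0 * np.abs(grid) <= 1.0]    # {y : gamma2(y) <= 1}

def rho_polar(z, w):
    # polar = support function of the product ball, evaluated by brute force
    return max(x * z for x in ball_x) + max(y * w for y in ball_y)

z, w = 3.0, -4.0
lhs = rho_polar(z, w)
rhs = abs(z) + abs(w) / 2.0
```

The grid makes this only approximate, but the two sides agree to within the grid resolution.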
We now have closed form solutions for in (4.8) for both formulations of interest. The remaining challenge is to design a fast solver for (4.3) for formulations () and (). We focus on this challenge in the remaining sections of the paper. We also discuss the advantage of () from this computational perspective.
5 Projections

In this section, we consider the computational issues of projecting onto the set defined by . For the max formulation this is straightforward, since the constraint set is just the product of the nuclear norm and 1-norm balls, and efficient projectors onto these are known. In particular, projecting an $m \times n$ matrix (without loss of generality let $m \le n$) onto the nuclear norm ball takes $O(m^2 n)$ operations (the cost of an SVD), and projecting it onto the 1-norm ball can be done in $O(mn)$ expected operations using fast median-finding algorithms (Brucker, 1984; Duchi et al., 2008).
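As an illustration of the 1-norm ball projection just mentioned, here is a standard sort-based sketch in NumPy (the paper's experiments use Matlab; the function name is ours, and this is the $O(n \log n)$ sorting variant rather than the faster median-finding one):

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau}, via sorting (Duchi et al., 2008)."""
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= tau:
        return v.copy()                                # already feasible
    u = np.sort(np.abs(v).ravel())[::-1]               # magnitudes in descending order
    css = np.cumsum(u)
    k = np.nonzero(u - (css - tau) / np.arange(1, u.size + 1) > 0)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)                 # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The projection is an entrywise soft threshold whose level is chosen so the result lands exactly on the ball's boundary.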
For , the projection is no longer straightforward. Nonetheless, the following lemma shows this projection can be efficiently implemented.
(van den Berg & Friedlander, 2011, Section 5.2) Projection onto the scaled 1-norm ball, that is, the ball $\{x : \sum_i \alpha_i |x_i| \le \tau\}$ for some weights $\alpha_i > 0$, can be done in $O(n \log n)$ time.
The proof of the proposition follows by noting that the solution can be written in a form depending only on a single scalar parameter, and this scalar can be found by sorting followed by appropriate summations. We conjecture that fast median-finding ideas could reduce this to $O(n)$ in theory, the same as the optimal complexity for the 1-norm ball.
Armed with the above proposition, we state an important lemma below. For our purposes, we may think of as a vector in rather than a matrix in .
(van den Berg & Friedlander, 2011, Section 9.2) Let and , and let be any ordering of the elements of . Then the projection of onto the ball is , where is the projection onto the scaled -ball .
Sketch of proof.
We need to solve
The inner minimization is equivalent to projecting onto the nuclear norm ball, and this is well-known to be soft-thresholding of the singular values. Since it depends only on the singular values, recombining the two minimization terms gives exactly a joint projection onto a scaled -ball. ∎
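A minimal NumPy sketch of this joint projection onto a set of the form $\{(L,S) : \|L\|_* + \lambda\|S\|_1 \le \tau\}$, using bisection on the single scalar threshold rather than the sort-based scheme of the lemma (the function name and bisection tolerance are our choices):

```python
import numpy as np

def project_sum_ball(L, S, tau, lam, tol=1e-10):
    """Projection onto {(L, S) : ||L||_* + lam * ||S||_1 <= tau} (sketch via bisection)."""
    U, sig, Vt = np.linalg.svd(L, full_matrices=False)
    s = np.abs(S)
    def excess(theta):
        # constraint value after soft-thresholding at level theta, minus tau
        return (np.maximum(sig - theta, 0.0).sum()
                + lam * np.maximum(s - lam * theta, 0.0).sum() - tau)
    if excess(0.0) <= 0.0:
        return L.copy(), S.copy()           # already feasible
    lo, hi = 0.0, max(sig.max(), s.max() / lam)
    while hi - lo > tol:                    # bisection on the joint threshold
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0.0 else (lo, mid)
    theta = 0.5 * (lo + hi)
    L_proj = (U * np.maximum(sig - theta, 0.0)) @ Vt   # singular value thresholding
    S_proj = np.sign(S) * np.maximum(s - lam * theta, 0.0)
    return L_proj, S_proj
```

As the lemma states, a single scalar controls both the singular-value and entrywise thresholds, so the joint projection costs little more than one SVD.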
All the references to the -ball can be replaced by the intersection of the -ball and the non-negative cone, and the projection is still efficient. As noted in Section 4, imposing non-negativity constraints is covered by the gauge results of Theorem 4.1 and Corollary 4.2. Therefore, both the variational and efficient computational framework can be applied to this interesting case.
6 Solving the Sub-Problem via Projected Quasi-Newton Methods
In order to accelerate the approach, we can use quasi-Newton (QN) methods, since the objective has a simple structure. (We use "quasi-Newton" to mean an approximation to a Newton method; it should not be confused with methods like BFGS.) The main challenge here is that for one of the terms it is tricky to deal with a weighted quadratic term (whereas for the other, we can obtain a low-rank Hessian and solve it efficiently via coordinate descent).
We wish to solve (). Let be the full variable, so we can write the objective function as . To simplify the exposition, we take to be the matrix, but the presented approach applies to general linear operators (including terms like ). The matrix structure of and is not important here, so we can think of them as vectors instead of matrices.
The gradient is . For convenience, we use and
The Hessian is . We cannot simultaneously project onto their constraints with this Hessian scaling (doing so would solve the original problem!), since the Hessian removes separability. Instead, we use to approximate the cross-terms.
The true function is a quadratic, so the following quadratic expansion around is exact:
The coupling of the second order terms, shown in bold, prevents direct 1-step minimization of , subject to the nuclear and 1-norm constraints. The FISTA (Beck & Teboulle, 2009) and spectral gradient methods (SPG) (Wright et al., 2009b) replace the Hessian with the upper bound , which solves the coupling issue, but potentially lose too much second order information. After comparing FISTA and SPG, we use the SPG method for solving (). However, for () (and for (Lag-SPCP), which has no constraints but rather non-smooth terms, which can be treated like constraints using proximity operators), the constraints are uncoupled and we can take a “middle road” approach, replacing
The first term is decoupled, allowing us to update , and then this is plugged into the second term in a Gauss-Seidel fashion. In practice, we also scale this second-order term with a number slightly greater than but less than (e.g., ) which leads to more robust behavior. We expect this “quasi-Newton” trick to do well when is similar to .
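To make the structure of the objective concrete, here is a minimal proximal-gradient sketch for a Lagrangian formulation of the form $\min \tfrac12\|L+S-Y\|_F^2 + \lambda_L\|L\|_* + \lambda_S\|S\|_1$, using the conservative upper-bound scaling (FISTA/SPG-style, without the quasi-Newton correction described above); function names and the iteration count are illustrative:

```python
import numpy as np

def soft(x, t):
    """Entrywise soft threshold."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lag_spcp_prox_grad(Y, lam_L, lam_S, n_iter=200, step=0.5):
    """Proximal gradient for min 0.5||L+S-Y||_F^2 + lam_L ||L||_* + lam_S ||S||_1.
    step = 0.5 since the joint gradient (R, R), R = L + S - Y, is 2-Lipschitz."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        R = L + S - Y                            # shared residual: gradient wrt L and wrt S
        U, sig, Vt = np.linalg.svd(L - step * R, full_matrices=False)
        L = (U * soft(sig, step * lam_L)) @ Vt   # prox of nuclear norm: SV thresholding
        S = soft(S - step * R, step * lam_S)     # prox of 1-norm: entrywise threshold
    return L, S
```

The shared residual appearing in both updates is exactly the coupling that the quasi-Newton trick above tries to exploit more aggressively than the safe upper bound does.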
7 Numerical Results
The numerical experiments are done with the algorithms suggested in this paper as well as code from PSPG (Aybat et al., 2013), NSA (Aybat & Iyengar, 2014), and ASALM (Tao & Yuan, 2011); PSPG, NSA and ASALM are available from the experiment package at http://www2.ie.psu.edu/aybat/codes.html. We modified the other software as needed for testing purposes. PSPG, NSA and ASALM all solve (), but ASALM has another variant which solves (Lag-SPCP), so we test this as well. All three programs also use versions of PROPACK from Larsen (1998); Becker & Candès (2008) to compute partial SVDs. Since the cost of a single iteration may vary among the solvers, we measure error as a function of time, not iterations. When a reference solution is available, we measure the (relative) error of a trial solution against it. The benchmark is designed so the time required to calculate this error at each iteration does not factor into the reported times. Since picking stopping conditions is solver dependent, we show plots of error vs. time, rather than list tables. All tests are done in Matlab, and the dominant computational cost for all algorithms was matrix multiplication; all code was run on the same quad-core i7 computer.
For our implementations of (), () and (Lag-SPCP), we use a randomized SVD (Halko et al., 2011). Since the number of singular values needed is not known in advance, the partial SVD may be called several times (the same is true for PSPG, NSA and ASALM). Our code limits the number of singular values on the first two iterations in order to speed up calculation without affecting convergence. Unfortunately, the delicate projection involved in () makes incorporating a partial SVD into this setting more challenging, so we use Matlab's dense SVD routine.
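A minimal version of a randomized partial SVD, following the range-finder recipe of Halko et al. (2011); the function name and oversampling choice are ours:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    """Basic randomized partial SVD: sketch the range of A, then SVD a small matrix."""
    rng = np.random.default_rng(rng)
    # sample the range of A with a Gaussian test matrix, orthonormalize via QR
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k + oversample)))
    # SVD of the small projected matrix Q^T A recovers the leading factors
    Ub, sig, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], sig[:k], Vt[:k]
```

For exactly low-rank input the leading singular values are recovered essentially exactly; for general matrices, accuracy depends on spectral decay and the oversampling.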
7.1 Synthetic test with exponential noise
We first provide a test with generated data. The observations with and were created by first sampling a rank matrix with random singular vectors (i.e., from the Haar measure) and singular values drawn from a uniform distribution with mean , and then adding exponential random noise (with mean equal to one tenth the median absolute value of the entries of ). This exponential noise, which has a longer tail than Gaussian noise, is expected to be captured partly by the term and partly by the term.
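The data-generation recipe above can be sketched as follows (the sizes, singular-value range, and function name are hypothetical; only the Haar-via-QR sampling and the exponential noise scaling follow the description):

```python
import numpy as np

def make_spcp_data(m, n, r, rng=None):
    """Rank-r matrix with Haar-distributed singular vectors, plus exponential noise."""
    rng = np.random.default_rng(rng)
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))   # Haar-distributed via QR
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    sig = rng.uniform(0.0, 2.0, size=r)                # singular values (range hypothetical)
    L0 = (U * sig) @ V.T
    scale = 0.1 * np.median(np.abs(L0))                # noise mean: a tenth of median |entry|
    Y = L0 + rng.exponential(scale, size=(m, n))       # long-tailed, one-sided noise
    return Y, L0
```

Because the exponential distribution is one-sided and heavy-tailed relative to a Gaussian, its largest realizations behave like sparse outliers, which is why the noise splits between the sparse and residual terms.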
Given , the reference solution was generated by solving (Lag-SPCP) to very high accuracy; the values and were picked by hand tuning to find a value such that both and are non-zero. The advantage to solving (Lag-SPCP) is that knowledge of allows us to generate the parameters for all the other variants, and hence we can test different problem formulations.
With these parameters, was rank 17 with nuclear norm , had 54 non-zero entries (most of them positive) with norm , the normalized residual was , and , , , and .
Results are shown in Fig. 1. Our methods for () and (Lag-SPCP) are extremely fast, because the simple nature of these formulations allows the quasi-Newton acceleration scheme of Section 6. In turn, since our method for solving () uses the variational framework of Section 4 to solve a sequence of () problems, it is also competitive (shown in cyan in Figure 1). The jumps are due to re-starting the sub-problem solver with a new value of , generated according to (4.7).
Our proximal gradient method for (), which makes use of the projection in Lemma 5.2, converges more slowly, since it is not easy to accelerate with the quasi-Newton scheme due to variable coupling, and it does not make use of fast SVDs. Our solver for (), which depends on a sequence of problems (), converges slowly.
ASALM performs reasonably well, which was unexpected since it was shown to be worse than NSA and PSPG in Aybat et al. (2013); Aybat & Iyengar (2014). The PSPG solver converges to the wrong answer, most likely due to a bad choice of the smoothing parameter; we tried several values other than the default but did not see improvement for this test (for other tests, not shown, tweaking it helped significantly). The NSA solver reaches moderate error quickly but stalls before finding a highly accurate solution.
7.2 Synthetic test from Aybat & Iyengar (2014)
We show some tests from the test setup of Aybat & Iyengar (2014). The default settings were used, and the NSA solver was run to high accuracy to obtain a reference solution. From knowledge of this reference solution one can generate the noise parameter, but not the Lagrangian parameters, and hence we did not test the solvers for (Lag-SPCP) in this experiment. The data was generated as , where the low-rank term was sampled by multiplication of two normal Gaussian matrices, the sparse term had randomly chosen entries uniformly distributed, and white noise was added at a specified SNR. We show three tests that vary the rank and the sparsity. Unlike Aybat & Iyengar (2014), who report error in terms of a true noiseless signal, we report the optimization error relative to the reference solution.
For the first test (with and ), had rank and nuclear norm ; had of its elements nonzero and norm , and . The other parameters were , , , and . An interesting feature of this test is that while is low-rank, is nearly low-rank but with a small tail of significant singular values until number . We expect methods to converge quickly to low-accuracy where only a low-rank approximation is needed, and then slow down as they try to find a larger rank highly-accurate solution.
The results are shown in Fig. 2. Errors barely dip below (for comparison, an error of is achieved by setting ). The NSA and PSPG solvers do quite well. In contrast to the previous test, ASALM does poorly. Our methods for (), and hence (), are not competitive, since they use dense SVDs. We imposed a time-limit of about one minute, so these methods only manage a single iteration or two. Our quasi-Newton method for () does well initially, then takes a long time due to a long partial SVD computation. Interestingly, () does better than pure (). One possible explanation is that it chooses a fortuitous sequence of values, for which the corresponding () subproblems become increasingly hard, and therefore benefit from the warm-start of the solution of the easier previous problem. This is consistent with empirical observations regarding continuation techniques, see e.g., (van den Berg & Friedlander, 2008; Wright et al., 2009b).
Figure 3 is the same test but with and , and the conclusions are largely similar.
7.3 Cloud removal
Figure 4 shows 15 images of size from the MODIS satellite (publicly available at http://ladsweb.nascom.nasa.gov/), after some transformations to turn images from different spectral bands into one grayscale image. Each image is a photo of the same rural location at different points in time over the course of a few months. The background changes slowly, and the variability is due to changes in vegetation, snow cover, and different reflectance. There are also outlying sources of error, mainly due to clouds (e.g., major clouds in frames 5 and 7, smaller clouds in frames 9, 11 and 12), as well as artifacts of the CCD camera on the satellite (frames 4 and 6) and issues stitching together photos of the same scene (the lines in frames 8 and 10).
There are hundreds of applications for clean satellite imagery, so removing the outlying error is of great practical importance. Because of the slowly changing background and sparse errors, we can model the problem using the robust PCA approach. We use the () version due to its speed, and pick parameters using a Nelder–Mead simplex search. For an error metric to use in the parameter tuning, we remove the first frame from the data set and take the training data to be frames 2–15. From this training data, the algorithm generates the low-rank and sparse components. Since the low-rank term has far fewer columns than rows, its columns span only a small subspace, so our error is the distance of the removed frame from the span of the low-rank term.
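The error metric, the distance of the held-out frame from the column span of the computed low-rank term, amounts to a least-squares residual; a small sketch with hypothetical names:

```python
import numpy as np

def dist_from_span(y, L):
    """Distance of vector y from the column span of matrix L, via least squares."""
    coef, *_ = np.linalg.lstsq(L, y, rcond=None)   # best approximation of y in span(L)
    return np.linalg.norm(L @ coef - y)
```

A vector already in the span gives zero distance; an orthogonal vector gives back its own norm.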
Our method takes about 11 iterations and 5 seconds, and uses a dense SVD instead of the randomized method due to the high aspect ratio of the matrix. Some results of the obtained outputs are in Fig. 5, where one can see that some of the anomalies in the original data frames are picked up by the term and removed from the term. Frame 4 has what appears to be a camera pixel error; frame 6 has another artificial error (that is, caused by the camera and not the scene); and frame 12 has cloud cover.
8 Conclusions

In this paper, we reviewed several formulations and algorithms for the RPCA problem. We introduced a new denoising formulation () in addition to the ones previously considered, and discussed the modeling and algorithmic advantages of the denoising formulations () and () compared to their flipped versions () and (). In particular, we showed that these formulations can be linked through a variational framework, which can be exploited to solve the denoising formulations using a sequence of flipped problems. For (), we proposed a quasi-Newton acceleration that is competitive with the state of the art, and used this innovation to design a fast method for () through the variational framework. The new methods were compared against prior art on synthetic examples, and applied to a real-world cloud removal application using publicly available MODIS satellite data.
- Aravkin et al. (2013a) Aravkin, A. Y., Burke, J., and Friedlander, M. P. Variational properties of value functions. SIAM J. Optimization, 23(3):1689–1717, 2013a.
- Aravkin et al. (2013b) Aravkin, A. Y., Kumar, R., Mansour, H., Recht, B., and Herrmann, F. J. A robust SVD-free approach to matrix completion, with applications to interpolation of large scale data. 2013b. URL http://arxiv.org/abs/1302.4886.
- Aybat et al. (2013) Aybat, N., Goldfarb, D., and Ma, S. Efficient algorithms for robust and stable principal component pursuit. Computational Optimization and Applications, (accepted), 2013.
- Aybat & Iyengar (2014) Aybat, N. S. and Iyengar, G. An alternating direction method with increasing penalty for stable principal component pursuit. Computational Optimization and Applications, (submitted), 2014. http://arxiv.org/abs/1309.6553.
- Baldassarre et al. (2013) Baldassarre, L., Cevher, V., McCoy, M., Tran Dinh, Q., and Asaei, A. Convexity in source separation: Models, geometry, and algorithms. Technical report, 2013. http://arxiv.org/abs/1311.0258.
- Beck & Teboulle (2009) Beck, A. and Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sciences, 2(1):183–202, January 2009.
- Becker & Candès (2008) Becker, S. and Candès, E. Singular value thresholding toolbox, 2008. Available from http://svt.stanford.edu/.
- Brucker (1984) Brucker, P. An O(n) algorithm for quadratic knapsack problems. Operations Res. Lett., 3(3):163 – 166, 1984. doi: 10.1016/0167-6377(84)90010-5.
- Candès et al. (2011) Candès, E. J., Li, X., Ma, Y., and Wright, J. Robust principal component analysis? J. Assoc. Comput. Mach., 58(3):1–37, May 2011.
- Chandrasekaran et al. (2009) Chandrasekaran, V., Sanghavi, S., Parrilo, P. A., and Willsky, A. S. Sparse and low-rank matrix decompositions. In SYSID 2009, Saint-Malo, France, July 2009.
- Chandrasekaran et al. (2012) Chandrasekaran, V., Parrilo, P. A., and Willsky, A. S. Latent variable graphical model selection via convex optimization. Ann. Stat., 40(4):1935–2357, 2012.
- Chen et al. (2013) Chen, C., He, B., Ye, Y., and Yuan, X. The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent. Optimization Online, 2013.
- Combettes & Pesquet (2007) Combettes, P. L. and Pesquet, J.-C. A Douglas–Rachford Splitting Approach to Nonsmooth Convex Variational Signal Recovery. IEEE J. Sel. Topics Sig. Processing, 1(4):564–574, December 2007.
- Duchi et al. (2008) Duchi, J., Shalev-Shwartz, S., Singer, Y., and Chandra, T. Efficient projections onto the l1-ball for learning in high dimensions. In Intl. Conf. Machine Learning (ICML), pp. 272–279, New York, July 2008. ACM Press.
- Ganesh et al. (2009) Ganesh, A., Lin, Z., Wright, J., Wu, L., Chen, M., and Ma, Y. Fast algorithms for recovering a corrupted low-rank matrix. In Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 213–215, Aruba, Dec. 2009.
- Halko et al. (2011) Halko, N., Martinsson, P.-G., and Tropp, J. A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288, 2011.
- Huber (2004) Huber, P. J. Robust Statistics. John Wiley and Sons, 2 edition, 2004.
- Kyrillidis & Cevher (2014) Kyrillidis, A. and Cevher, V. Matrix recipes for hard thresholding methods. Journal of Mathematical Imaging and Vision, 48(2):235–265, 2014.
- Larsen (1998) Larsen, R. M. Lanczos bidiagonalization with partial reorthogonalization. Tech. Report. DAIMI PB-357, Department of Computer Science, Aarhus University, September 1998.
- Lee et al. (2010) Lee, J., Recht, B., Salakhutdinov, R., Srebro, N., and Tropp, J.A. Practical large-scale optimization for max-norm regularization. In Neural Information Processing Systems (NIPS), Vancouver, 2010.
- Lin et al. (2010) Lin, Z., Chen, M., and Ma, Y. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
- Mansour & Vetro (2014) Mansour, H. and Vetro, A. Video background subtraction using semi-supervised robust matrix completion. In To appear in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2014.
- Peng et al. (2012) Peng, Y., Ganesh, A., Wright, J., Xu, W., and Ma, Y. RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images. IEEE Trans. Pattern Analysis and Machine Intelligence, 34(11):2233–2246, 2012.
- Recht & Ré (2011) Recht, B. and Ré, C. Parallel stochastic gradient algorithms for large-scale matrix completion. Math. Prog. Comput., pp. 1–26, 2011.
- Rockafellar (1970) Rockafellar, R. T. Convex analysis. Princeton Mathematical Series, No. 28. Princeton University Press, Princeton, N.J., 1970.
- Shen et al. (2014) Shen, Y., Wen, Z., and Zhang, Y. Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization. Optimization Methods and Software, 29(2):239–263, March 2014.
- Tao & Yuan (2011) Tao, M. and Yuan, X. Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optimization, 21:57–81, 2011.
- van den Berg & Friedlander (2008) van den Berg, E. and Friedlander, M. P. Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Computing, 31(2):890–912, 2008. software: http://www.cs.ubc.ca/~mpf/spgl1/.
- van den Berg & Friedlander (2011) van den Berg, E. and Friedlander, M. P. Sparse optimization with least-squares constraints. SIAM J. Optimization, 21(4):1201–1229, 2011.
- Wright et al. (2009a) Wright, J., Ganesh, A., Rao, S., and Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices by convex optimization. In Neural Information Processing Systems (NIPS), 2009a.
- Wright et al. (2009b) Wright, S. J., Nowak, R. D., and Figueiredo, M. A. T. Sparse Reconstruction by Separable Approximation. IEEE Trans. Sig. Processing, 57(7):2479–2493, July 2009b.
- Zhang et al. (2011) Zhang, Z., Liang, X., Ganesh, A., and Ma, Y. TILT: Transform invariant low-rank textures. In Kimmel, R., Klette, R., and Sugimoto, A. (eds.), Computer Vision – ACCV 2010, volume 6494 of Lecture Notes in Computer Science, pp. 314–328. Springer, 2011.