Support-Predicted Modified-CS for Recursive Robust Principal Components' Pursuit
Abstract
This work proposes a causal and recursive algorithm for solving the “robust” principal components' analysis (PCA) problem. We primarily focus on robustness to correlated outliers. In recent work, we proposed a new way to look at this problem and showed how a key part of its solution strategy involves solving a noisy compressive sensing (CS) problem. However, if the support size of the outliers becomes too large for a given dimension of the current PC space, then the number of “measurements” available for CS may become too small. In this work, we show how to address this issue by utilizing the correlation of the outliers to predict their support at the current time, and using this as “partial support knowledge” for solving Modified-CS instead of CS.
I Introduction
In this work, we propose a real-time (causal and recursive) algorithm for solving the “robust” principal components' analysis (PCA) problem. Here, “robust” refers to robustness to both independent and correlated sparse outliers, although we focus on the latter. The goal of PCA is to find the principal components' (PC) space, which is the smallest dimensional subspace that spans (or, in practice, approximately spans) a given dataset. Computing the PCs in the presence of outliers is called robust PCA. If the PC space changes over time, there is a need to update the PCs. Doing this recursively is referred to as “recursive robust PCA”. Often the data vectors (both the low-rank component and the outliers) are correlated.
A key application where this problem occurs is automatic video surveillance, where the goal is to separate a slowly changing background from moving foreground objects. The background sequence is well modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the “correlated sparse outliers”. We will use this as the motivating problem throughout this paper. Other important applications include sensor-network-based detection and tracking of abnormal events such as forest fires or oil spills, and online detection of brain activation patterns from fMRI sequences (the “active” part of the brain can be interpreted as a correlated sparse outlier).
Note that even though we use the term “outlier” for the moving objects or the active brain regions, etc., quite often these are actually the “signals of interest”. But for PCA (used to estimate the low-rank subspace of the background), they are the “outliers” that make the PCA problem difficult.
The recursive robust PCA problem that we study in this work can be precisely defined as follows. The outlier-corrupted data vectors (we will call them measurements), $M_t \in \mathbb{R}^n$, at each time $t$, can be rewritten as

$M_t = L_t + S_t$   (1)

where $L_t$ is the “low-rank” part and $S_t$ is the sparse outlier, which can be correlated over time. We put “low-rank” in quotes since it does not mean anything for a single vector. To make this precise, we can rewrite

$L_t = U x_t$

where $U$ is an unknown orthonormal matrix and $x_t$ is a sparse vector whose support changes slowly over time and whose elements are spatially uncorrelated. Let $N_t$ denote the support of $x_t$, and assume that it is piecewise constant with time. Then the columns of the submatrix, $P_t := U_{N_t}$, span the low-rank subspace in which the current set of $L_t$'s lie, and $r_t := \mathrm{rank}(P_t) = |N_t|$. We refer to $P_t$ as the principal components (PC) matrix. Clearly, this is also piecewise constant with time. If at a given time, $t$, $N_t$ changes, then $P_t$ also changes. New directions get added to it and some directions get removed from it. We assume realistic correlation models on both $x_t$ (and hence $L_t$) and on $S_t$. These are defined in Sec. II.
Suppose we have a good estimate of the initial PC matrix, $\hat P_0$ (in most applications, it is easy to get a short sequence with no moving objects that can serve as the “training data” to estimate $P_0$ by simple PCA). For $t > 0$, our goal is to recursively keep estimating the sparse part, $S_t$, at each time, and to keep updating $\hat P_t$ every so often, by collecting the recent estimates of $\hat L_t := M_t - \hat S_t$.
In (1), if $P_t$ were known, and if we did not want a recursive solution, then this problem would be similar to the dense error correction problem studied in recent work [1, 2]. Of course, the reason we need to do PCA at all is that $P_t$ is unknown.
There has been a large amount of earlier work on robust PCA, e.g. [3, 4], and recursive robust PCA, e.g. [5, 6]. In these works, (i) either the locations of the missing/corrupted data points are assumed known [5], which is not a practical assumption; or (ii) the corrupted pixels are first detected and then filled in using some heuristics; or (iii) the entire outlier vector is simply removed [3, 4, 6]. Very often, outliers affect only a small part of the data vector, e.g., the moving objects may occupy only a small image region. In a series of recent works [7, 8, 9], an elegant solution has been proposed that treats the outliers as sparse vectors and hence does not require a two-step outlier location detection/correction process, and also does not throw out the entire vector. In [7, 8, 9], the offline “robust PCA” problem is redefined as a problem of separating a low-rank matrix, $L$, from a sparse matrix, $S$, using the data matrix, $M := L + S$. It was shown that by solving

$\min_{L,S} \|L\|_* + \lambda \|S\|_1$ subject to $L + S = M$

(where $\|L\|_*$ is the sum of singular values of $L$ while $\|S\|_1$ is the $\ell_1$ norm of $S$ seen as a long vector), one can recover $L$ and $S$ exactly, provided the singular vectors of $L$ are spread out enough (not sparse), the support and signs of $S$ are uniformly random (thus $S$ is not low-rank), and the rank of $L$ is sufficiently small for a given sparsity of $S$. This was called Principal Components' Pursuit (PCP). PCP was motivated as a tool for video surveillance applications to track moving objects (sparse part) against a slowly changing background (low-rank part).
While PCP is an elegant idea, it has three practical limitations. First, in surveillance applications, one would like to obtain the estimates on-the-fly and quickly as each new frame comes in, rather than in a batch fashion. Second, in many applications, the support sets over time will be heavily correlated, and often also overlapping, e.g. in the case of moving objects forming the sparse part. This can result in the matrix $S := [S_1, \dots, S_t]$ being low-rank and thus makes it impossible for PCP to separate $S$ from $L$ [see Fig. 3]. Third, PCP requires the support of $S$ to be quite small for a given rank of $L$, e.g. see Table 1 of [7]. But, often, this does not hold.
To address the first two drawbacks, in [10], we proposed a recursive robust PCP (RRPCP) algorithm that was also “robust” to time-correlated sparse outlier sequences. In this work, we show how we can use the time-correlation of the “outliers” to our advantage to address the third limitation above.
I-A Motivation and Key Ideas
The key idea of Recursive Robust PCP (RRPCP) [10] is as follows. Assume that the current PC matrix, $\hat P$, has been estimated. We project the outlier-corrupted data vector, $M_t$, into the space perpendicular to it, i.e. we compute

$y_t := A M_t$, where $A := (\hat P_\perp)'$

and $\hat P_\perp$ is an orthogonal complement of $\hat P$. If $\hat P$ is $n \times r$ dimensional, then the dimension of $y_t$ is $m := n - r$, where $m < n$. Notice that

$y_t = A S_t + \beta_t$, where $\beta_t := A L_t$

If $\mathrm{span}(\hat P) \approx \mathrm{span}(P_t)$, then $\beta_t \approx 0$, i.e. this should nullify most of the contribution of the low-rank part, so that $\beta_t$ can be interpreted as small “noise”. Finding the sparse outlier, $S_t$, from this projected data vector, $y_t$, now becomes a noisy sparse reconstruction / compressive sensing (CS) [11, 12, 13] problem. We can solve

$\min_s \|s\|_1$ subject to $\|y_t - A s\|_2 \le \epsilon$   (2)

with $\epsilon$ chosen as explained in [10]. Denote its output by $\hat s_t$. We can then estimate $\hat L_t := M_t - \hat S_t$, which can be used to recursively update $\hat P$ every so often as in [10, Algorithm 1]. When a new direction gets added to $P_t$, but $\hat P$ is not yet updated, the “noise”, $\beta_t$, will start increasing. We explain how to “cancel” some of it in Sec. II-B after explaining the model on $L_t$. The block diagram is shown in Fig. 1(a). Our idea is also somewhat related to that of [14, 15] in that all of these also try to cancel the “low-rank” part by projecting the original data vector into the perpendicular space of the tall matrix that spans the “low-rank” part. But the big difference is that in all those works, this matrix is known. In our problem, this matrix (the PC matrix) is unknown and can change with time.
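The projection step above can be sketched numerically. The following is a minimal illustration (all dimensions, supports, and matrices are made-up example values): projecting onto the orthogonal complement of the PC matrix nullifies the low-rank part while leaving a linear sketch of the sparse outlier.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 5

# Hypothetical estimated PC matrix P_hat with orthonormal columns.
P_hat = np.linalg.qr(rng.standard_normal((n, r)))[0]

# Orthogonal complement P_perp: the remaining n - r columns of a full orthonormal basis.
Q_full = np.linalg.qr(np.hstack([P_hat, rng.standard_normal((n, n - r))]))[0]
P_perp = Q_full[:, r:]
A = P_perp.T                               # projection matrix, (n - r) x n

# One outlier-corrupted frame M_t = L_t + S_t.
L_t = P_hat @ rng.standard_normal(r)       # low-rank part, lies in span(P_hat)
S_t = np.zeros(n)
S_t[10:18] = 5.0 * rng.standard_normal(8)  # sparse outlier on 8 pixels
M_t = L_t + S_t

y_t = A @ M_t                              # projected "measurements", length n - r
beta_t = A @ L_t                           # residual "noise" term, ~0 when P_hat is exact
```

Since $\hat P$ here spans the low-rank part exactly, $\beta_t$ is zero up to floating-point error and $y_t \approx A S_t$; in the actual algorithm $\hat P$ is only an estimate, so $\beta_t$ is small but nonzero.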
Now, if the support size of the sparse outliers increases for a given rank $r$, the number of projected “measurements” available for the CS step, $m = n - r$, may become too small for CS to work. But notice that the correlated sparse outliers (e.g. moving foreground objects in video) can be interpreted as sparse signal sequences whose support change over time is either very slow; or quite often is not slow, but is still highly correlated, e.g. the support can “move” or “expand” or change according to some model over time. By using a model on how the objects move/deform, or other models on how the support changes, it should be possible to obtain a support prediction that can serve as an accurate “partial support knowledge”. We can then tap into our recent work on Modified-CS, which solves the sparse recovery problem when partial support knowledge is available [16]. Denote the partial support knowledge by $T$. Modified-CS tries to find the solution that is sparsest outside the set $T$ among all solutions that satisfy the data constraint. It does this by replacing $\|s\|_1$ in (2) by $\|s_{T^c}\|_1$. As proved and experimentally shown in [17], in the noisy case, it has a stable and small error using far fewer measurements than what simple CS needs.
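The cited works solve Modified-CS as a convex program. As a rough illustration only, the sketch below solves an unconstrained Lagrangian variant by proximal gradient (ISTA), where the soft-threshold is simply set to zero on the known-support set $T$; the problem sizes, `lam`, and iteration count are arbitrary choices for the example, not the paper's solver or settings.

```python
import numpy as np

def modified_cs_ista(A, y, T, lam=0.05, n_iter=3000):
    # Minimize 0.5*||y - A s||^2 + lam*||s_{T^c}||_1 by proximal gradient:
    # soft-threshold every entry EXCEPT those in the known-support set T.
    n = A.shape[1]
    s = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    thresh = np.full(n, lam * step)
    thresh[list(T)] = 0.0                    # no shrinkage on the known support
    for _ in range(n_iter):
        g = s - step * (A.T @ (A @ s - y))   # gradient step on the quadratic term
        s = np.sign(g) * np.maximum(np.abs(g) - thresh, 0.0)
    return s

rng = np.random.default_rng(1)
m, n, k = 35, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
s_true = np.zeros(n)
s_true[:k] = 2.0 + 0.5 * rng.standard_normal(k)
y = A @ s_true
T = set(range(6))                            # partial support knowledge: 6 of the 8 indices
s_hat = modified_cs_ista(A, y, T)
```

With most of the support known, the effective number of unknowns is small relative to $m$, which is exactly the regime where Modified-CS succeeds with far fewer measurements than plain $\ell_1$ minimization.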
In this work, we demonstrate the above idea by assuming that the foreground contains a single rigid moving object that follows a constant velocity model (with small random acceleration). We use a Kalman filter (KF) to track its motion over time. The KF-predicted location of the object, together with its size, tells us its predicted support at the current time. This is then used to solve Modified-CS and obtain an updated support estimate. The centroid (or median) of this support set gives the observed location of the object, which may be erroneous because our support estimate is not perfect. This serves as the noisy observation for the KF update step.
I-B Notation
The set operations $\cup$, $\cap$ and $\setminus$ have the usual meanings. For any set $T$, $T^c$ denotes its complement and $|T|$ denotes its cardinality.
For a matrix $A$, we let $A_j$ denote the $j$th column of $A$ and we let $A_T$ denote a matrix composed of the columns of $A$ indexed by $T$. We use $A'$ to denote its transpose, and $A^\dagger$ to denote its pseudoinverse. For a tall matrix $A$, $A^\dagger := (A'A)^{-1}A'$.
For a vector $v$, $v_i$ denotes the $i$th entry of $v$ and $v_T$ denotes a vector consisting of the entries of $v$ indexed by $T$. We use $\|v\|_k$ to denote the $\ell_k$ norm of $v$. The support of $v$, $\mathrm{supp}(v)$, is the set of indices at which $v$ has nonzero values, i.e. $\mathrm{supp}(v) := \{i : v_i \neq 0\}$.
I-C Paper Organization
We define the problem and give the correlation models in Sec. II. Our proposed method, Support-Predicted Modified-CS RRPCP (SuppPred-ModCS-RRPCP), is described in Sec. III, where we also argue why it should be stable. Experiments showing that SuppPred-ModCS-RRPCP provides a significant gain over RRPCP [10] and PCP [7] are given in Sec. IV.
II Problem Definition and Correlation Models
We give the problem definition below and explain the correlation models on $L_t$ and $S_t$ in Sec. II-B and II-C, respectively.
II-A Problem Definition
Consider the problem of tracking moving foreground objects (correlated sparse outliers) against a slowly changing background (low-rank part). To explain our ideas in a simple fashion, we assume a 1D “image” with one moving foreground object. The extension to 2D images and multiple moving objects is explained in Sec. IV; our simulations use this case.
In video applications, the outlier (moving object) is not added to the background, but is overlaid on it. In other words, at each $t$, the image $M_t$ satisfies

$(M_t)_{T_t} = (F_t)_{T_t}$, $(M_t)_{T_t^c} = (B_t)_{T_t^c}$   (3)

where $F_t$ is the sparse foreground image, $T_t := \mathrm{supp}(F_t)$ is its support, and $B_t$ is the background image. However, this problem can be rewritten in the form (1) by letting

$L_t := B_t$, $S_t := M_t - B_t$   (4)

Notice that $S_t$ and $F_t$ have the same support $T_t$.
II-B Model on $L_t$ (background image)
For $L_t$, we use the model from [10]. Briefly, we assume that the support of $x_t$, $N_t$, is piecewise constant with time, and also assume that each nonzero element of $x_t$ is independent and follows a piecewise stationary model with nonstationary transients between pieces. Specifically, an independent first-order autoregressive (AR) model is assumed for each nonzero element of $x_t$, all with the same AR parameter, $f$, as given in [10]. Every so many frames, some indices get added to or deleted from $N_t$. When a new direction gets added to $N_t$, the variance of $x_t$ along that direction initially starts at some small value, but slowly increases to a stable large value. Before an existing direction gets deleted from $N_t$, the variance of $x_t$ along that direction decays exponentially to zero.
When new directions are added to $P_t$, but are not yet part of the current $\hat P$, the “noise”, $\beta_t$, will start to increase. But by using the AR model, we can cancel some of this noise by replacing $y_t$ with $\tilde y_t$ defined as

$\tilde y_t := y_t - f A \hat L_{t-1}$   (5)

Now, the “noise” is only $A(L_t - f \hat L_{t-1})$, which is much smaller than $\beta_t = A L_t$ if $f$ is close to 1 and if $\hat L_{t-1} \approx L_{t-1}$.
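A quick way to see the size of this gain is to simulate a single AR(1) coefficient and compare the power of the raw process with the power left after subtracting its AR prediction. The parameter values below are illustrative, not those of [10].

```python
import numpy as np

rng = np.random.default_rng(2)
f, T_len = 0.95, 20000          # AR parameter close to 1 (illustrative value)

# One nonzero element of x_t: x_t = f*x_{t-1} + nu_t, with unit stationary variance.
nu = np.sqrt(1.0 - f**2) * rng.standard_normal(T_len)
x = np.zeros(T_len)
for t in range(1, T_len):
    x[t] = f * x[t - 1] + nu[t]

raw_power = np.var(x[1:])                    # "noise" power without cancellation
residual_power = np.var(x[1:] - f * x[:-1])  # power after subtracting the AR prediction
```

For a unit-variance AR(1) process, the residual power is $(1 - f^2)$ times the raw power, here roughly a 10x reduction, which is why the cancellation in (5) helps when $f$ is close to 1.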
II-C Model on Support Change of $S_t$ (foreground image)
Let $g_t$ be the location of the foreground object's centroid, let $v_t$ denote its velocity and let $w$ denote its width. Thus, its support is the set of $w$ consecutive indices centered at $g_t$,

$T_t = \{i : |i - g_t| \le w/2\}$

Let $G_t := [g_t, v_t]'$ denote the object's state. We assume a constant velocity model with small random acceleration [18] on the object's motion, i.e.,

$G_t = \Phi G_{t-1} + n_t$, $\Phi := [1 \ 1; 0 \ 1]$   (6)

where $n_t$ is bounded noise with zero mean and covariance $Q$. The matrix $Q$ is of the form $Q = [0 \ 0; 0 \ \sigma_a^2]$ because we only add noise to the velocity and not to the position [18].
III Support-Predicted Modified-CS Recursive Robust Principal Components' Pursuit
In this section, we explain our algorithm.
III-A Support-Predicted Modified-CS
Recall that in RRPCP, the number of projected “measurements” available for the CS step is $m = n - r$, where $r$ is the rank of $\hat P$. If the support size of the sparse part, $|T_t|$, increases for a given $r$, then $m$ may become too small for CS to work. In this section, we show how to use the correlated support change of $S_t$ along with Modified-CS to address this problem. The overall idea is as follows. The support of $S_t$ follows the model given in Sec. II-C. We use this in a KF to obtain the object's location prediction, which can then give us its support prediction, $\hat T_{t|t-1}$. We then solve Modified-CS, given in (14), with $T = \hat T_{t|t-1}$. The support estimate of the Modified-CS output serves as the updated support, $\hat T_t$, and the centroid (one can replace the centroid by the median for more robustness) of this updated support serves as the “observed” location for the KF update step to update the object's location estimate. We explain each of these steps below.

Algorithm 1 (Support-Predicted Modified-CS):
1. Predict centroid by (11)-(12).
2. Predict support by (13).
3. Update support:
   Modified-CS: solve (14) using $T = \hat T_{t|t-1}$; denote the output by $\hat s$.
   Add-LS-Del procedure:
   $T_{add} = T \cup \{i : |(\hat s)_i| > \alpha_{add}\}$   (7)
   $(\hat s_{add})_{T_{add}} = A_{T_{add}}^\dagger \tilde y_t$, $(\hat s_{add})_{T_{add}^c} = 0$   (8)
   $\hat T_t = \{i : |(\hat s_{add})_i| > \alpha_{del}\}$   (9)
   $(\hat s_t)_{\hat T_t} = A_{\hat T_t}^\dagger \tilde y_t$, $(\hat s_t)_{\hat T_t^c} = 0$   (10)
4. Update centroid by (16)-(18).

Predict Centroid: Let $\hat G_{t|t-1}$ represent the estimate of $G_t$ at time $t$ given measurements up to, and including, time $t-1$; a similar rule applies for $\hat G_{t|t}$. Let $\Sigma_{t|t-1}$, $\Sigma_{t|t}$ and $K_t$ denote the prediction and updated error covariance matrices and the Kalman gain used by the KF. Compute

$\hat G_{t|t-1} = \Phi \hat G_{t-1|t-1}$   (11)
$\Sigma_{t|t-1} = \Phi \Sigma_{t-1|t-1} \Phi' + Q$   (12)
Predict Support: We can get a reliable support prediction as

$\hat T_{t|t-1} := \{i : |i - (\hat G_{t|t-1})_1| \le w/2\}$   (13)

where $(\hat G_{t|t-1})_1$ is the predicted centroid location.
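In code, the support prediction step is just an interval of width $w$ around the predicted centroid, clipped to the valid pixel range. Below is a minimal sketch; the function name and the rounding convention are our own illustrative choices.

```python
def predict_support(g_pred, w, n):
    # Predicted support: the indices within w/2 of the KF-predicted centroid
    # g_pred, clipped to the valid pixel range {0, ..., n-1}.
    # For odd w this yields exactly w indices (away from the image boundary).
    half = w // 2
    center = int(round(g_pred))
    lo = max(0, center - half)
    hi = min(n - 1, center + half)
    return set(range(lo, hi + 1))

T_pred = predict_support(g_pred=47.3, w=9, n=100)   # -> indices 43..51
```

The clipping matters near the image boundary: an object partially leaving the frame still gets a valid (smaller) predicted support.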
Update Support using Modified-CS: Assuming $\hat T_{t|t-1}$ is a good support prediction, we can use it as the partial support knowledge for Modified-CS, i.e. we can solve

$\min_s \|s_{T^c}\|_1$ subject to $\|\tilde y_t - A s\|_2 \le \epsilon$   (14)
with $T = \hat T_{t|t-1}$. Let $\hat s$ be the solution of (14). As explained in [17], since $\hat s$ is biased towards zero along $T^c$ and may be biased away from zero along $T$ (there is no cost or constraint on $s_T$), we will run into problems if we try to use a single threshold for support estimation. A better approach is to use the Add-LS-Del procedure, as summarized in step 3 of Algorithm 1. This was first introduced in our older work [19, 20] and simultaneously also in [21, 22]. It involves a support addition step (that uses a smaller threshold), as in (7), followed by LS estimation on the new support estimate, $T_{add}$, as in (8), and then a deletion step that thresholds the LS estimate, as in (9). This can be followed by a second LS estimation using the final support estimate, as in (10). The addition step threshold, $\alpha_{add}$, needs to be just large enough to ensure that the matrix used for LS estimation, $A_{T_{add}}$, is well-conditioned. If $\alpha_{add}$ is chosen properly, the LS estimate on $T_{add}$ will have smaller error than the Modified-CS output. As a result, deletion will be more accurate when done using this estimate. This means that one can also use a larger deletion threshold, $\alpha_{del}$, to ensure quicker deletion of extras.
Update Centroid: Let $\hat g_{obs,t}$ denote the “observed” centroid of the foreground object at time $t$, obtained by taking the mean of the updated support estimate, $\hat T_t$, i.e., $\hat g_{obs,t} := \frac{1}{|\hat T_t|} \sum_{i \in \hat T_t} i$. Our observation model is

$\hat g_{obs,t} = H G_t + \omega_t$, $H := [1 \ 0]$   (15)

where $\omega_t$ is the observation error, which is assumed to be zero mean with variance $\sigma_{obs}^2$. This error arises because there are extras and misses in $\hat T_t$ and hence $\hat g_{obs,t} \neq g_t$. In this work, $\sigma_{obs}^2$ is heuristically selected; but, in general, one can approximate it using simplifying assumptions on the support estimation errors.
The KF update step is as follows.

$K_t = \Sigma_{t|t-1} H' (H \Sigma_{t|t-1} H' + \sigma_{obs}^2)^{-1}$   (16)
$\hat G_{t|t} = \hat G_{t|t-1} + K_t (\hat g_{obs,t} - H \hat G_{t|t-1})$   (17)
$\Sigma_{t|t} = (I - K_t H) \Sigma_{t|t-1}$   (18)
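The KF recursion of (11)-(12) and (16)-(18), with the constant-velocity model of (6), can be sketched as below; the noise variances are placeholder values, not the ones used in our experiments.

```python
import numpy as np

# Constant-velocity KF for the object centroid; noise levels are placeholders.
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition of (6)
Q = np.array([[0.0, 0.0], [0.0, 0.25]])    # acceleration noise enters velocity only
H = np.array([[1.0, 0.0]])                 # (15): only the centroid is observed
R = 1.0                                    # observed-centroid error variance

def kf_predict(G, Sigma):
    # Eqs. (11)-(12)
    return Phi @ G, Phi @ Sigma @ Phi.T + Q

def kf_update(G_pred, Sigma_pred, g_obs):
    # Eqs. (16)-(18)
    K = Sigma_pred @ H.T / (H @ Sigma_pred @ H.T + R)   # Kalman gain, 2x1
    G = G_pred + (K * (g_obs - H @ G_pred)).ravel()
    Sigma = (np.eye(2) - K @ H) @ Sigma_pred
    return G, Sigma

# Track an object moving at unit velocity, with perfect centroid observations.
G, Sigma = np.array([0.0, 1.0]), np.eye(2)
for t in range(1, 6):
    G, Sigma = kf_predict(G, Sigma)
    G, Sigma = kf_update(G, Sigma, g_obs=float(t))      # observed centroid at time t
```

Because the initial state matches the true motion exactly, every innovation is zero and the state estimate stays at the true position and unit velocity; with noisy centroid observations, the gain $K_t$ trades off the prediction against the observation.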
III-B Complete Algorithm: SuppPred-ModCS-RRPCP
With the support estimate, $\hat T_t$, and the sparse estimate, $\hat s_t$, obtained in Algorithm 1, $\hat S_t$ and $\hat L_t$ can be estimated as

$(\hat S_t)_{\hat T_t} = (\hat s_t)_{\hat T_t}$, $(\hat S_t)_{\hat T_t^c} = 0$   (19)
$\hat L_t = M_t - \hat S_t$   (20)
Also, $\hat P$ can be updated as in [10, Algorithm 1].
A complete algorithm incorporating the idea of Support-Predicted Modified-CS is given in Algorithm 2.
For simplicity, we have presented SuppPred-ModCS-RRPCP for a 1D image with one moving foreground object. However, it can be easily extended to the 2D case with multiple moving objects. We explain how to do this in Sec. IV.
III-C Discussion
SuppPred-ModCS-RRPCP is a recursive approach. Hence an important question is: when and why will it be stable? We give an induction argument here that we will formalize in later work. The key idea is as follows. Everywhere in this discussion, “bounded” means bounded by a time-invariant value. Suppose that at $t-1$, the centroid estimation error is bounded and small. Since the acceleration noise is bounded and small, this means that the centroid prediction error at $t$ is also bounded and small. This, in turn, will mean that the same holds for the support prediction errors, $|\hat T_{t|t-1} \setminus T_t|$ and $|T_t \setminus \hat T_{t|t-1}|$. Using this, and arguments similar to those in [17], the support update step will also result in a $\hat T_t$ with bounded and small extras and misses (in fact, the bound on extras is zero). This step will require showing that the “noise” seen by Modified-CS, $\beta_t$, is bounded; that most nonzero elements of $S_t$ are large enough; and that $A$ satisfies certain conditions. Finally, since $\hat T_t$ has zero extras, we will just need to argue that the misses, $T_t \setminus \hat T_t$, will result in a bounded and small centroid observation error, $\omega_t$. This will finally ensure a bounded and small centroid estimation error at $t$. This, along with ensuring stability of the KF, will keep all errors bounded and small. Our simulations given in Sec. IV do indicate that SuppPred-ModCS-RRPCP is stable.
We use the KF in this paper, but in general the above argument will go through with any stable linear observer.
IV Numerical Experiments
We evaluate the performance of SuppPred-ModCS-RRPCP in the 2D case, for the problem of tracking two moving objects in a simulated image sequence. Fig. 2 shows one image frame. Each image, written as a 1D vector, $M_t$, satisfies (3).
The background image, $B_t$, is simulated according to the model in Sec. II-B. Initially, the PC matrix contains a fixed set of principal directions. Partway through the sequence, one new direction is added to $N_t$ with a small variance and slowly stabilizes; later, one existing direction starts to decay exponentially to zero.
The foreground overlay, $F_t$, consists of two blocks with different constant intensities. These two objects move independently. Each object moves along the horizontal and vertical directions independently, with some initial location and velocity, according to (6). In (6) and (15), we use zero mean truncated Gaussian noise. Note that the support size of $S_t$ is large compared with the number of projected measurements, $m$, available for the CS step.
Fig. 3 shows the reconstruction error of $S_t$ for two online methods, Algorithm 2 and RRPCP [10], as well as for offline PCP [7]. For offline PCP, we use the entire sequence and show the error for each image frame separately. PCP fails due to the correlation of the $S_t$'s. The other two online methods work well because they do not require the support and signs of $S_t$ to be i.i.d. Algorithm 2 greatly outperforms RRPCP because it utilizes the correlation model of the support of $S_t$, while RRPCP does not.
With an estimate of $S_t$, we separate the different objects by thresholding the intensity of $\hat S_t$ on the support estimate. This is needed for running a separate KF for each object's centroid.
References
 [1] John Wright and Yi Ma, “Dense error correction via l1-minimization”, IEEE Transactions on Information Theory, 2009.
 [2] J.N. Laska, M.A. Davenport, and R.G. Baraniuk, “Exact signal recovery from sparsely corrupted measurements through the pursuit of justice”, in Asilomar Conf. on Sig. Sys. Comp., Nov 2009, pp. 1556 –1560.
 [3] F. De La Torre and M. Black, “A framework for robust subspace learning”, in ICCV, 2003, pp. 54:117–142.
 [4] H. Xu, C. Caramanis, and S. Mannor, “Principal component analysis with contaminated data: The high dimensional case”, ArXiv eprints, May 2010.
 [5] M. Brand, “Incremental singular value decomposition of uncertain data with missing values”, in ECCV, 2002, pp. 707–720.
 [6] D. Skocaj and A. Leonardis, “Weighted and robust incremental method for subspace learning”, in IEEE Conf. on Comp. Vis. Pat. Rec. (CVPR), Oct. 2003, pp. 1494 –1501 vol.2.
 [7] E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?”, Submitted for publication.
 [8] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, “Sparse and lowrank matrix decompositions”, in Allerton, 2009.
 [9] A. Ganesh, J. Wright, X. Li, E. J. Candes, and Y. Ma, “Dense Error Correction for LowRank Matrices via Principal Component Pursuit”, ArXiv eprints, January 2010.
 [10] C. Qiu and N. Vaswani, “Realtime robust principal components’ pursuit”, Allerton, 2010.
 [11] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders, “Atomic decomposition by basis pursuit”, SIAM Journal on Scientific Computing, vol. 20, pp. 33–61, 1998.
 [12] E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information”, IEEE Trans. Info. Th., vol. 52(2), pp. 489–509, February 2006.
 [13] D. Donoho, “Compressed sensing”, IEEE Trans. on Information Theory, vol. 52(4), pp. 1289–1306, April 2006.
 [14] E. Candes and T. Tao, “Decoding by linear programming”, IEEE Trans. Info. Th., vol. 51(12), pp. 4203 – 4215, Dec. 2005.
 [15] Y. Jin and B. Rao, “Algorithms for robust linear regression by exploiting the connection to sparse signal recovery”, in ICASSP, 2010.
 [16] N. Vaswani and W. Lu, “Modified-CS: Modifying compressive sensing for problems with partially known support”, IEEE Trans. Signal Processing, Sep 2010.
 [17] N. Vaswani, “Stability (over time) of modified-CS and LS-CS for recursive causal sparse reconstruction”, Allerton, 2010.
 [18] H. Vincent Poor, An Introduction to Signal Detection and Estimation, Springer, second edition.
 [19] N. Vaswani, “LS-CS-residual (LS-CS): Compressive sensing on least squares residual”, IEEE Trans. Sig. Proc., vol. 58(8), pp. 4108–4120, 2010.
 [20] N. Vaswani, “Kalman filtered compressed sensing”, in IEEE Intl. Conf. Image Proc. (ICIP), 2008.
 [21] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction”, IEEE Trans. Info. Th., May 2009.
 [22] D. Needell and J.A. Tropp, “CoSaMP: Iterative signal recovery from incomplete and inaccurate samples”, Appl. Comp. Harm. Anal., 2008.