Rank-Sparsity Incoherence for Matrix Decomposition
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is NP-hard in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components, by minimizing a linear combination of the $\ell_1$ norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
Key words. matrix decomposition, convex relaxation, norm minimization, nuclear norm minimization, uncertainty principle, semidefinite programming, rank, sparsity
AMS subject classifications. 90C25; 90C22; 90C59; 93B30
1 Introduction
Complex systems and models arise in a variety of problems in science and engineering. In many applications such complex systems and models are composed of multiple simpler systems and models. Therefore, in order to better understand the behavior and properties of a complex system, a natural approach is to decompose the system into its simpler components. In this paper we consider matrix representations of systems and statistical models in which our matrices are formed by adding together sparse and low-rank matrices. We study the problem of recovering the sparse and low-rank components given no prior knowledge about the sparsity pattern of the sparse matrix, or the rank of the low-rank matrix. We propose a tractable convex program to recover these components, and provide sufficient conditions under which our procedure recovers the sparse and low-rank matrices exactly.
Such a decomposition problem arises in a number of settings, with the sparse and low-rank matrices having different interpretations depending on the application. In a statistical model selection setting, the sparse matrix can correspond to a Gaussian graphical model  and the low-rank matrix can summarize the effect of latent, unobserved variables. Decomposing a given model into these simpler components is useful for developing efficient estimation and inference algorithms. In computational complexity, the notion of matrix rigidity  captures the smallest number of entries of a matrix that must be changed in order to reduce the rank of the matrix below a specified level (the changes can be of arbitrary magnitude). Bounds on the rigidity of a matrix have several implications in complexity theory . Similarly, in a system identification setting the low-rank matrix represents a system with a small model order while the sparse matrix represents a system with a sparse impulse response. Decomposing a system into such simpler components can be used to provide a simpler, more efficient description.
1.1 Our results
Formally, the decomposition problem we are interested in can be defined as follows:
Given $C = A^* + B^*$, where $A^*$ is an unknown sparse matrix and $B^*$ is an unknown low-rank matrix, recover $A^*$ and $B^*$ from $C$ using no additional information on the sparsity pattern and/or the rank of the components.
In the absence of any further assumptions, this decomposition problem is fundamentally ill-posed. Indeed, there are a number of scenarios in which a unique splitting of $C$ into "low-rank" and "sparse" parts may not exist; for example, the low-rank matrix may itself be very sparse, leading to identifiability issues. In order to characterize when such a decomposition is possible we develop a notion of rank-sparsity incoherence, an uncertainty principle between the sparsity pattern of a matrix and its row/column spaces. This condition is based on quantities involving the tangent spaces to the algebraic variety of sparse matrices and the algebraic variety of low-rank matrices.
Two natural identifiability problems may arise. The first one occurs if the low-rank matrix itself is very sparse. In order to avoid such a problem we impose certain conditions on the row/column spaces of the low-rank matrix. Specifically, for a matrix $B$ let $T(B)$ be the tangent space at $B$ with respect to the variety of all matrices with rank less than or equal to $\mathrm{rank}(B)$. Operationally, $T(B)$ is the span of all matrices with row-space contained in the row-space of $B$ or with column-space contained in the column-space of $B$; see (LABEL:eq:t) for a formal characterization. Let $\xi(B)$ be defined as follows:
$$\xi(B) \triangleq \max_{N \in T(B),\ \|N\| \le 1} \|N\|_\infty.$$
Here $\|\cdot\|$ is the spectral norm (i.e., the largest singular value), and $\|\cdot\|_\infty$ denotes the largest entry in magnitude. Thus $\xi(B)$ being small implies that (appropriately scaled) elements of the tangent space $T(B)$ are "diffuse", i.e., these elements are not too sparse; as a result $B$ cannot be very sparse. As shown in Proposition LABEL:prop:lr (see Section LABEL:subsec:specialslr) a low-rank matrix $B$ with row/column spaces that are not closely aligned with the coordinate axes has small $\xi(B)$.
The other identifiability problem may arise if the sparse matrix has all its support concentrated in one column; the entries in this column could negate the entries of the corresponding low-rank matrix, thus leaving the rank and the column space of the low-rank matrix unchanged. To avoid such a situation, we impose conditions on the sparsity pattern of the sparse matrix so that its support is not too concentrated in any row/column. For a matrix $A$ let $\Omega(A)$ be the tangent space at $A$ with respect to the variety of all matrices with number of non-zero entries less than or equal to $|\mathrm{support}(A)|$. The space $\Omega(A)$ is simply the set of all matrices that have support contained within the support of $A$; see (LABEL:eq:om). Let $\mu(A)$ be defined as follows:
$$\mu(A) \triangleq \max_{N \in \Omega(A),\ \|N\|_\infty \le 1} \|N\|.$$
The quantity $\mu(A)$ being small for a matrix $A$ implies that the spectrum of any element of the tangent space $\Omega(A)$ is "diffuse", i.e., the singular values of these elements are not too large. We show in Proposition LABEL:prop:sp (see Section LABEL:subsec:specialslr) that a sparse matrix $A$ with "bounded degree" (a small number of non-zeros per row/column) has small $\mu(A)$.
For a given matrix $M$, it is impossible for both quantities $\xi(M)$ and $\mu(M)$ to be simultaneously small. Indeed, we prove that for any matrix $M \neq 0$ we must have that $\xi(M)\,\mu(M) \ge 1$ (see Theorem LABEL:theo:unp in Section LABEL:subsec:unp). Thus, this uncertainty principle asserts that there is no non-zero matrix $M$ with all elements in $T(M)$ being diffuse and all elements in $\Omega(M)$ having diffuse spectra. As we describe later, the quantities $\xi$ and $\mu$ are also used to characterize fundamental identifiability in the decomposition problem.
In general solving the decomposition problem is NP-hard; hence, we consider tractable approaches employing recently well-studied convex relaxations. We formulate a convex optimization problem for decomposition using a combination of the $\ell_1$ norm and the nuclear norm. For any matrix $M$ the $\ell_1$ norm is given by
$$\|M\|_1 = \sum_{i,j} |M_{i,j}|,$$
and the nuclear norm, which is the sum of the singular values, is given by
$$\|M\|_* = \sum_k \sigma_k(M),$$
where $\{\sigma_k(M)\}$ are the singular values of $M$. The $\ell_1$ norm has been used as an effective surrogate for the number of non-zero entries of a vector, and a number of results provide conditions under which this heuristic recovers sparse solutions to ill-posed inverse problems. More recently, the nuclear norm has been shown to be an effective surrogate for the rank of a matrix. This relaxation is a generalization of the previously studied trace-heuristic that was used to recover low-rank positive semidefinite matrices. Indeed, several papers demonstrate that the nuclear norm heuristic recovers low-rank matrices in various rank minimization problems [22, 4]. Based on these results, we propose the following optimization formulation to recover $A^*$ and $B^*$ given $C = A^* + B^*$:
$$(\hat A, \hat B) = \arg\min_{A, B}\ \gamma \|A\|_1 + \|B\|_* \quad \text{s.t.} \quad A + B = C.$$
Here $\gamma$ is a parameter that provides a trade-off between the low-rank and sparse components. This optimization problem is convex, and can in fact be rewritten as a semidefinite program (SDP) (see Appendix LABEL:sec:sdp).
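As a concrete illustration, this convex program can be solved at small scales with an off-the-shelf first-order scheme. The following is a minimal numpy-only ADMM sketch of the same optimization problem (this is not the SDP formulation used in the paper; the function names, the penalty parameter `rho`, and the iteration count are our own illustrative choices):

```python
import numpy as np

def shrink(M, tau):
    # Elementwise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * ||.||_* .
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def decompose(C, gamma, rho=1.0, iters=500):
    """ADMM sketch for: min gamma*||A||_1 + ||B||_*  s.t.  A + B = C."""
    A = np.zeros_like(C); B = np.zeros_like(C); Y = np.zeros_like(C)
    for _ in range(iters):
        # B-update: nuclear-norm proximal step on the current residual.
        B = svt(C - A + Y / rho, 1.0 / rho)
        # A-update: elementwise l1 proximal step.
        A = shrink(C - B + Y / rho, gamma / rho)
        # Dual update enforcing the constraint A + B = C.
        Y = Y + rho * (C - A - B)
    return A, B
```

On small synthetic instances of the random ensembles discussed later (with, e.g., $\gamma = 1/\sqrt{n}$), this sketch typically recovers both components to high accuracy.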
We prove that $(\hat A, \hat B) = (A^*, B^*)$ is the unique optimum of (LABEL:eq:sdp) for a range of $\gamma$ if $\mu(A^*)\,\xi(B^*) < \frac{1}{6}$ (see Theorem LABEL:theo:main in Section LABEL:subsec:mainres). Thus, the conditions for exact recovery of the sparse and low-rank components via the convex program (LABEL:eq:sdp) involve the tangent-space-based quantities defined in (LABEL:eq:xim) and (LABEL:eq:mum). Essentially these conditions specify that each element of $\Omega(A^*)$ must have a diffuse spectrum, and every element of $T(B^*)$ must be diffuse. In a sense that will be made precise later, the condition required for the convex program (LABEL:eq:sdp) to provide exact recovery is slightly tighter than that required for fundamental identifiability in the decomposition problem. An important feature of our result is that it provides a simple deterministic condition for exact recovery. In addition, note that the conditions only depend on the row/column spaces of the low-rank matrix $B^*$ and the support of the sparse matrix $A^*$, and not on the singular values of $B^*$ or the values of the non-zero entries of $A^*$. The reason for this is that the non-zero entries of $A^*$ and the singular values of $B^*$ play no role in the subgradient conditions with respect to the $\ell_1$ norm and the nuclear norm.
In the sequel we discuss concrete classes of sparse and low-rank matrices that have small $\mu$ and $\xi$ respectively. We also show that when the sparse and low-rank matrices $A^*$ and $B^*$ are drawn from certain natural random ensembles, the sufficient conditions of Theorem LABEL:theo:main are satisfied with high probability; consequently, (LABEL:eq:sdp) provides exact recovery with high probability for such matrices.
1.2 Previous work using incoherence
The concept of incoherence was studied in the context of recovering sparse representations of vectors from a so-called "overcomplete dictionary". More concretely, consider a situation in which one is given a vector formed by a sparse linear combination of a few elements from a combined time-frequency dictionary, i.e., a vector formed by adding a few sinusoids and a few "spikes"; the goal is to recover the spikes and sinusoids that compose the vector from the infinitely many possible solutions. Based on a notion of time-frequency incoherence, the $\ell_1$ heuristic was shown to succeed in recovering sparse solutions. Incoherence is also a concept that is implicitly used in recent work under the title of compressed sensing, which aims to recover "low-dimensional" objects such as sparse vectors [3, 11] and low-rank matrices [22, 4] given incomplete observations. Our work is closer in spirit to that in the former line of work, and can be viewed as a method to recover the "simplest explanation" of a matrix given an "overcomplete dictionary" of sparse and low-rank matrix atoms.
In Section LABEL:sec:app we elaborate on the applications mentioned previously, and discuss the implications of our results for each of these applications. Section LABEL:sec:inc formally describes conditions for fundamental identifiability in the decomposition problem based on the quantities $\xi$ and $\mu$ defined in (LABEL:eq:xim) and (LABEL:eq:mum). We also provide a proof of the rank-sparsity uncertainty principle of Theorem LABEL:theo:unp. We prove Theorem LABEL:theo:main in Section LABEL:sec:main, and also provide concrete classes of sparse and low-rank matrices that satisfy the sufficient conditions of Theorem LABEL:theo:main. Section LABEL:sec:sim describes the results of simulations of our approach applied to synthetic matrix decomposition problems. We conclude with a discussion in Section LABEL:sec:conc. The Appendix provides additional details and proofs.
2 Applications
In this section we describe several applications that involve decomposing a matrix into sparse and low-rank components.
2.1 Graphical modeling with latent variables
We begin with a problem in statistical model selection. In many applications large covariance matrices are approximated as low-rank matrices based on the assumption that a small number of latent factors explain most of the observed statistics (e.g., principal component analysis). Another well-studied class of models are those described by graphical models, in which the inverse of the covariance matrix (also called the precision or concentration or information matrix) is assumed to be sparse (typically this sparsity is with respect to some graph). We describe a model selection problem involving graphical models with latent variables. Let the covariance matrix of a collection of jointly Gaussian variables be denoted by $\Sigma_{(o\,h)}$, where $o$ represents observed variables and $h$ represents unobserved, hidden variables. The marginal statistics corresponding to the observed variables $o$ are given by the marginal covariance matrix $\Sigma_o$, which is simply a submatrix of the full covariance matrix $\Sigma_{(o\,h)}$. Suppose, however, that we parameterize our model by the information matrix given by $K_{(o\,h)} = \Sigma_{(o\,h)}^{-1}$ (such a parameterization reveals the connection to graphical models). In such a parameterization, the marginal information matrix corresponding to the inverse $\Sigma_o^{-1}$ is given by the Schur complement with respect to the block $K_h$:
$$\hat K_o = \Sigma_o^{-1} = K_o - K_{o,h}\, K_h^{-1}\, K_{h,o}.$$
Thus if we only observe the variables $o$, we only have access to $\Sigma_o$ (or $\hat K_o$). A simple explanation of the statistical structure underlying these variables involves recognizing the presence of the latent, unobserved variables $h$. However, (LABEL:eq:schur) has the interesting structure that $K_o$ is often sparse due to graphical structure amongst the observed variables $o$, while $K_{o,h}\, K_h^{-1}\, K_{h,o}$ is low-rank if the number of latent, unobserved variables is small relative to the number of observed variables (the rank is equal to the number of latent variables). Therefore, decomposing $\hat K_o$ into these sparse and low-rank components reveals the graphical structure in the observed variables as well as the effect due to (and the number of) the unobserved latent variables. We discuss this application in more detail in a separate report.
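This structure is easy to see numerically. The sketch below (our own illustration, with hypothetical graph and block sizes) builds a sparse, positive-definite joint concentration matrix over observed and hidden variables; the marginal concentration of the observed variables then equals the sparse observed block minus a correction term whose rank equals the number of hidden variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_hid = 8, 2
n = n_obs + n_hid

# Sparse joint concentration matrix K over (observed, hidden) variables;
# edges are an arbitrary illustrative choice, diagonal dominance makes K > 0.
K = np.zeros((n, n))
for (i, j) in [(0, 1), (2, 3), (4, 5), (6, 7), (1, 8), (3, 8), (5, 9), (7, 9)]:
    K[i, j] = K[j, i] = 0.3
K += np.eye(n) * (np.abs(K).sum(axis=1).max() + 1.0)

Ko = K[:n_obs, :n_obs]                 # sparse observed-observed block
Koh = K[:n_obs, n_obs:]                # observed-hidden coupling
Kh = K[n_obs:, n_obs:]                 # hidden-hidden block
L = Koh @ np.linalg.inv(Kh) @ Koh.T    # low-rank correction term

# Marginal concentration of the observed variables (inverse of the
# marginal covariance); the Schur complement says this equals Ko - L.
Sigma = np.linalg.inv(K)
K_marg = np.linalg.inv(Sigma[:n_obs, :n_obs])
```

The sparse part `Ko` reflects the graph among observed variables, while the rank of `L` counts the hidden variables.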
2.2 Matrix rigidity
The rigidity of a matrix $M$, denoted by $R_M(k)$, is the smallest number of entries that need to be changed in order to reduce the rank of $M$ below $k$. Obtaining bounds on rigidity has a number of implications in complexity theory, such as the trade-offs between size and depth in arithmetic circuits. However, computing the rigidity of a matrix is in general an NP-hard problem. For any $M \in \mathbb{R}^{n \times n}$ one can check that $R_M(k) \le (n-k)^2$ (this follows directly from a Schur complement argument). Generically every $M \in \mathbb{R}^{n \times n}$ is very rigid, i.e., $R_M(k) = (n-k)^2$, although special classes of matrices may be less rigid. We show that the SDP (LABEL:eq:sdp) can be used to compute rigidity for certain matrices with sufficiently small rigidity (see Section LABEL:subsec:rand for more details). Indeed, this convex program (LABEL:eq:sdp) also provides a certificate of the sparse and low-rank components that form such low-rigidity matrices; that is, the SDP (LABEL:eq:sdp) not only enables us to compute the rigidity for certain matrices but additionally provides the changes required in order to realize a matrix of lower rank.
2.3 Composite system identification
A decomposition problem can also be posed in the system identification setting. Linear time-invariant (LTI) systems can be represented by Hankel matrices, where the matrix represents the input-output relationship of the system. Thus, a sparse Hankel matrix corresponds to an LTI system with a sparse impulse response. A low-rank Hankel matrix corresponds to a system with small model order, and provides a minimal realization for a system. Given an LTI system $H$ formed as follows
$$H = H_s + H_{lr},$$
where $H_s$ is sparse and $H_{lr}$ is low-rank, obtaining a simple description of $H$ requires decomposing it into its simpler sparse and low-rank components. One can obtain these components by solving our rank-sparsity decomposition problem. Note that in practice one can impose in (LABEL:eq:sdp) the additional constraint that the sparse and low-rank matrices have Hankel structure.
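A toy sketch of the two extremes (the specific impulse responses are our own illustrative choices): a geometric impulse response $h[t] = 0.9^t$ yields a rank-1 Hankel matrix, since $H_{i,j} = 0.9^{i+j} = 0.9^i \cdot 0.9^j$ is an outer product, while an impulse response with a couple of isolated taps yields a sparse Hankel matrix.

```python
import numpy as np

n = 8
h_idx = np.add.outer(np.arange(n), np.arange(n))   # Hankel pattern: H[i, j] = h[i + j]

# First-order (small model order) system: geometric impulse response
# gives a rank-1 Hankel matrix.
h_low_order = 0.9 ** np.arange(2 * n - 1)
H_low = h_low_order[h_idx]

# Sparse impulse response (two isolated taps) gives a sparse Hankel matrix.
h_sparse = np.zeros(2 * n - 1)
h_sparse[[1, 4]] = 1.0
H_sparse = h_sparse[h_idx]
```

A system whose impulse response is a sum of the two kinds would produce a Hankel matrix of exactly the sparse-plus-low-rank form discussed above.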
2.4 Partially coherent decomposition in optical systems
We outline an optics application that is described in greater detail in . Optical imaging systems are commonly modeled using the Hopkins integral , which gives the output intensity at a point as a function of the input transmission via a quadratic form. In many applications the operator in this quadratic form can be well-approximated by a (finite) positive semi-definite matrix. Optical systems described by a low-pass filter are called coherent imaging systems, and the corresponding system matrices have small rank. For systems that are not perfectly coherent various methods have been proposed to find an optimal coherent decomposition , and these essentially identify the best approximation of the system matrix by a matrix of lower rank. At the other end are incoherent optical systems that allow some high frequencies, and are characterized by system matrices that are diagonal. As most real-world imaging systems are some combination of coherent and incoherent, it was suggested in  that optical systems are better described by a sum of coherent and incoherent systems rather than by the best coherent (i.e., low-rank) approximation as in . Thus, decomposing an imaging system into coherent and incoherent components involves splitting the optical system matrix into low-rank and diagonal components. Identifying these simpler components has important applications in tasks such as optical microlithography [21, 15].
3 Rank-Sparsity Incoherence
Throughout this paper, we restrict ourselves to square $n \times n$ matrices to avoid cluttered notation. All our analysis extends to rectangular $n_1 \times n_2$ matrices, if we simply replace $n$ by $\max(n_1, n_2)$.
3.1 Identifiability issues
As described in the introduction, the matrix decomposition problem can be fundamentally ill-posed. We describe two situations in which identifiability issues arise. These examples suggest the kinds of additional conditions that are required in order to ensure that there exists a unique decomposition into sparse and low-rank matrices.
First, let $A$ be any sparse matrix and let $B = e_i e_j^T$, where $e_i$ represents the $i$-th standard basis vector. In this case, the low-rank matrix $B$ is also very sparse, and a valid sparse-plus-low-rank decomposition might be $\hat A = A + e_i e_j^T$ and $\hat B = 0$. Thus, we need conditions that ensure that the low-rank matrix is not too sparse. One way to accomplish this is to require that the quantity $\xi(B)$ be small. As will be discussed in Section LABEL:subsec:specialslr, if the row and column spaces of $B$ are "incoherent" with respect to the standard basis, i.e., the row/column spaces are not aligned closely with any of the coordinate axes, then $\xi(B)$ is small.
Next, consider the scenario in which $B$ is any low-rank matrix and $A = -v e_1^T$ with $v$ being the first column of $B$. Thus, $B + A$ has zeros in the first column, $\mathrm{rank}(B + A) = \mathrm{rank}(B)$, and $B + A$ has the same column space as $B$. Therefore, a reasonable sparse-plus-low-rank decomposition in this case might be $\hat B = B + A$ and $\hat A = 0$. Here $\mathrm{rank}(\hat B) = \mathrm{rank}(B)$. Requiring that a sparse matrix $A$ have small $\mu(A)$ avoids such identifiability issues. Indeed we show in Section LABEL:subsec:specialslr that sparse matrices with "bounded degree" (i.e., few non-zero entries per row/column) have small $\mu$.
3.2 Tangent-space identifiability
We begin by describing the sets of sparse and low-rank matrices. These sets can be considered either as differentiable manifolds (away from their singularities) or as algebraic varieties; we emphasize the latter viewpoint here. Recall that an algebraic variety is defined as the zero set of a system of polynomial equations. The variety of rank-constrained matrices is defined as:
$$\mathcal{P}(k) \triangleq \{M \in \mathbb{R}^{n \times n} \mid \mathrm{rank}(M) \le k\}.$$
This is an algebraic variety since it can be defined through the vanishing of all $(k+1) \times (k+1)$ minors of the matrix $M$. The dimension of this variety is $k(2n - k)$, and it is non-singular everywhere except at those matrices with rank less than or equal to $k - 1$. For any matrix $M \in \mathcal{P}(k)$ with $\mathrm{rank}(M) = k$, the tangent space $T(M)$ with respect to $\mathcal{P}(k)$ at $M$ is the span of all matrices with either the same row-space as $M$ or the same column-space as $M$. Specifically, let $M = U \Sigma V^T$ be a singular value decomposition (SVD) of $M$ with $U, V \in \mathbb{R}^{n \times k}$, where $\Sigma \in \mathbb{R}^{k \times k}$. Then we have that
$$T(M) = \{U X^T + Y V^T \mid X, Y \in \mathbb{R}^{n \times k}\}.$$
If $\mathrm{rank}(M) = k$, the dimension of $T(M)$ is $k(2n - k)$. Note that we always have $M \in T(M)$. In the rest of this paper we view $T(M)$ as a subspace in $\mathbb{R}^{n \times n}$.
Next we consider the set of all matrices that are constrained by the size of their support. Such sparse matrices can also be viewed as algebraic varieties:
$$\mathcal{S}(m) \triangleq \{M \in \mathbb{R}^{n \times n} \mid |\mathrm{support}(M)| \le m\}.$$
The dimension of this variety is $m$, and it is non-singular everywhere except at those matrices with support size less than or equal to $m - 1$. In fact $\mathcal{S}(m)$ can be thought of as a union of $\binom{n^2}{m}$ subspaces, with each subspace being aligned with $m$ of the coordinate axes. For any matrix $M \in \mathcal{S}(m)$, the tangent space $\Omega(M)$ with respect to $\mathcal{S}(m)$ at $M$ is given by
$$\Omega(M) = \{N \in \mathbb{R}^{n \times n} \mid \mathrm{support}(N) \subseteq \mathrm{support}(M)\}.$$
If $|\mathrm{support}(M)| = m$, the dimension of $\Omega(M)$ is $m$. Note again that we always have $M \in \Omega(M)$. As with $T(M)$, we view $\Omega(M)$ as a subspace in $\mathbb{R}^{n \times n}$. Since both $T(M)$ and $\Omega(M)$ are subspaces of $\mathbb{R}^{n \times n}$, we can compare vectors in these subspaces.
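These dimension counts are easy to verify numerically. The sketch below (our own illustration) builds the orthogonal projection onto $T(B)$ as an $n^2 \times n^2$ matrix acting on vectorized matrices, using the standard formula $P_T(M) = P_U M + M P_V - P_U M P_V$, and reads off $\dim T(B) = r(2n - r)$ from its trace; the dimension of $\Omega(A)$ is simply the support size.

```python
import numpy as np

def tangent_T_projector(B):
    # (n^2 x n^2) matrix of the projection onto T(B), acting on vec(M)
    # (column-stacking convention: vec(AXB) = (B^T kron A) vec(X)).
    n = B.shape[0]
    r = np.linalg.matrix_rank(B)
    U, _, Vt = np.linalg.svd(B)
    PU = U[:, :r] @ U[:, :r].T      # projection onto column space of B
    PV = Vt[:r, :].T @ Vt[:r, :]    # projection onto row space of B
    I = np.eye(n)
    return np.kron(I, PU) + np.kron(PV, I) - np.kron(PV, PU)

rng = np.random.default_rng(0)
n, r = 6, 2
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
PT = tangent_T_projector(B)
dim_T = int(round(np.trace(PT)))   # trace of an orthogonal projector = its rank

A = np.zeros((n, n)); A[[0, 2, 3], [1, 4, 3]] = 1.0
dim_Omega = np.count_nonzero(A)    # one dimension per support entry
```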
Before analyzing whether $(A^*, B^*)$ can be recovered in general (for example, using the SDP (LABEL:eq:sdp)), we ask a simpler question. Suppose that we had prior information about the tangent spaces $\Omega(A^*)$ and $T(B^*)$, in addition to being given $C = A^* + B^*$. Can we then uniquely recover $(A^*, B^*)$ from $C$? Assuming such prior knowledge of the tangent spaces is unrealistic in practice; however, we obtain useful insight into the kinds of conditions required on sparse and low-rank matrices for exact decomposition. Given this knowledge of the tangent spaces, a necessary and sufficient condition for unique recovery is that the tangent spaces $\Omega(A^*)$ and $T(B^*)$ intersect transversally:
$$\Omega(A^*) \cap T(B^*) = \{0\}.$$
That is, the subspaces $\Omega(A^*)$ and $T(B^*)$ have a trivial intersection. The sufficiency of this condition for unique decomposition is easily seen. For the necessity part, suppose for the sake of a contradiction that a non-zero matrix $M$ belongs to $\Omega(A^*) \cap T(B^*)$; one can add $M$ to $A^*$ and subtract $M$ from $B^*$ while still having a valid decomposition, which violates the uniqueness requirement. The following proposition, proved in Appendix LABEL:sec:proofs, provides a simple condition in terms of the quantities $\mu(A^*)$ and $\xi(B^*)$ for the tangent spaces $\Omega(A^*)$ and $T(B^*)$ to intersect transversally.
Given any two matrices $A$ and $B$, we have that
$$\mu(A)\,\xi(B) < 1 \ \implies\ \Omega(A) \cap T(B) = \{0\},$$
where $\xi(B)$ and $\mu(A)$ are defined in (LABEL:eq:xim) and (LABEL:eq:mum), and the tangent spaces $\Omega(A)$ and $T(B)$ are defined in (LABEL:eq:om) and (LABEL:eq:t).
Thus, both $\mu(A^*)$ and $\xi(B^*)$ being small implies that the tangent spaces $\Omega(A^*)$ and $T(B^*)$ intersect transversally; consequently, we can exactly recover $(A^*, B^*)$ given $\Omega(A^*)$ and $T(B^*)$. As we shall see, the condition required in Theorem LABEL:theo:main (see Section LABEL:subsec:mainres) for exact recovery using the convex program (LABEL:eq:sdp) will be simply a mild tightening of the condition required above for unique decomposition given the tangent spaces.
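Transversality can also be certified numerically for specific pairs: since both tangent spaces are subspaces of $\mathbb{R}^{n \times n}$, $\Omega(A) \cap T(B) = \{0\}$ exactly when the minimal principal angle between them is positive, i.e., when $\|P_{\Omega(A)} P_{T(B)}\| < 1$. A sketch of this check (our own illustration, with a degree-1 support and a random low-rank matrix):

```python
import numpy as np

def projector_T(B):
    # Projection onto T(B) as an n^2 x n^2 matrix on vec(M) (column-stacking).
    r = np.linalg.matrix_rank(B)
    U, _, Vt = np.linalg.svd(B)
    PU = U[:, :r] @ U[:, :r].T
    PV = Vt[:r, :].T @ Vt[:r, :]
    I = np.eye(B.shape[0])
    return np.kron(I, PU) + np.kron(PV, I) - np.kron(PV, PU)

def projector_Omega(A):
    # Projection onto Omega(A): keeps the entries on support(A).
    return np.diag((np.abs(A) > 0).astype(float).flatten(order="F"))

rng = np.random.default_rng(3)
n, r = 8, 2
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = np.zeros((n, n))
A[rng.permutation(n), np.arange(n)] = 1.0   # one non-zero per row/column

# Cosine of the minimal principal angle between Omega(A) and T(B);
# a value strictly below 1 certifies a transverse intersection.
cos_min_angle = np.linalg.norm(projector_Omega(A) @ projector_T(B), 2)
```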
3.3 Rank-sparsity uncertainty principle
Another important consequence of Proposition LABEL:prop:trans is that we have an elementary proof of the following rank-sparsity uncertainty principle.
For any matrix $M \neq 0$, we have that
$$\xi(M)\,\mu(M) \ge 1,$$
where $\xi(M)$ and $\mu(M)$ are as defined in (LABEL:eq:xim) and (LABEL:eq:mum) respectively.
Proof: Given any $M \neq 0$ it is clear that $M \in \Omega(M) \cap T(M)$, i.e., $M$ is an element of both tangent spaces. However $\mu(M)\,\xi(M) < 1$ would imply from Proposition LABEL:prop:trans that $\Omega(M) \cap T(M) = \{0\}$, which is a contradiction. Consequently, we must have that $\mu(M)\,\xi(M) \ge 1$.
Hence, for any matrix $M \neq 0$ both $\xi(M)$ and $\mu(M)$ cannot be simultaneously small. Note that Proposition LABEL:prop:trans is an assertion involving $\mu$ and $\xi$ for (in general) different matrices, while Theorem LABEL:theo:unp is a statement about $\mu$ and $\xi$ for the same matrix. Essentially the uncertainty principle asserts that no matrix can be too sparse while having "diffuse" row and column spaces. An extreme example is the matrix $M = e_1 e_1^T$, which has the property that $\mu(M)\,\xi(M) = 1$.
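This extreme case is easy to verify directly (a small sanity check of our own):

```python
import numpy as np

n = 4
M = np.zeros((n, n)); M[0, 0] = 1.0   # the extreme example M = e1 e1^T

# mu(M): Omega(M) is spanned by E11 alone, and the maximum spectral norm
# over {c * E11 : |c| <= 1} is attained at c = +/-1, giving mu(M) = 1.
mu = max(np.linalg.norm(s * M, 2) for s in (-1.0, 1.0))

# xi(M): |N_ij| <= ||N|| for every matrix, so xi(M) <= 1 for any M; here
# the element N = E11 of T(M) has ||N|| = ||N||_inf = 1, so xi(M) = 1,
# and the uncertainty-principle product mu(M) * xi(M) equals exactly 1.
xi = 1.0
```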
4 Exact Decomposition Using Semidefinite Programming
We begin this section by studying the optimality conditions of the convex program (LABEL:eq:sdp), after which we provide a proof of Theorem LABEL:theo:main with simple conditions that guarantee exact decomposition. Next we discuss concrete classes of sparse and low-rank matrices that satisfy the conditions of Theorem LABEL:theo:main, and can thus be uniquely decomposed using (LABEL:eq:sdp).
4.1 Optimality conditions
The orthogonal projection onto the space $\Omega(A^*)$ is denoted $P_{\Omega(A^*)}$, which simply sets to zero those entries with support not inside $\mathrm{support}(A^*)$. The subspace orthogonal to $\Omega(A^*)$ is denoted $\Omega(A^*)^\perp$, and it consists of matrices with complementary support, i.e., supported on $\mathrm{support}(A^*)^c$. The projection onto $\Omega(A^*)^\perp$ is denoted $P_{\Omega(A^*)^\perp}$.
Similarly the orthogonal projection onto the space $T(B^*)$ is denoted $P_{T(B^*)}$. Letting $B^* = U \Sigma V^T$ be the SVD of $B^*$, we have the following explicit relation for $P_{T(B^*)}$:
$$P_{T(B^*)}(M) = P_U M + M P_V - P_U M P_V.$$
Here $P_U = U U^T$ and $P_V = V V^T$. The space orthogonal to $T(B^*)$ is denoted $T(B^*)^\perp$, and the corresponding projection is denoted $P_{T(B^*)^\perp}$. The space $T(B^*)^\perp$ consists of matrices with row-space orthogonal to the row-space of $B^*$ and column-space orthogonal to the column-space of $B^*$. We have that
$$P_{T(B^*)^\perp}(M) = (I_n - P_U)\, M\, (I_n - P_V),$$
where $I_n$ is the $n \times n$ identity matrix.
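These two maps are complementary orthogonal projections, and a quick numerical check (our own illustration) confirms the expected identities: $P_T(M) + P_{T^\perp}(M) = M$, idempotence, and orthogonality of the two images.

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 7, 2
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
U, _, Vt = np.linalg.svd(B)
PU = U[:, :r] @ U[:, :r].T        # projection onto the column space of B
PV = Vt[:r, :].T @ Vt[:r, :]      # projection onto the row space of B
I = np.eye(n)

def P_T(M):
    # Projection onto the tangent space T(B).
    return PU @ M + M @ PV - PU @ M @ PV

def P_T_perp(M):
    # Projection onto the orthogonal complement of T(B).
    return (I - PU) @ M @ (I - PV)
```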
Following standard notation in convex analysis, we denote the subgradient of a convex function $f$ at a point $\hat x$ in its domain by $\partial f(\hat x)$. The subgradient $\partial f(\hat x)$ consists of all $y$ such that
$$f(x) \ge f(\hat x) + \langle y,\, x - \hat x \rangle \quad \text{for all } x.$$
From the optimality conditions for a convex program, we have that $(A^*, B^*)$ is an optimum of (LABEL:eq:sdp) if and only if there exists a dual $\hat Q \in \mathbb{R}^{n \times n}$ such that
$$\hat Q \in \gamma\, \partial \|A^*\|_1 \quad \text{and} \quad \hat Q \in \partial \|B^*\|_*.$$
From the characterization of the subgradient of the $\ell_1$ norm, we have that $\hat Q \in \gamma\, \partial \|A^*\|_1$ if and only if
$$P_{\Omega(A^*)}(\hat Q) = \gamma\, \mathrm{sign}(A^*), \qquad \|P_{\Omega(A^*)^\perp}(\hat Q)\|_\infty \le \gamma.$$
Here $\mathrm{sign}(A^*_{i,j})$ equals $+1$ if $A^*_{i,j} > 0$, $-1$ if $A^*_{i,j} < 0$, and $0$ if $A^*_{i,j} = 0$. We also have that $\hat Q \in \partial \|B^*\|_*$ if and only if
$$P_{T(B^*)}(\hat Q) = U V^T, \qquad \|P_{T(B^*)^\perp}(\hat Q)\| \le 1,$$
where $B^* = U \Sigma V^T$ is the SVD of $B^*$.
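Both subgradient characterizations can be sanity-checked numerically by verifying the defining inequality $f(X) \ge f(\hat x) + \langle y, X - \hat x \rangle$ at random test points (an illustration of our own; the particular matrices and the test-point distribution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n, gamma = 6, 0.5

# Candidate subgradient of gamma*||.||_1 at A: gamma*sign(A) on the support,
# arbitrary entries of magnitude at most gamma off the support.
A = np.zeros((n, n)); A[[0, 1, 4], [2, 2, 5]] = [3.0, -1.0, 2.0]
off_support = (A == 0).astype(float)
Q1 = gamma * np.sign(A) + off_support * rng.uniform(-gamma, gamma, (n, n))

# Candidate subgradient of ||.||_* at B: U V^T plus any W with P_T(W) = 0
# and ||W|| <= 1 (W is projected onto T(B)-perp, then normalized).
r = 2
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
U, _, Vt = np.linalg.svd(B)
PU = U[:, :r] @ U[:, :r].T
PV = Vt[:r, :].T @ Vt[:r, :]
W = (np.eye(n) - PU) @ rng.standard_normal((n, n)) @ (np.eye(n) - PV)
W /= np.linalg.norm(W, 2)
Q2 = U[:, :r] @ Vt[:r, :] + W

def nuc(M):
    # Nuclear norm: sum of singular values.
    return np.linalg.svd(M, compute_uv=False).sum()
```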
Note that these are necessary and sufficient conditions for $(A^*, B^*)$ to be an optimum of (LABEL:eq:sdp). The following proposition provides sufficient conditions for $(A^*, B^*)$ to be the unique optimum of (LABEL:eq:sdp), and it involves a slight tightening of the conditions (LABEL:eq:opt), (LABEL:eq:subo), and (LABEL:eq:subt).
Suppose that $C = A^* + B^*$. Then $(\hat A, \hat B) = (A^*, B^*)$ is the unique optimizer of (LABEL:eq:sdp) if the following conditions are satisfied:
(1) $\Omega(A^*) \cap T(B^*) = \{0\}$.
(2) There exists a dual $\hat Q \in \mathbb{R}^{n \times n}$ such that
$$P_{T(B^*)}(\hat Q) = U V^T, \quad P_{\Omega(A^*)}(\hat Q) = \gamma\, \mathrm{sign}(A^*), \quad \|P_{T(B^*)^\perp}(\hat Q)\| < 1, \quad \|P_{\Omega(A^*)^\perp}(\hat Q)\|_\infty < \gamma.$$
The proof of the proposition can be found in Appendix LABEL:sec:proofs. Figure LABEL:fig:fig1 provides a visual representation of these conditions. In particular, we see that the spaces $\Omega(A^*)$ and $T(B^*)$ intersect transversely (condition (1) of Proposition LABEL:prop:sc). One can also intuitively see that guaranteeing the existence of a dual $\hat Q$ with the requisite conditions (condition (2) of Proposition LABEL:prop:sc) is perhaps easier if the intersection between $\Omega(A^*)$ and $T(B^*)$ is more transverse. Note that condition (1) of this proposition essentially requires identifiability with respect to the tangent spaces, as discussed in Section LABEL:subsec:tsiden.
4.2 Sufficient conditions based on $\mu(A^*)$ and $\xi(B^*)$
Next we provide simple sufficient conditions on $\mu(A^*)$ and $\xi(B^*)$ that guarantee the existence of an appropriate dual $\hat Q$ (as required by Proposition LABEL:prop:sc). Given matrices $A^*$ and $B^*$ with $\mu(A^*)\,\xi(B^*) < 1$, we have from Proposition LABEL:prop:trans that $\Omega(A^*) \cap T(B^*) = \{0\}$, i.e., condition (1) of Proposition LABEL:prop:sc is satisfied. We prove that if a slightly stronger condition holds, there exists a dual $\hat Q$ that satisfies the requirements of condition (2) of Proposition LABEL:prop:sc.
Suppose $C = A^* + B^*$ with $\mu(A^*)\,\xi(B^*) < \frac{1}{6}$. Then the unique optimum $(\hat A, \hat B)$ of (LABEL:eq:sdp) is $(A^*, B^*)$ for a range of $\gamma$ given in (LABEL:eq:grange).
Specifically, a particular choice of $\gamma$ (depending only on $\xi(B^*)$ and $\mu(A^*)$) always lies inside this range, and thus guarantees exact recovery of $(A^*, B^*)$.
The proof of this theorem can be found in Appendix LABEL:sec:proofs. The main idea behind the proof is that we only consider candidates for the dual $\hat Q$ that lie in the direct sum $\Omega(A^*) \oplus T(B^*)$ of the tangent spaces. Since $\mu(A^*)\,\xi(B^*) < \frac{1}{6}$, we have from Proposition LABEL:prop:trans that the tangent spaces $\Omega(A^*)$ and $T(B^*)$ have a transverse intersection, i.e., $\Omega(A^*) \cap T(B^*) = \{0\}$. Therefore, there exists a unique element $\hat Q \in \Omega(A^*) \oplus T(B^*)$ that satisfies $P_{T(B^*)}(\hat Q) = U V^T$ and $P_{\Omega(A^*)}(\hat Q) = \gamma\, \mathrm{sign}(A^*)$. The proof proceeds by showing that if $\mu(A^*)\,\xi(B^*) < \frac{1}{6}$ then the projections of this $\hat Q$ onto the orthogonal spaces $\Omega(A^*)^\perp$ and $T(B^*)^\perp$ are small, thus satisfying condition (2) of Proposition LABEL:prop:sc.
One consequence of Theorem LABEL:theo:main is that if $\mu(A^*)\,\xi(B^*) < \frac{1}{6}$, then there exists no other pair $(A', B')$ such that $A' + B' = C$ with $\mu(A')\,\xi(B') < \frac{1}{6}$. We consider this implication locally around $(A^*, B^*)$. Recall that the quantities $\mu(A^*)$ and $\xi(B^*)$ are defined with respect to the tangent spaces $\Omega(A^*)$ and $T(B^*)$. Suppose $B^*$ is slightly perturbed along the variety of rank-constrained matrices to some $B'$. This ensures that the tangent space varies smoothly from $T(B^*)$ to $T(B')$, and consequently that $\xi(B') \approx \xi(B^*)$. However, compensating for this by changing $A^*$ to $A^* + (B^* - B')$ moves the sparse component outside the variety of sparse matrices. This is because $B^* - B'$ is not sparse. Thus the dimension of the tangent space $\Omega(A^* + B^* - B')$ is much greater than that of the tangent space $\Omega(A^*)$, as a result of which $\mu(A^* + B^* - B') \gg \mu(A^*)$; therefore we have that $\mu(A^* + B^* - B')\,\xi(B') \ge \frac{1}{6}$. The same reasoning holds in the opposite scenario. Consider perturbing $A^*$ slightly along the variety of sparse matrices to some $A'$. While this ensures that $\mu(A') \approx \mu(A^*)$, changing $B^*$ to $B^* + (A^* - A')$ moves it outside the variety of rank-constrained matrices. Therefore the dimension of the tangent space $T(B^* + A^* - A')$ is greater than that of $T(B^*)$, resulting in $\xi(B^* + A^* - A') \gg \xi(B^*)$; consequently we have that $\mu(A')\,\xi(B^* + A^* - A') \ge \frac{1}{6}$.
4.3 Sparse and low-rank matrices with $\mu(A^*)\,\xi(B^*) < \frac{1}{6}$
We discuss concrete classes of sparse and low-rank matrices that satisfy the sufficient condition of Theorem LABEL:theo:main for exact decomposition. We begin by showing that sparse matrices with "bounded degree", i.e., a bounded number of non-zeros per row/column, have small $\mu$.
Let $A \in \mathbb{R}^{n \times n}$ be any matrix with at most $\deg_{\max}(A)$ non-zero entries per row/column, and with at least $\deg_{\min}(A)$ non-zero entries per row/column. With $\mu(A)$ as defined in (LABEL:eq:mum), we have that
$$\deg_{\min}(A) \le \mu(A) \le \deg_{\max}(A).$$
See Appendix LABEL:sec:proofs for the proof. Note that if $A$ has full support, i.e., $\Omega(A) = \mathbb{R}^{n \times n}$, then $\mu(A) = n$. Therefore, a constraint on the number of non-zero entries per row/column provides a useful bound on $\mu(A)$. We emphasize here that simply bounding the total number of non-zero entries in $A$ does not suffice; the sparsity pattern also plays a role in determining the value of $\mu(A)$.
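For small supports $\mu(A)$ can be computed exactly: the spectral norm is convex, so its maximum over the cube $\{N : |N_{ij}| \le 1\}$ intersected with $\Omega(A)$ is attained at a vertex, i.e., at a sign pattern on the support, and these can be enumerated. A sketch (our own illustration) on a support with exactly two non-zeros per row/column:

```python
import numpy as np
from itertools import product

def mu_exact(A):
    # mu(A) = max ||N||_2 over N supported on support(A) with |N_ij| <= 1.
    # Enumerate all sign patterns on the support (feasible for small supports).
    idx = np.argwhere(np.abs(A) > 0)
    best = 0.0
    for signs in product([-1.0, 1.0], repeat=len(idx)):
        N = np.zeros(A.shape)
        for (i, j), s in zip(idx, signs):
            N[i, j] = s
        best = max(best, np.linalg.norm(N, 2))
    return best

# Circulant-type support: every row and every column has exactly 2 non-zeros.
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = A[i, (i + 1) % n] = 1.0

deg_min = deg_max = 2
mu = mu_exact(A)   # here the all-ones sign pattern attains the maximum
```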
Next we consider low-rank matrices that have small $\xi$. Specifically, we show that matrices with row and column spaces that are incoherent with respect to the standard basis have small $\xi$. We measure the incoherence of a subspace $S \subseteq \mathbb{R}^n$ as follows:
$$\beta(S) \triangleq \max_i\ \|P_S\, e_i\|_2,$$
where $e_i$ is the $i$'th standard basis vector, $P_S$ denotes the projection onto the subspace $S$, and $\|\cdot\|_2$ denotes the vector $\ell_2$ norm. This definition of incoherence also played an important role in the results in previous work. A small value of $\beta(S)$ implies that the subspace $S$ is not closely aligned with any of the coordinate axes. In general for any $k$-dimensional subspace $S \subseteq \mathbb{R}^n$, we have that
$$\sqrt{\frac{k}{n}}\ \le\ \beta(S)\ \le\ 1,$$
where the lower bound is achieved, for example, by a subspace that spans any $k$ columns of an $n \times n$ orthonormal Hadamard matrix, while the upper bound is achieved by any subspace that contains a standard basis vector. Based on the definition of $\beta(S)$, we define the incoherence of the row/column spaces of a matrix $B \in \mathbb{R}^{n \times n}$ as
$$\mathrm{inc}(B) \triangleq \max\left\{\beta(\text{row-space}(B)),\ \beta(\text{column-space}(B))\right\}.$$
If $B = U \Sigma V^T$ is the SVD of $B$, then $\text{row-space}(B) = \mathrm{span}(V)$ and $\text{column-space}(B) = \mathrm{span}(U)$. We show in Appendix LABEL:sec:proofs that matrices with incoherent row/column spaces have small $\xi$; the proof technique for the lower bound here was suggested by Ben Recht.
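Incoherence is cheap to compute from an orthonormal basis, since $\|P_S e_i\|_2$ is just the norm of the $i$-th row of the basis matrix. A short sketch (our own illustration) comparing a random subspace with a coordinate-aligned one:

```python
import numpy as np

def beta(Q):
    # Incoherence of the subspace spanned by the orthonormal columns of Q:
    # beta(S) = max_i ||P_S e_i||_2 = the largest row norm of Q.
    return np.sqrt((Q ** 2).sum(axis=1)).max()

def inc(B):
    # Incoherence of the row/column spaces of a matrix B, via its SVD.
    r = np.linalg.matrix_rank(B)
    U, _, Vt = np.linalg.svd(B)
    return max(beta(U[:, :r]), beta(Vt[:r, :].T))

rng = np.random.default_rng(11)
n, k = 64, 4

# A random k-dimensional subspace is typically quite incoherent...
Q_rand, _ = np.linalg.qr(rng.standard_normal((n, k)))

# ...whereas a coordinate-aligned subspace achieves the upper bound beta = 1.
Q_coord = np.eye(n)[:, :k]
```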
Let $B \in \mathbb{R}^{n \times n}$ be any matrix with $\mathrm{inc}(B)$ defined as in (LABEL:eq:inc), and $\xi(B)$ defined as in (LABEL:eq:xim). We have that
$$\mathrm{inc}(B)\ \le\ \xi(B)\ \le\ 2\,\mathrm{inc}(B).$$
If $B$ is a full-rank matrix or a matrix such as $B = e_1 e_1^T$, then $\mathrm{inc}(B) = 1$. Therefore, a bound on the incoherence of the row/column spaces of $B$ is important in order to bound $\xi(B)$. Using Propositions LABEL:prop:sp and LABEL:prop:lr along with Theorem LABEL:theo:main we have the following corollary, which states that sparse bounded-degree matrices and low-rank matrices with incoherent row/column spaces can be uniquely decomposed.
Let $C = A^* + B^*$ with $\deg_{\max}(A^*)$ being the maximum number of non-zero entries per row/column of $A^*$ and $\mathrm{inc}(B^*)$ being the maximum incoherence of the row/column spaces of $B^*$ (as defined by (LABEL:eq:inc)). If we have that
$$\deg_{\max}(A^*)\ \mathrm{inc}(B^*)\ <\ \frac{1}{12},$$
then the unique optimum of the convex program (LABEL:eq:sdp) is $(A^*, B^*)$ for a range of values of $\gamma$. Specifically, a particular choice of $\gamma$ (depending only on $\xi(B^*)$ and $\mu(A^*)$) always lies inside this range, and thus guarantees exact recovery of $(A^*, B^*)$.
We emphasize that this is a result with deterministic sufficient conditions on exact decomposability.
4.4 Decomposing random sparse and low-rank matrices
Next we show that sparse and low-rank matrices drawn from certain natural random ensembles satisfy the sufficient conditions of Corollary LABEL:corl:deginc with high probability. We first consider random sparse matrices with a fixed number of non-zero entries.
Random sparsity model
The matrix $A^*$ is such that $\mathrm{support}(A^*)$ is chosen uniformly at random from the collection of all support sets of size $m$. There is no assumption made about the values of $A^*$ at locations specified by $\mathrm{support}(A^*)$.
Suppose that $A^* \in \mathbb{R}^{n \times n}$ is drawn according to the random sparsity model with $m$ non-zero entries. Let $\deg_{\max}(A^*)$ be the maximum number of non-zero entries in each row/column of $A^*$. We have that $\deg_{\max}(A^*)$ is bounded as follows
with high probability.
The proof of this lemma follows from a standard balls and bins argument, and can be found in several references (see for example ).
Next we consider low-rank matrices in which the singular vectors are chosen uniformly at random from the set of all partial isometries. Such a model was considered in recent work on the matrix completion problem, which aims to recover a low-rank matrix given observations of a subset of its entries.
Random orthogonal model 
A rank-r matrix B⋆ ∈ R^{n×n} with SVD B⋆ = UΣV^T is constructed as follows: the singular vectors U, V ∈ R^{n×r} are drawn uniformly at random from the collection of rank-r partial isometries in R^{n×r}. The choices of U and V need not be mutually independent. No restriction is placed on the singular values.
As shown in the matrix completion literature, low-rank matrices drawn from such a model have incoherent row/column spaces.
Suppose that a rank-r matrix B⋆ ∈ R^{n×n} is drawn according to the random orthogonal model. Then we have that inc(B⋆) (defined by (LABEL:eq:inc)) is bounded as

inc(B⋆) = O( √( max(r, log n) / n ) )

with very high probability.
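This scaling can be checked numerically: orthonormalizing an i.i.d. Gaussian matrix by QR yields a (Haar-)uniform partial isometry, and the resulting incoherence concentrates near the √(max(r, log n)/n) scale. The sketch below (dimensions and names are our own) is a minimal illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 400, 5

# QR of an i.i.d. Gaussian matrix gives a uniformly random n x r partial isometry.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))

# Incoherence of the spanned subspaces: largest row norm of U or V.
beta = max(np.linalg.norm(U, axis=1).max(), np.linalg.norm(V, axis=1).max())
bound = np.sqrt(max(r, np.log(n)) / n)   # the O(.) scale from the lemma
print(beta, bound)
```

Note that beta can never drop below √(r/n), since the squared row norms of U sum to r.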
Applying these two results in conjunction with Corollary LABEL:corl:deginc, we have that sparse and low-rank matrices drawn from the random sparsity model and the random orthogonal model can be uniquely decomposed with high probability.
Suppose that a rank-r matrix B⋆ ∈ R^{n×n} is drawn from the random orthogonal model, and that A⋆ ∈ R^{n×n} is drawn from the random sparsity model with m non-zero entries. Given C = A⋆ + B⋆, there exists a range of values for γ (given by (LABEL:eq:grange)) so that (Â, B̂) = (A⋆, B⋆) is the unique optimum of the SDP (LABEL:eq:sdp) with high probability provided

m = O( n^{1.5} / ( log n · √( max(r, log n) ) ) ).
Thus, for matrices B⋆ with rank smaller than n/(log n)², the SDP (LABEL:eq:sdp) yields exact recovery with high probability even when the size of the support of A⋆ is super-linear in n. During final preparation of this manuscript we learned of related contemporaneous work that specifically studies the problem of decomposing random sparse and low-rank matrices. In addition to the assumptions of our random sparsity and random orthogonal models, that work also requires that the non-zero entries of A⋆ have independently chosen signs that are +1 or −1 with equal probability, and that the left and right singular vectors of B⋆ are chosen independently of each other. For this particular specialization of our more general framework, those results improve upon our bound in Corollary LABEL:corl:rand.
Implications for the matrix rigidity problem
Corollary LABEL:corl:rand has implications for the matrix rigidity problem discussed in Section LABEL:sec:app. Recall that the rigidity R_M(k) is the smallest number of entries of M that need to be changed to reduce the rank of M below k (the changes can be of arbitrary magnitude). A generic matrix M ∈ R^{n×n} has rigidity R_M(k) = (n − k)². However, special structured classes of matrices can have low rigidity. Consider a matrix M formed by adding a sparse matrix A⋆ drawn from the random sparsity model with support size m satisfying the bound of Corollary LABEL:corl:rand, and a low-rank matrix B⋆ drawn from the random orthogonal model with rank εn for some sufficiently small fixed ε > 0. Such a matrix M has rigidity R_M(εn) ≤ m, far smaller than the (n − εn)² rigidity of a generic matrix, and one can recover the sparse and low-rank components that compose M with high probability by solving the SDP (LABEL:eq:sdp): by construction, the support size and rank of these components satisfy the sufficient condition of Corollary LABEL:corl:rand for exact recovery. Therefore, while the rigidity of a matrix is NP-hard to compute in general, for such low-rigidity matrices M one can certify the low rigidity; in fact, the SDP (LABEL:eq:sdp) provides as a certificate the sparse and low-rank matrices that form the low-rigidity matrix M.
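The rigidity upper bound R_M(k) ≤ m is simply the observation that changing the m entries in the support of the sparse component removes everything but the low-rank part. A tiny numpy illustration (sizes are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, m = 50, 2, 60

# Low-rank plus sparse, as in the rigidity discussion.
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = np.zeros((n, n))
A.flat[rng.choice(n * n, size=m, replace=False)] = rng.standard_normal(m)
M = A + B

# Changing the m entries in supp(A) reduces the rank to r,
# so R_M(k) <= m for any k > r.
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(M - A))
```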
5 Simulation Results
We confirm the theoretical predictions in this paper with some simple experimental results. We also present a heuristic for choosing the trade-off parameter γ. All our simulations were performed using YALMIP and the SDPT3 software for solving SDPs.
In the first experiment we generate random matrices according to the random sparsity and random orthogonal models described in Section LABEL:subsec:rand. To generate a random rank-r matrix B⋆ according to the random orthogonal model, we generate X, Y ∈ R^{n×r} with i.i.d. Gaussian entries and set B⋆ = XY^T. To generate an m-sparse matrix A⋆ according to the random sparsity model, we choose a support set of size m uniformly at random, and the values within this support are i.i.d. Gaussian. The goal is to recover (A⋆, B⋆) from C = A⋆ + B⋆ using the SDP (LABEL:eq:sdp). Let tol be defined as:
tol = ‖Â − A⋆‖_F / ‖A⋆‖_F + ‖B̂ − B⋆‖_F / ‖B⋆‖_F,

where (Â, B̂) is the solution of (LABEL:eq:sdp), and ‖·‖_F is the Frobenius norm. We declare success in recovering (A⋆, B⋆) if tol falls below a small fixed threshold. (We discuss the issue of choosing γ in the next experiment.) Figure LABEL:fig:simfig1 shows the success rate in recovering (A⋆, B⋆) for various values of m and r (averaged over several random trials for each (m, r) pair). Thus we see that one can recover a sufficiently sparse A⋆ and a sufficiently low-rank B⋆ from C = A⋆ + B⋆ using (LABEL:eq:sdp).
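The paper solves (LABEL:eq:sdp) with an interior-point SDP solver; as an illustrative stand-in for readers without one, the following numpy sketch solves the same convex program at small scale via a simple ADMM splitting that alternates entrywise soft-thresholding with singular-value thresholding. The step-size rule, iteration count, instance sizes, and γ value are our ad hoc choices, not the paper's.

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding: prox of tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def decompose(C, gamma, iters=500):
    """Minimize gamma*||A||_1 + ||B||_* subject to A + B = C (ADMM sketch)."""
    rho = C.size / (4.0 * np.abs(C).sum())   # ad hoc step size
    A = np.zeros_like(C); B = np.zeros_like(C); Y = np.zeros_like(C)
    for _ in range(iters):
        B = svt(C - A + Y / rho, 1.0 / rho)
        A = soft(C - B + Y / rho, gamma / rho)
        Y += rho * (C - A - B)               # dual update on A + B = C
    return A, B

# Easy synthetic instance: rank-1 plus 5-sparse, gamma on the order of 1/sqrt(n).
rng = np.random.default_rng(3)
n = 30
B0 = np.outer(rng.standard_normal(n), rng.standard_normal(n))
A0 = np.zeros((n, n)); A0.flat[rng.choice(n * n, 5, replace=False)] = 3.0
Ah, Bh = decompose(A0 + B0, gamma=1.0 / np.sqrt(n))
tol = (np.linalg.norm(Ah - A0) / np.linalg.norm(A0)
       + np.linalg.norm(Bh - B0) / np.linalg.norm(B0))
print(tol)
```

Because ADMM reaches only moderate accuracy, the recovery error here is small but not at interior-point precision, which is one reason a fixed success threshold on tol is used in the experiments.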
Next we consider the problem of choosing the trade-off parameter γ. Based on Theorem LABEL:theo:main we know that exact recovery is possible for a range of γ. Therefore, one can simply check the stability of the solution as γ is varied, without knowing the appropriate range for γ in advance. To formalize this scheme we consider the following SDP for t ∈ [0, 1], which is a slightly modified version of (LABEL:eq:sdp):

(Â_t, B̂_t) = arg min_{A, B}  t ‖A‖_1 + (1 − t) ‖B‖_∗   subject to   A + B = C.
There is a one-to-one correspondence between (LABEL:eq:sdp) and (LABEL:eq:sdpt) given by t = γ / (1 + γ). The benefit of looking at (LABEL:eq:sdpt) is that the range of valid parameters is compact, i.e., t ∈ [0, 1], as opposed to the situation in (LABEL:eq:sdp), where γ ∈ [0, ∞). We compute the difference between solutions for some t and t − ε as follows:

diff_t = ‖Â_t − Â_{t−ε}‖_F,
where ε > 0 is some small fixed constant. We generate a random sparse A⋆ and a random low-rank B⋆ as described above. Given C = A⋆ + B⋆, we solve (LABEL:eq:sdpt) for various values of t. Figure LABEL:fig:simfig2 shows two curves – one is tol_t (defined analogously to tol in (LABEL:eq:tol)) and the other is diff_t. Clearly we do not have access to tol_t in practice. However, we see that diff_t is near-zero in exactly three regions. For sufficiently small t the optimal solution to (LABEL:eq:sdpt) is (Â_t, B̂_t) = (A⋆ + B⋆, 0), while for sufficiently large t the optimal solution is (Â_t, B̂_t) = (0, A⋆ + B⋆). As seen in the figure, diff_t stabilizes for small and large t. The third “middle” range of stability is where we typically have (Â_t, B̂_t) = (A⋆, B⋆). Notice that outside of these three regions diff_t is not close to 0 and in fact changes rapidly. Therefore, if a reasonable guess for t (or γ) is not available, one could solve (LABEL:eq:sdpt) for a range of t and choose a solution corresponding to the “middle” range in which diff_t is stable and near zero. A related method to check for stability is to compute the sensitivity of the cost of the optimal solution with respect to t, which can be obtained from the dual solution.
6 Discussion
We have studied the problem of exactly decomposing a given matrix C = A⋆ + B⋆ into its sparse and low-rank components A⋆ and B⋆. This problem arises in a number of applications in model selection, system identification, complexity theory, and optics. We characterized fundamental identifiability in the decomposition problem based on a notion of rank-sparsity incoherence, which relates the sparsity pattern of a matrix and its row/column spaces via an uncertainty principle. As the general decomposition problem is NP-hard, we proposed a natural SDP relaxation (LABEL:eq:sdp) to solve it, and provided sufficient conditions on sparse and low-rank matrices under which the SDP recovers such matrices exactly. Our sufficient conditions are deterministic in nature; they essentially require that the support of the sparse matrix not be too concentrated in any row/column, while the row/column spaces of the low-rank matrix must not be closely aligned with the coordinate axes. Our analysis centers around studying the tangent spaces with respect to the algebraic varieties of sparse and low-rank matrices. Indeed, the sufficient conditions for identifiability and for exact recovery using the SDP can also be viewed as requiring that certain tangent spaces have a transverse intersection. We also demonstrated the implications of our results for the matrix rigidity problem.
An interesting problem for further research is the development of special-purpose algorithms that take advantage of structure in (LABEL:eq:sdp) to provide a more efficient solution than a general-purpose SDP solver. Another question that arises in applications such as model selection (due to noise or finite sample effects) is to approximately decompose a matrix into sparse and low-rank components.
The authors would like to thank Dr. Benjamin Recht and Prof. Maryam Fazel for helpful discussions.
A SDP formulation
The problem (LABEL:eq:sdp) can be recast as a semidefinite program (SDP). We appeal to the fact that the spectral norm ‖·‖ is the dual norm of the nuclear norm ‖·‖_∗:

‖M‖_∗ = max { trace(M^T N) : ‖N‖ ≤ 1 }.
Further, the spectral norm admits a simple semidefinite characterization:

‖M‖ = min_t  t   subject to   [ t I1  M ; M^T  t I2 ] ⪰ 0,

where I1 and I2 are identity matrices of appropriate dimensions.
From duality, we can obtain the following SDP characterization of the nuclear norm:

‖M‖_∗ = min_{W1, W2}  (trace(W1) + trace(W2)) / 2   subject to   [ W1  M ; M^T  W2 ] ⪰ 0.
Putting these facts together, (LABEL:eq:sdp) can be rewritten as

min_{A, B, W1, W2, Z}  γ 1^T Z 1 + (trace(W1) + trace(W2)) / 2
subject to  −Z_{ij} ≤ A_{ij} ≤ Z_{ij} for all (i, j),   [ W1  B ; B^T  W2 ] ⪰ 0,   A + B = C.

Here, 1 refers to the vector that has 1 in every entry.
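The nuclear-norm characterization can be verified numerically: if M = UΣV^T, then W1 = UΣU^T and W2 = VΣV^T are feasible and achieve (trace(W1) + trace(W2))/2 = ‖M‖_∗, while N = UV^T attains the maximum in the duality relation above. A small numpy check (ours):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(M, full_matrices=False)
nuc = s.sum()                       # ||M||_* = sum of singular values

# Feasible point of the SDP characterization achieving the nuclear norm:
# [[W1, M], [M^T, W2]] = [U; V] Sigma [U; V]^T is PSD by construction.
W1 = (U * s) @ U.T
W2 = (Vt.T * s) @ Vt
block = np.block([[W1, M], [M.T, W2]])
print(np.linalg.eigvalsh(block).min() >= -1e-9)           # PSD check
print(np.isclose((np.trace(W1) + np.trace(W2)) / 2, nuc))

# Dual attainment: <M, U V^T> = ||M||_* with spectral norm ||U V^T|| = 1.
N = U @ Vt
print(np.isclose(np.sum(M * N), nuc))
print(np.isclose(np.linalg.norm(N, 2), 1.0))
```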
Proof of Proposition LABEL:prop:trans
We begin by establishing a preliminary claim, where P_S denotes the projection onto the space S. Assume for the sake of a contradiction that this claim is not true. Then there exists a nonzero element violating the claimed bound; scaling it appropriately, we obtain an element of unit norm that must simultaneously satisfy two incompatible norm inequalities. This leads to a contradiction.
Next, we establish a second bound, which allows us to conclude the proof of this proposition. We have the following sequence of inequalities: