ADMiRA: Atomic Decomposition for Minimum Rank Approximation
Abstract
We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion.
I Introduction
Recent studies in compressed sensing have shown that
a sparsity prior in the representation of the unknowns can guarantee unique and stable solutions to underdetermined linear systems.
The idea has been generalized to the matrix case [1] with the rank replacing sparsity to define the parsimony of the representation of the unknowns.
Compressed sensing of a low-rank matrix addresses the inverse problem of reconstructing an unknown low-rank matrix $X \in \mathbb{C}^{m \times n}$ from its linear measurements $b = \mathcal{A}(X)$, where $\mathcal{A} : \mathbb{C}^{m \times n} \to \mathbb{C}^p$ is a known linear operator.
One method to solve the inverse problem by exploiting the prior that $X$ is low-rank is to solve the rank minimization problem P1, to minimize the rank within the affine space defined by $\mathcal{A}$ and $b$:

(P1) $\min_{X} \ \operatorname{rank}(X) \quad \text{subject to} \quad \mathcal{A}(X) = b.$
In practice, in the presence of measurement noise or modeling error, a more appropriate measurement model is $b = \mathcal{A}(X) + \nu$, where the perturbation $\nu$ has bounded Euclidean norm, $\|\nu\|_2 \le \eta$. In this case, the rank minimization problem is written as

$\min_{X} \ \operatorname{rank}(X) \quad \text{subject to} \quad \|b - \mathcal{A}(X)\|_2 \le \eta,$
with an ellipsoidal constraint. Indeed, rank minimization has been studied in more general settings where the feasible set is not necessarily restricted to either an affine space or an ellipsoid. However, due to the nonconvexity of the rank, rank minimization is NP-hard even when the feasible set is convex. Fazel, Hindi, and Boyd [3] proposed a convex relaxation of the rank minimization problem by introducing a convex surrogate of $\operatorname{rank}(X)$: the nuclear norm $\|X\|_*$, defined as the sum of all singular values of the matrix $X$.
Recht, Fazel, and Parrilo [2] studied rank minimization in the framework of compressed sensing and showed that rank minimization for the matrix case is analogous to $\ell_0$ norm (number of nonzero elements) minimization for the vector case. They provided an analogy between the two problems and their respective solutions by convex relaxation. In the analogy, $\ell_1$ norm minimization as a relaxation of the $\ell_0$ norm minimization problem is analogous to nuclear norm minimization as a relaxation of rank minimization. Both are efficient algorithms, with guaranteed performance under certain conditions, for solving NP-hard problems: $\ell_0$ norm minimization and rank minimization, respectively. The respective conditions are given by the sparsity-restricted isometry property [4] and the rank-restricted isometry property [2], [1]. However, whereas $\ell_1$ norm minimization corresponds to a linear program (or a quadratically constrained linear program in the noisy case), nuclear norm minimization is formulated as a convex semidefinite program (SDP). Although there exist polynomial time algorithms to solve SDPs, in practice they do not scale well to large problems.
Recently, several authors proposed methods for solving large-scale SDPs derived from rank minimization. These include interior point methods for SDP, projected subgradient methods, and low-rank parametrization [2] combined with a customized interior point method [5]. These methods can solve larger rank minimization problems than the general-purpose SDP solvers can. However, the dimension of the problem is still restricted, and some of these methods do not guarantee convergence to the global minimum. Cai, Candès, and Shen [6] proposed singular value thresholding (SVT), which penalizes the objective of nuclear norm minimization by the squared Frobenius norm. The dual of the penalized problem admits a projected subgradient method whose updates can be computed by truncated singular value decompositions. They have shown that the solution given by SVT converges to the solution of nuclear norm minimization as the penalty parameter increases. However, an analysis of the convergence rate is missing, and hence the quality of the solution obtained by this method is not guaranteed. Furthermore, the efficiency of SVT is restricted to the noiseless case where the constraint is affine (i.e., linear equality). Ma, Goldfarb, and Chen [7] proposed a formulation of nuclear norm minimization using the Bregman divergence that admits an efficient fixed point algorithm, also based on the singular value decomposition. They did not provide a convergence rate analysis, and the efficiency is likewise restricted to the noiseless, affine constraint case. Meka et al. [8] used multiplicative updates and online convex programming to provide an approximate solution to rank minimization. However, their result depends on the (unverified) existence of an oracle that provides the solution to the rank minimization problem with a single linear constraint in constant time.
An alternative method to solve the inverse problem of compressed sensing of a matrix is minimum rank approximation,

(P2) $\min_{X} \ \|b - \mathcal{A}(X)\|_2 \quad \text{subject to} \quad \operatorname{rank}(X) \le r,$

where $r$ denotes the minimum rank. The advantage of formulation P2 is that it can handle both the noiseless case and the noisy case in a single form. It also works in the more general case where $X$ is not exactly low-rank but admits an accurate approximation by a low-rank matrix. When the minimum rank $r$ is unknown, an incremental search over $r$ will increase the complexity of the solution by at most a factor $r$. If an upper bound $r_{\max}$ on $r$ is available, then a bisection search over $r$ can be used because the minimum of P2 is monotone decreasing in $r$. Hence the factor reduces to $\log_2 r_{\max}$. Indeed, this is not an issue in many applications where the rank is assumed to be a small constant.
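To make the rank search concrete, the following is a minimal Python sketch of the incremental strategy. It assumes a hypothetical solver `solve_P2(A, b, r)` that returns a rank-$r$ approximate solution and its residual norm (ADMiRA itself, introduced below, can play this role); the function name and interface are illustrative, not from the paper.

```python
def min_rank_search(solve_P2, A, b, eps, r_max):
    """Incremental search over the target rank r in problem P2.

    solve_P2(A, b, r) -- hypothetical solver for the fixed-rank
    approximation problem P2; returns (X, residual_norm).
    eps -- prescribed data approximation bound.
    Returns the smallest-rank solution meeting the bound, or the
    last solution tried.
    """
    for r in range(1, r_max + 1):
        X, res = solve_P2(A, b, r)
        if res <= eps:              # rank-r solution meets the bound
            return X, r
    return X, r_max                 # no rank up to r_max met the bound
```

When an upper bound on the rank is known, the same wrapper can be turned into a bisection over $r$, exploiting the monotonicity of the optimal residual in $r$.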
Recently, several algorithms have been proposed to solve P2. Haldar and Hernando [9] proposed an alternating least-squares approach that exploits the explicit factorization of a rank-$r$ matrix. Their algorithm is computationally efficient but does not come with a performance guarantee. Keshavan, Oh, and Montanari [10] proposed an algorithm based on optimization over the Grassmann manifold. Their algorithm first finds a good starting point by an operation called trimming and then minimizes the objective of P2 using a line search and gradient descent over the Grassmann manifold. They provide a performance guarantee only for the matrix completion problem, where the linear operator $\mathcal{A}$ samples a few entries of $X$. Moreover, the performance guarantee is restricted to the noiseless case.
Minimum rank approximation, or rank-$r$ approximation, for the matrix case is analogous to $s$-term approximation for the vector case. Like rank-$r$ matrix approximation, $s$-term vector approximation is a way to find the sparsest solution of an ill-posed inverse problem in compressed sensing. For $s$-term approximation, besides efficient greedy heuristics such as Matching Pursuit (MP) [11] and Orthogonal Matching Pursuit (OMP) [12], there are recent algorithms that are more efficient than convex relaxation and also have performance guarantees. These include Compressive Sampling Matching Pursuit (CoSaMP) [13] and Subspace Pursuit (SP) [14]. To date, no such algorithms have been available for the matrix case.
In this paper, we propose an iterative algorithm for the rank minimization problem, named ADMiRA (Atomic Decomposition for Minimum Rank Approximation), which is a generalization of CoSaMP [13] to the matrix case.
Matrix completion is a special case of low-rank matrix approximation from linear measurements, where the linear operator takes a few random entries of the unknown matrix. It has received considerable attention owing to important applications such as collaborative filtering. However, the linear operator in matrix completion does not satisfy the rank-restricted isometry property [17]. Therefore, at the present time, ADMiRA does not have a guarantee for matrix completion. Nonetheless, its empirical performance on matrix completion is better than that of SVT (for the experiments in this paper).
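For concreteness, the matrix completion measurement operator and its adjoint can be sketched as follows; this is a minimal NumPy illustration (the function names and the uniform sampling scheme are illustrative assumptions).

```python
import numpy as np

def completion_operator(m, n, p, seed=0):
    """Matrix completion: sample p entries of an m-by-n matrix,
    chosen uniformly at random without replacement.

    Returns the measurement map A and its adjoint A_adj:
    A(X) lists the sampled entries; A_adj(y) embeds them back
    into an m-by-n matrix that is zero elsewhere.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(m * n, size=p, replace=False)
    rows, cols = np.unravel_index(idx, (m, n))

    def A(X):
        return X[rows, cols]

    def A_adj(y):
        Z = np.zeros((m, n))
        Z[rows, cols] = y
        return Z

    return A, A_adj
```

Because each measurement touches a single entry, applying this operator or its adjoint costs time linear in the number of measurements, which is the source of the complexity claim in the abstract.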
The remainder of this paper is organized as follows. The atomic decomposition and the analogy between the greedy algorithms for the vector case and the matrix case are introduced in Section II. The new algorithm ADMiRA and its performance guarantee are presented in Section III and Section IV, respectively. Using the tools in Section V, the performance guarantees are derived in Sections VI and VII. Implementation issues and the computational complexity are discussed in Section VIII, and numerical results in Section IX, followed by conclusions. Our exposition of ADMiRA follows the line of Needell and Tropp's exposition of CoSaMP [13], to highlight, on the one hand, the close analogy, and on the other hand, the differences between the two algorithms and their analyses. Indeed, there exist significant differences between rank-$r$ approximation for the matrix case and $s$-term approximation for the vector case, which are discussed in some detail.
II Vector vs. Matrix
II-A Preliminaries
Throughout this paper, we use two vector spaces: the space of column vectors $\mathbb{C}^p$ and the space of matrices $\mathbb{C}^{m \times n}$. For $\mathbb{C}^p$, the inner product is defined by $\langle x, y \rangle = y^H x$ for $x, y \in \mathbb{C}^p$, where $y^H$ denotes the Hermitian transpose of $y$, and the induced Hilbert-Schmidt norm is the Euclidean or $\ell_2$ norm, given by $\|x\|_2 = \sqrt{\langle x, x \rangle}$ for $x \in \mathbb{C}^p$. For $\mathbb{C}^{m \times n}$, the inner product is defined by $\langle X, Y \rangle = \operatorname{tr}(Y^H X)$ for $X, Y \in \mathbb{C}^{m \times n}$, and the induced norm is the Frobenius norm, given by $\|X\|_F = \sqrt{\langle X, X \rangle}$ for $X \in \mathbb{C}^{m \times n}$.
II-B Atomic Decomposition
Consider the set of all nonzero rank-one matrices in $\mathbb{C}^{m \times n}$. We can refine this set so that no two distinct elements are collinear. The resulting subset is referred to as the set of atoms, denoted $\mathcal{O}$.
Given a matrix $X \in \mathbb{C}^{m \times n}$, its representation as a linear combination of atoms is referred to as an atomic decomposition of $X$. Since $\mathcal{O}$ spans $\mathbb{C}^{m \times n}$, an atomic decomposition of $X$ exists for all $X \in \mathbb{C}^{m \times n}$. A subset of unit-norm and pairwise orthogonal atoms in $\mathcal{O}$ will be called an orthonormal set of atoms.
Definition II.1
Let $\mathcal{O}$ be a set of atoms of $\mathbb{C}^{m \times n}$. Given $X \in \mathbb{C}^{m \times n}$, we define $\operatorname{atoms}(X)$ as a smallest set of atoms in $\mathcal{O}$ that spans $X$,

(1) $\operatorname{atoms}(X) \in \arg\min_{\Psi \subset \mathcal{O}} \left\{ |\Psi| \ : \ X \in \operatorname{span}(\Psi) \right\}.$

Note that $\operatorname{atoms}(X)$ is not unique.
An orthonormal set $\operatorname{atoms}(X)$ is given by the singular value decomposition of $X$. Let $X = \sum_k \sigma_k u_k v_k^H$ denote the singular value decomposition of $X$, with the singular values $\sigma_k$ in decreasing order. While $u_k v_k^H$ need not be in $\mathcal{O}$, for each $k$ there exist $\psi_k \in \mathcal{O}$ and a scalar $c_k$ such that $u_k v_k^H = c_k \psi_k$ and $|c_k| = 1$. Then an orthonormal set $\operatorname{atoms}(X)$ is given by $\{\psi_k : \sigma_k > 0\}$.
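As an illustration, an orthonormal atom set can be computed directly from the SVD; the following minimal NumPy sketch returns the unit-norm rank-one atoms $u_k v_k^H$ of a matrix together with their coefficients (the singular values). The function name is illustrative.

```python
import numpy as np

def orthonormal_atoms(X, tol=1e-12):
    """Orthonormal set atoms(X) from the SVD of X.

    Returns the unit-norm rank-one atoms u_k v_k^H with nonzero
    singular values, together with the coefficients sigma_k.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    keep = s > tol
    atoms = [np.outer(U[:, k], Vh[k, :]) for k in np.flatnonzero(keep)]
    return atoms, s[keep]
```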
Remark II.2
$\operatorname{atoms}(X)$ and $|\operatorname{atoms}(X)| = \operatorname{rank}(X)$ of a matrix $X$ are the counterparts of the support $\operatorname{supp}(x)$ and the $\ell_0$ norm $\|x\|_0 = |\operatorname{supp}(x)|$ of a vector $x$, respectively.
II-C Generalized Correlation Maximization
Recht, Fazel, and Parrilo [2] showed an analogy between rank minimization P1 and $\ell_0$ norm minimization. We consider instead the rank-$r$ matrix approximation problem P2 and its analogue, the $s$-term vector approximation problem

(P3) $\min_{x} \ \|b - Ax\|_2 \quad \text{subject to} \quad \|x\|_0 \le s.$
In Problem P3, the variable $x$ lives in the union of $s$-dimensional subspaces of $\mathbb{C}^n$, each spanned by $s$ elements of the finite set $\{e_1, \ldots, e_n\}$, the standard basis of $\mathbb{C}^n$. Thus the union contains all $s$-sparse vectors in $\mathbb{C}^n$. Importantly, finitely many ($\binom{n}{s}$, to be precise) subspaces participate in the union. Therefore, it is not surprising that P3 can be solved exactly by exhaustive enumeration, and finite selection algorithms such as CoSaMP are applicable.
In the rank-$r$ matrix approximation problem P2, the matrix variable $X$ lives in the union of subspaces of $\mathbb{C}^{m \times n}$, each of which is spanned by $r$ atoms in the set $\mathcal{O}$. Indeed, if $X$ is spanned by $r$ atoms in $\mathcal{O}$, then $\operatorname{rank}(X) \le r$ by the subadditivity of the rank. Conversely, if $\operatorname{rank}(X) \le r$, then $X$ is a linear combination of $r$ rank-one matrices and hence there exist $r$ atoms that span $X$. Note that uncountably infinitely many subspaces participate in the union. Therefore, some selection rules in the greedy algorithms for $\ell_0$ norm minimization and $s$-term vector approximation do not generalize in a straightforward way. Nonetheless, using our formulation of the rank-$r$ matrix approximation problem in terms of an atomic decomposition, we extend the analogy between the vector and matrix cases, and propose a way to generalize these selection rules to the rank-$r$ matrix approximation problem.
First, consider the correlation maximization in greedy algorithms for the vector case. In each iteration, Matching Pursuit (MP) [11] and Orthogonal Matching Pursuit (OMP) [12] choose the index $j$ that maximizes the correlation between the $j$th column of $A$ and the residual $b - A\hat{x}$, where $\hat{x}$ is the solution of the previous iteration. Given a set $S$, let $P_S$ denote the (orthogonal) projection operator onto the subspace spanned by $S$ in the corresponding embedding space. When $S = \{\psi\}$ is a singleton set, $P_\psi$ will denote $P_{\{\psi\}}$. For example, $P_{e_j}$ denotes the projection operator onto the subspace in $\mathbb{C}^n$ spanned by $e_j$. From

$|\langle A e_j, \ b - A\hat{x} \rangle| = |\langle e_j, \ A^H (b - A\hat{x}) \rangle| = \|P_{e_j} A^H (b - A\hat{x})\|_2,$

it follows that maximizing the correlation amounts to maximizing the norm of the projection of the image under $A^H$ of the residual onto the selected one-dimensional subspace.
The following selection rule generalizes the correlation maximization to the matrix case. We maximize the norm of the projection over all one-dimensional subspaces spanned by an atom in $\mathcal{O}$:

(2) $\max_{\psi \in \mathcal{O}} \ \|P_\psi \, \mathcal{A}^*(b - \mathcal{A}(\hat{X}))\|_F,$

where $\mathcal{A}^*$ denotes the adjoint operator of $\mathcal{A}$. By the Eckart-Young theorem, the basis of the best subspace is obtained from the singular value decomposition of $\mathcal{A}^*(b - \mathcal{A}(\hat{X}))$, as $u_1 v_1^H$ (up to a unimodular constant), where $u_1$ and $v_1$ are the principal left and right singular vectors.
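In code, rule (2) reduces to one truncated SVD of the proxy matrix; the following is a minimal NumPy sketch, with the measurement operator and its adjoint passed as callables (an illustrative interface, not the paper's).

```python
import numpy as np

def best_rank_one_atom(A, A_adj, X_hat, b):
    """Selection rule (2): the maximizing atom is u1 v1^H, the
    principal singular pair of the proxy matrix A*(b - A(X_hat))."""
    proxy = A_adj(b - A(X_hat))
    U, s, Vh = np.linalg.svd(proxy, full_matrices=False)
    return np.outer(U[:, 0], Vh[0, :])    # spans the best 1-D subspace
```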
Remark II.3
Applying the selection rule (2) to update recursively leads to greedy algorithms generalizing MP and OMP to rank minimization.
Next, consider the selection rule in recent algorithms such as CoSaMP and SP. The selection rule chooses the subset $S$ of $\{e_1, \ldots, e_n\}$ with $|S| = 2s$ defined by

(3) $S = \arg\max \left\{ \|P_S A^H (b - A\hat{x})\|_2 \ : \ S \subset \{e_1, \ldots, e_n\}, \ |S| = 2s \right\}.$

This is equivalent to selecting the $2s$ largest-magnitude entries of the proxy vector $A^H(b - A\hat{x})$. In other words, selection rule (3) finds the best subspace spanned by $2s$ elements of the standard basis: the one that maximizes the norm of the projection of $A^H(b - A\hat{x})$ onto that $2s$-dimensional subspace.
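For the vector case this selection is just a top-$2s$ magnitude sort of the proxy; a minimal NumPy sketch (illustrative):

```python
import numpy as np

def select_2s_coordinates(A, x_hat, b, s):
    """Rule (3): the 2s standard-basis elements maximizing the
    projected proxy norm are the 2s largest-magnitude entries
    of A^H (b - A x_hat)."""
    proxy = A.conj().T @ (b - A @ x_hat)
    return np.argsort(np.abs(proxy))[::-1][:2 * s]
```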
The following selection rule generalizes selection rule (3) to the matrix case. We maximize the norm of the projection over all subspaces spanned by a subset of at most $2r$ atoms in $\mathcal{O}$:

$\max \left\{ \|P_{\operatorname{span}(\Psi)} \, \mathcal{A}^*(b - \mathcal{A}(\hat{X}))\|_F \ : \ \Psi \subset \mathcal{O}, \ |\Psi| \le 2r \right\}.$

A basis of the best subspace is again obtained from the singular value decomposition of $\mathcal{A}^*(b - \mathcal{A}(\hat{X}))$, as $\Psi' = \{\psi_k\}_{k=1}^{2r}$, where the $u_k$ and $v_k$, $k = 1, \ldots, 2r$, are the principal left and right singular vectors, respectively, and for each $k$, $\psi_k \in \mathcal{O}$ satisfies $u_k v_k^H = c_k \psi_k$ with $|c_k| = 1$.
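The matrix analogue of the top-$2s$ coordinate selection is thus a rank-$2r$ truncated SVD of the proxy matrix; a minimal NumPy sketch (illustrative):

```python
import numpy as np

def select_atoms(proxy, k):
    """Best k atoms for the proxy matrix: its top-k singular pairs.

    Returns factors (U_k, V_k); the selected atoms are the rank-one
    matrices u_i v_i^H for i = 1, ..., k (Eckart-Young)."""
    U, s, Vh = np.linalg.svd(proxy, full_matrices=False)
    return U[:, :k], Vh[:k, :]
```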
III Algorithm
Algorithm 1 describes ADMiRA. Intuitively, ADMiRA iteratively refines the pair $(\Psi, \hat{X})$, where $\Psi$ is the set of $r$ atoms that spans an approximate solution $\hat{X}$ to P2. Step 4 finds a set $\Psi'$ of $2r$ atoms that spans a good approximation of the residual $X - \hat{X}$, which corresponds to the information not explained by the solution in the previous iteration. Here ADMiRA assumes that $\mathcal{A}$ acts like an isometry on low-rank matrices, which implies that $\mathcal{A}^* \mathcal{A}$ acts like a (scaled) identity operator on such matrices. Under this assumption, the leading principal components of the proxy matrix $\mathcal{A}^*(b - \mathcal{A}(\hat{X}))$ are a good choice for $\Psi'$. The quality of the linear approximation of the residual spanned by $\Psi'$ improves as the iterations proceed; this is quantitatively analyzed in the proof of the performance guarantee. If $\Psi$ and $\Psi'$ span good approximations of $\hat{X}$ and $X - \hat{X}$, respectively, then their union $\tilde{\Psi} = \Psi \cup \Psi'$ will span a good approximation of $X$. Steps 6 and 7 refine the set $\tilde{\Psi}$ of at most $3r$ atoms into a set of $r$ atoms. We first compute a rank-$3r$ approximate solution $\tilde{X}$ and then take its best rank-$r$ approximation $\hat{X}$ to get a feasible solution with rank at most $r$. In the process, the set of atoms is also trimmed to the atom set $\Psi = \operatorname{atoms}(\hat{X})$ so that it can span an approximate solution closer to $X$.
ADMiRA is guaranteed to converge to the global optimum within the iteration bound of Theorem IV.2 when the assumptions of ADMiRA in Section IV are satisfied. However, similarly to the vector case [13], verifying that the assumptions are satisfied is more difficult than solving the recovery problem itself, and to date there is no known algorithm to perform this verification. Instead of relying on the theoretical bound on the number of iterations, we use the following empirical stopping criterion: if either the monotone decrease of the residual norm $\|b - \mathcal{A}(\hat{X})\|_2$ is broken, or the residual norm falls below a given threshold, ADMiRA stops.
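Putting the steps together, the following is a compact Python/NumPy sketch of the ADMiRA iteration under the notation above; the callable interface for the measurement operator, the dense least-squares solve, and the stopping thresholds are illustrative simplifications, not the paper's implementation (Section VIII discusses the efficient factored-form version).

```python
import numpy as np

def admira(A, A_adj, b, m, n, r, max_iter=50, tol=1e-6):
    """ADMiRA sketch for real data. A, A_adj: the measurement
    operator and its adjoint as callables; b: measurements;
    r: target rank. Returns a rank-r estimate X_hat."""
    X_hat = np.zeros((m, n))
    atom_set = []                   # current atoms, stored as (u, v) pairs
    res_prev = np.inf
    for _ in range(max_iter):
        # Step 4: 2r new atoms from the SVD of the proxy matrix
        U, _, Vh = np.linalg.svd(A_adj(b - A(X_hat)), full_matrices=False)
        new_atoms = [(U[:, i], Vh[i, :]) for i in range(min(2 * r, Vh.shape[0]))]
        # Step 5: merge with the previous atom set (at most 3r atoms)
        merged = atom_set + new_atoms
        # Step 6: least squares over the span of the merged atoms
        M = np.column_stack([A(np.outer(u, v)) for u, v in merged])
        c, *_ = np.linalg.lstsq(M, b, rcond=None)
        X_tilde = sum(ci * np.outer(u, v) for ci, (u, v) in zip(c, merged))
        # Step 7: best rank-r approximation, trimming the atom set
        U, s, Vh = np.linalg.svd(X_tilde, full_matrices=False)
        X_hat = (U[:, :r] * s[:r]) @ Vh[:r, :]
        atom_set = [(U[:, i], Vh[i, :]) for i in range(r)]
        # Empirical stopping rule: residual stalls or is small enough
        res = np.linalg.norm(b - A(X_hat))
        if res > res_prev or res < tol:
            break
        res_prev = res
    return X_hat
```

For large problems, the least-squares solve would be carried out iteratively (e.g., by conjugate gradients) with the iterates kept in factored form, as discussed in Section VIII.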
In terms of computation, Steps 4 and 7 involve finding a best rank-$2r$ or rank-$r$ approximation to a given matrix (e.g., by truncating the SVD), while Step 6 involves the solution of a linear least-squares problem; all are standard numerical linear algebra problems. Step 5 merges two given sets of atoms in $\mathcal{O}$ by taking their union. As described in more detail in Section VIII, these computations can be further simplified, and their cost reduced, by storing and operating on the low-rank matrices in factored form, and by taking advantage of special structure of the measurement operator $\mathcal{A}$, such as sparsity.
Most steps of ADMiRA are similar to those of CoSaMP, except for Step 4 and Step 7. The common feasible set of the maximization problems in Steps 4 and 7 is infinite and not orthogonal, whereas the analogous set in CoSaMP is finite and orthonormal. As a result, the maximization problems over the infinite set in ADMiRA are more difficult than those in the analogous steps of CoSaMP, which can be solved simply by selecting the coordinates with the largest magnitudes. Nonetheless, the singular value decomposition solves the maximization problems over the infinite set efficiently.
IV Main Results: Performance Guarantee
IV-A Rank-Restricted Isometry Property (RRIP)
Recht et al. [2] generalized the sparsity-restricted isometry property (RIP), defined for sparse vectors, to low-rank matrices. They also demonstrated "nearly isometric families" satisfying this RRIP (with overwhelming probability). These include random linear operators generated from i.i.d. Gaussian or i.i.d. symmetric Bernoulli distributions. In order to draw the analogy with known results in $\ell_0$ norm minimization, we slightly modify their definition by squaring the norm in the inequality. Given a linear operator $\mathcal{A}$, the rank-restricted isometry constant $\delta_r(\mathcal{A})$ is defined as the minimum constant that satisfies

(4) $(1 - \delta_r(\mathcal{A}))\, c\, \|X\|_F^2 \ \le \ \|\mathcal{A}(X)\|_2^2 \ \le \ (1 + \delta_r(\mathcal{A}))\, c\, \|X\|_F^2$

for all $X \in \mathbb{C}^{m \times n}$ with $\operatorname{rank}(X) \le r$, for some constant $c > 0$.
Throughout this paper, we assume that the linear operator $\mathcal{A}$ is scaled appropriately so that $c = 1$ in (4).
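The constant $\delta_r(\mathcal{A})$ cannot be computed exactly in general, but it can be probed empirically. The following minimal NumPy sketch samples random rank-$r$ matrices of unit Frobenius norm and records the worst observed deviation of $\|\mathcal{A}(X)\|_2^2$ from $1$, which lower-bounds $\delta_r(\mathcal{A})$ for a suitably scaled operator (an illustrative diagnostic, not part of the paper).

```python
import numpy as np

def probe_rrip(A, m, n, r, trials=1000, seed=0):
    """Monte Carlo lower bound on the rank-r restricted isometry
    constant of a scaled operator A (callable on m-by-n matrices)."""
    rng = np.random.default_rng(seed)
    delta = 0.0
    for _ in range(trials):
        X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        X /= np.linalg.norm(X, 'fro')          # unit Frobenius norm
        delta = max(delta, abs(np.linalg.norm(A(X)) ** 2 - 1.0))
    return delta
```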
IV-B Performance Guarantee
Subject to the RRIP, the Atomic Decomposition for Minimum Rank Approximation Algorithm (ADMiRA) has a performance guarantee analogous to that of CoSaMP.
The following are the assumptions of ADMiRA:

A1: The target rank is fixed as $r$.

A2: The linear operator $\mathcal{A}$ satisfies the RRIP with a sufficiently small rank-restricted isometry constant.

A3: The measurement $b$ is obtained by

(5) $b = \mathcal{A}(X) + \nu,$

where $\nu$ is the discrepancy between the measurement and the linear model $\mathcal{A}(X)$. No assumptions are made about the matrix $X$ underlying the measurement, and it can be arbitrary.
Assumption A2 plays a key role in deriving the performance guarantee of ADMiRA: it enforces the rank-restricted isometry property of the linear operator $\mathcal{A}$. Although verifying that A2 is satisfied is as difficult as, or more difficult than, the recovery problem itself, as mentioned above, nearly isometric families that satisfy the condition in A2 have been demonstrated [2].
The performance guarantees are specified in terms of a measure of inherent approximation error, termed the unrecoverable energy, defined by

(6) $\epsilon \ = \ \|X - X_r\|_F \ + \ \frac{1}{\sqrt{r}}\, \|X - X_r\|_* \ + \ \|\nu\|_2,$

where $X_r$ denotes the best rank-$r$ approximation of $X$. The first two terms in $\epsilon$ define a metric of the minimum distance between the "true" matrix $X$ and a rank-$r$ matrix. This is analogous to the notion of a measure of compressibility of a vector in sparse vector approximation. By the Eckart-Young-Mirsky theorem [18], no rank-$r$ matrix can come closer to $X$ in this metric than $X_r$. In particular, the optimal solution to P2 cannot come closer to $X$ in this metric. The third term is the norm of the measurement noise, which must also limit the accuracy of the approximation provided by a solution to P2.
Theorem IV.1
Let $\hat{X}^{(k)}$ denote the estimate of $X$ in the $k$th iteration of ADMiRA. For each $k \ge 0$, $\hat{X}^{(k)}$ satisfies the following recursion:

$\|X - \hat{X}^{(k+1)}\|_F \ \le \ \frac{1}{2}\, \|X - \hat{X}^{(k)}\|_F \ + \ C\, \epsilon,$

where $\epsilon$ is the unrecoverable energy and $C$ is an absolute constant. From the above relation, it follows that

$\|X - \hat{X}^{(k)}\|_F \ \le \ 2^{-k}\, \|X\|_F \ + \ 2 C\, \epsilon.$
Theorem IV.1 shows the geometric convergence of ADMiRA. In fact, convergence in a finite number of steps can be achieved as stated by the following theorem.
Theorem IV.2
After at most a bounded number of iterations (quantified in Section VII), ADMiRA provides a rank-$r$ approximation $\hat{X}$ of $X$, which satisfies

$\|X - \hat{X}\|_F \ \le \ C' \epsilon,$

where $\epsilon$ is the unrecoverable energy and $C'$ is an absolute constant.
Depending on the spectral properties of the matrix $X$, even faster convergence is possible (see Section VII for details).
IV-C Relationship between P1, P2, and ADMiRA
The approximation given by ADMiRA is a solution to P2. When there is no noise in the measurement, i.e., $b = \mathcal{A}(X_\star)$, where $X_\star$ is the solution to P1, Theorem IV.1 implies that if the ADMiRA assumptions are satisfied with $r \ge \operatorname{rank}(X_\star)$, then the ADMiRA solution coincides with $X_\star$. An appropriate value can be assigned to $r$ by an incremental search over $r = 1, 2, \ldots$.
For the noisy measurement case, the linear constraint in P1 is replaced by a quadratic constraint, and the rank minimization problem is written as:

$\min_{X} \ \operatorname{rank}(X) \quad \text{subject to} \quad \|b - \mathcal{A}(X)\|_2 \le \eta.$
Let $X_\star$ denote a minimizer of P1 with the quadratic constraint. In this case, the approximation produced by ADMiRA is not necessarily equal to $X_\star$, but by Theorem IV.1 the distance between the two is bounded in terms of the unrecoverable energy for every $r$ that satisfies the ADMiRA assumptions.
V Properties of the Rank-Restricted Isometry
We introduce and prove a number of properties of the rankrestricted isometry. These properties serve as key tools for proving the performance guarantees for ADMiRA in this paper. These properties further extend the analogy between the sparse vector and the lowrank matrix approximation problems (P3 and P2, respectively), and are therefore also of interest in their own right. The proofs are contained in the Appendix.
Proposition V.1
The rank-restricted isometry constant $\delta_r(\mathcal{A})$ is nondecreasing in $r$.
An operator satisfying the RRIP satisfies, as a consequence, a number of other properties when composed with other linear operators defined by the atomic decomposition.
Definition V.2
Given a set of atoms $\Psi = \{\psi_1, \ldots, \psi_k\} \subset \mathcal{O}$, define a linear operator $A_\Psi : \mathbb{C}^k \to \mathbb{C}^{m \times n}$ by

(7) $A_\Psi(c) \ = \ \sum_{j=1}^{k} c_j \psi_j.$

It follows from (7) that the adjoint operator $A_\Psi^* : \mathbb{C}^{m \times n} \to \mathbb{C}^k$ is given by

(8) $\left[ A_\Psi^*(X) \right]_j \ = \ \langle X, \psi_j \rangle, \quad j = 1, \ldots, k.$
Note that for finite $\Psi$ the operator composition $\mathcal{A} \circ A_\Psi : \mathbb{C}^k \to \mathbb{C}^p$ admits a matrix representation. Its pseudo-inverse is denoted by $(\mathcal{A} \circ A_\Psi)^\dagger$.
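Concretely, the matrix representing $\mathcal{A} \circ A_\Psi$ has the measurements of the individual atoms as its columns; a minimal NumPy sketch (illustrative interface):

```python
import numpy as np

def compose_matrix(A, atoms):
    """Matrix representation of A o A_Psi for a finite atom set:
    column j holds the measurement vector A(psi_j)."""
    M = np.column_stack([A(psi) for psi in atoms])
    return M, np.linalg.pinv(M)     # matrix and its pseudo-inverse
```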
Remark V.3
If $\Psi$ is an orthonormal set, then $A_\Psi$ is an isometry and $A_\Psi^* A_\Psi$ is the identity. If $\Psi$ is a set of $k$ atoms in $\mathcal{O}$, then $\operatorname{rank}(A_\Psi(c)) \le k$ for all $c \in \mathbb{C}^k$.
Proposition V.4
Suppose that the linear operator $\mathcal{A}$ has rank-restricted isometry constant $\delta_r(\mathcal{A})$. Let $\Psi$ be an orthonormal set of atoms in $\mathcal{O}$ such that $|\Psi| \le r$. Then

(9) $\sqrt{1 - \delta_r(\mathcal{A})} \ \le \ \sigma_{\min}(\mathcal{A} \circ A_\Psi) \ \le \ \sigma_{\max}(\mathcal{A} \circ A_\Psi) \ \le \ \sqrt{1 + \delta_r(\mathcal{A})}.$
Proposition V.5
Suppose that the linear operator $\mathcal{A}$ has rank-restricted isometry constant $\delta_r(\mathcal{A})$. Let $\Psi$ be a set of atoms in $\mathcal{O}$ such that $|\Psi| \le r$, and let $X$ satisfy $\operatorname{atoms}(X) \subseteq \Psi$. Then
(10) 
Proposition V.6
Suppose that the linear operator $\mathcal{A}$ has rank-restricted isometry constant $\delta_r(\mathcal{A})$. Let $\Psi$ be a set of atoms in $\mathcal{O}$ such that $|\Psi| \le r$, and let $P$ be a projection operator that commutes with $P_{\operatorname{span}(\Psi)}$. Then
(11) 
The following rank-restricted orthogonality property for the matrix case is analogous to the sparsity-restricted orthogonality property for the vector case (Lemma 2.1 in [4]).
Proposition V.7
Suppose that the linear operator $\mathcal{A}$ has rank-restricted isometry constant $\delta_r(\mathcal{A})$. Let $X, Y \in \mathbb{C}^{m \times n}$ satisfy $\operatorname{rank}(X) + \operatorname{rank}(Y) \le r$ and $\langle X, Y \rangle = 0$. Then

(12) $|\langle \mathcal{A}(X), \ \mathcal{A}(Y) \rangle| \ \le \ \delta_r(\mathcal{A})\, \|X\|_F\, \|Y\|_F.$
Remark V.8
For the vector case, the representation of a vector $x$ in terms of the standard basis of $\mathbb{C}^n$ determines $\operatorname{supp}(x)$. Let $S$ be an arbitrary subset of the standard basis. Then the following properties hold: (i) the projection operators $P_S$ and $P_{\operatorname{supp}(x)}$ commute; and (ii) $P_S x$ is $s$-sparse (or sparser) if $x$ is $s$-sparse. These properties follow from the orthogonality of the standard basis. Proposition 3.2 in [13], corresponding in the vector case to our Proposition V.7, requires these two properties. However, the analogues of properties (i) and (ii) do not hold in the matrix case. Indeed, for sets of atoms $\Psi, \Psi' \subset \mathcal{O}$, the projection operators $P_{\operatorname{span}(\Psi)}$ and $P_{\operatorname{span}(\Psi')}$ do not commute in general, and $\operatorname{rank}(P_{\operatorname{span}(\Psi)} X)$ can be greater than $\operatorname{rank}(X)$. Proposition V.7 is a stronger version of the corresponding result (Proposition 3.2 in [13]) for the vector case, in the sense that it requires a weaker condition (orthogonality between two low-rank matrices), which can be satisfied without the analogues of properties (i) and (ii).
Corollary V.9
Suppose that the linear operator $\mathcal{A}$ has rank-restricted isometry constant $\delta_r(\mathcal{A})$. If sets of atoms $\Psi, \Psi'$ in $\mathcal{O}$ and a matrix $X$ satisfy the rank and orthogonality conditions analogous to those of Proposition V.7, then
(13) 
Remark V.10
Finally, we relate the RRIP to the nuclear norm, extending the analogous result [13] from the sparse vector case to the low-rank matrix case.
Proposition V.11
If a linear map $\mathcal{A}$ satisfies

(14) $\|\mathcal{A}(X)\|_2 \ \le \ \sqrt{1 + \delta_r}\, \|X\|_F$

for all $X$ with $\operatorname{rank}(X) \le r$, then

(15) $\|\mathcal{A}(X)\|_2 \ \le \ \sqrt{1 + \delta_r} \left( \|X\|_F + \frac{1}{\sqrt{r}}\, \|X\|_* \right)$

for all $X$.
VI Proof of Theorem IV.1
VI-A Exactly Low-Rank Matrix Case
Theorem VI.1
Assume $\operatorname{rank}(X) \le r$ in (5). Let $\hat{X}^{(k)}$ denote the estimate of $X$ in the $k$th iteration of ADMiRA. Then for each $k \ge 0$, $\hat{X}^{(k)}$ satisfies the following recursion:

$\|X - \hat{X}^{(k+1)}\|_F \ \le \ \frac{1}{2}\, \|X - \hat{X}^{(k)}\|_F \ + \ C\, \|\nu\|_2,$

with $C$ an absolute constant. From the above relation, it follows that

$\|X - \hat{X}^{(k)}\|_F \ \le \ 2^{-k}\, \|X\|_F \ + \ 2 C\, \|\nu\|_2.$
Theorem VI.1 is proved by applying a sequence of lemmata. We generalize the proof of the performance guarantee for CoSaMP [13] to the matrix case by applying the generalized analogy proposed in this paper. The flow and the techniques used in the proofs are similar to those in [13]. However, in the matrix case, there are additional unknowns in the form of the singular vectors. Therefore, the generalization of the proofs in [13] to the matrix case is not straightforward and the proofs are sufficiently different from those for the vector case to warrant detailed exposition. The main steps in the derivation of the performance guarantee are stated in this section and the detailed proofs are in the Appendix.
For the proof, we study the $k$th iteration, starting from the result of the $(k-1)$th iteration. Let $X$ denote the true solution, with $\operatorname{rank}(X) \le r$. Matrix $\hat{X}$ denotes $\hat{X}^{(k-1)}$, the estimate of $X$ in the $(k-1)$th (previous) iteration. Set $\Psi$ is the set of orthogonal atoms obtained in the previous iteration. From $\hat{X}$, we compute the proxy matrix $\mathcal{A}^*(b - \mathcal{A}(\hat{X}))$. Set $\Psi'$ is the solution of the following low-rank approximation problem:

$\Psi' = \arg\max \left\{ \|P_{\operatorname{span}(\Phi)}\, \mathcal{A}^*(b - \mathcal{A}(\hat{X}))\|_F \ : \ \Phi \subset \mathcal{O}, \ |\Phi| \le 2r \right\}.$
Lemma VI.2
Let $\operatorname{rank}(X) \le r$ in (5). Then
Lemma VI.2 shows that, subject to the rank-restricted isometry property, the set of atoms chosen in Step 4 of ADMiRA is a good set: it captures most of the energy of the atoms in $\operatorname{atoms}(X)$ that were not captured by $\Psi$, and the effect of additive measurement noise is bounded by a small constant. In other words, the algorithm is guaranteed to make good progress in this step.
Lemma VI.3
Let $\hat{X} \in \mathbb{C}^{m \times n}$ and let $\Psi', \tilde{\Psi}$ be sets of atoms in $\mathcal{O}$ such that $\operatorname{atoms}(\hat{X}) \subset \tilde{\Psi}$ and $\Psi' \subset \tilde{\Psi}$. Let $X \in \mathbb{C}^{m \times n}$. Then

$\| P_{\operatorname{span}(\tilde{\Psi})^\perp} X \|_F \ \le \ \| P_{\operatorname{span}(\Psi')^\perp} (X - \hat{X}) \|_F.$
Lemma VI.3 shows that the augmented set of atoms produced in Step 5 of the algorithm is at least as good at explaining the unknown $X$ as the set $\Psi'$ was at explaining the part of $X$ not captured by the estimate from the previous iteration.
Lemma VI.4
Lemma VI.4 shows that the least-squares step, Step 6 of the algorithm, performs almost as well as one could with the operator $\mathcal{A}$ equal to an identity operator: because the solution $\tilde{X}$ is restricted to $\operatorname{span}(\tilde{\Psi})$, it is impossible to recover the components of $X$ outside $\operatorname{span}(\tilde{\Psi})$. Hence, the first constant cannot be smaller than 1. A value of 1 for the second constant, the noise gain, would correspond to a perfectly conditioned system.
Lemma VI.5

Let $\operatorname{rank}(X) \le r$ and let $\hat{X}$ be the best rank-$r$ approximation of $\tilde{X}$. Then

$\|X - \hat{X}\|_F \ \le \ 2\, \|X - \tilde{X}\|_F.$

As expected, reducing the rank of the estimate from $3r$ to $r$, to produce $\hat{X}$, increases the approximation error. However, Lemma VI.5 shows that this increase is moderate: by no more than a factor of 2.
The update $\hat{X}^{(k)} = \hat{X}$ completes the $k$th iteration. Combining the results of the lemmata provides the proof of Theorem VI.1.
Proof:
VI-B General Matrix Case
Theorem IV.1 is proved by combining Theorem VI.1 with the following lemma, which shows how to convert the mismodeling error (the deviation of $X$ from a low-rank matrix) into an equivalent additive measurement noise with a quantified norm.
Lemma VI.6
Let $X$ be an arbitrary matrix in $\mathbb{C}^{m \times n}$. The measurement $b = \mathcal{A}(X) + \nu$ is also represented as $b = \mathcal{A}(X_r) + \tilde{\nu}$, where

$\|\tilde{\nu}\|_2 \ \le \ \sqrt{1 + \delta_r(\mathcal{A})} \left( \|X - X_r\|_F + \frac{1}{\sqrt{r}}\, \|X - X_r\|_* \right) + \|\nu\|_2.$
Proof:
VII Required Number of Iterations
Theorem IV.2 provides a uniform bound on the number of iterations required to achieve the guaranteed approximation accuracy. In addition to this uniform iteration bound, Theorem VII.6 in this section shows that even faster convergence may be expected for matrices with clustered singular values.
In the analysis of the number of iterations, the only thing that matters is the distribution of the singular values of the matrices involved; the singular vectors do not play any role in the analysis. As a consequence, the proofs for the vector case (CoSaMP) and the matrix case (ADMiRA) are very similar, and the corresponding bounds on the number of iterations coincide. However, for completeness, we provide the proofs for the matrix case.
Definition VII.1
Given $X \in \mathbb{C}^{m \times n}$, $\operatorname{atoms}(X)$ is defined in (1). We define the atomic bands of $X$ by

$B_j(X) \ = \ \left\{ \psi_k \in \operatorname{atoms}(X) \ : \ 2^{-(j+1)} < \sigma_k / \sigma_1 \le 2^{-j} \right\}, \quad j \in \mathbb{Z}_+,$

where $\mathbb{Z}_+$ denotes the set of nonnegative integers. Note that the atomic bands are disjoint subsets of $\operatorname{atoms}(X)$, which is an orthonormal set of atoms in $\mathcal{O}$, and therefore the atomic bands are mutually orthogonal. From the atomic bands, the profile of $X$ is defined as the number of nonempty atomic bands, i.e.,

(17) $\pi(X) \ = \ \left| \left\{ j \in \mathbb{Z}_+ \ : \ B_j(X) \ne \emptyset \right\} \right|.$

From the definition, $\pi(X) \le \operatorname{rank}(X)$.
The atomic bands and the profile $\pi(X)$ admit a simple interpretation in terms of the spectrum of $X$. Let the singular values of $X$ be ordered as $\sigma_1 \ge \sigma_2 \ge \cdots$, where $\sigma_k$ is the $k$th singular value of $X$. Then $B_j(X)$ contains the atoms $\psi_k$ corresponding to normalized singular values $\sigma_k / \sigma_1$ falling in a one-octave interval ("bin"), $2^{-(j+1)} < \sigma_k / \sigma_1 \le 2^{-j}$. The quantity $\pi(X)$ then is the number of such occupied octave bins, and measures the spread of the singular values of $X$ on a log scale.
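The profile can be computed directly from the singular values; the following minimal NumPy sketch bins the normalized spectrum into one-octave intervals and counts the occupied bins (illustrative; the function name is not from the paper).

```python
import numpy as np

def profile(X, tol=1e-12):
    """Profile of X: the number of occupied one-octave bins of its
    normalized singular values, as in equation (17)."""
    s = np.linalg.svd(X, compute_uv=False)
    s = s[s > tol]                                    # nonzero spectrum
    octave = np.floor(np.log2(s[0] / s)).astype(int)  # band index per atom
    return len(np.unique(octave))
```

For example, a matrix whose singular values all lie within a factor of 2 of each other has profile 1, while a spectrum spread over many orders of magnitude has a large profile.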
Remark VII.2
For the vector case, the term analogous to the atomic band is the component band [19], defined by

$B_j(x) \ = \ \left\{ i \ : \ 2^{-(j+1)} < |x_i| / \|x\|_\infty \le 2^{-j} \right\}$

for $j \in \mathbb{Z}_+$.
First, the number of iterations for the exactly low-rank case is bounded by the following theorem.
Theorem VII.3
Let $X$ be a rank-$r$ matrix and let $\pi = \pi(X)$, where the profile $\pi(X)$ is defined in (17). Then after at most
iterations, the estimate produced by ADMiRA satisfies
We introduce additional notation for the proof of Theorem VII.3. Let $\hat{X}^{(k)}$ denote the estimate at the $k$th iteration of ADMiRA. For a nonnegative integer $j$, we define an auxiliary matrix $X_j$ as the part of $X$ spanned by its first $j$ atomic bands.
Then $X_j$ satisfies
(18) 
The proof of Theorem VII.3 proceeds by a sequence of lemmata. The first lemma presents two possibilities in each iteration of ADMiRA: if the iteration is successful, the approximation error is small; otherwise, the approximation error is dominated by the unidentified portion of the matrix, and the approximation error in the next iteration decreases by a constant ratio.
Lemma VII.4
Let