# ADMiRA: Atomic Decomposition for Minimum Rank Approximation

## Abstract

We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion.

**Index Terms**: Rank minimization, performance guarantee, matrix completion, singular value decomposition, compressed sensing.

## I Introduction

Recent studies in compressed sensing have shown that a sparsity prior in the representation of the unknowns can guarantee unique and stable solutions to underdetermined linear systems. The idea has been generalized to the matrix case [1], with the rank replacing sparsity to define the parsimony of the representation of the unknowns. Compressed sensing of a low-rank matrix addresses the inverse problem of reconstructing an unknown low-rank matrix X ∈ Cm×n from its linear measurements b = AX via a given linear operator A : Cm×n → Cp. As in the vector case, the inverse problem is ill-posed in the sense that the number of measurements is much smaller than the number of unknowns. Continuing the analogy with the vector case, the remarkable fact is that the number of measurements sufficient for unique and stable recovery is roughly on the same order as the number of degrees of freedom in the unknown low-rank matrix. Moreover, under certain conditions, the recovery can be accomplished by polynomial-time algorithms [2].

One method to solve the inverse problem, exploiting the prior that X is low-rank, is to solve the rank minimization problem P1, which minimizes the rank within the affine space defined by A and b:

 P1: min_{X ∈ Cm×n} rank(X)  subject to  AX = b.

In practice, in the presence of measurement noise or modeling error, a more appropriate measurement model is b = AX + ν, where the perturbation ν has bounded Euclidean norm, ∥ν∥2 ≤ η. In this case, the rank minimization problem is written as

 P1': min_{X ∈ Cm×n} rank(X)  subject to  ∥AX − b∥2 ≤ η

with an ellipsoidal constraint. Indeed, rank minimization has been studied in a more general setting where the feasible set is not necessarily restricted to either an affine space or an ellipsoid. However, due to the non-convexity of the rank, rank minimization is NP-hard even when the feasible set is convex. Fazel, Hindi, and Boyd [3] proposed a convex relaxation of the rank minimization problem by introducing a convex surrogate of rank(X) known as the nuclear norm ∥X∥*, the sum of all singular values of the matrix X.

Recht, Fazel, and Parrilo [2] studied rank minimization in the framework of compressed sensing and showed that rank minimization for the matrix case is analogous to ℓ0-norm (number of nonzero elements) minimization for the vector case. They provided an analogy between the two problems and their respective solutions by convex relaxation. In the analogy, ℓ1-norm minimization for the ℓ0-norm minimization problem corresponds to nuclear norm minimization for rank minimization. Both are efficient algorithms, with guaranteed performance under certain conditions, for solving NP-hard problems: ℓ0-norm minimization and rank minimization, respectively. The respective conditions are given by the sparsity-restricted isometry property [4] and the rank-restricted isometry property [2], [1]. However, whereas ℓ1-norm minimization corresponds to a linear program (or a quadratically constrained linear program in the noisy case), nuclear norm minimization is formulated as a convex semidefinite program (SDP). Although there exist polynomial-time algorithms to solve SDP, in practice they do not scale well to large problems.

Recently, several authors proposed methods for solving large-scale SDPs derived from rank minimization. These include interior point methods for SDP, projected subgradient methods, and low-rank parametrization [2] combined with a customized interior point method [5]. These methods can solve larger rank minimization problems than general-purpose SDP solvers can. However, the dimension of the problem is still restricted, and some of these methods do not guarantee convergence to the global minimum. Cai, Candès, and Shen [6] proposed singular value thresholding (SVT), which penalizes the objective of nuclear norm minimization by the squared Frobenius norm. The dual of the penalized problem admits a projected subgradient method whose updates can be computed by truncated singular value decompositions. They showed that the solution given by SVT converges to the solution of nuclear norm minimization as the penalty parameter increases. However, an analysis of the convergence rate is missing, and hence the quality of the solution obtained by this method is not guaranteed. Furthermore, the efficiency of SVT is restricted to the noiseless case, where the constraint is affine (i.e., a linear equality). Ma, Goldfarb, and Chen [7] proposed a formulation of nuclear norm minimization using the Bregman divergence that admits an efficient fixed point algorithm, also based on the singular value decomposition. They did not provide a convergence rate analysis, and the efficiency is likewise restricted to the noiseless, affine-constraint case. Meka et al. [8] used multiplicative updates and online convex programming to provide an approximate solution to rank minimization. However, their result depends on the (unverified) existence of an oracle that provides the solution to the rank minimization problem with a single linear constraint in constant time.

An alternative method to solve the inverse problem of compressed sensing of a matrix is minimum rank approximation,

 P2: min_{X ∈ Cm×n} ∥AX − b∥2  subject to  rank(X) ≤ r,

where r is the target rank. The advantage of formulation P2 is that it handles both the noiseless case and the noisy case in a single form. It also works in the more general case where X is not exactly low-rank but admits an accurate approximation by a low-rank matrix. When the minimum rank is unknown, an incremental search over r will increase the complexity of the solution by at most a factor equal to the minimum rank. If an upper bound on the minimum rank is available, then a bisection search over r can be used, because the minimum of P2 is monotone decreasing in r; the factor then reduces to a logarithmic one. Indeed, this is not an issue in the many applications where the rank is assumed to be a small constant.
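The rank search described above can be organized as a bisection, since the optimal residual of P2 is monotone nonincreasing in r. A minimal NumPy sketch, where `min_rank_residual` is a hypothetical stand-in for a P2 solver (here a simple SVD truncation of the least-squares solution, purely for illustration — not the method of this paper):

```python
import numpy as np

def min_rank_residual(A, b, r):
    """Residual ||A(X) - b||_2 of a rank-<=r candidate.

    Placeholder for a P2 solver: truncated SVD of the least-squares
    solution, assuming a square m x m matrix variable for simplicity.
    """
    m = int(np.sqrt(A.shape[1]))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    U, s, Vt = np.linalg.svd(x.reshape(m, m), full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]          # best rank-r truncation
    return np.linalg.norm(A @ Xr.ravel() - b)

def min_rank_by_bisection(A, b, r_max, eta):
    """Smallest r in [1, r_max] whose P2 residual is <= eta.

    Bisection is valid because the optimal residual is monotone
    nonincreasing in r.
    """
    lo, hi = 1, r_max
    while lo < hi:
        mid = (lo + hi) // 2
        if min_rank_residual(A, b, mid) <= eta:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

With an exact rank-2 matrix and an identity measurement operator, the bisection returns 2 in log-many residual evaluations.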

Recently, several algorithms have been proposed to solve P2. Haldar and Hernando [9] proposed an alternating least squares approach that exploits the explicit factorization of a rank-r matrix. Their algorithm is computationally efficient but does not provide any performance guarantee. Keshavan, Oh, and Montanari [10] proposed an algorithm based on optimization over the Grassmann manifold. Their algorithm first finds a good starting point by an operation called trimming, and then minimizes the objective of P2 using a line search and gradient descent over the Grassmann manifold. They provide a performance guarantee only for the matrix completion problem, where the linear operator takes a few entries of X. Moreover, the performance guarantee is restricted to the noiseless case.

Minimum rank approximation, or rank-r approximation, for the matrix case is analogous to s-term approximation for the vector case. Like rank-r matrix approximation, s-term vector approximation is a way to find the sparsest solution of an ill-posed inverse problem in compressed sensing. For s-term approximation, besides efficient greedy heuristics such as Matching Pursuit (MP) [11] and Orthogonal Matching Pursuit (OMP) [12], there are recent algorithms that are more efficient than convex relaxation and also have performance guarantees. These include Compressive Sampling Matching Pursuit (CoSaMP) [13] and Subspace Pursuit (SP) [14]. To date, no such algorithms have been available for the matrix case.

In this paper, we propose an iterative algorithm for the rank minimization problem that generalizes the CoSaMP algorithm to the matrix case. We call this algorithm “Atomic Decomposition for Minimum Rank Approximation,” abbreviated as ADMiRA. ADMiRA is computationally efficient in the sense that its core computation consists of least squares problems and truncated singular value decompositions, both basic linear algebra problems that admit efficient algorithms. Indeed, ADMiRA is the first algorithm with a performance guarantee among those proposed to solve minimum rank approximation. Furthermore, ADMiRA provides a strong performance guarantee for P2 that covers the general case where X is only approximately low-rank and b contains noise. The strong performance guarantee of ADMiRA is comparable to that of nuclear norm minimization in [1]. In the noiseless case, SVT [6] may be considered a competitor to ADMiRA. However, in the noisy case, SVT involves more than the simple singular value thresholding operation.

Matrix completion is a special case of low-rank matrix approximation from linear measurements, in which the linear operator takes a few random entries of the unknown matrix. It has received considerable attention owing to important applications such as collaborative filtering. However, the linear operator in matrix completion does not satisfy the rank-restricted isometry property [17]. Therefore, at present, ADMiRA does not have a guarantee for matrix completion. Nonetheless, its empirical performance on matrix completion is better than that of SVT (for the experiments in this paper).
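As an illustration of such a sparse measurement operator, a matrix completion operator simply samples p entries of X; the following sketch (names and factoring are ours, not from the paper) builds the forward map and its adjoint:

```python
import numpy as np

def completion_operator(m, n, p, seed=0):
    """Matrix-completion measurement: A(X) returns p sampled entries
    of X, i.e., a sparse row-selection of vec(X); A_adj scatters back."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(m * n, size=p, replace=False)   # sampled positions

    def A(X):          # forward map: selected entries of X
        return X.ravel()[idx]

    def A_adj(y):      # adjoint: place y at sampled positions, zero elsewhere
        X = np.zeros(m * n)
        X[idx] = y
        return X.reshape(m, n)

    return A, A_adj
```

Both maps cost O(p), which is why the per-iteration computation of ADMiRA is linear in the number of measurements for this operator.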

The remainder of this paper is organized as follows. The atomic decomposition and the analogy between the greedy algorithms for the vector case and the matrix case are introduced in Section II. The ADMiRA algorithm and its performance guarantee are presented in Section III and Section IV, respectively. Using the tools in Section V, the performance guarantees are derived in Sections VI and VII. Implementation issues and the computational complexity are discussed in Section VIII, and numerical results in Section IX, followed by conclusions. Our exposition of ADMiRA follows the line of Needell and Tropp’s exposition of CoSaMP [13], to highlight, on the one hand, the close analogy, and, on the other, the differences between the two algorithms and their analyses. Indeed, there are significant differences between rank-r approximation for the matrix case and s-term approximation for the vector case, which are discussed in some detail.

## II Vector vs. Matrix

### II-A Preliminaries

Throughout this paper, we use two vector spaces: the space Cp of column vectors and the space Cm×n of matrices. For Cp, the inner product is defined by ⟨x, y⟩ = yHx for x, y ∈ Cp, where yH denotes the Hermitian transpose of y, and the induced norm is the Euclidean or ℓ2 norm, ∥x∥2 = ⟨x, x⟩1/2. For Cm×n, the inner product is defined by ⟨X, Y⟩ = tr(YHX) for X, Y ∈ Cm×n, and the induced Hilbert–Schmidt norm is the Frobenius norm, ∥X∥F = ⟨X, X⟩1/2.

### II-B Atomic Decomposition

Consider the set of all nonzero rank-one matrices in Cm×n, refined so that no two distinct elements are collinear. The resulting subset O is referred to as the set of atoms of Cm×n. The set of atomic spaces of Cm×n is then defined as the set of subspaces spanned by individual atoms. Each such subspace is one-dimensional and hence irreducible. Since O is an uncountably infinite set in the finite dimensional space Cm×n, the elements of O are not linearly independent. Regardless of the particular choice of O, the set of atomic spaces is uniquely determined. Without loss of generality, we fix O so that all its elements have unit Frobenius norm.

Given a matrix X ∈ Cm×n, its representation as a linear combination of atoms is referred to as an atomic decomposition of X. Since O spans Cm×n, an atomic decomposition exists for every X ∈ Cm×n. A subset of unit-norm and pairwise orthogonal atoms in O will be called an orthonormal set of atoms.

###### Definition II.1

Let O be a set of atoms of Cm×n. Given X ∈ Cm×n, we define atoms(X) as a smallest set of atoms in O that spans X,

 atoms(X) ≜ argmin_Ψ {|Ψ| : Ψ ⊂ O, X ∈ span(Ψ)}. (1)

Note that atoms(X) is not unique.

An orthonormal set atoms(X) is given by the singular value decomposition of X. Let X = Σk σk uk vkH denote the singular value decomposition of X, with the singular values σk in decreasing order. While uk vkH need not be in O, for each k there exists a unimodular constant ρk such that ρk uk vkH ∈ O. Then an orthonormal set atoms(X) is given by

 atoms(X) = {ρk uk vkH}, k = 1, …, rank(X).
###### Remark II.2

atoms(X) and |atoms(X)| of a matrix X are the counterparts of supp(x) and ∥x∥0 of a vector x, respectively.
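A concrete computation of an orthonormal set atoms(X) from the SVD, in NumPy (a sketch; the unimodular factors ρk are taken to be 1, which is valid when O is chosen to contain the uk vkH themselves):

```python
import numpy as np

def atoms_of(X, tol=1e-12):
    """Return (atom_list, coeffs) such that X = sum_k coeffs[k] * atom_list[k].

    The atoms are the unit Frobenius-norm, pairwise orthogonal rank-one
    matrices u_k v_k^H from the SVD of X; coeffs are the singular values.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = np.flatnonzero(s > tol)            # drop numerically zero modes
    return [np.outer(U[:, k], Vt[k]) for k in keep], s[keep]
```

The returned list has exactly rank(X) elements, matching |atoms(X)| as the matrix counterpart of ∥x∥0.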

### II-C Generalized Correlation Maximization

Recht, Fazel, and Parrilo [2] showed an analogy between rank minimization P1 and ℓ0-norm minimization. We consider instead the rank-r matrix approximation problem P2 and its analogue, the s-term vector approximation problem

 P3: min_{x ∈ Cn} ∥Ax − b∥2  subject to  ∥x∥0 ≤ s.

In Problem P3, the variable x lives in the union of s-dimensional subspaces of Cn, each spanned by s elements of the finite set {e1, …, en}, the standard basis of Cn. Thus the union contains all s-sparse vectors in Cn. Importantly, finitely many subspaces ((n choose s), to be precise) participate in the union. Therefore, it is not surprising that P3 can be solved exactly by exhaustive enumeration, and finite selection algorithms such as CoSaMP are applicable.

In the rank-r matrix approximation problem P2, the matrix variable X lives in the union of subspaces of Cm×n, each of which is spanned by r atoms in the set O. Indeed, if X is spanned by r atoms in O, then rank(X) ≤ r by the subadditivity of the rank. Conversely, if rank(X) ≤ r, then X is a linear combination of r rank-one matrices, and hence there exist r atoms that span X. Note that uncountably infinitely many subspaces participate in the union. Therefore, some selection rules in the greedy algorithms for ℓ0-norm minimization and s-term vector approximation do not generalize in a straightforward way. Nonetheless, using our formulation of the rank-r matrix approximation problem in terms of an atomic decomposition, we extend the analogy between the vector and matrix cases and propose a way to generalize these selection rules to the rank-r matrix approximation problem.

First, consider the correlation maximization in greedy algorithms for the vector case. Matching Pursuit (MP) [11] and Orthogonal Matching Pursuit (OMP) [12] choose, in each iteration, the index k that maximizes the correlation between the k-th column ak of A and the residual b − Ax̂, where x̂ is the solution of the previous iteration. Given a set S, let PS denote the (orthogonal) projection operator onto the subspace spanned by S in the corresponding embedding space. When S = {ψ} is a singleton set, Pψ will denote P{ψ}. For example, Pek denotes the projection operator onto the subspace in Cn spanned by ek. From

 |akH(b − Ax̂)| = |⟨AH(b − Ax̂), ek⟩| = ∥Pek AH(b − Ax̂)∥2,

it follows that maximizing the correlation amounts to maximizing the norm of the projection of AH(b − Ax̂), the image of the residual under AH, onto the selected one-dimensional subspace.

The following selection rule generalizes the correlation maximization to the matrix case. We maximize the norm of the projection over all one-dimensional subspaces spanned by an atom in O:

 max_{ψ ∈ O} |⟨b − AX̂, Aψ⟩| = max_{ψ ∈ O} ∥Pψ A*(b − AX̂)∥F, (2)

where A* denotes the adjoint operator of A. By the Eckart–Young theorem, the maximizing atom is obtained from the singular value decomposition of A*(b − AX̂), as ψ = ρ1 u1 v1H, where u1 and v1 are the principal left and right singular vectors.

###### Remark II.3

Applying the selection rule (2) to update X̂ recursively leads to greedy algorithms generalizing MP and OMP to rank minimization.
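In NumPy terms, with A stored as a p×(mn) matrix acting on vec(X) (our vectorized representation, an assumption for illustration), the maximizing atom in (2) is the principal singular pair of the proxy matrix:

```python
import numpy as np

def best_atom(A, b, X_hat):
    """Selection rule (2): the atom maximizing ||P_psi A*(b - A X_hat)||_F
    is the leading rank-one component of the proxy matrix (Eckart-Young)."""
    m, n = X_hat.shape
    proxy = (A.conj().T @ (b - A @ X_hat.ravel())).reshape(m, n)
    U, s, Vt = np.linalg.svd(proxy, full_matrices=False)
    return np.outer(U[:, 0], Vt[0])   # unit Frobenius-norm atom
```

For an identity operator and a rank-one target, the selected atom spans the target itself (up to sign).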

Next, consider the selection rule in recent algorithms such as CoSaMP and SP. The rule chooses the subset J of {1, …, n}, with |J| = s, defined by

 ∣∣aHk(b−A^x)∣∣≥∣∣aHj(b−A^x)∣∣,∀k∈J,∀j∉J. (3)

This is equivalent to maximizing

 ∑k∈J∣∣aHk(b−A^x)∣∣2=∑k∈J∥∥PekAH(b−A^x)∥∥22=∥∥P{ek}k∈JAH(b−A^x)∥∥22.

In other words, selection rule (3) finds the best s-dimensional subspace spanned by elements of the standard basis, the one that maximizes the norm of the projection of AH(b − Ax̂) onto it.

The following selection rule generalizes selection rule (3) to the matrix case. We maximize the norm of the projection over all subspaces spanned by a subset of at most r atoms in O:

 max_{Ψ ⊂ O} {∥PΨ A*(b − AX̂)∥F : |Ψ| ≤ r}

A basis of the best subspace is again obtained from the singular value decomposition of A*(b − AX̂), as Ψ = {ρk uk vkH}, k = 1, …, r, where the uk and vk are the r principal left and right singular vectors, respectively, and each ρk is a unimodular constant such that ρk uk vkH ∈ O. Note that Ψ is an orthonormal set, although this is not enforced as an explicit constraint in the maximization.

## III Algorithm

Algorithm 1 describes ADMiRA. Intuitively, ADMiRA iteratively refines the pair (Ψ̂, X̂), where Ψ̂ is the set of atoms that spans an approximate solution X̂ to P2. Step 4 finds a set Ψ' of atoms that spans a good approximation of X0 − X̂, which corresponds to the information not explained by the solution of the previous iteration. Here ADMiRA assumes that A acts like an isometry on low-rank matrices, which implies that A*A acts like a (scaled) identity operator on such matrices. Under this assumption, the leading principal components of the proxy matrix A*(b − AX̂) are a good choice for Ψ'. The quality of the linear approximation of X0 − X̂ spanned by Ψ' improves as the iterations proceed; this is quantitatively analyzed in the proof of the performance guarantee. If Ψ̂ and Ψ' span good approximations of X̂ and X0 − X̂, respectively, then their union Ψ̃ will span a good approximation of X0. Steps 6 and 7 refine the set Ψ̃ into a set of at most r atoms: we first compute an approximate solution X̃ supported on Ψ̃, and then take its best rank-r approximation to obtain a feasible solution with rank at most r. In the process, the set of atoms is also trimmed to the atom set Ψ̂ of the new iterate, so that it can span an approximate solution closer to X0.

ADMiRA is guaranteed to converge to the global optimum in finitely many iterations when the assumptions in Section IV are satisfied. However, similarly to the vector case [13], verifying that the assumptions hold is more difficult than solving the recovery problem itself, and to date there is no known algorithm to perform this verification. Instead of relying on the theoretical bound on the number of iterations, we use an empirical stopping criterion: ADMiRA stops if either the monotone decrease of the residual ∥b − AX̂∥2 is broken or the residual falls below a given threshold.

In terms of computation, Steps 4 and 7 involve finding a best low-rank approximation to a given matrix (e.g., by truncating the SVD), while Step 6 involves the solution of a linear least squares problem: all standard numerical linear algebra problems. Step 5 merges two given sets of atoms in O by taking their union. As described in more detail in Section VIII, these computations can be further simplified, and their cost reduced, by storing and operating on the low-rank matrices in factored form and by taking advantage of special structure of the measurement operator A, such as sparsity.

Most steps of ADMiRA are similar to those of CoSaMP, except Step 4 and Step 7. The common feasible set of the maximization problems in Steps 4 and 7 is infinite and not orthogonal, whereas the analogous set in CoSaMP is finite and orthonormal. As a result, the maximization problems over the infinite set in ADMiRA are more difficult than those in the analogous steps of CoSaMP, which can be solved simply by selecting the coordinates with the largest magnitudes. Nonetheless, the singular value decomposition solves the maximization problems over the infinite set efficiently.
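A compact NumPy sketch of the iteration just described, with A as a p×(mn) matrix acting on vec(X). Variable names follow the text, but this is our illustration under that vectorized representation, not the authors' reference implementation:

```python
import numpy as np

def svd_atoms(M, k):
    """Top-k unit-norm rank-one atoms (principal singular pairs) of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return [np.outer(U[:, j], Vt[j]) for j in range(min(k, len(s)))]

def admira(A, b, m, n, r, n_iter=20):
    """Sketch of ADMiRA: proxy, atom merge, least squares, rank-r truncation."""
    X = np.zeros((m, n))
    Psi = []                                          # current atoms, <= r
    for _ in range(n_iter):
        proxy = (A.conj().T @ (b - A @ X.ravel())).reshape(m, n)
        Psi_new = Psi + svd_atoms(proxy, 2 * r)       # Steps 4-5: augment
        B = np.stack([psi.ravel() for psi in Psi_new], axis=1)
        coef, *_ = np.linalg.lstsq(A @ B, b, rcond=None)  # Step 6
        X_tilde = (B @ coef).reshape(m, n)
        U, s, Vt = np.linalg.svd(X_tilde, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]               # Step 7: rank-r truncation
        Psi = svd_atoms(X, r)                         # trimmed atom set
    return X
```

With an identity measurement operator (which trivially acts as an isometry), a rank-2 target is recovered essentially in one iteration.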

## IV Main Results: Performance Guarantee

### IV-A Rank-Restricted Isometry Property (R-RIP)

Recht et al. [2] generalized the sparsity-restricted isometry property (RIP), defined for sparse vectors, to low-rank matrices. They also demonstrated “nearly isometric families” satisfying this R-RIP (with overwhelming probability). These include random linear operators generated from i.i.d. Gaussian or i.i.d. symmetric Bernoulli distributions. In order to draw the analogy with known results in ℓ1-norm minimization, we slightly modify their definition by squaring the norm in the inequality. Given a linear operator A, the rank-restricted isometry constant δr(A) is defined as the minimum constant that satisfies

 (1 − δr(A)) ∥X∥2F ≤ ∥γ AX∥22 ≤ (1 + δr(A)) ∥X∥2F, (4)

for all X ∈ Cm×n with rank(X) ≤ r, for some constant γ > 0. Throughout this paper, we assume that the linear operator A is scaled appropriately so that γ = 1 in (4). If A has a small rank-restricted isometry constant δr(A), then (4) implies that A acts like an isometry (up to scaling) on matrices whose rank is r or less. In this case, A is called a rank-restricted isometry, to indicate that the domain where A is nearly an isometry is restricted to the set of low-rank matrices.
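The rank-restricted isometry behavior of a concrete operator can be probed empirically, e.g. for an i.i.d. Gaussian A scaled so that γ = 1. This is only an illustration of definition (4) by random sampling, not a certification of δr(A):

```python
import numpy as np

def rip_ratio_range(A, m, n, r, trials=200, seed=0):
    """Sample min/max of ||A X||_2^2 / ||X||_F^2 over random rank-r X.

    For a rank-restricted isometry these ratios stay in
    [1 - delta_r(A), 1 + delta_r(A)].
    """
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(trials):
        X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        ratios.append(np.linalg.norm(A @ X.ravel()) ** 2
                      / np.linalg.norm(X, 'fro') ** 2)
    return min(ratios), max(ratios)
```

For a Gaussian operator with enough measurements, the sampled ratios concentrate around 1, consistent with the nearly isometric families of [2].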

### Iv-B Performance Guarantee

Subject to the R-RIP, the Atomic Decomposition for Minimum Rank Approximation Algorithm (ADMiRA) has a performance guarantee analogous to that of CoSaMP.

The following are the assumptions made in the analysis of ADMiRA:

A1:

The target rank is fixed at r.

A2:

The linear operator A satisfies the required bound on its rank-restricted isometry constant.

A3:

The measurement b is obtained as

 b = AX0 + ν, (5)

where ν is the discrepancy between the measurement and the linear model AX0. No assumptions are made about the matrix X0 underlying the measurement; it can be arbitrary.

Assumption A2 plays a key role in deriving the performance guarantee of ADMiRA: it enforces the rank-restricted isometry property of the linear operator A. Although verifying that A2 holds is at least as difficult as the recovery problem itself, as mentioned above, nearly isometric families that satisfy the condition in A2 have been demonstrated [2].

The performance guarantees are specified in terms of a measure of inherent approximation error, termed the unrecoverable energy, defined by

 ϵ = ∥X0 − X0,r∥F + (1/√r) ∥X0 − X0,r∥* + ∥ν∥2, (6)

where X0,r denotes the best rank-r approximation of X0. The first two terms in ϵ define a metric of the minimum distance between the “true” matrix X0 and a rank-r matrix. This is analogous to the notion of a measure of compressibility of a vector in sparse vector approximation. By the Eckart–Young–Mirsky theorem [18], no rank-r matrix can come closer to X0 in this metric. In particular, the optimal solution to P2 cannot come closer to X0 in this metric. The third term is the norm of the measurement noise, which must also limit the accuracy of the approximation provided by a solution to P2.
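The unrecoverable energy (6) is directly computable from the singular values of X0, since both ∥X0 − X0,r∥F and ∥X0 − X0,r∥* depend only on the tail singular values discarded by the rank-r truncation. A small sketch:

```python
import numpy as np

def unrecoverable_energy(X0, r, nu):
    """Unrecoverable energy (6):
    eps = ||X0 - X0r||_F + ||X0 - X0r||_* / sqrt(r) + ||nu||_2,
    where X0r is the best rank-r approximation of X0 (Eckart-Young)."""
    s = np.linalg.svd(X0, compute_uv=False)
    tail = s[r:]                       # singular values beyond rank r
    return (np.sqrt(np.sum(tail ** 2))     # Frobenius norm of the residual
            + np.sum(tail) / np.sqrt(r)    # scaled nuclear norm of the residual
            + np.linalg.norm(nu))
```

For an exactly rank-r matrix with no noise, ϵ = 0, and the guarantees below then promise exact recovery.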

###### Theorem IV.1

Let X̂k denote the estimate of X0 in the k-th iteration of ADMiRA. For each k ≥ 0, X̂k satisfies the following recursion:

 ∥X0 − X̂k+1∥F ≤ 0.5 ∥X0 − X̂k∥F + 8ϵ,

where ϵ is the unrecoverable energy. From the above relation, it follows that

 ∥X0 − X̂k∥F ≤ 2−k ∥X0∥F + 16ϵ,  ∀k ≥ 0.

Theorem IV.1 shows the geometric convergence of ADMiRA. In fact, convergence in a finite number of steps can be achieved as stated by the following theorem.

###### Theorem IV.2

After a bounded number of iterations (an explicit bound is given in Section VII), ADMiRA provides a rank-r approximation X̂ of X0, which satisfies

 ∥X0 − X̂∥F ≤ 17ϵ,

where ϵ is the unrecoverable energy.

Depending on the spectral properties of the matrix X0, even faster convergence is possible (see Section VII for details).
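The passage from the one-step recursion of Theorem IV.1 to the geometric bound is a standard geometric-series argument, and it can be sanity-checked numerically (our sketch):

```python
import numpy as np

def iterate_recursion(e0, eps, k):
    """Worst case of the recursion e_{k+1} = 0.5 e_k + 8 eps.

    Unrolling gives e_k = 2**-k * e0 + 8*eps*(2 - 2**(1-k)),
    which is at most 2**-k * e0 + 16 eps, as in Theorem IV.1.
    """
    e = e0
    for _ in range(k):
        e = 0.5 * e + 8.0 * eps
    return e
```

The iterated worst case never exceeds the closed-form bound 2−k ∥X0∥F + 16ϵ.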

### IV-C Relationship between P1, P2, and ADMiRA

The approximation given by ADMiRA is a solution to P2. When there is no noise in the measurement, i.e., b = AX*, where X* is the solution to P1, Theorem IV.1 states that if the ADMiRA assumptions are satisfied with r = rank(X*), then the iterates converge to X*. An appropriate value can be assigned to r by an incremental search over r.

For the noisy measurement case, the linear constraint in P1 is replaced by a quadratic constraint, and the rank minimization problem is written as:

 P1': min_{X ∈ Cm×n} rank(X)  subject to  ∥AX − b∥2 ≤ η.

Let X* denote a minimizer of P1'. In this case, the approximation X̂ produced by ADMiRA is not necessarily equal to X*, but by Theorem IV.1 the distance between the two is bounded in terms of the unrecoverable energy for every r that satisfies the ADMiRA assumptions.

## V Properties of the Rank-Restricted Isometry

We introduce and prove a number of properties of the rank-restricted isometry. These properties serve as key tools for proving the performance guarantees for ADMiRA in this paper. These properties further extend the analogy between the sparse vector and the low-rank matrix approximation problems (P3 and P2, respectively), and are therefore also of interest in their own right. The proofs are contained in the Appendix.

###### Proposition V.1

The rank-restricted isometry constant δr(A) is nondecreasing in r.

An operator satisfying the R-RIP satisfies, as a consequence, a number of other properties when composed with other linear operators defined by the atomic decomposition.

###### Definition V.2

Given a set Ψ = {ψ1, …, ψ|Ψ|} of atoms in O, define a linear operator LΨ : C|Ψ| → Cm×n by

 LΨ α = Σ_{k=1}^{|Ψ|} αk ψk,  ∀α ∈ C|Ψ|. (7)

It follows from (7) that the adjoint operator LΨ* : Cm×n → C|Ψ| is given by

 (LΨ* X)k = ⟨X, ψk⟩,  ∀k = 1, …, |Ψ|, ∀X ∈ Cm×n. (8)

Note that the operator composition A LΨ admits a matrix representation. Its pseudo-inverse is denoted by (A LΨ)†.

###### Remark V.3

If Ψ is an orthonormal set, then LΨ is an isometry and PΨ = LΨ LΨ*. If Ψ is a set of atoms in O, then rank(LΨ α) ≤ |Ψ| for all α ∈ C|Ψ|.
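Definition V.2 and Remark V.3 can be made concrete in a few lines (a sketch; atoms are stored as explicit m×n arrays):

```python
import numpy as np

def L_Psi(Psi, alpha):
    """Operator (7): the linear combination sum_k alpha_k * psi_k."""
    return sum(a * psi for a, psi in zip(alpha, Psi))

def L_Psi_adj(Psi, X):
    """Adjoint (8): the vector of inner products <X, psi_k>."""
    return np.array([np.vdot(psi, X) for psi in Psi])
```

For an orthonormal set Ψ, the composition LΨ* LΨ is the identity on C|Ψ|, i.e., LΨ is an isometry as stated in Remark V.3.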

###### Proposition V.4

Suppose that the linear operator A has rank-restricted isometry constant δr(A). Let Ψ be a set of atoms in O such that |Ψ| ≤ r. Then

 ∥PΨ A* b∥F ≤ √(1 + δr(A)) ∥b∥2,  ∀b ∈ Cp. (9)
###### Proposition V.5

Suppose that the linear operator A has rank-restricted isometry constant δr(A). Let Ψ be a set of atoms in O and let X ∈ Cm×n satisfy |Ψ| + rank(X) ≤ r. Then

 ∥PΨ A* A X∥F ≤ (1 + δr(A)) ∥X∥F. (10)
###### Proposition V.6

Suppose that the linear operator A has rank-restricted isometry constant δr(A). Let Ψ be a set of atoms in O such that |Ψ| ≤ r, and let P be a projection operator that commutes with PΨ. Then

 (1 − δr(A)) ∥P PΨ X∥F ≤ ∥P PΨ A* A P PΨ X∥F,  ∀X ∈ Cm×n. (11)

The following rank-restricted orthogonality property for the matrix case is analogous to the sparsity-restricted orthogonality property for the vector case (Lemma 2.1 in [4]).

###### Proposition V.7

Suppose that the linear operator A has rank-restricted isometry constant δr(A). Let X, Y ∈ Cm×n satisfy rank(X) + rank(Y) ≤ r and ⟨X, Y⟩ = 0. Then

 |⟨AX, AY⟩| ≤ √2 δr(A) ∥X∥F ∥Y∥F. (12)
###### Remark V.8

For the vector case, the representation of a vector x in terms of the standard basis of Cn determines supp(x). Let J ⊂ {1, …, n} be arbitrary. Then the following properties hold: (i) the projection operators PJ and Psupp(x) commute; and (ii) PJ x is s-sparse (or sparser) if x is s-sparse. These properties follow from the orthogonality of the standard basis. Proposition 3.2 in [13], corresponding in the vector case to our Proposition V.7, requires these two properties. However, the analogues of properties (i) and (ii) do not hold for the matrix case. Indeed, for X ∈ Cm×n and a set of atoms Ψ, the projection operators PΨ and Patoms(X) do not commute in general, and rank(PΨ X) can be greater than rank(X). Proposition V.7 is a stronger version of the corresponding result for the vector case (Proposition 3.2 in [13]) in the sense that it requires a weaker condition (orthogonality between two low-rank matrices), which can be satisfied without the analogues of properties (i) and (ii).

###### Corollary V.9

Suppose that linear operator has the rank-restricted isometry constant . If sets of atoms in and matrix satisfy , , and , then

 (13)
###### Remark V.10

For the real matrix case, Proposition V.7 can be improved by dropping the constant √2. This improvement is achieved by replacing the parallelogram identity in the proof with its version for the real scalar field. The same argument applies to Corollary V.9.

Finally, we relate the R-RIP to the nuclear norm, extending the analogous result [13] from the s-sparse vector case to the rank-r matrix case.

###### Proposition V.11

If a linear operator A satisfies

 ∥AX∥22 ≤ (1 + δr(A)) ∥X∥2F, (14)

for all X with rank(X) ≤ r, then

 ∥AX∥2 ≤ √(1 + δr(A)) (∥X∥F + (1/√r) ∥X∥*) (15)

for all X ∈ Cm×n.

## VI Proof of Theorem IV.1

### VI-A Exactly Low Rank Matrix Case

###### Theorem VI.1

Assume rank(X0) ≤ r and b = AX0 + ν in (5). Let X̂k denote the estimate of X0 in the k-th iteration of ADMiRA. Then for each k ≥ 0, X̂k satisfies the following recursion:

 ∥X0 − X̂k+1∥F ≤ 0.5 ∥X0 − X̂k∥F + 6.5 ∥ν∥2.

From the above relation, it follows that

 ∥X0 − X̂k∥F ≤ 2−k ∥X0∥F + 13 ∥ν∥2,  ∀k ≥ 0.

Theorem VI.1 is proved by applying a sequence of lemmata. We generalize the proof of the performance guarantee for CoSaMP [13] to the matrix case by applying the generalized analogy proposed in this paper. The flow and the techniques used in the proofs are similar to those in [13]. However, in the matrix case, there are additional unknowns in the form of the singular vectors. Therefore, the generalization of the proofs in [13] to the matrix case is not straightforward and the proofs are sufficiently different from those for the vector case to warrant detailed exposition. The main steps in the derivation of the performance guarantee are stated in this section and the detailed proofs are in the Appendix.

For the proof, we study the (k+1)-th iteration, starting from the result of the k-th iteration. Let X0 denote the true solution, with rank at most r. The matrix X̂ denotes X̂k, the estimate of X0 in the k-th (previous) iteration, and Ψ̂ is the set of orthonormal atoms obtained in the previous iteration. From X̂, we compute the proxy matrix A*(b − AX̂). The set Ψ' is the solution of the following low-rank approximation problem:

 Ψ' ≜ argmax_Ψ {∥PΨ A*(b − AX̂)∥F : Ψ ⊂ O, |Ψ| ≤ 2r}
###### Lemma VI.2

Let b = AX0 + ν in (5). Then

 ∥P⊥Ψ'(X0 − X̂)∥F ≤ 0.24 ∥X0 − X̂∥F + 2.13 ∥ν∥2

Lemma VI.2 shows that, subject to the rank-restricted isometry property, the set of atoms Ψ' chosen in Step 4 of ADMiRA is a good set: it captures most of the energy of the atoms in X0 that were not captured by Ψ̂, and the effects of additive measurement noise are bounded by a small constant. In other words, the algorithm is guaranteed to make good progress in this step.

###### Lemma VI.3

Let b = AX0 + ν, and let Ψ̂ and Ψ' be sets of atoms in O such that |Ψ̂| ≤ r, |Ψ'| ≤ 2r, and X̂ ∈ span(Ψ̂). Let Ψ̃ = Ψ̂ ∪ Ψ'. Then

 ∥P⊥Ψ̃ X0∥F ≤ ∥P⊥Ψ'(X0 − X̂)∥F.

Lemma VI.3 shows that the augmented set of atoms Ψ̃ produced in Step 5 of the algorithm is at least as good in explaining the unknown X0 as the set Ψ' was in explaining the part of X0 not captured by the estimate X̂ from the previous iteration.

###### Lemma VI.4

Let b = AX0 + ν in (5) and let Ψ̃ be a set of atoms in O with |Ψ̃| ≤ 3r. Then

 ˜X=argminX{∥b−AX∥2: X∈span(˜Ψ)} (16)

satisfies

 ∥X0 − X̃∥F ≤ 1.04 ∥P⊥Ψ̃ X0∥F + 1.02 ∥ν∥2.

Lemma VI.4 shows that the least squares step, Step 6 of the algorithm, performs almost as well as one could with A equal to an identity operator: because X̃ is restricted to span(Ψ̃), it is impossible to recover the components of X0 orthogonal to span(Ψ̃). Hence, the first constant cannot be smaller than 1. A value of 1 for the second constant, the noise gain, would correspond to a perfectly conditioned system.

###### Lemma VI.5

Let b = AX0 + ν in (5) and let X̃r denote the best rank-r approximation of X̃, i.e.,

 ˜Xr=argminX{∥˜X−X∥F: rank(X)≤r}.

Then

 ∥X0 − X̃r∥F ≤ 2 ∥X0 − X̃∥F.

As expected, reducing the rank of the estimate from at most 3r to r, to produce X̃r, increases the approximation error. However, Lemma VI.5 shows that this increase is moderate: by no more than a factor of 2.
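The factor of 2 in Lemma VI.5 follows from the triangle inequality and Eckart–Young (the truncation X̃r is at least as close to X̃ as the rank-r matrix X0 is), and it is easy to confirm numerically (our sketch):

```python
import numpy as np

def best_rank_r(X, r):
    """Best rank-r approximation by SVD truncation (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def truncation_blowup(X0, X_tilde, r):
    """||X0 - (X_tilde)_r||_F / ||X0 - X_tilde||_F.

    At most 2 whenever rank(X0) <= r, by the triangle inequality and
    ||X_tilde - (X_tilde)_r||_F <= ||X_tilde - X0||_F.
    """
    return (np.linalg.norm(X0 - best_rank_r(X_tilde, r))
            / np.linalg.norm(X0 - X_tilde))
```

Random rank-r targets perturbed by noise never exceed the factor-2 bound.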

The update X̂k+1 = X̃r completes the (k+1)-th iteration. Combining the results of the lemmata provides the proof of Theorem VI.1.

###### Proof:

(Theorem VI.1)

 ∥X0 − X̂k+1∥F = ∥X0 − X̃r∥F
 ≤ 2 ∥X0 − X̃∥F (Lemma VI.5)
 ≤ 2 (1.04 ∥P⊥Ψ̃ X0∥F + 1.02 ∥ν∥2) (Lemma VI.4)
 ≤ 2.08 ∥P⊥Ψ'(X0 − X̂k)∥F + 2.04 ∥ν∥2 (Lemma VI.3)
 ≤ 2.08 (0.24 ∥X0 − X̂k∥F + 2.13 ∥ν∥2) + 2.04 ∥ν∥2 (Lemma VI.2)
 ≤ 0.5 ∥X0 − X̂k∥F + 6.5 ∥ν∥2.

This recursion, together with the fact that X̂0 = 0, provides the final result.

### VI-B General Matrix Case

Theorem IV.1 is proved by combining Theorem VI.1 with the following lemma, which shows how to convert the mismodeling error (the deviation of X0 from a low-rank matrix) into an equivalent additive measurement noise with a quantified norm.

###### Lemma VI.6

Let X0 be an arbitrary matrix in Cm×n. The measurement b = AX0 + ν can also be represented as b = AX0,r + ν̃, where ∥ν̃∥2 ≤ √(1 + δr(A)) ϵ.

###### Proof:

Let ν̃ = A(X0 − X0,r) + ν. Then b = AX0,r + ν̃ and

 ∥ν̃∥2 ≤ ∥A(X0 − X0,r)∥2 + ∥ν∥2 ≤ √(1 + δr(A)) (∥X0 − X0,r∥F + (1/√r) ∥X0 − X0,r∥*) + ∥ν∥2,

where the last inequality holds by Proposition V.11. This implies ∥ν̃∥2 ≤ √(1 + δr(A)) ϵ.

###### Proof:

(Theorem IV.1) Let X0 be an arbitrary matrix in Cm×n. The measurement is given by b = AX0,r + ν̃, where ν̃ is defined in Lemma VI.6. By Theorem VI.1,

 ∥X0,r − X̂k+1∥F ≤ 0.5 ∥X0,r − X̂k∥F + 6.5 ∥ν̃∥2.

Applying the triangle inequality and the above inequality,

 ∥X0 − X̂k+1∥F ≤ ∥X0,r − X̂k+1∥F + ∥X0 − X0,r∥F ≤ 0.5 ∥X0,r − X̂k∥F + 6.5 ∥ν̃∥2 + ∥X0 − X0,r∥F.

Using the upper bound on ∥ν̃∥2 from Lemma VI.6, together with ∥X0,r − X̂k∥F ≤ ∥X0 − X̂k∥F + ∥X0 − X0,r∥F, yields

 ∥X0−ˆXk+1∥F ≤ < 0.5∥X0−ˆXk∥F+8ϵ,

where ϵ is the unrecoverable energy.

## VII Required Number of Iterations

Theorem IV.2 provides a uniform bound on the number of iterations required to achieve the guaranteed approximation accuracy. In addition to this uniform iteration bound, Theorem VII.6 in this section shows that even faster convergence may be expected for matrices with clustered singular values.

In the analysis of the number of iterations, only the distribution of the singular values of the matrices involved matters; the singular vectors do not play any role. As a consequence, the proofs for the vector case (CoSaMP) and the matrix case (ADMiRA) are very similar, and the corresponding bounds on the number of iterations coincide. However, for completeness, we provide the proofs for the matrix case.

###### Definition VII.1

Given X ∈ Cm×n, atoms(X) is defined in (1). We define the atomic bands of X by

 Bj ≜ {ψ ∈ atoms(X) : 2−(j+1) ∥X∥2F < ∥Pψ X∥2F ≤ 2−j ∥X∥2F},  for j ∈ Z+,

where Z+ denotes the set of nonnegative integers. Note that the atomic bands are disjoint subsets of atoms(X), which is an orthonormal set of atoms in O, and therefore the atomic bands are mutually orthogonal. From the atomic bands, the profile of X is defined as the number of nonempty atomic bands, i.e.,

 profile(X)≜|{j: Bj≠∅}|. (17)

From the definition, profile(X) ≤ rank(X).

The atomic bands and profile(X) admit a simple interpretation in terms of the spectrum of X. Let atoms(X) = {ψk} be ordered so that ∥Pψ1 X∥F ≥ ∥Pψ2 X∥F ≥ ⋯. Then ∥Pψk X∥2F = σ2k, where σk is the k-th singular value of X, in decreasing order. Let

 σ̃2k = σ2k / ∥X∥2F.

Then ψk ∈ Bj if and only if 2−(j+1) < σ̃2k ≤ 2−j. In other words, Bj contains the atoms ψk corresponding to normalized singular values falling in a one octave interval (“bin”). The quantity profile(X) then is the number of such occupied octave bins, and measures the spread of the singular values of X on a log scale.
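The profile is a one-liner once the spectrum is available: the bin index of σ̃k², with 2−(j+1) < σ̃k² ≤ 2−j, is floor(−log2 σ̃k²). A sketch:

```python
import numpy as np

def profile(X, tol=1e-12):
    """Number of occupied octave bins of the normalized squared singular
    values (Definition VII.1); always at most rank(X)."""
    s = np.linalg.svd(X, compute_uv=False)
    s = s[s > tol]                       # nonzero spectrum only
    t = s ** 2 / np.sum(s ** 2)          # normalized squared singular values
    bins = np.floor(-np.log2(t)).astype(int)
    return len(set(bins.tolist()))
```

A flat spectrum occupies a single bin, while widely spread singular values occupy one bin per octave of spread.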

###### Remark VII.2

For the vector case, the term analogous to the atomic band is the component band [19] defined by

 Bj≜{k∈{1,…,n}: 2−(j+1)∥x∥22<|xk|2≤2−j∥x∥22},for j∈Z+,

for x ∈ Cn.

First, the number of iterations for the exactly low-rank case is bounded by the following theorem.

###### Theorem VII.3

Let X0 be a rank-r matrix and let t = profile(X0), where profile is defined in (17). Then after at most

 t log4/3(1 + 4.3 √(r/t)) + 6

iterations, the estimate X̂ produced by ADMiRA satisfies

 ∥X0 − X̂∥F ≤ 15 ∥ν∥2.

We introduce additional notation for the proof of Theorem VII.3. Let X̂k denote the estimate at the k-th iteration of ADMiRA. For a nonnegative integer j, we define the auxiliary matrix

 Yj ≜ Σ_{k≥j} PBk X0.

Then Yj satisfies

 ∥Yj∥2F ≤ Σ_{k≥j} 2−k ∥X0∥2F · |Bk|. (18)

The proof of Theorem VII.3 proceeds by a sequence of lemmata. The first lemma presents two possibilities in each iteration of ADMiRA: if the iteration is successful, the approximation error is small; otherwise, the approximation error is dominated by the unidentified portion of the matrix, and the approximation error in the next iteration decreases by a constant ratio.

Let