New Algorithms for Learning Incoherent and Overcomplete Dictionaries
Abstract
In sparse recovery we are given a matrix A ("the dictionary") and a vector of the form Ax where x is sparse, and the goal is to recover x. This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super-resolution, the dictionary A is unknown and has to be learned from random examples of the form y = Ax where x is drawn from an appropriate distribution; this is the dictionary learning problem. In most settings, A is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of [48], who gave an algorithm for the undercomplete case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries, which have been a central object of study since they were introduced in seminal work of [18]. In particular, a dictionary A is μ-incoherent if each pair of columns has inner product at most μ/√n.
The algorithm makes natural stochastic assumptions about the unknown sparse vector x, which can contain up to nearly √n/μ nonzero entries (up to a small polynomial and logarithmic slack). This is close to the best allowed by the best sparse recovery algorithms even if one knows the dictionary exactly. Moreover, both the running time and sample complexity depend on log(1/ε), where ε is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on 1/ε, and this is necessary.
1 Introduction
Finding sparse representations for data (signals, images, natural language) is a major focus of computational harmonic analysis [20, 41]. This requires having the right dictionary A for the dataset, which allows each data point to be written as a sparse linear combination of the columns of A. For images, popular choices for the dictionary include sinusoids, wavelets, ridgelets, curvelets, etc. [41], and each one is useful for different types of features: wavelets for impulsive events, ridgelets for discontinuities in edges, curvelets for smooth curves, etc. It is common to combine such hand-designed bases into a single dictionary, which is "redundant" or "overcomplete" because it has more columns than rows. This can allow sparse representation even if an image contains many different "types" of features jumbled together. In machine learning, dictionaries are also used for feature selection [45] and for building classifiers on top of sparse coding primitives [35].
In many settings hand-designed dictionaries do not do as well as dictionaries that are fit to the dataset using automated methods. In image processing such discovered dictionaries are used to perform denoising [21], edge detection [40], super-resolution [52] and compression. The problem of discovering the best dictionary for a dataset is called dictionary learning, and is also referred to as sparse coding in machine learning. Dictionary learning is also a basic building block in the design of deep learning systems [46]. See [3, 20] for further applications. In fact, the dictionary learning problem was identified by [44] as part of a study on internal image representations in the visual cortex. Their work suggested that basis vectors in learned dictionaries often correspond to well-known image filters such as Gabor filters.
Our goal is to design an algorithm for this problem with provable guarantees in the same spirit as recent work on nonnegative matrix factorization [7], topic models [8, 6] and mixture models [43, 12]. (We will later discuss why current algorithms in [39], [22], [4], [36], [38] do not come with such guarantees.) Designing such algorithms for dictionary learning has proved challenging. Even if the dictionary is completely known, it can be NP-hard to represent a vector as a sparse linear combination of the columns of A [16]. However, for many natural types of dictionaries the problem of finding a sparse representation is computationally easy. The pioneering work of [18], [17] and [29] (building on the uncertainty principle of [19]) presented a number of important examples (in fact, the ones we used above) of dictionaries that are incoherent, and showed that ℓ1-minimization can find a sparse representation in a known, incoherent dictionary if one exists.
Definition 1.1 (incoherent).
An n × m matrix A whose columns are unit vectors is μ-incoherent if for every pair of columns i ≠ j we have |⟨A_i, A_j⟩| ≤ μ/√n. We will refer to A as incoherent if μ is polylogarithmic in n.
A randomly chosen dictionary is incoherent with high probability (even if m is much larger than n). [18] gave many other important examples of incoherent dictionaries, such as one constructed from spikes and sines, as well as those built up from wavelets and sines, or even wavelets and ridgelets. There is a rich body of literature devoted to incoherent dictionaries (see additional references in [25]). [18] proved that given y = Ax, where x has k = O(√n/μ) nonzero entries, basis pursuit (solvable by a linear program) recovers x exactly and it is unique. [25] (and subsequently [50]) gave algorithms for recovering x even in the presence of additive noise. [49] gave a more general exact recovery condition (ERC) under which the sparse recovery problem for incoherent dictionaries can be algorithmically solved. All of these require k = O(√n/μ). In a foundational work, [13] showed that basis pursuit solves the sparse recovery problem for even larger k if A satisfies the weaker restricted isometry property [14]. Also, if A is a full-rank square matrix, then we can compute x from y = Ax trivially. But our focus here will be on incoherent and overcomplete dictionaries; extending these results to RIP matrices is left as a major open problem.
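As a hedged numerical illustration of the claim about random dictionaries (our own sketch, not code from the paper): the coherence of a matrix with unit columns is the largest off-diagonal entry of its Gram matrix, and for a random Gaussian dictionary the corresponding μ stays small even when m = 2n.

```python
import numpy as np

def coherence(A):
    """Return max_{i != j} |<A_i, A_j>| for a matrix with unit columns."""
    G = A.T @ A                      # Gram matrix of the columns
    np.fill_diagonal(G, 0.0)         # ignore the diagonal <A_i, A_i> = 1
    return np.abs(G).max()

rng = np.random.default_rng(0)
n, m = 256, 512                      # overcomplete: twice as many columns as rows
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)       # normalize columns to unit length

# The "mu" in the mu/sqrt(n) incoherence bound; for random A it is
# O(sqrt(log m)), far below sqrt(n).
mu = coherence(A) * np.sqrt(n)
```

For these dimensions μ comes out to a small single-digit constant, while √n = 16, consistent with the incoherence of random dictionaries.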
The main result in this paper is an algorithm that provably learns an unknown, incoherent dictionary A from random samples y = Ax, where x is a vector with at most k nonzero entries, for k nearly as large as √n/μ (up to a small constant and logarithmic factors depending on the setting). Hence we can allow almost as many nonzeros in the hidden vector as the best sparse recovery algorithms, which assume that the dictionary is known. The precise requirements that we place on the distributional model are described in Section 1.2. We can relax some of these conditions at the cost of increased running time or requiring x to be sparser. Finally, our algorithm can tolerate a substantial amount of additive noise, an important consideration in most applications including sparse coding, provided the noise is independent and uncorrelated with the dictionary.
1.1 Related Work
Algorithms used in practice
Dictionary learning is solved in practice by variants of alternating minimization. [39] gave the first approach; subsequent popular approaches include the method of optimal directions (MOD) of [22] and KSVD of [4]. The general idea is to maintain a guess for both A and the x's, and at every step either update the x's (using basis pursuit) or update A by, say, solving a least-squares problem. Provable guarantees for such algorithms have proved difficult because the initial guesses may be very far from the true dictionary, causing basis pursuit to behave erratically. Also, the algorithms could converge to a dictionary that is not incoherent, and thus unusable for sparse recovery. (In practice, these heuristics do often work.)
Algorithms with guarantees
An elegant paper of [48] shows how to provably recover A exactly if A has full column rank and each x has at most O(√n) nonzeros. However, requiring A to have full column rank precludes most interesting applications, where the dictionary is redundant and hence cannot have full column rank (see [18, 20, 41]). Moreover, the algorithm in [48] is not noise-tolerant.
After the initial announcement of this work, [2, 1] independently gave provable algorithms for learning overcomplete and incoherent dictionaries. Their first paper [2] requires the entries in x to be independent random variables. Their second [1] gives an algorithm (a version of alternating minimization) that converges to the correct dictionary given a good initial dictionary; such a good initialization can only be found using [2] in special cases, or more generally using this paper. Unlike our algorithms, theirs assume much more stringent bounds on the sparsity of x (assumption A4 in both papers), which are far from the limit of incoherent dictionaries. The main change from the initial version of our paper is that we have improved the dependence of our algorithms on the target accuracy ε from poly(1/ε) to log(1/ε) (see Section 5).
After this work, [11] gave a quasi-polynomial time algorithm for dictionary learning using the sum-of-squares SDP hierarchy. Their algorithm can output an approximate dictionary, under weaker assumptions, even when the sparsity is almost linear in the dimension.
Independent Component Analysis
When the entries of x are independent, algorithms for independent component analysis (ICA) [15] can recover A. [23] gave a provable algorithm that recovers A up to arbitrary accuracy, provided the entries in x are non-Gaussian (when x is Gaussian, A is only determined up to rotation anyway). Subsequent works considered the overcomplete case and gave provable algorithms even when m is polynomially larger than n [37, 28].
However, these algorithms are incomparable to ours since they rely on different assumptions (independence vs. sparsity). With the sparsity assumption, we can make much weaker assumptions on how x is generated. In particular, all these ICA algorithms require the support of the vector x to be at least 3-wise independent in the undercomplete case, and higher-order independent in the overcomplete case. Our algorithm only requires the support to have bounded moments, of order a large constant or even an order growing with the problem size, depending on the setting (see Definition 1.5). Also, because our algorithm relies on the sparsity constraint, we are able to get almost exact recovery in the noiseless case (see Theorem 1.4 and Section 5). This kind of guarantee is impossible for ICA without a sparsity assumption.
1.2 Our Results
A range of results are possible which trade off more assumptions against better performance. We give two illustrative ones: the first makes the most assumptions but has the best performance; the second has the weakest assumptions and somewhat worse performance. The theorem statements will be cleaner if we use asymptotic notation: the parameters n and m will go to infinity, and the constants denoted as "c" are arbitrary so long as they do not grow with these parameters.
First we define the class of distributions that the sparse vectors x must be drawn from. We will be interested in distributions on k-sparse vectors in R^m where each coordinate is nonzero with probability Θ(k/m) (the hidden constant can differ among coordinates).
Definition 1.2 (Distribution class and its moments).
The distribution is in this class if (i) each nonzero x_i has expectation 0 and magnitude in [1, C], where C = O(1); and (ii) conditioned on any subset of coordinates being nonzero, the values of those coordinates are independent of each other.
The distribution has bounded ℓ-wise moments if the probability that x is nonzero in any given subset S of ℓ coordinates is at most c^ℓ times the product of the individual probabilities Pr[x_i ≠ 0] for i ∈ S, where c = O(1).
Remark:
(i) The bounded moments condition trivially holds for any constant c if the set of nonzero locations is a uniformly random subset of size k. The values at these nonzero locations are allowed to be distributed very differently from one another. (ii) The requirement that nonzero x_i's be bounded away from zero in magnitude is similar in spirit to the Spike-and-Slab Sparse Coding (S3C) model of [27], which also encourages nonzero latent variables to be bounded away from zero to avoid degeneracy issues that arise when some coefficients are much larger than others. (iii) In the rest of the paper we will focus on the case C = 1; all the proofs generalize directly to larger C by losing constant factors in the guarantees.
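To make the model concrete, here is a minimal generator sketch (our own illustration, not the paper's): a uniformly random support of size k, which satisfies the bounded-moments condition trivially, and nonzero values with zero mean, random sign, and magnitude bounded away from zero (the parameter C below is the magnitude upper bound from Definition 1.2).

```python
import numpy as np

def sample_sparse_x(m, k, C=2.0, rng=None):
    """Draw a k-sparse vector in R^m: a uniformly random support of size k,
    with each nonzero value given a random sign and a magnitude in [1, C].
    Hence E[x_i] = 0 and every nonzero entry is bounded away from zero."""
    rng = rng or np.random.default_rng()
    x = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)
    signs = rng.choice([-1.0, 1.0], size=k)
    x[support] = signs * rng.uniform(1.0, C, size=k)
    return x

x = sample_sparse_x(m=100, k=5, rng=np.random.default_rng(1))
```

The conditional-independence requirement holds here because the values are drawn independently of the support and of each other.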
Because of symmetry in the problem, we can only hope to learn the dictionary A up to a permutation of its columns and sign flips. We say two dictionaries are column-wise ε-close if, after an appropriate permutation and sign flips, corresponding columns are within distance ε.
Definition 1.3.
Two dictionaries A and B are column-wise ε-close if there exists a permutation π and signs σ ∈ {±1}^m such that ‖A_i − σ_i B_{π(i)}‖ ≤ ε for every i.
Later, when we talk about two dictionaries that are close, we always assume the columns have been permuted and sign-flipped correctly, so that ‖A_i − B_i‖ ≤ ε for every i.
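The column-wise distance of Definition 1.3 can be computed directly for small dictionaries; the following sketch (our own, purely illustrative) brute-forces the best permutation, choosing the best sign flip per matched pair.

```python
import numpy as np
from itertools import permutations

def columnwise_distance(A, B):
    """min over permutations pi and signs sigma of
    max_i ||A_i - sigma_i * B_pi(i)||.  Brute force over permutations,
    so only practical for a small number of columns."""
    m = A.shape[1]
    # cost[i, j]: distance from A_i to B_j with the better of the two signs
    d_plus = np.linalg.norm(A[:, :, None] - B[:, None, :], axis=0)
    d_minus = np.linalg.norm(A[:, :, None] + B[:, None, :], axis=0)
    cost = np.minimum(d_plus, d_minus)
    return min(max(cost[i, pi[i]] for i in range(m))
               for pi in permutations(range(m)))
```

For example, permuting and sign-flipping the columns of a dictionary yields a column-wise distance of zero, as the definition requires.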
Theorem 1.4.
There is a polynomial-time algorithm to learn a μ-incoherent dictionary A from random examples. With high probability the algorithm returns a dictionary that is column-wise ε-close to A, given random samples of the form y = Ax, where x is chosen according to some distribution in the class of Definition 1.2 and the sparsity is in the allowed range:
• If the distribution has bounded ℓ-wise moments, where ℓ is a universal constant, then the algorithm requires polynomially many samples and runs in polynomial time.
• If the sparsity is somewhat smaller and the distribution has bounded ℓ-wise moments, where ℓ is a constant depending only on the sparsity parameter, then the algorithm again requires polynomially many samples and runs in polynomial time.
• Even if each sample is of the form y = Ax + η, where the η's are independent spherical Gaussian noise vectors with bounded standard deviation, the algorithms above still succeed provided the number of samples is increased accordingly. In particular, the sample complexities in the two cases above grow by a poly(1/ε) factor.
Remark:
The sparsity that our algorithm can tolerate (the minimum of the two bounds above) approaches the sparsity that the best known sparse recovery algorithms can handle even when A is known.
Although the running time and sample complexity of the algorithm are relatively large polynomials, there are many ways to optimize the algorithm. See the discussion in Section 7.
Now we describe the other result, which requires fewer assumptions on how the samples are generated but more stringent bounds on the sparsity:
Definition 1.5 (Distribution class ).
A distribution is in this class if (i) the events {x_i ≠ 0} have weakly bounded second and third moments, in the sense that the probabilities of pairs and triples of coordinates being jointly nonzero are at most a constant factor times the corresponding products of individual probabilities; and (ii) each nonzero x_i has magnitude in [1, C], where C = O(1).
Theorem 1.6.
There is a polynomial-time algorithm to learn a μ-incoherent dictionary A from random examples of the form y = Ax, where x is chosen according to some distribution in the class of Definition 1.5. If the sparsity is small enough and we are given sufficiently many samples, then the algorithm succeeds with high probability, and the output dictionary is column-wise ε-close to the true dictionary. The algorithm runs in polynomial time, and is also noise-tolerant as in Theorem 1.4.
1.3 Proof Outline
The key observation in the algorithm is that we can test whether two samples share a common dictionary element (see Section 2). Given this information, we can build a graph whose vertices are the samples and whose edges correspond to pairs of samples that share a dictionary element. A large cluster in this graph corresponds to the set of all samples whose supports contain a fixed coordinate i. In Section 3 we give an algorithm for finding all the large clusters. Then we show how to recover the dictionary given the clusters in Section 4. This allows us to get a rough estimate of the dictionary matrix. Section 5 gives an algorithm for refining the solution in the noiseless case. The three main parts of the techniques are:
Overlapping Clustering: Heuristics such as MOD [22] or KSVD [4] have a cyclic dependence: if we knew A, we could solve for the x's, and if we knew all of the x's we could solve for A. Our main idea is to break this cycle by (without knowing A) finding all of the samples where x_i ≠ 0. We can think of this set as a cluster C_i. Although our strategy is to cluster a random graph, what is crucial is that we are looking for an overlapping clustering, since each sample belongs to k clusters! Many of the algorithms which have been designed for finding overlapping clusterings (e.g. [9], [10]) have a poor dependence on the maximum number of clusters that a node can belong to. Instead, we give a simple combinatorial algorithm based on triplet (or higher-order) tests that recovers the underlying, overlapping clustering. In order to prove correctness of our combinatorial algorithm, we rely on tools from discrete geometry, namely the piercing number [42, 5].
Recovering the Dictionary: Next, we observe that there are a number of natural algorithms for recovering the dictionary once we know the clusters C_i. We can think of a random sample from C_i as the result of applying a filter to the samples we are given, keeping only those samples where x_i ≠ 0. The claim is that this distribution will have a much larger variance along the direction A_i than along other directions, and this allows us to recover the dictionary either using a certain averaging algorithm, or by computing the largest singular vector of the samples in C_i. In fact, this latter approach is similar to KSVD [4] and hence our analysis yields insights into why these heuristics work.
Fast Convergence: The above approach yields provable algorithms for dictionary learning whose running time and sample complexity depend polynomially on 1/ε. However, once we have a suitably good approximation to the true dictionary, can we converge at a much faster rate? We analyze a simple alternating-minimization algorithm, IterativeAverage, and we derive a formula for its updates that lets us analyze it as a noisy version of the matrix power method (see Lemma 5.6). This analysis is inspired by recent work on analyzing alternating minimization for the matrix completion problem [34, 32], and we obtain algorithms whose running time and sample complexity depend on log(1/ε). Hence we get algorithms that converge rapidly to the true dictionary while simultaneously being able to handle almost the same sparsity as in the sparse recovery problem where A is known!
NOTATION: Throughout this paper, we will use y to denote a sample and x the sparse vector that generated it, i.e. y = Ax. Let supp(x) denote the support of x. For a vector x, let x_i be its i-th coordinate. For a matrix A (especially the dictionary matrix), we use A_i to denote the i-th column (the i-th dictionary element). Also, for a set S, we use A_S to denote the submatrix of A with columns in S. We will use ‖·‖_F to denote the Frobenius norm and ‖·‖ to denote the spectral norm. Moreover, we will use D to denote the distribution on sparse vectors that is used to generate our samples, and D_i will denote the restriction of this distribution to vectors x where x_i ≠ 0. When we are working with a graph G, we will use Γ(u) to denote the set of neighbors of u in G. Throughout the paper, "with high probability" means the probability is at least 1 − n^{−c} for a large enough constant c.
2 The Connection Graph
In this section we show how to test whether two samples share a common dictionary element, i.e., whether the supports of the generating vectors x and x' intersect. The idea is to check the inner product of y and y', which decomposes into a sum of inner products of dictionary elements:
⟨y, y'⟩ = Σ_{i ∈ supp(x)} Σ_{j ∈ supp(x')} x_i x'_j ⟨A_i, A_j⟩.
If the supports are disjoint, then each of the terms above is small, since |⟨A_i, A_j⟩| ≤ μ/√n by the incoherence assumption. To prove the sum is indeed small, we will appeal to the classic Hanson-Wright inequality:
Theorem 2.1 (HansonWright).
[31] Let z be a vector of independent, sub-Gaussian random variables with mean zero and variance one. Let M be a symmetric matrix. Then
Pr[|z^⊤ M z − tr(M)| > t] ≤ 2 exp(−c · min(t²/‖M‖_F², t/‖M‖)).
This will allow us to determine if supp(x) and supp(x') intersect, but with false negatives:
Lemma 2.2.
Suppose k ≤ c√n/(μ log n) for a small enough constant c (depending on C in Definition 1.2). Then if supp(x) and supp(x') are disjoint, with high probability |⟨y, y'⟩| < 1/2.
Proof.
Let B and B' be the submatrices resulting from restricting A to the locations where x and, respectively, x' are nonzero. Set M to be the matrix whose top-left and bottom-right submatrices are zero, and whose top-right and bottom-left submatrices are B^⊤B' and (B^⊤B')^⊤ respectively. Here we think of the vector z as a length-2k vector whose first k entries are the nonzero entries of x and whose last k entries are the nonzero entries of x'. By construction, we have that z^⊤Mz = 2⟨y, y'⟩.
We can now appeal to the Hanson-Wright inequality (above). Note that since supp(x) and supp(x') do not intersect, the entries in M are each at most μ/√n in magnitude, so the Frobenius norm of M is at most 2kμ/√n. This is also an upper bound on the spectral norm of M. Setting t = 1, for k ≤ c√n/(μ log n) both terms in the minimum are Ω(log n), and this implies the lemma. ∎
We will also make use of a weaker bound (but whose conditions allow us to make fewer distributional assumptions):
Lemma 2.3.
If k² ≤ √n/(2μ), then |⟨y, y'⟩| ≥ 1/2 implies that supp(x) and supp(x') intersect.
Proof.
Suppose supp(x) and supp(x') are disjoint. Then the following upper bound holds:
|⟨y, y'⟩| ≤ Σ_{i ∈ supp(x)} Σ_{j ∈ supp(x')} |x_i| |x'_j| |⟨A_i, A_j⟩| ≤ k²μ/√n < 1/2,
and this implies the lemma. ∎
This only works up to k ≈ n^{1/4}. In comparison, the stronger bound of Lemma 2.2 makes use of the randomness of the signs of x and works up to k ≈ √n.
In our algorithm, we build the following graph:
Definition 2.4.
Given N samples y^{(1)}, …, y^{(N)}, build a connection graph on N nodes in which y^{(i)} and y^{(j)} are connected by an edge if and only if |⟨y^{(i)}, y^{(j)}⟩| ≥ 1/2.
This graph will "miss" some edges, since a pair of samples whose supports intersect does not necessarily meet the above condition. But by Lemma 2.2, with high probability this graph will not have any false positives:
Corollary 2.5.
With high probability, each edge present in the connection graph corresponds to a pair whose supports have nonempty intersection.
Consider a sample y for which there is an edge to both y' and y''. This means that some coordinate lies in both supp(x) and supp(x'), and some coordinate lies in both supp(x) and supp(x''). However, the challenge is that we do not immediately know whether all three supports have a common intersection or not.
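The construction above can be sketched in a few lines (our own illustration, not the paper's code); for clarity the toy example uses an orthonormal dictionary, so inner products of samples with disjoint supports are exactly zero and the threshold test is exact.

```python
import numpy as np

def connection_graph(Y, tau=0.5):
    """Boolean adjacency matrix: samples i, j (columns of Y) are
    connected iff |<y_i, y_j>| exceeds the threshold tau."""
    G = np.abs(Y.T @ Y) > tau
    np.fill_diagonal(G, False)       # no self-loops
    return G

# Tiny example with the identity dictionary, so <y, y'> = <x, x'> exactly.
n = 8
X = np.zeros((n, 3))
X[[0, 1], 0] = 1.0                   # sample 0 has support {0, 1}
X[[1, 2], 1] = 1.0                   # sample 1 has support {1, 2}: shares coord 1
X[[3, 4], 2] = 1.0                   # sample 2 has support {3, 4}: disjoint
Y = np.eye(n) @ X
G = connection_graph(Y)
```

With an incoherent (rather than orthonormal) dictionary the disjoint-support inner products are merely small instead of zero, which is exactly what Lemmas 2.2 and 2.3 control.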
3 Overlapping Clustering
Our goal in this section is to determine which samples satisfy x_i ≠ 0 just from the connection graph. To do this, we will identify a combinatorial condition that allows us to decide whether or not three samples y^{(1)}, y^{(2)} and y^{(3)} (with supports S₁, S₂ and S₃ respectively) have a common intersection. From this condition, it is straightforward to give an algorithm that correctly groups together all of the samples with x_i ≠ 0. In order to reduce the number of letters used we will focus on the first three samples, although all the claims and lemmas hold for all triples.
Suppose we are given two samples y^{(1)} and y^{(2)} with supports S₁ and S₂ where S₁ ∩ S₂ = {i}. We will prove that this pair can be used to recover all the samples y for which x_i ≠ 0. This will follow because we will show that the expected number of common neighbors of y^{(1)}, y^{(2)} and a third sample will be large if the third support contains i, and otherwise will be small. So throughout this subsection let us consider a sample y^{(3)} and let S₃ be its support. We will need the following elementary claim.
Claim 3.1.
Suppose i ∈ S₁ ∩ S₂ ∩ S₃. Then the probability that a fresh sample y is a common neighbor of y^{(1)}, y^{(2)} and y^{(3)} in the connection graph is Ω(k/m).
Proof.
Using ideas similar to Lemma 2.2, we can show that if supp(x) ∩ (S₁ ∪ S₂ ∪ S₃) = {i} (that is, the new sample intersects the three supports only at i), then y is connected to all three samples in the connection graph.
Now let E be the event that supp(x) ∩ (S₁ ∪ S₂ ∪ S₃) = {i}. Clearly, when the event E happens, y is a common neighbor of y^{(j)} for all j ∈ {1, 2, 3}. The probability of E is at least
Pr[x_i ≠ 0] · (1 − Σ_{j ∈ (S₁ ∪ S₂ ∪ S₃) ∖ {i}} Pr[x_j ≠ 0 | x_i ≠ 0]) = Ω(k/m).
Here we used the bounded second-moment property for the conditional probabilities and a union bound. ∎
This claim establishes a lower bound on the expected number of common neighbors of a triple when the three supports have a common intersection. Next we establish an upper bound when they do not. Suppose S₁ ∩ S₂ ∩ S₃ = ∅. In principle we should be concerned that supp(x) could still intersect each of S₁, S₂ and S₃ in different locations. Let T₁ = S₂ ∩ S₃, T₂ = S₁ ∩ S₃ and T₃ = S₁ ∩ S₂.
Lemma 3.2.
Suppose that S₁ ∩ S₂ ∩ S₃ = ∅. Then the probability that supp(x) intersects each of S₁, S₂ and S₃ is at most O(k⁶/m³ + (|T₁| + |T₂| + |T₃|)k³/m²).
Proof.
We can break up the event whose probability we would like to bound into two (not necessarily disjoint) events: (1) supp(x) intersects each of S₁, S₂ and S₃ disjointly (i.e. it contains a point in S₁ but not in S₂ ∪ S₃, and similarly for the other sets); (2) supp(x) contains a point in the common intersection of two of the sets, and one point from the remaining set. Clearly if supp(x) intersects each of S₁, S₂ and S₃, then at least one of these two events must occur.
The probability of the first event is at most the probability that supp(x) contains at least one element from each of three disjoint sets of size at most k. The probability that supp(x) contains an element of just one such set is at most the expected intersection size, which is O(k²/m), and since the intersections of supp(x) with these sets are non-positively correlated (because the sets are disjoint), the probability of the first event can be bounded by O(k⁶/m³).
Similarly, for the second event: consider the probability that supp(x) contains an element in T₃ = S₁ ∩ S₂. Since S₁ ∩ S₂ ∩ S₃ = ∅, supp(x) must also contain an element in S₃ too. The expected intersection of supp(x) with T₃ has size O(|T₃|k/m) and the expected intersection of supp(x) with S₃ has size O(k²/m), and again the expectations are non-positively correlated since the two sets T₃ and S₃ are disjoint by assumption. Repeating this argument for the other pairs completes the proof of the lemma. ∎
Note that if the distribution has bounded higher-order moments, the probability that the supports of two samples intersect in at least ℓ elements is at most (O(k²/m))^ℓ. Hence we can assume that, with high probability, there is no pair of samples whose supports intersect in more than a constant number of locations. When the distribution only has bounded 3-wise moments, see Appendix A.
Let us quantitatively compare our lower and upper bounds: when the sparsity is below our threshold, the expected number of common neighbors of a triple with i ∈ S₁ ∩ S₂ ∩ S₃ is much larger than the expected number of common neighbors of a triple whose common intersection is empty. Under this condition, given enough samples, each triple with a common intersection will have a common-neighbor count above a suitable threshold, and each triple whose common intersection is empty will have a count below it.
Hence we can search for a triple with a common intersection as follows: find a pair of samples y and y' whose supports intersect, take a neighbor y'' of y in the connection graph (at random), and by counting the number of common neighbors of y, y' and y'' decide whether or not their supports have a common intersection.
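The triple test therefore reduces to counting common neighbors in the connection graph; a minimal sketch follows (our own illustration, with the threshold chosen by hand for the toy graph rather than by the formula in the analysis).

```python
import numpy as np

def common_neighbors(G, i, j, k):
    """Number of nodes adjacent to all three of i, j, k in the boolean
    adjacency matrix G."""
    return int(np.sum(G[i] & G[j] & G[k]))

def triple_test(G, i, j, k, threshold):
    """Declare that samples i, j, k have a common support intersection
    iff their common-neighbor count clears the threshold."""
    return common_neighbors(G, i, j, k) >= threshold

# Toy graph: nodes 3, 4, 5 are common neighbors of the triple (0, 1, 2),
# while node 6 neighbors only node 0.
G = np.zeros((7, 7), dtype=bool)
for t in (3, 4, 5):
    for s in (0, 1, 2):
        G[t, s] = G[s, t] = True
G[6, 0] = G[0, 6] = True
```

In the actual algorithm the threshold separates the Ω(Nk/m) count of Claim 3.1 from the much smaller count implied by Lemma 3.2.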
Definition 3.3.
We will call a pair of samples y and y' an identifying pair for coordinate i if the intersection of supp(x) and supp(x') is exactly {i}.
Theorem 3.4.
The output of OverlappingCluster is an overlapping clustering in which each set corresponds to some coordinate i and contains all samples y for which x_i ≠ 0. The algorithm runs in polynomial time and succeeds with high probability under the sparsity and sample-size conditions above.
Proof.
We can use Lemma 2.2 to conclude that each edge in the connection graph corresponds to a pair whose supports intersect. We can appeal to Lemma 3.2 and Claim 3.1 to conclude that, with high probability, each triple with a common intersection has a common-neighbor count above the threshold, and each triple without a common intersection has a count below it.
In fact, for a random edge (y, y'), the probability that the common intersection of supp(x) and supp(x') is exactly {i} for a particular coordinate i is Ω(1/m): we know that the supports intersect, the intersection has a constant probability of having size one, and in that case it is (nearly) uniformly distributed over the possible locations. Appealing to a coupon-collector argument, we conclude that if the inner loop is run Ω(m log m) times then the algorithm finds an identifying pair for each column with high probability.
Note that we may also find pairs that are not an identifying pair for any coordinate i. However, any such pair found by the algorithm must still have a common intersection. Consider for example a pair y, y' whose supports have a common intersection containing two coordinates i and j. Then we know that there is some other pair which is an identifying pair for i, and the set it produces is contained in the set produced by y, y'. (In fact this containment is strict, since the latter set also contains the set corresponding to an identifying pair for j.) Hence the second-to-last step in the algorithm, which deletes any found set that strictly contains another found set, will necessarily remove all such non-identifying pairs.
What is the running time of this algorithm? Building the connection graph takes time quadratic in the number of samples, and the inner loop contributes a further polynomial factor. Finally, the deletion step requires comparing the found sets pairwise: we delete a set if and only if there is a strictly smaller found set that it contains. This concludes the proof of correctness of the algorithm and its running time analysis. ∎
4 Recovering the Dictionary
4.1 Finding the Relative Signs
Here we show how to recover the column A_i once we have learned which samples have x_i ≠ 0. We will refer to this set of samples as the "cluster" C_i. The key observation is that if supp(x) and supp(x') intersect uniquely in index i, then the sign of ⟨y, y'⟩ is equal to the sign of x_i x'_i. If there are enough such pairs, we can determine not only which samples have x_i ≠ 0 but also which pairs of samples have x_i and x'_i of the same sign. This is the main step of the algorithm OverlappingAverage.
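A hedged sketch of the sign-alignment-and-average idea (our own simplification of OverlappingAverage, using an orthonormal dictionary so the sign test is exact): align every sample in the cluster against a reference sample via the sign of the inner product, then average.

```python
import numpy as np

def average_cluster(Y_cluster):
    """Align the signs of the cluster's samples (columns) against the
    first sample and return the normalized average.  When each sample's
    support intersects the reference's support only in coordinate i,
    sign(<y, y_ref>) = sign(x_i * x_ref_i), so the aligned average
    concentrates around +/- A_i."""
    ref = Y_cluster[:, 0]
    signs = np.sign(Y_cluster.T @ ref)
    v = (Y_cluster * signs).mean(axis=1)
    return v / np.linalg.norm(v)

# Toy cluster over the identity dictionary: every sample contains
# coordinate 0 (with either sign) plus one distinct spurious coordinate.
n = 8
E = np.eye(n)
cluster = np.column_stack([
    E[:, 0] + E[:, 1],
    -E[:, 0] + E[:, 2],
    E[:, 0] + E[:, 3],
    -E[:, 0] + E[:, 4],
    E[:, 0] + E[:, 5],
])
v = average_cluster(cluster)   # close to the true column e_0
```

The spurious coordinates average out because their signs are independent, which is exactly the mechanism the analysis below makes quantitative.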
Theorem 4.1.
If the input to OverlappingAverage is the true clustering (up to permutation), then the algorithm outputs a dictionary that is column-wise ε-close to A with high probability, under the sparsity and sample-size conditions above. Furthermore, the algorithm runs in polynomial time.
Intuitively, the algorithm works because the sets correctly identify samples with the same sign. This is summarized in the following lemma.
Lemma 4.2.
In Algorithm 2, the set computed for cluster C_i consists either exactly of the samples with x_i > 0 or exactly of the samples with x_i < 0.
Proof.
It suffices to prove the lemma at the start of Step 8, since this step only takes the complement of the set with respect to C_i. Appealing to Lemma 2.2, we conclude that if supp(x) and supp(x') intersect uniquely in coordinate i, then the sign of ⟨y, y'⟩ is equal to the sign of x_i x'_i. Hence when Algorithm 2 adds an element to the set, its i-th coordinate must have the same sign as that of the reference sample. What remains is to prove that each node is correctly labeled. We will do this by showing that for any such vertex there is a length-two path of labeled pairs connecting it to the reference, and this is true because the number of labeled pairs is large. We need the following simple claim:
Claim 4.3.
With high probability, any two clusters share at most O(Nk²/m²) nodes in common.
This follows since the probability that a given node is contained in any fixed pair of clusters is at most O(k²/m²). Then for any node y, we would like to lower bound the number of labeled pairs it participates in within C_i. Since y is in at most k other clusters, the number of pairs (y, y') with y' ∈ C_i that are not labeled for i is at most k · O(Nk²/m²).
Therefore, for a fixed node y, the pair (y, y') is labeled for at least a constant fraction of the other nodes y' in the cluster. Hence we conclude that for each pair of nodes y, y' in the cluster, the number of y'' for which both (y, y'') and (y'', y') are labeled is large, and so for every pair there is a labeled path of length two connecting them. ∎
Using this lemma, we are ready to prove that Algorithm 2 correctly learns all columns of A.
Proof.
We can invoke Lemma 4.2 and conclude that the set consists either of the samples in C_i with x_i > 0 or of those with x_i < 0, and the algorithm keeps whichever set is larger. Let us suppose that it is the former. Then each y in the set is an independent sample from the distribution conditioned on x_i > 0, which we call D_i⁺. We have that the expectation of y under D_i⁺ is a constant multiple of A_i, with the constant in [1, C], because the nonzero coordinates other than i have zero mean and |x_i| ∈ [1, C].
Let us compute the variance:
Note that there are no cross terms because the signs of the coordinates are independent. Furthermore, we can bound the norm of each summand via incoherence. We conclude that if the cluster is large enough, then with high probability the empirical average is ε-close to its expectation, using the vector Bernstein inequality ([30], Theorem 12). The condition that the set contains at least half the cluster holds because we set it to itself or its complement based on which one is larger. ∎
4.2 An Approach via SVD
Here we give an alternative algorithm for recovering the dictionary, based instead on SVD. Intuitively, if we take all the samples whose support contains index i, then every such sample has a component along the direction A_i. Therefore the direction A_i should have the largest variance, and can be found by SVD. The advantage is that methods like KSVD, which are quite popular in practice, also rely on finding directions of maximum variance, so the analysis we provide here yields insight into why these approaches work. However, the crucial difference is that we rely on finding the correct overlapping clustering in the first step of our dictionary learning algorithms, whereas KSVD and related approaches approximate it via their current guess for the dictionary.
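This intuition can be sketched directly (our own toy example, not the paper's OverlappingSVD): stack the cluster's samples as columns and take the top left singular vector; in the toy instance below the true column is e_0, and the direction of maximum variance recovers it approximately.

```python
import numpy as np

def recover_column_svd(Y_cluster):
    """Top left singular vector of the cluster's sample matrix.  The
    variance along A_i dominates because every sample in the cluster
    contains A_i (with either sign), while the other components vary."""
    U, _, _ = np.linalg.svd(Y_cluster, full_matrices=False)
    return U[:, 0]

# Toy cluster over the identity dictionary: each sample contains
# coordinate 0 plus one distinct spurious coordinate.
n = 8
E = np.eye(n)
cluster = np.column_stack([
    E[:, 0] + E[:, 1],
    -E[:, 0] + E[:, 2],
    E[:, 0] + E[:, 3],
    -E[:, 0] + E[:, 4],
    E[:, 0] + E[:, 5],
])
u = recover_column_svd(cluster)   # close to +/- e_0 (SVD sign is arbitrary)
```

The analysis below makes this separation of singular values quantitative for incoherent dictionaries.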
Let us fix some notation: let D_i be the distribution of y conditioned on x_i ≠ 0. Then once we have found the overlapping clustering, each cluster C_i is a set of random samples from D_i.
Definition 4.4.
Let σ_i² = E_{y∼D_i}[⟨y, A_i⟩²].
Note that σ_i² is the projected variance of D_i along the direction A_i. Our goal is to show that along any direction sufficiently far from ±A_i, the variance is strictly smaller.
Lemma 4.5.
The projected variance of D_i along any unit vector u is bounded as follows:
Proof.
Let u_∥ and u_⊥ be the components of u in the direction of A_i and perpendicular to A_i, respectively. Then we want to bound E_{y∼D_i}[⟨y, u⟩²]. Since the signs of the coordinates of x are independent, we can write
Since the cross terms vanish, we have:
Also, ⟨y, u_⊥⟩ only involves the components of the other dictionary elements along u_⊥. Let û be the unit vector in the direction of u_⊥. We can write
the relevant quantity in terms of ‖A_{−i}^⊤ û‖, where A_{−i} denotes the dictionary with the column A_i removed. The maximum of ‖A_{−i}^⊤ û‖ over unit vectors û is just the largest singular value of A_{−i}, which is the square root of the largest eigenvalue of A_{−i}^⊤ A_{−i}, which by the Gershgorin disk theorem (see e.g. [33]) is at most 1 + mμ/√n. Hence we can bound
Also, since ‖u_∥‖² + ‖u_⊥‖² = 1, we obtain:
and this concludes the proof of the lemma. ∎
Definition 4.6.
Let the resulting quantity be denoted δ, so the expression in Lemma 4.5 can be upper bounded by δ.
We will show that an approach based on SVD recovers the true dictionary up to an additive accuracy determined by the quantity defined above. Note that this accuracy parameter converges to zero as the size of the problem increases, but it is not a function of the number of samples. So unlike the algorithm in the previous subsection, we cannot make the error of this algorithm arbitrarily small by increasing the number of samples; on the other hand, this algorithm has the advantage that it succeeds at higher sparsity.
Corollary 4.7.
The maximum singular value of the second-moment matrix of D_i is at least σ_i², and the corresponding singular direction is close to A_i. Furthermore, the second largest singular value is strictly smaller.
Proof.
The bound in Lemma 4.5 is only an upper bound; however, the direction A_i has variance σ_i², and hence the direction of maximum variance must be close to A_i. Then we can appeal to the variational characterization of singular values (see [33]):
Then the bound on the second singular value implies the second part of the corollary. ∎
Since we have a lower bound on the separation between the first and second singular values of the second-moment matrix, we can apply Wedin's theorem and show that we can recover A_i approximately even in the presence of noise.
Theorem 4.8 (Wedin).
[51] Let M be a symmetric matrix and let M̃ = M + E, and furthermore let u and ũ be the first singular vectors of M and M̃ respectively. Then
sin θ(u, ũ) ≤ ‖E‖ / (σ₁(M) − σ₂(M) − ‖E‖).
Hence even if we do not have access to the true second-moment matrix but rather an approximation to it (e.g. an empirical covariance matrix computed from our samples), we can use the above perturbation bound to show that we can still recover a direction that is close to A_i, and in fact converges to A_i as we take more and more samples.
Theorem 4.9.
If the input to OverlappingSVD is the correct clustering, then the algorithm outputs a dictionary in which each column is within the stated additive accuracy of the corresponding true column, with high probability, under the sparsity and sample-size conditions above.
Proof.
Appealing to Theorem 3.4, we have that with high probability the call to OverlappingCluster returns the correct overlapping clustering. Then, given enough samples from the distribution D_i, the classic result of Rudelson implies that the computed empirical covariance matrix is close in spectral norm to the true covariance matrix [47]. This, combined with the separation of the first and second singular values established in Corollary 4.7 and Wedin's theorem (Theorem 4.8), implies that we recover each column of A up to the stated additive accuracy, and this implies the theorem. Note that since we only need to compute the first singular vector, this can be done via power iteration [26], and hence the bottleneck in the running time is the call to OverlappingCluster. ∎
4.3 Noise Tolerance
Here we elaborate on why the algorithm can tolerate noise, provided that the noise is uncorrelated with the dictionary (e.g. Gaussian noise). The observation is that in constructing the connection graph, we only make use of the inner products between pairs of samples, the values of which are roughly preserved under various noise models. In turn, the overlapping clustering is a purely combinatorial algorithm that only makes use of the connection graph. Finally, we recover the dictionary using the singular value decomposition, which is well known to be stable under noise (e.g. Wedin's theorem, Theorem 4.8).
5 Refining the Solution
Earlier sections gave noise-tolerant algorithms for the dictionary learning problem with sample complexity polynomial in 1/ε. This dependence on 1/ε is necessary for any noise-tolerant algorithm: even if the dictionary has only one column, we need poly(1/ε) samples to estimate that column to accuracy ε in the presence of noise. However, when y is exactly equal to Ax, we can hope to recover the dictionary with better running time and many fewer samples. In particular, [24] recently established that ℓ1-minimization is locally correct for incoherent dictionaries; it therefore seems plausible that, given a very good estimate for A, there is some algorithm that computes a refined estimate of A whose running time and sample complexity have a better dependence on ε.
In this section we analyze the local convergence of an algorithm that is similar to KSVD [4]; see Algorithm 4, IterativeAverage. Recall that A_S denotes the submatrix of A whose columns are indexed by S; also, A_S⁺ is the left pseudo-inverse of the matrix A_S. Hence A_S A_S⁺ is the projection matrix onto the span of the columns of A_S.
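To convey the flavor of such a refinement step, here is a hedged sketch of one simplified alternating update (our own illustration, not the paper's IterativeAverage): decode each sample's support by thresholding correlations with the current estimate, re-fit the coefficients by least squares on the decoded support, and then update the dictionary via a pseudo-inverse.

```python
import numpy as np

def refine_step(A_hat, Y, k, thresh=0.5):
    """One simplified alternating-minimization step.  Assumes A_hat is
    already column-wise close to the truth, so thresholding A_hat^T y
    decodes each sample's support correctly."""
    n, m = A_hat.shape
    X = np.zeros((m, Y.shape[1]))
    for j in range(Y.shape[1]):
        c = A_hat.T @ Y[:, j]
        top = np.argsort(-np.abs(c))[:k]      # k largest correlations
        S = top[np.abs(c[top]) > thresh]      # keep the confident ones
        if S.size:
            # least-squares fit of the sample on its decoded support
            X[S, j] = np.linalg.lstsq(A_hat[:, S], Y[:, j], rcond=None)[0]
    A_new = Y @ np.linalg.pinv(X)             # least-squares dictionary update
    return A_new / np.linalg.norm(A_new, axis=0)

# Toy check: the true dictionary is the identity; perturb it and refine.
rng = np.random.default_rng(0)
n = 6
A_true = np.eye(n)
Y = A_true @ np.eye(n)                        # six 1-sparse samples y_j = e_j
A_hat = A_true + 0.05 * rng.standard_normal((n, n))
A_ref = refine_step(A_hat, Y, k=1)            # snaps back to the identity
```

In this contrived instance a single step recovers the true dictionary exactly; the paper's analysis handles the realistic regime where each step only shrinks the error geometrically.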