Learning Topic Models — Going beyond SVD
Abstract
Topic Modeling is an approach used for automatic comprehension and classification of data in a variety of settings, and perhaps the canonical application is in uncovering thematic structure in a corpus of documents. A number of foundational works both in machine learning [15] and in theory [27] have suggested a probabilistic model for documents, whereby documents arise as a convex combination of (i.e. distribution on) a small number of topic vectors, each topic vector being a distribution on words (i.e. a vector of word-frequencies). Similar models have since been used in a variety of application areas; the Latent Dirichlet Allocation or LDA model of Blei et al. is especially popular.
Theoretical studies of topic modeling focus on learning the model’s parameters assuming the data is actually generated from it. Existing approaches for the most part rely on Singular Value Decomposition (SVD), and consequently have one of two limitations: these works need to either assume that each document contains only one topic, or else can only recover the span of the topic vectors instead of the topic vectors themselves.
This paper formally justifies Nonnegative Matrix Factorization (NMF) as a main tool in this context, which is an analog of SVD where all vectors are nonnegative. Using this tool we give the first polynomial-time algorithm for learning topic models without the above two limitations. The algorithm uses a fairly mild assumption about the underlying topic matrix called separability, which is usually found to hold in real-life data. A compelling feature of our algorithm is that it generalizes to models that incorporate topic-topic correlations, such as the Correlated Topic Model (CTM) and the Pachinko Allocation Model (PAM).
We hope that this paper will motivate further theoretical results that use NMF as a replacement for SVD – just as NMF has come to replace SVD in many applications.
1 Introduction
Developing tools for automatic comprehension and classification of data —web pages, newspaper articles, images, genetic sequences, user ratings — is a holy grail of machine learning. Topic Modeling is an approach that has proved successful in all of the aforementioned settings, though for concreteness here we will focus on uncovering thematic structure of a corpus of documents (see e.g. [4], [6]).
In order to learn structure one has to posit the existence of structure, and in topic models one assumes a generative model for a collection of documents. Specifically, each document is represented as a vector of word-frequencies (the bag of words representation). Seminal papers in theoretical CS (Papadimitriou et al. [27]) and machine learning (Hofmann's Probabilistic Latent Semantic Analysis [15]) suggested that documents arise as a convex combination of (i.e. distribution on) a small number of topic vectors, where each topic vector is a distribution on words (i.e. a vector of word-frequencies). Each convex combination of topics thus is itself a distribution on words, and the document is assumed to be generated by drawing independent samples from it. Subsequent work makes specific choices for the distribution used to generate topic combinations; the well-known Latent Dirichlet Allocation (LDA) model of Blei et al [6] hypothesizes a Dirichlet distribution (see Section 4).
In machine learning, the prevailing approach is to use local search (e.g. [11]) or other heuristics [32] in an attempt to find a maximum likelihood fit to the above model. For example, fitting to a corpus of newspaper articles may reveal topic vectors corresponding to, say, politics, sports, weather, entertainment etc., and a particular article could be explained as a combination of the topics politics, sports, and entertainment. Unfortunately (and not surprisingly), maximum likelihood estimation is hard (see Section 6), and consequently this paradigm seems to require unproven heuristics, even though these have well-known limitations (e.g., getting stuck in local minima [11, 28]).
The work of Papadimitriou et al [27] (which also formalized the topic modeling problem) and a long line of subsequent work have attempted to give provable guarantees for the problem of learning the model parameters assuming the data is actually generated from it. This is in contrast to a maximum likelihood approach, which asks to find the closest-fit model for arbitrary data. The principal algorithmic problem is the following (see Section 1.1 for more details):
Meta Problem in Topic Modeling: There is an unknown topic matrix $A$ with nonnegative entries that is dimension $n \times r$, and a stochastically generated unknown matrix $W$ that is dimension $r \times m$. Each column of $M = AW$ is viewed as a probability distribution on rows, and for each column we are given $N$ i.i.d. samples from the associated distribution.
Goal: Reconstruct $A$ and the parameters of the generating distribution for $W$.
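For concreteness, the following is a minimal sketch (our own illustration, not from the paper) of this generative process in Python, using a Dirichlet prior for the columns of $W$ as in LDA; the dimensions, prior parameters, and use of numpy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m, N = 5000, 20, 1000, 100   # dictionary size, topics, documents, words per document (illustrative)

# Unknown n x r topic matrix A: each column is a distribution over the n words.
A = rng.dirichlet(np.full(n, 0.05), size=r).T          # shape (n, r)

# Unknown r x m matrix W: each column is a distribution over topics (here Dirichlet, as in LDA).
W = rng.dirichlet(np.full(r, 0.1), size=m).T            # shape (r, m)

M = A @ W                                                # each column is a distribution over words

# Observed data: N i.i.d. word samples from each column of M (word counts per document).
docs = np.stack([rng.multinomial(N, M[:, d] / M[:, d].sum()) for d in range(m)], axis=1)
M_tilde = docs / N                                       # empirical word-frequency matrix, a noisy proxy for M
```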
The challenging aspect of this problem is that we wish to recover nonnegative matrices with small inner-dimension $r$. The general problem of finding nonnegative factors of specified dimensions when given the matrix $M$ (or a close approximation) is called the Nonnegative Matrix Factorization (NMF) problem (see [22], and [2] for a longer history) and it is NP-hard [31]. Lacking a tool to solve such problems, theoretical work has generally relied on the Singular Value Decomposition (SVD), which given the matrix $M$ will instead find factors with both positive and negative entries. SVD can be used as a tool for clustering, in which case one needs to assume that each document has only one topic. In Papadimitriou et al [27] this is called the pure documents case and is solved under strong assumptions about the topic matrix (see also [26] and [1] which uses the method of moments instead). Alternatively, other papers use SVD to recover the span of the columns of $A$ (i.e. the topic vectors) [3], [20], [19], which suffices for some applications such as computing the inner product of two document vectors (in the space spanned by the topics) as a measure of their similarity.
These limitations of existing approaches —either restricting to one topic per document, or else learning only the span of the topics instead of the topics themselves—are quite serious. In practice documents are much more faithfully described as a distribution on topics and indeed for a wide range of applications one needs the actual topics and not just their span – such as when browsing a collection of documents without a particular query phrase in mind, or tracking how topics evolve over time (see [4] for a survey of various applications). Here we consider what we believe to be a much weaker assumption – separability. Indeed, this property has already been identified as a natural one in the machine learning community [12] and has been empirically observed to hold in topic matrices fitted to various types of data [5].
Separability requires that each topic has some near-perfect indicator word (a word that we call the anchor word for this topic) that appears with reasonable probability in that topic but with negligible probability in all other topics (e.g., "soccer" could be an anchor word for the topic "sports"). We give a formal definition in Section 1.1. This property is particularly natural in the context of topic modeling, where the number of distinct words (dictionary size) is very large compared to the number of topics. In a typical application, it is common to have a dictionary size in the thousands or tens of thousands, while the number of topics is far smaller, usually at most a few hundred. Note that separability does not mean that the anchor word always occurs (in fact, a typical document may be very likely to contain no anchor words). Instead, it dictates that when an anchor word does occur, it is a strong indicator that the corresponding topic is in the mixture used to generate the document.
Recently, we gave a polynomial time algorithm to solve NMF under the condition that the topic matrix is separable [2]. The intuition that underlies this algorithm is that the set of anchor words can be thought of as extreme points (in a geometric sense) of the dictionary. This condition can be used to identify all of the anchor words and then also the nonnegative factors. Ideas from this algorithm are a key ingredient in our present paper, but our focus is on the question:
Question.
What if we are not given the true matrix $M$, but are instead given a few samples (say, $N$ samples) from the distribution represented by each of its columns?
The main technical challenge in adapting our earlier NMF algorithm is that each document vector is a very poor approximation to the corresponding column of $M$: it is too noisy in any reasonable measure of noise. Nevertheless, the core insights of our NMF algorithm still apply. Note that it is impossible to learn the matrix $W$ to within arbitrary accuracy. (Indeed, this is information theoretically impossible even if we knew the topic matrix $A$ and the distribution from which the columns of $W$ are generated.) So we cannot in general give an estimator that converges to the true matrix $W$, and yet we can give an estimator that converges to the true topic matrix $A$! (For an overview of our algorithm, see the first paragraph of Section 3.)
We hope that this application of our NMF algorithm is just a starting point and other theoretical results will start using NMF as a replacement for SVD – just as NMF has come to replace SVD in several applied settings.
1.1 Our Results
Now we precisely define the topic modeling (learning) problem which was informally introduced above. There is an unknown topic matrix $A$ which is dimension $n \times r$ (i.e. $n$ is the dictionary size), and each column of $A$ is a distribution on $[n]$. There is an unknown $r \times m$ matrix $W$, each of whose columns is itself a distribution (aka convex combination) on $[r]$. The columns of $W$ are i.i.d. samples from a distribution $\mathcal{T}$ which belongs to a known family, e.g., Dirichlet distributions, but whose parameters are unknown. Thus each column of $M = AW$ (being a convex combination of distributions) is itself a distribution on $[n]$, and the algorithm's input consists of $N$ i.i.d. samples from each column of $M$. Here $N$ is the document size and is assumed to be a constant for simplicity. Our algorithm can be easily adapted to work when the documents have different sizes.
The algorithm's running time will necessarily depend upon various model parameters, since distinguishing a very small parameter from zero imposes a lower bound on the number of samples needed. The first such parameter is a quantitative version of separability, which was presented above as a natural assumption in the context of topic modeling.
Definition 1.1 (Separable Topic Matrix).
An $n \times r$ matrix $A$ is $p$-separable if for each $i$ there is some row of $A$ that has a single nonzero entry, which is in the $i$th column, and that entry is at least $p$.
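As an illustration only (not one of the paper's algorithms), a direct check of $p$-separability for a candidate topic matrix might look as follows; the function name and the numerical tolerance are our own assumptions.

```python
import numpy as np

def is_p_separable(A, p, tol=1e-12):
    """Check Definition 1.1: every topic i must have an anchor row whose only
    (numerically) nonzero entry lies in column i and has value at least p."""
    n, r = A.shape
    has_anchor = np.zeros(r, dtype=bool)
    for w in range(n):
        row = A[w]
        nz = np.flatnonzero(row > tol)
        if len(nz) == 1 and row[nz[0]] >= p:   # single nonzero entry, and large enough
            has_anchor[nz[0]] = True
    return bool(has_anchor.all())
```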
The next parameter measures the lowest probability with which a topic occurs in the distribution that generates the columns of $W$.
Definition 1.2 (Topic Imbalance).
The topic imbalance $a$ of the model is the ratio between the largest and smallest expected entries in a column of $W$; in other words, $a = \max_{i,j} \mathbb{E}_{W\sim\mathcal{T}}[W_i]/\mathbb{E}_{W\sim\mathcal{T}}[W_j]$, where $W$ is a random weighting of topics chosen from the distribution $\mathcal{T}$.
Finally, we require that topics stay identifiable despite sampling-induced noise. To formalize this, we define a matrix $R$ that will be important throughout this paper:
Definition 1.3 (Topic-Topic Covariance Matrix $R$).
If $\mathcal{T}$ is the distribution that generates the columns of $W$, then $R$ is defined as the $r \times r$ matrix whose $(i,j)$th entry is $\mathbb{E}_{W\sim\mathcal{T}}[W_iW_j]$, where $W$ is a vector chosen from $\mathcal{T}$.
Let $\gamma$ be a lower bound on the $\ell_1$ condition number of the matrix $R$. This notion is defined in Section 2, but for an $r \times r$ matrix it is within a $\mathrm{poly}(r)$ factor of the smallest singular value. Our algorithm will work for any $\gamma > 0$, but the number of documents we require will depend (polynomially) on $1/\gamma$:
Theorem 1.4 (Main).
There is a polynomial time algorithm that learns the parameters of a topic model provided the number of documents $m$ is at least a fixed polynomial in $a$, $r$, $1/p$, $1/\gamma$, $1/\epsilon$ and $\log n$, where $p$, $a$ and $\gamma$ are as defined above. The algorithm learns the topic-term matrix $A$ up to additive error $\epsilon$. Moreover, when the number of documents is larger still (by a further polynomial factor), the algorithm can also learn the topic-topic covariance matrix $R$ up to additive error $\epsilon$.
As noted earlier, we are able to recover the topic matrix $A$ even though we do not always recover the parameters of the column distribution $\mathcal{T}$. In some special cases we can also recover the parameters of $\mathcal{T}$, e.g. when this distribution is Dirichlet, as happens in the popular Latent Dirichlet Allocation (LDA) model [6, 4]. In Section 4.1 we compute a lower bound on the parameter $\gamma$ for the Dirichlet distribution, which allows us to apply our main learning algorithm, and the parameters of the Dirichlet can then be recovered from the covariance matrix $R$ (see Section 4.2).
Recently the basic LDA model has been refined to allow correlations among different topics, which is more realistic. See for example the Correlated Topic Model (CTM) [7] and the Pachinko Allocation Model (PAM) [24]. A compelling aspect of our algorithm is that it extends to these models as well: we can learn the topic matrix, even though we cannot always identify $\mathcal{T}$. (Indeed, the distribution $\mathcal{T}$ in the Pachinko Allocation Model is not even identifiable: two different sets of parameters can generate exactly the same distribution.)
Comparison with existing approaches.
(i) We rely crucially on separability. But note that this assumption is in some sense weaker than the assumptions in all prior works that provably learn the topic matrix. They assume a single topic per document, which can be seen as a strong separability assumption about $W$ instead of $A$: in every column of $W$ only one entry is nonzero. By contrast, separability only assumes a similar condition for a negligible fraction (namely, $r$ out of $n$) of the rows of $A$. Besides, separability is found to actually hold in topic matrices produced by current heuristics. (ii) Needless to say, existing theoretical approaches for recovering the topic matrix cannot handle topic correlations at all, since they only allow one topic per document. (iii) We remark that prior approaches that learn the span of $A$ instead of $A$ itself needed strong concentration bounds on eigenvalues of random matrices, and thus require substantial document sizes (on the order of the number of words in the dictionary!). By contrast, we can work with documents of constant size.
2 Tools for (Noisy) Nonnegative Matrix Factorization
2.1 Various Condition Numbers
Central to our arguments will be various notions of matrices being "far" from being low-rank. The most interesting one for our purposes was introduced by Kleinberg and Sandler [19] in the context of collaborative filtering, and can be thought of as an analogue of the smallest singular value of a matrix.
Definition 2.1 ($\ell_1$ Condition Number).
If a matrix $B$ has nonnegative entries and all rows sum to $1$, then its $\ell_1$ condition number $\Gamma(B)$ is defined as:
$$\Gamma(B) = \min_{\|x\|_1 = 1} \|x^\top B\|_1.$$
If $B$ does not have row sums of one, then $\Gamma(B)$ is defined to equal $\Gamma(DB)$, where $D$ is the diagonal matrix such that $DB$ has row sums of one.
For example, if the rows of $B$ have disjoint support then $\Gamma(B) = 1$, and in general the quantity can be thought of as a measure of how close two distributions on disjoint sets of rows can be. Note that, if $B$ has $k$ rows and $x$ is a $k$-dimensional real vector, then $\|x\|_2 \le \|x\|_1 \le \sqrt{k}\,\|x\|_2$, and hence (if $\sigma_{\min}(B)$ is the smallest singular value of $B$) we have:
$$\Gamma(B) \ge \sigma_{\min}(B)/\sqrt{k}.$$
The above notion of condition number will be most relevant in the context of the topic-topic covariance matrix $R$. We shall always use $\gamma$ to denote the $\ell_1$ condition number of $R$. The condition number is approximately preserved even when we estimate the topic-topic covariance matrix from random samples:
Lemma 2.2.
When the number of documents $m$ is large enough, with high probability the empirical matrix $\tilde{R} = \frac{1}{m}WW^\top$ is entrywise close to $R$, say with error at most $\epsilon_R$. Further, when $\epsilon_R$ is sufficiently small compared to $\gamma/(ar)$, where $a$ is the topic imbalance, the matrix $\tilde{R}$ has $\ell_1$ condition number at least $\gamma/2$.
Proof.
Since $\tilde{R}$ is the average of $m$ independent samples $W_dW_d^\top$, the first part follows from a Chernoff bound and a union bound over the entries. For the second part, $R$ has $\ell_1$ condition number at least $\gamma$, and an entrywise perturbation of size $\epsilon_R$ can change $\|x^\top\tilde{R}\|_1$ for any unit $\ell_1$-norm $x$ by at most $r\epsilon_R$. The extra factor of $a$ comes from the normalization needed to make the rows of $\tilde{R}$ sum up to 1. ∎
In our previous work on nonnegative matrix factorization [2] we defined a different measure of "distance" from being singular, which is essential to the polynomial time algorithm for NMF:
Definition 2.3 ($\beta$-Robustly Simplicial).
If each column of a matrix $B$ has unit $\ell_1$ norm, then we say it is $\beta$-robustly simplicial if no column of $B$ has $\ell_1$ distance smaller than $\beta$ to the convex hull of the remaining columns of $B$.
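The $\ell_1$ distance from a column to the convex hull of the other columns can be computed by a small linear program. The following sketch (our own illustration, using scipy; the function name is an assumption) simply evaluates the definition and returns the largest $\beta$ for which a given matrix is $\beta$-robustly simplicial.

```python
import numpy as np
from scipy.optimize import linprog

def robust_simpliciality(B):
    """Return min over columns of the l1 distance from that column to the
    convex hull of the remaining columns of B (columns assumed l1-normalized)."""
    d, k = B.shape
    dists = []
    for i in range(k):
        target = B[:, i]
        others = np.delete(B, i, axis=1)            # d x (k-1)
        # variables: [c (k-1 convex weights), t (d slacks bounding |target - others @ c|)]
        obj = np.concatenate([np.zeros(k - 1), np.ones(d)])
        A_ub = np.block([[ others, -np.eye(d)],     #  others @ c - t <=  target
                         [-others, -np.eye(d)]])    # -others @ c - t <= -target
        b_ub = np.concatenate([target, -target])
        A_eq = np.concatenate([np.ones(k - 1), np.zeros(d)])[None, :]
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (k - 1 + d))
        dists.append(res.fun)                       # optimal value = l1 distance to the hull
    return min(dists)
```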
The following claim clarifies the interrelationships of these latter condition numbers.
Claim 2.4.
(i) If $A$ is $p$-separable then $A^\top$ has $\ell_1$ condition number at least $p$. (ii) If $B$ has all row sums equal to $1$ and $\ell_1$ condition number at least $\beta$, then $B^\top$ is $\beta$-robustly simplicial.
We shall see that the $\ell_1$ condition number of a product of matrices is at least the product of their condition numbers. The main application of this composition is to show that the matrix $RA^\top$ (or its empirical version $\tilde{R}A^\top$) is robustly simplicial. The following lemma will play a crucial role in analyzing our main algorithm:
Lemma 2.5 (Composition Lemma).
If $B$ and $C$ are matrices (with rows summing to 1) with $\ell_1$ condition numbers $\gamma_1$ and $\gamma_2$, then $\Gamma(BC)$ is at least $\gamma_1\gamma_2$. Specifically, when $A$ is $p$-separable the matrix $RA^\top$ is at least $p\gamma$-robustly simplicial.
2.2 Noisy Nonnegative Matrix Factorization under Separability
A key ingredient is an approximate NMF algorithm from [2], which can recover an approximate nonnegative matrix factorization when the $\ell_1$ distance between each row of the given matrix and the corresponding row of $M$ is small. We emphasize that this by itself is not enough for our purposes, since the term-by-document matrix $\tilde{M}$ will have a substantial amount of noise (when compared to its expectation), precisely because the number of words $N$ in a document is much smaller than the dictionary size $n$. Rather, we will apply the following algorithm (and an improvement that we give in Section 5) to the word-by-word Gram matrix $Q$ formed in Section 3.
Theorem 2.6 (Robust NMF Algorithm [2]).
Suppose $M = AB$ where $A$ and $B$ are normalized to have rows summing to 1, $A$ is $p$-separable and $B$ is $\beta$-robustly simplicial. There is a polynomial time algorithm that, given a matrix $\tilde{M}$ whose rows are each within $\ell_1$ distance $\epsilon$ of the corresponding rows of $M$ (for $\epsilon$ sufficiently small compared to $p$ and $\beta$), finds a matrix $\tilde{B}$ whose rows are close in $\ell_1$ norm to the corresponding rows of $B$. Further, every row of $\tilde{B}$ is a row of $\tilde{M}$. The corresponding row of $M$ can be represented as $(1-\delta)B^i + \delta v$ for some small $\delta$ (going to zero with $\epsilon$), where $v$ is a vector in the convex hull of the other rows of $B$ with unit $\ell_1$ norm.
In this paper we need a slightly different goal than in [2]. Our goal is not to recover estimates of the rows of $B$ that are close in $\ell_1$ norm, but rather to recover almost anchor words (words whose row in $A$ has almost all of its weight on a single coordinate). Hence, we will be able to achieve better bounds by treating this problem directly, and we give a substitute for the above theorem. We defer the proof to Section 5.
Theorem 2.7.
Suppose $M = AB$ where $A$ and $B$ are normalized to have rows summing to 1, $A$ is $p$-separable and $B$ is $\beta$-robustly simplicial. When $\epsilon$ is sufficiently small compared to $p$ and $\beta$, there is a polynomial time algorithm that, given $\tilde{M}$ whose rows are each within $\ell_1$ distance $\epsilon$ of the corresponding rows of $M$, finds $r$ rows (almost anchor words) of $\tilde{M}$. The $i$th almost anchor word corresponds to a row of $M$ that can be represented as $(1-\delta)B^i + \delta v$ for some small $\delta$ (going to zero with $\epsilon$), where $v$ is a vector in the convex hull of the other rows of $B$ with unit $\ell_1$ norm.
3 Algorithm for Learning a Topic Model: Proof of Theorem 1.4
First it is important to understand why separability helps in nonnegative matrix factorization, and specifically, the exact role played by the anchor words. Suppose the NMF algorithm is given a matrix $M = AW$. If $A$ is separable, then $A$ contains a diagonal matrix as a submatrix (up to row permutations), namely the rows indexed by the anchor words. Thus a scaled copy of each row of $W$ is present as a row in $M$. In fact, if we knew the anchor words of $A$, then by looking at the corresponding rows of $M$ we could "read off" the corresponding rows of $W$ (up to scaling), and use these in turn to recover all of $A$. Thus the anchor words constitute the "key" that "unlocks" the factorization, and indeed the main step of our earlier NMF algorithm was a geometric procedure to identify the anchor words. When one is given a noisy version of $M$, the analogous notion is "almost anchor" words, which correspond to rows of $M$ that are "very close" to (scaled) rows of $W$; see Theorem 2.7.
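To make the geometric picture concrete, here is a simplified stand-in for anchor finding (it is not the LP-based procedure of [2]; it is the successive-projection heuristic, and the function name and normalization are our own). It greedily picks rows that are farthest from the span of the rows chosen so far; in the noiseless separable case the extreme rows it selects are the anchor rows.

```python
import numpy as np

def greedy_anchor_rows(Q, r):
    """Pick r rows of Q that are (approximately) the extreme points of the set of
    normalized rows, by repeatedly choosing the row farthest from the span of the
    rows chosen so far (successive projection)."""
    X = Q / np.maximum(Q.sum(axis=1, keepdims=True), 1e-12)   # normalize rows to sum to 1
    anchors = []
    residual = X.copy()
    for _ in range(r):
        norms = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(norms))                       # farthest remaining row is a vertex
        anchors.append(i)
        u = residual[i] / np.linalg.norm(residual[i])
        residual = residual - np.outer(residual @ u, u) # project out the chosen direction
    return anchors
```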
Now we sketch how to apply these insights to learning topic models. Let $\tilde{M}$ denote the provided term-by-document matrix, each of whose columns describes the empirical word frequencies in one document. It is obtained from sampling and thus is an extremely noisy approximation to $M = AW$. Our algorithm starts by forming the Gram matrix $\tilde{M}\tilde{M}^\top$, which can be thought of as an empirical word-word covariance matrix. In fact, as the number of documents increases, this matrix (suitably normalized) tends to a limit, namely $ARA^\top$. (See Lemma 3.7.) Imagine that we are given the exact matrix $ARA^\top$ instead of a noisy approximation. Notice that it is a product of three nonnegative matrices, the first of which ($A$) is separable and the last is the transpose of the first. NMF at first sight seems too weak to help find such factorizations. However, if we think of $ARA^\top$ as a product of two nonnegative matrices, $A$ and $RA^\top$, then our NMF algorithm [2] can at least identify the anchor words of $A$. As noted above, these suffice to recover $RA^\top$, and then (using the anchor words of $A$ again) all of $A$ as well. See Section 3.1 for details.
Of course, we are not given $ARA^\top$ but merely a good approximation to it. Now our NMF algorithm allows us to recover "almost anchor" words of $A$, and the crux of the proof is Section 3.2, showing that these suffice to recover provably good estimates of $A$ and $R$. This uses (mostly) bounds from matrix perturbation theory, and the interrelationships of condition numbers mentioned in Section 2.
For simplicity we assume the following condition on the topic model, which we will see in Section 3.5 can be assumed without loss of generality:
(*) The number of distinct words, $n$, is at most a fixed polynomial in the other model parameters ($r$, $a$, $1/p$, $1/\gamma$, $1/\epsilon$).
Please see Algorithm 1 (Main Algorithm) for a description of the algorithm. Note that $Q$ is our shorthand for $\frac{1}{m}\tilde{M}_1\tilde{M}_2^\top$, which as noted converges to $ARA^\top$ as the number of documents increases.
Algorithm 1 (Main Algorithm):

1. Query the oracle for $m$ documents, where $m$ is as specified in Theorem 1.4.

2. Split the words of each document into two halves, and let $\tilde{M}_1$, $\tilde{M}_2$ be the term-by-document matrices formed from the first and second halves of the words respectively.

3. Compute the word-by-word matrix $Q = \frac{1}{m}\tilde{M}_1\tilde{M}_2^\top$.

4. Apply the "Robust NMF" algorithm of Theorem 2.7 to $Q$, which returns $r$ words that are "almost" the anchor words of $A$.

5. Use these words as input to Recover with Almost Anchor Words (Algorithm 3) to compute $\tilde{A}$ and $\tilde{R}$.
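As an illustration of Steps 2 and 3 only (a sketch of our own, not code from the paper), the following forms the matrix $Q$ from raw documents; the function name, the equal-split convention, and the use of numpy are assumptions, and the robust NMF and recovery steps are not shown.

```python
import numpy as np

def form_Q(word_lists, n):
    """word_lists: list of m documents, each a list of word ids in [0, n).
    Splits each document in half and returns Q = (1/m) * M1 @ M2.T, the
    empirical word-by-word matrix that converges to A R A^T."""
    m = len(word_lists)
    M1 = np.zeros((n, m))
    M2 = np.zeros((n, m))
    for d, words in enumerate(word_lists):
        half = len(words) // 2
        first, second = words[:half], words[half:]
        M1[:, d] = np.bincount(first, minlength=n) / max(len(first), 1)    # empirical frequencies, first half
        M2[:, d] = np.bincount(second, minlength=n) / max(len(second), 1)  # empirical frequencies, second half
    return (M1 @ M2.T) / m
```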
3.1 Recover $A$ and $R$ with Anchor Words
We first describe how the recovery procedure works in an "idealized" setting (Algorithm 2, Recover with True Anchor Words), when we are given the exact value of $Q = ARA^\top$ and a set of anchor words, one for each topic. We can permute the rows of $A$ so that the anchor words are exactly the first $r$ words. Therefore the first $r$ rows of $A$ form a matrix $D$, where $D$ is diagonal. Note that $D$ is not necessarily the identity matrix (nor even a scaled copy of the identity matrix), but we do know that its diagonal entries are at least $p$. We apply the same permutation to the rows and columns of $Q$. As shown in Figure 1, if we look at the submatrix formed by the first $r$ rows and columns, it is exactly $DRD$. Similarly, the submatrix consisting of the first $r$ rows is exactly $DRA^\top$. We can use these two matrices to compute $A$ and $R$ in this idealized setting (and we will use the same basic strategy in the general case, but must be more careful about how errors compound in our algorithm).
Our algorithm thus has exact knowledge of the matrices $DRD$ and $DRA^\top$, and so the main task is to recover the diagonal matrix $D$. Given $D$, we can then compute $A$ and $R$ (for Latent Dirichlet Allocation we can also compute its parameters, i.e. the vector $\alpha$ such that the columns of $W$ are distributed as $\mathrm{Dir}(\alpha)$). The key idea of this algorithm is that the row sums of $RA^\top$ and $R$ are the same (because the columns of $A$ each sum to one), and we can use the row sums of $DRA^\top$ to set up a system of linear constraints on the diagonal entries of $D$.
Lemma 3.1.
When the matrix $Q$ is exactly equal to $ARA^\top$ and we know the set of anchor words, Recover with True Anchor Words outputs $A$ and $R$ correctly.
Proof.
The lemma is straightforward from Figure 1 and the procedure. By Figure 1 we can read off the exact values of $DRD$ and $DRA^\top$ from the matrix $Q$. Step 2 of Recover computes $DR\vec{1}$ by computing the row sums of $DRA^\top$. The two vectors are equal because $A$ is the topic-term matrix and its columns sum up to 1; in particular $A^\top\vec{1} = \vec{1}$.
In Step 3, since $R$ is invertible by Lemma 2.2 and $D$ is a diagonal matrix with entries at least $p$, the matrix $DRD$ is also invertible. Therefore the linear system $(DRD)z = DR\vec{1}$ has a unique solution $z$. Also $(DRD)D^{-1}\vec{1} = DR\vec{1}$, and hence $z = D^{-1}\vec{1}$. Finally, using the fact that $\mathrm{Diag}(z)^{-1} = D$, the output in Step 4 is just $D(DRD)^{-1}(DRA^\top) = A^\top$, and the output in Step 5 is equal to $\mathrm{Diag}(z)(DRD)\mathrm{Diag}(z) = R$. ∎
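Assuming exact access to $Q = ARA^\top$ and known anchor rows, the idealized recovery described above can be written in a few lines. This is our reconstruction of the idea behind Algorithm 2 (the paper's algorithm box is not reproduced here), using the row-sum identity $RA^\top\vec{1} = R\vec{1}$; the function name is an assumption.

```python
import numpy as np

def recover_with_true_anchors(Q, anchors):
    """Idealized recovery: Q = A R A^T exactly, `anchors` lists one anchor word per topic.
    Then Q[anchors][:, anchors] = D R D and Q[anchors] = D R A^T for a diagonal D."""
    DRD = Q[np.ix_(anchors, anchors)]        # r x r block on anchor rows/columns
    DRAt = Q[anchors, :]                     # r x n block on anchor rows
    DR1 = DRAt.sum(axis=1)                   # row sums of D R A^T equal D R 1 (columns of A sum to 1)
    z = np.linalg.solve(DRD, DR1)            # z = D^{-1} 1, so D = diag(1 / z)
    D = np.diag(1.0 / z)
    R = np.diag(z) @ DRD @ np.diag(z)        # R = D^{-1} (D R D) D^{-1}
    A = (D @ np.linalg.solve(DRD, DRAt)).T   # A^T = D (D R D)^{-1} (D R A^T)
    return A, R
```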
3.2 Recover $A$ and $R$ with Almost Anchor Words
What if we are not given the exact anchor words, but are given words that are "close" to anchor words? As we noted, in general we cannot hope to recover the true anchor words, but even a good approximation will be enough to recover $A$ and $R$.
When we restrict $A$ to the rows corresponding to "almost" anchor words, the resulting $r \times r$ submatrix will not be diagonal. However, it will be close to diagonal in the sense that it can be written as a diagonal matrix $D$ multiplied by a matrix $E'$, where $E'$ is close to the identity matrix (and the diagonal entries of $D$ remain bounded below in terms of $p$). Here we analyze the same procedure as above and show that it still recovers $A$ and $R$ (approximately) even when given "almost" anchor words instead of true anchor words. For clarity we state the procedure again in Algorithm 3: Recover with Almost Anchor Words. The guarantees at each step are different than before, but the implementation of the procedure is the same. Notice that here we permute the rows of $A$ (and hence the rows and columns of $Q$) so that the "almost" anchor words returned by Theorem 2.7 appear first, and the submatrix of $A$ on these rows is equal to $DE'$.
Here, we still assume that the matrix $Q$ is exactly equal to $ARA^\top$, and hence the first $r$ rows of $Q$ form the submatrix $DE'RA^\top$ and the first $r$ rows and columns form $DE'RE'^\top D$. The complication is that this is not necessarily equal to $DRD$, since the matrix $E'$ is not necessarily the identity. However, we can show that the output is "close" to that of the idealized procedure if $E'$ is suitably close to the identity matrix; i.e. given good enough proxies for the anchor words, we can bound the error of the above recovery procedure. We write $E' = I + E$. Intuitively, when $E$ has only small entries, $E'^{-1}$ should behave like the identity matrix; in particular, it should have only small off-diagonal entries. We make this precise through the following lemmas:
Lemma 3.2.
Let $E' = I + E$, where the $\ell_1$ norm of $E$ viewed as a vector is at most $\epsilon < 1/2$, and let $z = (E'^\top)^{-1}\vec{1}$. Then $z$ is a vector with entries in the range $[1-2\epsilon,\ 1+2\epsilon]$.
Proof.
$E'$ is clearly invertible because the spectral norm of $E$ is less than 1. Since $E' = I + E$, multiplying $E'^\top z = \vec{1}$ on both sides gives $z = \vec{1} - E^\top z$. Let $\delta$ be the largest absolute value of any entry of $z - \vec{1}$ ($\delta = \|z - \vec{1}\|_\infty$). Considering the entry where $\delta$ is achieved, we know $\delta \le \|E^\top z\|_\infty \le \epsilon\,\|z\|_\infty \le \epsilon(1+\delta)$. Thus $\delta \le \epsilon/(1-\epsilon) \le 2\epsilon$. Hence all the entries of $z$ are in the range $[1-2\epsilon, 1+2\epsilon]$, as desired. ∎
Lemma 3.3.
Let $E' = I + E$, where the $\ell_1$ norm of $E$ viewed as a vector is at most $\epsilon < 1/2$. Then the columns of $E'^{-1} - I$ have $\ell_1$ norm at most $2\epsilon$.
Proof.
Without loss of generality, we can consider just the first column of $E'^{-1} - I$, which is equal to $E'^{-1}e_1 - e_1$, where $e_1$ is the indicator vector that is one on the first coordinate and zero elsewhere.
The approach is similar to that in Lemma 3.2. Let $y = E'^{-1}e_1$. Left multiplying by $E' = I + E$ we obtain $e_1 = y + Ey$, and hence $y - e_1 = -Ey$. Therefore $\|y - e_1\|_1 = \|Ey\|_1 \le \epsilon\,\|y\|_1 \le \epsilon(1 + \|y - e_1\|_1)$, which gives $\|y - e_1\|_1 \le \epsilon/(1-\epsilon) \le 2\epsilon$. ∎
Now we are ready to show that the procedure Recover with Almost Anchor Words succeeds when given "almost" anchor words:
Lemma 3.4.
When the matrix $Q$ is exactly equal to $ARA^\top$, and the matrix $A$ restricted to the almost anchor words is $D(I+E)$ where $E$ has $\ell_1$ norm at most $\epsilon$ when viewed as a vector, the procedure Recover with Almost Anchor Words outputs $\tilde{A}$ such that each column of $\tilde{A}$ has $\ell_1$ error at most $O(\epsilon)$. The matrix $\tilde{R}$ has additive error whose $\ell_1$ norm, when viewed as a vector, is at most $O(r\epsilon)$.
Proof.
Since $Q$ is exactly $ARA^\top$, our algorithm is given $DE'RA^\top$ and $DE'RE'^\top D$ with no error. In Step 3, since $D$, $E'$ and $R$ are all invertible, the solution of the linear system is
$$z = (DE'RE'^\top D)^{-1}DE'R\vec{1} = D^{-1}(E'^\top)^{-1}\vec{1}.$$
Ideally we would want $z = D^{-1}\vec{1}$, and indeed $z = D^{-1}(E'^\top)^{-1}\vec{1}$. From Lemma 3.2, the vector $(E'^\top)^{-1}\vec{1}$ has entries in the range $[1-2\epsilon, 1+2\epsilon]$, thus each entry of $z$ is within a $(1\pm 2\epsilon)$ multiplicative factor of the corresponding entry of $D^{-1}\vec{1}$.
Consider the output in Step 4. Since $D$, $E'$, $R$ are invertible, the first output is
$$\mathrm{Diag}(z)^{-1}(DE'RE'^\top D)^{-1}(DE'RA^\top) = \mathrm{Diag}(z)^{-1}D^{-1}(E'^\top)^{-1}A^\top.$$
Our goal is to bound the error of the columns of the output (an estimate of $A$) compared to the corresponding columns of $A$. Notice that it is sufficient to show that each row of $\mathrm{Diag}(z)^{-1}D^{-1}(E'^\top)^{-1}$ is close (in $\ell_1$ distance) to the corresponding indicator vector.
Claim 3.5.
For each $i$, the $i$th row of $\mathrm{Diag}(z)^{-1}D^{-1}(E'^\top)^{-1}$ has $\ell_1$ distance at most $6\epsilon$ to the indicator vector $e_i^\top$ (for $\epsilon \le 1/6$).
Proof.
Again, without loss of generality we can consider just the first row, which equals $\frac{1}{z_1D_{11}}(E'^{-1}e_1)^\top$. From Lemma 3.3, $E'^{-1}e_1$ has $\ell_1$ distance at most $2\epsilon$ to $e_1$, and by Lemma 3.2 the scalar $z_1D_{11} = ((E'^\top)^{-1}\vec{1})_1$ lies in the range $[1-2\epsilon, 1+2\epsilon]$. And so
$$\Big\|\tfrac{1}{z_1D_{11}}E'^{-1}e_1 - e_1\Big\|_1 \le \Big|\tfrac{1}{z_1D_{11}} - 1\Big|\cdot\|E'^{-1}e_1\|_1 + \|E'^{-1}e_1 - e_1\|_1.$$
The last term can be bounded by $2\epsilon$. The first term on the right hand side is at most $\frac{2\epsilon}{1-2\epsilon}(1+2\epsilon) \le 4\epsilon$ for $\epsilon \le 1/6$, and this implies the claim. ∎
The first row of the output $\mathrm{Diag}(z)^{-1}D^{-1}(E'^\top)^{-1}A^\top$ is therefore $(e_1 + u)^\top A^\top$, where $u$ is a vector with $\ell_1$ norm at most $6\epsilon$. So every column of $A$ is recovered with $\ell_1$ error at most $6\epsilon$.
Consider the second output of the algorithm. The output is $\mathrm{Diag}(z)(DE'RE'^\top D)\mathrm{Diag}(z)$, and we can write $E' = I + E$ and $\mathrm{Diag}(z) = D^{-1}(I+\Delta)$, where $\Delta$ is diagonal with entries of magnitude at most $2\epsilon$. The leading error terms are $ER$, $RE^\top$ and $\Delta R + R\Delta$, and hence the $\ell_1$ norm of the leading error (when treated as a vector) is at most $O(r\epsilon)$; the other terms are of order $\epsilon^2$ and can safely be absorbed into the same bound for suitably small $\epsilon$. ∎
Finally we consider the general case (in which there is additive noise in Step 1): we are not given $ARA^\top$ exactly. We are given $Q$, which is close to $ARA^\top$ (by Lemma 3.7). We will bound the accumulation of this last type of error. Suppose in Step 1 of Recover we obtain $DE'RA^\top + Z_1$ and $DE'RE'^\top D + Z_2$, where the entries of $Z_1$ and $Z_2$ have absolute value at most $\epsilon_Q$, and, as before, the matrix $E$ has $\ell_1$ norm at most $\epsilon$ when viewed as a vector.
Lemma 3.6.
If $\epsilon$ and $\epsilon_Q$ are sufficiently small, Recover outputs $\tilde{A}$ such that each entry of $\tilde{A}$ has additive error bounded by a quantity that is linear in $\epsilon$ and $\epsilon_Q$ (with coefficients polynomial in $a$, $r$, $1/p$ and $1/\gamma$). Also the matrix $\tilde{R}$ has additive error whose $\ell_1$ norm, when viewed as a vector, is bounded similarly.
The main idea of the proof is to write the noise $Z_1$ as $DE'R\,\Delta_1$ for an appropriate matrix $\Delta_1$. In this way the error can be translated into an error on $A^\top$, and Lemma 3.4 can be applied. The error $Z_2$ can be handled similarly.
Proof.
We shall follow the proof of Lemma 3.4. First we can express the error term $Z_1$ instead as $DE'R\,\Delta_1$, i.e. $\Delta_1 = R^{-1}E'^{-1}D^{-1}Z_1$. This is always possible because all of $D$, $E'$, $R$ are invertible. Moreover, the $\ell_1$ norm of $\Delta_1$ when viewed as a vector exceeds that of $Z_1$ by a factor of at most $1/p$ when multiplied by $D^{-1}$, a factor of at most 2 when multiplied by $E'^{-1}$, and a factor of at most $ar/\gamma$ when multiplied by $R^{-1}$. The factor $1/\gamma$ comes from Lemma 2.2; we lose an extra factor because $R$ may not have rows summing up to 1.
Hence $DE'RA^\top + Z_1 = DE'R(A^\top + \Delta_1)$, and the additive error on $DE'RA^\top$ has been transformed into an error on $A^\top$, to which the analysis of Lemma 3.4 applies.
Similarly, we can express the error term $Z_2$ as $DE'R\,\Delta_2\,E'^\top D$, and the entries of $\Delta_2$ are correspondingly bounded. The right-hand side of the equation in Step 3 is then equal to its ideal value plus a small error, so the error per entry of the solution $z$ is small. Following the proof of Lemma 3.4, we know $\mathrm{Diag}(z)^{-1}$ has diagonal entries within a small multiplicative factor of those of $D$.
Now we consider the output. The output for $A$ is equal to the idealized output applied to $A^\top + \Delta_1$. Here we know $\Delta_1$ has small $\ell_1$ norm per row, $\mathrm{Diag}(z)^{-1}$ is a diagonal matrix whose entries are close to those of $D$, and the entries of the remaining error terms have small absolute value. Following the proof of Lemma 3.4, the final entrywise error of $\tilde{A}$ is roughly the sum of these three errors, and is bounded by a quantity linear in $\epsilon$ and $\epsilon_Q$. (Notice that Lemma 3.4 gives a bound on the $\ell_1$ norm of rows, which is stronger. Here we switch to entrywise error because the entries of $\Delta_1$ are bounded while its $\ell_1$ norm might be large.)
Similarly, the output for $R$ is the idealized output applied to the perturbed submatrix. Again we write $E' = I + E$ and $\mathrm{Diag}(z) = D^{-1}(I+\Delta)$. The extra terms are small because the entries of $E$ are small (otherwise $E'$ would not be close to the identity). The error can again be bounded by a quantity linear in $\epsilon$ and $\epsilon_Q$. ∎
Now in order to prove our main theorem we just need to show that when the number of documents is large enough, the matrix $Q$ is close to $ARA^\top$, and plug the error bounds into Lemma 3.6.
3.3 Error Bounds for $Q$
Here we show that the matrix $Q$ indeed converges to $ARA^\top$ when the number of documents $m$ is large enough.
Lemma 3.7.
When the number of documents $m$ is large enough, with high probability all entries of $Q - ARA^\top$ have absolute value at most $\epsilon_Q$. Further, the $\ell_1$ norms of the rows of $Q$ are also close to the $\ell_1$ norms of the corresponding rows of $ARA^\top$.
Proof.
We shall first show that the expectation of $Q = \frac{1}{m}\tilde{M}_1\tilde{M}_2^\top$ (conditioned on $W$) is equal to $A\tilde{R}A^\top$, where $\tilde{R}$ is $\frac{1}{m}WW^\top$. Then by concentration bounds we show that the entries of $Q$ are close to their expectations. Notice that we could also hope to show that $Q$ converges to $ARA^\top$ directly. However, in that case we would not be able to get the inverse polynomial relationship with the document size $N$ (indeed, even if $N$ goes to infinity it is impossible to learn $R$ from only one document). Replacing $R$ with the empirical $\tilde{R}$ allows our algorithm to perform better when the number of words per document is larger.
To show the expectation is correct, we observe that conditioned on $W$ the entries of the two matrices $\tilde{M}_1$ and $\tilde{M}_2$ are independent, and their expectations are both $AW$. Therefore,
$$\mathbb{E}[Q \mid W] = \tfrac{1}{m}\,\mathbb{E}[\tilde{M}_1 \mid W]\;\mathbb{E}[\tilde{M}_2 \mid W]^\top = \tfrac{1}{m}AWW^\top A^\top = A\tilde{R}A^\top.$$
We still need to show that $Q$ is close to this expectation. This is not surprising because $Q$ is the average of $m$ independent samples (one per document). Further, the variance of each entry can be bounded because $\tilde{M}_1$ and $\tilde{M}_2$ also come from independent samples of words. For any document $d$ and words $i$, $j$, let $M_d$ (the $d$th column of $M$) be the probability distribution that the words of document $d$ are sampled from; then $(\tilde{M}_1)_{i,d}$ is distributed as $\frac{2}{N}\mathrm{Bin}(N/2, (M_d)_i)$ and $(\tilde{M}_2)_{j,d}$ is distributed as $\frac{2}{N}\mathrm{Bin}(N/2, (M_d)_j)$. The variances of these two variables are at most $1/N$ no matter what $M_d$ is, by the properties of the binomial distribution. Conditioned on the vector $M_d$ these two variables are independent, and since each is bounded by 1 the variance of their product is at most $O(1/N)$. The variance of any entry of $Q$ is therefore at most $O(1/(mN))$. Higher moments can be bounded similarly and they satisfy the assumptions of Bernstein's inequality. Thus by Bernstein's inequality the probability that any entry is more than $\epsilon_Q$ away from its expectation is much smaller than $1/n^2$, so a union bound over all entries applies.
The further part follows from the observation that the $\ell_1$ norm of a row of $Q$ is proportional to the number of appearances of the corresponding word. As long as the number of appearances concentrates, the error in the $\ell_1$ norm must be small. The words are all independent (conditioned on $W$), so this is just a direct application of Chernoff bounds. ∎
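As an informal numerical sanity check of this lemma (not part of the paper), one can generate synthetic data and compare $Q$ against $A\tilde{R}A^\top$; all sizes below are arbitrary illustrations, and the entrywise error shrinks as $m$ and $N$ grow.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m, N = 200, 5, 20000, 50
A = rng.dirichlet(np.full(n, 0.1), size=r).T     # topic matrix, columns sum to 1
W = rng.dirichlet(np.full(r, 0.2), size=m).T     # topic weights per document
M = A @ W

def sample_half(M, rng, N_half):
    # empirical word frequencies from N_half word draws per document
    cols = [rng.multinomial(N_half, M[:, d] / M[:, d].sum()) for d in range(M.shape[1])]
    return np.stack(cols, axis=1) / N_half

M1 = sample_half(M, rng, N // 2)
M2 = sample_half(M, rng, N // 2)

Q = (M1 @ M2.T) / m
R_emp = (W @ W.T) / m                            # the empirical topic-topic matrix R~
print(np.abs(Q - A @ R_emp @ A.T).max())         # maximum entrywise deviation
```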
3.4 Proving the Main Theorem
We are now ready to prove Theorem 1.4:
Proof.
By Lemma 3.7 we know that when we have sufficiently many documents, $Q$ is entrywise close to $ARA^\top$. In this case the $\ell_1$ error per row required by Theorem 2.6 is small enough, because in this step we can assume that there are not too many words (see Section 3.5) and normalizing a row requires only a bounded multiplicative factor (we only consider rows whose $\ell_1$ norm is not too small; with high probability all the anchor words are among these rows). The robust simpliciality parameter for Theorem 2.7 is at least a constant fraction of $p\gamma$ by Lemma 2.2 and Lemma 2.5. Thus the almost anchor words found by the algorithm have most of their weight on the diagonal. The errors in the recovered submatrices $DE'RA^\top$ and $DE'RE'^\top D$ are therefore small, and by Lemma 3.6 the entrywise error of $\tilde{A}$ is at most $\epsilon$.
When the number of documents is as large as stated in the theorem, this error is bounded by $\epsilon$. In this case we need two constraints on the number of documents: one ensuring that $Q$ is entrywise close to $ARA^\top$ (Lemma 3.7), and one ensuring that the empirical matrix $\tilde{R}$ retains $\ell_1$ condition number at least $\gamma/2$.
The latter constraint comes from Lemma 2.2.
To get within additive error $\epsilon$ for the matrix $R$, we further need $\tilde{R}$ to be close enough to the variance-covariance matrix of the document-topic distribution, which requires the number of documents to be correspondingly larger.
∎
3.5 Reducing Dictionary Size
Above we assumed that the number of distinct words is small. Here, we give a simple gadget showing that in the general case we can assume this is so, at the loss of an additional additive $\epsilon$ in our accuracy:
Lemma 3.8.
The general case can be reduced to an instance in which the number of distinct words is bounded, all of which (with at most one exception) occur with probability bounded away from zero (inverse polynomial in the model parameters).
Proof.
In fact, we can collect all words that occur infrequently and "merge" all of these words into an aggregate word that we will call the runoff word. To this end, we call a word large if it appears sufficiently many times across the $m$ documents, and otherwise we call it small. Indeed, with high probability all large words are words that occur with probability bounded away from zero in our model. Also, every word whose corresponding row of $A$ has a non-negligible entry will appear with sufficient probability, and is thus a large word with high probability. We can merge all small words (i.e. rename all of these words to a single, new word). Hence we can apply the above algorithm (which assumed that there are not too many distinct words). After we get a result on the modified documents, we can ignore the runoff word and assign weight zero to all of the small words. The result will still be correct up to an additional additive error of $\epsilon$. ∎
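A sketch of this merging gadget (our own illustration; the function name and the choice of count threshold are assumptions rather than values from the paper):

```python
import numpy as np

def merge_small_words(docs, count_threshold):
    """docs: (n, m) matrix of word counts.  Words whose total count across all
    documents is below `count_threshold` are merged into a single runoff word,
    placed as the last row of the returned matrix."""
    totals = docs.sum(axis=1)
    large = totals >= count_threshold
    runoff = docs[~large].sum(axis=0, keepdims=True)     # aggregate counts of all small words
    merged = np.vstack([docs[large], runoff])
    return merged, np.flatnonzero(large)                 # reduced matrix and indices of kept words
```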
4 The Dirichlet Subcase
Here we demonstrate that the parameters of a Dirichlet distribution can be (robustly) recovered from just the covariance matrix $R$. Hence an immediate corollary is that our main learning algorithm can recover both the topic matrix $A$ and the distribution that generates the columns of $W$ in a Latent Dirichlet Allocation (LDA) model [6], provided that $A$ is separable. We believe that this algorithm may be of practical use, and it provides the first alternative to local search and (unproven) approximation procedures for this inference problem [32], [11], [6].
The Dirichlet distribution, parametrized by a vector $\alpha$ of positive reals, is a natural family of continuous multivariate probability distributions. The support of the Dirichlet distribution is the unit simplex whose dimension is the same as the dimension of $\alpha$. Let $\alpha$ be an $r$-dimensional vector. Then for a vector $W$ in the $r$-dimensional simplex, its probability density is given by
$$\mathrm{Dir}(W;\alpha) = \frac{\Gamma\!\left(\sum_{i=1}^r\alpha_i\right)}{\prod_{i=1}^r\Gamma(\alpha_i)}\prod_{i=1}^r W_i^{\alpha_i-1},$$
where $\Gamma(\cdot)$ is the Gamma function. In particular, when all the $\alpha_i$'s are equal to one, the Dirichlet distribution is just the uniform distribution over the probability simplex.
The expectation and variance of the $W_i$'s are easy to compute given the parameters $\alpha$. We denote $\alpha_0 = \sum_{i=1}^r \alpha_i$; then the ratio $\alpha_i/\alpha_0$ should be interpreted as the "size" of the $i$th variable $W_i$, and $\alpha_0$ indicates whether the distribution is concentrated in the interior (when $\alpha_0$ is large) or near the boundary (when $\alpha_0$ is small). The first two moments of the Dirichlet distribution are listed below:
$$\mathbb{E}[W_i] = \frac{\alpha_i}{\alpha_0}, \qquad \mathbb{E}[W_iW_j] = \begin{cases}\dfrac{\alpha_i(\alpha_i+1)}{\alpha_0(\alpha_0+1)} & i = j,\\[2mm]\dfrac{\alpha_i\alpha_j}{\alpha_0(\alpha_0+1)} & i \ne j.\end{cases}$$
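Under the Dirichlet prior, the topic-topic covariance matrix $R$ of Definition 1.3 therefore has a closed form. The small helper below (a sanity check of our own, not an algorithm from the paper) simply evaluates the two moment formulas above.

```python
import numpy as np

def dirichlet_R(alpha):
    """R_{ij} = E[W_i W_j] for W ~ Dirichlet(alpha):
    alpha_i*alpha_j / (a0*(a0+1)) off the diagonal, plus alpha_i/(a0*(a0+1)) on it."""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()
    R = np.outer(alpha, alpha) / (a0 * (a0 + 1))
    R[np.diag_indices_from(R)] += alpha / (a0 * (a0 + 1))
    return R
```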
Suppose the Dirichlet distribution has parameters $\alpha_1, \dots, \alpha_r$ and the sum of the parameters is $\alpha_0$; we give an algorithm that computes close estimates to the vector of parameters $\alpha$ given a sufficiently close estimate to the covariance matrix $R$ (Theorem 4.3). Combining this with Theorem 1.4, we obtain the following corollary:
Theorem 4.1.
There is an algorithm that, with high probability, learns the topic matrix $A$ up to an additive error of $\epsilon$ from a number of documents that is polynomial in $a$, $r$, $1/p$, $1/\epsilon$ and $\log n$, sampled from the LDA model, and runs in time polynomial in the same parameters. Furthermore, we also recover the parameters $\alpha$ of the Dirichlet distribution to within an additive $\epsilon$.
Our main goal in this section is to bound the $\ell_1$ condition number of the topic-topic covariance matrix of a Dirichlet distribution (Section 4.1), and using this we show how to recover the parameters of the distribution from its covariance matrix $R$ (Section 4.2).
4.1 Condition Number of a Dirichlet Distribution
There is a well-known meta-principle that if a matrix is chosen by picking its columns independently from a fairly diffuse distribution, then it will be far from low rank. However, our analysis requires an explicit lower bound on $\Gamma(R)$. We now prove such a bound when the columns of $W$ are chosen from a Dirichlet distribution with parameter vector $\alpha$. We note that it is easy to establish such bounds for other types of distributions as well. Recall that we defined $R$ in Section 1; here we abuse notation, and throughout this section we will denote by $R$ the matrix $\mathbb{E}_{W\sim\mathrm{Dir}(\alpha)}[WW^\top]$, where $\mathrm{Dir}(\alpha)$ is the Dirichlet distribution with parameter $\alpha$.
Let $\alpha_0 = \sum_{i=1}^r\alpha_i$. The mean, variance and covariance of a Dirichlet distribution are well-known, from which we observe that $R_{ij}$ is equal to $\frac{\alpha_i(\alpha_i+1)}{\alpha_0(\alpha_0+1)}$ when $i = j$ and is equal to $\frac{\alpha_i\alpha_j}{\alpha_0(\alpha_0+1)}$ when $i \ne j$.
Lemma 4.2.
The $\ell_1$ condition number of $R$ is at least $\frac{1}{2(\alpha_0+1)}$.
Proof.
As the entries of $R$ are $\frac{\alpha_i\alpha_j}{\alpha_0(\alpha_0+1)}$ when $i \ne j$ and $\frac{\alpha_i(\alpha_i+1)}{\alpha_0(\alpha_0+1)}$ when $i = j$, after normalizing the rows to sum to 1 the matrix is just $\frac{1}{\alpha_0+1}(\vec{1}\alpha^\top + I)$, where $\vec{1}\alpha^\top$ is an outer product and $I$ is the identity matrix.
Let $x$ be a vector such that $\|x\|_1 = 1$ and $x$ achieves the minimum in the definition of the $\ell_1$ condition number. Let $S^+ = \{i : x_i \ge 0\}$ and let $S^-$ be the complement. We can assume without loss of generality that $\sum_i x_i \ge 0$ (otherwise just take $-x$ instead), so that $\sum_{i\in S^+}x_i \ge 1/2$. The product is $\frac{1}{\alpha_0+1}\big((\sum_i x_i)\,\alpha^\top + x^\top\big)$. The first term is a nonnegative vector and hence for each $i \in S^+$ the $i$th coordinate is at least $\frac{x_i}{\alpha_0+1}$. This implies that the $\ell_1$ norm of the product is at least $\frac{1}{\alpha_0+1}\sum_{i\in S^+}x_i \ge \frac{1}{2(\alpha_0+1)}$.
∎
4.2 Recovering the Parameters of a Dirichlet Distribution
When the variance-covariance matrix $R$ is recovered with error $\epsilon_R$ in $\ell_1$ norm when viewed as a vector, we can use Algorithm 4 (Dirichlet) to compute the parameters of the Dirichlet distribution.
Theorem 4.3.
When the variance-covariance matrix $R$ is recovered with error $\epsilon_R$ in $\ell_1$ norm when viewed as a vector, the procedure Dirichlet($R$) learns the parameters of the Dirichlet distribution, each with error at most a fixed polynomial factor (in $r$ and $\alpha_0$) times $\epsilon_R$.
Proof.
The row sums and diagonal entries of $R$ all have error at most $\epsilon_R$. The value $\alpha_i/\alpha_0$ is the $i$th row sum of $R$, and the value $(\alpha_i+1)/(\alpha_0+1)$ is the ratio of the $i$th diagonal entry to the $i$th row sum. Since these quantities are bounded away from zero, the error in each of them is at most a constant multiple of $\epsilon_R$. Finally we need to bound the denominator in the expression for $\alpha_0$ (it is bounded below because the $\alpha_i$'s are bounded). Thus the final error is at most a polynomial factor times $\epsilon_R$. ∎
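The following is a minimal sketch of one way to invert the moment formulas of Section 4 (our reconstruction of the idea behind Algorithm 4, not its verbatim form): the row sums of $R$ give $\alpha_i/\alpha_0$, the diagonal-to-row-sum ratios give $(\alpha_i+1)/(\alpha_0+1)$, and together these determine $\alpha_0$ and then $\alpha$. Averaging the estimate of $\alpha_0$ over coordinates is our own design choice for stability under noise.

```python
import numpy as np

def recover_dirichlet_params(R):
    """Recover alpha from R = E[W W^T] with W ~ Dirichlet(alpha), by inverting
    the first two moments (exact for the true R, approximate for a noisy R)."""
    row_sums = R.sum(axis=1)            # = alpha_i / alpha_0  (the means E[W_i])
    t = np.diag(R) / row_sums           # = (alpha_i + 1) / (alpha_0 + 1)
    # Solve m_i * alpha_0 + 1 = t_i * (alpha_0 + 1) for alpha_0, averaging over i.
    alpha0 = np.mean((1.0 - t) / (t - row_sums))
    return row_sums * alpha0
```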
5 Obtaining Almost Anchor Words
In this section, we prove Theorem 2.7, which we restate here:
Theorem 5.1.
Suppose $M = AB$ where $A$ and $B$ are normalized to have rows summing to 1, $A$ is $p$-separable and $B$ is $\beta$-robustly simplicial. When $\epsilon$ is sufficiently small compared to $p$ and $\beta$, there is a polynomial time algorithm that, given $\tilde{M}$ such that for all rows