Computing n-Gram Statistics in MapReduce

Klaus Berberich, Srikanta Bedathur
Max Planck Institute for Informatics
Saarbrücken, Germany
Indraprastha Institute of Information Technology
New Delhi, India

Statistics about n-grams (i.e., sequences of contiguous words or other tokens in text documents or other string data) are an important building block in information retrieval and natural language processing. In this work, we study how n-gram statistics, optionally restricted by a maximum n-gram length and minimum collection frequency, can be computed efficiently harnessing MapReduce for distributed data processing. We describe different algorithms, ranging from an extension of word counting, via methods based on the Apriori principle, to a novel method Suffix-σ that relies on sorting and aggregating suffixes. We examine possible extensions of our method to support the notions of maximality/closedness and to perform aggregations beyond occurrence counting. Assuming Hadoop as a concrete MapReduce implementation, we provide insights on an efficient implementation of the methods. Extensive experiments on The New York Times Annotated Corpus and ClueWeb09 expose the relative benefits and trade-offs of the methods.

I Introduction

Applications in various fields including information retrieval [12, 46] and natural language processing [13, 18, 39] rely on statistics about n-grams (i.e., sequences of contiguous words in text documents or other string data) as an important building block. Google and Microsoft have made available n-gram statistics computed on parts of the Web. While certainly a valuable resource, one limitation of these datasets is that they only consider n-grams consisting of up to five words. With this limitation, there is no way to capture idioms, quotations, poetry, lyrics, and other types of named entities (e.g., products, books, songs, or movies) that typically consist of more than five words and are crucial to applications including plagiarism detection, opinion mining, and social media analytics.

MapReduce has gained popularity in recent years both as a programming model and through its open-source implementation Hadoop. It provides a platform for distributed data processing, for instance, on web-scale document collections. MapReduce imposes a rigid programming model, but rewards its users with features such as handling of node failures and an automatic distribution of the computation. To make most effective use of it, problems need to be cast into its programming model, taking into account its particularities.

In this work, we address the problem of efficiently computing n-gram statistics on MapReduce platforms. We allow for a restriction of the n-gram statistics to be computed by a maximum length σ and a minimum collection frequency τ. Only n-grams consisting of up to σ words and occurring at least τ times in the document collection are thus considered.

While this can be seen as a special case of frequent sequence mining, our experiments on two real-world datasets show that MapReduce adaptations of Apriori-based methods [38, 44] do not perform well – in particular when long and/or less frequent n-grams are of interest. In this light, we develop our novel method Suffix-σ that is based on ideas from string processing. Our method makes thoughtful use of MapReduce's grouping and sorting functionality. It keeps the number of records that have to be sorted by MapReduce low and exploits their order to achieve a compact main-memory footprint when determining collection frequencies of all n-grams considered.

We also describe possible extensions of our method. This includes the notions of maximality/closedness, known from frequent sequence mining, that can drastically reduce the amount of n-gram statistics computed. In addition, we investigate to what extent our method can support aggregations beyond occurrence counting, using n-gram time series, recently made popular by Michel et al. [32], as an example.

Contributions made in this work include:

  • a novel method Suffix-σ to compute n-gram statistics that has been specifically designed for MapReduce;

  • a detailed account of efficient implementation and possible extensions of Suffix-σ (e.g., to consider maximal/closed n-grams or support other aggregations);

  • a comprehensive experimental evaluation on The New York Times Annotated Corpus (1.8 million news articles from 1987–2007) and ClueWeb09-B (50 million web pages crawled in 2009), as two large-scale real-world datasets, comparing our method against state-of-the-art competitors and investigating their trade-offs.

Suffix-σ outperforms its best competitor in our experiments by up to a factor of 12 when long and/or less frequent n-grams are of interest. Otherwise, it performs at least on par with the best competitor.

Organization. Section II introduces our model. Section III details methods to compute n-gram statistics based on prior ideas. Section IV introduces our method Suffix-σ. Aspects of efficient implementation are addressed in Section V. Possible extensions of Suffix-σ are sketched in Section VI. Our experiments are the subject of Section VII. In Section VIII, we put our work into context, before concluding in Section IX.

II Preliminaries

We now introduce our model, establish our notation, and provide some technical background on MapReduce.

II-A Data Model

Our methods operate on sequences of terms (i.e., words or other textual tokens) drawn from a vocabulary V. We let V* denote the universe of all sequences over V. Given a sequence s = ⟨s1 … s|s|⟩ ∈ V*, we refer to its length as |s|, write s[i..j] for the subsequence ⟨si … sj⟩, and let s[i] refer to the element si. For two sequences s and t, we let s ∘ t denote their concatenation. We say that

  • t is a prefix of s (t ⊑p s) iff t = s[1..|t|]

  • t is a suffix of s (t ⊑s s) iff t = s[|s| − |t| + 1..|s|]

  • t is a subsequence of s (t ⊑ s) iff ∃ i : t = s[i..i + |t| − 1]

and capture how often t occurs in s as

  f(t, s) = |{ i : s[i..i + |t| − 1] = t }| .

To avoid confusion, we use the following convention: When referring to sequences of terms having a specific length k, we will use the notion k-gram or indicate the considered length by alluding to, for instance, 3-grams. The notion n-gram, as found in the title, will be used when referring to variable-length sequences of terms.

As an input, all methods considered in this work receive a document collection D consisting of sequences of terms as documents. Our focus is on determining how often n-grams occur in the document collection. Formally, the collection frequency of an n-gram v is defined as

  cf(v) = Σ_{d ∈ D} f(v, d) .

Alternatively, one could consider the document frequency df(v) of an n-gram as the total number of documents that contain it. While this corresponds to the notion of support typically used in frequent sequence mining, it is less common for natural language applications. However, all methods presented below can easily be modified to produce document frequencies instead.
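As a concrete illustration of these definitions, the following sketch (plain Python, our own naming, using the running example introduced later) computes f(t, s) and cf(t):

```python
def occurrences(t, s):
    """f(t, s): number of positions at which t occurs contiguously in s."""
    n, m = len(s), len(t)
    return sum(1 for i in range(n - m + 1) if s[i:i + m] == list(t))

def collection_frequency(t, collection):
    """cf(t): total number of occurrences of the n-gram t across all documents."""
    return sum(occurrences(t, d) for d in collection)

# the three example documents used throughout the paper
D = [["a", "x", "b", "x", "x"],
     ["b", "a", "x", "b", "x"],
     ["x", "b", "a", "x", "b"]]
cf_xb = collection_frequency(("x", "b"), D)  # 4: once in d1, once in d2, twice in d3
```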

II-B MapReduce

MapReduce, as described by Dean and Ghemawat [17], is a programming model and an associated runtime system at Google. While originally proprietary, the MapReduce programming model has been widely adopted in practice and several implementations exist. In this work, we rely on Hadoop [1] as a popular open-source MapReduce platform. The objective of MapReduce is to facilitate distributed data processing on large-scale clusters of commodity computers. MapReduce enforces a functional style of programming and lets users express their tasks as two functions

map() : (k1,v1) -> list<(k2,v2)>
reduce() : (k2, list<v2>) -> list<(k3,v3)>

that consume and emit key-value pairs. Between the map- and reduce-phase, the system sorts and groups the key-value pairs emitted by the map-function. The partitioning of key-value pairs (i.e., how they are assigned to cluster nodes) and their sort order (i.e., in which order they are seen by the reduce-function on each cluster node) can be customized, if needed for the task at hand. For detailed introductions to working with MapReduce and Hadoop, we refer to Lin and Dyer [29] as well as White [41].
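To make the data flow concrete, here is a minimal in-memory model of a MapReduce job (our own sketch, not Hadoop code): the framework applies map to each record, sorts and groups the intermediate pairs by key, and feeds each group to reduce.

```python
from itertools import groupby
from operator import itemgetter

def run_mapreduce(records, map_fn, reduce_fn):
    """Minimal single-process model of a MapReduce job: map, shuffle/sort, reduce."""
    intermediate = [kv for rec in records for kv in map_fn(*rec)]
    intermediate.sort(key=itemgetter(0))  # the system sorts by key between phases
    out = []
    for key, group in groupby(intermediate, key=itemgetter(0)):
        out.extend(reduce_fn(key, [v for _, v in group]))
    return out

# word counting, the canonical MapReduce example
docs = [(1, "a x b"), (2, "b a x")]
result = run_mapreduce(
    docs,
    map_fn=lambda did, text: [(w, 1) for w in text.split()],
    reduce_fn=lambda k, vs: [(k, sum(vs))],
)  # [('a', 2), ('b', 2), ('x', 2)]
```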

III Methods Based on Prior Ideas

With our notation established, we next describe three methods based on prior ideas to compute n-gram statistics in MapReduce. Before delving into their details, let us state the problem that we address in more formal terms:

Given a document collection D, a minimum collection frequency τ, and a maximum length σ, our objective is to identify all n-grams v together with their collection frequency cf(v), for which cf(v) ≥ τ and |v| ≤ σ hold.

We thus assume that n-grams are only of interest to the task at hand if they occur at least τ times in the document collection, coined frequent in the following, and consist of at most σ terms. Consider, as an example task, the construction of n-gram language models [46], for which one would only look at n-grams up to a specific length and/or resort to back-off models [24] to obtain more robust estimates for n-grams that occur less than a specific number of times.

The problem statement above can be seen as a special case of frequent sequence mining that considers only contiguous sequences of single-element itemsets. We believe this to be an important special case that warrants individual attention and allows for an efficient solution in MapReduce, as we show in this work. A more elaborate comparison to existing research on frequent sequence mining is part of Section VIII.

To ease our explanations below, we use the following running example, considering a collection of three documents:

d1 = ⟨ a x b x x ⟩
d2 = ⟨ b a x b x ⟩
d3 = ⟨ x b a x b ⟩

With parameters τ = 3 and σ = 3, we expect as output

a : 3  b : 5  x : 7
a x : 3  x b : 4
a x b : 3

from any method, when applied to this document collection.

III-A Naïve Counting

One of the example applications of MapReduce, given by Dean and Ghemawat [17] and also used in many tutorials, is word counting, i.e., determining the collection frequency of every word in the document collection. It is straightforward to adapt word counting to consider variable-length n-grams instead of only unigrams and discard those that occur less than τ times. Pseudo code of this method, which we coin Naïve, is given in Algorithm 1.

// Mapper
map(long did, seq d) begin
       for i ← 1 to |d| do
              for j ← i to min(i + σ − 1, |d|) do
                     emit(seq d[i..j], long did)
// Reducer
reduce(seq v, list<long> dids) begin
       if |dids| ≥ τ then
              emit(seq v, int |dids|)
Algorithm 1: Naïve

In the map-function, the method emits all n-grams of length up to σ for a document together with the document identifier. If an n-gram occurs more than once, it is emitted multiple times. In the reduce-phase, the collection frequency of every n-gram is determined and, if it is at least τ, emitted together with the n-gram itself.

Interestingly, apart from minor optimizations, this is the method that Brants et al. [13] used for training large-scale language models at Google, considering n-grams up to length five. In practice, several tweaks can be applied to improve this simple method including local pre-aggregation in the map-phase (e.g., using a combiner in Hadoop). Implementation details of this kind are covered in more detail in Section V. The potentially vast number of emitted key-value pairs that needs to be transferred and sorted, though, remains a shortcoming.
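Under the parameters of the running example (τ = 3, σ = 3), Naïve can be simulated in a few lines of Python (our own sketch; the driver dictionary stands in for MapReduce's shuffle):

```python
def naive_map(did, d, sigma):
    """Emit every n-gram of length <= sigma together with the document id."""
    for i in range(len(d)):
        for j in range(i + 1, min(i + sigma, len(d)) + 1):
            yield tuple(d[i:j]), did

def naive_reduce(ngram, dids, tau):
    """Emit the n-gram with its collection frequency if it reaches tau."""
    if len(dids) >= tau:
        yield ngram, len(dids)

D = {1: ["a", "x", "b", "x", "x"],
     2: ["b", "a", "x", "b", "x"],
     3: ["x", "b", "a", "x", "b"]}
groups = {}  # stands in for MapReduce's sort/group between the phases
for did, d in D.items():
    for ngram, v in naive_map(did, d, sigma=3):
        groups.setdefault(ngram, []).append(v)
out = {ng: cf for key, dids in groups.items()
              for ng, cf in naive_reduce(key, dids, tau=3)}
# out contains exactly the six n-grams of the expected example output
```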

In the worst case, when σ = |d|, Naïve emits O(|d|²) key-value pairs for a document d, each consuming O(|d|) bytes, so that the method transfers O(|d|³) bytes per document between the map- and reduce-phase. Complementary to that, we can determine the number of key-value pairs emitted based on the n-gram statistics: Naïve emits a total of Σ_{v : |v| ≤ σ} cf(v) key-value pairs, each of which consumes O(σ) bytes.

III-B Apriori-Based Methods

How can one do better than the naïve method just outlined? One idea is to exploit the Apriori principle, as described by Agrawal et al. [9] in their seminal paper on identifying frequent itemsets and follow-up work on frequent pattern mining [10, 37, 38, 44]. Cast into our setting, the Apriori principle states that

  v ⊑ w ⟹ cf(v) ≥ cf(w)

holds for any two sequences v and w, i.e., the collection frequency of a sequence v is an upper bound for the collection frequency of any supersequence w. In what follows, we describe two methods that make use of the Apriori principle to compute n-gram statistics in MapReduce.
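In code, the pruning test derived from this principle is a simple containment check against the previous round's output (our own sketch; the helper name is ours):

```python
def apriori_candidate(ngram, frequent_prev):
    """A k-gram can only be frequent if both constituent (k-1)-grams are."""
    return tuple(ngram[:-1]) in frequent_prev and tuple(ngram[1:]) in frequent_prev

# frequent bigrams from the running example
frequent_2 = {("a", "x"), ("x", "b")}
ok = apriori_candidate(("a", "x", "b"), frequent_2)   # True: both bigrams frequent
bad = apriori_candidate(("b", "x", "x"), frequent_2)  # False: ("x","x") is infrequent
```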


The first Apriori-based method, Apriori-Scan, like the original Apriori algorithm [9] and GSP [38], performs multiple scans over the input data. During the k-th scan the method determines k-grams that occur at least τ times in the document collection. To this end, it exploits the output from the previous scan via the Apriori principle to prune the considered k-grams: in the k-th scan, only those k-grams are considered whose two constituent (k−1)-grams are known to be frequent. Unlike GSP, which first generates all potentially frequent sequences as candidates, Apriori-Scan considers only sequences that actually occur in the document collection. The method terminates after σ scans or when a scan does not produce any output.

Algorithm 2 shows how the method can be implemented in MapReduce. The outer repeat-loop controls the execution of multiple MapReduce jobs, each of which performs one distributed parallel scan over the input data. In the k-th iteration, and thus the k-th scan of the input data, the method considers all k-grams from an input document in the map-function, but discards those that have a constituent (k−1)-gram that is known to be infrequent. This pruning is done leveraging the output from the previous iteration, which is kept in a dictionary. In the reduce-function, analogous to Naïve, collection frequencies of k-grams are determined and output if at least the minimum collection frequency τ. After σ iterations or once an iteration does not produce any output, the method terminates, which is safe since the Apriori principle guarantees that no longer n-gram can occur τ or more times in the document collection.

int k ← 1
repeat
       hashset<int[]> dict ← load(output-(k−1))
       // Mapper
       map(long did, seq d) begin
              for i ← 1 to |d| − k + 1 do
                     if k = 1
                      ∨ ( contains(dict, d[i..i+k−2])
                       ∧ contains(dict, d[i+1..i+k−1]) ) then
                            emit(seq d[i..i+k−1], long did)
       // Reducer
       reduce(seq v, list<long> dids) begin
              if |dids| ≥ τ then
                     emit(seq v, int |dids|)
       k ← k + 1
until k > σ ∨ isEmpty(output-(k−1)) ;
Algorithm 2: Apriori-Scan

When applied to our running example, in its third scan of the input data, Apriori-Scan emits in the map-phase for every document only the key-value pair for a x b, but discards other trigrams (e.g., b x x) that contain an infrequent bigram (e.g., x x).

When implemented in MapReduce, every iteration corresponds to a separate job that needs to be run and comes with its administrative fixed cost (e.g., for launching and finalizing the job). Another challenge in Apriori-Scan is the implementation of the dictionary that makes the output from the previous iteration available and accessible to cluster nodes. This dictionary can either be implemented locally, so that every cluster node receives a replica of the previous iteration's output (e.g., implemented using the distributed cache in Hadoop), or by loading the output from the previous iteration into a shared dictionary (e.g., implemented using a distributed key-value store) that can then be accessed remotely by cluster nodes. Either way, to make lookups in the dictionary efficient, significant main memory at cluster nodes is required.

An apparent shortcoming of Apriori-Scan is that it has to scan the entire input data in every iteration. Thus, although typically only few frequent n-grams are found in later iterations, the cost of an iteration depends on the size of the input data. The number of iterations needed, on the other hand, is determined by the parameter σ or the length of the longest frequent n-gram.

In the worst case, when σ = |d| and no pruning takes effect, Apriori-Scan emits O(|d|²) key-value pairs per document d, each consuming O(|d|) bytes, so that the method transfers O(|d|³) bytes per document between the map- and reduce-phase. Again, we provide a complementary analysis based on the actual n-gram statistics. To this end, let

  C = { v ∈ V* : |v| ≤ σ ∧ ∀ w ⊑ v, w ≠ v : cf(w) ≥ τ }

denote the set of sequences that cannot be pruned based on the Apriori principle, i.e., whose proper subsequences all occur at least τ times in the document collection. Apriori-Scan emits a total of Σ_{v ∈ C} cf(v) key-value pairs, each of which amounts to O(σ) bytes. Obviously, C ⊆ { v ∈ V* : |v| ≤ σ } holds, so that Apriori-Scan emits at most as many key-value pairs as Naïve. Its concrete gains, though, depend on the value of τ and characteristics of the document collection.


The second Apriori-based method, Apriori-Index, does not repeatedly scan the input data but incrementally builds an inverted index of frequent n-grams from the input data as a more compact representation. Operating on an index structure as opposed to the original data and considering n-grams of increasing length, it resembles SPADE [44] when breadth-first traversing the sequence lattice.

int k ← 1
repeat
       if k ≤ 2 then
              // Mapper #1
              map(long did, seq d) begin
                     hashmap<seq, list<int>> pos
                     for i ← 1 to |d| − k + 1 do
                            add(get(pos, d[i..i+k−1]), i)
                     for seq v : keys(pos) do
                            emit(seq v, posting ⟨did, get(pos, v)⟩)
              // Reducer #1
              reduce(seq v, list<posting> P) begin
                     if cf(P) ≥ τ then
                            emit(seq v, list<posting> P)
       else
              // Mapper #2
              map(seq v, list<posting> P) begin
                     emit(seq v[1..k−2], ⟨r-seq v, list<posting> P⟩)
                     emit(seq v[2..k−1], ⟨l-seq v, list<posting> P⟩)
              // Reducer #2
              reduce(seq w, list<⟨seq, list<posting>⟩> V) begin
                     for ⟨l-seq u, list<posting> Pu⟩ : V do
                            for ⟨r-seq v, list<posting> Pv⟩ : V do
                                   list<posting> P ← join(Pu, Pv)
                                   if cf(P) ≥ τ then
                                          emit(seq u ∘ v[|v|], list<posting> P)
       k ← k + 1
until k > σ ∨ isEmpty(output-k) ;
Algorithm 3: Apriori-Index

Pseudo code of Apriori-Index is given in Algorithm 3. In its first phase, the method constructs an inverted index with positional information for all frequent n-grams up to length 2 (cf. Mapper #1 and Reducer #1 in the pseudo code). In its second phase, to identify frequent k-grams beyond that length, Apriori-Index harnesses the output from the previous iteration. Thus, to determine a frequent k-gram (e.g., b a x), the method joins the posting lists of its constituent (k−1)-grams (i.e., b a and a x). In MapReduce, this can be accomplished as follows (cf. Mapper #2 and Reducer #2 in the pseudo code): The map-function emits for every frequent (k−1)-gram two key-value pairs. The frequent (k−1)-gram itself along with its posting list serves in both as a value. As keys, the prefix and the suffix of length k−2 are used. In the pseudo code, the method keeps track of whether the key is a prefix or a suffix of the sequence in the value by using the r-seq and l-seq subtypes. The reduce-function identifies for a specific key all compatible sequences from the values, joins their posting lists, and emits the resulting k-gram along with its posting list if its collection frequency is at least τ. Two sequences are compatible and must be joined, if one has the current key as a prefix, and the other has it as a suffix. In its nested for-loops, the method considers all compatible combinations of sequences. This second phase of Apriori-Index can be seen as a distributed candidate generation and pruning step.

Applied to our running example and assuming τ = 3, the method only sees one pair of compatible sequences with their posting lists for the key x in its third iteration, namely:

a x : ⟨ d1, ⟨1⟩ ⟩ ⟨ d2, ⟨2⟩ ⟩ ⟨ d3, ⟨3⟩ ⟩
x b : ⟨ d1, ⟨2⟩ ⟩ ⟨ d2, ⟨3⟩ ⟩ ⟨ d3, ⟨1, 4⟩ ⟩

By joining those, Apriori-Index obtains the only frequent 3-gram with its posting list

a x b : ⟨ d1, ⟨1⟩ ⟩ ⟨ d2, ⟨2⟩ ⟩ ⟨ d3, ⟨3⟩ ⟩
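The join performed by Reducer #2 can be sketched as follows (our own Python sketch; postings are modeled as dicts from document id to sorted 1-based positions, as in the example above):

```python
def join_postings(left, right):
    """Join postings of a (k-1)-gram `left` with those of a compatible
    (k-1)-gram `right`: an occurrence of the joined k-gram exists wherever
    `right` starts exactly one position after `left` in the same document."""
    out = {}
    for did, positions in left.items():
        if did not in right:
            continue
        starts = sorted(set(positions) & {p - 1 for p in right[did]})
        if starts:
            out[did] = starts
    return out

# posting lists of "a x" and "x b" from the running example (1-based positions)
ax = {1: [1], 2: [2], 3: [3]}
xb = {1: [2], 2: [3], 3: [1, 4]}
axb = join_postings(ax, xb)  # posting list of "a x b": {1: [1], 2: [2], 3: [3]}
```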

In the final iteration, it would be enough to determine only collection frequencies, as opposed to positional information, of n-grams. While a straightforward optimization in practice, we opted for simpler pseudo code. When implemented as described in Algorithm 3, the method produces an inverted index with positional information that can be used to quickly determine the locations of a specific frequent n-gram.

One challenge when implementing Apriori-Index is that the number and size of posting-list values seen for a specific key can become large in practice. Moreover, to join compatible sequences, these posting lists have to be buffered, and a scalable implementation must deal with the case when this is not possible in the available main memory. This can, for instance, be accomplished by storing posting lists temporarily in a disk-resident key-value store.

The number of iterations needed by Apriori-Index is determined by the parameter σ or the length of the longest frequent n-gram. Since every iteration, as for Apriori-Scan, corresponds to a separate MapReduce job, a non-negligible administrative fixed cost is incurred.

In the worst case, when all of its k-grams are frequent, Apriori-Index emits O(|d|) key-value pairs per document d in each iteration; since a posting-list value can consume O(|d|) bytes, O(|d|²) bytes are transferred between the map- and reduce-phase. We provide a complementary analysis based on the actual n-gram statistics. In its first two iterations, Apriori-Index emits Σ_{v : |v| ≤ 2} df(v) key-value pairs, where df(v) refers to the document frequency of the n-gram v, as mentioned in Section II. To analyze the following iterations, let

  F_k = { v ∈ V* : |v| = k ∧ cf(v) ≥ τ }

denote the set of frequent k-grams, i.e., those that occur at least τ times. In its second phase, Apriori-Index emits a total of

  Σ_{k=2}^{σ−1} 2 · |F_k|

key-value pairs, each of which consumes bytes proportional to the length of its posting list. Like for Apriori-Scan, the concrete gains depend on the value of τ and characteristics of the document collection.

IV Suffix Sorting & Aggregation

As already argued, the methods presented so far suffer from either excessive amounts of data that need to be transferred and sorted, requiring possibly many MapReduce jobs, or a high demand for main memory at cluster nodes. Our novel method Suffix-σ avoids these deficiencies: It requires a single MapReduce job, transfers only a modest amount of data, and requires little main memory at cluster nodes.

Consider again what the map-function in the Naïve approach emits for document d2 from our running example. Emitting key-value pairs for all of the n-grams b a x, b a, and b is clearly wasteful. The key observation here is that the latter two are subsumed by the first one and can be obtained as its prefixes. Suffix arrays [31] and other string processing techniques exploit this very idea.

Based on this observation, it is safe to emit key-value pairs only for a subset of the n-grams contained in a document. More precisely, it is enough to emit at every position in the document a single key-value pair with the suffix starting at that position as a key. These suffixes can further be truncated to length σ – hence the name of our method.
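For document d2 of the running example and σ = 3, the map-phase therefore emits just one truncated suffix per position (a sketch under our naming):

```python
def suffix_sigma_map(did, d, sigma):
    """Emit, for each position, the suffix starting there, truncated to sigma."""
    for i in range(len(d)):
        yield tuple(d[i:i + sigma]), did

pairs = list(suffix_sigma_map(2, ["b", "a", "x", "b", "x"], sigma=3))
# 5 pairs, one per position: ('b','a','x'), ('a','x','b'), ('x','b','x'),
# ('b','x'), and ('x',), each paired with document id 2
```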

// Mapper
map(long did, seq d) begin
       for i ← 1 to |d| do
              emit(seq d[i..min(i + σ − 1, |d|)], long did)
// Reducer
stack<int> terms ; stack<int> counts
reduce(seq s, list<long> dids) begin
       while lcp(s, seq(terms)) < len(terms) do
              if peek(counts) ≥ τ then
                     emit(seq seq(terms), int peek(counts))
              pop(terms)
              push(counts, pop(counts) + pop(counts))
       if len(terms) = len(s) then
              push(counts, pop(counts) + |dids|)
       for i ← lcp(s, seq(terms)) + 1 to len(s) do
              push(terms, s[i])
              push(counts, i = len(s) ? |dids| : 0)
cleanup() begin
       reduce(seq ⟨⟩, list<long> ⟨⟩)
// Partitioner
partition(seq s) begin
       return hashcode(s[1]) mod m
// Comparator
compare(seq s, seq t) begin
       for i ← 1 to min(len(s), len(t)) do
              if s[i] > t[i] then
                     return −1
              else if s[i] < t[i] then
                     return +1
       return len(t) − len(s)
Algorithm 4: Suffix-σ

To determine the collection frequency of a specific n-gram v, we have to determine how many of the suffixes emitted in the map-phase are prefixed by v. To do so correctly using only a single MapReduce job, we must ensure that all relevant suffixes are seen by the same reducer. This can be accomplished by partitioning suffixes based on their first term only, as opposed to all terms therein. It is thus guaranteed that a single reducer receives all suffixes that begin with the same term. This reducer is then responsible for determining the collection frequencies of all n-grams starting with that term. One way to accomplish this would be to enumerate all prefixes of a received suffix and aggregate their collection frequencies in main memory (e.g., using a hashmap or a prefix tree). Since it is unknown whether an n-gram is represented by other yet unseen suffixes from the input, it cannot be emitted early along with its collection frequency. Bookkeeping is thus needed for many n-grams and requires significant main memory.

How can we reduce the main-memory footprint and emit n-grams with their collection frequency early on? The key idea is to exploit that the order in which key-value pairs are sorted and received by reducers can be influenced. Suffix-σ sorts key-value pairs in reverse lexicographic order of their suffix key, formally defined as follows for sequences s and t:

  s < t ⇔ ( ∃ i : s[i] > t[i] ∧ s[1..i−1] = t[1..i−1] )
        ∨ ( |s| > |t| ∧ t ⊑p s )

To see why this is useful, recall that each suffix from the input represents all n-grams that can be obtained as its prefixes. Let s denote the current suffix from the input. The reverse lexicographic order guarantees that we can safely emit any n-gram v with v < s, since no yet unseen suffix from the input can represent v. Conversely, at this point, the only n-grams for which we have to do bookkeeping, since they are represented both by the current suffix and potentially by yet unseen suffixes, are the prefixes of s. We illustrate this observation with our running example. The reducer responsible for suffixes starting with b receives:

b x x : ⟨ d1 ⟩
b x : ⟨ d2 ⟩
b a x : ⟨ d2, d3 ⟩
b : ⟨ d3 ⟩

When seeing the third suffix b a x, we can immediately finalize the collection frequency of the n-gram b x and emit it, since no yet unseen suffix can have it as a prefix. On the contrary, the n-grams b and b a cannot be emitted, since yet unseen suffixes from the input may have them as a prefix.

Building on this observation, we can do efficient bookkeeping for prefixes of the current suffix only and lazily aggregate their collection frequencies using two stacks. On the first stack, terms, we keep the terms constituting the current suffix s. The second stack, counts, keeps one counter per prefix of s. Between invocations of the reduce-function, we maintain two invariants. First, the two stacks have the same size. Second, the i-th counter from the bottom of counts reflects how often the n-gram formed by the bottommost i terms has been seen so far in the input. To maintain these invariants, when processing a suffix s from the input, we first synchronously pop elements from both stacks until the contents of terms form a prefix of s. Before each pop operation, we emit the contents of terms together with the top element of counts, if the latter is at least our minimum collection frequency τ. When popping an element from counts, its value is added to the new top element. Following that, we update terms, so that its contents equal the suffix s. For all but the last term added, a zero is put on counts. For the last term, we put the frequency of s, reflected by the length of its associated document-identifier list value, on counts. Figure 1 illustrates how the states of the two stacks evolve, as the above example input is processed.

Fig. 1: Suffix-σ's bookkeeping illustrated

Pseudo code of Suffix-σ is given in Algorithm 4. The map-function emits for every document all its suffixes, truncated to length σ. The reduce-function reads suffixes in reverse lexicographic order and performs the bookkeeping using two separate stacks for n-grams (terms) and their collection frequencies (counts), as described above. The function seq(terms) returns the n-gram corresponding to the entire terms stack. The function lcp() returns the length of the longest common prefix that two n-grams share. In addition, Algorithm 4 contains a partition-function ensuring that suffixes are assigned to one of m reducers solely based on their first term, as well as a compare-function that ensures the reverse lexicographic order of input suffixes. When implemented in Hadoop, these two functions would materialize as a custom partitioner class and a custom comparator class. Finally, cleanup() is a method invoked once when all input has been seen; it flushes the remaining stack contents.

Suffix-σ emits |d| key-value pairs per document d, one per position. Each of these key-value pairs consumes O(σ) bytes, so that the method transfers O(|d| · σ) bytes between the map- and reduce-phase. For every term occurrence in the document collection, Suffix-σ emits exactly one key-value pair, so that in total Σ_{d ∈ D} |d| key-value pairs are emitted, each consuming O(σ) bytes.
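The reducer-side logic of Algorithm 4, i.e., reverse lexicographic order plus the two-stack bookkeeping, can be prototyped as follows (our own sketch; the in-memory sort simulates MapReduce's shuffle, and the empty sentinel suffix plays the role of cleanup()):

```python
from functools import cmp_to_key

def reverse_lex(s, t):
    """Reverse lexicographic order: b x x < b x < b a x < b."""
    for a, b in zip(s, t):
        if a != b:
            return -1 if a > b else 1
    return len(t) - len(s)  # an extension precedes its prefix

def suffix_sigma_reduce(grouped, tau):
    """Aggregate counts of all prefixes of the incoming suffixes lazily,
    using a stack of terms and a parallel stack of counters."""
    terms, counts, out = [], [], {}
    suffixes = sorted(grouped, key=cmp_to_key(reverse_lex)) + [()]  # () flushes
    for s in suffixes:
        freq = len(grouped.get(s, []))
        lcp = 0  # longest common prefix of the stack contents and s
        while lcp < len(terms) and lcp < len(s) and terms[lcp] == s[lcp]:
            lcp += 1
        while len(terms) > lcp:  # pop finished n-grams, emitting frequent ones
            if counts[-1] >= tau:
                out[tuple(terms)] = counts[-1]
            terms.pop()
            top = counts.pop()
            if counts:
                counts[-1] += top  # fold the count into the shorter prefix
        if terms and len(terms) == len(s):
            counts[-1] += freq  # suffix equals current stack contents
        for i in range(lcp, len(s)):  # push the remainder of s
            terms.append(s[i])
            counts.append(freq if i == len(s) - 1 else 0)
    return out

# suffixes received by the reducer responsible for term "b" (see above)
grouped = {("b", "x", "x"): [1], ("b", "x"): [2],
           ("b", "a", "x"): [2, 3], ("b",): [3]}
result = suffix_sigma_reduce(grouped, tau=3)  # {('b',): 5}
```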

V Efficient Implementation

Having described the different methods at a conceptual level, we now provide details on aspects of their implementation, which we found to have a significant impact on performance in practice:

Document Splits. Collection frequencies of individual terms (i.e., unigrams) can be exploited to drastically reduce required work by splitting up every document at infrequent terms that it contains. Thus, assuming that z is an infrequent term given the current value of τ, we can split up a document like c b a z b a c into the two shorter sequences c b a and b a c. Again, this is safe due to the Apriori principle, since no frequent n-gram can contain z. All methods profit from this – for large values of τ in particular.
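A sketch of this splitting step (our naming; the set of frequent terms would come from a preliminary unigram-counting pass):

```python
def split_at_infrequent(d, frequent_terms):
    """Split a document at terms that cannot be part of any frequent n-gram."""
    runs, run = [], []
    for term in d:
        if term in frequent_terms:
            run.append(term)
        elif run:  # infrequent term ends the current run
            runs.append(run)
            run = []
    if run:
        runs.append(run)
    return runs

splits = split_at_infrequent(["c", "b", "a", "z", "b", "a", "c"],
                             frequent_terms={"a", "b", "c"})
# [['c', 'b', 'a'], ['b', 'a', 'c']]
```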

Sequence Encoding. It is inefficient to operate on documents in a textual representation. As a one-time preprocessing, we therefore convert our document collections, so that they are represented as a dictionary, mapping terms to term identifiers, and one integer term-identifier sequence for every document. We assign identifiers to terms in descending order of their collection frequency to optimize compression. From there on, our implementation internally only deals with arrays of integers. Whenever serialized for transmission or storage, these are compactly represented using variable-byte encoding [42]. This also speeds up sorting, since n-grams can now be compared using integer operations as opposed to operations on strings, thus requiring generally fewer machine instructions. Compact sequence encoding benefits all methods – in particular Apriori-Scan with its repeated scans of the document collection.
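Variable-byte encoding can be sketched as follows (one common convention, with the high bit marking the final byte of each integer; the paper does not specify the exact variant, so this is an assumption):

```python
def vbyte_encode(numbers):
    """Encode non-negative integers, 7 payload bits per byte, low bits first;
    the high bit flags the last byte of each integer."""
    out = bytearray()
    for n in numbers:
        while n >= 128:
            out.append(n & 0x7F)
            n >>= 7
        out.append(n | 0x80)
    return bytes(out)

def vbyte_decode(data):
    """Inverse of vbyte_encode."""
    out, n, shift = [], 0, 0
    for b in data:
        n |= (b & 0x7F) << shift
        shift += 7
        if b & 0x80:  # final byte of this integer
            out.append(n)
            n, shift = 0, 0
    return out
```

Small, frequent term identifiers then fit in a single byte, which is why assigning identifiers in descending order of collection frequency improves compression.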

Key-Value Store. For Apriori-Scan and Apriori-Index, reducers potentially buffer a lot of data, namely, the dictionary of frequent -grams or the set of posting lists to be joined. Our implementation keeps this data in main memory as long as possible. Otherwise, it migrates the data into a disk-resident key-value store (Berkeley DB Java Edition [3]). Most main memory is then used for caching, which helps Apriori-Scan in particular, since lookups of frequent -grams typically hit the cache.

Hadoop-Specific Optimizations that we use in our implementation include local aggregation (cf. Mapper #1 in Algorithm 3), Hadoop’s distributed cache facility, raw comparators to avoid deserialization and object instantiation, as well as other best practices (e.g., described in [41]).

How easy to implement are the methods presented in previous sections? While hard to evaluate systematically, we still want to address this question based on our own experience. Naïve is the clear winner here. Implementations of the Apriori-based methods, as explained in Section III, require various tweaks (e.g., the use of a key-value store) to make them work. Suffix-σ does not require any of those and, when Hadoop is used as a MapReduce implementation, can be implemented using only on-board functionality.

VI Extensions

In this section, we describe how Suffix-σ can be extended to consider only maximal/closed n-grams and thus produce a more compact result. Moreover, we explain how it can support aggregations beyond occurrence counting, using n-gram time series, recently made popular by Michel et al. [32], as an example.

VI-A Maximality & Closedness

The number of n-grams that occur at least τ times in the document collection can be huge in practice. To reduce it, we can adopt the notions of maximality and closedness common in frequent pattern mining. Formally, an n-gram v is maximal, if there is no n-gram w such that v ⊑ w, v ≠ w, and cf(w) ≥ τ. Similarly, an n-gram v is closed, if no n-gram w exists such that v ⊑ w, v ≠ w, and cf(w) = cf(v). The sets of maximal or closed n-grams are subsets of all n-grams that occur at least τ times. Omitted n-grams can be reconstructed – for closedness even with their accurate collection frequency.

Suffix-σ can be extended to produce maximal or closed n-grams. Recall that, in its reduce-function, our method processes suffixes in reverse lexicographic order. Let v′ denote the last n-gram emitted. For maximality, we only emit the next n-gram v if it is not a prefix of v′. For closedness, we only emit v if it is not a prefix of v′ or if it has a different collection frequency (i.e., cf(v) ≠ cf(v′)). In our example, the reducer responsible for term a receives

a x b :

and, both for maximality and closedness, emits only the n-gram a x b but none of its prefixes. With this extension, we thus emit only prefix-maximal or prefix-closed n-grams, whose formal definitions are analogous to those of maximality and closedness above, but replace the subsequence relation by the prefix relation. In our example, we still emit x b and b on the reducers responsible for terms x and b, respectively. For maximality, as subsequences of a x b, these n-grams must be omitted. We achieve this by means of an additional post-filtering MapReduce job. As input, the job consumes the output produced by Suffix-σ with the above extensions. In its map-function, n-grams are reversed (e.g., a x b becomes b x a). These reversed n-grams are partitioned based on their first term and sorted in reverse lexicographic order, reusing ideas from Suffix-σ. In the reduce-function, we apply the same filtering as described above to keep only prefix-maximal or prefix-closed reversed n-grams. Before emitting a reversed n-gram, we restore its original order by reversing it. In our example, the reducer responsible for b receives

b x a :
b x :
b :

and, for maximality, only emits a x b. In summary, we obtain maximal or closed n-grams by first determining prefix-maximal or prefix-closed n-grams and, after that, identifying the suffix-maximal or suffix-closed ones among them.
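The prefix-based filtering applied in the reduce-functions above can be sketched as follows. This is a minimal Python sketch, not the actual Java/Hadoop implementation; function and variable names are illustrative, and the input is assumed to be already sorted in reverse lexicographic order, as in the reduce-function of Suffix-σ:

```python
def filter_prefixes(sorted_ngrams, closed=False):
    """Keep only prefix-maximal (or, with closed=True, prefix-closed) n-grams.

    sorted_ngrams: list of (ngram, frequency) pairs, where each ngram is a
    tuple of terms and the list is sorted in reverse lexicographic order.
    """
    result = []
    for ngram, freq in sorted_ngrams:
        if result:
            last, last_freq = result[-1]  # last n-gram emitted
            is_prefix = ngram == last[:len(ngram)]
            # maximality: drop prefixes of the last emitted n-gram;
            # closedness: drop them only if the frequency also matches
            if is_prefix and (not closed or freq == last_freq):
                continue
        result.append((ngram, freq))
    return result
```

For instance, a reducer receiving b x a, b x, and b (in that order) with collection frequencies 2, 2, and 3 would emit only b x a under maximality, but both b x a and b under closedness, since b has a different frequency than b x a.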

VI-B Beyond Occurrence Counting

Our focus so far has been on determining collection frequencies of n-grams, i.e., counting their occurrences in the document collection. One can move beyond occurrence counting and aggregate other information about n-grams, e.g.:

  • build an inverted index that records for every n-gram how often or where it occurs in individual documents;

  • compute statistics based on meta-data of documents (e.g., timestamp or location) that contain an n-gram.

In the following, we concentrate on the second type of aggregation and, as a concrete instance, consider the computation of n-gram time series. Here, the objective is to determine for every n-gram a time series whose observations reveal how often the n-gram occurs in documents published, e.g., in a specific year. Suffix-σ can be extended to produce such n-gram time series as follows: In the map-function, we emit every suffix along with the document identifier and its associated timestamp. In the reduce-function, the counts stack is replaced by a stack of time series, which we aggregate lazily. When popping an element from the stack, instead of adding counts, we add time series observations. In the same manner, we can compute other statistics based on the occurrences of an n-gram in documents and their associated meta-data. While these could also be computed by an extension of Naïve, the benefit of using Suffix-σ is that the required document meta-data is transferred only once per suffix of a document, as opposed to once per contained n-gram.
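The semantics of this extension can be sketched compactly. The sketch below (illustrative Python; names are assumptions, and the real reducer aggregates lazily via a stack of time series rather than materializing every prefix) shows the result Suffix-σ computes when suffixes carry publication years: every prefix of an emitted suffix, up to the maximum length sigma, receives one observation in the suffix's year:

```python
from collections import Counter

def ngram_time_series(suffix_occurrences, sigma):
    """Aggregate yearly observation counts per n-gram.

    suffix_occurrences: iterable of (suffix, year) pairs, where each suffix
    is a tuple of terms emitted by the map-function together with the
    publication year of its document.
    """
    series = {}
    for suffix, year in suffix_occurrences:
        # every prefix of the suffix (up to length sigma) occurs in this year
        for n in range(1, min(sigma, len(suffix)) + 1):
            prefix = suffix[:n]
            series.setdefault(prefix, Counter())[year] += 1
    return series
```

Replacing the year by another piece of document meta-data (e.g., location) yields the other aggregations mentioned above.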

VII Experimental Evaluation

We conducted comprehensive experiments to compare the different methods and understand their relative benefits and trade-offs. Our findings from these experiments are the subject of this section.

VII-A Setup & Implementation

Cluster Setup. All experiments were run on a local cluster consisting of ten Dell R410 server-class computers, each equipped with 64 GB of main memory, two Intel Xeon X5650 6-core CPUs, and four internal 2 TB SAS 7,200 rpm hard disks configured as a bunch-of-disks. Debian GNU/Linux 5.0.9 (Lenny) was used as an operating system. Machines in the cluster are connected via 1 GBit Ethernet. We use Cloudera CDH3u0 as a distribution of Hadoop 0.20.2 running on Oracle Java 1.6.0_26. One of the machines acts as master and runs Hadoop’s namenode and jobtracker; the other nine machines are configured to run up to ten map tasks and ten reduce tasks in parallel. To restrict the number of map/reduce slots, we employ a capacity-constrained scheduler pool in Hadoop. When we state that m map/reduce slots are used, our cluster executes up to m map tasks and m reduce tasks in parallel. Java virtual machines to process tasks are always launched with 4 GB heap space.

Implementation. All methods are implemented in Java (JDK 1.6) applying the optimizations described in Section V to the extent possible and sensible for each of them.

Methods. We compare the methods Naïve, Apriori-Scan, Apriori-Index, and Suffix-σ in our experiments. For Apriori-Index, we set its parameter so that the method directly computes collection frequencies of n-grams having length four or less. We found this to be the best-performing setting in a series of calibration experiments.

Measures. For our experiments, we report the following performance measures:

  • (a) wallclock time as the total time elapsed between launching a method and receiving the final result (possibly involving multiple Hadoop jobs),

  • (b) bytes transferred as the total amount of data transferred between map- and reduce-phase(s) (obtained from Hadoop’s MAP_OUTPUT_BYTES counter),

  • (c) # records as the total number of key-value pairs transferred and sorted between map- and reduce-phase(s) (obtained from Hadoop’s MAP_OUTPUT_RECORDS counter).

For Apriori-Scan and Apriori-Index, measures (b) and (c) are aggregates over all Hadoop jobs launched. All measurements reported are based on single runs and were performed with exclusive access to the Hadoop cluster, i.e., without concurrent activity by other jobs, services, or users.

# documents
# term occurrences
# distinct terms
# sentences
sentence length (mean)
sentence length (stddev)
TABLE I: Dataset characteristics

VII-B Datasets

We use two publicly-available real-world datasets for our experiments, namely:

  • The New York Times Annotated Corpus [7] consisting of more than 1.8 million newspaper articles from the period 1987–2007 (NYT);

  • ClueWeb09-B [6], as a well-defined subset of the ClueWeb09 corpus of web documents, consisting of more than 50 million web documents in the English language that were crawled in 2009 (CW).

These two datasets are extremes: NYT is a well-curated, relatively clean, longitudinal corpus, i.e., documents therein have a clear structure, use proper language with few typos, and cover a long time period. CW is a “World Wild Web” corpus, i.e., documents therein are highly heterogeneous in structure, content, and language.

For NYT, a document consists of the newspaper article’s title and body. To make CW more handleable, we use boilerplate detection, as described by Kohlschütter et al. [25] and implemented in boilerpipe’s [4] default extractor, to identify the core content of documents. On both datasets, we use OpenNLP [2] to detect sentence boundaries in documents. Sentence boundaries act as barriers, i.e., we do not consider n-grams that span across sentences in our experiments. As described in Section V, in a pre-processing step, we convert both datasets into sequences of integer term identifiers. The term dictionary is kept as a single text file; documents are spread over a total of 256 binary files as key-value pairs of a 64-bit document identifier and the content as an integer array. Table I summarizes characteristics of the two datasets.
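The dictionary-based encoding in this pre-processing step can be sketched as follows (an illustrative Python sketch under the assumptions above; the actual implementation writes the dictionary to a text file and the encoded documents to 256 binary files):

```python
def encode_corpus(documents):
    """Replace terms by integer identifiers, building the term dictionary
    on the fly. documents maps a document identifier to its list of terms
    (already split at sentence boundaries upstream)."""
    dictionary = {}  # term -> integer identifier
    encoded = {}     # document identifier -> integer array
    for doc_id, terms in documents.items():
        # setdefault assigns the next free identifier on first occurrence
        encoded[doc_id] = [dictionary.setdefault(t, len(dictionary))
                           for t in terms]
    return dictionary, encoded
```

Working on integer identifiers instead of strings shrinks intermediate key-value pairs and makes term comparisons during sorting cheap.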

VII-C Output Characteristics

Let us first look at the n-gram statistics that all methods (fully or in part) are expected to produce as output. To this end, for both document collections, we determine all n-grams that occur at least five times (i.e., with no bound on the length and a minimum collection frequency of five). We bin n-grams into 2-dimensional buckets of exponential width, i.e., an n-gram of length l with collection frequency f goes into bucket (⌈log₂ l⌉, ⌈log₂ f⌉). Figure 2 reports the number of n-grams per bucket.
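The bucket assignment is straightforward to compute; the sketch below is one plausible realization (the exact rounding convention is an assumption):

```python
import math

def bucket(length, frequency):
    """Map an n-gram of the given length and collection frequency to a
    2-dimensional bucket of exponential width: doubling the length or the
    frequency moves the n-gram to the next bucket along that dimension."""
    return (math.ceil(math.log2(length)), math.ceil(math.log2(frequency)))
```

For example, all n-grams of length 3–4 with collection frequency 5–8 fall into the same bucket.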

Fig. 2: Output characteristics as # of n-grams v with cf(v) ≥ 5 per n-gram length and collection frequency (left: NYT, right: CW)

The figure reveals that the distribution is biased toward short and less frequent n-grams. Consequently, as we lower the minimum collection frequency, all methods have to deal with a drastically increasing number of n-grams. What can also be seen from Figure 2 is that, in both datasets, n-grams exist that are very long, containing a hundred or more terms, and occur more than ten times in the document collection. Examples of long n-grams that we see in the output include ingredient lists of recipes (e.g., …1 tablespoon cooking oil…) and chess openings (e.g., e4 e5 2 nf3…) in NYT; in CW they include web spam (e.g., travel tips san miguel tourism san miguel transport san miguel…) as well as error messages and stack traces from web servers and other software (e.g., …php on line 91 warning…) that also occur within user discussions in forums. For the Apriori-based methods, such long n-grams are unfavorable, since they require many iterations to identify.

VII-D Use Cases

As a first experiment, we investigate how the methods perform for parameter settings chosen to reflect two typical use cases, namely, training a language model and text analytics. For the first use case, we choose relatively low minimum collection frequencies on NYT and on CW, in combination with a small maximum length. The n-gram statistics made public by Google [5], as a comparison, were computed with parameter settings in a similar regime on parts of the Web. For the second use case, we choose a relatively high maximum sequence length, combined with higher minimum collection frequencies on NYT and on CW. The idea in the analytics use case is to identify recurring fragments of text (e.g., quotations or idioms) to be analyzed further (e.g., their spread over time).

Fig. 3: Wallclock times in minutes for (a) training a language model and (b) text analytics as two typical use cases

Figure 3 reports wallclock-time measurements obtained for these two use cases with 64 map/reduce slots. For our language-model use case, Suffix-σ outperforms Apriori-Scan, the best competitor, by a factor of 3x on both datasets. For our analytics use case, we see a 12x improvement over Apriori-Index, the best competitor on NYT; on CW Suffix-σ still outperforms the next best Apriori-Scan by a factor of 1.5x. Measurements for Naïve on CW are missing, since the method did not complete in reasonable time.

VII-E Varying Minimum Collection Frequency

Our second experiment studies how the methods behave as we vary the minimum collection frequency. We fix the maximum length and apply all methods to the entire datasets. Measurements are performed using 64 map/reduce slots and reported in Figure 4.

Fig. 4: Varying the minimum collection frequency: (a)/(d) wallclock times, (b)/(e) bytes transferred, (c)/(f) # of records

We observe that, for high minimum collection frequencies, Suffix-σ performs as well as the best competitor, Apriori-Scan. For low minimum collection frequencies, it significantly outperforms the other methods. Both Apriori-based methods show steep increases in wallclock time as we lower the minimum collection frequency – especially when we reach the lowest value tested on each document collection. This is natural, because for both methods the work that has to be done in an iteration depends on the number of n-grams output in the previous iteration, which have to be joined or kept in a dictionary, as described in Section III. As observed in Figure 2 above, the number of n-grams grows drastically as we decrease the minimum collection frequency. When looking at the number of bytes and the number of records transferred, we see analogous behavior. For low minimum collection frequencies, Suffix-σ transfers significantly less data than its competitors.

VII-F Varying Maximum Length

In this third experiment, we study the methods’ behavior as we vary the maximum length. The minimum collection frequency is set to a different value for each of NYT and CW to reflect their different scale. Measurements are performed on the entire datasets with 64 map/reduce slots and reported in Figure 5. Measurements for large maximum lengths are missing for Naïve on CW, since the method did not finish within reasonable time for those parameter settings.

Fig. 5: Varying the maximum length: (a)/(d) wallclock times, (b)/(e) bytes transferred, (c)/(f) # of records

Suffix-σ is on par with the best-performing competitor on CW when only short n-grams are considered. For larger maximum lengths, it outperforms the next best Apriori-Scan by a factor of 1.5x. On NYT, Suffix-σ consistently outperforms all competitors by a wide margin. When we increase the maximum length, the Apriori-based methods need to run more Hadoop jobs, so that their wallclock times keep increasing. For Naïve and Suffix-σ, on the other hand, we observe a saturation of wallclock times. This is expected, since these methods have to do additional work only for input sequences that exceed the maximum length and consist of sufficiently frequent terms. When looking at the number of bytes and the number of records transferred, we observe a saturation for Naïve for the reason mentioned above. For Suffix-σ only the number of bytes saturates; the number of records transferred is constant, since it depends only on the minimum collection frequency. Further, we see that Suffix-σ consistently transfers the fewest records.

VII-G Scaling the Datasets

Next, we investigate how the methods react to changes in the scale of the datasets. To this end, both from NYT and CW, we extract smaller datasets that contain random subsets of the documents of varying size. Again, the minimum collection frequency is set to a different value for each of NYT and CW; the maximum length is fixed. Wallclock times are measured using 64 map/reduce slots.

Fig. 6: Scaling the datasets (wallclock times per dataset)

From Figure 6, we observe that Naïve handles additional data equally well on both datasets. The other methods’ scalability is comparable to that of Naïve on CW, as can be seen from their almost-identical slopes. On NYT, in contrast, Apriori-Scan, Apriori-Index, and Suffix- cope slightly better with additional data than Naïve. This is due to the different characteristics of the two datasets.

VII-H Scaling Computational Resources

Our final experiment explores how the methods behave as we scale computational resources. Again, we use the dataset-specific minimum collection frequencies from before. All methods are applied to samples of documents from the collections. We vary the number of map/reduce slots as 16, 32, 48, and 64. The number of cluster nodes remains constant in this experiment, since we cannot add/remove machines to/from the cluster due to organizational restrictions; we thus only vary the amount of parallel work every machine can do.

Fig. 7: Scaling computational resources (wallclock times per dataset)

We observe from Figure 7 that all methods show comparable behavior as we make additional computational resources available; put differently, all methods make equally effective use of them. What can also be observed across all methods is that the gains of adding more computational resources are diminishing – because mappers and reducers compete for shared devices such as hard disks and network interfaces. This phenomenon is more pronounced on NYT than on CW, since all methods generally take less time on the smaller dataset, so that competition for shared devices is fiercer and has no chance to level out over time.


What we see in our experiments is that Suffix-σ outperforms its competitors when long and/or less frequent n-grams are considered. Even otherwise, when the focus is on short and/or very frequent n-grams, Suffix-σ never performs significantly worse than the other methods. It is hence robust and can handle a wide variety of parameter choices. To substantiate this, consider that Suffix-σ could compute statistics about arbitrary-length n-grams that occur at least five times (i.e., with no bound on the length and a minimum collection frequency of five), as reported in Figure 2, in less than six minutes on NYT and six hours on CW.

VIII Related Work

We now discuss the connection between this work and existing literature, which can broadly be grouped into the following areas.

Frequent Pattern Mining goes back to the seminal work by Agrawal et al. [8] on identifying frequent itemsets in customer transactions. While the Apriori algorithm described therein follows a candidate generation & pruning approach, Han et al. [20] have advocated pattern growth as an alternative approach. To identify frequent sequences, which is a problem closer to our work, the same kinds of approaches can be used. Agrawal and Srikant [10, 38] describe candidate generation & pruning approaches; Pei et al. [37] propose a pattern-growth approach. SPADE by Zaki [44] also generates and prunes candidates but operates on an index structure as opposed to the original data. Parallel methods for frequent pattern mining have been devised both for distributed-memory [19] and shared-memory machines [36, 45]. Little work exists that assumes MapReduce as a model of computation. Li et al. [26] describe a pattern-growth approach to mine frequent itemsets in MapReduce. Huang et al. [22] sketch an approach to maintain frequent sequences while sequences in the database evolve. Their approach is not applicable in our setting, since it expects input sequences to be aligned (e.g., based on time) and only supports document frequency. For more detailed discussions, we refer to Ceglar and Roddick [14] for frequent itemset mining, Mabroukeh and Ezeife [30] for frequent sequence mining, and Han et al. [21] for frequent pattern mining in general.

Natural Language Processing & Information Retrieval. Given their role in NLP, multiple efforts [11, 15, 18, 23, 39] have looked into n-gram statistics computation. While these approaches typically consider document collections of modest size, recently Lin et al. [27] and Nguyen et al. [34] targeted web-scale data. Among the aforementioned work, that of Huston et al. [23] is closest to ours, also focusing on less frequent n-grams and using a cluster of machines. However, they only consider n-grams consisting of up to eleven words and do not provide details on how their methods can be adapted to MapReduce. Yamamoto and Church [43] augment suffix arrays, so that the collection frequency of substrings in a document collection can be determined efficiently. Bernstein and Zobel [12] identify long n-grams as a means to spot co-derivative documents. Brants et al. [13] and Wang et al. [40] describe the n-gram statistics made available by Google and Microsoft, respectively. Zhai [46] gives details on the use of n-gram statistics in language models. Michel et al. [32] demonstrated recently that n-gram time series are powerful tools to understand the evolution of culture and language.

MapReduce Algorithms. Several efforts have looked into how specific problems can be solved using MapReduce, including all-pairs document similarity [28], processing relational joins [35], coverage problems [16], and content matching [33]. However, no existing work has specifically addressed computing n-gram statistics in MapReduce.

IX Conclusions

In this work, we have presented Suffix-σ, a novel method to compute n-gram statistics using MapReduce as a platform for distributed data processing. Our evaluation on two real-world datasets demonstrated that Suffix-σ significantly outperforms MapReduce adaptations of Apriori-based methods, in particular when long and/or less frequent n-grams are considered. Otherwise, Suffix-σ is robust, performing at least on par with the best competitor. We also argued that our method is easier to implement than its competitors, having been designed with MapReduce in mind. Finally, we established our method’s versatility by showing that it can be extended to produce maximal/closed n-grams and perform aggregations beyond occurrence counting.


  • [1] Apache Hadoop
  • [2] Apache OpenNLP
  • [3] Berkeley DB Java Edition
  • [4] Boilerpipe
  • [5] Google n-Gram Corpus
  • [6] The ClueWeb09 Dataset
  • [7] The New York Times Annotated Corpus
  • [8] R. Agrawal et al., “Mining association rules between sets of items in large databases,” SIGMOD 1993.
  • [9] R. Agrawal and R. Srikant, “Fast algorithms for mining association rules in large databases,” VLDB 1994.
  • [10] R. Agrawal and R. Srikant, “Mining sequential patterns,” ICDE 1995.
  • [11] S. Banerjee and T. Pedersen, “The design, implementation, and use of the ngram statistics package,” CICLing 2003.
  • [12] Y. Bernstein and J. Zobel, “Accurate discovery of co-derivative documents via duplicate text detection,” Inf. Syst., 31(7):595–609, 2006.
  • [13] T. Brants et al., “Large language models in machine translation,” EMNLP-CoNLL 2007.
  • [14] A. Ceglar and J. F. Roddick, “Association mining,” ACM Comput. Surv. 38(2), 2006.
  • [15] H. Ceylan and R. Mihalcea, “An efficient indexer for large n-gram corpora,” ACL 2011.
  • [16] F. Chierichetti et al., “Max-cover in map-reduce,” WWW 2010.
  • [17] J. Dean and S. Ghemawat, “Mapreduce: Simplified data processing on large clusters,” OSDI 2004.
  • [18] M. Federico et al., “Irstlm: an open source toolkit for handling large scale language models,” INTERSPEECH 2008.
  • [19] V. Guralnik and G. Karypis, “Parallel tree-projection-based sequence mining algorithms,” Parallel Computing 30(4):443–472, 2004.
  • [20] J. Han et al., “Mining frequent patterns without candidate generation: A frequent-pattern tree approach,” DMKD 8(1):53–87, 2004.
  • [21] J. Han et al., “Frequent pattern mining: current status and future directions,” DMKD 15(1):55-86, 2007.
  • [22] J.-W. Huang et al., “Dpsp: Distributed progressive sequential pattern mining on the cloud,” PAKDD 2010.
  • [23] S. Huston, A. Moffat, and W. B. Croft, “Efficient indexing of repeated n-grams,” WSDM 2011.
  • [24] S. Katz, “Estimation of probabilities from sparse data for the language model component of a speech recognizer,” ASSP 35(3):400–401, 1987.
  • [25] C. Kohlschütter et al., “Boilerplate detection using shallow text features,” WSDM 2010.
  • [26] H. Li et al., “Pfp: parallel fp-growth for query recommendation,” RecSys 2008.
  • [27] D. Lin et al., “New tools for web-scale n-grams,” LREC 2010.
  • [28] J. Lin, “Brute force and indexed approaches to pairwise document similarity comparisons with mapreduce,” SIGIR 2009.
  • [29] J. Lin and C. Dyer, “Data-Intensive Text Processing with MapReduce”, Morgan & Claypool, 2010.
  • [30] N. R. Mabroukeh and C. I. Ezeife, “A taxonomy of sequential pattern mining algorithms,” ACM Comput. Surv. 43(1), 2010.
  • [31] U. Manber and E. W. Myers, “Suffix arrays: A new method for on-line string searches,” SIAM J. Comput. 22(5):935–948, 1993.
  • [32] J.-B. Michel et al., “Quantitative Analysis of Culture Using Millions of Digitized Books,” Science 2010
  • [33] G. D. F. Morales et al., “Social content matching in mapreduce,” PVLDB 4(7):460–469, 2011.
  • [34] P. Nguyen et al., “Msrlm: a scalable language modeling toolkit,” Microsoft Research, MSR-TR-2007-144, 2007.
  • [35] A. Okcan and M. Riedewald, “Processing theta-joins using mapreduce,” SIGMOD 2011.
  • [36] S. Parthasarathy et al., “Parallel data mining for association rules on shared-memory systems,” Knowl. Inf. Syst. 3(1):1–29, 2001.
  • [37] J. Pei et al., “Mining sequential patterns by pattern-growth: The prefixspan approach,” TKDE 16(11):1424–1440, 2004.
  • [38] R. Srikant and R. Agrawal, “Mining sequential patterns: Generalizations and performance improvements,” EDBT 1996.
  • [39] A. Stolcke, “Srilm - an extensible language modeling toolkit,” ICSLP 2002.
  • [40] K. Wang et al., “An Overview of Microsoft Web N-gram Corpus and Applications,” NAACL-HLT 2010.
  • [41] T. White, “Hadoop: The Definitive Guide”, O’Reilly Media Inc., 2010.
  • [42] I. H. Witten et al., “Managing Gigabytes: Compressing and Indexing Documents and Images”, Morgan Kaufmann, 1999.
  • [43] M. Yamamoto and K. W. Church, “Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus,” Comput. Linguist. 27(1):1–30, 2001.
  • [44] M. J. Zaki, “Spade: An efficient algorithm for mining frequent sequences,” Machine Learning 42(1/2):31–60, 2001.
  • [45] M. J. Zaki, “Parallel sequence mining on shared-memory machines,” J. Parallel Distrib. Comput., vol. 61, no. 3, pp. 401–426, 2001.
  • [46] C. Zhai, “Statistical language models for information retrieval a critical review,” Found. Trends Inf. Retr. 2(1):137–213, 2008.