# Sparse Suffix Tree Construction with Small Space

###### Abstract

We consider the problem of constructing a sparse suffix tree (or suffix array) for a set of $b$ suffixes of a given text $S$ of size $n$, using only $O(b)$ words of space during construction. Breaking the naive bound of $O(nb)$ time for this problem has occupied many algorithmic researchers since a different structure, the (evenly spaced) sparse suffix tree, was introduced by Kärkkäinen and Ukkonen in 1996. While in the evenly spaced sparse suffix tree the suffixes considered must be evenly spaced in $S$, here there is no constraint on the locations of the suffixes.

We show that the sparse suffix tree can be constructed in $O(n \log n \log b)$ time. To achieve this we develop a technique, which may be of independent interest, that allows us to efficiently answer batches of longest common prefix (LCP) queries on suffixes of $S$, using only $O(b)$ words of space. We expect that this technique will prove useful in many other applications in which space usage is a concern. Furthermore, additional tradeoffs between the space usage and the construction time are given.

## 1 Introduction

In the sparse suffix tree problem (all of the results within apply to the sparse suffix array as well) we are given a string $S$ of size $n$, and a list of $b$ interesting indices of $S$. The goal is to construct the suffix tree for the suffixes starting at those indices only, while using little space during the construction process, ideally only $O(b)$ words. Such a construction can be helpful in situations where an extremely large string is kept in read-only memory and we are interested in indexing only a small set of its suffixes, or where an index of the entire text cannot fit in the available memory. Natural examples are indexing a genomic sequence where only some of the locations are of interest when searching for a given gene, or indexing a book where we are only interested in occurrences of a pattern that begin at the start of a paragraph, sentence, or word.

A naive algorithm with running time $O(nb)$ is easily obtained by inserting each of the $b$ suffixes one at a time into the suffix tree. However, breaking this naive bound has been a problem that has baffled many algorithmic researchers since a similarly flavored problem was introduced by Kärkkäinen and Ukkonen in [KU96]. The authors there introduced the sparse suffix tree, and showed an efficient construction for the evenly spaced sparse suffix tree, which is a suffix tree for every $k$th suffix of the text. In addition, they discussed how to search for a pattern in a sparse suffix tree, and those ideas were later improved by Kolpakov et al. in [KKS11]. However, the question of constructing a sparse suffix tree with no restriction on the sparseness while breaking the naive bound remained open. It should be noted that an efficient solution for a suffix tree on words was already introduced by Andersson et al. in [ALS99], and later extended to suffix arrays on words by Ferragina and Fischer in [FF07], but their model is more restrictive, as it assumes that there is a delimiter after each word. In the sparse suffix tree there is no such assumption, and hence it is more general.

#### Results

We are the first to break the naive bound for general sparse suffix trees, by showing how to construct a sparse suffix tree in $O(n \log n \log b)$ time, using only $O(b)$ words of space. To achieve this, we develop a novel technique for performing efficient batched longest common prefix (LCP) queries, using little space. In particular, we show how to answer a batch of $q$ LCP queries using only $O(q)$ words of space, in $O((n+q)\log n)$ time. This technique may be of independent interest, and we expect it to be helpful in other applications in which space usage is a factor. In addition, we show some tradeoffs between construction time and space usage, which are based on time-space tradeoffs of the batched LCP queries. In particular, we show that using $O(\alpha b)$ space the construction time is reduced to $O((n+\alpha b)\log_\alpha n \log b)$. So, for example, if $\alpha = b^{\varepsilon}$ for a small constant $\varepsilon > 0$, then the cost for constructing the sparse suffix tree becomes $O(\frac{1}{\varepsilon}(n+b^{1+\varepsilon})\log n)$, using $O(b^{1+\varepsilon})$ words of space.

## 2 Preliminaries

For a string $S$ of size $n$, denote by $S_i$ the suffix $S[i..n]$ of $S$. The LCP of two suffixes $S_i$ and $S_j$ is denoted by $LCP(S_i, S_j)$, but we will slightly abuse notation and write $LCP(i,j)$. We denote by $S[i..j]$ the substring $S[i]S[i+1]\cdots S[j]$.

We assume the reader is familiar with the suffix tree data structure. For any node $v$ in a (sparse) suffix tree, let $d(v)$ denote the length of the substring corresponding to the path from the root of the suffix tree to $v$.

#### Fingerprinting

We make use of the fingerprinting techniques of Karp and Rabin from [KR87]. We assume that $S$ is over the integer alphabet $\Sigma = \{1, \ldots, \sigma\}$ with $\sigma \le n^{O(1)}$, as this will be needed for the fingerprinting. If this is not the case, then we can use perfect hashing for the purpose of the fingerprinting (for example, an $O(1)$-wise independent hash function into the integers bounded by $n^{c}$ for some constant $c$ will suffice), which works with high probability. This suffices, as for fingerprinting purposes we only care whether strings are equal, and not about their lexicographical order.

Let $p$ be a prime between $n^{c+4}$ and $2n^{c+4}$ for some constant $c$. A fingerprint for a substring $S[i..j]$, denoted by $FP[i,j]$, is the number $\sum_{k=i}^{j} S[k]\cdot\sigma^{j-k} \bmod p$. Two equal substrings will always have the same fingerprint; however, the converse is not true. Luckily, it can be shown that the probability of any two different substrings having the same fingerprint is at most $n^{-c}$ by [KR87]. The exponent $c$ in this polynomially small failure probability can be amplified by a standard constant number of repetitions.

We utilize two important properties of fingerprints. The first is that $FP[1,\ell]$ can be computed from $FP[1,\ell-1]$ in constant time, via the formula $FP[1,\ell] = FP[1,\ell-1]\cdot\sigma + S[\ell] \bmod p$. The second is that the fingerprint $FP[\ell+1,r]$ can be computed in constant time from $FP[1,\ell]$ and $FP[1,r]$, for $\ell < r$, via the formula $FP[\ell+1,r] = FP[1,r] - FP[1,\ell]\cdot\sigma^{r-\ell} \bmod p$. Notice, however, that in order to perform this computation we must have $\sigma^{r-\ell} \bmod p$ stored, as computing it on the fly may be costly.
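The two properties can be sketched as follows. This is a minimal illustration, not the paper's code: indices are 0-based, and the modulus `P` and alphabet size `SIGMA` are stand-ins for the prime $p$ and $\sigma$ above.

```python
SIGMA = 256        # stand-in alphabet size (byte strings)
P = 2**61 - 1      # a large prime, standing in for the prime p above

def prefix_fps(S):
    """fp[l] is the fingerprint of the prefix S[0:l]; each fp[l] is
    computed from fp[l-1] in constant time (first property)."""
    fp = [0] * (len(S) + 1)
    for l in range(1, len(S) + 1):
        fp[l] = (fp[l - 1] * SIGMA + ord(S[l - 1])) % P
    return fp

def substring_fp(fp, i, j, pw=None):
    """Fingerprint of S[i:j] from two prefix fingerprints (second property);
    pw should be SIGMA^(j-i) mod P, stored by the caller (recomputed here
    as a fallback, which is the costly on-the-fly case the text warns about)."""
    if pw is None:
        pw = pow(SIGMA, j - i, P)
    return (fp[j] - fp[i] * pw) % P
```

With these two helpers, any substring fingerprint is available in constant time after a single left-to-right scan, which is exactly how the batched LCP rounds below use them.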

Our algorithm uses fingerprinting, and is therefore correct with high probability. Since the running time is polynomial in $n$, it is possible to guarantee that the algorithm works with probability at least $1 - n^{-c}$ for any constant $c$, by repeating the fingerprinting a sufficient (but still constant) number of times and applying the union bound.

## 3 Batch LCP Queries

### 3.1 The Algorithm

Given a string $S$ of size $n$ and a list of $q$ pairs of indices $(i,j)$, we wish to compute $LCP(i,j)$ for all of the pairs. To do this we perform $\log n$ rounds of computation (assume for simplicity that $n$ is a power of 2), where at the $k$th round the input is a set of $q$ pairs denoted by $P_k$, where we are guaranteed that for any $(i,j)\in P_k$, $LCP(i,j)\le 2^{\log n-k+1}$. The goal of the $k$th round is to decide for any $(i,j)\in P_k$ whether $LCP(i,j)\le 2^{\log n-k}$ or not. In addition, the $k$th round will prepare $P_{k+1}$, which is the input for the $(k+1)$th round. To begin the execution of the procedure we set $P_1$ to be the input list of pairs, as we are always guaranteed that for any pair $(i,j)$, $LCP(i,j)\le n = 2^{\log n}$. We will first provide a description of what happens during each of the $\log n$ rounds, and afterwards explain how the algorithm uses $P_{\log n+1}$ to derive $LCP(i,j)$ for all pairs in $P_1$.

#### A Single Round

The $k$th round, for $1\le k\le \log n$, is executed as follows. We begin by constructing the set $L = \bigcup_{(i,j)\in P_k}\{i,\, j,\, i+2^{\log n-k},\, j+2^{\log n-k}\}$ of size $4q$, and construct a perfect hash table for the values in $L$, using an $O(1)$-wise independent hash function into a world of size $q^{c}$ for some constant $c$ (which with high probability guarantees that there are no collisions). Notice that if two elements in $L$ have the same value, then we store them in a list at their hashed value. In addition, for every value in $L$ we store which index created it, so, for example, for $i$ and $i+2^{\log n-k}$ we remember that they were created from the pair $(i,j)$.

Next, we scan $S$ from $S[1]$ till $S[n]$. When we reach $S[\ell]$ we compute $FP[1,\ell]$ in constant time from $FP[1,\ell-1]$. In addition, if $\ell\in L$ then we store $FP[1,\ell]$ together with $\ell$ in the hash table. Once the scan of $S$ is completed, for every $(i,j)\in P_k$ we compute $FP[i,i+2^{\log n-k}]$ in constant time, as we stored $FP[1,i]$ and $FP[1,i+2^{\log n-k}]$. Similarly we compute $FP[j,j+2^{\log n-k}]$. Notice that to do this we need to compute $\sigma^{2^{\log n-k}} \bmod p$, which takes $O(\log n)$ time by repeated squaring; this can be easily afforded within our bounds, as one such computation suffices for all of the pairs.

If $FP[i,i+2^{\log n-k}] \ne FP[j,j+2^{\log n-k}]$ then it must be that $LCP(i,j) < 2^{\log n-k}$, and so we add $(i,j)$ to $P_{k+1}$. Otherwise, with high probability $LCP(i,j)\ge 2^{\log n-k}$, and so we add $(i+2^{\log n-k},\, j+2^{\log n-k})$ to $P_{k+1}$. Notice there is a natural bijection between pairs in $P_k$ and pairs in $P_{k+1}$, following from the method of constructing the pairs for the next round. For each pair in $P_{k+1}$ we will remember which pair in $P_1$ originated it, which can be easily transferred when $P_{k+1}$ is constructed from $P_k$.

#### LCP on Small Strings

After the $\log n$ rounds have taken place, we know that for every $(i,j)\in P_{\log n+1}$, $LCP(i,j)\le 1$. For each such pair, we spend constant time in order to exactly compute $LCP(i,j)$, by comparing $S[i]$ and $S[j]$. Notice that this is performed for $q$ pairs, so the total cost is $O(q)$ for this last phase. We then construct $LCP(i,j)$ for every pair in $P_1$. For each $(i,j)\in P_1$ denote by $(i^*,j^*)$ the pair in $P_{\log n+1}$ which originated from $(i,j)$. We claim that for any $(i,j)\in P_1$, $LCP(i,j) = i^* - i + LCP(i^*,j^*)$.
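The rounds and the final phase can be sketched together as follows. This is an illustrative sketch, not the paper's code: indices are 0-based rather than 1-based, the prefix fingerprints are recomputed with one scan per round and stored only at the $O(q)$ needed positions, and `SIGMA`/`P` are stand-in fingerprint parameters.

```python
def batched_lcp(S, pairs):
    """Compute LCP(i, j) for each 0-based pair (i, j) of suffix starts,
    using one fingerprint scan per round (Monte Carlo, as in the text)."""
    SIGMA, P = 256, 2**61 - 1
    n = len(S)
    span = 1
    while span < n:                          # smallest power of two >= n
        span *= 2
    cur = [(i, j, 0) for (i, j) in pairs]    # (shifted i, shifted j, shift so far)
    while span > 1:
        span //= 2
        need = set()
        for (i, j, _) in cur:
            need.update((i, j, i + span, j + span))
        # one scan of S: keep prefix fingerprints only at the O(q) needed points
        fp_at, h = {0: 0}, 0
        for l in range(1, n + 1):
            h = (h * SIGMA + ord(S[l - 1])) % P
            if l in need:
                fp_at[l] = h
        pw = pow(SIGMA, span, P)             # sigma^span mod p, once per round

        def sub_fp(a):                       # fingerprint of S[a : a+span]
            if a + span > n:
                return None                  # runs off the end of S: forced mismatch
            return (fp_at[a + span] - fp_at[a] * pw) % P

        nxt = []
        for (i, j, shift) in cur:
            fi, fj = sub_fp(i), sub_fp(j)
            if fi is not None and fi == fj:
                nxt.append((i + span, j + span, shift + span))  # LCP >= span (w.h.p.)
            else:
                nxt.append((i, j, shift))                        # LCP < span
        cur = nxt
    # final phase: the remaining LCP is at most 1, so compare one character
    return [shift + (1 if i < n and j < n and S[i] == S[j] else 0)
            for (i, j, shift) in cur]
```

The invariant maintained by the loop is exactly that of the text: before a round with step `span`, each remaining LCP is at most `2 * span`, and after it at most `span`.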

### 3.2 Runtime and Correctness

Each round takes $O(n+q)$ time, and the number of rounds is $\log n$, for a total of $O((n+q)\log n)$ for all of the rounds. In addition, the work executed for computing the exact LCPs from $P_{\log n+1}$ is an additional $O(q)$.

The following lemma on LCPs will be helpful in proving the correctness of the batched LCP query.

###### Lemma 3.1.

For any $1\le i,j\le n$ and any $0\le m\le LCP(i,j)$, it holds that $LCP(i,j) = m + LCP(i+m, j+m)$.

###### Proof.

This follows directly from the definition of the LCP: the first $m$ characters of $S_i$ and $S_j$ are equal, and the remainders of the two suffixes are exactly $S_{i+m}$ and $S_{j+m}$. ∎

We now proceed to prove that for any $(i,j)\in P_1$, $LCP(i,j) = i^* - i + LCP(i^*,j^*)$, where $(i^*,j^*)$ is the pair in $P_{\log n+1}$ originated by $(i,j)$. Lemma 3.2 shows that the algorithm behaves as expected during the $\log n$ rounds, and Lemma 3.3 proves that the work done in the final phase suffices for computing the LCPs.

###### Lemma 3.2.

At round $k$, for any $(i_k, j_k)\in P_k$, $LCP(i_k, j_k)\le 2^{\log n-k+1}$, assuming the fingerprints do not give a false positive.

###### Proof.

The proof is by induction on $k$. For the base, $k=1$ and so $2^{\log n-k+1} = 2^{\log n} = n$, meaning that the claim is $LCP(i_1,j_1)\le n$, which is always true. For the step, we assume correctness for $k$ and we prove for $k+1$ as follows. By the induction hypothesis, for any $(i_k,j_k)\in P_k$, $LCP(i_k,j_k)\le 2^{\log n-k+1}$. Let $(i_{k+1},j_{k+1})$ be the pair in $P_{k+1}$ corresponding to $(i_k,j_k)$ in $P_k$. If $FP[i_k, i_k+2^{\log n-k}] \ne FP[j_k, j_k+2^{\log n-k}]$ then $(i_{k+1},j_{k+1}) = (i_k,j_k)$ and $LCP(i_k,j_k) < 2^{\log n-k}$. Therefore,

$LCP(i_{k+1},j_{k+1}) = LCP(i_k,j_k) < 2^{\log n-k} = 2^{\log n-(k+1)+1}.$

If $FP[i_k, i_k+2^{\log n-k}] = FP[j_k, j_k+2^{\log n-k}]$ then $(i_{k+1},j_{k+1}) = (i_k+2^{\log n-k},\, j_k+2^{\log n-k})$, and being as we assume that the fingerprints do not produce false positives, $LCP(i_k,j_k)\ge 2^{\log n-k}$. Therefore,

$LCP(i_{k+1},j_{k+1}) = LCP(i_k+2^{\log n-k},\, j_k+2^{\log n-k}) = LCP(i_k,j_k) - 2^{\log n-k} \le 2^{\log n-k+1} - 2^{\log n-k} = 2^{\log n-(k+1)+1},$

where the second equality holds by Lemma 3.1 applied with $m = 2^{\log n-k} \le LCP(i_k,j_k)$, and the inequality holds as $LCP(i_k,j_k)\le 2^{\log n-k+1}$ by the induction hypothesis. ∎

###### Lemma 3.3.

For any $(i,j)\in P_1$, $LCP(i,j) = i^* - i + LCP(i^*,j^*)$, where $(i^*,j^*)$ denotes the pair in $P_{\log n+1}$ originating from $(i,j)$.

###### Proof.

Using Lemma 3.2 with $k = \log n+1$ we have that for any $(i^*,j^*)\in P_{\log n+1}$, $LCP(i^*,j^*)\le 2^{\log n-(\log n+1)+1} = 1$. Being that both indices of a pair are always shifted by the same amount, it must be that $i^* - i = j^* - j$. Notice that $i^* - i \le LCP(i,j)$, as the indices are only shifted by $2^{\log n-k}$ at round $k$ when $LCP$ of the current pair is at least $2^{\log n-k}$. Therefore, applying Lemma 3.1 with $m = i^* - i$ gives $LCP(i,j) = i^* - i + LCP(i^*,j^*)$, as required. ∎

Notice that the space used in each round is the set of pairs $P_k$ and the hash table for $L$, both of which require only $O(q)$ words of space. Thus, we have obtained the following.

###### Theorem 3.4.

It is possible to compute the LCPs of $q$ pairs of suffixes of a string of size $n$ in $O((n+q)\log n)$ time, using $O(q)$ words of space.

We discuss several other time/space tradeoffs in Section 5.

## 4 Constructing the Sparse Suffix Tree

The procedure for constructing the sparse suffix tree using only $O(b)$ words of space is split into two stages. In the first stage, we lexicographically sort the $b$ suffixes. In the second stage, we compute the LCP of every two consecutive suffixes in the ordered list, and use those LCPs to simulate a DFS traversal of the sparse suffix tree, constructing it as we go along.

### 4.1 Stage 1: Suffix Sorting

We can use batched LCP queries in order to compare pairs of suffixes: once the LCP of two suffixes is known, deciding which of the two is lexicographically smaller takes constant time, by examining the first pair of characters in which said suffixes differ. So we are interested in performing roughly $\log b$ batches of $b$ comparisons each in order to sort the suffixes, where each batch of comparisons is performed via one batched LCP query. One way to do this is to simulate a sorting network of depth $\log b$ on the $b$ suffixes [AKS83]. Unfortunately, such known networks have very large constants hidden in them, and are generally considered impractical [Pat90]. There are some practical networks of depth $\log^2 b$, such as [Bat68]; however, we wish to do better.

What we choose to do is simulate the quicksort algorithm, by each time picking a random suffix, called the pivot, and lexicographically comparing all of the other suffixes to the pivot. Once a partition is made into the set of suffixes which are lexicographically smaller than the pivot and the set of suffixes which are lexicographically larger than the pivot, we recursively sort each set in the partition, with the following modification: each level of the recursion tree is performed concurrently, using one batched LCP query for the entire level. The number of comparisons performed in each level is always bounded by $b$, so we may use Theorem 3.4 with $q = b$. Furthermore, with high probability, the number of levels in the randomized quicksort is $O(\log b)$. Thus the total amount of time spent is, with high probability, $O((n+b)\log n\log b) = O(n\log n\log b)$. Notice that from a theoretical point of view, it is possible to obtain a deterministic runtime of the same magnitude using sorting networks.
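The level-by-level simulation can be sketched as follows. This is an illustration rather than the paper's code: `lcp_batch` stands for any routine answering a batch of LCP queries (for example, one implementing Theorem 3.4), and indices are 0-based.

```python
import random

def sparse_suffix_sort(S, indices, lcp_batch):
    """Lexicographically sort suffix start positions, simulating quicksort
    level by level with one batched-LCP call per recursion level.
    lcp_batch(S, pairs) is assumed to return [LCP(i, j) for (i, j) in pairs]."""
    n = len(S)
    segments = [list(indices)]              # ordered list of still-unsorted segments
    while any(len(seg) > 1 for seg in segments):
        # one random pivot per segment; gather all comparisons of this level
        pivots = [random.choice(seg) if len(seg) > 1 else None for seg in segments]
        pairs = []
        for s, seg in enumerate(segments):
            if pivots[s] is not None:
                pairs.extend((i, pivots[s]) for i in seg if i != pivots[s])
        lcps = dict(zip(pairs, lcp_batch(S, pairs)))   # one batched query per level
        new_segments = []
        for s, seg in enumerate(segments):
            piv = pivots[s]
            if piv is None:
                new_segments.append(seg)
                continue
            smaller, larger = [], []
            for i in seg:
                if i == piv:
                    continue
                l = lcps[(i, piv)]
                # after the common prefix, compare one character ("" = end of suffix)
                a = S[i + l] if i + l < n else ""
                b = S[piv + l] if piv + l < n else ""
                (smaller if a < b else larger).append(i)
            new_segments.extend([smaller, [piv], larger])
        segments = [seg for seg in new_segments if seg]
    return [seg[0] for seg in segments]
```

Note that the output is independent of the random pivot choices; the randomness only affects how many levels (and hence batched queries) are needed.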

Notice that once the suffixes have been sorted, we have in fact computed the sparse suffix array for the $b$ suffixes. Hence we have obtained the following.

###### Theorem 4.1.

There exists a randomized algorithm that, with high probability, correctly constructs the sparse suffix array for a string of size $n$ and a set of any $b$ indices in $O(n\log n\log b)$ time, using $O(b)$ words of space.

### 4.2 Stage 2: Traversing the Sparse Suffix Tree

Let $S_{i_1},\ldots,S_{i_b}$ be the lexicographically ordered list of suffixes for which we wish to construct the sparse suffix tree. Then we begin by computing $LCP(i_\ell, i_{\ell+1})$ for all $1\le\ell<b$. This takes $O((n+b)\log n)$ time using Theorem 3.4. Now we wish to simulate a DFS traversal of the sparse suffix tree in order to construct it. This is done as follows.

The algorithm begins by creating a node $r$ which will be the root of the sparse suffix tree, with $d(r)=0$. Denote by $U_\ell$ the set of the first $\ell$ suffixes in the ordered list. We will iteratively construct the sparse suffix tree for $U_\ell$, denoted by $T_\ell$, for $\ell=1,\ldots,b$. For $\ell=1$, $T_1$ is simply $r$ with one child that is the single leaf corresponding to $S_{i_1}$. Assume we have $T_\ell$; we show how to use it to construct $T_{\ell+1}$. We need to locate the node $u$ which will be the lowest common ancestor of the leaf corresponding to $S_{i_\ell}$ and the leaf corresponding to $S_{i_{\ell+1}}$. To do this we traverse the path in $T_\ell$ from the leaf corresponding to $S_{i_\ell}$ up to $r$, and each time we reach a node $v$ on this path, we compare $d(v)$ to $LCP(i_\ell, i_{\ell+1})$. If the two are equal, then $v$ is the node we are searching for, and we insert the leaf corresponding to $S_{i_{\ell+1}}$ as a child of $v$. If $d(v) > LCP(i_\ell, i_{\ell+1})$ then we need to continue up the path. If $d(v) < LCP(i_\ell, i_{\ell+1})$ then a new node $u$ needs to be inserted as a child of $v$, breaking the edge going from $v$ towards the leaf corresponding to $S_{i_\ell}$, i.e., the edge to the node we previously encountered while traversing the path. When $u$ is inserted, we set $d(u) = LCP(i_\ell, i_{\ell+1})$, and add the leaf corresponding to $S_{i_{\ell+1}}$ as a child of $u$. Notice that $d(r) = 0 \le LCP(i_\ell, i_{\ell+1})$, so this process will in the worst case end at $r$.
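The insertion procedure can be sketched as follows. This is an illustrative 0-based version; the `Node` class and its field names are our own, with `depth` playing the role of $d(v)$, and `lcps` holding the LCPs of consecutive sorted suffixes computed in the previous step.

```python
class Node:
    def __init__(self, depth):
        self.depth = depth       # d(v): length of the root-to-v substring
        self.parent = None
        self.children = []
        self.suffix = None       # suffix start index, set for leaves only

def build_sparse_suffix_tree(S, order, lcps):
    """order: suffix start positions in lexicographic order;
    lcps[l]: LCP of the suffixes starting at order[l] and order[l+1].
    Builds the sparse suffix tree by walking up from the previous leaf."""
    n = len(S)

    def attach(parent, child):
        child.parent = parent
        parent.children.append(child)

    root = Node(0)
    leaf = Node(n - order[0]); leaf.suffix = order[0]
    attach(root, leaf)
    prev = leaf
    for l in range(1, len(order)):
        lcp = lcps[l - 1]
        v, w = prev.parent, prev          # walk up from the previous leaf
        while v.depth > lcp:
            v, w = v.parent, v
        leaf = Node(n - order[l]); leaf.suffix = order[l]
        if v.depth == lcp:                # branch at an existing node
            attach(v, leaf)
        else:                             # v.depth < lcp: split the edge (v, w)
            u = Node(lcp)
            v.children.remove(w)
            attach(v, u)
            attach(u, w)
            attach(u, leaf)
        prev = leaf
    return root
```

Since every node is passed over a constant number of times during the upward walks, the total work over all insertions is linear in the number of nodes.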

This process simulates a DFS traversal of the sparse suffix tree, and so the total time cost for all of the insertions is $O(b)$. Thus we have obtained the following.

###### Theorem 4.2.

There exists a randomized algorithm that, with high probability, correctly constructs the sparse suffix tree for a string of size $n$ and a set of any $b$ indices in $O(n\log n\log b)$ time, using $O(b)$ words of space.

## 5 Time-Space Tradeoffs for Batched LCP Queries

We provide an overview of the technique used to obtain the time-space tradeoff for the batched LCP process, as it closely follows that of Section 3. In Section 3 the algorithm simulates $q$ concurrent binary searches in order to determine $LCP(i,j)$ for each input pair (with some extra work at the end). The idea for obtaining the tradeoff is to generalize the binary search to an $\alpha$-ary search. So in the $k$th round the input is a set of $q$ pairs denoted by $P_k$, where we are guaranteed that for any $(i,j)\in P_k$, $LCP(i,j)\le n/\alpha^{k-1}$, and the goal of the $k$th round is to decide for any $(i,j)\in P_k$ which of the $\alpha$ ranges of size $n/\alpha^{k}$ the value $LCP(i,j)$ lies in. From a space perspective, this means that we need $O(\alpha q)$ space in order to store the $\alpha$ fingerprints per index in any pair in $P_k$. From a time perspective, we only need to perform $\log_\alpha n$ rounds before we may begin the final round. However, each round now costs $O(n+\alpha q)$. So the total cost for a batched LCP query is $O((n+\alpha q)\log_\alpha n)$, and the total time cost for constructing the sparse suffix tree is $O((n+\alpha b)\log_\alpha n\log b)$.
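For concreteness, the arithmetic behind the choice $\alpha = b^{\varepsilon}$ can be written out (using $q=b$ batched queries per quicksort level, as in the construction of Section 4):

```latex
\begin{align*}
(n+\alpha b)\log_\alpha n \cdot \log b
  &= (n+\alpha b)\,\frac{\log n}{\log \alpha}\,\log b
   && \text{definition of } \log_\alpha \\
  &= \bigl(n+b^{1+\varepsilon}\bigr)\,\frac{\log n}{\varepsilon \log b}\,\log b
   && \alpha = b^{\varepsilon} \\
  &= \frac{1}{\varepsilon}\bigl(n+b^{1+\varepsilon}\bigr)\log n .
\end{align*}
```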

If, for example, $\alpha = b^{\varepsilon}$ for a small constant $\varepsilon > 0$, then the cost for constructing the sparse suffix tree becomes $O(\frac{1}{\varepsilon}(n+b^{1+\varepsilon})\log n)$, using $O(b^{1+\varepsilon})$ words of space.

## References

- [AKS83] M. Ajtai, J. Komlós, and E. Szemerédi. An O(n log n) sorting network. In Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pages 1–9, 1983.
- [ALS99] A. Andersson, N. J. Larsson, and K. Swanson. Suffix trees on words. Algorithmica, 23(3):246–260, 1999.
- [Bat68] K. E. Batcher. Sorting networks and their applications. In AFIPS Spring Joint Computing Conference, pages 307–314, 1968.
- [FF07] P. Ferragina and J. Fischer. Suffix arrays on words. In CPM, pages 328–339, 2007.
- [KKS11] R. Kolpakov, G. Kucherov, and T. A. Starikovskaya. Pattern matching on sparse suffix trees. In First International Conference on Data Compression, Communications and Processing, pages 92–97, 2011.
- [KR87] R. M. Karp and M. O. Rabin. Efficient randomized pattern-matching algorithms. IBM Journal of Research and Development, 31(2):249–260, 1987.
- [KU96] J. Kärkkäinen and E. Ukkonen. Sparse suffix trees. In Computing and Combinatorics, Second Annual International Conference, pages 219–230, 1996.
- [Pat90] M. Paterson. Improved sorting networks with O(log n) depth. Algorithmica, 5(1):65–92, 1990.