# Compressed Representations of Permutations, and Applications

## Abstract

We explore various techniques to compress a permutation $\pi$ over $n$ integers, taking advantage of ordered subsequences in $\pi$, while supporting its application $\pi(i)$ and the application of its inverse $\pi^{-1}(i)$ in small time. Our compression schemes yield several interesting byproducts, in many cases matching, improving or extending the best existing results on applications such as the encoding of a permutation in order to support iterated applications $\pi^k(i)$ of it, of integer functions, and of inverted lists and suffix arrays.

**Keywords:** Compression, Permutations, Succinct Data Structures, Adaptive Sorting.

STACS 2009, Freiburg, pages 111–122.


Jérémy Barbay and Gonzalo Navarro


## 1 Introduction

Permutations of the integers $[n]=\{1,2,\ldots,n\}$ are a basic building block for the succinct encoding of integer functions [38], strings [1, 18, 39, 41], and binary relations [5, 4], among others. A permutation $\pi$ is trivially representable in $n\lceil\lg n\rceil$ bits, which is within $O(n)$ bits of the information theory lower bound of $\lg(n!)$ bits.2 In many interesting applications, efficient computation of both the permutation $\pi(i)$ and its inverse $\pi^{-1}(i)$ is required.

The lower bound of $\lg(n!)$ bits yields a lower bound of $n\lg n - O(n)$ comparisons to sort such a permutation in the comparison model. Yet, a large body of research has been dedicated to finding better sorting algorithms which can take advantage of specificities of each permutation to sort. Trivial examples are permutations that are already sorted, such as the identity, or containing few sorted blocks [32] (e.g. $(1,3,5,7,9,2,4,6,8,10)$ or $(6,7,8,9,10,1,2,3,4,5)$), or containing few sorted subsequences [28] (e.g. $(1,6,2,7,3,8,4,9,5,10)$): algorithms performing only $O(n)$ comparisons on such permutations, yet still $O(n\lg n)$ comparisons in the worst case, are achievable and obviously preferable. Less trivial examples are classes of permutations whose structure makes them interesting for applications: see Mannila’s seminal paper [32] and Estivill-Castro and Wood’s review [14] for more details.

Each sorting algorithm in the comparison model yields an encoding scheme for permutations: It suffices to note the result of each comparison performed to uniquely identify the permutation sorted, and hence to encode it. Since an adaptive sorting algorithm performs $o(n\lg n)$ comparisons on many classes of permutations, each adaptive algorithm yields a compression scheme for permutations, at the cost of losing a constant factor on some other “bad” classes of permutations. We show in Section 4 some examples of applications where only “easy” permutations arise. Yet such compression schemes do not necessarily support in reasonable time the inverse of the permutation, or even the simple application of the permutation: this is the topic of our study. We describe several encodings of permutations so that on interesting classes of instances the encoding uses $o(n\lg n)$ bits while supporting the operations $\pi(i)$ and $\pi^{-1}(i)$ in time $o(\lg n)$. Later, we apply our compression schemes to various scenarios, such as the encoding of integer functions, text indexes, and others, yielding original compression schemes for these abstract data types.

## 2 Previous Work

{definition}

The entropy of a sequence of $r$ positive integers $n_1, n_2, \ldots, n_r$ adding up to $n$ is $H(\langle n_1,\ldots,n_r\rangle) = \sum_{i=1}^{r}\frac{n_i}{n}\lg\frac{n}{n_i}$. By convexity of the logarithm (Jensen’s inequality), $H(\langle n_1,\ldots,n_r\rangle) \le \lg r$.

#### Succinct Encodings of Sequences

Let $S[1,n]$ be a sequence over an alphabet $[v]$. This includes bitmaps when $v=2$ (where, for convenience, the alphabet will be $\{0,1\}$). We will make use of succinct representations of $S$ that support the operations $\mathrm{rank}$ and $\mathrm{select}$: $\mathrm{rank}_c(S,i)$ gives the number of occurrences of $c$ in $S[1,i]$ and $\mathrm{select}_c(S,j)$ gives the position in $S$ of the $j$th occurrence of $c$.
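As a point of reference, rank and select have a direct, if slow, implementation; the structures cited next provide the same interface in constant time using $o(n)$ extra bits on top of the sequence. A minimal sketch (naive linear-time versions, function names ours):

```python
def rank(S, c, i):
    """rank_c(S, i): number of occurrences of c in S[1, i] (1-based)."""
    return S[:i].count(c)

def select(S, c, j):
    """select_c(S, j): 1-based position of the j-th occurrence of c in S."""
    count = 0
    for pos, sym in enumerate(S, start=1):
        if sym == c:
            count += 1
            if count == j:
                return pos
    raise ValueError("fewer than j occurrences of c")
```

For example, on $S=\texttt{"abracadabra"}$ we get $\mathrm{rank}_a(S,4)=2$ and $\mathrm{select}_a(S,3)=6$.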

For the case $v=2$, $S$ requires $n$ bits of space and $\mathrm{rank}$ and $\mathrm{select}$ can be supported in constant time using $o(n)$ bits on top of $S$ [36, 10, 17]. The extra space depends on a block-length parameter, which is chosen as a function of $n$ to achieve the given bounds. In this paper, we will sometimes apply the technique over sequences of length $\ell=o(n)$ ($n$ will be the length of the permutations). Still, we will maintain the value of the parameter as a function of $n$, not $\ell$, which ensures that the extra space will be of the form $\ell\cdot o(1)$, i.e., it will tend to zero when divided by $\ell$ as $n$ grows, even if $\ell$ stays constant. All of our $o()$ terms involving several variables in this paper can be interpreted in this strong sense: asymptotic in $n$. Thus we will write the above space simply as $o(\ell)$.

Raman et al. [40] devised a bitmap representation that takes $nH_0(S)+o(n)$ bits, while maintaining the constant time for the operations. Here $H_0(S)=\sum_{c\in[v]}\frac{n_c}{n}\lg\frac{n}{n_c}$, where $n_c$ is the number of occurrences of symbol $c$ in $S$, is the so-called zero-order entropy of $S$. For the binary case this simplifies to $H_0(S)=\frac{m}{n}\lg\frac{n}{m}+\frac{n-m}{n}\lg\frac{n}{n-m}$, where $m$ is the number of bits set in $S$.

Grossi et al. [19] extended the result to larger alphabets using the so-called wavelet tree, which decomposes a sequence into several bitmaps. By representing those bitmaps in plain form, one can represent $S$ using $n\lceil\lg v\rceil(1+o(1))$ bits of space, and answer $S[i]$, as well as $\mathrm{rank}$ and $\mathrm{select}$ queries on $S$, in time $O(\lg v)$. By, instead, using Raman et al.’s representation for the bitmaps, one achieves $nH_0(S)+o(n\lg v)$ bits of space, and the same times. Ferragina et al. [15] used multiary wavelet trees to maintain the same compressed space, while improving the times for all the operations to $O(1+\frac{\lg v}{\lg\lg n})$.

#### Measures of Disorder in Permutations

Various previous studies on the presortedness in sorting considered in particular the following measures of order on an input array to be sorted. Among others, Mehlhorn [34] and Guibas et al. [21] considered the number of pairs in the wrong order, Knuth [27] considered the number of ascending substrings (runs), Cook and Kim [12], and later Mannila [32], considered the number of elements which have to be removed to leave a sorted list, Mannila [32] considered the smallest number of exchanges of arbitrary elements needed to bring the input into ascending order, Skiena [44] considered the number of encroaching sequences, obtained by distributing the input elements into sorted sequences built by additions to both ends, and Levcopoulos and Petersson [28] considered Shuffled UpSequences and Shuffled Monotone Sequences. Estivill-Castro and Wood [14] list them all and some others.

## 3 Compression Techniques

We first introduce a compression method that takes advantage of (ascending) runs in the permutation. Then we consider a stricter variant of the runs, which allows for further compression in applications where those runs arise, and in particular allows the representation size to be sublinear in $n$. Next, we consider a more general type of runs, which need not be contiguous.

### 3.1 Wavelet Tree on Runs

One of the best known sorting algorithms is merge sort, based on a simple linear procedure to merge two already sorted arrays, resulting in a worst case complexity of $O(n\lg n)$. Yet, checking in linear time for down step positions in the array, where an element is followed by a smaller one, partitions the original array into ascending runs which are already sorted. This can speed up the algorithm when the array is partially sorted [27]. We use this same observation to encode permutations.

{definition}

A down step of a permutation $\pi$ over $[n]$ is a position $i$ such that $\pi(i+1)<\pi(i)$. A run in a permutation $\pi$ is a maximal range of consecutive positions $\{i,\ldots,j\}$ which does not contain any down step. Let $d_1,d_2,\ldots,d_k$ be the list of consecutive down steps in $\pi$. Then the number of runs of $\pi$ is noted $\rho=k+1$, and the sequence of the lengths of the runs is noted $\mathrm{Runs}(\pi)=\langle d_1,\, d_2-d_1,\ldots,\, n-d_k\rangle$.

For example, permutation $(2,4,1,3,5)$ contains $\rho=2$ runs, of lengths $\mathrm{Runs}(\pi)=\langle 2,3\rangle$. Whereas previous analyses [32] of adaptive sorting algorithms considered only the number of runs, we refine them to consider the distribution of the sizes of the runs.
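The run decomposition itself is a single linear scan for down steps; a short sketch (names ours):

```python
def runs(pi):
    """Split a permutation (given as a list of values) into its maximal
    ascending runs by scanning for down steps."""
    out = [[pi[0]]]
    for prev, cur in zip(pi, pi[1:]):
        if cur < prev:            # down step: a new run starts here
            out.append([cur])
        else:
            out[-1].append(cur)
    return out
```

On the permutation $(2,4,1,3,5)$ above this returns the runs $(2,4)$ and $(1,3,5)$, of lengths $\langle 2,3\rangle$.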

{theorem}

There is an encoding scheme using at most $n(2+H(\mathrm{Runs}(\pi)))+O(\rho\lg n)+o(n)$ bits to encode a permutation $\pi$ over $[n]$ covered by $\rho$ runs of lengths $\mathrm{Runs}(\pi)$. It supports $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\rho)$ for any value of $i\in[n]$. If $i$ is chosen uniformly at random in $[n]$ then the average time is $O(1+H(\mathrm{Runs}(\pi)))$.

{proof}

The Hu-Tucker algorithm [23] (see also Knuth [27, p. 446]) produces in $O(\rho\lg\rho)$ time a prefix-free code from a sequence of frequencies $\mathrm{Runs}(\pi)=\langle n_1,\ldots,n_\rho\rangle$ adding up to $n$, so that (1) the $i$-th lexicographically smallest code is that for frequency $n_i$, and (2) if $\ell_i$ is the bit length of the code assigned to the $i$-th sequence element, then $L=\sum \ell_i n_i$ is minimal and moreover $L<n(2+H(\mathrm{Runs}(\pi)))$ [27, p. 446, Eq. (27)].

We first determine $\mathrm{Runs}(\pi)$ in $O(n)$ time, and then apply the Hu-Tucker algorithm to $\mathrm{Runs}(\pi)$. We arrange the set of codes produced in a binary trie (equivalent to a Huffman tree [24]), where each leaf corresponds to a run and points to its two endpoints in $\pi$. Because of property (1), reading the leaves left-to-right yields the runs also in left-to-right order. Now we convert this trie into a wavelet-tree-like structure [19] without altering its shape, as follows. Starting from the root, first process recursively each child. For the leaves do nothing. Once both children of an internal node have been processed, the invariant is that they point to the contiguous area in $\pi$ covering all their leaves, and that this area of $\pi$ has already been sorted. Now we merge the areas of the two children in time proportional to the new area created (which, again, is contiguous in $\pi$ because of property (1)). As we do the merging, each time we take an element from the left child we append a 0 bit to a bitmap we create for the node, and a 1 bit when we take an element from the right list.

When we finish, we have the following facts: (1) $\pi$ has been sorted, (2) the time for sorting has been $O(n+\rho\lg\rho)$ plus the total number of bits appended to all bitmaps, (3) each of the $n_i$ elements of leaf $i$ (at depth $\ell_i$) has been merged $\ell_i$ times, contributing $\ell_i$ bits to the bitmaps of its ancestors, and thus the total number of bits is $\sum n_i\ell_i$.

Therefore, the total number of bits in the Hu-Tucker-shaped wavelet tree is at most $n(2+H(\mathrm{Runs}(\pi)))$. To this we must add the $O(\rho\lg n)$ bits of the tree pointers. We preprocess all the bitmaps for $\mathrm{rank}$ and $\mathrm{select}$ queries so as to spend $o(n)$ extra bits (§2).

To compute $\pi^{-1}(j)$ we start at offset $j$ of the root bitmap $B$, of size $n$. If $B[j]=0$ we go down to the left child with $j\gets \mathrm{rank}_0(B,j)$. Otherwise we go down to the right child with $j\gets \mathrm{rank}_1(B,j)$. When we reach a leaf, the answer is the left endpoint of the corresponding run in $\pi$, plus $j-1$.

To compute $\pi(i)$ we do the reverse process, but we must first determine the leaf $u$ and the offset of position $i$ within $u$: We go down from the root; at an internal node with bitmap $B$, let $z=\mathrm{rank}_0(B,|B|)$ be the number of positions covered by its left child. If $i\le z$ we go down to the left child; otherwise we go down to the right child with $i\gets i-z$. We eventually reach leaf $u$, and the offset of position $i$ within $u$ is the value of $i$ upon arrival. We now start an upward traversal using the nodes that are already in the recursion stack (those will be limited to $O(\lg\rho)$ soon). If $u$ is a left child of its parent $v$, then we set $i\gets \mathrm{select}_0(B_v,i)$, else we set $i\gets \mathrm{select}_1(B_v,i)$, where $B_v$ is the bitmap of $v$. Then we set $u\gets v$ until reaching the root, where the final value of $i$ is the answer $\pi(i)$.
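The two traversals can be sketched concretely. The toy implementation below builds a balanced merge tree over the runs instead of the Hu-Tucker shape, and uses plain Python lists instead of rank/select-capable bitmaps; both simplifications affect only the constants and the depths, not the navigation logic (all names ours):

```python
def build(runs, start=1):
    """Wavelet tree over the runs; returns (node, sorted values below it).
    Leaves keep their run and its starting position in pi (1-based)."""
    if len(runs) == 1:
        return {"run": runs[0], "start": start, "size": len(runs[0])}, list(runs[0])
    mid = len(runs) // 2
    lsize = sum(len(r) for r in runs[:mid])
    left, lv = build(runs[:mid], start)
    right, rv = build(runs[mid:], start + lsize)
    bits, merged, i, j = [], [], 0, 0
    while i < len(lv) or j < len(rv):     # merge, recording the origin bit
        if j == len(rv) or (i < len(lv) and lv[i] <= rv[j]):
            merged.append(lv[i]); bits.append(0); i += 1
        else:
            merged.append(rv[j]); bits.append(1); j += 1
    return {"left": left, "right": right, "bits": bits, "size": len(bits)}, merged

def select_bit(bits, b, k):
    """1-based position of the k-th occurrence of bit b (naive select)."""
    seen = 0
    for idx, x in enumerate(bits, start=1):
        seen += (x == b)
        if seen == k and x == b:
            return idx

def apply_pi(root, i):
    """pi(i): descend by subtree sizes to find the leaf holding position i,
    then go back up with select to find the element's rank in sorted order."""
    path, node = [], root
    while "run" not in node:
        z = node["left"]["size"]          # positions covered by the left child
        if i <= z:
            path.append((node, 0)); node = node["left"]
        else:
            path.append((node, 1)); node = node["right"]; i -= z
    for node, b in reversed(path):
        i = select_bit(node["bits"], b, i)
    return i                              # rank in sorted order = the value

def apply_inv(root, j):
    """pi^{-1}(j): descend from the root reading the bit at the offset."""
    node, off = root, j
    while "run" not in node:
        b = node["bits"][off - 1]
        off = node["bits"][:off].count(b)
        node = node["left"] if b == 0 else node["right"]
    return node["start"] + off - 1
```

For $\pi=(2,4,1,3,5)$, with runs $(2,4)$ and $(1,3,5)$, `apply_pi` recovers $\pi$ and `apply_inv` its inverse.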

In both cases the time is $O(\ell_u)$, where $\ell_u$ is the depth of the leaf arrived at. If $i$ is chosen uniformly at random in $[n]$, then the average cost is $O(1+H(\mathrm{Runs}(\pi)))$. However, the worst case can be $O(\rho)$ in a fully skewed tree. We can ensure $O(\lg\rho)$ worst case while maintaining the average case by slightly rebalancing the Hu-Tucker tree: If there exist nodes at depth $4\lg\rho$, we rebalance their subtrees, so as to guarantee maximum depth $O(\lg\rho)$. This affects only marginally the size of the structure. A node at depth $\ell$ cannot add up to a frequency higher than $n/2^{\lfloor\ell/2\rfloor}$ (see next paragraph). Added over all the possible $\rho$ nodes at depth $4\lg\rho$ we have a total frequency of at most $n/\rho$. Therefore, by rebalancing those subtrees we add at most $\frac{n\lg\rho}{\rho}$ bits. This is $o(n)$ if $\rho=\omega(1)$, and otherwise the cost was anyway $O(\lg\rho)=O(1)$. For the same reasons the average time stays $O(1+H(\mathrm{Runs}(\pi)))$ as it increases at most by $O(\frac{\lg\rho}{\rho})=O(1)$.

The bound on the frequency at depth $\ell$ is proved as follows. Consider a node $v$ at depth $\ell$, and its grandparent $u$. Then the uncle of $v$ cannot have smaller frequency than $v$. Otherwise we could improve the already optimal Hu-Tucker tree by executing either a single (if $v$ is the left-left or right-right grandchild of $u$) or double (if $v$ is the left-right or right-left grandchild of $u$) AVL-like rotation that decreases the depth of $v$ by 1 and increases that of the uncle of $v$ by 1. Thus the overall frequency at least doubles whenever we go up two nodes from $v$, and this holds recursively. Thus the frequency of $v$ is at most $n/2^{\lfloor\ell/2\rfloor}$.

The general result of the theorem can be simplified when the distribution is not particularly favorable.

{corollary}

There is an encoding scheme using at most $n(2+\lg\rho)+O(\rho\lg n)+o(n)$ bits to encode a permutation $\pi$ over $[n]$ covered by a set of $\rho$ runs. It supports $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\rho)$ for any value of $i\in[n]$.

As a corollary, we obtain a new proof of a well-known result on adaptive algorithms telling that one can sort in time $O(n(1+\lg\rho))$ [32], now refined to consider the entropy of the partition and not only its size.

{corollary}

We can sort an array of length $n$ covered by $\rho$ runs of lengths $\mathrm{Runs}(\pi)$ in time $O(n(1+H(\mathrm{Runs}(\pi))))$, which is worst-case optimal in the comparison model among all permutations with $\rho$ runs of lengths $\mathrm{Runs}(\pi)$ so that $\rho\lg n\in o(nH(\mathrm{Runs}(\pi)))$.
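This sorting procedure can be sketched directly. The variant below merges the two shortest remaining runs first, a Huffman order rather than the alphabetic Hu-Tucker order of the proof; the total merge cost obeys the same entropy bound, and all names are ours:

```python
from heapq import heapify, heappop, heappush

def natural_mergesort(a):
    """Sort a by detecting its ascending runs, then repeatedly merging the
    two shortest runs, so each element of a run of length n_i takes part in
    about lg(n/n_i) merges."""
    runs = [[a[0]]]
    for prev, cur in zip(a, a[1:]):       # O(n) run detection
        if cur < prev:
            runs.append([cur])
        else:
            runs[-1].append(cur)
    heap = [(len(r), k, r) for k, r in enumerate(runs)]
    heapify(heap)                         # order runs by length
    tie = len(heap)
    while len(heap) > 1:
        _, _, x = heappop(heap)
        _, _, y = heappop(heap)
        m, i, j = [], 0, 0                # standard two-way merge
        while i < len(x) or j < len(y):
            if j == len(y) or (i < len(x) and x[i] <= y[j]):
                m.append(x[i]); i += 1
            else:
                m.append(y[j]); j += 1
        heappush(heap, (len(m), tie, m)); tie += 1
    return heap[0][2]
```

The Huffman merge order minimizes $\sum n_i\ell_i$ over all binary merge trees, so the merging cost is within the same $O(n(1+H(\mathrm{Runs}(\pi))))$ bound; the proof uses Hu-Tucker only because the encoding additionally needs the leaves in left-to-right order.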

### 3.2 Stricter Runs

Some classes of permutations can be covered by a small number of runs of a stricter type. We present an encoding scheme which uses $o(n)$ bits for encoding the permutations from those classes, and still $O(n(1+\lg\rho))$ bits for all others.

{definition}

A strict run in a permutation $\pi$ is a maximal range of positions satisfying $\pi(i+k)=\pi(i)+k$. The head of such a run is its first position. The number of strict runs of $\pi$ is noted $\rho'$, and the sequence of the lengths of the strict runs is noted $\mathrm{SRuns}(\pi)$. We will call $\mathrm{HRuns}(\pi)$ the sequence of run lengths of the sequence formed by the strict run heads of $\pi$.

For example, permutation $(3,4,5,1,2,6,7)$ contains $\rho'=3$ strict runs, of lengths $\mathrm{SRuns}(\pi)=\langle 3,2,2\rangle$. The run heads are $(3,1,6)$, and contain 2 runs, of lengths $\mathrm{HRuns}(\pi)=\langle 1,2\rangle$. Instead, $(5,3,1,4,2)$ contains $\rho'=5$ strict runs, all of length 1.

{theorem}

There is an encoding scheme using at most $2\rho'\lg\frac{n}{\rho'}+\rho'(2+H(\mathrm{HRuns}(\pi)))+O(\rho\lg\rho')+o(n)$ bits to encode a permutation $\pi$ over $[n]$ covered by $\rho'$ strict runs and by $\rho\le\rho'$ runs, and with $\mathrm{HRuns}(\pi)$ being the run lengths in the permutation of strict run heads. It supports $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\rho)$ for any value of $i\in[n]$. If $i$ is chosen uniformly at random in $[n]$ then the average time is $O(1+H(\mathrm{HRuns}(\pi)))$.

{proof}

We first set up a bitmap $R[1,n]$ marking with a 1 bit the beginning of the strict runs of $\pi$. Set up a second bitmap $R^{inv}[1,n]$ such that $R^{inv}[\pi(i)]=R[i]$, that is, marking the values of the strict run heads. Now we create a new permutation $\pi'$ of $[\rho']$ which collapses the strict runs of $\pi$, mapping each strict run head to the rank of its value among all head values. All this takes $O(n)$ time and the bitmaps take $2\rho'\lg\frac{n}{\rho'}+O(\rho')+o(n)$ bits using Raman et al.’s technique, where $\mathrm{rank}$ and $\mathrm{select}$ are solved in constant time (§2).
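The preprocessing can be sketched as follows: detect the strict runs, then collapse each to its head, renaming head values by rank (names ours; the bitmaps $R$ and $R^{inv}$ would mark the heads' positions and values, respectively):

```python
def strict_runs(pi):
    """Maximal ranges of positions where consecutive values increase by 1."""
    out = [[pi[0]]]
    for prev, cur in zip(pi, pi[1:]):
        if cur == prev + 1:
            out[-1].append(cur)
        else:
            out.append([cur])
    return out

def collapse(pi):
    """Collapsed permutation pi' over the strict run heads (each head value
    replaced by its rank among all heads), plus the strict run lengths."""
    srs = strict_runs(pi)
    heads = [r[0] for r in srs]
    rank_of = {v: k for k, v in enumerate(sorted(heads), start=1)}
    return [rank_of[h] for h in heads], [len(r) for r in srs]
```

For $\pi=(3,4,5,1,2,6,7)$ the strict runs are $(3,4,5)$, $(1,2)$, $(6,7)$ and the collapsed permutation is $\pi'=(2,1,3)$.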

Now build the structure of Thm. 3.1 for $\pi'$. The number of down steps in $\pi'$ is the same as for the sequence of strict run heads in $\pi$, and in turn the same as the down steps in $\pi$. So the number of runs in $\pi'$ is also $\rho$ and their lengths are $\mathrm{HRuns}(\pi)$. Thus we get at most $\rho'(2+H(\mathrm{HRuns}(\pi)))+O(\rho\lg\rho')$ bits to encode $\pi'$, and can compute $\pi'(i)$ and its inverse in $O(1+\lg\rho)$ worst case and $O(1+H(\mathrm{HRuns}(\pi)))$ average time.

To compute $\pi(i)$, we find $i'\gets\mathrm{rank}_1(R,i)$ and then compute $j'\gets\pi'(i')$. The final answer is $\mathrm{select}_1(R^{inv},j')+i-\mathrm{select}_1(R,i')$. To compute $\pi^{-1}(j)$, we find $j'\gets\mathrm{rank}_1(R^{inv},j)$ and then compute $i'\gets\pi'^{-1}(j')$. The final answer is $\mathrm{select}_1(R,i')+j-\mathrm{select}_1(R^{inv},j')$. This adds only constant time on top of that to compute $\pi'$ and its inverse.

Once again, we might simplify the results when the distribution is not particularly favorable, and we also obtain interesting algorithmic results on sorting.

{corollary}

There is an encoding scheme using at most $2\rho'\lg\frac{n}{\rho'}+\rho'(2+\lg\rho)+O(\rho\lg\rho')+o(n)$ bits to encode a permutation $\pi$ over $[n]$ covered by $\rho'$ strict runs and by $\rho\le\rho'$ runs. It supports $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\rho)$ for any value of $i\in[n]$.

{corollary}

We can sort a permutation of $[n]$, covered by $\rho'$ strict runs and by $\rho\le\rho'$ runs, and $\mathrm{HRuns}(\pi)$ being the run lengths of the strict run heads, in time $O(n+\rho'(1+H(\mathrm{HRuns}(\pi))))$, which is worst-case optimal, in the comparison model, among all permutations sharing these $\rho$, $\rho'$, and $\mathrm{HRuns}(\pi)$ values, such that $\rho\lg\rho'\in o(\rho' H(\mathrm{HRuns}(\pi)))$.

### 3.3 Shuffled Sequences

Levcopoulos and Petersson [28] introduced the more sophisticated concept of partitions formed by interleaved runs, such as Shuffled UpSequences (SUS). We discuss here the advantage of considering permutations formed by shuffling a small number of runs.

{definition}

A decomposition of a permutation $\pi$ over $[n]$ into Shuffled UpSequences is a set of, not necessarily consecutive, subsequences of increasing numbers that have to be removed from $\pi$ in order to reduce it to the empty sequence. The minimum number of shuffled upsequences in such a decomposition of $\pi$ is noted $\sigma$, and the sequence of the lengths of the involved shuffled upsequences, in arbitrary order, is noted $\mathrm{SUS}(\pi)$.

For example, permutation $(1,6,2,7,3,8,4,9,5,10)$ contains $\sigma=2$ shuffled upsequences of lengths $\mathrm{SUS}(\pi)=\langle 5,5\rangle$, but $\rho=5$ runs, all of length 2. Whereas the decomposition of a permutation into runs or strict runs can be computed in linear time, the decomposition into shuffled upsequences requires a bit more time. Fredman [16] gave an algorithm to compute the size of an optimal partition, claiming a worst case complexity of $O(n\lg n)$. In fact his algorithm is adaptive and takes $O(n(1+\lg\sigma))$ time. We give here a variant of his algorithm which computes the partition itself within the same complexity, and we achieve even better time on favorable sequences $\mathrm{SUS}(\pi)$.

{lemma}

Given a permutation $\pi$ over $[n]$ covered by $\sigma$ shuffled upsequences of lengths $\mathrm{SUS}(\pi)$, there is an algorithm finding such a partition in time $O(n(1+H(\mathrm{SUS}(\pi))))$. {proof} Initialize a sequence $S_1=(\pi(1))$, and a splay tree $T$ [45] with the node $(S_1)$, ordered by the rightmost value of the sequence contained by each node. For each further element $\pi(i)$, search for the sequence with the maximum ending point smaller than $\pi(i)$. If any, add $\pi(i)$ to this sequence, otherwise create a new sequence with $\pi(i)$ and add it to $T$. Fredman [16] already proved that this algorithm computes an optimal partition. The adaptive complexity results from the mere observation that the splay tree (a simple sorted array in Fredman’s proof) contains at most $\sigma$ elements, and that the node corresponding to a subsequence is accessed once per element in it. Hence the total access time is $O(n(1+H(\mathrm{SUS}(\pi))))$ [45, Thm. 2].
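A compact variant of this partitioning greedy, with a sorted list standing in for the splay tree (so the guaranteed time here is $O(n\lg\sigma)$ rather than the adaptive bound of the lemma; names ours):

```python
import bisect

def sus_partition(pi):
    """Greedily partition pi into a minimum number of increasing
    subsequences: each element extends the subsequence whose last
    element is the largest one smaller than it, if any."""
    tails, seqs = [], []        # tails kept sorted; seqs[k] ends with tails[k]
    for x in pi:
        k = bisect.bisect_left(tails, x) - 1   # largest tail < x
        if k < 0:
            tails.insert(0, x)  # no candidate: open a new subsequence
            seqs.insert(0, [x])
        else:
            tails[k] = x        # replacing the tail keeps the list sorted
            seqs[k].append(x)
    return seqs
```

For instance, `sus_partition([3, 1, 4, 2, 6, 5])` yields the two upsequences $(1,2,5)$ and $(3,4,6)$.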

The complete description of the permutation requires encoding the computation of both the partitioning algorithm and the sorting one, and this time the encoding cost of partitioning is as important as that of merging.

{theorem}

There is an encoding scheme using at most $2n\lg\sigma(1+o(1))$ bits to encode a permutation $\pi$ over $[n]$ covered by $\sigma$ shuffled upsequences of lengths $\mathrm{SUS}(\pi)$. It supports the operations $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\sigma)$ for any value of $i\in[n]$. If $i$ is chosen uniformly at random in $[n]$ the average time is $O(1+H(\mathrm{SUS}(\pi)))$.

{proof}

Partition the permutation $\pi$ into $\sigma$ shuffled upsequences using Lemma 3.3, resulting in a string $S[1,n]$ over alphabet $[\sigma]$ which indicates for each element of the permutation the label of the upsequence it belongs to. Encode $S$ with a wavelet tree using Raman et al.’s compression for the bitmaps, so as to achieve $nH_0(S)+o(n\lg\sigma)$ bits of space and support retrieval of any $S[i]$, as well as symbol $\mathrm{rank}$ and $\mathrm{select}$ on $S$, in time $O(1+\lg\sigma)$ (§2). Store also an array $A[1,\sigma]$ so that $A[c]$ is the accumulated length of all the upsequences with label less than $c$. Array $A$ requires $O(\sigma\lg n)$ bits. Finally, consider the permutation $\pi'$ formed by the upsequences taken in label order: $\pi'$ has at most $\sigma$ runs and hence can be encoded using $n(2+\lg\sigma)+O(\sigma\lg n)$ bits using Thm. 3.1, as position $A[c]+j$ in $\pi'$ corresponds to the $j$th element of the upsequence labeled $c$ in $\pi$. This supports $\pi'(i)$ and $\pi'^{-1}(i)$ in time $O(1+\lg\sigma)$.

Now $\pi(i)=\pi'(A[S[i]]+\mathrm{rank}_{S[i]}(S,i))$ can be computed in time $O(1+\lg\sigma)$. Similarly, $\pi^{-1}(j)=\mathrm{select}_c(S,\pi'^{-1}(j)-A[c])$, where $c$ is such that $A[c]<\pi'^{-1}(j)\le A[c+1]$, can also be computed in $O(1+\lg\sigma)$ time. Thus the whole structure uses $2n\lg\sigma(1+o(1))$ bits and supports $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\sigma)$.
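The interplay of $S$, $A$ and $\pi'$ can be sketched with plain lists in place of the compressed structures (the rank is computed naively; names ours):

```python
def sus_encode(pi, seqs):
    """Build the components of the encoding: S (the 0-based upsequence label
    of each position), A (accumulated lengths), and pi2 (standing for pi',
    the upsequences concatenated in label order)."""
    label = {x: k for k, seq in enumerate(seqs) for x in seq}
    S = [label[x] for x in pi]
    A = [0]
    for seq in seqs:
        A.append(A[-1] + len(seq))
    pi2 = [x for seq in seqs for x in seq]
    return S, A, pi2

def sus_apply(S, A, pi2, i):
    """pi(i) = pi'(A[S[i]] + rank_{S[i]}(S, i)), with 1-based i."""
    c = S[i - 1]
    r = S[:i].count(c)          # naive rank_c(S, i)
    return pi2[A[c] + r - 1]
```

With $\pi=(3,1,4,2,6,5)$ and upsequences $(1,2,5)$ and $(3,4,6)$, `sus_apply` reconstructs $\pi$ position by position.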

The obstacles to achieve the claimed average time are the operations on the wavelet tree of $S$, and the binary search in $A$. The former can be reduced to $O(1+\frac{\lg\sigma}{\lg\lg n})$ by using the improved wavelet tree representation by Ferragina et al. (§2). The latter is reduced to constant time by representing $A$ with a bitmap $A'[1,n]$ with the bits set at the values $A[c]+1$, so that $A[c]=\mathrm{select}_1(A',c)-1$, and the binary search is replaced by $c=\mathrm{rank}_1(A',\pi'^{-1}(j))$. With Raman et al.’s structure (§2), $A'$ needs $O(\sigma\lg\frac{n}{\sigma})+o(n)$ bits and operates in constant time.

Again, we might prefer a simplified result when $\mathrm{SUS}(\pi)$ has no interesting distribution, and we also achieve an improved result on sorting, better than the known $O(n(1+\lg\sigma))$.

{corollary}

There is an encoding scheme using at most $2n\lg\sigma(1+o(1))$ bits to encode a permutation $\pi$ over $[n]$ covered by $\sigma$ shuffled upsequences. It supports the operations $\pi(i)$ and $\pi^{-1}(i)$ in time $O(1+\lg\sigma)$ for any value of $i\in[n]$.

{corollary}

We can sort an array of length $n$, covered by $\sigma$ shuffled upsequences of lengths $\mathrm{SUS}(\pi)$, in time $O(n(1+H(\mathrm{SUS}(\pi))))$, which is worst-case optimal, in the comparison model, among all permutations decomposable into $\sigma$ shuffled upsequences of lengths $\mathrm{SUS}(\pi)$ such that $\sigma\lg n\in o(nH(\mathrm{SUS}(\pi)))$.

## 4 Applications

### 4.1 Inverted Indexes

Consider a full-text inverted index which gives the word positions of any word in a text. This is a popular data structure for natural language text retrieval [3, 46], as it permits, for example, solving phrase queries without accessing the text. For each different text word, an increasing list of its text positions is stored.

Let $n$ be the total number of words in a text collection and $v$ the vocabulary size (i.e., number of different words). An uncompressed inverted index requires $(v+n)\lceil\lg n\rceil$ bits. It has been shown [31] that, by $\delta$-encoding the differences between consecutive entries in the inverted lists, the total space reduces to $nH_0+v\lceil\lg n\rceil$ bits, where $H_0$ is the zero-order entropy of the text if seen as a sequence of words (§2). We note that the empirical law by Heaps [22], well accepted in Information Retrieval, establishes that $v$ is small: $v=O(n^\beta)$ for some constant $0<\beta<1$ depending on the text type.

Several successful methods to compress natural language text take words as symbols and use zero-order encoding, and thus the size they can achieve is lower bounded by $nH_0$ [35]. If we add the differentially encoded inverted index in order to be able to search the compressed text, the total space is at least $2nH_0$.

Now, the concatenation of the $v$ inverted lists can be seen as a permutation $\pi$ of $[n]$ with $\rho=v$ runs, and therefore Thm. 3.1 lets us encode it in $n(2+H_0)+O(v\lg n)+o(n)$ bits. Within the same space we can add $v$ numbers telling where the runs begin, in an array $V[1,v]$. Now, in order to retrieve the list of the $i$th word, we simply obtain $\pi(V[i]),\pi(V[i]+1),\ldots,\pi(V[i+1]-1)$, each in $O(1+\lg v)$ time. Moreover we can extract any random position from a list, which enables binary-search-based strategies for list intersection [2, 42, 13]. In addition, we can also obtain a text passage from the (inverse) permutation: To find out the word at text position $j$, $\pi^{-1}(j)$ gives its position in the inverted lists, and a binary search on $V$ finds the $i$ such that $V[i]\le\pi^{-1}(j)<V[i+1]$, to output that the $j$th word is the $i$th one, in $O(\lg v)$ time.

This result is very interesting, as it constitutes a true word-based self-index [39] (i.e., a compressed text index that contains the text). Similar results have been recently obtained with rather different methods [9, 11]. The cleanest one is to build a wavelet tree over the text $T[1,n]$ (seen as a sequence of word identifiers) with compression [15], which achieves $nH_0(T)+o(n\lg v)$ bits of space, and permits obtaining $T[j]$, as well as extracting the $j$th element of the inverted list of the $i$th word with $\mathrm{select}_i(T,j)$, all in time $O(1+\frac{\lg v}{\lg\lg n})$.

Yet, one advantage of our approach is that the extraction of $\ell$ consecutive entries takes $O(\ell(1+\lg\frac{v}{\ell}))$ time if we do the process for all the entries as a block: Start at the range $[j_1,j_2]$ at the root bitmap $B$. Go down to both left and right children: to the left with $[j_1,j_2]\gets[\mathrm{rank}_0(B,j_1-1)+1,\mathrm{rank}_0(B,j_2)]$; to the right with $[j_1,j_2]\gets[\mathrm{rank}_1(B,j_1-1)+1,\mathrm{rank}_1(B,j_2)]$. Stop when the range becomes empty or when we reach a leaf, in which case report all the corresponding positions of the leaf’s run. By representing the inverted list as the permutation $\pi$, we can extract long inverted lists faster than the existing methods.

{corollary}

There exists a representation for a text $T[1,n]$ of integers in $[v]$ (regarded as word identifiers), with zero-order entropy $H_0$, that takes $n(2+H_0)+O(v\lg n)+o(n)$ bits of space, and can retrieve the text position of the $j$th occurrence of the $i$th text word, as well as the value $T[j]$, in $O(1+\lg v)$ time. It can also retrieve any range of $\ell$ successive occurrences of the $i$th text word in time $O(\ell(1+\lg\frac{v}{\ell}))$.

We could, instead, represent the inverted list as $\pi^{-1}$, so as to extract long text passages efficiently, but the wavelet tree representation can achieve the same result. Another interesting functionality that both representations share, and which is useful for other list intersection algorithms [6, 4], is that of obtaining the first entry of a list which is larger than some value $x$. This is done with $\mathrm{select}_i(T,\mathrm{rank}_i(T,x)+1)$ on the wavelet tree representation. In our permutation representation, we can also achieve it in $O(1+\lg v)$ time by finding out the position of a number within a given run. The algorithm is similar to those in Thm. 3.1 that descend to a leaf while maintaining the offset within the node, except that the decision on whether to descend left or right depends on the leaf we want to arrive at and not on the bitmap content (this is actually the algorithm to compute $\mathrm{rank}$ on binary wavelet trees [39]).

Finally, we note that our inverted index data structure supports in small time all the operations required to solve conjunctive queries on binary relations.

### 4.2 Suffix Arrays

Suffix arrays are used to index texts that cannot be handled with inverted lists. Given a text $T[1,n]$ of $n$ symbols over an alphabet of size $\sigma$, the suffix array $A[1,n]$ is a permutation of $[n]$ so that $T[A[i],n]$ is lexicographically smaller than $T[A[i+1],n]$. As suffix arrays take much space, several compressed data structures have been developed for them [39]. One of interest for us is the Compressed Suffix Array (CSA) of Sadakane [41]. It builds over a permutation $\Psi$ of $[n]$, which satisfies $A[\Psi(i)]=A[i]+1$ (and thus lets us move virtually one position forward in the text) [20]. It turns out that, using just $\Psi$ and $O(n)$ extra bits, one can count the number of times a pattern $P[1,m]$ occurs in $T$ using $O(m\lg n)$ applications of $\Psi$; locate any such occurrence using $O(\lg^{1+\epsilon}n)$ applications of $\Psi$, by spending $O(n/\lg^{\epsilon}n)$ extra bits of space; and extract a text substring of length $\ell$ using at most $\ell+O(\lg^{1+\epsilon}n)$ applications of $\Psi$. Hence this is another self-index, and its main burden of space is that to represent permutation $\Psi$.

Sadakane shows that $\Psi$ has at most $\sigma$ runs, and gives a representation that accesses $\Psi(i)$ in constant time by using $nH_0(T)+O(n\lg\lg\sigma)$ bits of space. It was shown later [39] that the space is actually $nH_k(T)+O(n\lg\lg\sigma)$ bits, for any $k\le\alpha\log_\sigma n$ and any constant $0<\alpha<1$. Here $H_k(T)\le H_0(T)$ is the $k$th order empirical entropy of $T$ [33].

With Thm. 3.1 we can encode $\Psi$ using $n(2+H(\mathrm{Runs}(\Psi)))+O(\sigma\lg n)+o(n)$ bits of space, whose extra terms aside from entropy are better than Sadakane’s. Those extra terms can be very significant in practice. The price is that the time to access $\Psi(i)$ is $O(1+\lg\sigma)$ instead of constant. On the other hand, an interesting extra functionality is that of computing $\Psi^{-1}(i)$, which lets us move (virtually) one position backward in $T$. This allows, for example, displaying the text context around an occurrence without having to spend any extra space. Still, although interesting, the result is not competitive with recent developments [15, 30].

An interesting point is that $\Psi$ contains $\rho'\le nH_k(T)+\sigma^k$ strict runs, for any $k$ [29]. Therefore, Cor. 3.2 lets us represent it using $O(\rho'\lg\frac{n}{\rho'})+\rho'(2+\lg\sigma)+o(n)$ bits of space. For $k$ limited as above, this is at most $O(nH_k(T)\lg n)+o(n)$ bits, which is similar to the space achieved by another self-index [29, 43], yet again it is slightly superseded by its time performance.

### 4.3 Iterated Permutation

Munro et al. [37] described how to represent a permutation $\pi$ as the concatenation of its cycles, completed by a bitvector of $n+o(n)$ bits coding the lengths of the cycles. As the cycle representation is itself a permutation $\pi_c$ of $[n]$, we can use any of the permutation encodings described in §3 to encode it, adding the binary vector encoding the lengths of the cycles. It is important to note that, for a specific permutation $\pi$, the difficulty to compress its cycle encoding $\pi_c$ is not the same as the difficulty to encode the original permutation $\pi$.

Given a permutation $\pi$ with $c$ cycles of lengths $\langle n_1,\ldots,n_c\rangle$, there are several ways to encode it as a permutation $\pi_c$, depending on the starting point of each cycle ($\prod_i n_i$ choices) and the order of the cycles in the encoding ($c!$ choices). As a consequence, each permutation $\pi$ with $c$ cycles of lengths $\langle n_1,\ldots,n_c\rangle$ can be encoded by any of the $c!\prod_i n_i$ corresponding permutations $\pi_c$.
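One natural canonical choice writes each cycle starting at its smallest element and orders the cycles by those starting elements; a sketch (names ours):

```python
def cycle_encoding(pi):
    """Canonical cycle encoding pi_c of pi (1-based values): each cycle is
    written from its smallest element, cycles ordered by smallest element.
    Returns the encoding and the cycle lengths."""
    n = len(pi)
    seen, enc, lens = [False] * n, [], []
    for s in range(n):
        if not seen[s]:
            cyc, x = [], s
            while not seen[x]:
                seen[x] = True
                cyc.append(x + 1)
                x = pi[x] - 1          # follow the cycle
            enc.extend(cyc)
            lens.append(len(cyc))
    return enc, lens
```

For $\pi=(3,1,2,5,4)$, whose cycles are $(1\;3\;2)$ and $(4\;5)$, this yields $\pi_c=(1,3,2,4,5)$ with cycle lengths $\langle 3,2\rangle$.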

{corollary}

Any of the encodings from Theorems 3.1, 3.2 and 3.3 can be combined with an additional cost of at most $n+o(n)$ bits to encode a permutation $\pi$ over $[n]$ composed of $c$ cycles of lengths $\langle n_1,\ldots,n_c\rangle$, so as to support the operation $\pi^k(i)$ for any value of $k$ and $i\in[n]$, in time and space function of the order in the permutation encoding $\pi_c$ of the cycles of $\pi$.

The space “wasted” by such a permutation representation of the cycles of $\pi$ is $\lg(c!\prod_i n_i)$ bits. To recover some of this space, one can define a canonical cycle encoding by starting the encoding of each cycle with its smallest value, and by ordering the cycles in order of their starting point. This canonical encoding always starts with a 1 and creates at least one shuffled upsequence of length $c$: it can be compressed as a permutation over $[n]$ with at least one shuffled upsequence of length $c$ through Thm 3.3.
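Once the cycles are at hand, $\pi^k(i)$ reduces to a rotation inside the cycle containing $i$; a direct sketch without any compressed structure (it also handles negative $k$, i.e. iterated inverses, via Python's modulo; names ours):

```python
def power(pi, k, i):
    """pi^k(i) for a permutation given as a 1-based list of values: walk
    the cycle containing i once, then rotate by k mod (cycle length)."""
    cyc, x = [i], pi[i - 1]
    while x != i:
        cyc.append(x)
        x = pi[x - 1]
    return cyc[k % len(cyc)]
```

In the compressed setting, the same rotation is done in constant extra time with rank and select on the bitvector of cycle lengths, after locating $i$ in $\pi_c$ via $\pi_c^{-1}$.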

### 4.4 Integer Functions

Munro and Rao [38] extended the results on permutations to arbitrary functions $f:[n]\rightarrow[n]$, and to their iterated application $f^k(i)$, the function iterated $k$ times starting at $i$. Their encoding is based on the decomposition of the function into a bijective part, represented as a permutation, and an injective part, represented as a forest of trees whose roots are elements of the permutation: the summary of the concept is that an integer function is just a “hairy permutation”. Combining the representation of permutations from [37] with any representation of trees supporting the level-ancestor operator and an iterator of the descendants at a given level yields a representation of an integer function using $(1+\epsilon)n\lg n+O(1)$ bits to support $f^k(i)$ in $O(1+|f^k(i)|)$ time, for any fixed $\epsilon>0$, integer $k$ and $i\in[n]$.

Janssen et al. [25] defined the degree entropy of an ordered tree $T$ with $n$ nodes, having $n_i$ nodes with $i$ children, as $H^*(T)=H(\langle n_0,n_1,\ldots\rangle)$, and proposed a succinct data structure for $T$ using $nH^*(T)+o(n)$ bits to encode the tree and support, among others, the level-ancestor operator. Obviously, the definition and encoding can be generalized to a forest of trees by simply adding one node whose children are the roots of the trees.

Encoding the injective parts of the function using Janssen et al.’s [25] succinct encoding, and the bijective parts of the function using one of our permutation encodings, yields a compressed representation of any integer function which supports its application and the application of its iterated variants in small time.

{corollary}

There is a representation of a function $f:[n]\rightarrow[n]$ that uses $n(2+H(\mathrm{Runs}(\pi)))+nH^*(T)+o(n\lg n)$ bits to support $f^k(i)$ in $O(1+\lg\rho+|f^k(i)|)$ time, for any integer $k$ and for any $i\in[n]$, where $T$ is the forest representing the injective part of the function, $\pi$ is the permutation representing its bijective part, and $\rho$ is the number of runs in $\pi$.

## 5 Conclusion

Bentley and Yao [8], when introducing a family of search algorithms adaptive to the position of the element searched (aka the “unbounded search” problem), did so through the definition of a family of adaptive codes for unbounded integers, hence proving that the link between algorithms and encodings was not limited to the complexity lower bounds suggested by information theory.

In this paper, we have considered the relation between the difficulty measures of adaptive sorting algorithms and some measures of “entropy” for compression techniques on permutations. In particular, we have shown that some concepts originally defined for adaptive sorting algorithms, such as runs and shuffled upsequences, are useful in terms of the compression of permutations; and conversely, that concepts originally defined for data compression, such as the entropy of the sets of sizes of runs, are a useful addition to the set of difficulty measures that one can consider in the study of adaptive algorithms.

It is easy to generalize our results on runs and strict runs to take advantage of permutations which are a mix of up and down runs or strict runs (e.g. $(1,3,5,6,4,2)$), with only a linear extra computational and/or space cost. The generalization of our results on shuffled upsequences to SMS [28], permutations containing mixes of subsequences sorted in increasing and decreasing orders (e.g. $(1,6,2,5,3,4)$), is slightly more problematic, because it is NP-hard to optimally decompose a permutation into such subsequences [26], but any approximation scheme [28] would yield a good encoding.

Refer to the associated technical report [7] for a longer version of this paper, in particular including all the proofs.

### Footnotes

1. Second author partially funded by Fondecyt Grant 1-080019, Chile.
2. In this paper we use the notations $\lg x=\log_2 x$ and $[x]=\{1,\ldots,x\}$.

### References

1. D. Arroyuelo, G. Navarro, and K. Sadakane. Reducing the space requirement of LZ-index. In Proc. 17th CPM, LNCS 4009, pages 319–330, 2006.
2. R. Baeza-Yates. A fast set intersection algorithm for sorted sequences. In Proc. 15th CPM, LNCS 3109, pages 400–408, 2004.
3. R. Baeza-Yates and B. Ribeiro. Modern Information Retrieval. Addison-Wesley, 1999.
4. J. Barbay, A. Golynski, J. I. Munro, and S. S. Rao. Adaptive searching in succinctly encoded binary relations and tree-structured documents. Theor. Comp. Sci., 2007.
5. J. Barbay, M. He, J. I. Munro, and S. S. Rao. Succinct indexes for strings, binary relations and multi-labeled trees. In Proc. 18th SODA, pages 680–689, 2007.
6. J. Barbay, A. López-Ortiz, and T. Lu. Faster adaptive set intersections for text searching. In Proc. 5th WEA, LNCS 4007, pages 146–157, 2006.
7. J. Barbay and G. Navarro. Compressed representations of permutations, and applications. Technical Report TR/DCC-2008-18, Department of Computer Science (DCC), University of Chile, December 2008.
8. J. L. Bentley and A. C.-C. Yao. An almost optimal algorithm for unbounded searching. Inf. Proc. Lett., 5(3):82–87, 1976.
9. N. Brisaboa, A. Fariña, S. Ladra, and G. Navarro. Reorganizing compressed text. In Proc. 31st SIGIR, pages 139–146, 2008.
10. D. Clark. Compact Pat Trees. PhD thesis, University of Waterloo, Canada, 1996.
11. F. Claude and G. Navarro. Practical rank/select queries over arbitrary sequences. In Proc. 15th SPIRE, LNCS 5280, pages 176–187, 2008.
12. C. Cook and D. Kim. Best sorting algorithm for nearly sorted lists. Comm. ACM, 23:620–624, 1980.
13. J. Culpepper and A. Moffat. Compact set representation for information retrieval. In Proc. 14th SPIRE, pages 137–148, 2007.
14. V. Estivill-Castro and D. Wood. A survey of adaptive sorting algorithms. ACM Comp. Surv., 24(4):441–476, 1992.
15. P. Ferragina, G. Manzini, V. Mäkinen, and G. Navarro. Compressed representations of sequences and full-text indexes. ACM Trans. on Algorithms (TALG), 3(2):article 20, 2007.
16. M. L. Fredman. On computing the length of longest increasing subsequences. Discrete Math., 11:29–35, 1975.
17. A. Golynski. Optimal lower bounds for rank and select indexes. In Proc. 33rd ICALP, LNCS 4051, pages 370–381, 2006.
18. A. Golynski, J. I. Munro, and S. S. Rao. Rank/select operations on large alphabets: a tool for text indexing. In Proc. 17th SODA, pages 368–373, 2006.
19. R. Grossi, A. Gupta, and J. Vitter. High-order entropy-compressed text indexes. In Proc. 14th SODA, pages 841–850, 2003.
20. R. Grossi and J. Vitter. Compressed suffix arrays and suffix trees with applications to text indexing and string matching. SIAM J. on Computing, 35(2):378–407, 2006.
21. L. Guibas, E. McCreight, M. Plass, and J. Roberts. A new representation of linear lists. In Proc. 9th STOC, pages 49–60, 1977.
22. H. Heaps. Information Retrieval - Computational and Theoretical Aspects. Academic Press, NY, 1978.
23. T. Hu and A. Tucker. Optimal computer-search trees and variable-length alphabetic codes. SIAM J. of Applied Mathematics, 21:514–532, 1971.
24. D. Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the I.R.E., 40(9):1090–1101, 1952.
25. J. Jansson, K. Sadakane, and W.-K. Sung. Ultra-succinct representation of ordered trees. In Proc. 18th SODA, pages 575–584, 2007.
26. A. E. Kézdy, H. S. Snevily, and C. Wang. Partitioning permutations into increasing and decreasing subsequences. J. Comb. Theory Ser. A, 73(2):353–359, 1996.
27. D. E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, 2nd edition, 1998.
28. C. Levcopoulos and O. Petersson. Sorting shuffled monotone sequences. Inf. Comp., 112(1):37–50, 1994.
29. V. Mäkinen and G. Navarro. Succinct suffix arrays based on run-length encoding. Nordic J. of Computing, 12(1):40–66, 2005.
30. V. Mäkinen and G. Navarro. Implicit compression boosting with applications to self-indexing. In Proc. 14th SPIRE, LNCS 4726, pages 214–226, 2007.
31. V. Mäkinen and G. Navarro. Rank and select revisited and extended. Theor. Comp. Sci., 387(3):332–347, 2007.
32. H. Mannila. Measures of presortedness and optimal sorting algorithms. IEEE Trans. Comput., 34(4):318–325, 1985.
33. G. Manzini. An analysis of the Burrows-Wheeler transform. J. of the ACM, 48(3):407–430, 2001.
34. K. Mehlhorn. Sorting presorted files. In Proc. 4th GI-Conference on Theoretical Computer Science, LNCS 67, pages 199–212, 1979.
35. E. Moura, G. Navarro, N. Ziviani, and R. Baeza-Yates. Fast and flexible word searching on compressed text. ACM Trans. on Information Systems (TOIS), 18(2):113–139, 2000.
36. I. Munro. Tables. In Proc. 16th FSTTCS, LNCS 1180, pages 37–42, 1996.
37. J. I. Munro, R. Raman, V. Raman, and S. S. Rao. Succinct representations of permutations. In Proc. 30th ICALP, LNCS 2719, pages 345–356, 2003.
38. J. I. Munro and S. S. Rao. Succinct representations of functions. In Proc. 31st ICALP, LNCS 3142, pages 1006–1015, 2004.
39. G. Navarro and V. Mäkinen. Compressed full-text indexes. ACM Comp. Surv., 39(1):article 2, 2007.
40. R. Raman, V. Raman, and S. Rao. Succinct indexable dictionaries with applications to encoding k-ary trees and multisets. In Proc. 13th SODA, pages 233–242, 2002.
41. K. Sadakane. New text indexing functionalities of the compressed suffix arrays. J. of Algorithms, 48(2):294–313, 2003.
42. P. Sanders and F. Transier. Intersection in integer inverted indices. In Proc. 9th ALENEX, 2007.
43. J. Sirén, N. Välimäki, V. Mäkinen, and G. Navarro. Run-length compressed indexes are superior for highly repetitive sequence collections. In Proc. 15th SPIRE, LNCS 5280, pages 164–175, 2008.
44. S. S. Skiena. Encroaching lists as a measure of presortedness. BIT, 28(4):775–784, 1988.
45. D. Sleator and R. Tarjan. Self-adjusting binary search trees. J. of the ACM, 32(3):652–686, 1985.
46. I. Witten, A. Moffat, and T. Bell. Managing Gigabytes. Morgan Kaufmann, 2nd edition, 1999.