Layered Working-Set Trees


Prosenjit Bose, Karim Douïeb, Vida Dujmović and John Howat. School of Computer Science, Carleton University. {jit,karim,vida,jhowat}@cg.scs.carleton.ca. This research was partially supported by NSERC and MRI.
Abstract

The working-set bound [Sleator and Tarjan, J. ACM, 1985] roughly states that searching for an element is fast if the element was accessed recently. Binary search trees, such as splay trees, can achieve this property in the amortized sense, while data structures that are not binary search trees are known to have this property in the worst case. We close this gap and present a binary search tree called a layered working-set tree that guarantees the working-set property in the worst case. The unified bound [Bădoiu et al., TCS, 2007] roughly states that searching for an element is fast if it is near (in terms of rank distance) to a recently accessed element. We show how layered working-set trees can be used to achieve the unified bound to within a small additive term in the amortized sense while maintaining in the worst case an access time that is both logarithmic and within a small multiplicative factor of the working-set bound.

1 Introduction

Let $S$ be a set of $n$ keys from a totally ordered universe and let $\sigma = \sigma_1 \sigma_2 \cdots \sigma_m$ be a sequence of elements from $S$. Typically, one is required to store the elements of $S$ in some data structure such that accessing the elements of $S$ in the order defined by $\sigma$ is “fast.” Here, “fast” can be defined in many different ways, some focusing on worst-case access times and others on amortized access times. For example, the search times of splay trees [8] can be stated in terms of the rank difference between the current and previous elements of $\sigma$; this is the dynamic finger property [3, 4].

If $x$ is the $i$-th element of $\sigma$, we say that $x$ is accessed at time $i$ in $\sigma$. The working-set number of $x$ at time $i$, denoted $w_i(x)$, is the number of distinct elements accessed since the last time $x$ was accessed or inserted, or $n$ if $x$ is either not in $S$ or has not been accessed by time $i$.

The working-set property states that the time to access $x$ at time $i$ is $O(\log w_i(x))$.¹ Splay trees were shown by Sleator and Tarjan [8] to have the working-set property in the amortized sense. One drawback of splay trees, however, is that most of their access bounds hold only in an amortized sense. While the amortized cost of a query can be stated in terms of its rank difference from the previous query or the number of distinct queries since that query was last made, any particular operation could take $\Theta(n)$ time. In order to address this situation, attention has turned to finding data structures that maintain the distribution-sensitive properties of splay trees but guarantee good performance in the worst case.

¹In this paper, $\log x$ is defined to be $\max\{1, \log_2 x\}$.
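To make the definition concrete, the following small sketch (ours, not from the paper) computes the working-set number of every access in a sequence; it returns None for first accesses, where the working-set number is taken to be $n$.

```python
def working_set_numbers(sigma):
    """For each access in sigma, return the number of distinct
    elements accessed since that element's previous access, or
    None on a first access (working-set number n in the text)."""
    last_seen = {}  # element -> index of its most recent access
    out = []
    for i, x in enumerate(sigma):
        if x in last_seen:
            # distinct elements accessed strictly between the two accesses
            out.append(len(set(sigma[last_seen[x] + 1:i])))
        else:
            out.append(None)
        last_seen[x] = i
    return out
```

For the sequence a, b, c, a, b, the last two accesses each have working-set number 2, so under the working-set bound both can be served in $O(\log 2)$ time even though the structure holds more elements.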

The data structure of Bădoiu et al. [2], called the working-set structure, guarantees this property in the worst case. However, this data structure departs from the binary search tree model and is instead a collection of binary search trees and queues.

Bădoiu et al. [2] also describe a data structure called the unified structure that achieves the unified property, which states that searching for $\sigma_i$ at time $i$ takes $O(\log \min_j (w_i(\sigma_j) + d(\sigma_i, \sigma_j) + 2))$ time, where $d(x, y)$ is the rank difference between $x$ and $y$. Again, this data structure is not a binary search tree. The skip-splay algorithm of Derryberry and Sleator [6] fits into the binary search tree model and comes within a small additive term of the unified bound in an amortized sense.

Our Results.

We present a binary search tree that is capable of searching for a query $x$ at time $t$ in worst-case time $O(\log w_t(x))$ and performs insertions and deletions in worst-case time $O(\log n)$, where $n$ is the number of keys stored by the tree at the time of the operation. This fills the gap between binary search trees that offer these query times only in an amortized sense and data structures that guarantee these query times in the worst case but do not fit in the binary search tree model. We also show how to use this binary search tree to achieve the unified bound to within a small additive term in the amortized sense while maintaining in the worst case an access time that is both logarithmic and within a small multiplicative factor of the working-set bound.

Organization.

The rest of this paper is organized in the following way. We complete the introduction by summarizing the way the working-set structure of Bădoiu et al. [2] operates, since this will play a key role in our binary search tree. In Section 2, we describe our binary search tree and explain the way in which operations are performed. In Section 3, we show how to combine our results with those of Derryberry and Sleator [6] on the unified bound to achieve an improved worst-case search cost. We conclude with Section 4 which summarizes our results and explains possible directions for future research.

1.1 The Working-Set Structure

We now describe the working-set structure of Bădoiu et al. [2]. The structure maintains a dynamic set under the operations Insert, Delete and Search. Denote by $S_t$ the set of keys stored in the data structure at time $t$, and let $n = |S_t|$.

The structure is composed of $k$ balanced binary search trees $T_1, T_2, \ldots, T_k$ and the same number of doubly linked lists $L_1, L_2, \ldots, L_k$. For any $i$, the contents of $T_i$ and $L_i$ are identical, and pointers (in both directions) are maintained between their common elements. Every element in the set is contained in exactly one tree and in its corresponding list. For $1 \le i < k$, the size of $T_i$ and $L_i$ is $2^{2^i}$, whereas the size of $T_k$ and $L_k$ is at most $2^{2^k}$; hence $k = O(\log \log n)$. Figure 1 shows a schematic of the structure.

Figure 1: The working-set structure of Bădoiu et al. [2]. The pointers between corresponding elements in $T_i$ and $L_i$ are not shown.

The working-set structure achieves its stated query time of $O(\log w_t(x))$ by ensuring that elements with small working-set numbers are stored in the small trees at the front of the structure; more precisely, an element $x \in T_i$ with $i > 1$ satisfies $w_t(x) \ge 2^{2^{i-1}}$. Every list $L_i$ orders the elements of $T_i$ by the time of their last access, starting with the youngest (most recently accessed) and ending with the oldest (least recently accessed).

Operations in the working-set structure are facilitated by an operation called a shift. A shift is performed between two trees $T_i$ and $T_j$. Assume $i < j$, since the other case is symmetric. To perform a shift, we begin at $T_i$. We look in $L_i$ to determine the oldest element of $T_i$, remove it from $L_i$ and delete it from $T_i$. We then insert it into $T_{i+1}$ and $L_{i+1}$ (as the youngest element) and repeat the process by shifting from $T_{i+1}$ to $T_j$. This process continues until we attempt to shift from one tree to itself. Observe that a shift causes the size of $T_i$ to decrease by one and the size of $T_j$ to increase by one. All of the trees between $T_i$ and $T_j$ end up with the same size as before, but the elements contained in them change, since the oldest element from the previous tree is always added as the youngest element of the next tree.

We are now ready to describe how to make queries in the working-set structure. To search for an element $x$, we search sequentially in $T_1, T_2, \ldots, T_k$ until we find $x$ or search all of the trees and fail to find $x$. If $x \notin T_i$ for every $i$, then we will search every tree at a total cost of $O(\log n)$ and then report that $x$ is not in the structure. Otherwise, assume $x \in T_j$. We delete $x$ from $T_j$ and $L_j$ and insert it into $T_1$, placing it at the front of $L_1$. We now have that the size of $T_1$ and $L_1$ has increased by one and the size of $T_j$ and $L_j$ has decreased by one. We therefore perform a shift from $T_1$ to $T_j$ to restore the sizes of the trees and lists. The time required for a search is dominated by the search time in $T_j$, which is $O(\log 2^{2^j}) = O(2^j)$. Observe that if $x \in T_j$ and $j > 1$, then $x$ must have been removed as the oldest element from $T_{j-1}$, at which point at least $2^{2^{j-1}}$ distinct queries had been made since the last access to $x$. Therefore, $w_t(x) \ge 2^{2^{j-1}}$ and so the search time is $O(2^j) = O(\log w_t(x))$.

Insertions are performed by inserting the element into $T_1$ and $L_1$ (as the youngest element). Again, this causes $T_1$ and $L_1$ to be too large. Since no other tree has space for one more element, we must shift to the last tree $T_k$. Thus, a shift from $T_1$ to $T_k$ is performed at total cost $O(\log n)$. Note that it is possible that a new tree $T_{k+1}$ may need to be created if the size of $T_k$ grows past $2^{2^k}$. Deletions are performed by first searching for the element to be deleted. Once found, say in $T_j$, it is removed from $T_j$ and $L_j$. To restore these sizes, we perform a shift from $T_k$ to $T_j$ at total cost $O(\log n)$. If the last tree becomes empty, it can be removed.
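The layer bookkeeping of these operations can be sketched in a toy model (ours, not the paper's implementation): each balanced tree $T_i$ is modelled as a Python set (a search in the real $T_i$ costs $O(\log |T_i|)$) and each list $L_i$ as a deque holding the youngest element at the left.

```python
from collections import deque

class WorkingSetStructure:
    """Toy model of the working-set structure of Badoiu et al.:
    trees[i] stands in for T_{i+1}, queues[i] for L_{i+1}."""

    def __init__(self):
        self.trees = []
        self.queues = []

    def _cap(self, i):
        return 2 ** (2 ** (i + 1))  # |T_{i+1}| is at most 2^(2^(i+1))

    def _new_level(self):
        self.trees.append(set())
        self.queues.append(deque())

    def _shift(self, i, j):
        """Shift down from tree i to tree j: repeatedly move the
        oldest element of a tree into the next tree as its youngest."""
        while i < j:
            oldest = self.queues[i].pop()
            self.trees[i].remove(oldest)
            self.trees[i + 1].add(oldest)
            self.queues[i + 1].appendleft(oldest)
            i += 1

    def insert(self, x):
        if not self.trees:
            self._new_level()
        self.trees[0].add(x)
        self.queues[0].appendleft(x)
        self._shift(0, len(self.trees) - 1)
        if len(self.trees[-1]) > self._cap(len(self.trees) - 1):
            self._new_level()  # the last tree overflowed
            self._shift(len(self.trees) - 2, len(self.trees) - 1)

    def search(self, x):
        """Return the index of the tree x was found in (0-based) or
        None; on success, move x to the front and shift to restore sizes."""
        for j, tree in enumerate(self.trees):
            if x in tree:
                tree.remove(x)
                self.queues[j].remove(x)
                self.trees[0].add(x)
                self.queues[0].appendleft(x)
                self._shift(0, j)
                return j
        return None
```

After inserting 0 through 7, the elements 4–7 sit in the first tree (capacity 4) and 0–3 in the second; searching for 0 moves it to the front and shifts the oldest front element down, exactly as in the description above.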

2 The Binary Search Tree

In this section, we describe a binary search tree that has the working-set property in the worst case.

2.1 Model

Recall the binary search tree model of Wilber [10]. Each node of the tree stores the key associated with it and has a pointer to its left and right children and to its parent. The keys stored in the tree are from a totally ordered universe and are stored such that at any node, all of the keys in the left subtree are less than the key stored at the node and all of the keys in the right subtree are greater than the key stored at the node. Furthermore, each node may keep a constant² amount of additional information called fields, but no additional pointers may be stored.

²By standard convention, $O(\log n)$ bits are considered to be “constant.”

To perform an access to a key, we are given a pointer initialized to the root of the tree. An access consists of moving this pointer from a node to one of its adjacent nodes (through the parent pointer or one of the children pointers) until the pointer reaches the desired key. Along the way, we are allowed to update the fields and pointers in any nodes that the pointer reached. The access cost is the number of nodes reached by the pointer.
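This model can be made concrete with a short sketch; the class and field names below are our own illustration, not part of Wilber's definition.

```python
class Node:
    """A node in the binary search tree model: a key, pointers to the
    parent and the two children, and O(1) words of extra fields."""
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent
        self.left = None
        self.right = None
        self.fields = {}  # the constant-sized auxiliary information

def access(root, key):
    """Move the single pointer from the root towards `key`; the access
    cost is the number of nodes the pointer reaches."""
    node, cost = root, 1
    while node is not None and node.key != key:
        node = node.left if key < node.key else node.right
        cost += 1
    if node is None:
        raise KeyError(key)
    return cost
```

In a real implementation the loop may also move upwards through parent pointers and rewrite fields along the way; only the count of reached nodes matters for the access cost.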

2.2 Tree Decomposition

Our binary search tree adapts the working-set structure described in the previous section to the binary search tree model. Let $T$ denote the binary search tree as a whole. At a high level, our binary search tree layers the trees of the working-set structure together to form $T$, and then augments the nodes with enough information to recover which element is the oldest in each layer at any given time.

Consider a labelling of $T$ where each node has a label from $\{1, 2, \ldots, k\}$ such that no node has an ancestor with a label greater than its own label. This labelling partitions the nodes of $T$. We say that the nodes with label $i$ form a layer $P_i$. A layer $P_i$ will play the same role as $T_i$ in the working-set structure. Like $T_i$, $P_i$ contains exactly $2^{2^i}$ elements for $1 \le i < k$, and $P_k$ contains the remaining elements. Unlike $T_i$, $P_i$ is typically a collection of subtrees of $T$. We refer to a subtree of a layer as a layer-subtree. Figure 2 shows this decomposition. Every node $x$ stores as a field the value $i$ such that $x \in P_i$, which we denote by $\mathrm{layer}(x)$. We also record the total number of layers $k$ and the size of $P_k$ as fields of the root.

Figure 2: The decomposition of the tree into layers. Here, the layer-subtrees of a layer $P_i$ are denoted $P_i^1, P_i^2, \ldots$. Observe that a layer-subtree of $P_i$ can be connected to any layer $P_j$ with $j < i$. In the case shown, all of the elements of the layer-subtree are less than the elements in $P_j$, and so the layer-subtree must be connected to a leaf of $P_j$.

Each layer-subtree is maintained independently as a tree that guarantees that each of its nodes has depth logarithmic in the size of the layer-subtree. This can be done using, e.g., a red-black tree [1, 7]. By “independently,” we mean that the balance criteria are applied only to the elements within one layer-subtree.

Our first observation concerns the depth of a node in a given layer.

Lemma 1.

The depth of a node $x \in P_j$ in $T$ is $O(2^j)$.

Proof.

In the worst case, we must traverse a layer-subtree of each of $P_1, P_2, \ldots, P_{j-1}$ to reach $P_j$ and then locate $x$ in a layer-subtree of $P_j$. Each layer $P_i$ has size at most $2^{2^i}$ and thus each layer-subtree we pass through has size at most $2^{2^i}$. Since each layer-subtree guarantees depth logarithmic in the size of the layer-subtree and thus the layer, the total depth is $O\!\left(\sum_{i=1}^{j} \log 2^{2^i}\right) = O\!\left(\sum_{i=1}^{j} 2^i\right) = O(2^j)$. ∎

The main obstacle in creating our tree comes from the fact that the core operations are performed on subtrees rather than on whole trees, as is the case for the working-set structure. Consequently, standard red-black tree operations cannot be used for the operations spanning more than one layer described in Section 2.4. We break the operations into those restricted to one layer, those spanning two neighbouring layers, and finally those performed on the tree as a whole. These operations are described in the following sections.

Another difficulty arises from having to implement the queues of the working-set structure in the binary search tree model. The queues are needed in order to determine the oldest element in a layer at any given time.

We encode the linked lists in our tree as follows. Each node $x$ stores the key of the node inserted into its layer directly before and after it. This information is stored in the fields $\mathit{before}(x)$ and $\mathit{after}(x)$, respectively. We also store a key value in the field $\mathit{other}(x)$. If $x$ is the oldest element in layer $P_i$, then no element was inserted before it and so we set $\mathit{before}(x) = \mathit{nil}$. In this case, we use $\mathit{other}(x)$ to store the key of the oldest element in layer $P_{i+1}$. Similarly, if $x$ is the youngest element in layer $P_i$, then no element was inserted after it and so we set $\mathit{after}(x) = \mathit{nil}$ and use $\mathit{other}(x)$ to store the key of the youngest element in layer $P_{i+1}$. If $x$ is neither the youngest nor the oldest element in its layer, then we have $\mathit{other}(x) = \mathit{nil}$.
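This encoding can be sketched as follows; the names `Rec`, `encode_layers` and `oldest_in_layer` are ours, and the sketch assumes every layer holds at least two elements (so the oldest and youngest of a layer are distinct nodes).

```python
class Rec:
    """Per-node queue fields: `before`/`after` hold the keys inserted
    directly before/after this node in its layer; `other` links the
    oldest (resp. youngest) node of a layer to the oldest (resp.
    youngest) node of the next layer."""
    def __init__(self):
        self.before = self.after = self.other = None  # None plays nil

def encode_layers(layers):
    """layers: list of key lists, each ordered oldest -> youngest.
    Returns the field table and the oldest key of layer 1."""
    f = {}
    for li, keys in enumerate(layers):
        for pos, k in enumerate(keys):
            r = f[k] = Rec()
            if pos > 0:
                r.before = keys[pos - 1]
            if pos + 1 < len(keys):
                r.after = keys[pos + 1]
        if li + 1 < len(layers):
            # the extreme nodes carry the cross-layer links
            f[keys[0]].other = layers[li + 1][0]    # oldest -> oldest
            f[keys[-1]].other = layers[li + 1][-1]  # youngest -> youngest
    return f, layers[0][0]

def oldest_in_layer(f, first, j):
    """Follow `other` links from the oldest key of layer 1 down to the
    oldest key of layer j (1-based)."""
    k = first
    for _ in range(j - 1):
        k = f[k].other
    return k
```

In the real tree each hop of `oldest_in_layer` additionally requires a root-to-node search for the stored key, which is what the cost analysis of Lemma 3 accounts for.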

Before we describe how operations are performed on this binary search tree, we must make a brief note on storage. By the above description, each node stores three pointers (parent and children) and a key, as per the usual binary search tree model. The root also maintains the number of layers $k$ and the size of $P_k$. In addition, we must store the layer number, balance information (one bit for red-black trees) and three additional key values (exactly one of which is $\mathit{nil}$): $\mathit{before}(x)$, $\mathit{after}(x)$ and $\mathit{other}(x)$. If keys are assumed to be of size $O(\log n)$, then it is clear our binary search tree fits the model of Section 2.1. Note that we are storing key values, not pointers. Given a key value stored at a node, we do not have a pointer to the corresponding node, so we must instead search for it by traversing to the root and performing a standard search in a binary search tree. If keys have size $\omega(\log n)$, it is true that we use more than $O(\log n)$ additional bits per node. However, since any node would then store a key of size $\omega(\log n)$, we are only increasing the size of a node by a constant factor.

2.3 Intra-Layer Operations

The operations we perform within a single layer are essentially the same as those we perform on any balanced binary search tree. We need notions of restoring balance after insertions and deletions and of splitting and joining. As mentioned before, we are not necessarily restricting ourselves to using any particular implementation of layer-subtrees. Instead, we will state the intra-layer operations and the required time bounds, and then show how red-black trees [1, 7] can be used to fulfill this role. Other binary search trees that meet the requirements of each operation could also be used. Layer-subtrees must also ensure that their operations do not leave the layer-subtree; this can be done by checking the layer number of a node before visiting it.

Intra-layer operations rearrange layer-subtrees in some way. Observe that layer-subtrees hanging off a given node are maintained even after rearranging the layer-subtree, since the roots of such layer-subtrees can be viewed as the results of unsuccessful searches. Therefore, when describing these operations, we need not concern ourselves with explicitly maintaining layer-subtrees below the current one.

In our binary search tree $T$, for each node $x$ in a layer-subtree, we define the following operations. They are straightforward, but are mentioned here for completeness and as a basis for the operations performed between layers.

FixUpInsertion(x). This operation is responsible for ensuring that each node of the layer-subtree has depth logarithmic in the size of the layer-subtree after the node $x$ has been inserted into the layer-subtree. For red-black trees, this operation is precisely the RB-Insert-Fixup operation presented by Cormen et al. [5, Section 13.3]. Although the version presented there does not handle colouring $x$, it is straightforward to modify it to do so.

FixUpDeletion(x). This operation is responsible for ensuring that each node of the layer-subtree has depth logarithmic in the size of the layer-subtree after a deletion in the layer-subtree. The exact node $x$ given to the operation is implementation dependent. For red-black trees, this operation is precisely the RB-Delete-Fixup operation presented by Cormen et al. [5, Section 13.4]. In this case, the node $x$ is the child of the node spliced out by the deletion algorithm; we will elaborate on this when describing the layer operations in Section 2.4.

Split(x). This operation will cause the node $x$ to be moved to the root of its layer-subtree. The rest of the layer-subtree will be split between the left and right sides of $x$ such that each side is independently balanced and thus guarantees logarithmic depth of its respective nodes; this may mean that the layer-subtree is no longer balanced as a whole. For red-black trees, this operation is described by Tarjan [9, Chapter 4], except that we do not destroy the original trees, but rather stop when $x$ is the root of the layer-subtree.

Join(x). This operation is the inverse of Split(x): given a node $x$ at the root of a layer-subtree whose sides are independently balanced, we will restructure the layer-subtree to consist of $x$ at the root and the remaining elements in subtrees rooted at the children of $x$ such that all nodes in the layer-subtree have depth logarithmic in the size of the layer-subtree. For red-black trees, this operation is described by Cormen et al. [5, Problem 13-2].

Lemma 2.

The operations FixUpInsertion(x), FixUpDeletion(x), Split(x) and Join(x) on a node $x$ in a layer-subtree of $P_i$ can be implemented to take worst-case time $O(2^i)$ when red-black trees are used as layer-subtrees.

Proof.

Immediate from the operations given by Cormen et al. [5] and Tarjan [9]. ∎

2.4 Inter-Layer Operations

The operations performed on layers correspond to the queue and shift operations of the working-set structure. The four operations performed on layers are YoungestInLayer(i) and OldestInLayer(i) for a layer $P_i$ and MoveUp(x) and MoveDown(x) for a node $x$.

As we did with the intra-layer operations, we will describe the requirements of the operations independently of the actual layer-subtree implementation. In fact, only the MoveDown(x) operation will require knowledge of the implementation of the layer-subtrees; the remaining operations simply make use of the operations defined in Section 2.3.

YoungestInLayer(i). This operation returns the key of the youngest node in layer $P_i$. We first examine all elements in $P_1$ (of which there are $O(1)$). Once we find the element that is the youngest (by looking for the element $y$ for which $\mathit{after}(y) = \mathit{nil}$), say $y_1$, we go back to the root and search for $\mathit{other}(y_1)$, which will bring us to the youngest element in $P_2$, say $y_2$. We then go back to the root and search for $\mathit{other}(y_2)$, and so on. This repeats until we find the youngest element in $P_i$, as desired. The process for OldestInLayer(i) is the same, except our initial search in $P_1$ is for the oldest element, i.e., the element $y$ for which $\mathit{before}(y) = \mathit{nil}$.

MoveUp(x). This operation will move $x$ from its current layer $P_i$ to the next higher layer $P_{i-1}$. To accomplish this, we first split $x$ to the root of its layer-subtree using Split(x). We remove $x$ from $P_i$ by setting $\mathrm{layer}(x) = i - 1$. We now must restore balance properties. Observe that, by the definition of split, both of the layer-subtrees rooted at the children of $x$ are balanced. Therefore, we only need to ensure the balance properties of the layer-subtree of $P_{i-1}$ that $x$ has joined. Since we have just inserted $x$ into the layer $P_{i-1}$, this can be done by performing the intra-layer operation FixUpInsertion(x). Finally, we must remove $x$ from the implicit queue structure of $P_i$ and place it in the implicit queue structure of $P_{i-1}$.

To do this, we look at both $\mathit{before}(x)$ and $\mathit{after}(x)$. If they are both non-$\mathit{nil}$, then we go to the root and perform searches for $\mathit{before}(x)$ and $\mathit{after}(x)$, setting $\mathit{after}(\mathit{before}(x)) = \mathit{after}(x)$ and $\mathit{before}(\mathit{after}(x)) = \mathit{before}(x)$. Otherwise, if only $\mathit{after}(x)$ is $\mathit{nil}$, then we conclude that $x$ is the youngest in its former layer. After removing it from that layer, $\mathit{before}(x)$ will be the new youngest element in that layer, so we go to the root, search for $\mathit{before}(x)$ and set $\mathit{after}(\mathit{before}(x)) = \mathit{nil}$. Since $\mathit{before}(x)$ is the youngest element in that layer, we also copy $\mathit{other}(x)$ into $\mathit{other}(\mathit{before}(x))$. We must also update the key stored by the youngest element in the next higher layer. In order to do this, we run YoungestInLayer to find this element, say $y$, and set $\mathit{other}(y) = \mathit{before}(x)$. The case for when only $\mathit{before}(x)$ is $\mathit{nil}$ is symmetric: the new oldest element in the layer is $\mathit{after}(x)$, so we set $\mathit{before}(\mathit{after}(x)) = \mathit{nil}$, we copy $\mathit{other}(x)$ into $\mathit{other}(\mathit{after}(x))$, and update the reference to the oldest element in this layer that is stored in the layer above in the same way as we did for the youngest.

We now must insert $x$ into the implicit queue structure of layer $P_{i-1}$. To do this, we search for the youngest node in $P_{i-1}$, say $y$. We then set $\mathit{after}(y) = x$, $\mathit{before}(x) = y$ and $\mathit{after}(x) = \mathit{nil}$ (the field $\mathit{other}(y)$ moves to $\mathit{other}(x)$, since $x$ is now the youngest element of the layer). We then go to the next higher layer and update its reference to the youngest element in this layer the same way we did before.

MoveDown(x). This operation will move $x$ from its current layer $P_i$ to the next lower layer $P_{i+1}$. We describe how to perform this operation for red-black trees; other implementations of the layer-subtrees will need to define different implementations but must respect the stated worst-case time bound of $O(2^i)$. Let $p$ denote the predecessor of $x$ in its layer-subtree. If $x$ does not have a predecessor in its layer-subtree, set $p = x$. Similarly, let $s$ denote the successor of $x$ in its layer-subtree, and if $x$ does not have a successor in its layer-subtree, set $s = x$. Our first goal is to move $x$ such that it becomes a leaf of its layer-subtree. If $x$ is not already a leaf in its layer-subtree, then $x$ has at least one child in its layer-subtree. To make it a leaf of its layer-subtree, we splice out the node $s$ by making the parent of $s$ point to the right child of $s$ instead of $s$ itself. Note that this is well-defined since $s$ has no left child in its layer-subtree as it is the smallest element greater than $x$. We then move $s$ to the location of $x$. Finally, we make $x$ a child of $p$ and make the new children of $s$ the old children of $x$. Figure 3 explains this process.

Figure 3: The first part of the MoveDown(x) operation. On the left is the initial layer-subtree and on the right is the layer-subtree after the nodes have been moved and layers changed but before the FixUpDeletion. The dotted lines to nodes and subtrees indicate layer boundaries and the dotted line over the old node $s$ indicates a splice.

Observe that we now have that $x$ is a leaf of its layer-subtree. The layer-subtree is configured exactly as if we had deleted $x$ using the deletion operation described by Cormen et al. [5, Section 13.4]. Therefore, we can perform FixUpDeletion(y), where $y$ is the (only) child of the spliced-out node $s$, to restore the balance properties of the nodes of the layer-subtree. Thus, $y$ is exactly the child of the node spliced out by the deletion, as required by the operation of Cormen et al. [5, Section 13.4].

To complete the movement to the next layer, we change the layer number of $x$ to $i + 1$ and execute Join(x) to create a single balanced layer-subtree from $x$ and its children.³ We then update the implicit queue structure as we did before. Observe that once $x$ has been removed from its original layer-subtree, layer-subtree balance has been restored because no node on that path was changed.

³Note that if these children have larger layer numbers than the new layer number for $x$, nothing is performed and $x$ becomes the lone element in its (new) layer-subtree; this follows from the fact that Join(x) only joins nodes that are in the same layer.

Lemma 3.

The operations YoungestInLayer(i) and OldestInLayer(i) for a layer $P_i$, and MoveUp(x) and MoveDown(x) for a node $x \in P_i$, each take worst-case time $O(2^i)$.

Proof.

The operations YoungestInLayer(i) and OldestInLayer(i) find the youngest (respectively oldest) element in each of the layers $P_1, P_2, \ldots, P_i$. Given the youngest (respectively oldest) element in layer $P_j$, we can determine the youngest (respectively oldest) element in layer $P_{j+1}$ in constant time since such an element maintains the key of the youngest (respectively oldest) element in the next layer. We then need to traverse from the root to that element. By Lemma 1, the total time is $O\!\left(\sum_{j=1}^{i} 2^j\right) = O(2^i)$.

The MoveUp(x) and MoveDown(x) operations, where $x \in P_i$, consist of searching for $x$, performing a constant number of intra-layer operations and then making a series of queries for the youngest elements in several layers and updating the queue structures. The search can be done in $O(2^i)$ time by Lemma 1 and the intra-layer operations each take $O(2^i)$ time by Lemma 2 for a total of $O(2^i)$. Finally, the queries for the youngest elements and the cost of updating the queues is dominated by the cost of the query in the deepest layer reached, since the cost of such a query doubles from one layer to the next. Since this layer is adjacent to $P_i$, this cost is $O(2^{i+1}) = O(2^i)$ by the above argument. The total cost of MoveUp(x) and MoveDown(x) is thus $O(2^i)$. ∎

2.5 Tree Operations

We are now ready to describe how to perform the operations Search, Insert and Delete on the tree as a whole. Such operations are independent of the layer-subtree implementation given the inter-layer and intra-layer operations defined in the previous sections.

To perform a search for $x$, we begin by performing the usual method of searching in a binary search tree. Once we have found $x$, say in layer $P_j$, we execute MoveUp(x) a total of $j - 1$ times to bring $x$ into $P_1$. We then restore the sizes of the layers as was done in the working-set structure. We run OldestInLayer(1) to find the oldest element $y_1$ in layer $P_1$ and then run MoveDown(y₁). We then perform the same operation in $P_2$ by running OldestInLayer(2) to find the oldest element $y_2$ in layer $P_2$, then run MoveDown(y₂). This process of moving elements down layer-by-layer continues until we reach a layer $P_i$ such that $|P_i| < 2^{2^i}$.⁴ Note that efficiency can be improved by remembering the oldest elements of previous layers instead of finding the oldest element in each of $P_1, \ldots, P_i$ when running OldestInLayer(i). Such an improvement does not alter the asymptotic running time, however.

⁴Note that for an ordinary search, we have $i = j$. However, thinking of the algorithm this way gives us a clean way to describe insertions.
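Ignoring the pointer structure and keeping only the layer bookkeeping, the search procedure can be sketched as follows; `ws_search` and the deque model of a layer are our simplification of the MoveUp/OldestInLayer/MoveDown sequence, not the actual tree code.

```python
from collections import deque

def ws_search(layers, x):
    """Layer-level outline of a search in the layered working-set
    tree: bring x to layer 1 (repeated MoveUp), then push the oldest
    element of each over-full layer down (OldestInLayer + MoveDown),
    stopping at the first non-full layer.  layers[i] is a deque with
    the youngest element at the left."""
    j = next(i for i, layer in enumerate(layers) if x in layer)
    layers[j].remove(x)           # j applications of MoveUp
    layers[0].appendleft(x)       # x is now the youngest of layer 1
    cap = lambda i: 2 ** (2 ** (i + 1))
    i = 0
    while len(layers[i]) > cap(i):        # layer i is over-full
        oldest = layers[i].pop()          # OldestInLayer
        layers[i + 1].appendleft(oldest)  # MoveDown
        i += 1
    return j  # 0-based index of the layer x was found in
```

The loop stops as soon as a layer has spare capacity, which for an ordinary search is exactly the layer from which $x$ was removed.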

To insert $x$ into the tree, we first examine the index $k$ and size of the deepest layer, which we have stored at the root. If $|P_k| = 2^{2^k}$, then we increment $k$ and set $|P_k| = 1$. Otherwise, if $|P_k| < 2^{2^k}$, we simply increment $|P_k|$. We now insert $x$ into the tree (ignoring layers for now) using the usual algorithm where $x$ is placed in the tree as a leaf. We set $\mathrm{layer}(x) = k + 1$ (i.e., a temporary layer larger than any other) and update the implicit queue structure for $x$ (and the youngest and oldest elements of $P_{k+1}$) as we did before. Finally, we run Search(x) to bring $x$ to $P_1$. Note that since Search stops moving down elements once the first non-full layer is reached, we do not place another element in layer $P_{k+1}$. Thus, this layer is now empty and we update the youngest and oldest elements in layer $P_k$ to indicate that there is no layer below.

To delete $x$ from the tree, we look at the total number of layers $k$ in the tree that is stored at the root. We then locate $x$ and perform MoveDown(x) a total of $k + 1 - \mathrm{layer}(x)$ times. This will cause $x$ to be moved to a new (temporary) layer $P_{k+1}$ that is guaranteed to have no other nodes in it. Therefore, $x$ must be a leaf of the tree, and we can simply remove it by setting the corresponding child pointer of its parent to $\mathit{nil}$. As was the case for insertion, this temporary layer is now empty and so we update the youngest and oldest elements in layer $P_k$ to indicate that there is no layer below. We then perform MoveUp operations for the youngest element of each layer from $\mathrm{layer}(x) + 1$ to $k$ to restore the sizes of the layers. At this point, it could be the case that $P_k$ is empty. If this happens, we decrement the number of layers which is stored at the root, and update the youngest and oldest elements in the new deepest layer to indicate that there is no layer below.

Theorem 4.

Searching for $x$ at time $t$ takes worst-case time $O(\log w_t(x))$ and insertion and deletion each take worst-case time $O(\log n)$.

Proof.

A search consists of a regular search in a binary search tree followed by several layer operations. Suppose $x \in P_j$ at time $t$. By Lemma 1, we can find $x$ in time $O(2^j)$. We then perform MoveUp(x) $j - 1$ times, in time $O\!\left(\sum_{i=1}^{j} 2^i\right) = O(2^j)$ by Lemma 3. We then run OldestInLayer and MoveDown operations for every layer from $P_1$ to $P_j$. By Lemma 3, this has total cost $O\!\left(\sum_{i=1}^{j} 2^i\right) = O(2^j)$. The total time is therefore $O(2^j)$. Observe that, by the same analysis as that of the working-set structure of Bădoiu et al. [2], we have that $w_t(x) \ge 2^{2^{j-1}}$, and so the search time is $O(\log w_t(x))$.

An insertion consists of traversing through all layers. By Lemma 1, this takes time $O(2^k) = O(\log n)$. We then perform a search at cost $O(\log n)$ by the above argument, since the element searched for is in the deepest layer. The total cost is thus $O(\log n)$.

A deletion consists of traversing the tree to find $x$ and then performing MoveDown and MoveUp at most once per layer. The traversal takes time $O(\log n)$ by Lemma 1 and the MoveDown and MoveUp operations cost $O(2^i)$ for layer $P_i$ by Lemma 3, for a total of $O\!\left(\sum_{i=1}^{k} 2^i\right) = O(2^k) = O(\log n)$. The total cost is thus $O(\log n)$. ∎

3 Skip-Splay and the Unified Bound

In this section, we show how to use layered working-set trees in the skip-splay structure of Derryberry and Sleator [6] in order to achieve the unified bound to within a small additive term. The unified bound [2] requires that the time to search for an element $x$ at time $t$ is

$$UB(x) = O\!\left(\log \min_{y} \left( w_t(y) + d(x, y) + 2 \right)\right),$$

where $w_t(y)$ is the working-set number of $y$ at time $t$ (as in Section 1) and $d(x, y)$ is defined as the rank distance between $x$ and $y$. This property implies the working-set and the dynamic finger properties. Informally, the unified bound states that an access is fast if the current access is close in terms of rank distance to some element that has been accessed recently. Bădoiu et al. [2] introduced a data structure achieving the unified bound in the amortized sense. This structure does not fit into the binary search tree model, but the splay tree [8], which does fit into this model, is conjectured to achieve the unified bound [2].
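As an illustration (ours, not from the paper), the argument of the unified bound can be computed by brute force; restricting the minimum to previously accessed elements suffices, since an unaccessed element $y$ has working-set number $n$ and cannot improve the minimum below that.

```python
def unified_bound_argument(sigma, i, ranks):
    """Value of min_y ( w_i(y) + d(sigma[i], y) + 2 ) over elements y
    accessed before time i, where d is rank distance and w_i(y) counts
    the distinct elements accessed since y's last access.  `ranks`
    maps each key to its rank in the stored set."""
    best = None
    for y in set(sigma[:i]):
        last = max(t for t in range(i) if sigma[t] == y)
        w = len(set(sigma[last + 1:i]))          # working-set number of y
        d = abs(ranks[sigma[i]] - ranks[y])      # rank distance to y
        cand = w + d + 2
        best = cand if best is None or cand < best else best
    return best
```

The unified bound is then $O(\log)$ of this quantity; e.g. for the sequence 1, 2, 3, 4 the access to 4 is cheap because 3 was accessed immediately before it and lies at rank distance 1.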

Recently, Derryberry and Sleator [6] developed the first binary search tree that guarantees an access time close to the unified bound. Their algorithm, called skip-splay, performs an access to the element $x$ in amortized time $O(UB(x) + \log \log n)$. Insertions and deletions are not supported. In the remainder of this section, we briefly describe skip-splay and then show how to modify it using the layered working-set tree presented in Section 2 in order to achieve a new bound in the binary search tree model.

The skip-splay algorithm works in the following way. Assume for simplicity that the tree $T$ stores the set $\{1, 2, \ldots, n\}$ where $n = 2^{2^k} - 1$ for some integer $k$ and that $T$ is initially perfectly balanced. Nodes of height $2^i$ (where the leaves of $T$ have height $1$) for $0 \le i < k$ are marked as the root of a subtree. Such nodes partition $T$ into a set of splay trees called auxiliary trees. Each auxiliary tree is maintained as an independent splay tree. Observe that the $i$-th auxiliary tree encountered on a path from the root to a leaf in $T$ has size less than $2^{2^{k-i}}$, so such a path meets $O(\log \log n)$ auxiliary trees. Define $T_x$ to be the auxiliary tree containing the node $x$.

To access an element $x$, we perform a standard binary search in $T$ to locate $x$. We then perform a series of splay operations on some of the auxiliary trees of $T$. We begin by splaying $x$ to the root of $T_x$ using the usual splay algorithm. If $x$ is now the root of $T$, the operation is complete. Otherwise, we skip to the new parent of $x$, say $y$, and splay $y$ to the root of $T_y$. This process is repeated until we reach the root of $T$.

By using layered working-set trees as auxiliary trees in place of splay trees, we can get the following result.

Theorem 5.

There exists a binary search tree that performs an access to the element $x$ in worst-case time $O(\log n)$ and in amortized time $O(UB(x) + \log \log n)$.

Proof.

As suggested by Derryberry and Sleator [6], instead of using splay trees to maintain the auxiliary trees, we could use any data structure that satisfies the working-set property. Thus, by maintaining the auxiliary trees as layered working-set trees, we straightforwardly guarantee an amortized time of $O(UB(x) + \log \log n)$ to search for an element $x$. Note that the splay in an auxiliary tree corresponds to the Search operation in our structure.

Now we show that this modified version of skip-splay has the additional property that the worst-case search time is $O(\log n)$. A search consists of traversing a maximum of $O(\log \log n)$ auxiliary trees, where the size of the $i$-th encountered auxiliary tree is less than $2^{2^{k-i}}$. In the worst case, the amount of work performed in such an auxiliary tree is $O(\log 2^{2^{k-i}}) = O(2^{k-i})$. Since the auxiliary trees are maintained independently from each other, the total worst-case search cost in the tree is $O\!\left(\sum_{i} 2^{k-i}\right) = O(2^k) = O(\log n)$. ∎

By doubling the access to an element, i.e., accessing each requested element twice in succession, we also obtain the following result.

Theorem 6.

The binary search tree described in Theorem 5 performs an access to the element $x$ at time $t$ in worst-case time $O(\log w_t(x) \log \log n)$.

Proof.

Doubling the access to an element increases its worst-case access time by at most a factor of two. Thus, the asymptotic performance of the structure still holds for both the worst-case access time and the amortized access time.

In order to reach an element $x$ in the tree, we have to traverse several auxiliary trees. Let $R_1, R_2, \ldots, R_u$ be the ordered sequence of auxiliary trees traversed during an access to the element $x$ (note that $u = O(\log \log n)$). The number of accesses performed independently in each of those trees is bounded above by two.

For $1 \le i < u$, define $d_t(R_i, R_{i+1})$ to be the distance between the root node of $R_i$ and the root node of $R_{i+1}$ in the structure at time $t$. More generally, define $d_t(R_i, y)$ as the distance between the root node of $R_i$ and the element $y$, where $y$ is a descendant of the root of $R_i$. Let $x^-$ (and $x^+$) be the greatest (smallest) element of $R_i$ that is smaller (greater) than any element in $R_{i+1}$. Thus the cost of accessing $x$ is $O\!\left(\sum_{i=1}^{u-1} d_t(R_i, R_{i+1}) + d_t(R_u, x)\right)$.

By the definition of a search tree we know that the parent of the root node of $R_{i+1}$ is either $x^-$ or $x^+$. Thus

$$d_t(R_i, R_{i+1}) \le \max\{d_t(R_i, x^-),\ d_t(R_i, x^+)\} + 1. \qquad (1)$$

When we access $x$ twice, we independently access both $x^-$ and $x^+$ in each traversed auxiliary tree $R_i$. By Theorem 4, we have

$$d_t(R_i, x^-) = O(\log w_t(x^-)) \quad \text{and} \quad d_t(R_i, x^+) = O(\log w_t(x^+)).$$

We also have that $x^-$ and $x^+$ were accessed recently relative to $x$, so their working-set numbers are bounded in terms of $w_t(x)$. Hence, by applying equation (1), the result follows. ∎

Note that this last property is not satisfied by the original unified structure [2]. Theorems 5 and 6 thus show the following.

Corollary 7.

There exists a binary search tree that performs an access to the element x in worst-case time O(min(log n, log w(x) · log log n)) and in amortized time O(UB(x) + log log n).
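To see how the two worst-case guarantees interact, the small computation below compares them numerically; the concrete expression min(log n, log w(x) · log log n) is our reading of Theorems 5 and 6, with constant factors omitted.

```python
import math

def worst_case_bound(n: int, w: int) -> float:
    """min(log n, log w * log log n): the logarithmic guarantee always
    holds, and the working-set guarantee wins when w is small."""
    return min(math.log2(n), math.log2(w) * math.log2(math.log2(n)))

n = 2 ** 32  # log2(n) = 32, log2(log2(n)) = 5
assert worst_case_bound(n, 4) == 10.0   # small working set: 2 * 5 beats 32
assert worst_case_bound(n, n) == 32.0   # large working set: log n takes over
```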

4 Conclusion and Open Problems

We have given the first binary search tree that guarantees the working-set property in the worst case. We have also shown how to combine this binary search tree with the skip-splay algorithm of Derryberry and Sleator [6] to achieve the unified bound to within a small additive term in the amortized sense, while maintaining in the worst case an access time that is both logarithmic and within a small multiplicative factor of the working-set bound. Several directions remain for future research.

For layered working-set trees, it seems that by forcing the working-set property to hold in the worst case, we sacrifice good performance on some other access sequences. Must a binary search tree that has the working-set property in the worst case give up other properties of splay trees? For example, what kind of scanning bound can we achieve if we require the working-set property in the worst case? It would also be interesting to bound the number of rotations performed per access. Can we guarantee at most O(log w(x)) rotations to access x? Red-black trees guarantee O(1) rotations per update, for instance.

For the results on the unified bound, the most obvious improvement would be to remove the additive O(log log n) term from the amortized access cost, as posed by Derryberry and Sleator [6]. Another improvement would be to remove the O(log log n) factor from the worst-case access cost.

Acknowledgements.

We thank Jonathan Derryberry and Daniel Sleator for sending us a preliminary version of their skip-splay paper [6] and Stefan Langerman for stimulating discussions.

References

  • Bayer [1972] R. Bayer. Symmetric binary B-trees: Data structure and maintenance algorithms. Acta Informatica, 1:290–306, 1972.
  • Bădoiu et al. [2007] Mihai Bădoiu, Richard Cole, Erik D. Demaine, and John Iacono. A unified access bound on comparison-based dynamic dictionaries. Theoretical Computer Science, 382(2):86–96, 2007.
  • Cole [2000] Richard Cole. On the dynamic finger conjecture for splay trees. Part II: The proof. SIAM J. Comput., 30(1):44–85, 2000.
  • Cole et al. [2000] Richard Cole, Bud Mishra, Jeanette Schmidt, and Alan Siegel. On the dynamic finger conjecture for splay trees. Part I: Splay sorting log n-block sequences. SIAM J. Comput., 30(1):1–43, 2000.
  • Cormen et al. [2001] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, 2nd edition, 2001.
  • Derryberry and Sleator [2009] Jonathan C. Derryberry and Daniel D. Sleator. Skip-splay: Toward achieving the unified bound in the BST model. In WADS ’09: Proceedings of the 11th International Workshop on Algorithms and Data Structures, 2009.
  • Guibas and Sedgewick [1978] Leonidas J. Guibas and Robert Sedgewick. A dichromatic framework for balanced trees. In FOCS ’78: Proceedings of the 19th Annual IEEE Symposium on Foundations of Computer Science, pages 8–21, 1978.
  • Sleator and Tarjan [1985] Daniel Dominic Sleator and Robert Endre Tarjan. Self-adjusting binary search trees. J. ACM, 32(3):652–686, 1985.
  • Tarjan [1983] Robert Endre Tarjan. Data structures and network algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1983.
  • Wilber [1989] Robert Wilber. Lower bounds for accessing binary search trees with rotations. SIAM Journal on Computing, 18(1):56–67, 1989.