A Dynamic I/O-Efficient Structure
for One-Dimensional Top-k Range Reporting†
†This paper supersedes an earlier version on arXiv with the title “On Top-k Search and Range Reporting”.
Abstract
We present a structure in external memory for top-k range reporting that uses linear space, answers a query in O(log_B n + k/B) I/Os, and supports an update in O(log_B n) amortized I/Os, where n is the input size and B is the block size. This improves the previous state of the art, which incurs a higher amortized cost per update.
1 Introduction
In the top-k range reporting problem, the input is a set of points in , where each point carries a distinct real-valued score, denoted as . (Distinctness is a standard assumption [1, 14] that guarantees the uniqueness of a top-k result; see [14] for two semantic extensions that remove the assumption, and for how to reduce those extensions to the standard top-k problem with distinct weights.) Given an interval and an integer , a query returns the points in with the highest scores. If , the entire should be returned. The goal is to store in a structure so that queries can be answered efficiently.
Motivation. Top-k search in general is widely acknowledged as an important operation in a large variety of information systems (see the excellent survey [9]). It plays a central role in applications where an end user wants only a small number of elements of the best competitive quality, as opposed to all the elements satisfying a query predicate. Top-k range reporting—an extension of classic range reporting—is one of the most fundamental forms of top-k search. A representative query on a hotel database is “find the 10 best-rated hotels whose prices are between 100 and 200 dollars per night”. Here, each point represents the price of a hotel, and its score corresponds to the hotel’s user rating. In fact, queries like the above are so popular that database systems nowadays strive to make them first-class citizens with direct algorithmic support. This calls for a space-economic structure that guarantees attractive query and update efficiency.
Computation Model. We study the problem in the external memory (EM) model [2]. A machine is equipped with M words of memory, and a disk of unbounded size that has been formatted into blocks of B words each. An I/O either reads a block of data from the disk into memory, or conversely, writes B words in memory to a disk block. The space of a structure is the number of blocks it occupies, whereas the time of an algorithm is the number of I/Os it performs; CPU calculation is free. A word has Ω(log n) bits, where n is the input size of the problem at hand. The values of M and B satisfy M ≥ 2B. (M can be as small as 2B in the model defined in [2]; however, any algorithm that works under a stronger assumption M = Ω(B) can be adapted to M = 2B with only a constant blowup in space and time, so one might as well consider M to be a sufficiently large multiple of B.)
Throughout this paper, a space/time complexity holds in the worst case by default. A logarithm log_b x is defined as max{1, (log x)/(log b)}, with b = 2 if omitted. Linear cost should be understood as O(n/B), whereas logarithmic cost as O(log_B n).
1.1 Previous Work
Top-k range reporting was first studied by Afshani, Brodal and Zeh [1], who gave a static structure of linear space that answers a query in O(log_B n + k/B) I/Os. The query cost is optimal, as can be shown via a reduction from predecessor search [12]. They also analyzed the space-query tradeoff for an ordered variant of the problem, where the top-k elements need to be sorted by score. Their result suggests that, when the space usage is linear, one can achieve nearly the best query efficiency by simply solving the unordered version in O(log_B n + k/B) I/Os, and then sorting the retrieved elements (see [14] for more details). For the unordered version, Sheng and Tao [14] proposed a dynamic structure that has the same space and query cost as [1], but supports an update in a super-logarithmic amortized number of I/Os.
In internal memory, by combining a priority search tree [10] and Frederickson’s selection algorithm [7] on heaps, one can obtain a pointer-machine structure that uses O(n) words, answers a query in O(log_2 n + k) time, and supports an update in O(log_2 n) time. In RAM, Brodal, Fagerberg, Greve and López-Ortiz [5] considered a special instance of the problem where the input points are from an integer domain. They gave a linear-size structure with O(k) query time (which holds also for the ordered version).
1.2 Our Results
We improve the state of the art [14] by presenting a new structure with logarithmic update cost:
Theorem 1.
For top-k range reporting, there is a structure of linear space that answers a query in O(log_B n + k/B) I/Os, and supports an insertion and a deletion in O(log_B n) I/Os amortized.
We achieve logarithmic updates by combining three methods. The first one adapts the aforementioned pointer-machine structure—which combines a priority search tree with Frederickson’s heap selection algorithm—to external memory. This gives a linear-size structure that can be updated in amortized I/Os, but answers a query in O(log_2 n + k/B) I/Os (note that the log base is 2). We use the structure to handle large values of k, in which case its query cost is O(k/B).
The second method applies the structure of [14] directly. Looking at their analysis carefully, one sees that their amortized update cost is in fact . In other words, when , the structure already achieves logarithmic update cost.
The most difficult case arises when , or equivalently, . We observe that, since has already been taken care of, it remains to target . Motivated by this, we develop a linear-size structure that can be updated in I/Os, and answers a query with in I/Os. The most crucial idea behind this structure is to use a suite of “RAM-reminiscent” techniques to unleash the power of manipulating individual bits.
Theorem 1 can now be established by putting together the above three structures using standard global rebuilding techniques.
2 A Structure for
In this section, we will prove:
Lemma 1.
For top-k range reporting, there is a structure of linear space that answers a query in O(log_2 n + k/B) I/Os, and supports an insertion and a deletion in O(log_B n) I/Os amortized.
Top-k range reporting has a geometric interpretation. We can convert to a set of points by mapping each element to a 2D point whose x-coordinate is the element itself and whose y-coordinate is its score. Then, a top-k query with equivalently reports the highest points of in the vertical slab . This is the perspective we will take to prove Lemma 1.
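As a concrete reference point, the brute-force oracle below (a sketch with illustrative names, not the paper's structure) carries out exactly this geometric view: it filters the points whose x-coordinates fall in the query slab and keeps the k highest by score. It is handy as a correctness oracle when testing any of the structures discussed later.

```python
def topk_range_report(points, x1, x2, k):
    """Brute-force top-k range reporting.

    points: list of (x, score) pairs; scores are assumed distinct,
    matching the problem definition. Returns the (at most) k points
    with x in [x1, x2] having the highest scores, best first.
    """
    in_slab = [p for p in points if x1 <= p[0] <= x2]
    # Sort by score descending and keep the top k (or all, if fewer).
    in_slab.sort(key=lambda p: p[1], reverse=True)
    return in_slab[:k]
```

Any of the I/O-efficient structures in this paper must return the same set of points (possibly in a different order).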
Our structure is essentially an external priority search tree [3] on with a constant fanout. However, we make two contributions. First, we develop an algorithm using this structure to answer top-k range queries. Second, we explain how to update the structure in I/Os. Note that an update by the standard algorithm of [3] requires I/Os.
Structure. Let be a weight-balanced B-tree (WBB-tree) [4] on the x-coordinates of the points in . The leaf capacity and branching parameter of are both set to . We number the levels of bottom up, with the leaves at level 0. For each node in , we use to denote the set of points whose x-coordinates are stored in the subtree of . As a property of the WBB-tree, if is at level , then falls between and ; if is outside this range, becomes unbalanced and needs to be remedied.
Each node naturally corresponds to a vertical slab with . (Precisely, the slab of a leaf node is where is the smallest x-coordinate stored at , and is the smallest x-coordinate in the leaf node succeeding ; if does not exist, .) The slab of an internal node unions those of all its child nodes. Let be child nodes of the same parent. We say that is a right sibling of if is to the right of ; otherwise, is a left sibling of . Note that a node can have multiple left/right siblings, or none (if it is already the leftmost/rightmost child).
Consider now as an internal node with child nodes where (we always follow the left-to-right order in listing child nodes). We associate with a binary search tree of leaves, which correspond to , respectively. Let be an internal node in . We define , where are the leaves of below , and accordingly, define .
Notice that we can instead view as one big tree that concatenates the secondary binary trees of all the nodes in . Specifically, if is a child of in , the concatenation makes the root of the only child of the leaf of . See Figure 1. The result is almost a binary tree, except that some internal nodes have only one child, which is an internal node itself. However, this is only a minor oddity, because any path in of 3 nodes must contain at least one node with two children. The height of is .
Each node in is associated with a set—denoted as —of pilot points satisfying two conditions:

The points of are the highest among all points that are not stored in any , where is a proper ancestor of in .

If fewer than points satisfy the above condition, includes all of them. Otherwise, . In any case, is stored in blocks.
The lowest point in is called the representative of .
Finally, for each internal node in , we collect the representatives of the pilot sets of all the nodes in , and store these representatives in blocks—referred to as the representative blocks of .
Query. Given a top-k query with range , we descend two root-to-leaf paths and in to reach the leaf nodes and whose slabs’ x-ranges cover and , respectively. In I/Os, we retrieve all the pilot points of the nodes on , and eliminate those outside . Let be the set of remaining points.
Let be the least common ancestor of and . Define () as the path from to (). Let be the set of nodes satisfying two conditions:


, but the parent of is in ;

The x-range of is covered by .

For every such , we can regard its subtree as a max-heap as follows. First, includes all the nodes in the subtree of (in ) with nonempty pilot sets. Second, the sorting key of is the y-coordinate of the representative of . In this way, we have identified at most nonempty max-heaps, each rooted at a distinct node in . Concatenate these heaps into one, by organizing their roots into a binary max-heap based on the sorting keys of those roots. This can be done in I/Os (using a linear-time “make-heap” algorithm; see [6]). Denote by the resulting max-heap after concatenation. See Figure 2.
Set to a sufficiently large constant. We now invoke Frederickson’s algorithm to extract the set of representatives in with the largest y-coordinates; this entails I/Os. Let be the set of nodes whose representatives are collected in . Gather all the pilot points of the nodes of into a set .
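Frederickson's algorithm extracts the t largest keys of a binary max-heap in O(t) time. The sketch below implements the simpler frontier-expansion variant, which costs O(t log t) comparisons rather than O(t) but conveys the key invariant: the largest unreported key always sits at the root of one of the unexplored subtrees. The array encoding (children of index i at 2i+1 and 2i+2) is an assumption of this sketch, not something the paper prescribes.

```python
import heapq

def heap_top_t(heap, t):
    """Return the t largest keys of an array-encoded binary max-heap,
    in descending order.

    Simplified frontier expansion (O(t log t) comparisons; Frederickson
    achieves O(t)). 'heap' must satisfy heap[i] >= heap[2i+1] and
    heap[i] >= heap[2i+2] whenever those children exist.
    """
    if not heap or t <= 0:
        return []
    # Python's heapq is a min-heap, so we store negated keys.
    frontier = [(-heap[0], 0)]
    out = []
    while frontier and len(out) < t:
        neg_key, i = heapq.heappop(frontier)
        out.append(-neg_key)
        # Expand the two subtrees hanging below the reported node.
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(heap):
                heapq.heappush(frontier, (-heap[c], c))
    return out
```

Because the frontier never holds more than 2t + 1 entries, the memory footprint is proportional to t, mirroring why the I/O cost in the text depends on the number of extracted representatives rather than on the heap size.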
Define a set of nodes as follows. For each node , we first add to all such siblings of (in ) that (i) , and (ii) the x-range of is contained in . Second, if is an internal node, add all its child nodes in to . Note that . We now collect the pilot points of all the nodes of into a set .
At this moment, we have collected three sets with a total size of . We can now report the highest points in in I/Os. The query algorithm performs I/Os in total. Its correctness is ensured by the fact below:
Lemma 2.
Setting ensures that includes the highest points in .
Proof.
We will focus on the scenario that the heap has at least representatives. Otherwise, has points in , and all of them are in ( is empty).
We will first show that . This is very intuitive because collects the contents of pilot sets. However, a formal proof requires some effort because the pilot set of a node can have arbitrarily few points (in this case all the nodes in the proper subtree of must have empty pilot sets). We need a careful argument to address this issue.
We say that a representative in is poor if its pilot set has fewer than points; otherwise, it is rich. Consider a poor representative in ; suppose that it is a pilot point of node , and that its x-coordinate is stored in leaf node . Note that stores the x-coordinates of at least points, all of which fall in . By the fact that represents fewer than points, we know that at least points (with x-coordinates) in are pilot points of some proper ancestors of in , and therefore appear in either or . We associate those points with . On the other hand, we associate each rich representative with the at least points in its pilot set.
Thus, the representatives in are associated with at least points in . Each point , on the other hand, can be associated with at most 2 representatives: the representative of the node where is a pilot point, and a poor representative whose x-coordinate is stored in the same leaf as (no two poor representatives can have their x-coordinates stored in the same leaf). This implies . Hence, ensures .
Finally, the inclusion of ensures that no pilot point in but outside can be higher than the lowest point in . The lemma then follows. ∎
Insertion. To insert a point , first update the B-tree by inserting the x-coordinate of . Let us assume for the time being that no rebalancing in is required. Then, we identify the node in whose pilot set should incorporate . This can be achieved in I/Os by descending a single root-to-leaf path in (note: not ), and inspecting the representative blocks of the nodes on the path. We add to .
We say that a pilot set overflows if it has more than points. If overflows, we carry out a push-down operation at , which moves the lowest points of to the pilot sets of its at most 2 child nodes in . The resulting has size . If the pilot set of a child now overflows, we treat it in the same manner by performing a push-down at . We will analyze the cost of push-downs later.
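A minimal in-memory sketch of this push-down cascade follows (ignoring I/Os and blocking; the `Node` class, its fields, and the `capacity` bound are illustrative stand-ins for the pilot-set size limit in the text):

```python
class Node:
    """Tree node with an x-range [lo, hi) and a pilot set of (x, y)
    points. Field names are illustrative, not from the paper."""
    def __init__(self, lo, hi, children=()):
        self.lo, self.hi = lo, hi
        self.children = list(children)
        self.pilot = []

def push_down(node, capacity):
    """If node's pilot set overflows `capacity`, keep the `capacity`
    highest points, demote the rest to the child whose x-range covers
    each demoted point, then recurse on the children. Leaves absorb
    any overflow in this simplified sketch."""
    if not node.children or len(node.pilot) <= capacity:
        return
    node.pilot.sort(key=lambda p: p[1], reverse=True)
    demoted, node.pilot = node.pilot[capacity:], node.pilot[:capacity]
    for p in demoted:
        child = next(c for c in node.children if c.lo <= p[0] < c.hi)
        child.pilot.append(p)
    for c in node.children:
        push_down(c, capacity)
```

Each point moved one level down corresponds to a "demotion event" in the analysis later in this section.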
Deletion. To delete a point , we identify the node in whose pilot set contains . This can be done in I/Os by inspecting the representative blocks. We then remove from .
We say that a pilot set underflows if it has fewer than points and yet one of its child nodes has a nonempty pilot set. To remedy this, we define a pull-up operation at node in as one that moves the highest points from
(1) 
to . If (1) has fewer than the requested number of points, the pull-up moves all the points of (1) into , after which all proper descendants of have empty pilot sets; we call such a pull-up a draining one.
In general, if the pilot set of a node underflows, we carry out at most two pull-ups at until either , or a draining pull-up has been performed. After the first pull-up, if the pilot set at a child node of underflows, we remedy that first (in the same manner, recursively) before continuing with the second pull-up at . We will analyze the cost of pull-ups later.
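The pull-up can be sketched analogously. The version below scans all descendants to find the highest point, which is far less efficient than the actual algorithm but captures the semantics, including the draining case; the `Node` class and `target` parameter are illustrative assumptions.

```python
class Node:
    """Minimal node: children plus a pilot set of (x, y) points."""
    def __init__(self, children=()):
        self.children = list(children)
        self.pilot = []

def pull_up(node, target):
    """Refill node's pilot set to `target` points by repeatedly moving
    up the highest point found in any proper descendant's pilot set.
    If the descendants run out of points first, this corresponds to a
    draining pull-up in the text: the operation simply stops."""
    while len(node.pilot) < target:
        best, best_i = None, None
        stack = list(node.children)
        while stack:                      # scan all proper descendants
            c = stack.pop()
            for i, p in enumerate(c.pilot):
                if best is None or p[1] > best.pilot[best_i][1]:
                    best, best_i = c, i
            stack.extend(c.children)
        if best is None:                  # draining: descendants empty
            return
        node.pilot.append(best.pilot.pop(best_i))
```

Each point moved one level up is a "promotion event" in the later analysis.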
It is worth mentioning that we do not remove the xcoordinate of from the base tree . This does not create a problem because we will rebuild the whole periodically, as clarified later.
Rebalancing. It remains to clarify how to rebalance . Let be the highest node in that becomes unbalanced after inserting . Let be the parent of . We rebuild the whole subtree of in , and the corresponding portion in . Let be the level of in . Our goal is to complete the reconstruction in I/Os. A standard argument with the WBB-tree shows that every insertion accounts for I/Os of all the reconstructions.
Let be the root of . Essentially, we need to rebuild the subtree of in , which has nodes. The first step of our algorithm is to distribute all the pilot points stored in the subtree of down to the leaves where their xcoordinates are stored, respectively. For this purpose, we simply push down all the pilot points of to its child nodes in , and do so recursively at each child. We call this a pilot grounding process.
We now reconstruct the subtrees of and . First, it is standard to create all the nodes of in the subtree of , and all the nodes of in the subtree of in I/Os. What remains is to fill in the pilot sets. We do so in a bottom-up manner. Suppose that we are to fill in the pilot set of , knowing that the pilot sets of all the proper descendants of (in ) have been computed properly. We populate using the same algorithm as treating a pilot set underflow at .
Next, we prove that the whole reconstruction takes I/Os. Let us first analyze the pilot grounding process. We say that a demotion event occurs when a point moves from the pilot set of a parent node to that of a child. If represents the number of such events, we can bound the total cost of pilot grounding as .
To bound , first consider a level-1 node in . A node at level of triggers demotion events. Hence, the number of demotion events triggered by all the nodes of is . As the subtree of has level-1 nodes, they trigger demotion events in total.
Now consider as a level node of with . Each of the nodes in can trigger demotion events, resulting in a total event count of for . Since there are nodes at level , the number of demotion events due to the nodes from level 2 to level is at most
Therefore, . It follows that the pilot grounding process requires I/Os.
The cost of filling pilot sets can be analyzed in the same fashion, by looking at promotion events—namely, a point moves from the pilot set of a child to that of the parent. If represents the number of such events, we can bound the cost of pilot set filling as . By an argument analogous to the one on , one can derive that .
Push-Downs and Pull-Ups. Next, we will prove that each update accounts for only I/Os incurred by push-downs and pull-ups. At first glance, this is quite intuitive: inserting a point into a pilot set may “edge out” an existing point there to the next level of , which may then create a cascading effect every level down. Viewed this way, an insertion creates demotion events, and conversely, a deletion creates promotion events. As such events are handled by a push-down or pull-up using I/Os, the cost amortized on an update should be . What complicates things, however, is the fact that pilot points may bounce up and down across different levels. Below we give an argument to account for this complication.
We imagine some conceptual tokens that can be passed by a node to a child in , but never the opposite direction. Specifically, the rules for creating, passing, and deleting tokens are:

When a point is being inserted into , we give an insertion token if is placed in .

When a point is deleted from , we give a deletion token if is removed from .

In a push-down, when a point is moved from to (where is a child of ), we take away an insertion token from , and give it to . We will prove shortly that always has enough tokens to make this possible.

In a pull-up, when a point is moved from to (where is a child of ), we take away a deletion token from , and give it to . Again, we will prove shortly that this is always doable.

When an insertion/deletion token reaches a leaf node, it disappears.

After a draining pull-up is performed at , all the tokens in the subtree of disappear.

When the subtree of a node is reconstructed, all the tokens in the subtree disappear.
Lemma 3.
Our update algorithms enforce two invariants at all times:

Invariant 1: every internal node in has at least insertion tokens.

Invariant 2: every internal node in has at least deletion tokens, unless all proper descendants of in have empty pilot sets.
Notice that, by Invariant 1, a node with is not required to hold any insertion tokens; likewise, by Invariant 2, a node with is not required to hold any deletion tokens. Furthermore, the two invariants ensure that the token passing described in Rules 3 and 4 is always doable.
Proof of Lemma 3.
Both invariants hold on right after the subtree of has been reconstructed because at this moment either (i) , or (ii) and meanwhile all proper descendants of in have empty pilot sets.
Inductively, assuming that the invariants are valid currently, next we will prove that they remain valid after applying our update algorithms.

Putting a newly inserted point into gives a new insertion token, which accounts for the increment of . Hence, Invariant 1 still holds. Invariant 2 also holds because has decreased.

Physically deleting a point from gives a new deletion token, which accounts for the increment of . Hence, Invariant 2 still holds. Invariant 1 also holds because has decreased.

Consider a push-down at node . After the push-down, ; thus, Invariants 1 and 2 trivially hold on . Let be a child of . Invariant 1 still holds on because gains as many insertion tokens as the increase of . Invariant 2 also continues to hold on because the value of has decreased.

Consider a pull-up at node . After the pull-up, ; hence, Invariant 1 trivially holds on . Invariant 2 also holds on because loses as many deletion tokens as the decrease of . Let be a child of . Invariant 1 continues to hold on because the value of has decreased. Invariant 2 also holds on because gains as many deletion tokens as the increase of .
∎
Recall that a push-down is necessitated at a node only if . Therefore, by Invariant 1, after the operation insertion tokens must have descended to the next level of . The operation itself takes I/Os; after amortization, each of those insertion tokens bears only I/Os of that cost.
Now consider the moment when a pilot set underflow happens at . By Invariant 2, must be holding at least deletion tokens at this time. Our algorithm performs one or two pull-ups at using I/Os. We account for such cost as follows. If neither of the two pull-ups is a draining one, at least deletion tokens must have descended to the next level; we charge the cost on those tokens, each of which bears I/Os. On the other hand, if a draining pull-up occurred, at least deletion tokens must have disappeared; each of them is asked to bear I/Os.
In summary, each token is charged I/Os in total before its disappearance. Since an update creates only one token, the amortized update cost only needs to increase by to cover the cost of push-downs and pull-ups.
Remark. The above analysis has assumed that the height of remains . This assumption can be removed by the standard technique of global rebuilding. With this, we have completed the proof of Lemma 1.
3 A Structure for
In this section, we will prove:
Lemma 4.
For top-k range reporting with , there is a structure of space that answers a query in I/Os, and supports an insertion and a deletion in I/Os amortized.
As explained in Section 1.2, Theorem 1 follows from the combination of Lemmas 1 and 4, and a structure of [14]. To prove Lemma 4, we will first introduce two relevant problems in Sections 3.1 and 3.2. Our final structure—presented in Section 3.3—is built upon solutions to those problems.
3.1 Approximate Union-Rank Selection
Let be a set of real values. Given a real value , we define its rank in as . Note that the largest element of has rank 1.
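In code, with the convention that the largest element has rank 1, the rank of a value can be read as one plus the number of strictly larger elements. (The precise formula is stripped from the text above; this is a hypothesized reading consistent with "the largest element has rank 1".)

```python
def rank(v, S):
    """Rank of v in the set S: one plus the number of elements of S
    strictly larger than v, so the largest element of S has rank 1.
    A hypothesized reading of the definition in the text."""
    return 1 + sum(1 for x in S if x > v)
```

Under this convention, the Rank operator's approximation guarantee says the returned element's rank lies within a constant-factor window of the requested rank.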
In approximate union-rank selection (AURS), we are given disjoint sets of real values, such that each () can be accessed only by the following operators:

Max: Returns the largest element of in I/Os.

Rank: Given a real-valued parameter where is a constant, this operator returns in I/Os an element whose rank in falls in .
Given an integer satisfying
(2) 
a query returns an element whose rank in falls in , where is a constant dependent only on .
AURS is reminiscent of a rank selection problem defined by Frederickson and Johnson [8]. However, their algorithm assumes a more powerful Rank operator that returns an element in with a precise rank. In the appendix, we show how to adapt their algorithm to obtain the result below:
Lemma 5.
Each query in the AURS problem can be answered in I/Os.
3.2 Approximate Group Selection
Given integers and , we define an group as a sequence of disjoint sets , where each () is a set of at most real values. Let be an integer such that a word has bits.
In the approximate group selection problem—henceforth, the problem for short—the input is an group , where the values of , , and (block size) satisfy all of the following:


where is a constant satisfying .
A query is given:

an interval with ,

and a real value ;
it returns a real value whose rank in falls in , where is a constant. It is required that should be either or an element in .
The following lemma is a crucial result that stands at the core of our final structure. Its proof is nontrivial and delegated to Section 4.
Lemma 6.
For the problem, we can store in a structure of space that answers a query in I/Os, and supports an insertion and deletion in I/Os amortized.
3.3 Proof of Lemma 4
We are now ready to elaborate on the structure claimed in Lemma 4. It suffices to focus on the approximate range selection problem:
The input is the same set of points as in top range reporting. Given an interval and an integer satisfying , a query returns a point such that between and points in have scores at least .
Suppose that there is a structure solving the above problem with query time and amortized update time . Then, we immediately obtain a structure of asymptotically the same space for top range reporting with query time and amortized update time (see [14]). A structure with and was given in [14].
Fix an integer . Next, assuming , we describe a linear-size structure with , which therefore yields a structure of Lemma 4.
Structure. We build a WBB-tree on with branching parameter , and leaf capacity . Each node naturally corresponds to an x-range in . If is an internal node with child nodes , define a multislab to be the union of the x-ranges of for some meaningful .
Given an (internal/leaf) node , let be the set of elements stored in the subtree of . Define as the set of highest scores of the elements in , where is the constant mentioned in the definition of the problem in Section 3.2.
For each leaf node , maintain a structure of [14] to support approximate range selection on . Consider now as an internal node with child nodes . We

maintain an structure of Lemma 6 on the group , with fixed to some integer in (this will be guaranteed by our update algorithms).

store in a (slightly augmented) B-tree so that, for any , the maximum score in can be found in I/Os.
There are internal nodes, each of which occupies blocks. Hence, all the internal nodes use altogether space. The overall space cost is therefore .
Query. Given a query with parameters and , search in a standard way to identify a minimum set of disjoint canonical ranges whose union covers , such that each canonical range is either the x-range of a leaf node or a multislab.
Define for each multislab . Perform AURS with parameter on . At each internal node on which a multislab is defined, the structure of and the B-tree on allow us to implement the Rank and Max operators on in I/Os, respectively. Therefore, by Lemma 5, the AURS finishes in I/Os. (The constant in the Rank operator’s definition in Section 3.1 equals the constant guaranteed by the structures; given that we focus on while each has size , the condition stated in (2) always holds.) Denote by the element returned. (The AURS returns only the score of the element, but it is easy to fetch the element itself by its score in I/Os.)
For each leaf node whose x-range is in , perform approximate range selection on using in I/Os. There are at most two such leaf nodes; let be the results of approximate range selection on them, respectively. We return as the final answer.
Update. The update algorithm (which is relatively standard) can be found in the appendix.
4 Solving the Problem
We devote this section to proving Lemma 6. Henceforth, by “query”, we refer to a query in the problem. When no ambiguity can arise, we use to denote also the union of , … .
4.1 A Static Structure
We will need a tool called the logarithmic sketch—henceforth, sketch—developed in [14]. Let be a set of real values. Its sketch is an array of size , where the th () entry —called a pivot—is an element in whose rank in falls in ; any such element can be used as .
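A sketch can be built naively by sorting and picking the element of rank about 2^i for each entry. (The exact admissible rank window for the i-th pivot is stripped above, so this illustrative sketch simply takes rank exactly min(2^i, |S|), which lies in any constant-factor window around 2^i.)

```python
import math

def build_sketch(S):
    """Logarithmic sketch of a collection S of distinct reals: entry i
    is an element whose rank (largest = rank 1) is min(2**i, |S|).
    Naive O(|S| log |S|) construction for illustration only."""
    srt = sorted(S, reverse=True)        # srt[r - 1] has rank r
    m = int(math.log2(len(srt))) + 1 if srt else 0
    return [srt[min(2 ** i, len(srt)) - 1] for i in range(m)]
```

The sketch has O(log |S|) entries, which is what allows a whole sketch set to be compressed into a single block in the construction below.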
Lemma 7 ([14]).
Let be disjoint sets of real values. Given their sketches and a real value satisfying , we can find in I/Os a real value whose rank in is between and (where is a constant). Furthermore, is either or an element in .
Create a sketch for each (). Call the set a sketch set. We store a compressed form of the sketch set as follows. Describe each pivot by its global rank in using bits, and by its local rank in using bits. Hence, each requires bits. A compressed sketch set occupies bits, and thus fits in a block (which has bits).
Given a query, we first spend one I/O reading the compressed sketch set, and then run the algorithm of Lemma 7 on it in memory. Suppose that this algorithm outputs . If , we simply return as our final answer. Otherwise, is equal to the global rank of an element in . To convert the global rank to an actual element, we index all the elements of with a B-tree, which supports such a conversion in I/Os. The overall space is (due to the B-tree), and the query cost is . Notice that the constant in Section 3.2 equals the constant stated in Lemma 7.
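The bit-packing of ranks into a single block can be mimicked with Python's unbounded integers. The field widths below are illustrative constants rather than the logarithmic bit budgets of the text; the point is that fixed-width (global rank, local rank) fields can be concatenated into one machine-readable word and decoded for free once loaded.

```python
GBITS, LBITS = 32, 16   # illustrative widths for global/local ranks

def pack_sketches(sketch_ranks):
    """Pack a sketch set into one integer (standing in for one block).
    sketch_ranks: a list (one per set) of lists of
    (global_rank, local_rank) pairs, mirroring the compressed form."""
    word, pos = 0, 0
    for sketch in sketch_ranks:
        for g, l in sketch:
            word |= g << pos; pos += GBITS
            word |= l << pos; pos += LBITS
    return word

def unpack(word, shape):
    """Inverse of pack_sketches; shape[i] is the length of sketch i."""
    out, pos = [], 0
    for m in shape:
        cur = []
        for _ in range(m):
            g = (word >> pos) & ((1 << GBITS) - 1); pos += GBITS
            l = (word >> pos) & ((1 << LBITS) - 1); pos += LBITS
            cur.append((g, l))
        out.append(cur)
    return out
```

Reading the packed word is the single I/O of the query algorithm; all subsequent decoding is CPU work, which the EM model counts as free.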
4.2 Supporting Insertions
To facilitate updates, we store the elements of each () in a B-tree that allows us to obtain the element of any specific local rank in I/Os. In addition, we also maintain a structure described by the following lemma, whose proof is deferred to Section 4.4:
Lemma 8.
We can store an group in a structure of space such that, in one I/O, we can read into memory a single block, from which we can obtain for free the global rank of the element with local rank in , for every and every . The structure supports an insertion and a deletion in I/Os.
Suppose that an element is to be inserted in for some . Let be the rank of in . We observe that, except perhaps a single pivot, the new compressed sketch set (after the update) can be deduced from: the current compressed sketch set, and . To understand this, consider first a compressed sketch where . Each pivot whose global rank is at least now has its global rank increased by 1 (its local rank is unaffected). Regarding the compressed , the same is true, but additionally every such pivot should also have its local rank increased by 1. Furthermore, a new pivot is needed in if reaches a power of 2 after the insertion—in such a case we say that expands; the new pivot is the only one in the compressed sketch set that cannot be deduced (because its global rank is unknown).
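The deducible part of this update can be expressed directly on the (global rank, local rank) pairs. The snippet below applies the rank shifts just described to an unpacked compressed sketch set; the expansion case, which requires fetching one new pivot, is deliberately left out, and all names are illustrative.

```python
def shift_on_insert(sketches, j, r):
    """Update compressed sketch ranks after inserting an element of
    global rank r into set j.

    sketches: list of lists of (global_rank, local_rank) pairs. Every
    pivot with global rank >= r gains 1 in global rank; in sketch j,
    such a pivot also gains 1 in local rank (its set received the new,
    higher-or-equal-ranked element below it)."""
    for i, sk in enumerate(sketches):
        for t, (g, l) in enumerate(sk):
            if g >= r:
                sk[t] = (g + 1, l + 1 if i == j else l)
    return sketches
```

Since the whole compressed sketch set sits in one memory-resident block, this pass costs no I/Os, which is why the insertion's I/O cost is dominated by the B-tree accesses.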
Motivated by this observation, to insert in , we first obtain from the B-tree of in I/Os, and then update the new compressed sketch set as described earlier in 1 I/O. Next, is inserted in the B-trees of and using I/Os. If now is a power of 2, we retrieve the global rank of the smallest element in in I/Os, and add the element to in memory.
Recall that the th () pivot of should have its local rank confined to . If this is not true, we say that the pivot is invalidated. The insertion may have invalidated one or more pivots, all of which can be found without I/Os because the compressed sketch set resides in memory. Upon the invalidation of , we replace it with the element of local rank , so that updates in are needed to invalidate it again. For the replacement to proceed, it remains to obtain the global rank of . We do so by distinguishing two cases:

Case . We simply fetch from the B-tree on , and obtain its global rank from the B-tree on . We can now update in memory.
In total, the invalidated pivot is fixed with I/Os. Since updates must have occurred in to trigger the invalidity of , each of those updates accounts for I/Os of the pivot recomputation. As an update can be charged at most times this way (i.e., once for every ), its amortized cost is increased by only .

Case . There are such invalidated pivots in . We can recompute all of them together in I/Os using Lemma 8.
Overall, an insertion requires I/Os amortized.
4.3 Supporting Deletions
Suppose that an element is to be deleted from for some . Let be the rank of in . Except possibly for only one pivot, the new compressed sketch set can be deduced based only on the current compressed sketch set, , and . To see this, consider first where . Each pivot whose global rank is larger than now needs to have its global rank decreased by 1. Regarding , the same is true, and every such pivot should also have its local rank decreased by 1. Furthermore, the last pivot of should be discarded if was a power of 2 before the deletion: in such a case, we say that shrinks. Finally, if happens to be a pivot of , a new pivot needs to be computed to replace it—this is the only pivot that cannot be deduced; we call it a dangling pivot.
The concrete steps of deleting are as follows. After fetching its global rank in I/Os, we update the compressed sketch set in memory according to the above discussion. If shrinks, we delete the last pivot in memory. If was a pivot (say, the th one for some ), we retrieve the element with local rank in , and obtain its global rank using I/Os. We then replace the dangling pivot with in memory.
Finally, we recompute the invalidated pivots (if any) in the same way as in an insertion. As analyzed in Section 4.2, such recomputation increases the amortized update cost by only .
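The rank adjustments that can be deduced in memory might be sketched as follows. The function name, the list-of-pairs representation, and the omission of the edge case where the dangling pivot coincides with the discarded last pivot are all simplifications for illustration, not the paper's implementation.

```python
def delete_adjust(sketches, i, g, r, size_i):
    """Deduce the new compressed sketch set after deleting the element
    of global rank g (local rank r in the i-th set).

    sketches[t] is a list of [global_rank, local_rank] pivot pairs for
    the t-th set; size_i is the i-th set's size before the deletion.
    Returns the index of the dangling pivot in sketches[i], or None.
    Illustrative sketch; the edge case where the dangling pivot is the
    discarded last pivot is omitted for brevity.
    """
    dangling = None
    for idx, p in enumerate(sketches[i]):
        if p == [g, r]:                # the deleted element was a pivot
            dangling = idx
    for t, pivots in enumerate(sketches):
        for p in pivots:
            if p[0] > g:               # larger global ranks shift down
                p[0] -= 1
            if t == i and p[1] > r:    # local ranks shift down in set i
                p[1] -= 1
    if size_i & (size_i - 1) == 0:     # size was a power of two:
        sketches[i].pop()              # the sketch shrinks
    return dangling
```

Only the dangling pivot (if any) requires external lookups; every other pair is rewritten for free in memory, matching the analysis above.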
4.4 Proof of Lemma 8
Let us define the list of the largest elements of () as the prefix of , and denote it as . Let be the union of the prefixes; we refer to it as a prefix set. The prefix set contains at most points.
We compress by describing each element (say, for some ) in using its global rank in and its local rank in , for which purpose bits suffice. Hence, can be described by bits, which fit in a block. After loading this block into memory, we can obtain the global rank of the th largest element of for free, regardless of and .
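A fixed-width bit encoding of this kind might look as follows. The function names and the single-integer word representation are illustrative; the paper only requires that the (global rank, local rank) pairs fit together in one disk block.

```python
def pack_prefix(prefix, width):
    """Pack (global_rank, local_rank) pairs into one integer, using
    `width` bits per rank.  Illustrative sketch: in the paper, the
    field widths are chosen so the whole encoding fits in one block."""
    word = 0
    for g, r in prefix:
        word = (word << (2 * width)) | (g << width) | r
    return word

def unpack_prefix(word, count, width):
    """Decode `count` pairs packed by pack_prefix, in original order."""
    mask = (1 << width) - 1
    out = []
    for _ in range(count):
        r = word & mask                 # low field: local rank
        g = (word >> width) & mask      # high field: global rank
        word >>= 2 * width
        out.append((g, r))
    out.reverse()                       # pairs were read back to front
    return out
```

Once the packed word is in memory, any pair (and hence the global rank of the th largest element of any prefix) is recovered with pure bit arithmetic and no further I/O.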
Besides the aforementioned block, we also maintain a B-tree on each () and a B-tree on . The space consumed is .
Insertion. Suppose that we need to insert an element into . First, we update the B-trees of and in I/Os. With the same cost, we can also decide whether the new element should enter . If not, the insertion is complete.
Otherwise, we find the global rank of and its local rank in with I/Os. Load the compressed prefix set into memory with 1 I/O. Then, the new compressed prefix set can be determined for free based on , , , and . To see this, first consider a compressed prefix with : if an element has global rank at least , it should have its global rank increased by 1. Regarding the compressed prefix , the same is true; furthermore, all such elements in should also have their local ranks increased by 1. Finally, we add into ; if has a size over , we discard its smallest element.
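The in-memory update of the compressed prefix set on insertion can be sketched as below. The function name and the list-of-pairs layout (each prefix stored largest-first as consecutive local ranks) are illustrative assumptions; `cap` plays the role of the prefix length.

```python
def insert_adjust(prefixes, i, g, r, cap):
    """Update all compressed prefixes after inserting an element of
    global rank g whose local rank in the i-th prefix is r.

    prefixes[t] is a list of [global_rank, local_rank] pairs, ordered
    by local rank (largest element first).  Illustrative sketch: cap
    stands for the maximum prefix length from the paper.
    """
    for t, pre in enumerate(prefixes):
        for p in pre:
            if p[0] >= g:              # global ranks at or past g shift up
                p[0] += 1
            if t == i and p[1] >= r:   # local ranks shift up in prefix i
                p[1] += 1
    pre = prefixes[i]
    pre.insert(r - 1, [g, r])          # newcomer takes local rank r
    if len(pre) > cap:
        pre.pop()                      # discard the smallest element
```

All of this happens on the single in-memory block, so the only I/Os are the rank lookups and the one read/write of the block itself.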
Deletion. Suppose that we need to delete an element from . Using the B-tree on , we find its global rank in I/Os. Then, the element is removed from the B-trees of and in I/Os.
If , the deletion is done. Otherwise, we load the compressed prefix set in 1 I/O, and then update it, except for a single element, in memory. Specifically, in a compressed prefix with , if an element has global rank at least , it should have its global rank decreased by 1. Regarding the compressed prefix , the same is true; furthermore, all such elements in should also have their local ranks decreased by 1.
The last element of is the only one that cannot be inferred directly at this point. But it can be filled in simply by retrieving the element with local rank in , and then its global rank in , all in I/Os.
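Refilling that last slot can be sketched with plain sorted lists standing in for the B-trees. The function name and the list representation are illustrative; in the structure of Lemma 8 both lookups are B-tree searches.

```python
import bisect

def refill_last(prefix, S, G, t):
    """Refill the last slot of a compressed prefix after a deletion:
    fetch the element of local rank t in the sorted set S and look up
    its global rank in the sorted global list G.

    Plain-list sketch of the B-tree lookups in Lemma 8.  Ranks count
    from the largest element (rank 1 = maximum).
    """
    x = S[len(S) - t]                          # element of local rank t
    g = len(G) - bisect.bisect_left(G, x)      # its global rank in G
    prefix.append([g, t])
```

Both lookups touch only the two search structures, which is why the whole repair stays within the update cost stated above.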
References
 [1] P. Afshani, G. S. Brodal, and N. Zeh. Ordered and unordered top-k range reporting in large data sets. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 390–400, 2011.
 [2] A. Aggarwal and J. S. Vitter. The input/output complexity of sorting and related problems. Communications of the ACM (CACM), 31(9):1116–1127, 1988.
 [3] L. Arge, V. Samoladas, and J. S. Vitter. On two-dimensional indexability and optimal range search indexing. In Proceedings of ACM Symposium on Principles of Database Systems (PODS), pages 346–357, 1999.
 [4] L. Arge and J. S. Vitter. Optimal external memory interval management. SIAM Journal on Computing, 32(6):1488–1508, 2003.
 [5] G. S. Brodal, R. Fagerberg, M. Greve, and A. López-Ortiz. Online sorted range reporting. In International Symposium on Algorithms and Computation (ISAAC), pages 173–182, 2009.
 [6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, Second Edition. The MIT Press, 2001.
 [7] G. N. Frederickson. An optimal algorithm for selection in a minheap. Information and Computation, 104(2):197–214, 1993.
 [8] G. N. Frederickson and D. B. Johnson. The complexity of selection and ranking in X+Y and matrices with sorted columns. Journal of Computer and System Sciences (JCSS), 24(2):197–208, 1982.
 [9] I. F. Ilyas, G. Beskales, and M. A. Soliman. A survey of top-k query processing techniques in relational database systems. ACM Computing Surveys, 40(4), 2008.
 [10] E. M. McCreight. Priority search trees. SIAM Journal on Computing, 14(2):257–276, 1985.
 [11] G. Navarro and Y. Nekrich. Top-k document retrieval in optimal time and linear space. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1066–1077, 2012.
 [12] M. Pătraşcu and M. Thorup. Time-space trade-offs for predecessor search. In Proceedings of ACM Symposium on Theory of Computing (STOC), pages 232–240, 2006.
 [13] R. Shah, C. Sheng, S. V. Thankachan, and J. S. Vitter. Top-k document retrieval in external memory. In Proceedings of European Symposium on Algorithms (ESA), pages 803–814, 2013.
 [14] C. Sheng and Y. Tao. Dynamic top-k range reporting in external memory. In Proceedings of ACM Symposium on Principles of Database Systems (PODS), 2012.
Appendix
Proof of Lemma 5
In this proof, set and . Given an element (), we refer to its rank in as its local rank, and its rank in as its global rank.
Case . Our algorithm executes in rounds. In the th round (), sets among are active, while the others are inactive. At the beginning, all sets are active.
In round , we execute Rank on each active set with parameter . Remember that the operator can return any element whose local rank falls in . (Such an element definitely exists because .) Let be the set of elements fetched. We call each element in a marker, and assign it a weight equal to

if ;

if .
The largest markers in are taken as pivots, among which the smallest is the cutoff pivot of this round. An active set remains active in the next round if its marker is a pivot, whereas the other active sets become inactive.
Denote by the set of pivots taken in the th round (), and by