Dynamic Bridge-Finding in $\widetilde{O}(\log^2 n)$ Amortized Time


Jacob Holm, Eva Rotenberg, and Mikkel Thorup
University of Copenhagen (DIKU)
jaho@di.ku.dk, roden@di.ku.dk, mthorup@di.ku.dk

This research is supported by Mikkel Thorup's Advanced Grant DFF-0602-02499B from the Danish Council for Independent Research under the Sapere Aude research career programme.
Abstract

We present a deterministic fully-dynamic data structure for maintaining information about the bridges in a graph. We support updates in $O(\log^2 n\,(\log\log n)^2)$ amortized time, and can find a bridge in the component of any given vertex, or a bridge separating any two given vertices, in $O(\log n\,(\log\log n)^2)$ worst case time. Our bounds match the current best bounds for deterministic fully-dynamic connectivity up to $\log\log n$ factors.

The previous best dynamic bridge finding was an $O(\log^3 n\,\log\log n)$ amortized time algorithm by Thorup [STOC2000], which was a bittrick-based improvement on the $O(\log^4 n)$ amortized time algorithm by Holm et al. [STOC98, JACM2001].

Our approach is based on a different and purely combinatorial improvement of the algorithm of Holm et al., which by itself gives a new combinatorial $O(\log^3 n\,\log\log n)$ amortized time algorithm. Combining it with Thorup's bittrick, we get down to the claimed $O(\log^2 n\,(\log\log n)^2)$ amortized time.

Essentially the same new trick can be applied to the biconnectivity data structure from [STOC98, JACM2001], improving its amortized update time by almost a logarithmic factor.

We also offer improvements in space. We describe a general trick which applies to both of our new algorithms, and to the old ones, to get down to linear space, where the previous best used $O(m+n\log n\,\log\log n)$. Finally, we show how to obtain the optimal $O(\log n/\log\log n)$ query time, matching the known trade-off between update and query time.

Our result yields an improved running time for deciding whether a unique perfect matching exists in a static graph.

1 Introduction

In graphs and networks, connectivity between vertices is a fundamental property. In real life, we often encounter networks that change over time, subject to insertion and deletion of edges. We call such a graph fully dynamic. Dynamic graphs call for dynamic data structures that maintain just enough information about the graph in its current state to be able to promptly answer queries.

Vertices of a graph are said to be connected if there exists a path between them, and $2$-edge connected if no single edge deletion can disconnect them. A bridge is an edge whose deletion would disconnect the graph. In other words, a pair of connected vertices are $2$-edge connected if they are not separated by a bridge. By Menger's Theorem [17], this is equivalent to saying that a pair of connected vertices are two-edge connected if there exist two edge-disjoint paths between them. Edge-disjoint means that no edge appears in both paths.
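For concreteness, the following is a minimal sketch (ours, not part of the data structures in this paper) of the classic linear-time DFS computation of all bridges of a static multigraph; the structures developed below maintain exactly this bridge information, but under edge insertions and deletions.

# A sketch (ours): all bridges of a static undirected multigraph by the
# classic DFS low-link computation, in linear time.
def bridges(n, edges):
    adj = [[] for _ in range(n)]
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    disc = [-1] * n            # DFS discovery time, -1 = unvisited
    low = [0] * n              # lowest discovery time reachable from the subtree
    out, timer = [], 0
    for root in range(n):
        if disc[root] != -1:
            continue
        disc[root] = low[root] = timer; timer += 1
        stack = [(root, -1, iter(adj[root]))]
        while stack:
            v, pe, it = stack[-1]
            step = next(it, None)
            if step is None:
                stack.pop()
                if stack:
                    p = stack[-1][0]
                    low[p] = min(low[p], low[v])
                    if low[v] > disc[p]:   # nothing in v's subtree reaches above p,
                        out.append(pe)     # so the tree edge into v is a bridge
                continue
            w, eid = step
            if eid == pe:                  # skip the tree edge we entered through
                continue
            if disc[w] == -1:
                disc[w] = low[w] = timer; timer += 1
                stack.append((w, eid, iter(adj[w])))
            else:
                low[v] = min(low[v], disc[w])
    return [edges[i] for i in out]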

For dynamic graphs, the first and most fundamental property to be studied was that of dynamic connectivity. In general, we can assume the graph has a fixed set of $n$ vertices, and we let $m$ denote the current number of edges in the graph. The first data structure with sublinear update time, an $O(\sqrt{m})$ bound, is due to Frederickson [5], improved to $O(\sqrt{n})$ by the sparsification technique of Eppstein et al. [4]. Later, Frederickson [6] and Eppstein et al. [4] gave a data structure with $O(\sqrt{n})$ update time for two-edge connectivity. Henzinger and King achieved poly-logarithmic expected amortized time [8], that is, an expected amortized update time of $O(\log^3 n)$, and query time $O(\log n/\log\log n)$ for connectivity. And in [9], an expected amortized update time of $O(\log^4 n)$ and worst case query time of $O(\log n)$ for $2$-edge connectivity. The first polylogarithmic deterministic result was by Holm et al. in [10]: an amortized deterministic update time of $O(\log^2 n)$ for connectivity, and $O(\log^4 n)$ for $2$-edge connectivity. The update time for deterministic dynamic connectivity has later been improved to $O(\log^2 n/\log\log n)$ by Wulff-Nilsen [21]. Sacrificing determinism, an $O(\log n\,(\log\log n)^3)$ expected amortized update time structure for connectivity was presented by Thorup [20], and later improved to $O(\log n\,(\log\log n)^2)$ by Huang et al. [12]. In the same paper [20], Thorup obtains an update time of $O(\log^3 n\,\log\log n)$ for deterministic two-edge connectivity. Interestingly, Kapron et al. [13] gave a Monte Carlo-style randomized data structure with polylogarithmic worst case update time for dynamic connectivity, namely, $O(\log^4 n)$ per edge insertion, $O(\log^5 n)$ per edge deletion, and $O(\log n/\log\log n)$ per query. We know of no similar result for bridge finding. The best lower bound known is by Pǎtraşcu et al. [18], which shows a trade-off between update time $t_u$ and query time $t_q$ of $t_q\cdot\log(t_u/t_q)=\Omega(\log n)$ and $t_u\cdot\log(t_q/t_u)=\Omega(\log n)$.

1.1 Our results

We obtain an update time of $O(\log^2 n\,(\log\log n)^2)$ and a query time of $O(\log n/\log\log n)$ for the bridge finding problem:

Theorem 1.

There exists a deterministic data structure for dynamic multigraphs in the word RAM model with word size $\Omega(\log n)$, that uses $O(m+n)$ space, and can handle the following updates and queries, for arbitrary vertices $v$ or arbitrary connected vertices $u$ and $v$:

  • insert and delete edges in $O(\log^2 n\,(\log\log n)^2)$ amortized time,

  • find a bridge in $v$'s connected component or determine that none exists, or find a bridge that separates $u$ from $v$ or determine that none exists, both in $O(\log n/\log\log n)$ worst-case time for the first bridge, or $O(\log n/\log\log n+k)$ worst case time for the first $k$ bridges,

  • find the size of $v$'s connected component in $O(\log n/\log\log n)$ worst-case time, or the size of its $2$-edge connected component in $O(\log n\,(\log\log n)^2)$ worst-case time.

Since a pair of connected vertices are two-edge connected exactly when there is no bridge separating them, we have the following corollary:

Corollary 2.

There exists a data structure for dynamic multigraphs in the word RAM model with word size $\Omega(\log n)$, that can answer two-edge connectivity queries in $O(\log n/\log\log n)$ worst case time and handle insertion and deletion of edges in $O(\log^2 n\,(\log\log n)^2)$ amortized time, with space consumption $O(m+n)$.

Note that the query time is optimal with respect to the trade-off by Pǎtraşcu et al. [18].

As a stepping stone on the way to our main theorem, we show the following:

Theorem 3.

There exists a combinatorial deterministic data structure for dynamic multigraphs on the pointer-machine, without the use of bit-tricks, that uses $O(m+n)$ space, and can handle insertions and deletions of edges in $O(\log^3 n\,\log\log n)$ amortized time, find bridges and determine connected component sizes in $O(\log n)$ worst-case time, and find $2$-edge connected component sizes in $O(\log^2 n\,\log\log n)$ worst-case time.

Our results are based on modifications to the $2$-edge connectivity data structure from [11]. Applying the analogous modification to the biconnectivity data structure from the same paper yields a structure whose amortized update time is almost a logarithmic factor better, with polylogarithmic worst case query time. The details of this modification are beyond the scope of this paper.

1.2 Applications

While dynamic graphs are interesting in their own right, many algorithms and proofs for static graphs rely on decremental or incremental graphs. Take, for example, the problem of deciding whether or not a graph has a unique perfect matching. The following theorem by Kotzig immediately yields a near-linear algorithm when implemented together with a decremental two-edge connectivity data structure with poly-logarithmic update time:

Theorem 4 (A. Kotzig ’59 [16]).

Let $G$ be a connected graph with a unique perfect matching $M$. Then $G$ has a bridge that belongs to $M$.

The near-linear algorithm for finding a unique perfect matching by Gabow, Kaplan, and Tarjan [7] is straightforward: Find a bridge and delete it. If deleting it yields connected components of odd size, it must belong to the matching, and all edges incident to its endpoints may be deleted; if the components have even size, the bridge cannot belong to the matching. Recurse on the components. Thus, to implement Kotzig's Theorem, one has to implement three operations: one that finds a bridge, one that deletes an edge, and one that returns the size of a connected component.
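The following sketch shows how Kotzig's theorem drives this recursion. It is our illustration, not the pseudocode of [7]: `D` stands for a hypothetical decremental bridge-finding interface offering exactly the three operations just listed (plus neighbour enumeration), and all validity checks are omitted.

# Sketch (ours, not the pseudocode of [7]) of finding the unique perfect
# matching via Kotzig's theorem. D is a hypothetical decremental bridge-finding
# structure with find_bridge(v), delete_edge(u, v), component_size(v), and
# neighbours(v).
def unique_perfect_matching(D, component_representatives):
    matching = []
    stack = list(component_representatives)
    while stack:
        v = stack.pop()
        if D.component_size(v) <= 1:
            continue                       # nothing left to match here
        e = D.find_bridge(v)
        if e is None:                      # bridgeless component: by Kotzig's
            return None                    # theorem the matching cannot be unique
        u, w = e
        D.delete_edge(u, w)
        if D.component_size(u) % 2 == 1:   # both sides odd: e is in the matching
            matching.append((u, w))
            for x in (u, w):               # endpoints are now matched, so all
                for y in list(D.neighbours(x)):
                    D.delete_edge(x, y)    # their remaining edges disappear
                    stack.append(y)
        stack.append(u)                    # recurse on both sides of the bridge
        stack.append(w)
    return matching

Every edge is deleted at most once, so with poly-logarithmic amortized update time per operation the total running time is near-linear.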

Another example is Petersen’s theorem [19] which states that any cubic, two-edge connected graph contains a perfect matching. An algorithm by Biedl et al. [2] finds a perfect matching in such graphs in time, by using the Holm et al two-edge connectivity data structure as a subroutine. In fact, one may implement their algorithm and obtain running time , by using as subroutine a data structure for amortized decremental two-edge connectivity with update-time . Here, we thus improve the running time from to .

In 2010, Diks and Stanczyk [3] improved Biedl et al.'s algorithm for perfect matchings in two-edge connected cubic graphs, by having it rely only on dynamic connectivity, not two-edge connectivity, thus obtaining a running time of $O(n\log^2 n)$ for the deterministic version, or $O(n\log n\,(\log\log n)^3)$ expected for the randomized version. However, our data structure still yields a direct improvement to the original algorithm by Biedl et al.

Note that for all these applications to static graphs, it is no disadvantage that our running time is amortized.

1.3 Techniques

As with the previous algorithms, our result is based on top trees [1], a hierarchical tree structure used to represent information about a dynamic tree, in this case, a certain spanning tree of the dynamic graph. The original algorithm of Holm et al. [11] stores counters with each top tree node, where each counter represents the size of a certain subgraph. Our new algorithm applies top trees the same way, representing the same sizes with each top tree node, but with a much more efficient implicit representation of the sizes.

Reanalyzing the algorithm of Holm et al. [11], we show that many of the sizes represented in the top nodes are identical, which implies that they can be represented more efficiently as a list of the actual differences. We then need additional data structures to provide the desired sizes, and we have to be very careful when we move information around as the top tree changes, but overall, we gain almost a log-factor in the amortized time bound, and the algorithm remains purely combinatorial.

Our combinatorial improvement can be composed with the bittrick improvement of Thorup [20]. Thorup represents the same sizes as the original algorithm of Holm et al., but observes that we don't need the exact sizes, just a constant factor approximation. Each approximate size can be represented with only $O(\log\log n)$ bits, and we can therefore pack $\Omega(\log n/\log\log n)$ of them together in a single word. This can be used to reduce the cost of adding two $O(\log n)$-dimensional vectors of approximate sizes from $O(\log n)$ time to $O(\log\log n)$ time. It may not be obvious from the current presentation, but it was a significant technical difficulty when developing our algorithm to make sure we could apply this technique and get the associated speedup.

The “natural” query time of our algorithm is the same as its update time. In order to reduce the query time, we observe that we can augment the main algorithm to maintain a secondary structure that can answer queries much faster. This can be used to reduce the query time for the combinatorial algorithm to $O(\log n)$, and for the full algorithm to the optimal $O(\log n/\log\log n)$.

The secondary structure needed for the optimal query time uses top trees of degree $\Theta(\log^{\varepsilon}n)$. While the use of non-binary trees is nothing new, we believe we are the first to show that such top trees can be maintained in the “natural” time.

Finally, we show a general technique for getting down to linear space, using top trees whose base clusters have non-constant size, chosen so that the per-cluster information is amortized away.

1.4 Article outline

In Section 2, we recall how [11] fundamentally solves two-edge connectivity via a reduction to a certain set of operations on a dynamic forest. In Section 3, we recall how top trees can be used to maintain information in a dynamic forest, as shown in [1]. In Sections 4, 5, and 6, we describe how to support the operations on a dynamic tree needed to make a combinatorial algorithm for bridge finding, as stated in Theorem 3. Then, in Section 7, we show how to use approximate counting to get down to $O(\log^2 n\,(\log\log n)^2)$ update time, thus reaching the update time of Theorem 1. We then revisit top trees in Section 8, and introduce the notion of $q$-ary top trees, as well as a general trick to save space in complex top tree applications. We proceed to show how to obtain the optimal query time in Section 9. Finally, in Section 10, we show how to achieve optimal space, by only storing cluster information with large clusters, and otherwise calculating it from scratch when needed.

2 Reduction to operations on dynamic trees

In [11], two-edge connectivity was maintained via operations on dynamic trees, as follows. For each edge $e$ of the graph, the algorithm explicitly maintains a level, $\ell(e)$, between $0$ and $\ell_{\max}=\lfloor\log_2 n\rfloor$, such that the edges at level $\ell_{\max}$ form a spanning forest $T$, and such that the $2$-edge-connected components in the subgraph induced by the edges of level at least $i$ have at most $\lfloor n/2^{i}\rfloor$ vertices. For each edge $e$ in the spanning forest, define the cover level, $c(e)$, as the maximum level of a non-tree edge crossing the cut defined by removing $e$ from $T$, or $-1$ if no such edge exists. The cover levels are only maintained implicitly, because each edge insertion and deletion can change the cover levels of up to $\Theta(n)$ tree edges. Note that the bridges are exactly the edges in the spanning forest with cover level $-1$. The algorithm explicitly maintains the spanning forest using a dynamic tree structure supporting the following operations (a sketch of how insertions and bridge queries reduce to these operations follows the list):

  1. Link$(v,w,e)$. Add the edge $e=(v,w)$ to the dynamic tree, implicitly setting its cover level to $-1$.

  2. Cut$(e)$. Remove the edge $e$ from the dynamic tree.

  3. Connected$(v,w)$. Returns true if $v$ and $w$ are in the same tree, false otherwise.

  4. Cover$(v,w,i)$. For each edge on the tree path from $v$ to $w$ whose cover level is less than $i$, implicitly set the cover level to $i$.

  5. Uncover$(v,w,i)$. For each edge on the tree path from $v$ to $w$ whose cover level is at most $i$, implicitly set the cover level to $-1$.

  6. CoverLevel$(v)$. Return the minimal cover level of any edge in the tree containing $v$.

  7. CoverLevel$(v,w)$. Return the minimal cover level of an edge on the path from $v$ to $w$. If $v=w$, we define CoverLevel$(v,v)=\ell_{\max}$.

  8. MinCoveredEdge$(v)$. Return any edge in the tree containing $v$ with minimal cover level.

  9. MinCoveredEdge$(v,w)$. Returns a tree-edge on the path from $v$ to $w$ whose cover level is CoverLevel$(v,w)$.

  10. AddLabel$(v,l,i)$. Associate the user label $l$ with the vertex $v$ at level $i$.

  11. RemoveLabel$(l)$. Remove the user label $l$ from its vertex $v$.

  12. FindFirstLabel$(v,w,i)$. Find a user label at level $i$ such that the associated vertex $u$ has CoverLevel$(v,u)\ge i$ and minimizes the distance from $v$ to $u$.

  13. FindSize$(v,w,i)$. Find the number of vertices $u$ such that CoverLevel$(u,x)\ge i$ for some $x$ on the tree path from $v$ to $w$. Note that FindSize$(v,v,-1)$ is just the number of vertices in the tree containing $v$.
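To illustrate the reduction, here is a hedged sketch (ours; the paper's full pseudocode is in Appendix A) of how insertions and the bridge queries translate into the operations above, assuming the convention that newly inserted non-tree edges enter at level $0$; deletions, which promote failed replacement candidates level by level, are omitted.

# Hedged sketch (ours) of how insertions and bridge queries reduce to
# operations 1-13 of the dynamic tree structure F.
def insert(F, u, v, e):
    if not F.connected(u, v):
        F.link(u, v, e)            # e becomes a tree edge, cover level -1
    else:
        F.cover(u, v, 0)           # a non-tree edge covers its tree path
        F.add_label(u, e, 0)       # and is registered as a replacement
        F.add_label(v, e, 0)       # candidate at both endpoints

def find_bridge_separating(F, u, v):
    """A bridge separating connected u and v, or None if 2-edge connected."""
    if F.cover_level(u, v) >= 0:   # every path edge is covered
        return None
    return F.min_covered_edge(u, v)

def find_bridge_in_component(F, v):
    """Some bridge in v's component, or None if it is 2-edge connected."""
    if F.cover_level(v) >= 0:
        return None
    return F.min_covered_edge(v)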

Lemma 5 (Essentially the high level algorithm from [11]).

There exists a deterministic reduction for dynamic graphs with $n$ nodes, that, when starting with an empty graph, supports any sequence of $k$ Insert or Delete operations using:

  • $O(k)$ calls to Link, Cut, Uncover, and CoverLevel,

  • $O(k\log n)$ calls to Connected, Cover, AddLabel, RemoveLabel, FindFirstLabel, and FindSize.

And that can answer FindBridge queries using a constant number of calls to Connected, CoverLevel, and MinCoveredEdge.

Proof.

See Appendix A for a proof and pseudocode. ∎

[Table 1 lists, for each tree operation (1 Link, 2 Cut, 3 Connected, 4 Cover, 5 Uncover, 6 CoverLevel$(v)$, 7 CoverLevel$(v,w)$, 8 MinCoveredEdge$(v)$, 9 MinCoveredEdge$(v,w)$, 10 AddLabel, 11 RemoveLabel, 12 FindFirstLabel, 13 FindSize), the asymptotic worst case time per call using the structures of Sections 4, 5, 6, 7, and 9; a “-” entry means the operation is not supported by that structure.]

Table 1: Overview of the worst case times achieved for each tree operation by the data structures presented in this paper. In the last column, $\varepsilon>0$ can be chosen arbitrarily.

The algorithm in [11] used a dynamic tree structure supporting all the operations in $O(\log^3 n)$ time, leading to an $O(\log^4 n)$ algorithm for bridge finding. Thorup [20] showed how to improve the time for the dynamic tree structure to $O(\log^2 n\,\log\log n)$, leading to an $O(\log^3 n\,\log\log n)$ algorithm for bridge finding.

Throughout this paper, we will show a number of data structures for dynamic trees, implementing various subsets of these operations while ignoring the rest (see Table 1). Define a CoverLevel structure to be one that implements operations 1–9, and a FindSize structure to be a CoverLevel structure that additionally implements the FindSize operation. Finally, we define a FindFirstLabel structure to be one that implements operations 1–12 (all except for FindSize).

The point is that we can get different trade-offs between the operation costs in the different structures, and that we can combine them into a single structure supporting all the operations using the following lemma.

Lemma 6 (Folklore).

Given two data structures $A$ and $B$ for the same problem, consisting of a set $U$ of update operations and a set $Q$ of query operations, suppose the respective update times are $u_A(o)$ and $u_B(o)$ for $o\in U$, and the query times are $q_A(o)$ and $q_B(o)$ for $o\in Q$. Then we can create a combined data structure running in time $O(u_A(o)+u_B(o))$ for each update operation $o$, and time $O(\min\{q_A(o),q_B(o)\})$ for each query operation $o$.

Proof.

Simply maintain both structures in parallel. Call all update operations on both structures, and call only the faster structure for each query. ∎
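As a sketch, the combined structure can be as simple as the following generic wrapper (ours):

# The folklore combination as a tiny generic wrapper (ours): updates go to
# both structures, each query goes to whichever structure answers it faster.
class Combined:
    def __init__(self, a, b, faster_for):
        self.a, self.b = a, b
        self.faster_for = faster_for           # query name -> 'a' or 'b'
    def update(self, op, *args):
        getattr(self.a, op)(*args)             # cost u_A(op) + u_B(op)
        getattr(self.b, op)(*args)
    def query(self, op, *args):
        s = self.a if self.faster_for[op] == 'a' else self.b
        return getattr(s, op)(*args)           # cost min(q_A(op), q_B(op))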

Proof of Theorem 3.

Use the CoverLevel structure from Section 4, the FindSize structure from Section 5, and the FindFirstLabel structure from Section 6, and combine them into a single structure using Lemma 6. Then the reduction from Lemma 5 gives the correct running times but uses superlinear space. To get linear space, modify the FindSize and FindFirstLabel structures as described in Section 10. ∎

Proof of Theorem 1.

Use the CoverLevel structure from Section 9, the FindSize structure from Section 5, as modified in Sections 7 and 10, and the FindFirstLabel structure from Section 6, and combine them into a single structure using Lemma 6. Then the reduction from Lemma 5 gives the required bounds. ∎

3 Top trees

A top tree is a data structure for maintaining information about a dynamic forest. Given a tree $T$, a top tree $\mathcal{T}$ for $T$ is a rooted tree over subtrees of $T$, such that each non-leaf node is the union of its children. The root of $\mathcal{T}$ is $T$ itself, its leaves are the edges of $T$, and its nodes are clusters, which we will define in two steps. For any subgraph $H$ of a graph $G$, the boundary $\partial H$ consists of the vertices of $H$ that have a neighbour outside $H$. A cluster is a connected subgraph with a boundary of size no larger than $2$. We call them point clusters if the boundary has size at most $1$, and path clusters otherwise. For a path cluster $C$ with boundary $\{a,b\}$, denote by $\pi(C)$ the tree path between $a$ and $b$, also denoted the cluster path of $C$. Similarly, for a point cluster $C$ with boundary vertex $a$, $\pi(C)$ is the trivial path consisting solely of $a$. The top forest supports dynamic changes to the forest: insertion (link) or deletion (cut) of edges. Furthermore, it supports the expose operation: expose$(v)$, or expose$(v,w)$, returns a top tree where $v$, or both $v$ and $w$, are considered boundary vertices of every cluster containing them, including the root cluster. All operations are supported by performing a series of destroy, create, split, and merge operations: split destroys a node of the top tree and replaces it with its two children, while merge creates a parent as the union of its children. Destroy and create are the base cases for split and merge, respectively. Note that clusters can only be merged if their union has a boundary of size at most $2$.

A top tree is binary if each node has at most two children. We call a non-leaf node heterogeneous if it has both a point cluster and a path cluster among its children, and homogeneous otherwise.
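The following minimal Python rendering (ours, for orientation only; see [1] for the real data structure) fixes the vocabulary of clusters, boundaries, and merges used throughout:

# Minimal rendering (ours) of clusters and merging with the boundary-size
# invariant; the actual top tree machinery of [1] is far more involved.
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class Cluster:
    boundary: Tuple            # 1 vertex (point cluster) or 2 (path cluster)
    children: List = field(default_factory=list)  # empty for leaf (edge) clusters
    parent: Optional["Cluster"] = None

    @property
    def is_path_cluster(self):
        return len(self.boundary) == 2

def merge(children, boundary):
    """Create the parent cluster as the union of its children."""
    assert len(boundary) <= 2   # clusters have boundary size at most 2
    c = Cluster(tuple(boundary), list(children))
    for ch in children:
        ch.parent = c
    return c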

Theorem 7 (Alstrup, Holm, de Lichtenberg, Thorup [1]).

For a dynamic forest on $n$ vertices we can maintain binary top trees of height $O(\log n)$ supporting each link, cut or expose with a sequence of $O(1)$ calls to create or destroy, and $O(\log n)$ calls to merge or split. These top tree modifications are identified in $O(\log n)$ time. The space usage of the top trees is linear in the size of the dynamic forest.

4 A CoverLevel structure

In this section we show how to maintain a top tree supporting the CoverLevel operations. This part is essentially the same as in [10, 11] (with minor corrections), but is included here for completeness because the rest of the paper builds on it. Pseudocode for maintaining this structure is given in Appendix B.

For each cluster $C$ we want to maintain the following two integers and up to two edges:

  • $\operatorname{cover}(C)$: the minimal cover level of an edge on $\pi(C)$, or $\infty$ if $\pi(C)$ contains no edges,
  • $\operatorname{globalcover}(C)$: the minimal cover level of an edge in $C$ that is not on $\pi(C)$, or $\infty$ if no such edge exists,
  • $\operatorname{minpathedge}(C)$: an edge on $\pi(C)$ with cover level $\operatorname{cover}(C)$, if any,
  • $\operatorname{minglobaledge}(C)$: an edge in $C$ off $\pi(C)$ with cover level $\operatorname{globalcover}(C)$, if any.

Then

CoverLevel$(v)=\operatorname{globalcover}(C)$ and MinCoveredEdge$(v)=\operatorname{minglobaledge}(C)$, where $C$ is the point cluster returned by Expose$(v)$,
CoverLevel$(v,w)=\operatorname{cover}(C)$ and MinCoveredEdge$(v,w)=\operatorname{minpathedge}(C)$, where $C$ is the path cluster returned by Expose$(v,w)$.

The problem is that when handling Cover or Uncover we cannot afford to propagate the information all the way down to the edges. When these operations are called on a path cluster $C$, we instead implement them directly in $C$, and then store “lazy information” in $C$ about what should be propagated down in case we want to look at the descendants of $C$. The exact additional information we store for a path cluster $C$ is

  • $\operatorname{cover}^{-}(C)$: the level of a pending lazy uncover on $\pi(C)$, initially $-\infty$,
  • $\operatorname{cover}^{+}(C)$: the level of a pending lazy cover on $\pi(C)$, initially $-\infty$.

We maintain the invariant that $\operatorname{cover}^{+}(C)\le\operatorname{cover}(C)$, and that the lazy information has already been applied to the data stored in $C$ itself, so it only concerns the proper descendants of $C$.

This allows us to implement Cover$(v,w,i)$ by first calling Expose$(v,w)$, and then updating the returned path cluster $C$ as follows:

$$\operatorname{cover}(C):=\max\{\operatorname{cover}(C),i\},\qquad \operatorname{cover}^{+}(C):=\max\{\operatorname{cover}^{+}(C),i\}.$$

Similarly, we can implement Uncover$(v,w,i)$ by first calling Expose$(v,w)$, and then updating the returned path cluster $C$ as follows if $\operatorname{cover}(C)\le i$:

$$\operatorname{cover}(C):=-1,\qquad \operatorname{cover}^{-}(C):=\max\{\operatorname{cover}^{-}(C),i\},\qquad \operatorname{cover}^{+}(C):=-1.$$

Together, $\operatorname{cover}^{-}(C)$ and $\operatorname{cover}^{+}(C)$ represent the fact that for each path descendant $D$ of $C$, if $\operatorname{cover}(D)\le\operatorname{cover}^{-}(C)$ [Footnote: In [10, 11] this condition is erroneously stated with $\operatorname{cover}^{+}(C)$ in place of $\operatorname{cover}^{-}(C)$.], we need to set $\operatorname{cover}(D):=\operatorname{cover}^{+}(C)$, and otherwise $\operatorname{cover}(D):=\max\{\operatorname{cover}(D),\operatorname{cover}^{+}(C)\}$. In particular, whenever a path cluster $C$ is split, for each path child $D$ of $C$, if $\operatorname{cover}^{+}(D)\le\operatorname{cover}^{-}(C)$ we need to set

$$\operatorname{cover}^{-}(D):=\max\{\operatorname{cover}^{-}(D),\operatorname{cover}^{-}(C)\},\qquad \operatorname{cover}^{+}(D):=\operatorname{cover}^{+}(C).$$

Furthermore, if $\operatorname{cover}(D)\le\operatorname{cover}^{-}(C)$ we need to set $\operatorname{cover}(D):=\operatorname{cover}^{+}(C)$, and otherwise $\operatorname{cover}(D):=\max\{\operatorname{cover}(D),\operatorname{cover}^{+}(C)\}$ and $\operatorname{cover}^{+}(D):=\max\{\operatorname{cover}^{+}(D),\operatorname{cover}^{+}(C)\}$.

Note that only $\operatorname{cover}(D)$ among the summary fields is affected. None of $\operatorname{globalcover}(D)$, $\operatorname{minpathedge}(D)$, or $\operatorname{minglobaledge}(D)$ depend directly on the lazy information.

Now suppose we have clusters $C_1,\ldots,C_k$ [Footnote: $k\le 2$ for now, but we will reuse this in Section 9 with a higher-degree top tree.] that we want to merge into a single new cluster $C$. For $1\le j\le k$ define

$$\operatorname{pathcover}(C_j):=\begin{cases}\operatorname{cover}(C_j)&\text{if }\pi(C_j)\subseteq\pi(C),\\\infty&\text{otherwise,}\end{cases}\qquad \operatorname{offcover}(C_j):=\begin{cases}\operatorname{globalcover}(C_j)&\text{if }\pi(C_j)\subseteq\pi(C),\\\min\{\operatorname{cover}(C_j),\operatorname{globalcover}(C_j)\}&\text{otherwise.}\end{cases}$$

Note that for a point-cluster $C_j$, $\operatorname{pathcover}(C_j)$ is always $\infty$.

We then have the following relations between the data of the parent and the data of its children:

$$\operatorname{cover}(C)=\min_{1\le j\le k}\operatorname{pathcover}(C_j),\qquad \operatorname{globalcover}(C)=\min_{1\le j\le k}\operatorname{offcover}(C_j),$$

with $\operatorname{minpathedge}(C)$ and $\operatorname{minglobaledge}(C)$ taken as witness edges from children attaining these minima.
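In code, the merge relations take the following form (a sketch under our naming conventions; `on_path[j]` encodes whether $\pi(C_j)\subseteq\pi(C)$):

# Sketch of Merge for the CoverLevel data (our rendering of the relations
# above); point clusters have cover == INF, so they need no special case.
import math
INF = math.inf

def merge_coverlevel(children, on_path):
    cover, globalcover = INF, INF
    minpathedge = minglobaledge = None
    for child, on in zip(children, on_path):
        if on:                           # pi(child) is part of pi(C)
            if child.cover < cover:
                cover, minpathedge = child.cover, child.minpathedge
            if child.globalcover < globalcover:
                globalcover, minglobaledge = child.globalcover, child.minglobaledge
        else:                            # the whole child hangs off pi(C)
            for c, e in ((child.cover, child.minpathedge),
                         (child.globalcover, child.minglobaledge)):
                if c < globalcover:
                    globalcover, minglobaledge = c, e
    return cover, globalcover, minpathedge, minglobaledge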

Analysis

For any constant-degree top tree, Merge and Split with this information take constant time, and thus, all operations in the CoverLevel structure in this section take $O(\log n)$ time. Each cluster uses $O(1)$ space, so the total space used is $O(n)$.

Note that we can extend this so that for each cluster $C$, if all the least-covered edges (on or off the cluster path) lie in the same child of $C$, we have a pointer to the closest descendant of $C$ that is either a base cluster or has more than one child containing least-covered edges. We can use this structure to find the first $k$ bridges in $O(\log n+k)$ time.

5 A FindSize Structure

We now proceed to show how to extend the CoverLevel structure from Section 4 to support FindSize in $O(\log n\,\log\log n)$ time per Merge and Split. Later, in Section 7 we will show how to reduce this to $O((\log\log n)^2)$ time per Merge and Split. See Appendix C for pseudocode.

We will use the idea of having a single vertex label for each vertex, which is a point cluster with no edges, having that vertex as boundary vertex and containing all relevant information about the vertex. The advantage of this is that it simplifies handling of the common boundary vertex during a merge by making sure it is uniquely assigned to (and accounted for by) one of the children.

Let $C$ be a cluster in $\mathcal{T}$, let $v$ be a vertex in $C$, and let $-1\le i\le\ell_{\max}$. Define

$$\operatorname{size}_i(C,v):=\bigl|\{u\in C\mid\text{the tree path from }u\text{ to }v\text{ has cover level at least }i\}\bigr|.$$

For convenience, we will combine all the levels together into a single vector [Footnote: All vectors and matrices in this section have indices ranging from $-1$ to $\ell_{\max}$.]

$$\operatorname{size}(C,v):=\bigl(\operatorname{size}_{-1}(C,v),\ldots,\operatorname{size}_{\ell_{\max}}(C,v)\bigr).$$

Let $P_1,\ldots,P_r$ be the point clusters that would result from deleting the edges of $\pi(C)$ from $C$, and let $u_1,\ldots,u_r$ be their boundary vertices on $\pi(C)$. Then we can define the vector

$$\operatorname{partsize}(C):=\sum_{j=1}^{r}\operatorname{size}(P_j,u_j).$$

Note that with this definition, if $C$ is a point cluster with boundary vertex $v$ then $\pi(C)$ is trivial, so even when $C$ contains many edges we have $\operatorname{partsize}(C)=\operatorname{size}(C,v)$.

So for any cluster $C$, the vector $\operatorname{partsize}(C)$ is what we want to maintain: FindSize$(v,w,i)$ is simply entry $i$ of $\operatorname{partsize}(C)$, where $C$ is the path cluster returned by Expose$(v,w)$.

The main difficulty turns out to be computing the $\operatorname{partsize}$ vector for the heterogeneous point clusters. To help with that we will for each path cluster $C$ and boundary vertex $v\in\partial C$ additionally maintain the size vector

$$\operatorname{diagsize}(C,v):=\sum_{u\in\pi(C)}I_{\le\operatorname{cover}(v\cdots u)}\cdot\operatorname{size}(P(u),u),$$

where $P(u)$ is the point cluster at $u$ obtained by deleting the edges of $\pi(C)$ from $C$, $\operatorname{cover}(v\cdots u)$ denotes the minimal cover level on the subpath of $\pi(C)$ from $v$ to $u$, and $I_{\le x}$ is a diagonal matrix whose entries are defined (using Iverson brackets, see [14]) by $(I_{\le x})_{ii}:=[\,i\le x\,]$.

Note that the stored versions of these vectors are independent of $\operatorname{cover}^{-}(C)$ and $\operatorname{cover}^{+}(C)$ as defined in Section 4. The corresponding “clean” vectors are not explicitly stored, but computed when needed by applying the pending lazy information to the stored representation.

The point of these definitions is that each path cluster inherits most of its $\operatorname{partsize}$ and $\operatorname{diagsize}$ vectors from its children, and we can use this fact to gain almost a log-factor in speed compared to [11].

Merging along a path (the general case)

Let $C_1,\ldots,C_k$ be clusters that we want to merge into a new cluster $C$, and suppose each $\pi(C_j)$ lies on $\pi(C)$. This covers all types of merge in a normal binary top tree, except for the heterogeneous point clusters. Here $\operatorname{partsize}(C)=\sum_{j}\operatorname{partsize}(C_j)$, and for $k=2$ with $\partial C_1=\{a,b\}$ and $\partial C_2=\{b,c\}$ we get

$$\operatorname{diagsize}(C,a)=\operatorname{diagsize}(C_1,a)+I_{\le\operatorname{cover}(C_1)}\cdot\operatorname{diagsize}(C_2,b),$$

and symmetrically for $\operatorname{diagsize}(C,c)$. If one of the children is a point cluster, its $\operatorname{diagsize}$ term is simply its $\operatorname{partsize}$ vector multiplied by the appropriate $I_{\le x}$ matrix.

Merging off the path (heterogeneous point clusters)

Now let $A$ be a path cluster with $\partial A=\{a,b\}$, let $B$ be a point cluster with $\partial B=\{b\}$, and suppose we want to merge them into a new point cluster $C$ with $\partial C=\{a\}$. Then

$$\operatorname{partsize}(C)=\operatorname{diagsize}(A,a)+I_{\le\operatorname{cover}(A)}\cdot\operatorname{partsize}(B),$$

where $\operatorname{diagsize}(A,a)$ and $\operatorname{cover}(A)$ denote the clean values, with any pending lazy information applied.

Analysis

The advantage of our new approach is that each merge or split is a constant number of splits, concatenations, searches, and sums over $O(\log n)$-length lists of $O(\log n)$-dimensional vectors. By representing each list as an augmented balanced binary search tree (see e.g. [15, pp. 471–475]), we can implement each of these operations in $O(\log n\,\log\log n)$ time, using $O(\log^2 n)$ space per cluster, as follows. Let $C$ be a cluster and let $v\in\partial C$. The tree for $(C,v)$ has one node for each key $x$ such that

$$d(C,v,x):=\sum_{\substack{u\in\pi(C)\\\operatorname{cover}(v\cdots u)=x}}\operatorname{size}(P(u),u)$$

is nonzero, augmented with the following additional information: the sum of the $d$-vectors in the node's subtree. With this representation, entry $i$ of $\operatorname{diagsize}(C,v)$ is entry $i$ of the suffix sum $\sum_{x\ge i}d(C,v,x)$.

Each split, concatenate, search, or sum operation can be implemented such that it touches $O(\log\log n)$ nodes, and the time for each node update is dominated by the time it takes to add two $O(\log n)$-dimensional vectors, which is $O(\log n)$. The total time for each Cover, Uncover, Link, Cut, or FindSize is therefore $O(\log^2 n\,\log\log n)$, and the total space used for the structure is $O(n\log^2 n)$.
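A sketch of the augmented search tree (ours; keys are the levels $x$ with $d(C,v,x)\neq 0$):

# Sketch (ours) of the augmented balanced search tree over the difference
# vectors: every node stores the vector sum of its subtree, so splits,
# concatenations and suffix sums touch O(depth) = O(log log n) nodes, each
# paying one O(log n)-dimensional vector addition.
class Node:
    __slots__ = ("key", "vec", "left", "right", "subsum")
    def __init__(self, key, vec):
        self.key, self.vec = key, vec
        self.left = self.right = None
        self.subsum = list(vec)        # vector sum over the whole subtree

def pull(t):
    """Recompute t.subsum from its children after a rotation or relink."""
    s = list(t.vec)
    for c in (t.left, t.right):
        if c is not None:
            s = [a + b for a, b in zip(s, c.subsum)]
    t.subsum = s

def suffix_sum(t, i, dim):
    """Sum of all vectors with key >= i, e.g. to read off diagsize entries."""
    acc = [0] * dim
    while t is not None:
        if t.key >= i:
            if t.right is not None:
                acc = [a + b for a, b in zip(acc, t.right.subsum)]
            acc = [a + b for a, b in zip(acc, t.vec)]
            t = t.left
        else:
            t = t.right
    return acc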

Comparison to previous algorithms

For any path cluster $C$ and vertex $v\in\partial C$, let $M(C,v)$ be the matrix whose $i$th column is defined by

$$M(C,v)\,e_i:=\sum_{\substack{u\in\pi(C)\\\operatorname{cover}(v\cdots u)\ge i}}\operatorname{size}(P(u),u)=\sum_{x\ge i}d(C,v,x).$$

Then $M(C,v)$ is essentially the matrix maintained for path clusters in [10, 20, 11]. Notice that

$$\operatorname{diagsize}(C,v)=\operatorname{diag}(M(C,v)),$$

which explains our choice of the “diag” prefix.

6 A FindFirstLabel Structure

We will show how to maintain information that allows us to implement FindFirstLabel, the operation that allows us to inspect the replacement edge candidates at a given level. The implementation uses a “destructive binary search, with undo” strategy, similar to the non-local search introduced in [1].

The idea is to maintain enough information in each cluster to determine whether it contains a result. Then we can start by calling Expose$(v,w)$, and repeatedly split the root cluster containing the answer until we arrive at the correct label. After that, we simply undo the splits (using the appropriate merges), and finally undo the Expose.

Just as in the FindSize structure, we will use vertex labels to store all the information pertinent to a vertex. We store all the added user labels for each vertex in the label object for that vertex in the base level of the top tree. For each level where the vertex has an associated user label, we keep a doubly linked list of those labels, and we keep a singly-linked list of these nonempty lists. Thus, FindFirstLabel boils down to finding the first vertex label that has an associated user label at the right level. Once we have that vertex label, the desired user label can be found in constant time.
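A minimal sketch of this bookkeeping (ours; we use a dictionary from level to list head where the paper chains the nonempty lists in a singly-linked list):

# Sketch (ours) of the per-vertex label lists backing AddLabel/RemoveLabel.
class UserLabel:
    def __init__(self, data, level):
        self.data, self.level = data, level
        self.prev = self.next = None      # neighbours within the same level

class VertexLabel:                         # the label point cluster of a vertex
    def __init__(self, v):
        self.v = v
        self.by_level = {}                 # level -> head of doubly linked list

    def add(self, label):                  # AddLabel bookkeeping: O(1)
        head = self.by_level.get(label.level)
        label.next, label.prev = head, None
        if head is not None:
            head.prev = label
        self.by_level[label.level] = label

    def remove(self, label):               # RemoveLabel bookkeeping: O(1)
        if label.prev is not None:
            label.prev.next = label.next
        elif label.next is not None:
            self.by_level[label.level] = label.next
        else:
            del self.by_level[label.level] # the level list became empty
        if label.next is not None:
            label.next.prev = label.prev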

Let $C$ be a cluster in $\mathcal{T}$, and let $v\in\partial C$. Define bit vectors [Footnote: Here, $[\cdot]$ is the Iverson bracket (see [14]), and $\vee$ denotes bitwise OR.] analogous to the size vectors of Section 5:

$$\operatorname{find}_i(C,v):=[\,\exists u\in C\colon u\text{ has a user label at level }i\text{ and the tree path from }u\text{ to }v\text{ has cover level at least }i\,],$$

together with the corresponding $\operatorname{partfind}(C)$ and $\operatorname{diagfind}(C,v)$ bit vectors, defined from $\operatorname{find}$ exactly as $\operatorname{partsize}$ and $\operatorname{diagsize}$ were defined from $\operatorname{size}$.

Maintaining the $\operatorname{partfind}$ bit vectors, and the corresponding $\operatorname{diagfind}$ and difference bit vectors, can be done completely analogously to the way we maintain the vectors used for FindSize, with the minor change that we use bitwise OR on bit vectors instead of vector addition.

Updating the vertex label cluster $L_v$ in the top tree during AddLabel$(v,l,i)$, or a RemoveLabel$(l)$ where $l$ is associated with $v$, can be done by first calling detach$(L_v)$, then updating the linked lists containing the user labels and setting the bits of $\operatorname{find}(L_v,v)$ accordingly, and then reattaching $L_v$. Finally, FindFirstLabel$(v,w,i)$ can be implemented in the way already described, by examining the find bit vectors of the clusters encountered during the search. Note that even though we don't explicitly maintain it, for any cluster $C$ and any $v\in\partial C$ we can easily compute the needed bits of the clean $\operatorname{find}$ vectors on the fly.

In general, let $C_1,\ldots,C_k$ be the clusters resulting from an expose or split, and let $v_1,\ldots,v_k$ be the relevant boundary vertices (not necessarily distinct). Then the bit vectors needed to guide the search within each $C_j$ can be assembled from the stored bit vectors of the $C_j$ and the cover values of the clusters separating $C_j$ from the search path, exactly parallel to the size computations in Section 5.

Analysis

By the method described in this section, AddLabel, RemoveLabel, and FindFirstLabel are supported in $O(\log^2 n\,\log\log n)$ worst-case time.

This can be reduced to $O(\log n\,\log\log n)$ by realizing that each $O(\log n)$-dimensional bit vector fits into $O(1)$ words, and that each bitwise OR therefore only takes constant time.

The total space used for a FindFirstLabel structure with $n$ vertices and $k$ user labels is $O(n+k)$ plus the space for $O(n\log n)$ bit vectors. If we assume a word size of $\Omega(\log n)$, this is just $O(n\log n+k)$ in total. If we disallow bit packing tricks, we may have to use $O(n\log^2 n+k)$ space.

7 Approximate counting

As noted in [20], we don't need to use the exact component sizes at each level. If $s$ is the actual correct size, it is sufficient to store an approximate value $\tilde{s}$ such that $s\le\tilde{s}\le(1+\varepsilon)s$, for some constant $\varepsilon>0$. Then we are no longer guaranteed that component sizes drop by a factor of $2$ at each level, but rather get a factor of $2/(1+\varepsilon)$. This increases the number of levels to $\log_{2/(1+\varepsilon)}n$ (which is still $O(\log n)$), but leaves the algorithm otherwise unchanged. Suppose we represent each size as a floating point value with a $b$-bit mantissa, for some $b$ to be determined later. For each addition of such numbers the relative error increases by a factor of $(1+2^{1-b})$. The relative error at the root of a tree of additions of height $h$ is $(1+2^{1-b})^h$, thus to get the required precision it is sufficient to set $b=\Theta(\log h)$. In our algorithm(s) the depth of calculation is clearly upper bounded by $O(h\log n)$, where $h=O(\log n)$ is the height of the top tree. It follows that some $b=O(\log\log n)$ is sufficient. Since the maximum size of a component is $n$, the exponent has size at most $\log_2 n$, and can be represented in $O(\log\log n)$ bits. Thus storing the sizes as $O(\log\log n)$-bit floating point values is sufficient to get the required precision. Assuming a word size of $\Omega(\log n)$ this lets us store $\Omega(\log n/\log\log n)$ sizes in a single word, and to add them in parallel in constant time.
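The following sketch (ours) illustrates the rounding scheme on plain integers; rounding up guarantees $s\le\tilde{s}$, and a $b$-bit mantissa bounds the relative error per addition by $2^{1-b}$:

# Sketch (ours) of the approximate sizes: keep only the top B mantissa bits
# and round up, so sizes never shrink; B = Theta(log log n) suffices here.
B = 6

def fl(x):
    """Round the nonnegative integer x up to a B-bit mantissa."""
    if x == 0:
        return 0
    e = max(0, x.bit_length() - B)        # number of low bits to drop
    return ((x + (1 << e) - 1) >> e) << e

def fl_add(x, y):
    return fl(x + y)   # relative error per addition at most 2**(1 - B)

# After a tree of additions of height h the accumulated relative error is at
# most (1 + 2**(1-B))**h, so B = Theta(log h) keeps it a constant factor.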

Analysis

We will show how this applies to our FindSize structure from Section 5. The bottlenecks in the algorithm all have to do with operations on $O(\log n)$-dimensional size vectors. In particular, the amortized update time is dominated by the time to do vector additions, and multiplications of a vector by an $I_{\le x}$ matrix. With approximate counting, the vector additions each take $O(\log\log n)$ time. Multiplying a size vector by $I_{\le x}$ just zeroes out the entries at indices above $x$, that is, it masks out the corresponding fields.

And clearly this operation can also be done on $\Omega(\log n/\log\log n)$ sizes in parallel when they are packed into a single word. With approximate counting, each multiplication by $I_{\le x}$ therefore also takes $O(\log\log n)$ time. Thus the time per operation is reduced to $O(\log n\,(\log\log n)^2)$.

The space consumption of the data structure is $O(n)$ plus the space needed to store the $O(n\log n)$ $O(\log n)$-dimensional size vectors. With approximate counting that drops to $O(\log\log n)$ words per vector, or $O(n\log n\,\log\log n)$ in total.

Comparison to previous algorithms

Combining the modified FindSize structure with the CoverLevel structure from Section 4 and the FindFirstLabel structure from Section 6 gives us the first bridge-finding structure with $O(\log^2 n\,(\log\log n)^2)$ amortized update time. This structure uses $O(m+n\log n\,\log\log n)$ space, and uses $O(\log n)$ time for FindBridge and Size queries, and $O(\log n\,(\log\log n)^2)$ for $2$-edge-size queries.

For comparison, applying this trick in the obvious way to the basic $O(\log^4 n)$ time, $O(m+n\log^2 n)$ space algorithm from [10, 11] gives the $O(\log^3 n\,\log\log n)$ time, $O(m+n\log n\,\log\log n)$ space algorithm briefly mentioned in [20].

8 Top trees revisited

We can combine the tree data structures presented so far to build a data structure for bridge-finding that has update time $O(\log^2 n\,(\log\log n)^2)$, query time $O(\log n\,(\log\log n)^2)$, and uses $O(m+n\log n\,\log\log n)$ space.

In order to get faster queries and linear space, we need to use top trees in an even smarter way. For this, we need the full generality of the top trees described in [1].

8.1 Level-based top trees, labels, and fat-bottomed trees

As described in [1], we may associate a level with each cluster, such that the leaves of the top tree have level $0$, and such that the parent of a level-$i$ cluster is on level $i+1$. As observed in Alstrup et al. [1, Theorem 5.1], one may also associate one or more labels with each vertex. For any vertex $v$, we may handle the label(s) of $v$ as point clusters with $v$ as their boundary vertex and no edges. Furthermore, as described in [1], we need not have single edges on the bottom-most level. We may generalize this to instead have clusters of size at most $s$ as the leaves of the top tree.

Theorem 8 (Alstrup, Holm, de Lichtenberg, Thorup [1]).

Consider a fully dynamic forest and let $s$ be a positive integer parameter. For the trees in the forest, we can maintain levelled top trees whose base clusters are of size at most $s$ and such that if a tree has size $\sigma$, it has height $O(\log\sigma)$ and $O(\sigma/(s\,c^{\,i}))$ clusters on level $i$. Here, $c>1$ is a positive constant. Each link, cut, attach, detach, or expose operation is supported with $O(1)$ creates and destroys, and $O(1)$ joins and splits on each positive level. If the involved trees have total size $\sigma$, this involves $O(\log\sigma)$ top tree modifications, all of which are identified in $O(s+\log\sigma)$ time. For a composite sequence of $k$ updates, each of the above bounds are multiplied by $k$. As a variant, if we have a parameter $N$ bounding the size of each underlying tree, then we can choose to let all top roots be on the same level $\Theta(\log(N/s))$.

8.2 High degree top trees

Top trees of degree two are well described and often used. However, it turns out to be useful to also consider top trees of higher degree $q$, especially for $q=\Theta(\log^{\varepsilon}n)$.

Lemma 9.

Given any $q\ge 2$, one can maintain top trees of degree $q$ and height $O(\log_q n)$. Each expose, link, or cut is handled by $O(1)$ calls to create or destroy and $O(\log_q n)$ calls to split or merge. The operations are identified in $O(q\log_q n)$ time.

Proof.

Given a binary levelled top tree $\mathcal{T}$ of height $O(\log n)$, we can create a $q$-ary levelled top tree $\mathcal{T}'$, where the leaves of $\mathcal{T}'$ are the leaves of $\mathcal{T}$, and where the clusters on level $i$ of $\mathcal{T}'$ are the clusters on level $i\cdot\lfloor\log_2 q\rfloor$ of $\mathcal{T}$. Edges in $\mathcal{T}'$ correspond to paths of length $\lfloor\log_2 q\rfloor$ in $\mathcal{T}$. Thus, given a binary top tree, we may create a $q$-ary top tree bottom-up in linear time.

We may implement link, cut and expose by running the corresponding operation in $\mathcal{T}$. Each cut, link or expose operation will affect clusters on a constant number of root-paths in $\mathcal{T}$. There are thus only $O(\log_q n)$ calls to split or merge of a cluster on a level divisible by $\lfloor\log_2 q\rfloor$. Thus, since each split or merge in $\mathcal{T}'$ corresponds to a split or merge of a cluster in $\mathcal{T}$ whose level is divisible by $\lfloor\log_2 q\rfloor$, we have only $O(\log_q n)$ calls to split and merge in $\mathcal{T}'$.

However, since there are up to $q$ clusters whose parent pointers need to be updated after a merge, the total running time becomes $O(q\log_q n)$. ∎

8.3 Saving space with fat-bottomed top trees

In this section we present a general technique for reducing the space usage of a top tree based data structure to linear. The properties of the technique are captured in the following:

Lemma 10.

Given a top tree data structure of height $h(n)$ that uses $S(n)$ space per cluster, and worst case time $t(n)$ per merge or split.

Suppose that the complete information for a cluster of size $s$, including information that is shared with its children, has total size $O(s+S(n))$ and can be computed directly in time $T(s)$. Suppose further that there exists a function $s(n)=\Omega(S(n))$ of $n$ such that $T(O(s(n)))=O(s(n)+h(n)\,t(n))$.

Then there exists a top tree data structure, maintaining the same information, that uses linear space in total and has update time $O(s(n)+h(n)\,t(n))$ for link, cut, and expose.

Proof.

This follows directly from Theorem 8 by setting $s=s(n)$. Then the top tree will have $O(n/s(n))$ clusters of size at most $O(s(n))$, each using $O(s(n)+S(n))=O(s(n))$ space, so the total size is linear. The time per update follows because the top tree uses $O(h(n))$ merges or splits and $O(1)$ creates and destroys per link, cut, and expose. These take $O(h(n)\,t(n))$ and $O(T(O(s(n))))=O(s(n)+h(n)\,t(n))$ time respectively. ∎

9 A Faster CoverLevel Structure

If we allow ourselves to use bit tricks, we can improve the data structure from Section 4. The main idea is, for some constant $0<\varepsilon<1$, to use top trees of degree $q=\Theta(\log^{\varepsilon}n)$. Such top trees have height $O(\log n/\log\log n)$, and finding the sequence of merges and splits for a given link, cut or expose takes $O(q\log_q n)=O(\log^{1+\varepsilon}n/\log\log n)$ time.

The high-level algorithm makes at most a constant number of calls to link and cut for each insert or delete, so we are fine with the time for these operations. However, we can no longer use Expose to implement Cover, Uncover, and CoverLevel$(v,w)$, as that would take too long.

In this section, we will show how to overcome this limitation by working directly with the underlying tree.

The data

The basic idea is to maintain a buffer with all the $\operatorname{cover}$, $\operatorname{cover}^{-}$, and $\operatorname{cover}^{+}$ values one level up in the tree, in the parent cluster. Since the degree is $q=O(\log^{\varepsilon}n)$, and each value uses at most $O(\log\log n)$ bits, these fit into a constant number of words, and so we can use bit tricks to operate on the values for all children of a node in parallel.

Let $C$ be a cluster with children $C_1,\ldots,C_q$. Since $q\cdot O(\log\log n)=O(\log n)$, we can define the following vectors that each fit into a constant number of words.
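As an illustration of the kind of word-parallel operation this enables, the following sketch (ours) raises all $d$ packed values below a threshold $c$ to $c$, which is the effect of a Cover on the whole buffer, in $O(1)$ word operations. Values are assumed shifted to be non-negative (so that $-1$ becomes $0$) and below $2^{f-1}$:

# Sketch (ours) of a word-parallel buffer update: d cover levels packed into
# one word, f bits per field with the top bit of each field reserved.
def packed_cover(X, c, d, f):
    H = sum(1 << (j * f + f - 1) for j in range(d))  # top bit of every field
    C = sum(c << (j * f) for j in range(d))          # c replicated d times
    ge = ((X | H) - C) & H           # top bit left in field j iff x_j >= c
    keep = (ge - (ge >> (f - 1))) | ge               # full-field mask: x_j >= c
    return (X & keep) | (C & ~keep)  # keep large fields, replace small by c

# Tiny check against the naive loop:
d, f, c = 4, 5, 7
xs = [3, 9, 0, 7]
X = sum(x << (j * f) for j, x in enumerate(xs))
Y = packed_cover(X, c, d, f)
assert [(Y >> (j * f)) & ((1 << f) - 1) for j in range(d)] == [max(x, c) for x in xs]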