Fully-Dynamic Minimum Spanning Forest with Improved Worst-Case Update Time


Christian Wulff-Nilsen (Department of Computer Science, University of Copenhagen, koolooz@di.ku.dk, http://www.diku.dk/koolooz/)
Abstract

We give a Las Vegas data structure which maintains a minimum spanning forest in an $n$-vertex edge-weighted dynamic graph undergoing updates consisting of any mixture of edge insertions and deletions. Each update is supported in $O(n^{1/2-c})$ expected worst-case time for some constant $c > 0$ and this worst-case bound holds with probability at least $1 - n^{-d}$ where $d$ is a constant that can be made arbitrarily large. This is the first data structure achieving an improvement over the deterministic $O(\sqrt n)$ worst-case update time of Eppstein et al., a bound that has been standing for nearly $25$ years. In fact, it was previously not even known how to maintain a spanning forest of an unweighted graph in worst-case time polynomially faster than $\Theta(\sqrt n)$. Our result is achieved by first giving a reduction from fully-dynamic to decremental minimum spanning forest preserving worst-case update time up to logarithmic factors. Then decremental minimum spanning forest is solved using several novel techniques, one of which involves keeping track of low-conductance cuts in a dynamic graph. An immediate corollary of our result is the first Las Vegas data structure for fully-dynamic connectivity where each update is handled in worst-case time polynomially faster than $\Theta(\sqrt n)$ w.h.p.; this data structure has constant worst-case query time.

1 Introduction

A minimum spanning forest (MSF) of an edge-weighted undirected graph $G$ is a forest consisting of MSTs of the connected components of $G$. Dynamic MSF is one of the most fundamental dynamic graph problems with a history spanning more than three decades. Given a graph $G = (V, E)$ with a set $V$ of $n$ vertices and an initially empty set $E$ of edges, a data structure for this problem maintains an MSF $M$ of $G$ under two types of updates to $G$, namely the insertion or the deletion of an edge in $G$. After each update to $G$, the data structure needs to respond with the updates to $M$, if any.

An MSF of a graph with $m$ edges and $n$ vertices can be computed in $O(m\,\alpha(m, n))$ deterministic time [2] and in $O(m)$ randomized expected time [13]. Hence, each update can be handled within either of these time bounds by recomputing an MSF from scratch after each edge insertion or deletion. By exploiting the fact that the change to the dynamic graph is small in each update, better update time can be achieved.

The first non-trivial data structure for fully-dynamic MSF was due to Frederickson [4] who achieved $O(\sqrt m)$ deterministic worst-case update time where $m$ is the number of edges in the graph at the time of the update. Using the sparsification technique, Eppstein et al. [3] improved this to $O(\sqrt n)$ where $n$ is the number of vertices.

Faster amortized update time bounds exist. Henzinger and King [8] showed how to maintain an MSF in $O(\log^3 n)$ amortized expected update time in the restricted setting where the number of distinct edge weights is constant. The same authors later showed how to solve the general problem using $O(\sqrt[3]{n}\log n)$ amortized update time [7]. Holm et al. [9] presented a data structure for fully-dynamic connectivity with $O(\log^2 n)$ amortized update time and showed how it can easily be adapted to handle decremental (i.e., deletions only) MSF within the same time bound. They also gave a variant of a reduction of Henzinger and King [6] from fully-dynamic to decremental MSF and combining these results, they obtained a data structure for fully-dynamic MSF with $O(\log^4 n)$ amortized update time. This bound was slightly improved to $O(\log^4 n/\log\log n)$ in [10]. A lower bound of $\Omega(\log n)$ was shown in [17] and this bound holds even for just maintaining the weight of an MSF in a plane graph with unit weights.

1.1 Our results

In this paper, we give a fully-dynamic MSF data structure with a polynomial speed-up over the $O(\sqrt n)$ worst-case time bound of Eppstein et al. Our data structure is Las Vegas, always correctly maintaining an MSF and achieving the polynomial speed-up w.h.p. in each update. The following theorem states our main result.

Theorem 1.

There is a Las Vegas data structure for fully-dynamic MSF which for an $n$-vertex graph has an expected update time of $O(n^{1/2-c})$ for some constant $c > 0$; in each update, this bound holds in the worst-case with probability at least $1 - n^{-d}$ for a constant $d$ that can be made arbitrarily large.

We have not calculated the precise value of the constant $c$ but it is quite small. From a theoretical perspective however, the $O(\sqrt n)$ bound is an important barrier to break. Furthermore, a polynomial speed-up is beyond what can be achieved using word parallelism alone unless we allow a word size polynomial in $n$. Indeed, our improvement does not rely on a more powerful model of computation than what is assumed in previous papers. To get our result, we develop several new tools some of which we believe could be of independent interest. We sketch these tools later in this section.

As is the case for all randomized algorithms and data structures, it is important that the random bits used are not revealed to an adversary. It is well-known that if all edge weights in a graph are unique, its MSF is uniquely defined. Uniqueness of edge weights can always be achieved using some lexicographical ordering in case of ties. This way, our data structure can safely reveal the MSF after each update without revealing any information about the random bits used.

Dynamic connectivity:

An immediate corollary of our result is a fully-dynamic data structure for maintaining a spanning forest of an unweighted graph in $O(n^{1/2-c})$ worst-case time with high probability. The previous best worst-case bound for this problem was $O(\sqrt n)$ by Eppstein et al. [3]; if word-parallelism is exploited, a slightly better bound of $O(\sqrt{n(\log\log n)^2/\log n})$ was shown by Kejlberg-Rasmussen et al. [15]. There are Monte Carlo data structures for fully-dynamic connectivity by Kapron et al. [11] and by Gibb et al. [5] which internally maintain a spanning forest in polylogarithmic time per update. However, contrary to our data structure, these structures cannot reveal the spanning forest to an adversary. Kapron et al. extend their result to maintaining an MSF in $\tilde O(k)$ time per update (we use $\tilde O$, $\tilde\Omega$, and $\tilde\Theta$ when suppressing $\log$-factors) where $k$ is the number of distinct weights. However, their data structure can only reveal the weight of this MSF. Furthermore, if all edge weights are unique, this bound becomes $\tilde O(m)$.

From our main result, we also immediately get the first Las Vegas fully-dynamic connectivity structure achieving w.h.p. a worst-case update time polynomially faster than $\Theta(\sqrt n)$, improving the previous best Las Vegas bounds of Eppstein et al. [3] and Kejlberg-Rasmussen et al. [15]. By maintaining the spanning forest using a standard dynamic tree data structure with polynomial fan-out, our connectivity structure achieves constant worst-case query time.

Monte Carlo data structure:

It is easy to modify our Las Vegas structure into a Monte Carlo structure which is guaranteed to handle each update in $O(n^{1/2-c})$ worst-case time. This is done by simply terminating an update if the time bound is exceeded by some constant factor $a$. By picking $a$ sufficiently large, we can ensure that this termination happens only with low probability in each update. An issue here is that once the Monte Carlo structure makes an error, subsequent updates are very likely to also maintain an incorrect MSF. This can be remedied somewhat by periodically rebuilding new MSF structures so that after a small number of updates, the data structure again maintains a correct MSF with high probability; we omit the details as our focus is on obtaining a Las Vegas structure.

1.2 High-level description and overview of paper

In the rest of this section, we give an overview of our data structure as well as how the paper is organized. The description of our data structure here will not be completely accurate and we only highlight the main ideas.

Section 2 introduces some definitions and notation that will be used throughout the paper.

Restricted Decremental MSF Structure (Section 3)

In Section 3, we present a data structure for a restricted version of decremental MSF where the initial graph has max degree at most $3$ and where there is a bound $T$ on the total number of edge deletions, where $T$ may be smaller than the initial number of edges.

The data structure maintains a recursive clustering of the dynamic graph $G$ where each cluster is a subgraph of $G$. This clustering forms a laminar family (w.r.t. subgraph containment) and can be represented as a rooted tree $\mathcal T$ where the root corresponds to the entire graph $G$; for technical reasons, we refer to the root as a level $1$-cluster and the children of a level $i$-cluster are referred to as level $(i+1)$-clusters. The decremental MSF structure of Holm et al. [9] also maintains a recursive clustering but ours differs significantly from theirs, as will become clear.

In our recursive clustering, the vertex sets of the level $i$-clusters form a partition of a subset of $V$ and w.h.p., each level $i$-cluster is an expander graph and the number of inter-cluster edges is small. More specifically, the expansion factor of each expander graph is of the form $1/n^{\epsilon_1}$ and the number of inter-cluster edges is at most $n^{1-\epsilon_2}$ for some small positive constants $\epsilon_1$ and $\epsilon_2$. Such a partition is formed with a new algorithm that we present in Section 6.

Next, consider a list of the edges of $G$ sorted by decreasing weight. This list is partitioned into sublists each of size $n^{1-\epsilon}$ for some small constant $\epsilon > 0$. These sublists correspond to suitable subsets $E_1, E_2, \ldots$ ordered by decreasing weight.

Each level $i$-cluster $C$ contains only edges from $E_i \cup E_{i+1} \cup \cdots$. To form the children of $C$ in $\mathcal T$, we remove from $C$ the edges in $E_i$ and partition the remaining graph into expander graphs as above; these expander graphs are then the children of $C$. The recursion stops when a cluster has size polynomially smaller than $n$.

Next, we form a new graph $G'$ from $G$ as follows. Initially, $G' = G$. For each $i$ and for each level $i$-cluster $C$, all the edges of $E_i$ between distinct child clusters of $C$ are added to an auxiliary structure $\mathcal F$ that we describe below. In $G'$, their edge weights are artificially increased to a value which is smaller than the increased weight of any edge added at a level less than $i$ and heavier than the weight of any edge of a child cluster of $C$ in $G'$. The edges added to $\mathcal F$ keep their original weights. An example is shown in Figure 1.

Now, we have an auxiliary structure $\mathcal F$ containing a certain subset of edges of $G$ and a recursive clustering of $G'$. Because of the way we defined edge weights in $G'$, an MSF $M'$ of this graph has the nice property that it is consistent with the recursive clustering: for any cluster $C$, $M'$ restricted to $C$ is an MSF of $C$. This could also have been achieved if we had simply deleted the edges from $G'$ whose weights were artificially increased above; however, it is important to keep them in $G'$ in order to preserve the property that clusters are expander graphs.

Assuming for now that clusters do not become disconnected during updates, it follows from this property that we can maintain $M'$ by maintaining an MSF for each level independently where level $(i+1)$-clusters are regarded as vertices of the MSF at level $i$. The global MSF $M'$ is then simply the union of (the edges of) these MSFs. Each edge deletion in $G'$ only requires an MSF at one level to be updated and we show that the number of edges at this level is polynomially smaller than $n$, allowing us to maintain $M'$ in time polynomially faster than $\sqrt n$.

We add the edges of $M'$ to $\mathcal F$. In order to maintain an MSF $M$ of $G$, we show that it can be maintained as an MSF of the edges added to $\mathcal F$. This follows easily from observations similar to those of Eppstein et al. [3] combined with the fact that any edge whose weight was increased in $G'$ belongs to $\mathcal F$ with its original weight. We show that the number of non-tree edges in the graph maintained by $\mathcal F$ is polynomially smaller than $n$. $\mathcal F$ is an instance of a new data structure (Section 5) which maintains an MSF of a graph in $\tilde O(\sqrt k)$ worst-case time per update where $k$ is an upper bound on the number of non-tree edges ever present in the graph. Hence, maintaining $M$ can be done in time polynomially faster than $\sqrt n$.

The main obstacle to overcome is to handle disconnected clusters. If a level $i$-cluster becomes disconnected, this may affect the MSF at level $i$ and changes can propagate all the way down to the bottom level (similar to what happens in the data structure in [9]). Our analysis sketched above then breaks down. However, this is where we exploit the fact that w.h.p., each cluster is initially an expander graph. This implies that, assuming the total number of edge deletions is not too big, a cluster can only become disconnected along a cut where one side is small.

Whenever an edge has been deleted from a cluster $C$, a data structure (Sections 7, 8, and 9) is applied which “prunes” off parts of $C$ so that w.h.p., the pruned $C$ remains an expander graph. Because of the property above, only small parts need to be pruned off. As we show, this can be handled efficiently for $T$ slightly bigger than $\sqrt n$. With a reduction (Section 4) from fully-dynamic MSF to the restricted decremental MSF problem with this value of $T$, the main result of the paper follows.

Reduction to decremental MSF (Section 4)

In Section 4, we give a reduction from fully-dynamic MSF to a restricted version of decremental MSF where the initial $n$-vertex graph has degree at most $3$ and where the total number of edge deletions allowed is bounded by a parameter $T$. The reduction is worst-case time-preserving, meaning roughly that if we have a data structure for the restricted decremental MSF problem with small worst-case update time then we also get a data structure for fully-dynamic MSF with small worst-case update time. This is not the case for the reduction presented in [9] since it only ensures small amortized update time for the fully-dynamic structure.

More precisely, our reduction states that if the data structure for the restricted decremental problem has preprocessing time $P(n)$ and worst-case update time $U(n)$ then there is a fully-dynamic structure with worst-case update time $\tilde O(\sqrt T + P(n)/T + U(n))$.

To get this result, we modify the reduction of Holm et al. [9]. In their reduction, decremental structures (which do not have a bound of $T$ on the total number of edge deletions) are maintained. During updates, new decremental structures are added and other decremental structures are merged together. The main reason why this reduction is not worst-case time-preserving is that a merge is done during a single update and this may take up to linear time.

We modify the reduction using a fairly standard deamortization trick of spreading the work of merging decremental structures over multiple updates. This gives the desired worst-case time-preserving reduction from fully-dynamic to decremental MSF. We then show how to further reduce the problem to the restricted variant considered in Section 3.
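
To make the deamortization trick concrete, the following Python sketch spreads a rebuild of total cost roughly $P$ over $T$ updates by performing about $P/T$ units of work per update while the old structure keeps serving; it is a generic illustration of the technique, not the paper's actual reduction, and all identifiers are hypothetical.

```python
class GradualRebuild:
    """Hypothetical sketch: spread a rebuild of total cost ~P over T updates.

    `build_steps` is an iterator performing one unit of (re)build work per
    call to next(); `chunk` should be about P/T so the build finishes within
    T updates. The old structure keeps answering queries during the build."""

    def __init__(self, build_steps, chunk):
        self.build_steps = build_steps
        self.chunk = chunk
        self.buffered_deletions = []   # deletions arriving mid-build
        self.finished = False

    def on_update(self, deletion=None):
        if deletion is not None:
            # The old structure handles the deletion now; remember it so it
            # can be replayed into the new structure before the swap.
            self.buffered_deletions.append(deletion)
        for _ in range(self.chunk):
            if next(self.build_steps, None) is None:
                self.finished = True   # replay buffered deletions, then swap
                break
```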

Fully-dynamic MSF with few non-tree edges (Section 5)

In Section 5, we present a fully-dynamic MSF structure which has an update time of $\tilde O(\sqrt k)$ where $k$ is an upper bound on the number of non-tree edges ever present in the graph. At a high level, this structure is similar to that of Frederickson [4] in that it maintains a clustering of each tree of the MSF $M$ into subtrees of roughly the same size. However, because of the bound on the number of non-tree edges, we can represent $M$ in a more compact way as follows. Consider the subforest of $M$ formed by the union of all paths in $M$ between endpoints of non-tree edges. In this subforest, consider all maximal paths whose interior vertices have degree $2$. The compact representation is obtained by replacing each such path by a single “super edge”; see Figure 4. The compact version of $M$ only has size $O(k)$.
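
The following Python sketch illustrates one straightforward way to compute this compact representation (the paper maintains it dynamically with top trees; here the forest is assumed to be given as an adjacency dictionary and `terminals` as the set of endpoints of non-tree edges):

```python
def compact_representation(forest_adj, terminals):
    """Sketch: prune non-terminal leaves to obtain the union of all
    terminal-to-terminal paths, then replace each maximal chain whose
    interior vertices have degree 2 by a single super edge.
    Returns super edges as (endpoint, endpoint, contracted path) triples."""
    deg = {v: len(nbrs) for v, nbrs in forest_adj.items()}
    alive = set(forest_adj)
    leaves = [v for v in alive if deg[v] <= 1 and v not in terminals]
    while leaves:                       # step 1: prune non-terminal leaves
        v = leaves.pop()
        if v not in alive:
            continue
        alive.discard(v)
        for u in forest_adj[v]:
            if u in alive:
                deg[u] -= 1
                if deg[u] <= 1 and u not in terminals:
                    leaves.append(u)
    # step 2: keep terminals and branch vertices; contract degree-2 chains
    keep = {v for v in alive if v in terminals or deg[v] >= 3}
    super_edges, visited = [], set()
    for s in keep:
        for t in forest_adj[s]:
            if t not in alive or (s, t) in visited:
                continue
            path, prev, cur = [s], s, t
            while cur not in keep:      # interior vertex of a chain
                path.append(cur)
                prev, cur = cur, next(x for x in forest_adj[cur]
                                      if x != prev and x in alive)
            path.append(cur)
            visited.add((cur, prev))    # avoid walking the chain twice
            super_edges.append((s, cur, path))
    return super_edges
```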

The update time for Frederickson’s structure is bounded by the maximum of the number of clusters and the size of each cluster, so to get the $O(\sqrt m)$ bound, his structure maintains $O(\sqrt m)$ clusters each of size $O(\sqrt m)$. We use essentially the same type of clustering as Frederickson but for the compact representation of $M$, giving $O(\sqrt k)$ clusters each of size $O(\sqrt k)$. Using a data structure similar to Frederickson’s for the compact clustering, we show that $M$ can be maintained in $\tilde O(\sqrt k)$ worst-case time per update. Here we get some additional log-factors since we make use of the top tree data structure in [1] to maintain, e.g., the compact representation of $M$.

Partitioning a graph into expander subgraphs (Section 6)

In Section 6, we present a near-linear time algorithm to partition the vertex set of an $n$-vertex constant-degree graph such that w.h.p., each set in this partition induces a $1/n^{\epsilon_1}$-expander graph and the number of edges between distinct sets is $O(n^{1-\epsilon_2})$ for suitable positive constants $\epsilon_1$ and $\epsilon_2$. The algorithm is a recursive variant of the Partition algorithm of Spielman and Teng [18].

For our application of this result in Section 3, we need each expander graph to respect a given partition $\mathcal P$ of the vertex set, meaning that each set of $\mathcal P$ is either contained in or disjoint from the vertex set of the expander graph. Ensuring this is a main technical challenge in this section.
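
The recursive decomposition pattern can be sketched as follows (this is only the generic skeleton shared by Spielman–Teng-style partitioning, not the paper's algorithm; `find_sparse_cut` is a hypothetical oracle, and the additional requirement of respecting $\mathcal P$ is ignored here):

```python
def expander_decompose(g, phi, find_sparse_cut):
    """Recursively split off low-conductance cuts until every remaining
    part is certified as a (near-)phi-expander. `g` maps each vertex to a
    set of neighbours; `find_sparse_cut(sub, phi)` returns the smaller side
    of a cut of conductance < phi, or None if no such cut is found."""
    result = []
    stack = [frozenset(g)]
    while stack:
        part = stack.pop()
        sub = {v: g[v] & part for v in part}   # induced subgraph
        cut = find_sparse_cut(sub, phi)
        if cut is None:
            result.append(part)                # certified (near-)expander
        else:
            stack.append(frozenset(cut))       # cut edges become
            stack.append(part - frozenset(cut))  # inter-piece edges
    return result
```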

Decremental Maintenance of Expander Graphs (Sections 7, 8, and 9)

In Section 7, we present a decremental data structure which, given an initial expander graph of constant max degree (such as one from Section 3), outputs after each update a subset of vertices such that at any point, there exists a subset $X$ of the set of vertices output so far so that the graph minus $X$ is guaranteed to be connected; furthermore, w.h.p., the set output in each update is small. As we show, this is exactly what is needed in Section 3 where we require clusters to be connected at all times and where the part pruned off each cluster must be small in each update.

This data structure relies on a procedure in Section 9 which we refer to as XPrune. It detects low-conductance cuts in a decremental graph (which is initially an expander graph) and prunes off the smaller side of such a cut while retaining the larger side.

XPrune uses as a subroutine the procedure Nibble of Spielman and Teng [18]. Given a starting vertex $v$ in a (static) graph, Nibble computes (approximate) probability distributions for a number of steps in a random walk from $v$. For each step, Nibble attempts to identify a low-conductance cut based on the probability mass currently assigned to each vertex. Spielman and Teng show that if the graph has a low-conductance cut then Nibble will find such a cut for at least one choice of $v$.

In Section 9, we show how to adapt Nibble from a static to a decremental setting roughly as follows. In the preprocessing step, Nibble is started from every vertex in the graph and if a low-conductance cut is found, the smaller side is pruned off. Now, consider an update consisting of the deletion of an edge $e$. We cannot afford to rerun Nibble from every vertex as in the preprocessing step. Instead we show that there is only a small set of starting vertices for which Nibble will have a different execution due to the deletion of $e$. We only run Nibble from starting vertices in this small set; these vertices can easily be identified since they are exactly those for which Nibble in some step sends a non-zero amount of probability mass along $e$ in the graph just prior to the deletion.
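
The following Python sketch shows the kind of truncated lazy random-walk step at the core of Nibble and the resulting test for which start vertices are affected by an edge deletion; the exact truncation rule of [18] differs in its constants, and all names here are ours:

```python
def truncated_step(adj, p, eps):
    """One lazy random-walk step with truncation (sketch). `adj` maps each
    vertex to a list of neighbours, `p` maps vertices to probability mass.
    Keep half the mass in place, spread half over incident edges, then drop
    entries below eps times the degree."""
    q = {}
    for v, mass in p.items():
        d = len(adj[v])
        q[v] = q.get(v, 0.0) + mass / 2.0
        for u in adj[v]:
            q[u] = q.get(u, 0.0) + mass / (2.0 * d)
    return {v: m for v, m in q.items() if m >= eps * len(adj[v])}

def walk_touches_edge(adj, source, steps, eps, edge):
    """Does the truncated walk started at `source` ever send mass along
    `edge`? If not, its Nibble execution is unchanged when `edge` is
    deleted -- the identification used above."""
    u, v = edge
    p = {source: 1.0}
    for _ in range(steps):
        if (u in p and v in adj[u]) or (v in p and u in adj[v]):
            return True
        p = truncated_step(adj, p, eps)
    return False
```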

Hence, we implicitly run Nibble from every starting vertex after each edge deletion so if there is a low-conductance cut, XPrune is guaranteed to find such a cut. When the smaller side of a cut is pruned off, a similar argument as sketched above implies that Nibble only needs to be rerun from a small number of starting vertices on the larger side.

In order to have XPrune run fast enough, we need an additional trick which is presented in Section 8. Here we show that w.h.p., the conductance of every cut in a given multigraph is approximately preserved in a subgraph obtained by sampling each edge independently with probability $p$; this assumes that $p$ and the min degree of the original graph are not too small. This is somewhat similar to Karger’s result that the value of each cut is approximately preserved in a sampled subgraph [12]. We make use of this new result in Section 9 where we run Nibble on the sampled subgraph rather than the full graph. Combined with the above implicit maintenance of calls to Nibble, this gives the desired performance of XPrune.
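
The sampling step itself is elementary; the content of Section 8 is the analysis, not the sampler. A minimal sketch:

```python
import random

def sample_edges(edges, p, seed=0):
    """Keep each edge independently with probability p (sketch). Under the
    assumptions of Section 8 (p and the min degree not too small), w.h.p.
    the conductance of every cut is approximately preserved after the
    natural rescaling by p, in the spirit of Karger's cut sampling [12]."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < p]
```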

We conclude the paper in Section 10.

2 Preliminaries

We consider only finite undirected graphs and unless otherwise stated, they are simple. An edge-weighted graph is written in the form $G = (V, E, w)$ where $w\colon E \to \mathbb{R}$; we sometimes simply write $G = (V, E)$ even if $G$ is edge-weighted.

For a simple graph or a multigraph $G$, $V(G)$ denotes its vertex set and $E(G)$ denotes its edge set. If $G$ is edge-weighted, we regard any subset $E'$ of $E(G)$ as a set of weighted edges and if the edge weight function for $E'$ is not clear from context, we write $E'_w$ instead of $E'$ to indicate that weights are w.r.t. $w$. We sometimes abuse notation and regard $E'$ as a graph with edge set $E'$ and vertex set consisting of the endpoints of edges in $E'$. When convenient, we regard the edge set of a minor of $G$ as a subset of $E(G)$ in the natural way.

Given two edge-weighted graphs $G_1 = (V_1, E_1, w_1)$ and $G_2 = (V_2, E_2, w_2)$, we let $G_1 \cup G_2$ denote the multigraph with vertex set $V_1 \cup V_2$ and edge set $E_1 \cup E_2$; if both $E_1$ and $E_2$ contain an edge between the same vertex pair $(u, v)$, we keep both edges in $G_1 \cup G_2$, one having weight $w_1(u, v)$ and the other having weight $w_2(u, v)$.

In the rest of this section, let $G = (V, E, w)$ be an edge-weighted graph. A component of $G$ is a connected component of $G$ and we sometimes regard it as a subset of $V$. For $U \subseteq V$, $G[U]$ is the subgraph of $G$ induced by $U$. When $G$ is clear from context, we say that $U$ respects another subset $U'$ of $V$ if either $U' \subseteq U$ or $U' \cap U = \emptyset$. We extend this to a collection $\mathcal P$ of subsets of $V$ and say that $U$ respects $\mathcal P$ if $U$ respects each set in $\mathcal P$; in this case, we let $\mathcal P(U)$ denote the collection of sets of $\mathcal P$ that are contained in $U$. For a subgraph $H$ of $G$, we say that $H$ respects $U$ resp. $\mathcal P$ if $V(H)$ respects $U$ resp. $\mathcal P$.

A cut of $G$ or of $V$ is a pair $(S, V \setminus S)$ such that $S \neq \emptyset$ and $S \neq V$. When $G$ resp. $V$ is clear from context, we identify a cut $(S, V \setminus S)$ with $S$ or with $V \setminus S$.

For a subset $S$ of $V$, denote by $\partial_G(S)$ the number of edges of $G$ crossing the cut $(S, V \setminus S)$, i.e., the number of edges with exactly one endpoint in $S$. The volume $\mathrm{vol}_G(S)$ of $S$ in $G$ is the number of edges of $G$ incident to vertices of $S$. Assuming both $S$ and $V \setminus S$ have positive volume in $G$, the conductance of $S$ (or of the cut $(S, V \setminus S)$) is defined as $\Phi_G(S) = \partial_G(S)/\min\{\mathrm{vol}_G(S), \mathrm{vol}_G(V \setminus S)\}$ (this is called sparsity in [18]). When $G$ is clear from context, we define, for $S \subseteq V$, $\partial(S) = \partial_G(S)$, $\mathrm{vol}(S) = \mathrm{vol}_G(S)$, and $\Phi(S) = \Phi_G(S)$. We extend the definitions in this paragraph to multigraphs in the natural way.
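
As a concrete reading of these definitions, the following Python snippet computes the conductance of a cut in a (multi)graph given as an adjacency structure (parallel edges represented by repeated list entries):

```python
def conductance(adj, S):
    """Conductance of the cut (S, V \\ S): boundary(S) / min(vol(S), vol(V \\ S)).
    `adj` maps each vertex to a list of neighbours (repeats = parallel edges)."""
    S = set(S)
    boundary = sum(1 for v in S for u in adj[v] if u not in S)
    vol_S = sum(len(adj[v]) for v in S)
    vol_co = sum(len(adj[v]) for v in adj) - vol_S
    if min(vol_S, vol_co) == 0:
        raise ValueError("both sides of the cut must have positive volume")
    return boundary / min(vol_S, vol_co)
```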

Given a real value $\phi > 0$, we say that $G$ is a $\phi$-expander graph and that $G$ has expansion $\phi$ if for every cut $(S, V \setminus S)$, $\partial(S) \geq \phi \cdot \min\{|S|, |V \setminus S|\}$. Note that if $G$ is connected and has constant degree then $\mathrm{vol}(S) = \Theta(|S|)$ for every such $S$; thus, in this special case, $G$ has expansion $\Omega(\phi)$ iff every such cut has conductance $\Omega(\phi)$.

We let $M(G)$ resp. $\mathit{MST}(G)$ denote an MSF resp. MST of $G$; in case this forest resp. tree is not unique, we choose the MSF resp. MST that has minimum weight w.r.t. some lexicographical ordering of edge weights. For instance, consider assigning a unique index between $1$ and $n$ to each vertex. If two distinct edges $e$ and $f$ have the same weight, we regard $e$ as being cheaper than $f$ iff the index pair corresponding to $e$ is lexicographically smaller than the index pair corresponding to $f$. We extend $M(G)$ and $\mathit{MST}(G)$ to the case where $G$ is a multigraph.
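
For concreteness, one way to realize this tie-breaking in code (a sketch; vertices are assumed to carry the unique indices just described) is to compare edges by the key below, which makes sorting and minimum-taking over edges deterministic and hence the MSF unique:

```python
def edge_key(e):
    """Total order on weighted edges (u, v, w): weight first, then the
    lexicographically ordered endpoint-index pair breaks ties."""
    u, v, w = e
    return (w, min(u, v), max(u, v))

# Example: sorted(edges, key=edge_key) is the processing order a
# Kruskal-type algorithm would use under this tie-breaking.
```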

The fully-dynamic MSF problem is the problem of maintaining an MSF $M$ of an $n$-vertex edge-weighted dynamic simple graph $G$ under updates where each update is either the insertion or the deletion of a single edge. Initially, $G$ contains no edges.

The following is well-known and easy to show for the dynamic MSF problem. When an edge $e = (u, v)$ is inserted into $G$, $e$ becomes a new tree edge (of $M$) if it connects two distinct trees in $M$. If $e$ has both endpoints in the same tree $T$ of $M$, it becomes a tree edge if the heaviest edge $f$ on the $u$-to-$v$ path in $T$ has weight greater than $w(e)$, in which case $f$ becomes a non-tree edge; otherwise $e$ becomes a non-tree edge. No other changes happen to $M$. After such an insertion, a data structure for the problem should report whether $e$ becomes a tree edge and if so, it should report $f$ if it exists.

When an edge $e = (u, v)$ is deleted, if $e$ is a non-tree edge, no updates occur in $M$. Otherwise, $M$ is correctly updated by adding a cheapest reconnecting edge (if any) for the two new trees of $M - e$ containing $u$ and $v$, respectively. The data structure should report such an edge if it exists.
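
These update rules are summarized by the following deliberately naive Python reference implementation (linear scans per update, for exposition only; the whole point of the paper is to support exactly these rules much faster):

```python
class NaiveDynamicMSF:
    """O(m)-time-per-update reference implementation of the rules above."""

    def __init__(self):
        self.tree = {}      # forest M as adjacency: v -> {u: weight}
        self.nontree = {}   # non-tree edges: frozenset({u, v}) -> weight

    def _link(self, u, v, w):
        self.tree.setdefault(u, {})[v] = w
        self.tree.setdefault(v, {})[u] = w

    def _path(self, u, v):
        """Return the u-to-v path in M as (parent, child) edges, or None."""
        stack, parent = [u], {u: None}
        while stack:
            x = stack.pop()
            if x == v:
                path, e = [], v
                while parent[e] is not None:
                    path.append((parent[e], e)); e = parent[e]
                return path
            for y in self.tree.get(x, {}):
                if y not in parent:
                    parent[y] = x; stack.append(y)
        return None

    def _component(self, s):
        seen, stack = {s}, [s]
        while stack:
            x = stack.pop()
            for y in self.tree.get(x, {}):
                if y not in seen:
                    seen.add(y); stack.append(y)
        return seen

    def insert(self, u, v, w):
        path = self._path(u, v)
        if path is None:                     # connects two distinct trees
            self._link(u, v, w); return
        a, b = max(path, key=lambda e: self.tree[e[0]][e[1]])
        if self.tree[a][b] > w:              # e replaces heaviest path edge f
            self.nontree[frozenset((a, b))] = self.tree[a][b]
            del self.tree[a][b]; del self.tree[b][a]
            self._link(u, v, w)
        else:                                # e becomes a non-tree edge
            self.nontree[frozenset((u, v))] = w

    def delete(self, u, v):
        key = frozenset((u, v))
        if key in self.nontree:              # non-tree edge: M unchanged
            del self.nontree[key]; return
        del self.tree[u][v]; del self.tree[v][u]
        side, best = self._component(u), None
        for e, w in self.nontree.items():    # cheapest reconnecting edge
            x, y = tuple(e)
            if (x in side) != (y in side) and (best is None or w < best[2]):
                best = (x, y, w)
        if best:
            x, y, w = best
            del self.nontree[frozenset((x, y))]; self._link(x, y, w)
```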

Decremental MSF is the same problem as fully-dynamic MSF except that we only permit edge deletions; here we have an initial graph with an initial MSF and we allow a preprocessing step (which in particular needs to compute the initial MSF). Both fully-dynamic and decremental MSF extend to multigraphs but unless otherwise stated, we consider these problems for simple graphs. When convenient, we identify a fully-dynamic or a decremental MSF structure with the dynamic graph that it maintains an MSF of.

Our data structure uses the top tree structure of Alstrup et al. [1]. We assume that the reader is familiar with this structure, including concepts like top tree clusters and top tree operations like create, join, split, link, and cut.

We shall assume the Word-RAM model of computation with standard operations where each word consists of $\Theta(\log n)$ bits plus extra bits (if needed) to store the weight of an edge. We use this model to get a cleaner description of our data structure; with only a logarithmic overhead, our time bounds also apply for a pointer machine having the same word size and the same operations as in the Word-RAM model.

We use the notation $\tilde O$, $\tilde\Omega$, and $\tilde\Theta$ when suppressing a factor of $\log^{O(1)} n$ or $\log^{O(1)} m$ so that, e.g., a function $f(n)$ is $\tilde O(g(n))$ if $f(n) \leq a\, g(n) \log^b n$ for some constants $a$ and $b$.

3 Restricted Decremental MSF Structure

In this section, we present our data structure for a restricted version of decremental MSF where for an $n$-vertex graph, the total number of edge deletions allowed is upper bounded by a parameter $T$. The following theorem, whose proof can be found in Section 4, will imply that this suffices to obtain our fully-dynamic MSF structure.

Theorem 2.

Let a decremental MSF structure be given which for an $n$-vertex graph of max degree at most $3$ and for functions $P$ and $U$ has preprocessing time at most $P(n)$ and supports up to $T$ edge deletions each in worst-case time at most $U(n)$. Then there is a fully-dynamic MSF structure which for an $n$-vertex dynamic graph has worst-case update time $\tilde O(\sqrt T + P(n)/T + U(n))$. If for the decremental structure the preprocessing time and update time bounds hold w.h.p. then in each update, w.h.p. the fully-dynamic structure spends no more than $\tilde O(\sqrt T + P(n)/T + U(n))$ worst-case time.

We shall specify $T$ later but it will be chosen slightly bigger than $\sqrt n$. Parts of the structure are regarded as black boxes here and will be presented in detail in later sections. We assume that the input graph $G$ has max degree at most $3$ and we will give a data structure with update time polynomially less than $\sqrt n$. In the following, we let $M$ denote the decremental MSF of $G$ that our data structure should maintain.

A key invariant of our data structure is that it maintains a subgraph of $G$ having the same MSF as $G$ but having polynomially less than $n$ non-tree edges at all times. This allows us to apply the data structure of the following theorem whose proof is delayed until Section 5.

Theorem 3.

Let $G$ be a dynamic $n$-vertex graph undergoing insertions and deletions of weighted edges where the initial edge set need not be empty and where the number of non-tree edges never exceeds the value $k$. Then there is a data structure which after $\tilde O(n + m)$ worst-case preprocessing time, where $m$ is the initial number of edges, can maintain $M(G)$ in $\tilde O(\sqrt k)$ worst-case time per update where an update is either the insertion or the deletion of an edge in $G$ or a batched insertion of up to $k$ edges in $G$, assuming this batched insertion does not change $M(G)$.

The data structure in Theorem 3 is at a high level similar to those of Frederickson [4] and Eppstein et al. [3] and for this reason, we shall refer to each instance of it as an FFE structure (Fast Frederickson/Eppstein et al.); individual instances will be named as they are introduced.

3.1 Preprocessing

Let $\epsilon$ be some small positive constant which will be specified later; for now, we only require it to be chosen such that $n^{1-\epsilon}$ is an integer that divides $n$. In the first part of the preprocessing, we sort the weights of edges of the initial graph in non-decreasing order and assign a rank to each edge between $1$ and $m$ according to this order, i.e., the edge of rank $1$ has minimum weight and the edge of rank $m$ has maximum weight. We redefine $w$ such that $w(e)$ equals the rank of each edge $e$. An MSF w.r.t. these new weights is also an MSF w.r.t. the original weights and uniqueness of edge weights implies uniqueness of $M$. In particular, $M$ does not reveal any information about the random bits used by our data structure so we may assume that the sequence of edge deletions in $G$ is independent of these bits.

We compute the initial MSF $M$ using Prim’s algorithm implemented with binary heaps. (We could have chosen the faster MSF algorithm in [2] but it is more complicated and will not improve the overall performance of our data structure.) It will be convenient to assume that each component of the initial graph contains at least $3z - 2$ vertices, where $z$ is the cluster-size parameter introduced below. This can be done w.l.o.g. since we can apply the data structure of Eppstein et al. for every other component, requiring a worst-case update time of $O(\sqrt z)$ which is polynomially less than $\sqrt n$.

Next, Frederickson’s FINDCLUSTERS procedure [4] is applied to $M$ with cluster-size parameter $z = \lceil n/T \rceil$, giving a partition of $V$ into subsets each of size between $z$ and $3z - 2$ and each inducing a subtree of $M$; here we use the fact that $G$ and hence $M$ has degree at most $3$. Let $\mathcal R$ denote the collection of these subsets. For each $R \in \mathcal R$, we refer to $M(R)$ as an $\mathcal R$-cluster. We denote by $E_{\mathcal R}$ the union of the edge sets of all $\mathcal R$-clusters.
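
The following Python sketch conveys the flavor of such a tree clustering (it is a simplification of, not a transcription of, the procedure in [4]; in particular the handling of the leftover at the root is simplified):

```python
def find_clusters_sketch(tree_adj, root, z):
    """Partition the vertices of a tree with max degree 3 into connected
    groups, each emitted group having size in [z, 3z-2]. Uses recursion;
    a production version would iterate to avoid deep call stacks."""
    clusters = []

    def grow(v, parent):
        group = [v]
        for u in tree_adj[v]:
            if u != parent:
                group += grow(u, v)     # each returned open group has size < z
        if len(group) >= z:             # at most 1 + 3*(z-1) = 3z-2 vertices
            clusters.append(group)
            return []                   # nothing left open for the parent
        return group                    # still open: connected, contains v

    leftover = grow(root, None)
    if leftover:
        # Simplification: may be undersized; the real procedure folds the
        # remainder into an adjacent cluster to respect the lower bound z.
        clusters.append(leftover)
    return clusters
```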

For $i \geq 1$, let $E_i$ be the set of edges of $G$ of weights in the range $(m - i n^{1-\epsilon},\, m - (i-1) n^{1-\epsilon}]$. Note that $E_0 = \emptyset$; this set is only defined to give a cleaner description of the data structure. For $i \geq 0$, let $E_{\leq i} = \bigcup_{j \leq i} E_j$, $E_{< i} = \bigcup_{j < i} E_j$, $E_{\geq i} = \bigcup_{j \geq i} E_j$, and $E_{> i} = \bigcup_{j > i} E_j$.

Computing a laminar family of clusters:

Next, a recursive procedure is executed which outputs a family $\mathcal C$ of subgraphs of $G$ that all respect $\mathcal R$. We refer to these as level $i$-clusters where $i \geq 1$ is the recursion depth at which they are formed. Collectively (i.e., over all $i$), we refer to them as $\mathcal C$-clusters in order to distinguish them from $\mathcal R$-clusters. Family $\mathcal C$ will be laminar w.r.t. subgraph containment. We need the following theorem whose proof can be found in Section 6.

Theorem 4.

Let $H$ be a constant-degree graph with vertex set $V_H$ and let $\mathcal P$ be a partition of $V_H$ into subsets each of size $O(z)$ and each inducing a connected subgraph of $H$. Let $c > 0$ and $\epsilon' > 0$ be given constants. There is an algorithm which, given $H$, $\mathcal P$, and any non-empty set $S \subseteq V_H$ respecting $\mathcal P$ whose size $s$ is sufficiently large compared to $z$, outputs a partition $\{S_1, \ldots, S_k\}$ of $S$ respecting $\mathcal P$ such that with probability at least $1 - 1/s^c$, the following three conditions hold for suitable $\phi = 1/s^{\epsilon_1}$ and $\beta = 1/s^{\epsilon_2}$ with small positive constants $\epsilon_1$ and $\epsilon_2$:

  1. $H[S_j]$ is a $\phi$-expander graph for each $j = 1, \ldots, k$,

  2. the number of edges of $H$ between distinct sets of the partition is at most $\beta s$, and

  3. the worst-case time for the algorithm is $O(s^{1+\epsilon'})$.

We shall pick the constants in Theorem 4 in the following; in particular, $c$ can be made as large as we like. We may assume that $n$ is sufficiently large.

The recursive procedure takes as input an integer $i \geq 1$ and a set of level $i$-clusters and outputs the level $j$-clusters contained in these level $i$-clusters for all $j > i$. The first recursive call is given as input $i = 1$ and $G$ as the single level $1$-cluster.

In the general recursive step, for each level $i$-cluster $C$, the algorithm of Theorem 4 is applied with $H$ equal to $C$ minus the edges of $E_i$, with $\mathcal P = \mathcal R(V(C))$, and with $S = V(C)$, giving a partition $\{S_1, \ldots, S_k\}$ of $V(C)$ respecting $\mathcal R$ such that for suitable $\phi$ and $\beta$ as in Theorem 4, the following holds w.h.p.,

  1. $H[S_j]$ is a $\phi$-expander graph for each $j$, and

  2. there are at most $\beta |V(C)|$ edges of $H$ between distinct sets in the partition.

The graphs $H[S_j]$ for all $j$ are defined to be level $(i+1)$-clusters. If the stopping condition below does not apply, the procedure recurses with $i + 1$ and with these level $(i+1)$-clusters. The recursion stops at a level $(i+1)$-cluster having at most $n^{1-\epsilon}$ edges; this ensures that the lower bound on $|S|$ in Theorem 4 is satisfied for each application of this theorem.

The laminar family $\mathcal C$ of all the clusters is represented as a rooted tree $\mathcal T$ in the natural way where the root is the single level $1$-cluster and a level $i$-cluster has as children the level $(i+1)$-clusters contained in it.

For any subset $A$ of edges of $G$ and for any $\mathcal C$-cluster $C$, we let $D_C(A)$ be the subset of edges of $A$ belonging to $C$ and having endpoints in distinct children of $C$ in $\mathcal T$; note that $D_C(A) = \emptyset$ if $C$ is a leaf of $\mathcal T$. We let $D(A)$ be the union of $D_C(A)$ over all $i$ and all level $i$-clusters $C$.

Next, a new graph $G'$ is formed from $G$ where for each level $i$-cluster $C$ the weight $w'(e)$ of each $e \in D_C(E)$ is set to a value $w_i$ depending only on $i$, where $w_1 > w_2 > \cdots$ and where each $w_i$ exceeds every rank; note that this ensures that for all $i$ and all level $i$-clusters $C$, every edge of $D_C(E)$ is heavier in $G'$ than every edge of a child cluster of $C$. For all other edges $e$ of $G$, we define $w'(e) = w(e)$. An example is shown in Figure 1. Forest $M(G')$ is computed and an FFE structure $\mathcal F$ is initialized for the multigraph $G'' = M(G') \cup D(E)_w$; the subscript indicates that the edges of $D(E)$ have their original $w$-weights in $G''$.


Figure 1: (a): A level $i$-cluster is shown with four level $(i+1)$-child clusters. Edges of the cluster not belonging to its children are shown together with their $w$-weights, where thick edges are more expensive than thin edges. (b): The same clusters and edges but with the modified $w'$-weights.

3.2 Updates

We now describe how our data structure handles updates. First, we extend some of the above definitions from the preprocessing step to any point in the sequence of updates as follows. $\mathcal R$-clusters are the components (trees) of the graph consisting of the initial $\mathcal R$-clusters minus the edges removed so far. Hence, when an edge of an $\mathcal R$-cluster $R$ is removed, the two new trees obtained replace $R$ as $\mathcal R$-clusters. $\mathcal C$-clusters are the initial $\mathcal C$-clusters minus the edges deleted so far. Note that $\mathcal C$ remains a laminar family over all updates. Finally, $E_i$, $D_C(E)$, and $D(E)$ are the initial $E_i$, $D_C(E)$, and $D(E)$, respectively, minus the edges removed so far.

Data structure $\mathcal F$ maintains an MSF for the dynamic multigraph $G''$. Lemma 2 below implies that this MSF is $M$. To show it, we use the following result of Eppstein et al. [3].

Lemma 1 ([3], Lemma 4.1).

Let $G$ be an edge-weighted multigraph and let $G_1$ and $G_2$ be two subgraphs of $G$ such that $G = G_1 \cup G_2$. Then $M(G) \subseteq M(G_1) \cup M(G_2)$.

The result was not stated for multigraphs in [3] but immediately generalizes to these.

Lemma 2.

Let $H = (V, E)$ be an edge-weighted graph with weight function $w$, let $D \subseteq E$, and let $H' = (V, E, w')$ where $w'(e) > w(e)$ for all $e \in D$ and $w'(e) = w(e)$ for all $e \in E \setminus D$. Then $M_w(H) = M(M_{w'}(H') \cup D_w)$.

Proof.

Consider an edge $e \in M_w(H) \setminus D$. Then $e$ is the cheapest edge w.r.t. $w$ crossing some cut of $H$. Every other edge crossing this cut is heavier w.r.t. $w$ and hence also w.r.t. $w'$, since $w'$ agrees with $w$ outside $D$ and only increases weights of edges of $D$. Thus $e$ is also the cheapest edge crossing this cut w.r.t. $w'$, so $e \in M_{w'}(H')$. It follows that every edge of $M_w(H)$ occurs in $M_{w'}(H') \cup D_w$ with weight equal to its $w$-weight, and no edge of this multigraph is cheaper than its copy in $H$. Combining this with Lemma 1, we get $M_w(H) = M(M_{w'}(H') \cup D_w)$. ∎

Corollary 1.

With the above definitions, $M = M(G'')$.

As we show later, the number of non-tree edges of $G''$ is at all times polynomially smaller than $n$. Hence, by Theorem 3, it suffices to give an efficient data structure to maintain $M(G')$. We present this in the following. In the rest of this section, all edge weights are w.r.t. $w'$ unless otherwise stated. An advantage of considering $G'$ rather than $G$ is that $M(G')$ behaves nicely w.r.t. the laminar family $\mathcal C$ as the following lemma shows.

Lemma 3.

For any $\mathcal C$-cluster $C$, $M(G') \cap C = M(C)$.

Proof.

Observe that $w'$ assigns a unique weight to each edge of $G'$ (using lexicographic tie-breaking as in Section 2). Hence, we can obtain $M(G')$ by running a Kruskal-type algorithm on the edges of $G'$ where the initial forest has edge set $\emptyset$.

Given a level $i$-cluster $C$, consider the steps in which Kruskal’s algorithm processes edges incident to $C$. By definition of $w'$, all edges of $C$ are cheaper than all other edges of $G'$ incident to $C$. Hence, Kruskal’s algorithm processes all edges of $C$ before any other edge of $G'$ incident to $C$ so it will form a spanning forest of $C$ as part of $M(G')$. It must be a cheapest such spanning forest of $C$ since otherwise, the cost of $M(G')$ could be reduced. ∎
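
The Kruskal-type algorithm referred to in the proof can be sketched with a standard union-find seeded with an initial forest (a generic Python sketch, not tied to the specifics above):

```python
class DSU:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        p = self.parent.setdefault(x, x)
        if p != x:
            self.parent[x] = p = self.find(p)
        return p

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        self.parent[rx] = ry
        return True

def kruskal_with_initial_forest(edges, initial_forest):
    """Return an MSF containing `initial_forest` (assumed cycle-free):
    seed the union-find with the forest, then scan remaining edges (u, v, w)
    in increasing weight order."""
    dsu, msf = DSU(), list(initial_forest)
    for u, v, w in initial_forest:
        dsu.union(u, v)
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if dsu.union(u, v):
            msf.append((u, v, w))
    return msf
```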

We now present a data structure that maintains $M(G')$. At a high level, this structure is similar to $\mathcal F$ as it makes use of an FFE structure. The edge set of this FFE structure is maintained using smaller dynamic structures for the various $\mathcal C$-clusters; these structures are described below.

We say that a level $i$-cluster is small if initially it contained at most $n^{1-\epsilon}$ edges; otherwise, the cluster is large. Note that a large cluster must have children in $\mathcal T$ since the recursion only stops at clusters with at most $n^{1-\epsilon}$ edges. Thus small clusters are leaves in $\mathcal T$ while large clusters are interior nodes. We shall make the simplifying assumption that each large cluster is connected over all updates. This is a strong assumption and we shall later focus on how to get rid of it.

Part of the structure is a data structure which maintains $M(U)$ where $U$ is the union of all small $\mathcal C$-clusters. This structure consists of an FFE structure (in fact, Frederickson’s original structure suffices here) for each small $\mathcal C$-cluster which is initialized during preprocessing. For large clusters, we use more involved data structures which we present in the following.

3.2.1 Compressed clusters

For each level $i$ and each large level $i$-cluster $C$, we define the compressed level $i$-cluster $\bar C$ as the multigraph obtained from $C$ as follows. First, each large child cluster of $C$ is contracted to a single vertex called a large cluster vertex, and self-loops incident to this new vertex are removed. Second, for each small child cluster $C'$ of $C$, its edge set is replaced by $M(C')$. Figure 2(a) and (b) illustrate $C$ and $\bar C$, respectively. We define three subgraphs of $\bar C$:


Figure 2: (a): A level $i$-cluster $C$ with three large child clusters (left) and four small child clusters (right). Edges of $C$ not belonging to its child clusters are shown. (b): Compressed cluster $\bar C$ with an MSF for each child cluster shown. Large cluster vertices are shown in black. (c)–(e): Graphs $\bar C_s$, $\bar C_{s\ell}$, and $\bar C_\ell$, respectively.
$\bar C_s$: consists of the union of $M(C')$ over all small child clusters $C'$ of $C$ as well as the edges of $\bar C$ with both endpoints in small child clusters of $C$ (Figure 2(c)),

$\bar C_{s\ell}$: consists of the large cluster vertices of $\bar C$, of $M(C')$ for each small child cluster $C'$ of $C$, and of the edges of $\bar C$ having a large cluster vertex as one endpoint and having the other endpoint in a small child cluster of $C$ (Figure 2(d)),

$\bar C_\ell$: consists of the subgraph of $\bar C$ induced by its large cluster vertices (Figure 2(e)).

Note that $\bar C_s$, $\bar C_{s\ell}$, and $\bar C_\ell$ together cover all vertices and edges of $\bar C$. Define $M_s = M(\bar C_s)$, $M_{s\ell} = M(\bar C_{s\ell})$, and $M_\ell = M(\bar C_\ell)$. The data structure maintaining $M(G')$ will use an FFE structure for the graph defined as the union of $M(U)$ and of $M_s$, $M_{s\ell}$, and $M_\ell$ over all compressed clusters $\bar C$. This FFE structure, which we denote by $\mathcal F'$, is initialized during preprocessing. By Lemma 3, it will maintain $M(G')$ as desired. As we show later, $\mathcal F'$ contains polynomially less than $n$ non-tree edges at all times so that it can be updated efficiently.

Let $\bar C$ be a given compressed cluster. It remains to give efficient data structures that maintain $M_s$, $M_{s\ell}$, and $M_\ell$. We maintain $M_s$ using an FFE structure for $\bar C_s$, initialized during preprocessing. In the following, we present structures maintaining $M_{s\ell}$ and $M_\ell$.

3.2.2 Maintaining $M_{s\ell}$

To maintain $M_{s\ell}$ and $M_\ell$ efficiently, we shall exploit the fact that both $\bar C_{s\ell}$ and $\bar C_\ell$ have a small subset of distinguished vertices, namely the large cluster vertices, and (ignoring in $\bar C_{s\ell}$ the edges of $M(C')$ for all small child clusters $C'$ of $C$) all edges of these graphs are incident to this small subset.

Forest $M_{s\ell}$ is represented as a top tree. In the following, we shall abuse notation slightly and refer to this top tree as $M_{s\ell}$. Each top tree cluster $B$ of $M_{s\ell}$ has as auxiliary data a pair $(V_B, e_B)$ where $V_B$ is the set of large cluster vertices of $\bar C$ contained in $B$ and $e_B$ contains, for each large cluster vertex $u$ of $\bar C$, a minimum-weight edge $e_B(u)$ having $u$ as one endpoint and having the other endpoint in $B$; if no such edge exists, $e_B(u)$ is assigned some dummy edge whose endpoints are undefined and whose weight is infinite.

In order to maintain $M_{s\ell}$, we first describe how to maintain auxiliary data under the basic top tree operations create, split, and join for $M_{s\ell}$. When create outputs a new cluster $B$ consisting of a single edge, we form $V_B$ as the set of large cluster vertices among the endpoints of the edge (of which there is at most one). Then $e_B$ is computed by letting $e_B(u)$ be a cheapest edge incident to both $u$ and $B$ (or the dummy edge if undefined), for each large cluster vertex $u$ of $\bar C$.

When a split operation is executed for a top tree cluster $B$, we simply remove $V_B$ and $e_B$. Finally, when two top tree clusters $B_1$ and $B_2$ are joined into a new top tree cluster $B$ by join, we first form the set $V_B = V_{B_1} \cup V_{B_2}$. Then we form $e_B$ by letting $e_B(u)$ be an edge of minimum weight among $e_{B_1}(u)$ and $e_{B_2}(u)$, for each large cluster vertex $u$ of $\bar C$.
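
The merging of auxiliary data under join can be sketched as follows (a Python sketch only; a real top tree implementation additionally maintains boundary vertices, and split simply discards the pair as described above):

```python
DUMMY = (None, None, float("inf"))   # dummy edge of infinite weight

def join_aux(aux_a, aux_b):
    """Combine auxiliary data (V_B, e_B) of two top tree clusters being
    joined. Each aux is (set of large cluster vertices contained in the
    cluster, dict mapping each large cluster vertex u to a minimum-weight
    edge (u, x, w) with x inside the cluster, or DUMMY)."""
    (V1, e1), (V2, e2) = aux_a, aux_b
    V = V1 | V2
    e = {}
    for u in e1.keys() | e2.keys():
        a, b = e1.get(u, DUMMY), e2.get(u, DUMMY)
        e[u] = a if a[2] <= b[2] else b   # keep the cheaper incident edge
    return V, e
```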

We are now ready to describe how to maintain $M_{s\ell}$ when an edge $e$ is deleted from $\bar C_{s\ell}$.

Deleting a non-tree edge:

Assume first that $e \notin M_{s\ell}$. Then the topology of $M_{s\ell}$ is unchanged. If $e$ is incident to a large cluster vertex then let $v$ be the other endpoint of $e$ ($v$ cannot be a large cluster vertex); in this case the auxiliary data for each top tree cluster containing $v$ needs to be updated. We do this bottom-up by first applying create to replace each leaf cluster containing $v$ with a new leaf cluster and then applying join to update all non-leaf clusters containing $v$.

Note that the new set of top tree clusters is identical to the old set, only their auxiliary data are updated.

Deleting a tree edge:

Now assume that $e$ belongs to a tree $T$ of $M_{s\ell}$. Top tree $M_{s\ell}$ is updated with the operation cut. If $e$ belongs to $M(C')$ for some small child cluster $C'$ of $C$ then $e$ also belongs to the structure maintaining $M(C')$. In this case, if a reconnecting edge was found for $M(C')$, it is added to $M_{s\ell}$ as a reconnecting edge for $T$; by Lemma 3, this is the cheapest reconnecting edge for $T$. Top tree $M_{s\ell}$ is updated using a link-operation.

Now assume that no reconnecting edge was found in $C'$ (which may also happen if $e$ did not belong to $M(C')$ for any small child cluster $C'$ of $C$). Let $T_1$ and $T_2$ be the two subtrees of $T - e$. After having computed top trees for $T_1$ and $T_2$, let $B_1$ resp. $B_2$ be the root top tree cluster representing $T_1$ resp. $T_2$. A cheapest reconnecting edge (if any) is of one of the following two types: a cheapest edge connecting a large cluster vertex in $T_1$ with a vertex of $T_2$ or a cheapest edge connecting a large cluster vertex in $T_2$ with a vertex of $T_1$. We shall only describe how to identify the first type of edge as the second type is symmetric. First, we identify from $B_1$ the set $V_{B_1}$. Then the desired edge is identified as an edge $e_{B_2}(u)$ of minimum weight over all large cluster vertices $u \in V_{B_1}$. Having found a cheapest reconnecting edge for $T$, if one exists, we add it to $M_{s\ell}$ to reconnect $T_1$ and $T_2$. In the top tree, this is supported by a link-operation.

3.2.3 Maintaining $M_\ell$

Maintaining $M_\ell$ is quite simple. For all distinct pairs $(u, v)$ of large cluster vertices in $\bar C$, the initial set of edges between $u$ and $v$ in $\bar C_\ell$ is stored during preprocessing in a list sorted in increasing order of weight. A graph $K$ is formed, containing a cheapest edge (if any) between each such pair $(u, v)$. The initial $M_\ell$ is computed from $K$ using Prim’s algorithm with binary heaps. Whenever an edge between $u$ and $v$ is deleted from $\bar C_\ell$, it is also deleted from its list and, if it belonged to $K$, a cheapest remaining edge (if any) between $u$ and $v$ is identified from the list and added to $K$. Whenever a tree edge is deleted from $M_\ell$, a simple linear-time algorithm is used to find a cheapest replacement edge by scanning over all edges of $K$.
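
The bookkeeping behind $K$ can be sketched as follows (a Python sketch with hypothetical names; per-pair lists sorted by weight, with the cheapest edge of each pair representing the pair in $K$):

```python
class PairEdgeLists:
    """Per-pair edge lists: for each pair of large cluster vertices, edges
    are kept sorted by weight and the cheapest one is the edge kept in K."""

    def __init__(self, edges):                    # edges: (u, v, w) triples
        self.lists = {}
        for u, v, w in edges:
            self.lists.setdefault(frozenset((u, v)), []).append((w, u, v))
        for lst in self.lists.values():
            lst.sort(reverse=True)                # cheapest edge last

    def cheapest(self, u, v):
        lst = self.lists.get(frozenset((u, v)), [])
        return lst[-1] if lst else None           # the edge kept in K

    def delete(self, u, v, w):
        # assumes (u, v) given in the same order as at insertion
        lst = self.lists[frozenset((u, v))]
        lst.remove((w, u, v))                     # linear scan; fine in a sketch
        return self.cheapest(u, v)                # replacement for K, if any
```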

3.3 Performance

We now analyze the performance of the data structure presented above. We start with the preprocessing step.

3.3.1 Preprocessing

Prim’s algorithm finds $M$ in $O(n \log n)$ time (recall that $m = O(n)$ since $G$ has max degree at most $3$). Having found $M$, $\mathcal R$ can be found in $O(n)$ time since this is the time bound for Frederickson’s FINDCLUSTERS procedure.

The time to compute $\mathcal C$ is dominated by the total time spent by the algorithm in Theorem 4. For each $i$, the total vertex size of all level $i$-clusters is at most $n$ since their vertex sets are pairwise disjoint. Hence, the total size of all sets $S$ given to the algorithm is $O(n^{1+\epsilon})$ since there are $O(n^{\epsilon})$ levels. By the third part of Theorem 4, w.h.p. the total time for computing $\mathcal C$ is $\tilde O(n^{1+\epsilon})$ up to a factor polynomially smaller than $n$.

By Theorem 3, the FFE structures for the small $\mathcal C$-clusters can be initialized in $\tilde O(n)$ total worst-case time. This is also the case for the FFE structures of graphs $\bar C_s$ since these graphs are compressed versions of subgraphs of $G$ that are pairwise both vertex- and edge-disjoint, implying that their total size is $O(n)$. Finally, to bound the time to initialize $\mathcal F'$, note that the graph consisting of the union of $M(U)$ and MSFs $M_s$, $M_{s\ell}$, and $M_\ell$ over all $\bar C$ contains a total of $O(n)$ edges and at most $n$ vertices of $G$. Furthermore, the total number of large cluster vertices is polynomially smaller than $n$. Hence, the total worst-case time spent on initializing FFE structures is $\tilde O(n)$.

We conclude that w.h.p., the total worst-case preprocessing time is $O(n^{1+c'})$ where the constant $c' > 0$ can be made arbitrarily small by picking $\epsilon$ small enough.

3.3.2 Updates

Now we bound the update time of our data structure. We start by bounding the time to update $\mathcal F$ after a single edge deletion in $G$. Recall that $\mathcal F$ maintains an MSF of $G'' = M(G') \cup D(E)_w$. A single edge deletion in $G$ can cause at most one edge deletion in $M(G')$, at most one edge deletion in $D(E)$, and (in case a tree edge was deleted from $M(G')$) at most one edge insertion in $M(G')$. Hence, $G''$ and thus $\mathcal F$ can be updated with a constant number of edge insertions/deletions.

By Theorem 3, in order to bound the time to update $\mathcal F$ after a single edge insertion/deletion, we need to bound the number of non-tree edges of $G''$. We do this in the following lemma.

Lemma 4.

At any time during the sequence of updates, the number of non-tree edges of $G''$ is $O(T + n^{1-\epsilon'})$ for some constant $\epsilon' > 0$.

Proof.

Observe that edges of $\mathcal R$-clusters are always edges of $M$: they belonged to $M$ initially, we only delete edges from $G$, and in a decremental setting a tree edge remains a tree edge until the moment it is deleted. In particular, edges of $G''$ belonging to $\mathcal R$-clusters are tree edges of $G''$. Furthermore, if each $\mathcal R$-cluster is contracted to a vertex in $G''$ then the number of remaining forest edges is at most the number of $\mathcal R$-clusters minus $1$. The initial number of $\mathcal R$-clusters is $O(n/z) = O(T)$ and the number of $\mathcal R$-clusters can increase by at most $1$ per edge deletion in $G$. Since we have a bound of $T$ on the total number of edge deletions in $G$, we conclude that at all times, the number of non-tree edges of $G''$ not belonging to $D(E)$ is $O(T)$.

Next, we bound the number of non-tree edges of $G''$ belonging to $D(E)$. By the second property of Theorem 4, for $i \geq 1$ and for each non-leaf level $i$-cluster $C$, the number of edges between distinct child clusters of $C$ is at most $\beta |V(C)|$, where $\{S_1, \ldots, S_k\}$ is the partition of $V(C)$ found by the algorithm in Theorem 4. By a telescoping sums argument applied to laminar family $\mathcal C$, it follows that