Dynamic Representations of Sparse Distributed Networks: A Locality-Sensitive Approach

In 1999, Brodal and Fagerberg (BF) gave an algorithm for maintaining a low outdegree orientation of a dynamic uniformly sparse graph. Specifically, for a dynamic graph on n vertices whose arboricity is bounded by α at all times, the BF algorithm supports edge updates in an amortized update time that they show to be asymptotically optimal, while keeping the maximum outdegree in the graph bounded by O(α). Such an orientation provides a basic data structure for uniformly sparse graphs, which has found applications to several dynamic graph algorithms, including adjacency queries and labeling schemes, maximal and approximate matching, approximate vertex cover, forest decomposition, and distance oracles.

A significant weakness of the BF algorithm is the possible temporary blowup of the maximum outdegree following edge insertions. Although BF eventually reduces all outdegrees to O(α), some vertices may reach an outdegree as large as Ω(n) during the process, hence the local memory usage at the vertices, which is an important quality measure in distributed systems, cannot be bounded. We show how to modify the BF algorithm to guarantee that the outdegrees of all vertices are bounded by O(α) at all times, without hurting any of its other properties, and present an efficient distributed implementation of the modified algorithm. This provides the first representation of distributed networks in which the local memory usage at all vertices is bounded by the arboricity (which is essentially the average degree of the densest subgraph) rather than the maximum degree.

For settings where there is no strict limitation on the local memory, one may take the temporary outdegree blowup to the extreme and allow a permanent outdegree blowup. This allows us to address the second significant weakness of the BF algorithm – its inherently global nature: an insertion of an edge (u, v) may trigger changes in the orientations of edges that are arbitrarily far away from u and v. Such a non-local scheme may be prohibitively expensive in various practical applications. We suggest an alternative local scheme, which does not guarantee any outdegree bound on the vertices, yet is just as efficient as the BF scheme for some of the aforementioned applications. For example, we obtain a local dynamic algorithm for maintaining a maximal matching with sub-logarithmic update time in uniformly sparse networks, providing an exponential improvement over the state-of-the-art in this context. We also present a distributed implementation of this scheme and some of its applications.

1 Introduction

1.1 Quality measures in distributed computing

The LOCAL and the CONGEST models are perhaps the two most fundamental communication models in distributed computing (cf. [25]); the former is the standard model capturing the essence of spatial locality, and the latter also takes into account congestion limitations. In these models it is assumed that initially all the processors wake up simultaneously, and that computation proceeds in fault-free synchronous rounds during which every processor exchanges messages with its direct neighbors in the network. In the LOCAL model these messages are of unbounded size, whereas in the CONGEST model each message contains O(log n) bits. An efficient distributed algorithm allows the nodes to communicate with their direct neighbors for a small number of rounds, after which they need to produce their outputs, which are required to form a valid global solution. A task is called local if the number of rounds needed for solving it is constant. The locality of many distributed tasks has been studied in the past two decades, with the emerging conclusion that truly local tasks are rather scarce.

Another important locality measure is the local memory usage at a processor. The standard premise is that each processor may communicate with all its neighbors by sending and receiving messages. To this end, the local memory usage at a processor should be proportional to (and at least linear in) its degree. Reducing the local memory at processors to be independent of their degree could be of fundamental importance for many real-life applications. In fact, the processors in a distributed network are in many cases identical, thus the local memory at low degree processors is not proportional to their degree but rather to the maximum degree in the network. Moreover, in sparse networks (such as planar networks), the maximum degree may be Θ(n) while the average degree is constant, so the global memory (over all processors) will be blown up by a factor of Θ(n) if all the processors are identical. (In dynamic networks, on which we focus here, this factor-n blowup may occur even if the processors are not identical.) Low-degree spanners have been used to reduce local memory usage at processors, which was proved useful for a plethora of applications, such as efficient broadcast protocols, data gathering and dissemination tasks in overlay networks, compact routing schemes, network synchronization, and computing global functions [4, 27, 5, 6, 25]. However, for the vast majority of distributed tasks, such as maximum independent set and coloring, the global solution must consider all edges of the network and not just the spanner edges.

The total number of messages needed for solving a distributed task is another fundamental quality measure in distributed computing, which we will also consider in the sequel.

1.2 The dynamic distributed setting

The dynamic distributed model is defined as follows. Starting with the empty graph G_0, in every round i the adversary chooses a vertex or an edge to be either inserted to or deleted from G_{i-1}, resulting in G_i. (As a result of a vertex deletion, all its incident edges are deleted. A vertex is inserted without incident edges.) Upon the insertion or deletion of a vertex v or an edge (u, v), an update procedure is invoked, which should restore the validity of the solution being maintained. For example, if we maintain a maximal matching, then following the deletion of a matched edge the matching is no longer maximal, and the update procedure should restore maximality. We shall consider the most natural model in this setting, hereafter the local wakeup model (cf. [26, 24, 13, 3]), where only the affected vertices wake up (following an update to a vertex v, only v wakes up; following an edge update (u, v), both u and v wake up). The update procedure proceeds in fault-free synchronous rounds during which every processor exchanges messages with its neighbors, just as in the static setting, until finishing its execution.

In the distributed dynamic setting, the amortized update time and amortized message complexity bound the average number of communication rounds and messages sent, respectively, needed to update the solution per update operation, over a worst-case sequence of updates. The worst-case update time and worst-case message complexity are the maximum number of communication rounds and messages sent per update operation, again over a worst-case sequence of updates.

We assume that the topological changes occur serially and are sufficiently spaced, so that the protocol has enough time to complete its operation before the occurrence of the next change. Since all our algorithms can be strengthened to provide nontrivial worst-case update time guarantees (and in some cases rather small ones), this assumption should be acceptable in many practical scenarios. Moreover, the same assumption has been made in previous works as well; see, e.g., [26, 24, 13, 3], and the references therein. We remark that our focus is on optimizing amortized rather than worst-case bounds, which may provide another justification for making this assumption.

1.3 Representations of sparse networks via dynamic edge orientations

1.3.1 Centralized networks

A graph G = (V, E) has arboricity α if α = max ⌈|E(U)| / (|U| − 1)⌉, where the maximum is taken over all subsets U of V with |U| ≥ 2 and E(U) denotes the set of edges induced by U. Thus the arboricity is close to the maximum density over all induced subgraphs of G. While a graph of bounded arboricity is uniformly sparse, a graph of bounded density (i.e., a sparse graph) may contain a dense subgraph (e.g., on O(√n) of the vertices), and therefore may have large arboricity. The family of bounded arboricity graphs contains planar and bounded genus graphs, bounded tree-width graphs, and in general all graphs excluding fixed minors.

One of the most fundamental questions in data structures is to devise efficient representations of graphs supporting adjacency queries: given two vertices u and v, is there an edge between them in the n-vertex graph G? Using an adjacency matrix (of size n²) one can support such queries in constant time. In sparse graphs, however, a quadratic-space data structure seems very wasteful. If one uses adjacency lists instead, the space is reduced to O(n + m), where m is the number of edges, but then adjacency queries may require Ω(n) time. By maintaining these adjacency lists sorted, the worst-case query time can be reduced to O(log n), but no further than that, even in sparse graphs. Another approach is to use hashing, which guarantees linear space and constant query time, but alas it requires randomization, otherwise the construction time is super-linear. While some of these data structures have linear (in the graph size) space usage, none of them can bound the local space usage (per vertex).

In a pioneering paper from 1999, Brodal and Fagerberg (BF) [12] devised a data structure for adjacency queries in uniformly sparse graphs that is based on edge orientations. Specifically, an arboricity-α preserving sequence is a sequence of edge insertions and deletions starting from an empty graph, in which the arboricity of the dynamic graph is bounded by α at all times. For any arboricity-α preserving sequence, the BF algorithm has an asymptotically optimal amortized update time (see below), while keeping the maximum outdegree in the graph bounded by O(α). (The BF algorithm can, in fact, handle vertex updates within the same asymptotic bounds, where n stands for the current number of vertices.) Such an edge orientation, in which the maximum outdegree is bounded by some parameter Δ, is called a Δ-orientation; it allows one to support adjacency queries in O(Δ) = O(α) worst-case time, thus providing a significant improvement over the known data structures in graphs of sufficiently low arboricity.
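
To make the query mechanism concrete, here is a minimal Python sketch of adjacency queries on top of a bounded outdegree orientation; the class name, the naive orientation rule, and the absence of the BF flip machinery are simplifications for illustration only and are not part of the BF data structure.

class OrientedAdjacency:
    def __init__(self):
        self.out = {}                      # vertex -> list of out-neighbors

    def insert_edge(self, u, v):
        # Placeholder orientation rule: orient out of the endpoint with the
        # smaller current outdegree (the real BF rule also performs flips).
        self.out.setdefault(u, []); self.out.setdefault(v, [])
        src, dst = (u, v) if len(self.out[u]) <= len(self.out[v]) else (v, u)
        self.out[src].append(dst)

    def delete_edge(self, u, v):
        if v in self.out.get(u, []):
            self.out[u].remove(v)
        elif u in self.out.get(v, []):
            self.out[v].remove(u)

    def adjacent(self, u, v):
        # Time proportional to outdeg(u) + outdeg(v) only.
        return v in self.out.get(u, []) or u in self.out.get(v, [])

g = OrientedAdjacency()
g.insert_edge(1, 2); g.insert_edge(2, 3)
print(g.adjacent(1, 2), g.adjacent(1, 3))   # True False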

BF also showed that the amortized time of their algorithm is asymptotically optimal. Specifically, let δ, σ and f be arbitrary integers, and suppose one can maintain a δ-orientation for some sequence of σ edge updates while doing f edge flips, starting with the empty graph. (We omit the constants hidden in the O-notation above and the notation to follow.) Then the BF algorithm on this update sequence, run with an outdegree parameter that exceeds δ by a sufficiently large constant factor, maintains an O(δ)-orientation with a total runtime (and thus number of edge flips) of O(σ + f).

Recently, there has been a growing interest in the edge orientation problem, due to its applications to additional dynamic graph problems. See App. A for additional results on this problem and some of its applications.

1.3.2 Distributed networks

There is a close connection between low outdegree orientations and the forest decomposition problem, where one aims to decompose the edges of a graph into a small number of (rooted) forests. Obviously, a decomposition of a graph into k forests immediately yields a k-orientation, by orienting each edge from child to parent in its forest. The other direction is also true [24]: a k-orientation yields a decomposition into at most 2k forests. Also, a dynamic maintenance of the former can be translated into a dynamic maintenance of the latter with a constant overhead in the update time, in both centralized and distributed settings [24].
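
The following Python sketch illustrates the second direction under these conventions: given the out-neighbor lists of a d-orientation, labeling each vertex's out-edges with distinct indices yields classes with at most one out-edge per vertex, and breaking the single cycle that each class component may contain yields at most 2d forests. The function and variable names are ours and are not taken from [24].

from collections import defaultdict

def orientation_to_forests(out_edges, d):
    """out_edges: dict vertex -> list of out-neighbors (each of length <= d)."""
    assert all(len(nbrs) <= d for nbrs in out_edges.values())
    classes = defaultdict(list)                 # index -> list of directed edges
    for u, nbrs in out_edges.items():
        for i, v in enumerate(nbrs):
            classes[i].append((u, v))

    forests = []
    for i in range(d):
        parent = {u: v for (u, v) in classes[i]}    # <= 1 out-edge per vertex
        on_cycle_edges = []
        state = {}                                  # 0 unseen, 1 on path, 2 done
        for start in list(parent):
            path, u = [], start
            while state.get(u, 0) == 0:
                state[u] = 1
                path.append(u)
                u = parent.get(u)
                if u is None:
                    break
            if u is not None and state.get(u) == 1:      # found a new cycle at u
                on_cycle_edges.append((u, parent.pop(u)))  # break it
            for w in path:
                state[w] = 2
        forests.append(list(parent.items()))             # a proper forest
        forests.append(on_cycle_edges)                    # vertex-disjoint edges
    return forests

# A triangle oriented cyclically has outdegree 1 but is not a forest;
# the conversion splits it into two forests.
print(orientation_to_forests({0: [1], 1: [2], 2: [0]}, 1))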

[7] studied the forest-decomposition problem in the distributed static setting. They showed that for a network with arboricity α and any constant ε > 0, there exists a distributed algorithm that computes a decomposition of the network into O(α) forests (and hence also an O(α)-orientation) in O(log n) rounds. (This result was refined recently by [16].) [7] also showed that given such a forest decomposition (or an edge orientation), one can compute an O(α²)-vertex coloring of the network within a small number of additional rounds. Using this coloring an MIS can be computed in sublinear time. More generally, low outdegree orientations lead to sublinear-time algorithms for vertex and edge coloring, MIS, and maximal matching in distributed networks of bounded arboricity. (See Chapters 4 and 11.3 in [8] for more details.)

For the dynamic distributed model, [24] devised a distributed algorithm for maintaining an O(α)-orientation with low amortized update time. They then used this orientation to maintain, within the same time, a decomposition into O(α) forests and also an adjacency labeling scheme with labels of size O(α log n) bits. They used the same approach to obtain distributed algorithms for maintaining colorings with a small number of colors and other related structures with the same update time. Although the distributed algorithm of [24] has a low amortized update time, it incurs a polynomial (in the network size) bound on three important parameters: (1) the amortized message complexity, (2) the local memory usage at processors, and (3) the message size. In particular, the algorithm of [24] cannot be implemented in the CONGEST model.

While the distributed algorithms of [7] can be implemented in the CONGEST model, they are static, and as such, their message complexity must be at least linear in the size of the network. Moreover, unless there is some underlying representation of the network, for an algorithm to solve any nontrivial task from scratch, every processor must communicate with each of its neighbors at least once. Hence the local memory usage at processors, which must be at least linear in the maximum degree for some processors, may be larger than the arboricity bound by a factor of up to Θ(n).
A fundamental question.  Can one use O(α)-orientations to obtain a representation of a dynamic distributed network with a local memory usage that is (nearly) linear in the arboricity α? We first argue that a distributed implementation of the BF algorithm cannot achieve this. Indeed, a significant weakness of the BF algorithm is the possible temporary blowup of the maximum outdegree following edge insertions. More specifically, following an insertion of an edge (u, v) that is oriented from u to v, the outdegree of u may exceed the threshold. To restore a valid orientation, the BF algorithm resets u, thereby flipping all its outgoing edges. As a result, the former out-neighbors (outgoing neighbors) of u increase their outdegree. All such neighbors whose outdegree now exceeds the threshold are then handled in the same way, one after the other, and this process is repeated until all vertex outdegrees are within the threshold. BF used an elegant potential function argument to show that this process not only terminates, but also leads to an asymptotically optimal algorithm (as mentioned before). Although BF eventually reduces all outdegrees to O(α), some of these outdegrees may blow up throughout the reset cascade all the way to Ω(n).
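
The reset cascade just described can be summarized by the following simplified Python sketch; the parameter delta stands for the outdegree threshold, and deletions, vertex updates, and the exact bookkeeping of the actual BF algorithm are omitted.

from collections import deque

def insert_and_reset(out, u, v, delta):
    """out: dict vertex -> set of out-neighbors (the current orientation)."""
    out.setdefault(u, set()); out.setdefault(v, set())
    out[u].add(v)                                # orient the new edge u -> v
    overfull = deque([u] if len(out[u]) > delta else [])
    while overfull:
        w = overfull.popleft()
        if len(out[w]) <= delta:                 # may have been fixed already
            continue
        for x in list(out[w]):                   # reset w: flip every out-edge
            out[w].remove(x)
            out.setdefault(x, set()).add(w)
            if len(out[x]) > delta:              # a flip may overfill x in turn
                overfull.append(x)
    return out

orient = {}
for a, b in [(1, 2), (1, 3), (1, 4)]:
    insert_and_reset(orient, a, b, delta=2)
print(orient)   # vertex 1 was reset once its outdegree exceeded 2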

To implement the BF algorithm with local memory usage of O(α), the orientation should remain an O(α)-orientation (or close to one) throughout the reset cascade. We show that this is not the case unless the graph is of arboricity 1. Specifically, we show that for dynamic forests (α = 1), the BF algorithm never increases the outdegree of a vertex beyond the threshold by more than one, but there exist graphs of arboricity 2 for which the BF algorithm blows up the outdegree of some vertices to Ω(n)! Hence, a distributed implementation of the BF algorithm requires a huge local memory usage. The algorithms of [18, 17], with a worst-case update time, never increase the outdegree of a vertex beyond the specified threshold. However, the tradeoffs between the outdegree and update time provided by these algorithms are significantly inferior to the BF tradeoff. In particular, for graphs of constant arboricity, the outdegree should remain constant at all times, whereas the algorithms of [18, 17] cannot provide an outdegree better than logarithmic. (See App. A for more details.)

We remark that the reset cascade of the BF algorithm is inherently sequential, and it is unclear if it can be distributed efficiently even regardless of local memory constraints. A similar issue arises with the worst-case update time algorithms of [18, 17].

Question 1

Is there an algorithm with the same optimal (up to constants) tradeoff of BF between the outdegree and the amortized cost, which guarantees that the outdegree of all vertices is always O(α)? Furthermore, can this algorithm be distributed efficiently with a local memory usage of O(α)?

Our contribution.  Our first attempt towards answering Question 1 is by making a natural modification to the BF algorithm: instead of resetting vertices of outdegree larger than the threshold in an arbitrary order, we always choose to reset next the vertex of largest outdegree among all vertices of outdegree larger than the threshold. We show that with this modification the algorithm of BF keeps all outdegrees bounded by O(α log n) at all times. We also complement this upper bound with a matching lower bound, showing that the BF algorithm together with this modification can indeed generate vertices of outdegree Ω(α log n) during the reset cascade, and this can happen even in graphs of arboricity 2. This modification does not resolve Question 1, as the outdegree may blow up by a logarithmic factor during the cascade, and, more importantly, it seems unlikely that the algorithm with this modification can be distributed efficiently.

To resolve Question 1 we first give a new centralized algorithm, which is inherently different from the BF algorithm, and keeps the outdegree bounded by O(α) at all times. In contrast to the BF algorithm, our algorithm does not apply a cascade of reset operations on vertices whose outdegree exceeds the threshold following an insertion. Note that any reset operation on some vertex “helps” that particular vertex but “hurts” its out-neighbors. Instead, our algorithm first collects a set of vertices of relatively high outdegree that would “benefit” from being reset. Then it works on the subgraph induced by the outgoing edges of these vertices, in a somewhat opposite manner to the BF algorithm. More specifically, it applies a cascade of “anti-reset” operations on vertices of outdegree significantly smaller than the threshold, where an anti-reset on a vertex flips all its incoming edges to be outgoing of it. In other words, vertices in our algorithm are being helpful to their neighbors rather than hurtful as before. The cascade of “anti-reset” operations leads to a low outdegree orientation within this subgraph, but it also makes sure that the outdegree of all vertices never exceeds the required threshold in the entire graph throughout the process. We show that our algorithm has the same (up to a constant factor) tradeoff of BF between the outdegree and amortized cost. This is nontrivial, since the potential function argument of BF relies heavily on the gain of any reset operation to the potential value. Roughly speaking, that argument compares the current orientation to an optimal orientation, in which all but a few edges must be incoming to any vertex, and so the potential must be reduced after resetting a vertex of outdegree much larger than the threshold. This argument, alas, does not carry over to anti-resets. The argument that we provide is based on a global consideration (of the total potential gain of all anti-resets) rather than on a local consideration (of each reset). We also demonstrate that this approach of replacing resets with anti-resets facilitates efficient distributed implementation, as we can perform all the anti-resets in parallel, without worrying about the neighbors’ outdegrees.

In this way we resolve Question 1 in the affirmative, providing a distributed algorithm for maintaining an O(α)-orientation with the optimal (up to constants) amortized cost, and with a local memory usage of O(α), for any arboricity bound α. Moreover, the amortized cost bounds not just the amortized update time of our algorithm but also its amortized message complexity. Our algorithm uses short messages, and can thus be implemented in the CONGEST model. As immediate consequences, we can maintain forest decompositions and adjacency labeling schemes with the same bounds as above, thereby significantly improving over [24]. (Recall that the algorithm of [24] incurs polynomial bounds on the amortized message complexity, local memory usage at processors, and message size.)

A low outdegree orientation does not provide information on the incoming neighbors of a vertex. Hence, although it finds applications as discussed above, it cannot be viewed as a complete representation of the network. To obtain a complete representation of the network, we distribute the information on the incoming neighbors of any vertex within the local memory of these neighbors. In this way we can guarantee that the local memory usage remains O(α), yet each vertex can scan its incoming neighbors upon need. On the negative side, this scan of incoming neighbors is carried out sequentially rather than in parallel. Nevertheless, in some applications we only need to scan a few incoming neighbors. As a first application of our network representation, we obtain a distributed algorithm for maintaining a maximal matching with low amortized update time and message complexity, with O(α) local memory usage. (A maximal matching can be maintained via a trivial distributed algorithm with constant worst-case update time, even in general networks, but its amortized message complexity and local memory usage will be Θ(n), even in forests.) To enhance the applicability of our network representation, we demonstrate that the bounded degree sparsifiers of [29] can be maintained dynamically in a distributed network using low local memory usage. Using these sparsifiers, we obtain efficient distributed algorithms for maintaining approximate matching and vertex cover with low amortized update time and message complexities and with low local memory usage (see Section 2 for details).

This result provides the first efficient representation of uniformly sparse distributed networks with low local memory usage. Besides the aforementioned applications, such a representation may be used more broadly in applications currently suitable only for low degree networks, where local memory is very limited.

1.4 The algorithm of BF is global

When dealing with networks of huge scale, it is often important to devise algorithms that are intrinsically local. Local algorithms have been extensively studied, from various perspectives. (See e.g. [21, 1, 30, 28, 22, 14, 15] and the references therein.) A local algorithm in a dynamic network performs an operation at a vertex v while affecting only v and its immediate neighbors (or more generally vertices in a small ball around v). Local algorithms are motivated by environments, both centralized and distributed, in which it is undesirable, and sometimes even impossible, for a change at a particular vertex of the network to affect remote locations unrelated to the change. In the context of I/O efficiency, local algorithms may have better cache performance.

The second drawback of the BF algorithm that we address is the fact that it is not local. A single insertion of an edge (u, v) that increases the outdegree of a vertex beyond the threshold may trigger edge flips at distance Ω(n) from u and v, as shown in Figure 1 for the case of a 2-orientation. In fact, for the example of Figure 1, any algorithm that maintains a 2-orientation must flip edges that are at distance Ω(n) from u and v. (There are degenerate examples showing that the BF algorithm sometimes flips edges at distance Ω(n) from u and v.) Consequently, to achieve locality, we must relax the outdegree condition inherent to the edge orientation problem.

Figure 1: An illustration for a 2-orientation. Upon the insertion of the edge (u, v), at least Ω(n) edges must be flipped to restore a 2-orientation, some of which must be at distance Ω(n) from u and v. For example, flipping the edges along the red path restores a 2-orientation.

Our contribution.  We propose an alternative local scheme that performs a sequence of edge insertions, deletions, and adjacency queries in total time that is asymptotically no worse than that of BF. The scheme is natural and works as follows. Upon a query and/or an update at a vertex v we reset v, that is, we make v's outgoing edges incoming. (We suggest two versions, an aggressive one that always flips v's outgoing edges, and another that flips these edges only if the outdegree of v is larger than the threshold.) More specifically, whenever the application of interest has to traverse v's outgoing neighbors, it also flips the corresponding edges (thereby intuitively paying for the traversal). Thus, we get locality at the cost of giving away the worst-case upper bound on the outdegrees of the vertices. We call this scheme the flipping game. We use the flipping game to get local algorithms for adjacency queries and dynamic maximal matching. These two applications can, in fact, be cast as special cases of a generic paradigm, described in detail in Section 3.1.
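
The following Python sketch illustrates the aggressive version of the flipping game under these conventions; the class and method names are ours, and the orientation of a newly inserted edge is chosen arbitrarily.

class FlippingGame:
    def __init__(self):
        self.out = {}                      # vertex -> set of out-neighbors

    def _reset(self, v):
        """Traverse and flip all of v's outgoing edges; return the old out-set."""
        old = self.out.get(v, set())
        for w in old:
            self.out.setdefault(w, set()).add(v)
        self.out[v] = set()
        return old

    def insert(self, u, v):
        self.out.setdefault(u, set()); self.out.setdefault(v, set())
        self._reset(u); self._reset(v)     # local work only at u and v
        self.out[u].add(v)                 # orient the new edge arbitrarily

    def delete(self, u, v):
        self.out.get(u, set()).discard(v)
        self.out.get(v, set()).discard(u)

    def adjacent(self, u, v):
        # Traversing an out-list also flips it (paying for the traversal).
        return v in self._reset(u) or u in self._reset(v)

g = FlippingGame()
g.insert(1, 2); g.insert(2, 3)
print(g.adjacent(1, 2), g.adjacent(1, 3))   # True False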

The only known local algorithm for maintaining a maximal matching has an update time of O(√m), where m is the number of edges in the graph [23], and this guarantee does not improve for graphs with bounded arboricity. (Even in dynamic forests, the fastest known local algorithm has logarithmic amortized update time.) Using the flipping game we get a local algorithm with sub-logarithmic amortized update time for low arboricity graphs.

The fastest local deterministic data structure for supporting adjacency queries requires logarithmic query time, again even for dynamic forests. Using the flipping game we get a deterministic local data structure for adjacency queries supporting queries and updates in sub-logarithmic amortized time in low arboricity graphs, providing an exponential improvement over the state-of-the-art.

To prove these bounds, we upper bound the number of flips made by the flipping game in terms of the number of flips made by the algorithm of BF for maintaining an O(α)-orientation. We remark that the flipping game can be easily and efficiently distributed. This gives rise to a local distributed algorithm for maintaining a maximal matching in a distributed network of low arboricity, with sub-logarithmic amortized update time and message complexities. (Applying the distributed algorithm of [24] instead of the flipping game yields a global algorithm whose amortized message complexity is polynomial in n.)

2 Efficient Representations for Sparse Networks

2.1 Low outdegree orientations with low local memory usage

Let Δ denote the outdegree threshold in the BF algorithm. We present here a new algorithm for maintaining a Δ-orientation in dynamic graphs of bounded arboricity α. Our algorithm achieves the same (up to a constant factor) parameters as the BF algorithm, yet it guarantees that the outdegree of all vertices is bounded by the required threshold (i.e., Δ) at all times. We first (Section 2.1.1) describe the algorithm in a centralized setting, and then (Section 2.1.2) present a distributed implementation. Finally, we complement these results (Section 2.1.3) by showing that the BF algorithm indeed blows up the outdegree of vertices, even after applying to it several natural adjustments.

2.1.1 A new centralized algorithm that controls the outdegrees

Our algorithm handles edge deletions and insertions in the same way as the BF algorithm, until the outdegree of some vertex exceeds Δ. At this stage our algorithm does not apply a reset cascade, but rather aims at finding all the vertices that would “benefit” from flipping their edges (in terms of reducing the value of a global potential function, details follow), and then applies a cascade of anti-resets, in which vertices of sufficiently low outdegree flip their incoming edges to be outgoing of them (rather than the other way around, as in the BF algorithm).

Specifically, the algorithm starts by exploring the directed neighborhood reachable from the overfull vertex along outgoing edges, stopping at vertices of sufficiently low outdegree (a constant fraction of Δ). That is, for each vertex of outdegree above this exploration threshold that we reach, hereafter an internal vertex, we explore all its out-neighbors; for each vertex of outdegree at most the exploration threshold that we reach, hereafter a boundary vertex, we do not explore further. (Thus internal vertices have outdegree above the exploration threshold and all their out-neighbors belong to the explored subgraph, whereas boundary vertices have outdegree at most the exploration threshold and their out-neighbors may belong to the explored subgraph due to other internal vertices, but not due to the boundary vertices themselves.) The algorithm constructs the digraph H whose edge set consists of all the outgoing edges of the internal vertices. This can be carried out in time linear in the size of H. Having constructed the digraph H, the algorithm proceeds by computing a new orientation of H in which the outdegree of all vertices is small (linear in the arboricity α), as follows. Initially we color (i.e., mark) all edges of H. Observe that at least one vertex of H is adjacent to a small number (linear in α) of colored edges; we maintain a list L of all vertices adjacent to at most that many colored edges. We pick an arbitrary vertex in L, perform an anti-reset on it (flipping all its incoming edges to be outgoing of it), and then uncolor all its adjacent colored edges and update L accordingly. This process is repeated until no edge of H is colored, at which stage we have a valid low outdegree orientation of H. Note that until a vertex performs an anti-reset, its outdegree may only decrease. Whenever a vertex performs an anti-reset, its outdegree may increase, but only up to the number of its colored incident edges at that moment (which is linear in α), so a vertex never increases its outdegree beyond the maximum between this small bound and its initial outdegree.

Since each boundary vertex had at most a constant fraction of Δ out-neighbors in the entire graph, its new outdegree will be at most Δ, and this also bounds its outdegree at any time during the process. Moreover, since all outgoing edges of each internal vertex are taken to H, the outdegree of each internal vertex never exceeds its initial outdegree (which is at most Δ + 1). This process of computing a valid low outdegree orientation while never blowing up the outdegree, hereafter the anti-reset cascade procedure, is inspired by the static algorithm of [2], with the inherent difference that it works on a carefully chosen (possibly small) subgraph H, whereas the reset cascade procedure underlying the BF algorithm does not work on a precomputed subgraph, but rather on a subgraph that grows “on the fly” with the resets. While it is easy to see that our procedure runs in linear time on any chosen subgraph (as with the BF algorithm), the challenge is to show that the total cost of these procedures, over all chosen subgraphs throughout the execution of our algorithm, is asymptotically the same as that of the BF algorithm.
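
A simplified, centralized Python sketch of the exploration and the anti-reset cascade is given below. The thresholds high (for classifying internal vertices) and peel (for deciding when a vertex may absorb its remaining colored edges) are illustrative placeholders for the constant fraction of Δ and the arboricity-dependent bound discussed above, and the sketch assumes the orientation is given as a mapping from each vertex to its set of out-neighbors.

from collections import defaultdict, deque

def anti_reset_cascade(out, start, high, peel):
    """out: defaultdict(set), vertex -> set of out-neighbors; start: overfull vertex."""
    # 1. Explore the out-neighborhood of `start`, stopping at low-outdegree
    #    ("boundary") vertices; collect the internal vertices.
    internal, seen, queue = set(), {start}, deque([start])
    while queue:
        v = queue.popleft()
        if len(out[v]) <= high:
            continue                              # boundary vertex: do not expand
        internal.add(v)
        for w in out[v]:
            if w not in seen:
                seen.add(w); queue.append(w)

    # 2. H consists of all outgoing edges of internal vertices ("colored").
    colored = defaultdict(set)                    # undirected incidence within H
    for v in internal:
        for w in out[v]:
            colored[v].add(w); colored[w].add(v)

    # 3. Peel: repeatedly take a vertex with few colored incident edges,
    #    orient all of them away from it (an "anti-reset"), and uncolor them.
    ready = deque(v for v in colored if len(colored[v]) <= peel)
    while ready:
        v = ready.popleft()
        for w in list(colored[v]):
            out[w].discard(v); out[v].add(w)      # the edge becomes outgoing of v
            colored[w].discard(v); colored[v].discard(w)
            if len(colored[w]) == peel:
                ready.append(w)
    return out

# Tiny demo: a star oriented out of its center exceeds high=2 and is repaired
# by letting the leaves absorb the edges.
out = defaultdict(set, {0: {1, 2, 3, 4}})
anti_reset_cascade(out, 0, high=2, peel=2)
print({v: s for v, s in out.items() if s})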

Lemma 2.1

The total runtime of our algorithm is linear in the total number of edge flips made, assuming Δ is at least a sufficiently large constant multiple of the arboricity α.

Proof:  Edge insertions and deletions are handled in constant time, until the outdegree of some vertex exceeds Δ. At this stage a digraph H as described above is constructed, along with the aforementioned list L, within time linear in the size of H. Then edges of H are flipped by the anti-reset cascade procedure, so that each edge is flipped at most once. By maintaining the list L throughout the anti-reset cascade procedure, we can easily implement this procedure in time linear in the size of H. Note also that the size of H is given by the sum of outdegrees over the internal vertices of H. To complete the proof, we argue that a constant fraction of the outgoing edges of each internal vertex of H are flipped during the anti-reset cascade procedure. To see this, note that the outdegree of each internal vertex of H is reduced during this procedure from above the exploration threshold (a constant fraction of Δ) to a quantity linear in the arboricity α. Recalling that the outdegrees of vertices are bounded by Δ at all times, at least a constant fraction of the at most Δ outgoing edges of each internal vertex must have been flipped during the procedure, assuming Δ is a sufficiently large constant multiple of α.

Although our algorithm and the BF algorithm are inherently different, we use a potential function argument similar to the one in [12] to bound the number of flips made by our algorithm, which by Lemma 2.1 also bounds its total runtime (up to a constant factor). The key insight is that we can apply a potential function argument globally, i.e., for all the anti-resets together, rather than to each one of them separately as was done for resets by [12].

Suppose one can maintain a δ-orientation for some sequence of σ edge updates while doing f edge flips, starting with the empty graph. As in [12], we define an edge to be good if its orientation in our algorithm is the same as in the δ-orientation, and bad otherwise. We define the potential Φ to be the number of bad edges in the current graph. Initially Φ = 0. Each insertion, as well as each flip performed by the δ-orientation, increases Φ by at most one, while edge deletions may only decrease Φ. All edge flips made by our algorithm are due to the anti-reset cascade procedures. Consider some digraph H on which an anti-reset cascade procedure is applied throughout the execution of our algorithm, and note that all the edges of H are outgoing of internal vertices of H before the procedure starts. Let v be an arbitrary internal vertex of H, and note that its outdegree before the procedure starts exceeds the exploration threshold, which is a constant fraction of Δ. Moreover, by the definition of a δ-orientation, at most δ of v's outgoing edges at that moment are good. As a result of the procedure, these edges may become bad. However, since v's outdegree is reduced to a quantity linear in the arboricity by the end of the procedure, a constant fraction of Δ of its edges (assuming Δ is a sufficiently large constant multiple of δ) were bad and become good. It follows that Φ decreases by Ω(Δ) per internal vertex. Consequently, the total number of vertices that serve as internal vertices of some digraph H throughout the execution of our algorithm is at most O((σ + f)/Δ). Since the outdegree of all vertices is bounded by Δ at all times, the total number of edge flips made by our algorithm is bounded by Δ times the number of internal vertices. Assuming Δ is a sufficiently large constant multiple of δ, it follows that the total number of flips is O(σ + f).

2.1.2 A distributed implementation with low local memory usage

Consider a vertex whose outdegree exceeds Δ. The centralized algorithm starts by exploring the directed neighborhood reachable from it and coloring all edges of the digraph H as described above. We can distribute this step using broadcast and convergecast in a straightforward way. However, we also need to make sure that the local memory usage at processors is bounded by O(Δ). To this end, every internal processor (with outdegree larger than the exploration threshold) will be responsible for coloring its outgoing edges. Throughout this broadcast we also compute a directed BFS tree T on H, so that each processor will hold information about its parent in T, using which we can easily carry out the subsequent convergecast. The number of rounds will be linear in the depth of T, whereas the number of messages will be linear in the size of H.

The centralized algorithm continues by running the anti-reset cascade procedure. This procedure is inspired by the static algorithm of [2], for which an efficient distributed implementation was given in [7]. We cannot use the distributed algorithm of [7], however, since it lets processors communicate with all their neighbors, hence the local memory usage would depend on the maximum degree in the network, which can be significantly larger than Δ. (Recall that Δ stands for the outdegree threshold, which is linear in the arboricity α, and may be smaller than the maximum degree by a factor of up to Θ(n/α).)

The distributed algorithm that we propose is a variant of [7], and works as follows. First, we lower the exploration threshold of the centralized algorithm by a constant factor. To compensate for this decrease, we increase Δ by a constant factor. (By letting Δ increase by a constant factor, the above potential function argument carries over smoothly.) In each round, all the colored processors send messages on each of their colored outgoing edges. Every colored processor that receives at least one message checks whether the number of its colored outgoing edges plus the number of messages it received is within the designated bound (a constant fraction of Δ). If so, it flips all the edges along which it received messages to be outgoing of it, and then uncolors itself and all its outgoing edges.

This distributed anti-reset cascade procedure implicitly assumes that all processors of H wake up simultaneously, and that the entire subgraph H (both edges and processors) is colored at this moment. To justify this assumption, before initiating this procedure we perform a broadcast along T, in which each processor at directed distance i from the root receives the message “i”. A processor receiving message “i” waits a number of rounds determined by i (so that all processors of H wake up in the same round), then colors itself and its outgoing edges, and participates in the distributed anti-reset cascade procedure.
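
The round structure of this procedure can be illustrated by the following single-process Python simulation; the parameter bound stands for the local threshold of the test described above (its exact value is left unspecified here), and the flat dictionary encoding of H replaces actual message passing.

def run_rounds(colored_out, bound, max_rounds=1000):
    """colored_out: dict v -> set of colored out-neighbors (the edges of H)."""
    flips = []
    for rnd in range(max_rounds):
        if not any(colored_out.values()):
            return flips, rnd                   # every edge is uncolored
        # Each colored vertex sends one message per colored outgoing edge.
        inbox = {}
        for v, nbrs in colored_out.items():
            for w in nbrs:
                inbox.setdefault(w, set()).add(v)
        out_count = {v: len(nbrs) for v, nbrs in colored_out.items()}
        # Local test, evaluated on the snapshot taken at the start of the round.
        for v, senders in inbox.items():
            if out_count.get(v, 0) + len(senders) <= bound:
                for u in senders:               # flip u -> v into v -> u ...
                    colored_out[u].discard(v)
                    flips.append((u, v))
                colored_out[v] = set()          # ... and uncolor v's out-edges
    return flips, max_rounds

# Demo on the directed path a -> b -> c with bound 1: c absorbs its incoming
# edge in the first round, and b absorbs its incoming edge in the second.
print(run_rounds({"a": {"b"}, "b": {"c"}, "c": set()}, bound=1))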

We next analyze this procedure. In each round, at least a 3/5 fraction of the colored processors are adjacent to a number of colored edges that is within the designated bound, since the subgraph induced by the colored edges has arboricity at most α. This means that the number of colored vertices is reduced by a constant factor in each round, hence after the last round all edges have been uncolored, and we obtain a low outdegree orientation of H. Moreover, we argue that the number of edges that get uncolored in each round is no smaller than the number of edges that remain colored. To see this, fix an arbitrary round, consider the graph induced by the colored edges at the beginning of the round, and denote by U and R the sets of vertices that get uncolored and that remain colored at the end of the round, respectively. Since no vertex of R gets uncolored in this round, the degree of each vertex of R in the colored graph exceeds the designated bound. However, the subgraph of the colored graph induced by the vertex set R has arboricity at most α, hence at least half of the vertices of R have at most 4α neighbors in R, which means that a large number of their remaining colored neighbors lie in U. The assertion now follows since the number of edges induced by R, i.e., the number of edges that remain colored, is at most α|R|, whereas the number of colored edges incident to U, all of which get uncolored, is at least as large. Consequently, the number of messages sent in each round decays geometrically, hence the total number of messages sent is linear in the size of H. Note also that this procedure terminates within a number of rounds that is logarithmic in the size of H, which does not exceed the number of messages sent.
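
For concreteness, the constant-fraction decay can be verified by the following counting argument, under the illustrative assumption that the colored subgraph H' has arboricity at most α and that the local test uses the threshold 5α (one choice that is consistent with the 3/5 fraction stated above):

\[
|E(H')| \le \alpha\,(|V(H')|-1) < \alpha\,|V(H')|
\quad\Longrightarrow\quad
\sum_{v \in V(H')} \deg_{H'}(v) < 2\alpha\,|V(H')|,
\]
\[
\bigl|\{\, v \in V(H') : \deg_{H'}(v) \ge 5\alpha \,\}\bigr|
\;\le\; \frac{2\alpha\,|V(H')|}{5\alpha}
\;=\; \frac{2}{5}\,|V(H')|.
\]

Hence at least a 3/5 fraction of the colored vertices are adjacent to fewer than 5α colored edges in every round.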

Theorem 2.2

For any arboricity bound α, any outdegree parameter Δ that is at least a sufficiently large constant multiple of α, and any arboricity preserving sequence of edge and vertex updates starting from an empty graph, there is a distributed algorithm for maintaining a Δ-orientation (in the CONGEST model) with an optimal (up to a constant) amortized message complexity, and the same (or better) amortized update time. The local memory usage at all vertices is O(Δ) at all times, which is also optimal. In particular, for Δ = O(α) we obtain an O(α)-orientation with the above amortized update time and message complexity, and with O(α) local memory usage.

The worst-case update time of the above algorithm may be high. The bottleneck is the time needed to explore the directed neighborhood and compute the tree T on which the broadcast and convergecast are carried out, which is linear in the depth of T. To remedy this, we show that the aforementioned potential function argument continues to work if we truncate the tree at a carefully chosen depth, thereby reducing the worst-case update time accordingly. This truncation, however, is nontrivial. In particular, we do not truncate at a fixed depth, but rather at the minimal depth for which the number of explored vertices falls below a carefully chosen threshold, whose hidden constant must be chosen with care. We omit these details, since our focus in this work is on amortized rather than worst-case bounds.

2.1.3 Outdegree blowup in the BF algorithm

Lemma 2.3

For graphs with arboricity 1 (i.e., for forests), the original BF algorithm does not increase the outdegree of any vertex beyond Δ + 1 (one more than the outdegree threshold) during a reset cascade that follows an edge insertion.

Proof:  Note that the graph is a forest, not necessarily a tree. However, as the reset cascade does not reset vertices outside the tree containing the inserted edge, we may henceforth restrict our attention to that tree. Let T_0 denote this tree, with its orientation just before the cascade started, and let r be the vertex that we reset first in the cascade. (So in T_0, the outdegree of r is Δ + 1 and the outdegree of every other vertex is at most Δ.)

Observation 2.4

If the cascade resets a vertex v, then there is a directed path from r to v in T_0.

We prove this observation by induction on the position of the reset in the reset sequence of the cascade. For the basis, the first reset is the reset of r itself, and the statement holds vacuously. For the induction step, consider a reset of an arbitrary vertex v ≠ r, and suppose that the statement holds for any preceding reset in the reset sequence of the cascade. Note that v's outdegree at the time of the reset is larger than Δ. So when the reset occurs, v must have an out-neighbor, say z, that was not an out-neighbor of v in T_0. Since the orientation of the edge (v, z) flips only due to a reset, there must have been at least one reset on z preceding the reset on v in the reset sequence. By induction there is a directed path from r to z in T_0. Furthermore, the edge was oriented from z to v in T_0. Hence there is a directed path from r to v in T_0, as required.

Now we prove the lemma by contradiction. Consider the first time during the reset cascade at which the outdegree of some vertex v ≠ r becomes Δ + 2. Then at this time v must have two out-neighbors, say u and w, that were not out-neighbors of v in T_0. It follows that there must have been a reset on u and a reset on w. By the observation above there are directed paths in T_0 from r to u and from r to w. This means that there are two distinct directed paths in T_0 from r to v, one ending with the arc (u, v) and another ending with the arc (w, v), contradicting the fact that the arboricity is 1.

If the outdegree of r becomes Δ + 2, then r has an out-neighbor z that was not an out-neighbor of r in T_0. As before, there must have been a reset on z, so by Observation 2.4 there is a directed path from r to z in T_0. This path together with the arc (z, r) closes a directed cycle in T_0, a contradiction.

The following lemma shows that when the arboricity is larger than 1, we may get vertices with very large outdegree during the reset cascade process.

Lemma 2.5

There exists a graph with arboricity 2 for which the original BF algorithm may increase the outdegree of a vertex to Ω(n).

Proof:  Consider an “almost perfect” Δ-ary tree oriented towards the leaves. Specifically, the only difference from a perfect Δ-ary tree is that each of the parents of the leaves has Δ − 1 children rather than Δ, but it also has an outgoing edge to some designated vertex w. So the arboricity of the graph is 2.

Suppose that the outdegree of the root increases to Δ + 1 due to some edge insertion, thus starting a reset cascade. When the parents of the leaves are reached, they will have outdegree Δ + 1. Hence they will be reset one after another, which gradually increases the outdegree of w from 0 to the number of such parents, which is Ω(n).

Remark. The lower bound on the maximum outdegree provided by Lemma 2.5 is tight. To see this, note that only vertices with degree greater than Δ may perform resets. In a graph of arboricity α, there are at most 2αn/Δ such vertices, implying that the outdegree of a vertex will not increase by more than 2αn/Δ = O(n) during the reset cascade.
Largest outdegree first.  There is a natural adjustment to the reset cascade that one can make in order to control the outdegree blowup during the cascade, namely, to reset vertices of larger outdegree first. This is easily achieved with a logarithmic overhead on each operation of the cascade, by keeping the vertices whose outdegree is larger than Δ in a max-heap, using the outdegree of a vertex as its key. We need to be able to extract the maximum element of the heap when we decide on the next vertex to reset, and to increase the key of a vertex by one when we flip an edge into it. It is straightforward to implement such a heap so that each operation takes logarithmic time. The following lemma shows that this adjustment suffices to keep the outdegree from blowing up by more than a logarithmic factor. We remark that the proof of this lemma is similar to the proofs of Lemmas 6 and 7 of [17].
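
A minimal Python sketch of this adjustment is given below; it uses Python's heapq (a min-heap) with negated keys and lazy deletion of stale entries, which is one standard way to support the extract-max and increase-key operations mentioned above. The function name and the demo graph are illustrative.

import heapq

def reset_cascade_largest_first(out, delta):
    """out: dict vertex -> set of out-neighbors; resets largest outdegree first."""
    heap = [(-len(nbrs), v) for v, nbrs in out.items() if len(nbrs) > delta]
    heapq.heapify(heap)
    while heap:
        negdeg, v = heapq.heappop(heap)
        if len(out[v]) != -negdeg or len(out[v]) <= delta:
            continue                          # stale entry, or already fine
        for w in list(out[v]):                # reset v: flip all its out-edges
            out[v].remove(w)
            out.setdefault(w, set()).add(v)
            if len(out[w]) > delta:
                heapq.heappush(heap, (-len(out[w]), w))
    return out

# Demo: the vertex of outdegree 4 is reset before the vertex of outdegree 3.
out = {0: {1, 2, 3}, 4: {5, 6, 7, 8}, 1: set(), 2: set(), 3: set(),
       5: set(), 6: set(), 7: set(), 8: set()}
print(reset_cascade_largest_first(out, delta=2))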

Lemma 2.6

If we always reset a vertex of largest outdegree first, then the outdegree of a vertex never exceeds Δ + O(α log n).

Proof:  To prove Lemma 2.6, we employ the following two claims.

Claim 2.7

A vertex that has outdegree d during the cascade has, for every j with Δ < j ≤ d, at least d − j distinct neighbors whose outdegree during the cascade is at least j.

Proof:  Focus on an arbitrary vertex v and consider a maximal subsequence of the reset cascade in which the outdegree of v does not decrease and at whose end v has outdegree d. At the beginning of this subsequence, vertex v has outdegree at most Δ + 1 (v has outdegree 0 if it was reset just before the subsequence starts, and outdegree at most Δ + 1 if the subsequence starts with the first reset of the cascade). By the largest-outdegree-first adjustment, the outdegree of v increases from i to i + 1 due to a reset on a neighbor whose outdegree at that time is at least i (as v itself is a candidate for a reset once i > Δ), and within this subsequence each neighbor causes at most one such increment, so these neighbors are distinct. Since at least d − j of these increments occur while v's outdegree is at least j, the claim follows.

Claim 2.8

Let v be a vertex of outdegree d during the cascade. Then for every i ≥ 1 such that d − 4αi > Δ, there are at least 2^i vertices at distance at most i from v whose outdegree during the cascade is at least d − 4αi.

Proof:  The proof is by induction on i. The basis i = 1 follows from Claim 2.7. For the induction step, we assume the statement holds for some i, and prove it for i + 1. Let A_i be the set of vertices at distance at most i from v whose outdegree during the cascade is at least d − 4αi. By induction |A_i| ≥ 2^i. By Claim 2.7, each u in A_i has at least 4α distinct neighbors whose outdegree (and hence degree) during the cascade is at least d − 4α(i + 1). Let A_{i+1} be the union of A_i with the set of all these neighbors, and note that all vertices of A_{i+1} are at distance at most i + 1 from v and their outdegree during the cascade is at least d − 4α(i + 1). Moreover, the number of edges in the graph induced by A_{i+1} is at least 2α|A_i|. Since the arboricity of the graph induced by A_{i+1} is at most α, it follows that this graph must have more than 2|A_i| ≥ 2^{i+1} vertices, which completes the induction step.

We conclude that the outdegree of a vertex cannot exceed Δ + 4α(⌊log₂ n⌋ + 1), as otherwise applying Claim 2.8 with i = ⌊log₂ n⌋ + 1 would yield more than n vertices in the graph. This completes the proof of Lemma 2.6.

We next show that the upper bound of Lemma 2.6 is tight for the BF algorithm with the above adjustment. Our lower bound holds even if we make another natural adjustment to the algorithm, where we orient a newly inserted edge from the vertex with lower outdegree to the vertex with higher outdegree.

For every k ≥ 1, we define a directed graph G_k, in which each vertex has outdegree 2, except for two special vertices that have outdegree 1. The graphs G_1 and G_2 are shown in Figure 2. The graph G_1 consists of two vertices, denoted by a and b, and a cycle which we denote by C_1.

Figure 2: The graphs G_1 and G_2

In general we obtain G_k from G_{k−1} by adding to G_{k−1} a cycle C_k and an outgoing edge from each vertex of C_k to a unique (but arbitrary) vertex of G_{k−1}, such that each vertex of G_{k−1} is connected in this way to a single vertex of C_k. The proofs of the following observation and lemmas are immediate.

Observation 2.9

For any k ≥ 1, the graph G_k has 2^{k−1} times as many vertices as G_1. Each vertex of G_k has outdegree 2, except for the vertices a and b of G_1, which have outdegree 1.

Lemma 2.10

The arboricity of G_k is 2.

Proof:  By induction on k. We can easily decompose G_1 into two forests. Assuming we can decompose G_{k−1} into two forests F_1 and F_2, we decompose G_k into two forests as follows. We index the vertices of C_k from 1 to |C_k|, and add to F_1 (respectively, F_2) the two outgoing edges of every vertex of odd (resp., even) index. It is easy to verify that F_1 and F_2 remain cycle-free.

Lemma 2.11

We can construct G_k, starting from an empty graph on its vertex set, by inserting the edges one after another, such that each edge is oriented from the vertex of lower outdegree to the vertex of higher outdegree at the time of its insertion.

Proof:  By induction on k. To construct G_k we first add the edges of G_{k−1}, then the edges from the vertices of C_k to the vertices of G_{k−1}, and last the edges between the vertices of C_k. It is easy to verify that the orientations are assigned properly if every newly inserted edge is oriented towards its higher-outdegree endpoint.

Assume for simplicity that Δ = 2, and consider the reset cascade that starts when we add to some vertex v of C_k an outgoing edge such that its outdegree increases to 3. (This edge, to be oriented out of v, should be incident to a vertex whose outdegree is not smaller than the outdegree of v and is external to G_k.) Flipping v increases the outdegree of the vertex following v on C_k, as well as the outdegree of some vertex in G_{k−1} connected to v. So the next flip may be on the vertex following v on C_k. We continue this way, flipping all vertices of C_k while increasing the outdegree of all vertices of G_{k−1} from 2 to 3, except for the vertices a and b of G_1, whose outdegree increases from 1 to 2. Next we flip the vertices of C_{k−1}, and so on. Right before flipping the vertices of C_i, they have outdegree k − i + 2. The following lemma specifies the invariant being maintained during the cascade. Its proof is straightforward by induction on the operations of the cascade.

Lemma 2.12

When we flip the vertices of C_i, for some 1 ≤ i ≤ k, the outdegrees of the vertices are as follows:    (1) Vertices of C_j for j > i have outdegree at most 2.    (2) Vertices of C_i that were already flipped have outdegree at most 1.    (3) A vertex of C_j, for j < i, that is incident to a vertex of C_i that was already flipped has outdegree k − i + 3, and a vertex of C_j, for j < i, that is incident to a vertex of C_i that was not yet flipped has outdegree k − i + 2 (with the vertices a and b having outdegree one less).

By applying the invariant of Lemma 2.12 to the point at which we have finished flipping the vertices of C_2, it follows that during a cascade on G_k that starts by increasing the outdegree of a vertex of C_k, the vertices of C_1 have outdegree k + 1 right before they are flipped. We derive the following corollary.

Corollary 2.13

The BF algorithm with the two adjustments above may blow up the outdegree of a vertex to Ω(log n) during a single insertion into a graph with n vertices and arboricity 2. (In fact, O(2^k) vertices suffice to reach outdegree k + 1.)

If the threshold of the BF algorithm is some Δ > 2, then we can adapt the example described above by adding to each vertex Δ − 2 “private” neighbors. This increases the number of vertices by a factor of Δ − 1. The maximum outdegree reached during the reset cascade is Δ + Ω(log n), hence this lower bound matches the upper bound of Lemma 2.6 up to a constant factor, for graphs of constant arboricity.

Next, we generalize the construction to show that the BF algorithm with the two adjustments above may blow up the outdegree of a vertex to Δ + Ω(α log n) during a reset cascade initiated by an edge insertion in a graph with arboricity Θ(α) on n vertices.

We describe the construction in two stages. First we need to change the graph G_k slightly for technical reasons, and then we construct a graph H_k on which we demonstrate the reset cascade.

The technical change to G_k is as follows.

  1. We change G_1 to the graph shown in Figure 3.

  2. When we construct G_i from G_{i−1} we make the cycle C_i of length |V_{i−1}| + 1 (rather than |V_{i−1}|), where V_{i−1} is the vertex set of G_{i−1}. One special vertex of C_i is not connected to any vertex of G_{i−1} in G_i. We denote this special vertex by s_i.

Figure 3: The graphs G_1 and G_2 in the generalized construction, before we replace each vertex by α copies

The graph H_k is constructed from G_k by performing the following modification, for every i, 1 ≤ i ≤ k.

  1. For each vertex v of C_i we have α copies of v, denoted v_1, …, v_α, in H_k.

  2. For each edge from a vertex u of C_i to a vertex w of G_{i−1}, put a complete bipartite clique between the copies u_1, …, u_α and w_1, …, w_α in H_k. Each edge is directed from a copy of u to a copy of w.

  3. For each edge from a vertex u of C_i to the next vertex w of C_i, put a complete bipartite clique between the copies u_1, …, u_α and w_1, …, w_α. Each edge is directed from a copy of u to a copy of w.

  4. Connect the copies s_{i,1}, …, s_{i,α} of the special vertex s_i to another set of α new vertices t_{i,1}, …, t_{i,α}. Make the copies of s_i a clique and orient it such that an edge between s_{i,j} and s_{i,l} for j < l is oriented from s_{i,j} to s_{i,l}. Make the new set a clique and orient it analogously. Connect s_{i,j} to t_{i,l} for j ≥ l. Notice that the number of edges that are directed from each copy of s_i to one of these two cliques is exactly α. See Figure 4.

The analysis of this generalization is analogous to the analysis of the construction for arboricity 2, and is thus omitted from this extended abstract.

Figure 4: The gadget which we use to replace the special vertex s_i, for every i.

2.2 Efficient representations of sparse distributed networks, with applications

In this section we describe a natural representation of sparse distributed networks, along with some applications.

2.2.1 Forest decomposition and adjacency queries 

For a distributed network with arboricity α, Theorem 2.2 provides a distributed algorithm (in the CONGEST model) for maintaining a low outdegree orientation with low local memory usage. Such an orientation can be viewed as a representation of the network, and it finds two natural applications. First, due to the equivalence between the edge orientation and forest decomposition problems shown in [24], we obtain a distributed algorithm for maintaining a decomposition into O(α) forests within an optimal (up to a constant) amortized message complexity, and the same (or better) amortized update time, with O(α) local memory usage.

We can then use this forest decomposition to maintain efficient distributed adjacency labeling schemes. An adjacency labeling scheme assigns an (ideally short) label to each vertex, allowing one to infer whether any two vertices u and v are neighbors directly from their labels. For an adjacency labeling scheme to be useful, it should be capable of reflecting online the current up-to-date picture in a dynamic setting. Moreover, the algorithm for generating and revising the labels must be distributed. Given a k-forest decomposition of the network, the label of each vertex v can be given by (v, p_1(v), …, p_k(v)), where p_i(v) is the parent of v in the i-th forest. We derive the following result.
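
The labeling and the adjacency test can be illustrated by the following Python sketch, in which the forest decomposition itself is assumed to be given; the helper names are ours.

def make_label(v, parents):
    """parents: list of length k, parents[i] = parent of v in forest i (or None)."""
    return (v, tuple(parents))

def adjacent_from_labels(label_u, label_v):
    u, parents_u = label_u
    v, parents_v = label_v
    # u and v are neighbors iff one is the parent of the other in some forest.
    return v in parents_u or u in parents_v

# Two forests over vertices 1..4 (the parent of a root is None):
# forest 0 has edges {1,2}, {1,3}, {3,4}; forest 1 has edge {2,3}.
labels = {
    1: make_label(1, [None, None]),
    2: make_label(2, [1, None]),
    3: make_label(3, [1, 2]),
    4: make_label(4, [3, None]),
}
print(adjacent_from_labels(labels[1], labels[3]))   # True
print(adjacent_from_labels(labels[1], labels[4]))   # False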

Theorem 2.14

For any arboricity bound α and any arboricity preserving sequence of updates, there is a distributed algorithm (in the CONGEST model) for maintaining an adjacency labeling scheme with labels of O(α log n) bits, with optimal (up to a constant) amortized message complexity, the same (or better) amortized update time, and O(α) local memory usage.

2.2.2 A complete representation

A low outdegree orientation may not qualify as a complete representation of the network, since a processor cannot access its incoming neighbors, and in particular it cannot communicate with them. Next, we describe a complete representation of a distributed network.

Consider a processor v with incoming neighbors u_1, …, u_ℓ. For each i, we will make sure that u_i holds information on u_{i−1} and u_{i+1}, and v will hold information on an arbitrary processor among these incoming neighbors, say u_1. (The information that we hold per neighbor should be enough for communicating with that neighbor directly.) Since the network may change dynamically, we need to update this “extra” local information at processors efficiently. We refer to the processors u_1, …, u_ℓ as siblings, and v is referred to as their parent. For each i, u_{i−1} and u_{i+1} are referred to as the left sibling and right sibling of u_i, respectively. (The left sibling of u_1 and the right sibling of u_ℓ are defined as null.) Note that each processor holds information on two of its siblings per parent. Since the number of parents of any processor is given by its outdegree, the information regarding all siblings of a processor over all of its parents is linear in its outdegree. In addition, any processor v holds information on a single incoming neighbor u_1, as described above. Together with all its outgoing neighbors, the total information at a processor is linear in its outdegree. Since the outdegree of the underlying edge orientation is (close to) linear in the arboricity of the network, we can make sure that the local information at processors is (close to) linear in the arboricity, yielding the required bound on the local memory usage.

Following an insertion of an edge (u, v) that is oriented from u to v, u will hold information on v (by the underlying edge orientation). We also make sure that v will hold information on u by designating u as u_1, i.e., u takes the role of u_1. Subsequently, v sends a message with information on the old u_1 to u, and another message with information on u to the old u_1, so that u (respectively, the old u_1) will hold information on the old u_1 (resp., u) as its new right (resp., left) sibling. Following a deletion of an edge (u_i, v) that is oriented from u_i to v, with v being the parent of u_i, u_i sends a message with information on both u_{i−1} and u_{i+1} to v. Subsequently, v sends two messages (in parallel), one to u_{i−1} and another to u_{i+1}, informing u_{i−1} (respectively, u_{i+1}) that its right (resp., left) sibling has changed from u_i to u_{i+1} (resp., u_{i−1}). Note that we send a message along the deleted edge in order to update the representation following an edge deletion, i.e., we support a graceful edge deletion but not an abrupt one. (In the former, the deleted edge may be used for exchanging messages between its endpoints, and retires only once the representation has been updated. In the latter, while the endpoints of the deleted edge discover that the edge has retired, it cannot be used for any communication.) A similar update is triggered by edge flips and vertex updates, where we only support graceful deletions of vertices.
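
The bookkeeping described above amounts to maintaining, per parent, a doubly linked list of in-neighbors whose pointers are stored at the in-neighbors themselves. The following single-process Python simulation sketches the insertion, (graceful) deletion, and sequential scan operations; the class name and the flat dictionaries standing in for per-processor memory are illustrative.

class InNeighborList:
    def __init__(self):
        self.head = {}      # parent v -> first in-neighbor (or missing)
        self.left = {}      # (in-neighbor u, parent v) -> left sibling or None
        self.right = {}     # (in-neighbor u, parent v) -> right sibling or None

    def insert(self, u, v):
        """Edge oriented u -> v: u becomes the head of v's in-neighbor list."""
        old = self.head.get(v)
        self.head[v] = u
        self.left[(u, v)], self.right[(u, v)] = None, old
        if old is not None:
            self.left[(old, v)] = u

    def delete(self, u, v):
        """Graceful deletion of the edge oriented u -> v."""
        l, r = self.left.pop((u, v)), self.right.pop((u, v))
        if l is None:
            self.head[v] = r                 # u was the head of the list
        else:
            self.right[(l, v)] = r
        if r is not None:
            self.left[(r, v)] = l

    def in_neighbors(self, v):
        """Sequential scan of v's in-neighbors, one hop at a time."""
        u = self.head.get(v)
        while u is not None:
            yield u
            u = self.right[(u, v)]

rep = InNeighborList()
for u in (1, 2, 3):
    rep.insert(u, 0)                 # edges 1->0, 2->0, 3->0
rep.delete(2, 0)
print(list(rep.in_neighbors(0)))     # [3, 1]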
Some applications.  The drawback of such a representation is that a processor cannot communicate with its in-neighbors in parallel. For a processor v to be able to send a message to an in-neighbor u_i, it first needs to retrieve the information on u_i required for communicating with it. To this end, v has to sequentially scan and communicate with its in-neighbors, starting at u_1 (on which v holds information) and finishing at u_i. For some applications, however, such a sequential scan of the in-neighbors is not needed.

For the sake of conciseness, in what follows we focus on edge updates and flips. Vertex updates can be handled in a similar way.

As a first application, consider the problem of maintaining a maximal matching in a distributed network that changes dynamically. Instead of maintaining the information on all the in-neighbors as described above, we will maintain information only on the free in-neighbors. More specifically, information on the free in-neighbors is distributed among them in the manner described above. Whenever a processor changes status from free to matched, or vice versa, it notifies all its out-neighbors about that. (Recall that each processor has complete information on all its out-neighbors, and can communicate with all of them in parallel. Interestingly, there is no need to exploit parallelism here.) Any processor that receives such information makes sure to update the relevant local information regarding its free in-neighbors, which is distributed among the relevant neighbors, along similar lines to the above. The rest of the algorithm now proceeds as in the centralized setting [23]. Specifically, following an edge insertion, we match the two endpoints if they are both free, and otherwise there is nothing special to do (besides updating the underlying representation). Following a deletion of an unmatched edge, there is again nothing special to do. Finally, following a deletion of a matched edge (u, v), both u and v exchange messages with their out-neighbors, attempting to find a free neighbor among them. Let us focus on u (v is handled in the same way). If none of u's out-neighbors is free, u needs to check whether it has a free in-neighbor. Since we made sure to (distributively) maintain information on the free in-neighbors of each vertex, including u, and as there is no need to perform a sequential scan over these neighbors of u (the first one, if any, will do), we conclude that the amortized message complexity of the algorithm, and thus the amortized update time, is dominated (up to constant factors) by the maximum between the outdegree bound of the underlying orientation and the cost of maintaining that orientation.
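
The surrogate-finding step for a vertex that becomes free can be sketched as follows in Python; out, free_in_head and match are illustrative stand-ins for the distributed state described above, and in the full algorithm the chosen in-neighbor would subsequently be removed from the free-in-neighbor lists of all its parents.

def rematch(v, out, free_in_head, match):
    """Try to rematch v after it became free; return its new mate or None."""
    if match.get(v) is not None:
        return match[v]
    # 1. Scan the (at most O(alpha)) out-neighbors for a free vertex.
    for w in out.get(v, ()):
        if match.get(w) is None:
            match[v], match[w] = w, v
            return w
    # 2. Otherwise the first free in-neighbor, if any, will do: only the head
    #    of the free-in-neighbor list is needed, not a sequential scan.
    w = free_in_head.get(v)
    if w is not None and match.get(w) is None:
        match[v], match[w] = w, v
        return w
    return None

out = {1: {2}, 3: {1}}                 # edges 1 -> 2 and 3 -> 1
free_in_head = {1: 3}                  # 3 is a free in-neighbor of 1
match = {1: None, 2: 4, 3: None, 4: 2} # 2 is matched elsewhere
print(rematch(1, out, free_in_head, match))   # 3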

Theorem 2.15

For any and any arboricity preserving sequence of edge and vertex updates starting from an empty graph, there is a distributed algorithm (in the model) for maintaining a maximal matching with an amortized update time and message complexities of . The local memory usage is .

As a broader application, we revisit the bounded degree sparsifiers introduced recently in [29]. Informally, a bounded degree (d,ε)-sparsifier for a graph G, a degree parameter d and a slack parameter ε is a subgraph of G with maximum degree bounded by d that preserves certain quantitative properties of the original graph up to a (multiplicative) factor of 1 + ε. For the maximum matching problem, such a sparsifier should preserve the size of the maximum matching of G up to a factor of 1 + ε. It was shown in [29] that one can locally compute a (1 + ε)-maximum matching sparsifier whose degree bound depends only on ε and the arboricity bound α, for any network of arboricity bounded by α. All the sparsifiers of [29] adhere to a rather strict notion of locality, which makes them applicable to several settings. In particular, for distributed networks, all the sparsifiers of [29] can be computed in a single round of communication. The definition of a sparsifier for the minimum vertex cover problem is more involved, and we omit it here for conciseness (refer to [29] for the formal definition), but the bottom line is the same: For any distributed network of arboricity bounded by α, one can compute a minimum vertex cover sparsifier of bounded degree in a single round.

Similarly to the maintenance of a maximal matching, maintaining these bounded degree sparsifiers dynamically does not require a sequential scan of the in-neighbors of a processor. Indeed, these sparsifiers have a bounded degree by definition, hence each processor can hold complete information on all its adjacent edges that belong to the sparsifier, or equivalently, on all its corresponding neighbors. Following a deletion of an edge from the graph, we first update the underlying representation. If the edge does not belong to the sparsifier, there is nothing special to do. Otherwise, we remove it from the sparsifier and check if another edge needs to be added to the sparsifier instead. In any case we update the endpoints of the affected edges accordingly. It is straightforward to implement this update efficiently using the underlying representation. Following an edge insertion, we may need to add it to the sparsifier, but this too involves a straightforward update. In this way we can maintain bounded degree sparsifiers for maximum matching and minimum vertex cover using a local memory at processors that is (close to) linear in the network arboricity.
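
The sketch below is deliberately simplified and is not the sparsifier construction of [29]; it only illustrates the kind of degree-capped bookkeeping described above, with a hypothetical cap d and adjacency sets of our choosing.

```python
# Simplified illustration only: maintain a degree-capped subgraph S of G by
# adding an edge when both endpoints are below the cap d, and by searching for
# a replacement when an edge of S is deleted. This is NOT the sparsifier of [29].

def on_insert(G_adj, S_adj, d, u, v):
    G_adj[u].add(v); G_adj[v].add(u)
    if len(S_adj[u]) < d and len(S_adj[v]) < d:
        S_adj[u].add(v); S_adj[v].add(u)

def on_delete(G_adj, S_adj, d, u, v):
    G_adj[u].discard(v); G_adj[v].discard(u)
    if v in S_adj[u]:                          # a subgraph edge was lost
        S_adj[u].discard(v); S_adj[v].discard(u)
        for x in (u, v):                       # each endpoint looks for a replacement
            for y in G_adj[x]:
                if y not in S_adj[x] and len(S_adj[x]) < d and len(S_adj[y]) < d:
                    S_adj[x].add(y); S_adj[y].add(x)
                    break
```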

Subsequently, we can naively run static distributed algorithms for approximate maximum matching and minimum vertex cover on top of the bounded degree sparsifiers, following every update step. Due to the degree bound of the sparsifiers, in this way we adhere to the local memory constraints at processors. To be able to run the distributed algorithm (following every update step), alas, we need to assume that all processors wake up prior to each such run, which does not apply to the local wakeup model. Instead of running a static distributed algorithm from scratch on the sparsifiers following every update step, we shall apply more efficient dynamic algorithms on top of the sparsifiers.

[26] devised distributed algorithms for maintaining, in networks of degree bounded by , -approximate and -approximate maximum matching with update time and message complexities and , respectively. (In fact, the bounds on the update time and message complexities hold in the worst-case. Moreover, these algorithms extend to bounded arboricity graphs; refer to Corollary 3.1 in [26].) Running these dynamic algorithms on top of the bounded degree (1 + ε)-maximum matching sparsifier that we maintain dynamically, we obtain the following result.

Theorem 2.16

For any , any arboricity preserving sequence of edge and vertex updates starting from an empty graph and any , there are distributed algorithms for maintaining -approximate and -approximate maximum matching with amortized update time and amortized message complexities of and , respectively. The local memory usage is .

There is a straightforward distributed algorithm for maintaining a maximal matching, in networks of degree bounded by , with update time and message complexity . Such an algorithm can be used to maintain a 2-approximate minimum vertex cover within the same bounds. Running this dynamic algorithm on top of the bounded degree minimum vertex cover sparsifier that we maintain dynamically, we obtain the following result.

Theorem 2.17

For any , any arboricity preserving sequence of edge and vertex updates starting from an empty graph and any , there is a distributed algorithm for maintaining a )-approximate minimum vertex cover with an amortized update time of and an amortized message complexity of . The local memory usage is .

3 The Flipping Game

This section is devoted to the flipping game and its applications.

We start by proposing a generic paradigm for this game (Section 3.1). In Section 3.2 we show a reduction from the edge orientation problem to the flipping game, and in Section 3.3 we show a reduction in the other direction, thus obtaining an equivalence. Some applications of the flipping game are given in Section 3.4.

3.1 A Generic Paradigm for the Flipping Game

The flipping game provides a local solution for the following generic problem. We want to maintain a dynamic graph in which each vertex has a value. There are two types of updates to the graph: (1) edge insertion and deletion, (2) a change of a value at a vertex. (We may also consider scenarios where there is only one type of update. In particular, the scenario where the graph topology is static and vertex values are dynamic is already non-trivial.) A query specifies a vertex v, and to answer it we need to compute some fixed function of the values of v and its neighbors.

We restrict ourselves to a natural family of algorithms that maintain an edge orientation of the graph, where each vertex maintains the current values of all its in-neighbors (incoming neighbors). When the value of a vertex v changes, v transmits its new value to all its out-neighbors. When a vertex v is queried, v collects the values of its out-neighbors, computes the function and returns the result. The algorithm has the freedom to change the edge orientation by flipping edges. The cost of flipping an edge outgoing of v is smaller if we flip it during a query or update at v than if we flip it at any other time. (Note that the algorithms of [23, 18, 17] can also be viewed as belonging to this family, but they all require that the outdegree of all vertices at all times be bounded by some threshold Δ. In general, algorithms in this family may violate this requirement.)
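
The following skeleton (a sketch in centralized form, with names of our choosing) captures the interface shared by algorithms in this family: values are pushed to out-neighbors upon updates, a query combines the stored in-neighbor values with values pulled from out-neighbors, and flip reorients a single edge. The function f is the fixed query function, applied to a vertex's own value and the list of its neighbors' values.

```python
# Illustrative skeleton (our naming) of an algorithm in the family described above.

class OrientationBased:
    def __init__(self, f):
        self.f = f            # the fixed function evaluated at queries
        self.out = {}         # vertex -> set of out-neighbors
        self.val = {}         # vertex -> current value
        self.in_val = {}      # vertex -> {in-neighbor: last value received}

    def add_vertex(self, v, x):
        self.out[v], self.val[v], self.in_val[v] = set(), x, {}

    def insert_edge(self, u, v):
        """Insert an edge oriented from u to v."""
        self.out[u].add(v)
        self.in_val[v][u] = self.val[u]

    def update_value(self, v, x):
        """Vertex update: push the new value along all outgoing edges (outdeg(v) messages)."""
        self.val[v] = x
        for w in self.out[v]:
            self.in_val[w][v] = x

    def query(self, v):
        """Combine the stored in-neighbor values with values pulled from out-neighbors."""
        vals = list(self.in_val[v].values()) + [self.val[w] for w in self.out[v]]
        return self.f(self.val[v], vals)

    def flip(self, u, v):
        """Reorient the edge (u, v) from u -> v to v -> u."""
        self.out[u].discard(v)
        self.in_val[v].pop(u, None)
        self.out[v].add(u)
        self.in_val[u][v] = self.val[v]
```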

The (communication) cost of an algorithm ALG in this family for serving a sequence σ of operations is

cost_ALG(σ) = I(σ) + F_ALG(σ) + Σ_op outdeg(v_op),

where I(σ) is the number of edge insertions and deletions in σ, F_ALG(σ) is the cost of the edge flips that the algorithm performs during σ, and the sum is over all vertex updates and queries op in σ of the outdegree outdeg(v_op) of the vertex v_op to which the operation op applies. We remark that this cost is equal to the total runtime of the algorithm with respect to σ, up to a constant factor. (To be accurate, the runtime should include the cost of extracting the relevant information on the incoming neighbors of the queried vertices. If this cost is high, which depends on the application, that application cannot be solved using our scheme.)

The flipping game is a particular algorithm in this family that resets a vertex v whenever we apply a query or update to v, which means that all the outgoing edges of v are flipped and become incoming to v. The flipping game is simple and local. Furthermore, it is easy to verify that for any sequence of operations, the cost of the flipping game is at most twice the cost of any other algorithm in the family. Hence:

Observation 3.1

Denote the flipping game algorithm by FG. For any sequence σ of operations and any algorithm ALG in this family, cost_FG(σ) ≤ 2 · cost_ALG(σ). The initial graph may be arbitrary (non-empty), but FG and ALG should start from the same edge orientation.

Proof:  Since FG always flips edges at the cheaper cost, the total cost of FG is

Consider an edge e = (u,v) and an operation at u during which e was outgoing of u in the orientation maintained by FG (and therefore FG was charged for the communication along e). If this is the first operation in which FG is charged for e, then either ALG is charged for e during this operation as well, or ALG flipped e before this operation. If there was a previous operation in which FG was charged for e, then it must have been an operation at v. So it must be the case that ALG either flipped e between the operation at v and the operation at u, or paid for e in at least one of these operations.

3.2 A reduction from the edge orientation problem to the flipping game

We can easily simulate the BF algorithm using the reset operations of the flipping game. The following lemma shows that for an appropriate outdegree threshold the amortized time per edge update of the simulation is essentially the same as the amortized time per operation (update or reset) of the flipping game. Thus the amortized bound of the flipping game is essentially as large as that of the BF algorithm.

Lemma 3.2

Consider an arbitrary sequence of edge updates, and suppose that the flipping game (either the basic game or the -flipping game) on this update sequence with any resets performs at most edge flips, for any parameter . Then for any , the BF algorithm with outdegree threshold , performs at most edge flips.

Proof:  We simulate the BF algorithm using the flipping game by resetting every vertex whose outgoing edges are flipped by the reset cascade of the BF algorithm. Let r be the total number of resets that the simulation performs and let f be the total number of edge flips. Since each reset of the simulation flips at least Δ edges, where Δ is the outdegree threshold of the BF algorithm, r ≤ f/Δ. By our assumption on the flipping game, f is bounded as a function of r. The lemma follows by substituting the upper bound on r into this bound and rearranging.

For example, if we set , the amortized update time of the simulation (per edge update), and hence of the BF algorithm, is at most . This shows that we only lose a factor of 2 when amortizing over the edge updates rather than over both the edge updates and the reset operations.

3.3 A reduction from the flipping game to the edge orientation problem

Lemma 3.3

Suppose we can maintain a -orientation for some sequence of edge updates while doing edge flips, starting with the empty graph. Then the flipping game on this update sequence with any resets performs at most edge flips, for any .

Proof:  We charge the edge flips performed by reset operations of the flipping game to edge flips performed to maintain the Δ-orientation. Following a reset of a vertex v, we place two tokens on every edge that is outgoing of v in the Δ-orientation. When the Δ-orientation flips an edge e we place a token on e. When an edge e is inserted to the graph we place a token on e. The total number of tokens placed on edges is at most 2Δ times the number of resets, plus the number of flips and insertions. We claim that the number of tokens placed on an edge e is no smaller than the number of times e flips in the flipping game (so these tokens “pay” for these flips). Consider a maximal sequence S of flips of an edge e = (u,v) that occur while the orientation of e in the Δ-orientation does not change. Assume without loss of generality that e is oriented from u to v by the Δ-orientation during S. Let k be the number of flips in S. During the time span of S both u and v were reset at least ⌊k/2⌋ times. Each such reset of u places two tokens on e. The total number of these tokens is at least 2⌊k/2⌋ ≥ k − 1. The flip of e performed by the Δ-orientation or its insertion just before S starts contributes an additional token.

The number of edge flips per edge update performed for maintaining the -orientation is whereas the number of edge flips per operation of the flipping game is . Thus the flipping game does not depend on but its amortized time bound does depend on .

To remove the dependency of the amortized time of the flipping game on the outdegree threshold Δ, we modify the game slightly and make it aware of Δ as follows. We define the Δ-flipping game, in which, when we reset a vertex, we flip all its outgoing edges only if there are more than Δ such edges. Note that, with a suitable choice of parameters (see Lemma 3.4 below), the total number of flips of the Δ-flipping game is within a constant factor of the number of flips needed for maintaining the Δ-orientation, even though we also performed reset operations.
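
A minimal sketch of the reset operation follows (our naming, with out and inn mapping each vertex to its sets of out- and in-neighbors). With delta = 0 it behaves like the basic flipping game; with a positive delta it implements the threshold rule just described.

```python
# Illustrative reset of the (delta-)flipping game. Our naming; a centralized
# stand-in for the distributed operation.

def reset(out, inn, v, delta=0):
    """Flip all edges outgoing of v, provided v has more than delta of them."""
    if len(out[v]) <= delta:
        return 0
    flipped = list(out[v])
    for w in flipped:
        out[v].discard(w)
        inn[v].add(w)        # the edge becomes incoming to v ...
        out[w].add(v)        # ... i.e., outgoing of w
        inn[w].discard(v)
    return len(flipped)      # number of edge flips charged to this reset
```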

Lemma 3.4

Suppose we can maintain a -orientation for some sequence of edge updates while doing edge flips, starting with the empty graph. Then the -flipping game on this update sequence with any resets performs at most edge flips, for any parameters and .

Proof:  Our proof uses a potential function argument similar to the one used in Lemma 1 of [12]. We define an edge to be good if its orientation in the flipping game is the same as in the Δ-orientation and bad otherwise. We define the potential Φ to be the number of bad edges in the current graph. Initially Φ = 0. Each insertion or flip performed by the Δ-orientation increases Φ by at most one, while edge deletions may only decrease Φ.

Consider a reset of some vertex v of outdegree greater than the flipping threshold. By the definition of a Δ-orientation, at most Δ of v's outgoing edges are good. As a result of the flip these edges may become bad, but at least outdeg(v) − Δ edges were bad and become good. It follows that as a result of the reset Φ decreases by at least outdeg(v) − 2Δ. This implies that the total number of reset operations on vertices with outdegree greater than the flipping threshold is bounded in terms of the total increase of Φ. The total number of times a good edge becomes bad due to the resets is bounded by Δ times the number of such resets, from which we conclude that the total number of times a bad edge becomes good due to the resets is bounded by the total number of times an edge becomes bad. Summarizing, the total number of flips made by the flipping game is bounded by the sum of the last two quantities above.

3.4 Applications

As discussed in the introduction, by using the flipping game instead of the BF algorithm, we obtain local algorithms for several dynamic graph problems. In this section we describe two such applications to the problems of dynamic maximal matching and adjacency queries.
Dynamic maximal matching.  The goal here is to maintain a maximal matching M in a graph that undergoes edge insertions and deletions. Following an edge insertion or a deletion of an edge not in M, it is easy to keep the matching maximal (following an insertion we only need to match the two endpoints if both are free). The difficult operation is a deletion of an edge (u,v) in M. Following such a deletion both u and v become free, and if either u or v has a free neighbor then M is not maximal anymore, and we must add to M edges from u and/or v to free neighbors of theirs.

Neiman and Solomon [23] reduced this problem to the edge orientation problem as follows. We maintain an edge orientation of the graph, and each vertex maintains a list of its free incoming neighbors. Following a deletion of a matched edge (u,v), u and v perform the following operations. (We restrict attention to u and describe what it does; v performs the same operations.) First u notifies its out-neighbors that it is free. Then it checks whether its list of free in-neighbors is not empty. If u has a free in-neighbor w then we add the edge (u,w) to M and both u and w notify their out-neighbors that they are now matched. Otherwise u scans its out-neighbors for a free vertex. If u finds a free out-neighbor w then we add (u,w) to M and both u and w notify their out-neighbors that they are matched.

This reduction implies that from an algorithm that maintains a Δ-orientation with an update time of t (either amortized or worst-case), we can get a dynamic algorithm for maximal matching with an update time of O(Δ + t) (again, either amortized or worst-case).

The result of [17] shows that in a graph with arboricity bounded by the BF algorithm maintains an -orientation in amortized update time of for any parameter . (Refer to App. A for more details.) Using this tradeoff in the particular case of (where ), we get a dynamic algorithm for maximal matching with amortized update time. The drawback of the resulting algorithm is that it is not local. Indeed, this is because any algorithm for maintaining -orientation is inherently non-local.

To get a local algorithm for dynamic maximal matching we use our (inherently local) flipping game. As before, we maintain an orientation and each vertex maintains its free in-neighbors. But now, when a vertex v scans its out-neighbors (either when v changes its state from matched to unmatched or vice versa, or when v looks for a free out-neighbor), we also reset v, thereby flipping all its outgoing edges. The total running time of the resulting local algorithm for dynamic maximal matching is linear in the number of edge flips made by the underlying flipping game.
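
Continuing the sketches above (and reusing the reset routine sketched earlier; all names remain ours), the only change to the matching routine is that a scan of v's out-neighbors doubles as a reset of v:

```python
# Illustrative: a scan of v's out-neighbors that also resets v (basic game).
# Assumes the reset(out, inn, v, delta) routine from the earlier sketch.

def scan_out_neighbors_with_reset(v, out, inn, predicate):
    """Return an out-neighbor of v satisfying predicate (if any), then reset v."""
    found = None
    for w in list(out[v]):
        if predicate(w):
            found = w
            break
    reset(out, inn, v)      # the scan triggers a reset, flipping v's outgoing edges
    return found
```

For instance, when v looks for a free out-neighbor, one would call scan_out_neighbors_with_reset(v, out, inn, lambda w: free[w]).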

To bound the number of edge flips made by the flipping game, note that we reset at most a constant number of vertices per edge update. By Lemma 3.3, combined with the result of [17] for the case , we conclude that the amortized number of flips made by the flipping game is .

The flipping game can be easily distributed. Resetting a vertex requires one communication round, and the message complexity is asymptotically the same as the runtime in the centralized setting. Summarizing, we have proved the following result.

Theorem 3.5

For any arboricity preserving sequence, there is a local algorithm for maintaining a maximal matching on the corresponding dynamic n-vertex graph with an amortized update time of . The space usage of the algorithm is linear in the graph size. Moreover, there is a distributed algorithm for maintaining a maximal matching with an amortized message complexity of and a constant worst-case update time.

Adjacency queries. In this application we want to maintain a deterministic linear space data structure that allows efficient adjacency queries in a dynamic graph. (If we use dynamic perfect hash tables to represent adjacency lists then the data structure is of linear size but randomized.) Although the problem of supporting adjacency queries is inherently local, the state-of-the-art deterministic solution (described next) relies on the inherently non-local task of maintaining a low outdegree orientation.

The BF algorithm with outdegree threshold has an amortized update time of . Such an orientation allows us to support adjacency queries in worst-case time proportional to the outdegree threshold, since to decide if the graph contains the edge (u,v), it suffices to search among the out-neighbors of u, and among the out-neighbors of v. Later Kowalik [19] proved that for outdegree threshold O(α log n), the amortized update time of the BF algorithm is constant. Kowalik noted that if the out-neighbors of each vertex are stored in a balanced search tree, then the amortized update time increases to logarithmic in the outdegree threshold (each edge flip requires an insertion to and a deletion from a balanced search tree, and similarly for edge insertions), but the worst-case query time also becomes logarithmic in the outdegree threshold. When the arboricity bound α is polylogarithmic in n, these bounds are O(log log n), and using more sophisticated data structures, one can improve this bound further under the RAM model.

Next, we describe a local data structure for supporting adjacency queries. To this end we use the Δ-flipping game, for an appropriately chosen threshold Δ. Specifically, to perform an adjacency query on a pair (u,v), we start by resetting u and v, thereby flipping the outgoing edges of u (resp., v) if it has more than Δ out-neighbors. Following these resets, u and v have at most Δ out-neighbors and we answer the query by scanning these lists of out-neighbors as before. To speed up the query further we keep the out-neighbors of each vertex of sufficiently small outdegree in a balanced search tree as described above. (More concretely, we start building the tree at a vertex v when v's outdegree drops below a certain threshold, and once the tree is ready we maintain it as long as the outdegree of v stays below a somewhat larger threshold. This guarantees that we always have a tree ready when the outdegree is at most Δ, while keeping the cost of constructing the trees in check.)
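
A sketch of the resulting query is given below (our naming, reusing the reset routine from the earlier sketch); the balanced-search-tree refinement is omitted, and hash sets stand in for the out-neighbor lists.

```python
# Illustrative adjacency query on top of the delta-flipping game.
# Assumes the reset(out, inn, v, delta) routine from the earlier sketch.

def adjacent(u, v, out, inn, delta):
    """Answer an adjacency query on the pair (u, v)."""
    reset(out, inn, u, delta)    # flips u's out-edges only if outdeg(u) > delta
    reset(out, inn, v, delta)
    # after the resets, each of u and v has at most delta out-neighbors
    return v in out[u] or u in out[v]
```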

By Lemma 3.4 combined with the result of [19], the amortized number of edge flips made by the Δ-flipping game is constant. Hence both adjacency queries and edge updates take the same amortized time as the update time of the non-local structure described above. So our Δ-flipping game provides a local data structure for adjacency queries, at the cost of having only an amortized guarantee for the query time rather than a worst-case guarantee. Summarizing, we have proved the following result.

Theorem 3.6

For any arboricity preserving sequence, there is a (deterministic) local algorithm for supporting adjacency queries in the corresponding dynamic n-vertex graph with an amortized update time of . The space usage of the algorithm is linear in the graph size.

References

  • [1] N. Alon, R. Rubinfeld, S. Vardi, and N. Xie. Space-efficient local computation algorithms. In Proc. 23rd SODA, pages 1132–1139, 2012.
  • [2] S. R. Arikati, A. Maheshwari, and C. D. Zaroliagis. Efficient computation of implicit representations of sparse graphs. Discrete Applied Mathematics, 78(1-3):1–16, 1997.
  • [3] S. Assadi, K. Onak, B. Schieber, and S. Solomon. Fully dynamic maximal independent set with sublinear update time. In Proc. of 50th STOC, 2018 (to appear).
  • [4] B. Awerbuch. Communication-time trade-offs in network synchronization. In Proc. of 4th PODC, pages 272–276, 1985.
  • [5] B. Awerbuch, A. Baratz, and D. Peleg. Cost-sensitive analysis of communication protocols. In Proc. of 9th PODC, pages 177–187, 1990.
  • [6] B. Awerbuch, A. Baratz, and D. Peleg. Efficient broadcast and light-weight spanners. Technical Report CS92-22, Weizmann Institute, October, 1992.
  • [7] L. Barenboim and M. Elkin. Sublogarithmic distributed MIS algorithm for sparse graphs using Nash-Williams decomposition. Distributed Computing, 22(5-6):363–379, 2010.
  • [8] L. Barenboim and M. Elkin. Distributed Graph Coloring: Fundamentals and Recent Developments. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers, 2013.
  • [9] E. Berglin and G. S. Brodal. A simple greedy algorithm for dynamic graph orientation. In Proc. of 28th ISAAC, pages 12:1–12:12, 2017.
  • [10] A. Bernstein and C. Stein. Fully dynamic matching in bipartite graphs. In Proc. 42nd ICALP, pages 167–179, 2015.
  • [11] A. Bernstein and C. Stein. Faster fully dynamic matchings with small approximation ratios. In Proc. 27th SODA, pages 692–711, 2016.
  • [12] G. S. Brodal and R. Fagerberg. Dynamic representation of sparse graphs. In Proc. of 6th WADS, pages 342–351, 1999.
  • [13] K. Censor-Hillel, E. Haramaty, and Z. S. Karnin. Optimal dynamic distributed MIS. In Proc. of PODC, pages 217–226, 2016.
  • [14] G. Even, M. Medina, and D. Ron. Best of two local models: Local centralized and local distributed algorithms. CoRR, abs/1402.3796, 2014.
  • [15] G. Even, M. Medina, and D. Ron. Distributed maximum matching in bounded degree graphs. In Proc. 16th ICDCN, page 18, 2015.
  • [16] M. Ghaffari and H. Su. Distributed degree splitting, edge coloring, and orientations. In Proc. 28th SODA, pages 2505–2523, 2017.
  • [17] M. He, G. Tang, and N. Zeh. Orienting dynamic graphs, with applications to maximal matchings and adjacency queries. In Proc. 25th ISAAC, pages 128–140, 2014.
  • [18] T. Kopelowitz, R. Krauthgamer, E. Porat, and S. Solomon. Orienting fully dynamic graphs with worst-case time bounds. In Proc. of 41st ICALP, pages 532–543, 2014.
  • [19] L. Kowalik. Adjacency queries in dynamic sparse graphs. Inf. Process. Lett., 102(5):191–195, 2007.
  • [20] L. Kowalik and M. Kurowski. Short path queries in planar graphs in constant time. In Proc. 35th STOC, pages 143–148, 2003.
  • [21] Z. Lotker, B. Patt-Shamir, and S. Pettie. Improved distributed approximate matching. In Proc. 20th SPAA, pages 129–136, 2008.
  • [22] Y. Mansour and S. Vardi. A local computation approximation scheme to maximum matching. In Proc. 16th APPROX, pages 260–273, 2013.
  • [23] O. Neiman and S. Solomon. Simple deterministic algorithms for fully dynamic maximal matching. In Proc. of 45th STOC, pages 745–754, 2013.
  • [24] M. Parter, D. Peleg, and S. Solomon. Local-on-average distributed tasks. In Proc. 27th SODA, pages 220–239, 2016.
  • [25] D. Peleg. Distributed computing: a locality-sensitive approach. SIAM, 2000.
  • [26] D. Peleg and S. Solomon. Dynamic (1 + ε)-approximate matchings: A density-sensitive approach. In Proc. of 27th SODA, pages 712–729, 2016.
  • [27] D. Peleg and J. D. Ullman. An optimal synchronizer for the hypercube. SIAM J. Comput., 18(4):740–747, 1989.
  • [28] R. Rubinfeld, G. Tamir, S. Vardi, and N. Xie. Fast local computation algorithms. In Proc. 2nd ICS, pages 223–238, 2011.
  • [29] S. Solomon. Local algorithms for bounded degree sparsifiers in sparse graphs. In Proc. of 9th ITCS, pages 52:1–52:19, 2018.
  • [30] J. Suomela. Survey of local algorithms. ACM Comput. Surv., 45(2):24, 2013.

Appendix

Appendix A More on the Edge Orientation Problem in Centralized Networks

In light of the asymptotic optimality of the BF algorithm discussed in Section 1.3.1, any existential bound for the problem translates into an algorithmic result with the same asymptotic guarantees on the outdegree and the amortized update time. [19] proved an existential bound of -orientation with amortized update time. [17] proved a general existential tradeoff: -orientation with amortized update time, for any ; note that the results of [12] and [19] provide the two extreme points on the tradeoff curve of [17]. The tradeoff of [17] in the particular case of and bounds both the outdegree and the amortized update time by ; nevertheless, to maintain constant outdegree (when ), the state-of-the-art update time is still , due to BF.

The edge orientation problem with worst-case time bounds was first studied in [18], where it was shown that one can maintain a -orientation with worst-case update time, for . (A similar result was obtained by [17].) [9] presented a tradeoff of -orientation with worst-case update time, for any , along with additional refinements over the previous work [18, 17]. We remark that the worst-case guarantees of [18, 17, 9] are inferior to the aforementioned amortized guarantees, and in the particular case of , none of these results provides an outdegree lower than , even for a polynomial worst-case update time.

A.1 Some applications of the edge orientation problem

In this section we provide a very short (and non-exhaustive) overview on some of the applications of the edge orientation problem in the context of dynamic graph (centralized) algorithms. For a more detailed account on these applications, we refer to [23, 18, 17, 26].

[23] showed a reduction from the problem of maintaining a maximal matching to the edge orientation problem. Specifically, if a Δ-orientation can be maintained within update time t (either amortized or worst-case), then a maximal matching can be maintained within O(Δ + t) update time (again, either amortized or worst-case). [23] plugged the tradeoff of BF into this reduction, and obtained an amortized update time of for maintaining maximal matching in graphs of low arboricity. By plugging their own improved tradeoff, [17] reduced the amortized update time to . A worst-case update time of for this problem was obtained by [18], using their result for the edge orientation problem. Note also that a maximal matching naturally translates into a 2-approximate vertex cover, and this translation can be easily maintained dynamically. The edge orientation problem of [12] was shown to be useful also in other dynamic graph problems, such as distance oracles, approximate matching, and coordinate queries; see [20, 18, 17, 10, 11] for more details.
