
# Improved Deterministic Distributed Matching via Rounding

## Abstract

We present improved deterministic distributed algorithms for a number of well-studied matching problems, which are simpler, faster, more accurate, and/or more general than their known counterparts. The common denominator of these results is a deterministic distributed rounding method for certain linear programs, which is the first such rounding method, to our knowledge. A sampling of our end results is as follows.

• An O(log² Δ · log n)-round deterministic distributed algorithm for computing a maximal matching, in n-node graphs with maximum degree Δ. This is the first improvement in about 20 years over the celebrated O(log⁴ n)-round algorithm of Hańćkowiak, Karoński, and Panconesi [SODA’98, PODC’99].

• A deterministic distributed algorithm for computing a (2+ε)-approximation of maximum matching in O(log² Δ · log(1/ε) + log* n) rounds. This is exponentially faster than the classic O(Δ + log* n)-round 2-approximation of Panconesi and Rizzi [DIST’01]. With some modifications, the algorithm can also find an ε-maximal matching, which leaves only an ε-fraction of the edges on unmatched nodes.

• A deterministic distributed algorithm for computing a (2+ε)-approximation of a maximum weighted matching, and also for the more general problem of maximum weighted b-matching, in O(log² Δ · log(1/ε) + log* n) rounds. These improve over the O(log⁴ n · log_{1+ε} W)-round (6+ε)-approximation algorithm of Panconesi and Sozio [DIST’10], where W denotes the maximum normalized weight.

• A deterministic Local Computation Algorithm (LCA) for a (2+ε)-approximation of maximum matching with Δ^{O(log² Δ)} queries. This improves almost exponentially over the previous deterministic constant approximations, whose query complexity is exponential in Δ.

## 1 Introduction and Related Work

We work with the standard LOCAL model of distributed computing [23]: the network is abstracted as a graph G = (V, E), with n = |V|, m = |E|, and maximum degree Δ. Each node has a unique identifier. In each round, each node can send a message to each of its neighbors. We do not limit the message sizes, but for all the algorithms that we present, O(log n)-bit messages suffice. We assume that all nodes have knowledge of Δ up to a constant factor. If this is not the case, it is enough to try exponentially increasing estimates for Δ.

### 1.1 Broader Context, and Deterministic Distributed Rounding

Efficient deterministic distributed graph algorithms remain somewhat of a rarity, despite the intensive study of the area since the 1980s. In fact, among the four classic problems of the area — maximal independent set, (Δ+1)-vertex-coloring, maximal matching, and (2Δ−1)-edge-coloring — only for maximal matching is a polylog(n)-round deterministic algorithm known, due to a breakthrough of Hańćkowiak, Karoński, and Panconesi [13]. Finding polylog(n)-round deterministic algorithms for the other three problems remains a long-standing open question, since [23]. In stark contrast, in the world of randomized algorithms, all these problems have O(log n)-round [28] or even more efficient algorithms [3].

Despite this rather bleak state of the art for deterministic algorithms, there is immense motivation for them. Here are three sample reasons: (1) One traditional motivation is rooted in the classic complexity-theoretic quest which seeks to understand the difference between the power of randomized and deterministic algorithms. (2) Another traditional motivation comes from practical settings where even small error probabilities cannot be tolerated. (3) Nowadays, there is also a more modern motive: we now understand that in order to have faster randomized algorithms, we must come up with faster deterministic algorithms. This connection goes in two directions: (A) Almost all the recent developments in randomized algorithms use the shattering technique [3], which randomly breaks down the graph into small components, typically of size poly(log n), and then solves them via a deterministic algorithm. Speeding up (the log n-dependency in) these randomized algorithms needs faster deterministic algorithms. (B) The more surprising direction is the reverse. Chang et al. [7] recently showed that for a large class of problems, the randomized complexity on n-node graphs is at least the deterministic complexity on instances of size √(log n). Hence, if one improves over (the log n-dependency in) the current randomized algorithms, one has inevitably improved the corresponding deterministic algorithm.

Ghaffari, Kuhn, and Maus [11] recently proved a completeness-type result which shows that “the only obstacle” for efficient deterministic distributed graph algorithms is deterministically rounding fractional values to integral values while approximately preserving some linear constraints. To put it more positively, if we find an efficient deterministic method for rounding, we would get efficient algorithms for essentially all the classic local graph problems, including the four mentioned above. Our results become more instructive when viewed in this context. The common denominator of our results is a deterministic distributed method which allows us to round fractional matchings to integral matchings. This can be more generally seen as rounding the fractional solutions of a special class of linear programs (LPs) to integral solutions. To the best of our knowledge, this is the first known deterministic distributed rounding method. We can now say that, at least for matching, rounding no longer stands as an obstacle for efficient deterministic distributed algorithms.

### 1.2 Our Results

We provide improved distributed algorithms for a number of matching problems, as we overview next.

#### Approximate Maximum Matching

Our first result is a deterministic distributed algorithm that computes a (2+ε)-approximation of maximum matching in O(log² Δ · log(1/ε) + log* n) rounds. There are three remarks in order regarding this result:

• For constant ε, this O(log² Δ + log* n)-round algorithm is significantly faster than the previously best known deterministic constant approximations, especially in low-degree graphs: the O(Δ + log* n)-round 2-approximation of Panconesi and Rizzi [30], the O(log⁴ n)-round 2-approximation of Hańćkowiak et al. [15], the O(log⁴ n)-round 3/2-approximation of Czygrinow et al. [5], and its extension [4] which finds a (1+ε)-approximation in polylog(n) rounds, with an exponent depending on 1/ε.

• This round complexity gets close to the lower bound — due to the celebrated results of Kuhn et al. [20] and Linial [23] — of Ω(min{log Δ / log log Δ, √(log n / log log n)} + log* n) that holds for any constant approximation of matching, even for randomized algorithms.

• This distributed algorithm can be transformed to a deterministic Local Computation Algorithm (LCA) [2] for a (2+ε)-approximation of maximum matching, with a query complexity of Δ^{O(log² Δ)} · O(log* n). This is essentially by using the standard method of Parnas and Ron [31], with an additional idea of [8]. Using slightly more care, the query complexity can be improved to Δ^{O(log² Δ)}. Since formally stating this result requires explaining the computational model of LCAs, we defer that to the journal version. We remark that this query complexity improves almost exponentially over the previous deterministic constant approximations, whose query complexity is exponential in Δ [8].

#### (Almost) Maximal Matching, and Edge Dominating Set

**Maximal Matching:** Employing our approximation algorithm for maximum matching, we get an O(log² Δ · log n)-round deterministic distributed algorithm for maximal matching.

This is the first improvement in about 20 years over the breakthroughs of Hańćkowiak et al., which presented first a polylogarithmic-round algorithm [13] and then an O(log⁴ n)-round algorithm [15] for maximal matching.

As alluded to before, this improvement in the deterministic complexity directly leads to an improvement in the log n-dependency of the randomized algorithms. In particular, plugging our improved deterministic algorithm into the maximal matching algorithm of Barenboim et al. [3] improves their round complexity from O(log Δ + log⁴ log n) to O(log Δ + log³ log n).

**Almost Maximal Matching:** Recently, there has been quite some interest in characterizing the Δ-dependency in the complexity of maximal matching, that is, the complexity with no dependency on n at all or with at most an additive O(log* n) term [16]. Göös et al. [10] conjectured that

there should be no o(Δ) + O(log* n)-round algorithm for computing a maximal matching.

Our maximal matching algorithm does not provide any news in this regard, because of its multiplicative log n factor. Indeed, our findings also seem to be consistent with this conjecture and do not suggest any way of breaking it. However, using some extra work, we can get a faster algorithm for ε-maximal matching, a matching that leaves only an ε-fraction of edges among unmatched nodes, for a desirably small ε.

This theorem statement is interesting because of two aspects: (1) This faster almost maximal matching algorithm sheds some light on the difficulties of proving the aforementioned conjecture. In a sense, any conceivable proof of this conjectured lower bound must distinguish between maximal and almost maximal matchings and rely on the fact that precisely a maximal matching is desired, and not just something close to it. Notice that since the complexity grows slowly as a function of 1/ε, we can choose ε quite small, obtaining, in a modest number of rounds, a matching that seems to be maximal for almost all nodes, even if they look many hops around themselves. (2) Perhaps, in some practical settings, this almost maximal matching, which practically looks maximal to essentially all nodes, may be as useful as a maximal matching, especially since it can be computed much faster.

**Edge Dominating Set:** As a corollary of the almost maximal matching algorithm, we get a fast algorithm for approximating minimum edge dominating set, the smallest set of edges such that any edge shares at least one endpoint with them. The proof appears later in the paper.

Previously, the fastest algorithms ran in O(Δ + log* n) rounds [30] or O(log⁴ n) rounds [15], providing 2-approximations. Moreover, Suomela [35] provided constant-factor approximations in poly(Δ) rounds, in a restricted variant of the model with only port numberings.

#### Approximate Maximum Weighted Matching and B-Matching

An interesting aspect of the method we use is its flexibility and generality. In particular, our matching algorithm can be easily extended to computing a (2+ε)-approximation of maximum weighted matching, and more interestingly, to a (2+ε)-approximation of maximum weighted b-matching. These extensions are discussed in Section 6.

To the best of our knowledge, this is the first distributed deterministic algorithm for approximating maximum (weighted) b-matching. Moreover, even in the case of standard matching, it improves over the previously best-known algorithm: a deterministic algorithm for a (6+ε)-approximation of maximum weighted matching was provided by Panconesi and Sozio [33], with a round complexity of O(log⁴ n · log_{1+ε} W), where W denotes the maximum normalized weight. However, that deterministic algorithm does not extend to b-matching.

### 1.3 Related Work, Randomized Distributed Matching Approximation

Aside from the deterministic algorithms discussed above, there is a long line of research on randomized distributed approximation algorithms for matching: for the unweighted case, [18] provide a 2-approximation in O(log n) rounds, and [25] a (1+ε)-approximation in O(log n) rounds for any constant ε > 0. For the weighted case, [36] provide successively improved algorithms, culminating in the O(log(1/ε) · log n)-round (2+ε)-approximation of [25]. Moreover, [22] present an O(log n)-round randomized algorithm for 2-approximate weighted b-matching.

## 2 Our Deterministic Rounding Method, in a Nutshell

The main ingredient in our results is a simple deterministic method for rounding fractional solutions to integral solutions. We believe that this deterministic distributed rounding will be of interest well beyond this paper. To present the flavor of our deterministic rounding method, here we overview it in a simple special case: we describe an O(log² Δ)-round algorithm for a constant approximation of the maximum unweighted matching in 2-colored bipartite graphs. The precise algorithm and proof appear in Section 4.

**Computing a Fractional Matching:** First, notice that finding a fractional approximate maximum matching is straightforward. In O(log Δ) rounds, we can compute a fractional matching whose total value is a constant approximation of the maximum matching. One standard method is as follows: start with all edge values at x_e = 2^{-⌈log Δ⌉}. Then, for O(log Δ) rounds, in each round raise all edge values by a 2-factor, except for those edges that are incident to a node v such that Σ_{e∈E(v)} x_e > 1/2. Throughout, E(v) denotes the set of edges incident to node v. One can easily see that this fractional matching has total value Σ_{e∈E} x_e within a 4-factor of the maximum matching.

**Rounding the Fractional Matching:** We gradually round this fractional matching x to an integral matching x' while ensuring that we do not lose much of the value, i.e., Σ_e x'_e ≥ Σ_e x_e / c for some constant c. We have O(log Δ) rounding phases, each of which takes O(log Δ) rounds. In each phase, we get rid of the smallest (non-zero) values and thereby move closer to integrality. The initial fractional matching has only values x_e = 2^{-k} for k ∈ {1, …, ⌈log Δ⌉} or x_e = 0. In phase k, going from the largest such k downward, we partially round the edge values x_e = 2^{-k}. Some of these edges will be raised to 2^{-k+1}, while others are dropped to 0. The choices are made in a way that keeps Σ_e x_e essentially unchanged, as we explain next.

Consider the graph H edge-induced by the edges with value 2^{-k}. For the sake of simplicity, suppose all nodes of H have even degree. Dealing with odd degrees requires some delicate care, but it will not incur a loss worse than a small constant fraction of the total value. In this even-degree graph H, we effectively want that each node of H raises half of its edges to 2^{-k+1} while dropping the other half to 0. For that, we generate a degree-2 graph H' by replacing each node v of H with deg_H(v)/2 nodes, each of which gets two of v’s edges. Notice that the edge sets of H and H' are the same. Graph H' is simply a set of cycles of even length, as G was bipartite.

In each cycle of H', we would want the raising and dropping of edge values to alternate. That is, odd-numbered edges, say, are raised to 2^{-k+1} while even-numbered edges are dropped to 0. This would keep x a valid fractional matching — meaning that each node v still has Σ_{e∈E(v)} x_e ≤ 1 — because this summation does not increase, for each node v. Furthermore, it would keep the total weight unchanged. If the cycle is shorter than a threshold ℓ = Θ(log Δ), this raise/drop sequence can be identified in O(ℓ) rounds. For longer cycles, we cannot compute such a perfect alternation in O(ℓ) rounds. However, one can do something that does not lose much: imagine that we chop the longer cycles into edge-disjoint paths of length ℓ. In each path, we drop the endpoint edges to 0 while using a perfect alternation inside the path. These border settings mean we lose an O(1/ℓ)-fraction of the weight. Thus, even over all the O(log Δ) phases, the total loss is only a small constant fraction of the total weight.
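To make the cycle step concrete, here is a minimal sequential sketch of one rounding phase on a single even cycle. This is our own illustration, not the paper’s pseudocode; the helper name and the chunking of long cycles at multiples of ℓ are assumptions.

```python
def round_even_cycle(num_edges, value, ell):
    """One rounding phase on an even cycle whose edges all carry `value`.

    Short cycles (length <= ell): perfectly alternate 2*value and 0,
    preserving the total value. Long cycles: chop into chunks of ell
    edges, alternate inside each chunk, and drop the chunk-border edges
    to 0, losing only an O(1/ell) fraction of the total value.
    """
    assert num_edges % 2 == 0
    if num_edges <= ell:
        return [2 * value if i % 2 == 0 else 0.0 for i in range(num_edges)]
    new_vals = []
    for i in range(num_edges):
        pos = i % ell
        if pos == 0 or pos == ell - 1:  # border edges of a chunk: dropped
            new_vals.append(0.0)
        else:
            new_vals.append(2 * value if pos % 2 == 1 else 0.0)
    return new_vals
```

On a short cycle the total value is preserved exactly; on a long cycle no two adjacent edges are raised, so the fractional matching constraints still hold.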

## 3 Preliminaries

**Matching:** An integral matching is a subset M ⊆ E such that no two edges of M share an endpoint. It can be seen as an assignment of values x_e ∈ {0, 1} to edges, where x_e = 1 iff e ∈ M, such that Σ_{e∈E(v)} x_e ≤ 1 for all v ∈ V. When the condition x_e ∈ {0, 1} is relaxed to x_e ∈ [0, 1], such an assignment is called a fractional matching.

**b-Matching:** A b-matching for b-values {b_v}_{v∈V} is an assignment of values x_e ∈ {0, 1} to edges such that Σ_{e∈E(v)} x_e ≤ b_v for all v ∈ V. Again, one can relax this to fractional b-matchings by replacing x_e ∈ {0, 1} with x_e ∈ [0, 1].

**Maximal Matching:** An integral matching is called maximal if we cannot add any edge to it without violating the matching constraints. For ε ∈ [0, 1], we say that M is an ε-maximal matching if at most ε·|E| edges remain after removing the edges in M and those incident to M from G.

**Maximum Matching:** A matching is called maximum if it is a largest matching in terms of cardinality. For c ≥ 1, we say that a matching M is c-approximate if c·|M| ≥ |M*| for a maximum matching M*. In a weighted graph where each edge e is assigned a weight w_e, we say that M* is a maximum weighted matching if it is a matching maximizing the weight w(M*) = Σ_{e∈M*} w_e. An integral matching M is a c-approximate weighted matching if c·w(M) ≥ w(M*).

We now state some simple and well-known facts about matchings: (i) if x is a fractional matching and M is a maximal matching in the graph formed by the support of x, then 2|M| ≥ Σ_e x_e; (ii) any maximal matching is a 2-approximate maximum matching.

Many problems are easier in small-degree graphs. To exploit this fact, we sometimes use the following simple transformation which decomposes a graph into graphs with maximum degree 2 — that is, node-disjoint paths and cycles — with the same edge set, in zero rounds. As mentioned before, this has been used frequently in prior work [29].

**2-Decomposition:** We 2-decompose a graph G as follows. For every node v, introduce ⌈deg(v)/2⌉ copies and arbitrarily split its incident edges among these copies in such a way that every copy has degree 2, with the possible exception of one copy, which has degree 1 (when v has odd degree). The graph on these copy nodes is what we call a 2-decomposition of G. See the figure for an example.
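The 2-decomposition can be computed locally from each node’s list of incident edges. The following sketch builds it for a centralized edge list; the function and copy naming are ours, for illustration only.

```python
import itertools
from collections import defaultdict

def two_decompose(edges):
    """Split each node into ceil(deg/2) copies, each taking at most two of
    the node's incident edges, and return the edge list on the copy nodes.
    A copy node is a pair (v, j), where j indexes v's copies."""
    incident = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        incident[u].append(idx)
        incident[v].append(idx)
    copy_of = {}  # (edge index, endpoint) -> copy node handling that edge
    for v, idxs in incident.items():
        # group the incident edge indices into pairs (last group may be one)
        for j, pair in enumerate(itertools.zip_longest(*[iter(idxs)] * 2)):
            for idx in pair:
                if idx is not None:
                    copy_of[(idx, v)] = (v, j)
    return [(copy_of[(i, u)], copy_of[(i, v)]) for i, (u, v) in enumerate(edges)]
```

Since every copy receives at most two edges, the resulting graph is a node-disjoint union of paths and cycles with exactly the same edge set as the input.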

## 4 Approximate Maximum Matching

We present a (2+ε)-approximation algorithm for maximum matching. The first step towards this goal is finding a constant approximation, explained in Section 4.1. We show in Section 4.2 how to further improve this approximation ratio to 2+ε.

### 4.1 Constant Approximate Maximum Matching

In this subsection, we show how to compute a constant approximation.

The key ingredient for our (2+ε)-approximation algorithm is a distributed algorithm that computes a constant approximate maximum matching in the special case of a 2-colored bipartite graph. We first present the algorithm for this special case, and then explain how to reduce the general graph case to the bipartite case.

#### Constant Approximate Maximum Matching in Bipartite Graphs

Next, we show how to find a constant approximate matching in a 2-colored bipartite graph.

The proof is split into three parts. In the first step, we compute a fractional 4-approximate maximum matching, whose values are all powers of 2, in O(log Δ) rounds. The second step, which is also the main step of our method and is formalized in the Rounding Lemma below, is a method to round these fractional values to almost integrality in O(log² Δ) rounds. In the third step, we resort to a simple constant-round algorithm to transform the almost integral matching that we have found up to this step into an integral matching. As a side remark, we note that we explicitly state some of the constants in this part of the paper, for the sake of readability. We remark that these constants are not the focus of this work, and we have not tried to optimize them.

**Step 1, Fractional Matching:** We show that a simple greedy algorithm already leads to a fractional 4-approximate maximum matching.

Initially, set x_e = 2^{-⌈log Δ⌉} for all e ∈ E. This trivially satisfies the constraints y_v := Σ_{e∈E(v)} x_e ≤ 1. We call a node v tight if y_v > 1/2, and loose otherwise; an edge is loose if both of its endpoints are loose. Then, we iteratively raise the value of all loose edges in parallel by a 2-factor. This can be done in O(log Δ) rounds, since at the latest when the value of an edge is 1/2, both endpoints would be tight. Once all edges are tight, for a maximum matching M* we have 4·Σ_{e∈E} x_e ≥ |M*|: every edge of M* has a tight endpoint, every tight node has y_v > 1/2, and every node is matched at most once.
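The greedy above can be simulated sequentially as follows. This is a sketch under our own naming (the paper describes a parallel distributed procedure; here each loop iteration plays the role of one round).

```python
import math
from collections import defaultdict

def node_loads(x):
    """y_v = sum of the values of the edges incident to v."""
    y = defaultdict(float)
    for (u, v), val in x.items():
        y[u] += val
        y[v] += val
    return y

def greedy_fractional_matching(edges, max_degree):
    """Start all values at 2^-ceil(log2 Delta) and repeatedly double every
    loose edge, i.e. every edge both of whose endpoints have load <= 1/2."""
    start = 2.0 ** -math.ceil(math.log2(max_degree)) if max_degree > 1 else 0.5
    x = {e: start for e in edges}
    while True:
        y = node_loads(x)
        loose = [e for e in x if y[e[0]] <= 0.5 and y[e[1]] <= 0.5]
        if not loose:
            return x
        for e in loose:
            x[e] *= 2  # simultaneous doubling keeps every load at most 1
```

After termination every edge has a tight endpoint, which is exactly what the 4-approximation argument in the text needs.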

**Step 2, Rounding:** The heart of our approach, the Rounding Lemma, is a method that successively turns a 2^{-k}-fractional matching into a 2^{-k+1}-fractional one, for decreasing values of k, while only sacrificing the approximation ratio by a little.

Iteratively, for k = ⌈log Δ⌉ down to 5, in phase k, we get rid of the edges with value 2^{-k} by either increasing their values by a 2-factor to 2^{-k+1} or setting them to 0. In the following, we describe the process for one phase, thus for a fixed k.

Let H be the graph induced by the set of edges with value 2^{-k} and use H' to denote its 2-decomposition. Notice that H' is a node-disjoint union of paths and even-length cycles. Set ℓ = Θ(log Δ), with a sufficiently large hidden constant. We call a path/cycle short if it has length at most ℓ, and long otherwise. We now process short and long cycles and paths, by distinguishing three cases, as we discuss next. Each of these cases will be done in O(ℓ) = O(log Δ) rounds, which implies that the complexity of one phase is O(log Δ). Thus, over all the O(log Δ) phases, this rounding algorithm takes O(log² Δ) rounds.

**Short cycles:** Alternately set the values of the edges to 0 and to 2^{-k+1}. Since the cycle has even length, the values y_v for all nodes in the cycle remain unaffected by this update. Moreover, the total value of the edges in the cycle stays the same.

**Long cycles and long paths:** We first orient the edges in a manner that ensures that each maximal directed path has length at least ℓ. This is done in O(ℓ) rounds. For that purpose, we start with an arbitrary orientation of the edges. Then, for each i ∈ {1, …, ⌈log ℓ⌉}, we iteratively merge two (maximal) directed paths of length at least 2^{i-1} that are directed towards each other by reversing the shorter one, breaking ties arbitrarily. For more details of this orientation step, we refer to [14].

Given this orientation, we determine the new values as follows. Recall that we are given a 2-coloring of the nodes. Set the value of all border edges (that is, edges that have an incident edge such that they are either oriented towards each other or away from each other) to 0; increase the value of a non-border edge to 2^{-k+1} if it is oriented towards a node of color 1, say, and set it to 0 otherwise.

Now, we show that this process generates a valid fractional matching while incurring only a small loss in the value. Observe that no constraint is violated, as for each node the value of at most one incident edge can be raised to 2^{-k+1} while the other is dropped to 0. Moreover, in each maximal directed path, we can lose at most 2·2^{-k} in the total sum of edge values; this happens in the case of an odd-length path starting with a node of color 2. Since each maximal directed path has length at least ℓ, and hence total value at least ℓ·2^{-k}, we lose at most an O(1/ℓ)-fraction of the total sum of the edge values in long cycles and long paths.

**Short paths:** Give the path an arbitrary direction, that is, identify the first and the last node. Set the value of the first edge to 2^{-k+1} if the first node is loose, and to 0 otherwise. Alternately, starting with value 0 for the second edge, set the value of every even edge to 0 and of every odd edge to 2^{-k+1}. If the last edge should be set to 2^{-k+1} (that is, the path has odd length) but the last node is tight, set the value of that last edge to 0 instead.

If a node v is in the interior of the path, that is, not one of the endpoints, then v can have at most one of its incident edges increased to 2^{-k+1} while the other one decreases to 0. Hence the summation y_v does not increase. If v is the first or last node in the path, the value of the edge incident to v is increased only if v was loose, i.e., if y_v ≤ 1/2. In this case, we still have y_v ≤ 1 after the increase, as the edge value raises by at most 2^{-k} ≤ 1/2.

We now argue that the value of the matching has not decreased by too much during this update. For that, we group the edges into blocks of two consecutive edges, starting from the first edge. If the path has odd length, the last block consists of a single edge. The block value, that is, the sum of the values of its two edges, of every interior (neither first nor last) block is unaffected. If an endpoint v of a path is loose, the value of the block containing v remains unchanged or increases (in the case of an odd-length path ending in v). If v is tight, then the value of its block stays the same or decreases by 2^{-k}, which is at most a 2^{-k+1}-fraction of the value y_v > 1/2. This allows us to bound the loss in terms of these tight endpoints. The crucial observation is that every node can be the endpoint of a short path at most once. This is because, in the 2-decomposition, a node can be the endpoint of a path only if it has a degree-1 copy, which happens only for odd-degree vertices and then exactly once. Thus, we lose at most a 2^{-k+1}-fraction of Σ_v y_v when updating the values in short paths.
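The short-path update rule can be sketched as follows. This helper is our own illustration; edges are indexed 1..length along the path, and the looseness of the two path endpoints is passed in as booleans.

```python
def round_short_path(length, value, first_loose, last_loose):
    """Rounding of a short path whose edges all carry `value`:
    odd-indexed edges are raised to 2*value, even-indexed edges dropped
    to 0, except that a raised first/last edge is kept only if the
    corresponding path-endpoint node is loose."""
    new_vals = [2 * value if i % 2 == 1 else 0.0 for i in range(1, length + 1)]
    if not first_loose:
        new_vals[0] = 0.0  # a tight first node may not absorb the raise
    if length % 2 == 1 and not last_loose:
        new_vals[-1] = 0.0  # same for a tight last node on odd-length paths
    return new_vals
```

For interior blocks of two edges the block value is preserved exactly, matching the accounting in the text.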

**Wrap-Up:** First, we show that over all the rounding phases, the overall loss is only a constant fraction of the total value. Let x^{(k)} and y^{(k)} denote the edge and node values, respectively, before the phase eliminating all the edges with value 2^{-k}. Putting together the loss analyses discussed above, in phase k we lose at most an (O(1/ℓ) + 2^{-k+1})-fraction of the current total value.

It follows that the final values x' satisfy

Σ_e x'_e ≥ (1 − Σ_k (O(1/ℓ) + 2^{-k+1})) · Σ_e x_e ≥ Σ_e x_e / 2 ≥ |M*| / 8

for a maximum matching M*, recalling that we started with a 4-approximate maximum matching. Here, the second inequality holds because Σ_k (O(1/ℓ) + 2^{-k+1}) ≤ 1/2, as there are at most ⌈log Δ⌉ phases with ℓ = Θ(log Δ) large enough, and the geometric sum Σ_{k≥5} 2^{-k+1} is at most 1/8. Finally, observe that in all the rounding phases the constraints y_v ≤ 1 are preserved, since the value y_v can increase only when v is loose, i.e., y_v ≤ 1/2, and then by at most 2^{-k} ≤ 1/2.

**Step 3, From Almost Integral to Integral:** So far, we have an almost integral matching. Next, we round all edges to either 0 or 1, by finding a maximal matching in the graph induced by the edges with positive value.

In the given 1/16-fractional matching, x_e > 0 means x_e ≥ 1/16. Thus, a node cannot have more than 16 incident edges with non-zero value in this fractional matching. In this constant-degree subgraph, a maximal matching M can be found in O(1) rounds, recalling that we are given a 2-coloring. We have 2|M| ≥ Σ_e x'_e by (i), and, since we started with a 4-approximation and lost at most half of the value during rounding, M is a 16-approximate maximum matching.

#### Constant Approximate Maximum Matching in General Graphs

We explain how the approximation algorithm for maximum matchings in 2-colored bipartite graphs can be employed to find approximate maximum matchings in general graphs. The main idea is to transform the given general graph into a bipartite graph with the same edge set in such a way that a matching in this bipartite graph can be easily turned into a matching in the general graph.

Fix an arbitrary orientation of the edges E. Split every node v into two siblings v_out and v_in, and add an edge {u_out, v_in} to the bipartite graph G_bip for every oriented edge (u, v). Let the out-copies be the nodes with color 1 and the in-copies those with color 2. By the bipartite algorithm above, a constant approximate maximum matching M_bip in G_bip can be computed in O(log² Δ) rounds. We now go back to G, that is, merge v_out and v_in back into v. This makes the edges of M_bip incident to v_out or v_in now be incident to v, leaving us with a subgraph of G with maximum degree 2.

We compute a maximal matching M in this maximum-degree-2 subgraph. This can be done in O(log* n) rounds. If an O(Δ²)-coloring of G is provided, which implies an O(Δ²)-coloring of the subgraph, the round complexity of this step is merely O(log* Δ).

It follows from (i) that 2|M| ≥ |M_bip|, and hence 2c·|M| ≥ c·|M_bip| ≥ |M*_bip| ≥ |M*| for maximum matchings M*_bip in G_bip and M* in G, respectively, where c is the approximation ratio of the bipartite algorithm. Thus, M is a 2c-approximate maximum matching in G. The last inequality is true since, by introducing additional nodes but leaving the edge set unchanged (when going from G to G_bip), the maximum matching size cannot decrease.
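The reduction can be sketched in a few lines. The helper names and the 'out'/'in' tags are our own; the greedy maximal matching stands in for the distributed maximal matching step on the degree-2 subgraph.

```python
def bipartite_split(oriented_edges):
    """Each node v becomes siblings (v, 'out') (color 1) and (v, 'in')
    (color 2); an oriented edge (u, v) becomes {(u,'out'), (v,'in')}."""
    return [((u, 'out'), (v, 'in')) for (u, v) in oriented_edges]

def merge_back(bip_edges):
    """Merge the siblings again by dropping the 'out'/'in' tags; the
    result has maximum degree 2 if the input was a bipartite matching."""
    return [(u, v) for ((u, _), (v, _)) in bip_edges]

def greedy_maximal_matching(edges):
    """Sequential greedy stand-in for the distributed maximal matching."""
    used, matching = set(), []
    for (u, v) in edges:
        if u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    return matching
```

For example, on a triangle, all three bipartite copies of the edges are pairwise disjoint and can be matched simultaneously, while after merging back only one of them survives in a matching of G.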

### 4.2 Wrap-Up: (2+ε)-Approximate Matching and Maximal Matching

In this section, we iteratively invoke the constant approximation algorithm from the previous subsection to obtain algorithms for a (2+ε)-approximate maximum matching and for a maximal matching.

The approximation ratio of a matching algorithm can be improved from a constant c to 2+ε easily, by O(log(1/ε)) repetitions: each time, we apply the constant approximation algorithm to the remaining graph, and remove the found matching together with its neighboring edges from the graph.

Before explaining the details, we present the following frequently used trick.

Starting with G_1 = G, for i = 1, …, k, where k = O(log(1/ε)), iteratively compute a c-approximate maximum matching M_i in G_i, using the constant approximation algorithm. We delete M_i together with its incident edges from the graph, that is, set G_{i+1} to be G_i with M_i and all edges incident to M_i removed.

Now, we argue that the obtained matching M = ∪_{i=1}^{k} M_i is (2+ε)-approximate. To this end, we bound the size of a maximum matching in the remainder graph G_{k+1}.

Let M*_i be a maximum matching in G_i. An inductive argument shows that |M*_{i+1}| ≤ (1 − 1/c)^i · |M*_1|. Indeed, observe |M*_{i+1}| ≤ |M*_i| − |M_i| ≤ (1 − 1/c) · |M*_i|, where the first inequality holds since otherwise M*_{i+1} ∪ M_i would be a better matching than M*_i in G_i, contradicting the latter’s optimality. For k = O(log(1/ε)), we thus have |M*_{k+1}| ≤ (ε/2) · |M*_1|. As M is a maximal matching in G ∖ G_{k+1} by construction, M ∪ M*_{k+1} is a maximal matching in G. By (ii), this means that 2(|M| + |M*_{k+1}|) ≥ |M*_1|, hence (2+ε)·|M| ≥ |M*_1|, after rescaling ε by a constant factor.
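The boosting loop above can be sketched as follows. This is a sequential illustration under our own naming: the paper iterates its constant-approximation algorithm, for which we substitute a greedy maximal matching.

```python
def greedy_maximal_matching(edges):
    """Sequential greedy maximal matching (stand-in subroutine)."""
    used, matching = set(), []
    for (u, v) in edges:
        if u not in used and v not in used:
            matching.append((u, v))
            used.update((u, v))
    return matching

def boosted_matching(edges, k):
    """k iterations of: match in the remainder graph, then delete the
    matched nodes' edges. The union of the found matchings is returned."""
    matching, remaining = [], list(edges)
    for _ in range(k):
        found = greedy_maximal_matching(remaining)
        matching += found
        matched = {v for e in found for v in e}
        remaining = [(u, v) for (u, v) in remaining
                     if u not in matched and v not in matched]
    return matching
```

With the stand-in subroutine the first iteration is already maximal; with a weaker constant-approximation subroutine, the extra iterations shrink the remainder geometrically, as in the analysis above.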

We have k = O(log(1/ε)) iterations, each taking O(log² Δ + log* n) rounds. As mentioned earlier, by precomputing an O(Δ²)-coloring in O(log* n) rounds, the round complexity of each iteration can be decreased to O(log² Δ), leading to an overall running time of O(log² Δ · log(1/ε) + log* n) rounds.

If one increases the number of repetitions to O(log n), the found matching is maximal.

Apply the c-approximation algorithm for k = O(log n) iterations on the respective remainder graph, as described in the proof above. The same analysis (also adopting the notation from there) shows that a maximum matching in the remainder graph must have size |M*_{k+1}| ≤ (1 − 1/c)^k · |M*_1| < 1, which means that G_{k+1} is an empty graph. But then M must be maximal.

## 5 Almost Maximal Matching

In the previous section, we have seen how one can obtain a matching that reduces the size of the matching in the remainder graph, that is, the graph after removing the matching and all incident edges, by a constant factor. Intuitively, one would expect that this also reduces the number of remaining edges by a constant factor, which would directly lead to an (almost) maximal matching just by repetitions. However, this is not the case, since not every matched edge removes the same number of edges from the graph, particularly in non-regular graphs. This calls for an approach that weights edges incident to nodes of different degrees differently, which naturally brings into play weighted matchings.

Below, we present a fast algorithm that finds a constant approximation of maximum weighted matching, based on the algorithm of Section 4. Then, we use this algorithm, by assigning certain weights to the edges, to find a matching whose removal (together with its incident edges) deletes a constant fraction of the edges of G. Via O(log(1/ε)) repetitions of this, each time removing the found matching and its incident edges, we get an ε-maximal matching. More details are provided in the proof at the end of this section. Observe that when setting ε < 1/m, thus increasing the number of repetitions to O(log n), we obtain a maximal matching.

We assume without loss of generality that the edge weights are normalized, that is, from the set {1, …, W} for some maximum weight W. Round the weights w_e down to the next power of 4, resulting in weights w'_e. This rounding procedure lets us lose at most a 4-factor in the total weight, and provides us with a decomposition of G into graphs G_i, where G_i is induced by the edges with w'_e = 4^i, for i ∈ {0, 1, …, ⌊log₄ W⌋}.

In parallel, run the algorithm of Section 4 on every G_i to find a c-approximate maximum matching M_i in G_i. Observe that while the edges in ∪_i M_i do not form a matching, since edges from M_i and M_j for i ≠ j can be neighboring, a matching M can be obtained by deleting all but the highest-index edge in every such conflict, that is, by removing all edges e ∈ M_i with an incident edge e' ∈ M_j for a j > i.
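The bucketing and conflict-resolution steps can be sketched as follows. The helper names are ours, and the class base is left as a parameter (the discussion above uses a constant base).

```python
import math
from collections import defaultdict

def weight_classes(weighted_edges, base):
    """Bucket edges by rounding their weights (assumed >= 1) down to
    powers of `base`; class i holds the edges of rounded weight base^i."""
    classes = defaultdict(list)
    for (u, v, w) in weighted_edges:
        classes[math.floor(math.log(w, base))].append((u, v))
    return classes

def resolve_conflicts(matchings_by_class):
    """Combine per-class matchings into one matching, keeping the
    highest-index (heaviest) edge in every conflict."""
    used, kept = set(), []
    for i in sorted(matchings_by_class, reverse=True):
        for (u, v) in matchings_by_class[i]:
            if u not in used and v not in used:
                kept.append((u, v))
                used.update((u, v))
    return kept
```

Processing the classes from heaviest to lightest is one simple way to realize the "keep the highest-index edge" rule sketched in the text.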

In the following, we argue that the weight of M cannot be too small compared to the weight of ∪_i M_i, by an argument based on counting in two ways.

Every edge e ∈ (∪_i M_i) ∖ M puts blame on an edge in M as follows. Since e ∈ M_i was removed, there is an edge e' incident to e such that e' ∈ M_j for some j > i. If e' ∈ M, then e blames its weight w'_e on e'. If e' ∉ M, then e puts its blame on the same edge as e' does.

For an edge e ∈ M ∩ M_j and i < j, let β_i be the maximum number of edges from M_i that blame e. An inductive argument shows that β_i ≤ 2^{j-i}. Indeed, there can be at most two edges from M_{j-1} blaming e, at most one per endpoint of e, and, for i < j−1, we have β_i ≤ 2 + Σ_{i<i'<j} β_{i'}, since at most two edges in M_i can be incident to e and at most one further edge of M_i can be incident to each edge in M_{i'} for i < i' < j.

Therefore, overall, at most Σ_{i<j} 2^{j-i} · 4^{i-j} · w'_e ≤ w'_e weight is blamed on e. This means that w'(∪_i M_i) ≤ 2·w'(M), hence w'(M) ≥ w'(∪_i M_i)/2 ≥ w'(M*)/(2c), and lets us conclude that 8c·w(M) ≥ w(M*) for a maximum weighted matching M*.

Next, we explain how to use this algorithm to remove a constant fraction of edges, by introducing appropriately chosen weights. We define the weight of each edge to be the number of its incident edges. This way, an (approximate) maximum weighted matching corresponds to a matching that removes a large number of edges.

For each edge e = {u, v}, introduce the weight w_e = deg(u) + deg(v) − 2, that is, the number of edges incident to e, and apply the weighted matching algorithm above to find a c'-approximate maximum weighted matching M in G.

For the weight of a maximum weighted matching M*, it holds that 2·w(M*) ≥ |E| (assuming, without loss of generality, that there are no isolated edges, as those can simply be added to any matching), as the following simple argument based on counting in two ways shows. Let every edge in E put a blame on an edge in M* that is responsible for its removal from the graph, as follows. An edge e ∈ M* blames itself. An edge e ∉ M* blames an arbitrary incident edge e' ∈ M*. Notice that at least one such edge must exist, as otherwise M* would not even be maximal. In this way, |E| many blames have been put onto edges in M* such that no edge e ∈ M* is blamed more than w_e + 1 ≤ 2·w_e times, as e can be blamed by itself and any incident edge. Therefore, indeed 2·w(M*) ≥ |E|, and, as M is c'-approximate, it follows that w(M) ≥ |E|/(2c').

Now, observe that w_e + 1 is the number of edges that are deleted when removing e together with its incident edges from G. Since every edge can be incident to at most two matched edges (and thus can be deleted by at most two edges in the matching), in total at least Σ_{e∈M}(w_e + 1)/2 ≥ w(M)/2 ≥ |E|/(4c') many edges are removed from G when deleting the edges in and incident to M, which proves that M is a (1 − 1/(4c'))-maximal matching.
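The degree-based weights and the removal count can be checked with a short sketch (our helpers, for illustration):

```python
from collections import defaultdict

def incident_edge_weights(edges):
    """w_e = number of other edges sharing an endpoint with e."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {(u, v): deg[u] + deg[v] - 2 for (u, v) in edges}

def edges_removed_by(edges, matching):
    """Count the edges deleted when the matching and all edges incident
    to it are removed from the graph."""
    matched = {v for e in matching for v in e}
    return sum(1 for (u, v) in edges if u in matched or v in matched)
```

On a path with three edges, the middle edge has weight 2, and matching it alone removes all three edges, which is exactly its weight plus one, illustrating the accounting above.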

We iteratively invoke this algorithm to successively reduce the number of remaining edges.

For i = 1, …, k and G_1 = G, iteratively apply the algorithm above to G_i to get a (1 − 1/(4c'))-maximal matching M_i in G_i. Set G_{i+1} to be G_i with M_i and its neighboring edges removed. Then M = ∪_{i=1}^{k} M_i for k = O(log(1/ε)) is ε-maximal, since the number of remaining edges satisfies |E(G_{k+1})| ≤ (1 − 1/(4c'))^k · |E| ≤ ε·|E|, using (1 − 1/(4c'))^k ≤ e^{-k/(4c')}.

Overall, recalling that each iteration takes O(log² Δ) rounds after an O(log* n)-round precomputation of a coloring, this takes O(log² Δ · log(1/ε) + log* n) rounds.

## 6 Extensions and Corollaries

### 6.1 B-Matching

In this subsection, we explain that only slight changes to the algorithm of Section 4 are sufficient to make it suitable also for computing approximations of maximum b-matching. To this end, we first introduce an approximation algorithm for maximum b-matching in 2-colored bipartite graphs. Then, we extend this algorithm to work for general graphs. Finally, at the end of this subsection, we show that the approximation ratio can be improved to a value arbitrarily close to 2, simply by repetitions of this constant approximation algorithm.

This result is a direct consequence of the three lemmas that we present next. These lemmas respectively show how a fractional constant approximate b-matching can be found, how this fractional b-matching can be rounded to almost integrality, and how these almost integral values can be turned into an integral b-matching, while only losing a constant fraction of the total value. The proofs are very similar to the ones in Section 4, except for the very last step of rounding, which requires one extra step, as we shall discuss.

In the following, we call a node loose if , and tight otherwise. As before, an edge is called tight if either of its endpoints is tight; otherwise, the edge is called loose.

The next lemma shows how to obtain a -approximate maximum -matching in rounds. Alternatively, [20] find such a -matching in rounds.

As in , starting with (and thus ), in parallel, the edge values of non-tight edges with value are gradually increased by a -factor. This takes no more than rounds. We employ a simple argument based on counting in two ways to show that this yields a 4-approximation of a maximum -matching . Let each edge blame one of its tight endpoints, if one exists. If there is no tight endpoint, the value of the edge is , and the edge blames itself. In this way, each tight node  — which by definition has value  — is blamed at most times. Let split this blame uniformly among its incident edges in such that each edge is blamed at most twice its value . In this way, every edge is blamed at most , as it can be blamed by both of its tight endpoints, or by the edge itself if it has no tight endpoint. It follows that .
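The gradual-increase scheme can be sketched as a centralized simulation. In this hedged illustration, every edge starts at value 1/Δ, a node is considered tight once its load reaches half its capacity b(v), and all loose edges (value below 1, both endpoints loose) are doubled in parallel; the function names and the cap of edge values at 1 are our assumptions, chosen for illustration.

```python
def fractional_b_matching(nodes, edges, b):
    """Centralized sketch: start all edge values at 1/Delta, then repeatedly
    double every loose edge in parallel until no loose edge remains.
    A node is tight once its load reaches b(v)/2, so loads never exceed b(v)."""
    degree = {v: sum(1 for e in edges if v in e) for v in nodes}
    delta = max(degree.values())
    x = {e: 1.0 / delta for e in edges}

    def load(v):
        return sum(val for e, val in x.items() if v in e)

    def loose(v):
        return load(v) < b[v] / 2.0

    while True:
        to_double = [e for e in edges
                     if x[e] < 1.0 and loose(e[0]) and loose(e[1])]
        if not to_double:
            break
        for e in to_double:
            x[e] = min(1.0, 2.0 * x[e])
    return x
```

Since a loose node has load below b(v)/2 and only its incident loose edges double, its load at most doubles in a round and thus stays below b(v), which is the invariant the analysis relies on.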

Next, we transform this fractional solution into an almost integral solution, which is still a constant approximation.

As in the proof of , the edges of values for are eliminated. We derive analogously that the fractional matching obtained at the end is a -approximation, observing that changing the condition for tightness of a node from to only helps in the analysis.

In a final step, the almost integral solution is transformed into an integral one. Notice that for -matchings, as opposed to standard matchings, the subgraph induced by edges with positive value need not have constant degree. In fact, a node can have up to incident edges with non-zero value. This prevents us from directly applying the algorithm of to find a maximal matching in the subgraph with non-zero edge values, as this could take rounds.

We decompose the edge set induced by edges of positive value in the -fractional maximum -matching into constant-degree subgraphs , as follows. We make at most copies of node , and we arbitrarily split the edges among these copies in such a way that every copy has degree at most 16. This is done in a manner similar to the -decomposition procedure.
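A minimal sketch of this splitting step follows: each node distributes its incident edges among batches of 16, so that every virtual copy has degree at most 16. Representing copies as `(node, index)` pairs and the function name are our choices for illustration; a simple graph with hashable edge tuples is assumed.

```python
from collections import defaultdict

def split_into_copies(edges, cap=16):
    """Replace each node by ceil(deg/cap) virtual copies and distribute its
    incident edges among them so every copy has degree at most cap."""
    incident = defaultdict(list)
    for e in edges:
        incident[e[0]].append(e)
        incident[e[1]].append(e)
    # Copy i of node v receives the i-th batch of cap incident edges.
    copy_of = {}
    for v, inc in incident.items():
        for i, e in enumerate(inc):
            copy_of[(v, e)] = (v, i // cap)
    return [(copy_of[(e[0], e)], copy_of[(e[1], e)]) for e in edges]
```

Every original edge survives with both endpoints replaced by copies, so a matching computed on the split graph maps directly back to a set of original edges.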

In parallel, run the algorithm of on each , in rounds. This yields a maximal matching for each that trivially, by (i), satisfies the condition . Now, let . Since each node occurs in at most subgraphs and each is a matching in , node cannot have more than incident edges in . Thus, indeed, is a -matching. Finally, observe that is -approximate, since .

A similar argument as in shows that the algorithm for approximate maximum -matchings in bipartite graphs from can be adapted to work for general graphs.

Do the same reduction to a bipartite graph as in the proof of , that is, create an in- and an out-copy of every node, and, for an arbitrary orientation of the edges, make each oriented edge incident to the respective copy of the corresponding nodes.

Compute a -approximate maximum -matching in using the algorithm of . Merging back the two copies of a node into one yields a graph with degree of node bounded by , as and both can have at most incident edges in . Now, compute a -decomposition of this graph. On each component with edges , find a maximal matching in rounds by the algorithm of .
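The bipartite reduction above can be sketched in a few lines. In this hedged illustration we orient each edge from its lower to its higher endpoint id (any fixed orientation works) and connect the out-copy of the tail to the in-copy of the head; tagging copies with the strings `'out'` and `'in'` is our representation, not the paper's.

```python
def bipartite_double_cover(edges):
    """Create an in- and an out-copy of every node; each edge, oriented from
    lower to higher endpoint id, joins the tail's out-copy to the head's
    in-copy, yielding a 2-colored bipartite graph."""
    cover = []
    for u, v in edges:
        tail, head = (u, v) if u < v else (v, u)
        cover.append(((tail, 'out'), (head, 'in')))
    return cover
```

All out-copies form one side of the bipartition and all in-copies the other, and merging the two copies of a node back together at most doubles the degree bound guaranteed for each copy.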

Notice that for each node without a degree-1 copy, its degree is at least halved in compared to , and thus at most . If a node has a degree-1 copy, then its degree need not be halved. But this can happen only if ’s degree in is odd, thus at most . In this case, has at most degree-2 copies and one degree-1 copy, which means that its degree in is upper bounded by . We conclude that is indeed a -matching.

Moreover, it follows from by (i) that for maximum -matchings in and in . Thus, is -approximate.

Starting with , , and for all , for , iteratively apply the algorithm of to with -values to obtain a -approximate maximum -matching in . Update and with , that is, reduce the -value of each vertex by the number of incident edges in the matching and remove as well as all the edges incident to a node with remaining -value 0 from the graph. The same analysis as in the proof of for standard matchings in goes through and concludes the proof.

### 6.2 Weighted Matching

Using the idea from [26], we can iteratively invoke the constant-approximation algorithm of times to get a -approximate maximum weighted matching.

In each of the iterations , we set up a new auxiliary weighted graph as follows. Let be the matching obtained in the previous iteration. For every edge , let , and for every edge , set to the gain obtained if is added to and the (possibly) incident edges in are deleted (if this change loses weight, we set ). We then run the algorithm of to get a -approximate maximum weighted matching in this auxiliary graph, and augment along the edges of this matching, i.e., add all its edges to and remove all possibly incident matched edges. Lemma 4.3 in [26] (see also [32]) shows that then . Thus, after iterations, a -approximate maximum weighted matching is found.
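One such augmentation iteration admits a minimal centralized sketch. Here `gain_weights` assigns auxiliary weight 0 to edges already in the matching and, to every other edge, the net weight gained by inserting it and deleting its (at most two) incident matched edges, floored at 0; `augment` then applies a matching of the auxiliary graph. The function names are illustrative.

```python
def gain_weights(edges, w, matching):
    """Auxiliary weights for one augmentation round: matched edges get 0,
    every other edge gets its net swap-in gain, floored at 0."""
    matched_at = {}
    for e in matching:
        matched_at[e[0]] = e
        matched_at[e[1]] = e
    in_m = set(matching)
    gains = {}
    for e in edges:
        if e in in_m:
            gains[e] = 0.0
        else:
            conflicts = {matched_at[x] for x in e if x in matched_at}
            gains[e] = max(0.0, w[e] - sum(w[f] for f in conflicts))
    return gains

def augment(matching, aug):
    """Add the augmenting edges and drop matched edges incident to them."""
    hit = {x for e in aug for x in e}
    return [e for e in matching if hit.isdisjoint(e)] + list(aug)
```

On a weighted path, augmenting along the single positive-gain edge strictly increases the matching's weight, as in the analysis.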

### 6.3 Edge Dominating Set

An edge dominating set is a set such that for every there is an such that . A minimum edge dominating set is an edge dominating set of minimum cardinality. Since any maximal matching is an edge dominating set, an almost maximal matching can easily be turned into an edge dominating set: in addition to the edges of the almost maximal matching, add all remaining (at most many) edges to the edge dominating set. When is small enough, the obtained edge dominating set is a good approximation to the minimum edge dominating set. We next make this relation more precise.

Apply the algorithm of with , say, to find an -maximal matching in . It is easy to see that is an edge dominating set. Moreover, due to the fact that a minimum maximal matching is a minimum edge dominating set (see e.g. [37]) and since maximal matchings can differ by at most a -factor from each other, by (ii), it follows, also from (i), that .
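The reduction above amounts to a one-step completion, sketched here centrally. The parameter `matcher` stands in for the almost-maximal matching subroutine and is an assumption of this illustration; `dominates` is a helper we add only to state the guarantee.

```python
def edge_dominating_set(edges, matcher):
    """Take an (almost) maximal matching and add every edge it leaves
    undominated; the union dominates all edges of the graph."""
    matching = matcher(edges)
    hit = {x for e in matching for x in e}
    leftovers = [e for e in edges if hit.isdisjoint(e)]
    return list(matching) + leftovers

def dominates(dom, edges):
    """Check that every edge shares an endpoint with some edge in dom."""
    covered = {x for e in dom for x in e}
    return all(u in covered or v in covered for u, v in edges)
```

Even with a stand-in matcher that matches only a single edge, the completed set dominates every edge, and its size exceeds the matching's by exactly the number of leftover edges, which the analysis bounds by .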

## Acknowledgment

I want to thank Mohsen Ghaffari for suggesting this topic, for his guidance and his support, as well as for the many valuable and enlightening discussions. I am also thankful to Seth Pettie for several helpful comments.

### Footnotes

1. For instance, our improvement in the deterministic complexity of maximal matching directly improves the randomized complexity of maximal matching, as we formally state in .
2. Stating this result in full generality requires some definitions. See [11] for the precise statement.
3. Any fractional maximum matching can be transformed to this format, with at most a -factor loss in the total value: simply round down each value to the next power of , and then drop edges with values below .
4. This simple idea has been used frequently before. For instance, it gives an almost trivial proof of Petersen’s 2-factorization theorem from 1891 [29]. It has also been used by [19].
5. Our algorithm actually does something slightly different, but describing this ideal procedure is easier.

### References

1. A fast and simple randomized parallel algorithm for the maximal independent set problem.
Noga Alon, László Babai, and Alon Itai. Journal of Algorithms, 7(4):567–583, 1986.
2. Space-efficient local computation algorithms.
Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1132–1139, 2012.
3. The locality of distributed symmetry breaking.
Leonid Barenboim, Michael Elkin, Seth Pettie, and Johannes Schneider. In Proceedings of the Symposium on Foundations of Computer Science (FOCS), pages 321–330, 2012.
4. Distributed algorithm for better approximation of the maximum matching.
Andrzej Czygrinow and Michał Hańćkowiak. In International Computing and Combinatorics Conference, pages 242–251, 2003.
5. Distributed algorithm for approximating the maximum matching.
Andrzej Czygrinow, Michał Hańćkowiak, and Edyta Szymańska. Discrete Applied Mathematics, 143(1):62–71, 2004.
6. A fast distributed algorithm for approximating the maximum matching.
Andrzej Czygrinow, Michał Hańćkowiak, and Edyta Szymańska. In Proceedings of the Annual European Symposium on Algorithms (ESA), volume 3221, pages 252–263, 2004.
7. An exponential separation between randomized and deterministic complexity in the LOCAL model.
Yi-Jun Chang, Tsvi Kopelowitz, and Seth Pettie. In Proceedings of the Symposium on Foundations of Computer Science (FOCS), pages 615–624, 2016.
8. Deterministic stateless centralized local algorithms for bounded degree graphs.
Guy Even, Moti Medina, and Dana Ron. In Proceedings of the Annual European Symposium on Algorithms (ESA), pages 394–405, 2014.
9. An improved distributed algorithm for maximal independent set.
Mohsen Ghaffari. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 270–277, 2016.
10. Linear-in-Δ lower bounds in the LOCAL model.
Mika Göös, Juho Hirvonen, and Jukka Suomela. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pages 86–95, 2014.
11. On the complexity of local distributed graph problems.
Mohsen Ghaffari, Fabian Kuhn, and Yannic Maus. In Proceedings of the Symposium on Theory of Computing (STOC), pages 784–797, 2017.
12. Distributed degree splitting, edge coloring, and orientations.
Mohsen Ghaffari and Hsin-Hao Su. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2505–2523, 2017.
13. On the distributed complexity of computing maximal matchings.
Michał Hańćkowiak, Michał Karoński, and Alessandro Panconesi. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 219–225, 1998.
14. On the distributed complexity of computing maximal matchings.
Michał Hańćkowiak, Michał Karoński, and Alessandro Panconesi. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 219–225, 1998.
15. A faster distributed algorithm for computing maximal matchings deterministically.
Michał Hańćkowiak, Michał Karoński, and Alessandro Panconesi. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pages 219–228, 1999.
16. Distributed maximal matching: Greedy is optimal.
Juho Hirvonen and Jukka Suomela. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pages 165–174, 2012.
17. Distributed (Δ+1)-coloring in sublogarithmic rounds.
David G. Harris, Johannes Schneider, and Hsin-Hao Su. In Proceedings of the Symposium on Theory of Computing (STOC), pages 465–478, 2016.
18. A fast and simple randomized parallel algorithm for maximal matching.
Amos Israeli and Alon Itai. Information Processing Letters, 22(2):77–80, 1986.
19. An improved parallel algorithm for maximal matching.
Amos Israeli and Yossi Shiloach. Information Processing Letters, 22(2):57–60, 1986.
20. The price of being near-sighted.
Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 980–989, 2006.
21. Local computation: Lower and upper bounds.
Fabian Kuhn, Thomas Moscibroda, and Roger Wattenhofer. J. ACM, 63(2):17:1–17:44, 2016.
22. Distributed fractional packing and maximum weighted b-matching via tail-recursive duality.
Christos Koufogiannakis and Neal E. Young. In Proceedings of the International Symposium on Distributed Computing (DISC), pages 221–238, 2009.
23. Distributive graph algorithms - global solutions from local data.
Nathan Linial. In Proceedings of the Symposium on Foundations of Computer Science (FOCS), pages 331–335, 1987.
24. Locality in distributed graph algorithms.
Nathan Linial. SIAM Journal on Computing, 21(1):193–201, 1992.
25. Improved distributed approximate matching.
Zvi Lotker, Boaz Patt-Shamir, and Seth Pettie. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pages 129–136, 2008.
26. Improved distributed approximate matching.
Zvi Lotker, Boaz Patt-Shamir, and Seth Pettie. Journal of the ACM (JACM), 62(5), 2015.
27. Distributed approximate matching.
Zvi Lotker, Boaz Patt-Shamir, and Adi Rosen. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pages 167–174, 2007.
28. A simple parallel algorithm for the maximal independent set problem.
Michael Luby. SIAM Journal on Computing, 15(4):1036–1053, 1986.
29. Julius Petersen’s theory of regular graphs.
Henry Martyn Mulder. Discrete Mathematics, 100(1-3):157–175, 1992.
30. Some simple distributed algorithms for sparse networks.
Alessandro Panconesi and Romeo Rizzi. Distributed Computing, 14(2):97–100, 2001.
31. Approximating the minimum vertex cover in sublinear time and a connection to distributed algorithms.
Michal Parnas and Dana Ron. Theoretical Computer Science, 381(1):183–196, 2007.
32. A simpler linear time 2/3 − ε approximation for maximum weight matching.
Seth Pettie and Peter Sanders. Information Processing Letters, 91(6):271–276, 2004.
33. Fast primal-dual distributed algorithms for scheduling and matching problems.
Alessandro Panconesi and Mauro Sozio. Distributed Computing, 22(4):269–283, 2010.
34. Fast local computation algorithms.
Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. In Proceedings of the Symposium on Innovations in Computer Science (ICS), pages 223–238, 2011.
35. Distributed algorithms for edge dominating sets.
Jukka Suomela. In Proceedings of the ACM Symposium on Principles of Distributed Computing (PODC), pages 365–374, 2010.
36. Distributed weighted matching.
Mirjam Wattenhofer and Roger Wattenhofer. In Proceedings of the International Symposium on Distributed Computing (DISC), pages 335–348, 2004.
37. Edge dominating sets in graphs.
Mihalis Yannakakis and Fanica Gavril. SIAM Journal on Applied Mathematics, 38(3):364–372, 1980.