Randomized Composable Coresets for Matching and Vertex Cover


Sepehr Assadi    Sanjeev Khanna

Department of Computer and Information Science, University of Pennsylvania. Supported in part by National Science Foundation grants CCF-1552909, CCF-1617851, and IIS-1447470.
Email: {sassadi,sanjeev}@cis.upenn.edu.
Abstract

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say, $k$ machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication-efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of $\text{polylog}(n)$ requires using representative summaries of size $n^{2-o(1)}$, i.e., essentially no better summary exists than each machine simply sending its entire input graph.

The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size $\tilde{O}(n)$ that yield an $\tilde{O}(1)$-approximate solution. (Here and throughout the paper, we use the $\tilde{O}(\cdot)$ notation to suppress $\text{polylog}(n)$ factors, where $n$ is the number of vertices in the graph.) In other words, a small subgraph of the input graph at each machine can be identified as its representative summary, and the final answer is then obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an $\tilde{O}(1)$-approximation simultaneous protocol for these problems with $\tilde{O}(nk)$ total communication when the input is randomly partitioned across $k$ machines. We also prove that our results are optimal in a very strong sense: we not only rule out the existence of smaller randomized composable coresets for these problems, but in fact show that our $\tilde{O}(nk)$ bound for total communication is optimal for any simultaneous communication protocol (i.e., not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantees in one or two rounds of communication, improving the previous best known round complexity for these problems.

1 Introduction

Recent years have witnessed tremendous algorithmic advances for efficient processing of massive data sets. A common approach for designing scalable algorithms for massive data sets is to distribute the computation across multiple machines that are interconnected via a communication network. These machines can then jointly compute a function on the union of their inputs by exchanging messages. Two main measures of efficiency in this setting are the communication cost and the round complexity; we formally define these terms in detail later in the paper, but for the purpose of this section, the communication cost measures the total number of bits exchanged by all machines and the round complexity measures the number of rounds of interaction between them.

An important and widely studied framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains a solution for the desired problem from the union of the representative summaries of all pieces. The appeal of this framework lies in the simple fact that simultaneous protocols are inherently round-optimal: they require only one round of interaction. The only measure that remains to be optimized is the communication cost, which is determined by the size of the summary created by each machine. An understanding of the communication cost of a problem in the simultaneous model turns out to have value in other models of computation as well. For instance, a lower bound on the maximum communication needed by any machine implies a matching lower bound on the space complexity of the same problem in dynamic streams [47, 7].

Two particularly successful techniques for designing small summaries for simultaneous protocols are linear sketches and composable coresets. The linear sketching technique corresponds to taking a linear projection of the input data as its representative summary. The "linearity" of the sketches is then used to obtain a sketch of the combined pieces from which the final solution can be extracted. There has been a considerable amount of work on designing linear sketches for graph problems in recent years [5, 6, 40, 10, 20, 18, 41, 17, 50]. Coresets are subgraphs (in general, subsets of the input) that suitably preserve properties of a given graph, and they are said to be composable if the union of coresets for a collection of graphs yields a coreset for the union of the graphs. Composable coresets have also been studied extensively recently [11, 12, 15, 36, 53, 52], and indeed several graph problems admit natural composable coresets; for instance, connectivity, cut sparsifiers, and spanners (see [49], Section 2.2; the "merge and reduce" approach). Successful applications of these two techniques have yielded $\tilde{O}(n)$-size summaries for many graph problems (see further related work in Section 1.3). However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently [10] that both matching and vertex cover require summaries of size $n^{2-o(1)}$ for even computing a $\text{polylog}(n)$-approximate solution. (The authors in [10] only showed the inapproximability result for the matching problem; however, a simple modification of their result proves an identical lower bound for the vertex cover problem as well.)

This state of affairs is the starting point for our work, namely, the intractability of matching and vertex cover in the simultaneous communication model. Our main insight is that a natural data-oblivious partitioning scheme completely alters this landscape: both problems admit $\tilde{O}(1)$-approximate composable coresets of size $\tilde{O}(n)$ provided the edges of the graph are randomly partitioned across the machines. The idea that random partitioning of data can help in distributed computation was nicely illustrated in the recent work of [52] on maximizing submodular functions. Our work can be seen as the first illustration of this idea in the domain of graph algorithms. The applicability of this idea to graph-theoretic problems had been cast as an open problem in [52].

Randomized Composable Coresets

We follow the notation of [52] with a slight modification to adapt it to our application to graphs. Let $E$ be the edge-set of a graph $G(V,E)$; we say that a partition $\{E^{(1)}, \ldots, E^{(k)}\}$ of the edges is a random $k$-partitioning iff the sets are constructed by assigning each edge in $E$ independently to a set $E^{(i)}$ chosen uniformly at random. A random partitioning of the edges naturally defines a partitioning of the graph $G$ into $k$ graphs $G^{(1)}, \ldots, G^{(k)}$ whereby $G^{(i)} := G(V, E^{(i)})$ for any $i \in [k]$, and hence we use random partitioning for both the edge-set and the input graph interchangeably.
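As a concrete illustration, the following Python snippet (a minimal sketch; the function and variable names are ours and not part of the paper) constructs a random $k$-partitioning of an edge list:

    import random

    def random_k_partitioning(edges, k, seed=None):
        # Assign each edge independently to one of k parts chosen uniformly at random.
        rng = random.Random(seed)
        parts = [[] for _ in range(k)]
        for e in edges:
            parts[rng.randrange(k)].append(e)
        return parts

    # Example: partition the edges of a 4-cycle across k = 2 machines.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    E1, E2 = random_k_partitioning(edges, k=2, seed=7)

Note that, unlike an adversarial partition, every machine's subgraph here is a uniform, independent sample of the edges, which is exactly the property our coresets exploit.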

Definition (Randomized Composable Coresets [52]).

For a graph-theoretic problem $P$, consider an algorithm ALG that, given any graph $G(V,E)$, outputs a subgraph $\text{ALG}(G)$ of $G$ with at most $s$ edges. Let $G^{(1)}, \ldots, G^{(k)}$ be a random $k$-partitioning of a graph $G$. We say that ALG outputs an $\alpha$-approximation randomized composable coreset of size $s$ for $P$ if $P\big(\text{ALG}(G^{(1)}) \cup \cdots \cup \text{ALG}(G^{(k)})\big)$ is an $\alpha$-approximation for $P(G)$ w.h.p., where the probability is taken over the random choice of the $k$-partitioning. For brevity, we use randomized coresets to refer to randomized composable coresets.

We further augment this definition by allowing the coresets to also contain a fixed solution to be directly added to the final solution of the composed coresets. In this case, the size of the coreset is measured as the number of edges in the output subgraph plus the number of vertices and edges picked by the fixed solution (this is mostly relevant for our coreset for the vertex cover problem).
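To make the composition step concrete, the following sketch (again with our own hypothetical names, reusing random_k_partitioning from the snippet above and the networkx library) empirically compares the value of a solution computed on the union of the coresets against the value on the whole graph:

    import networkx as nx

    def coreset_quality(G, k, alg, solve, seed=None):
        # alg(H) returns the coreset of H as a list of edges; solve(H) returns
        # the value of the optimization problem P on H. We compare P on the
        # union of the coresets of a random k-partitioning against P on G.
        parts = random_k_partitioning(list(G.edges()), k, seed)
        union = nx.Graph()
        for E_i in parts:
            union.add_edges_from(alg(nx.Graph(E_i)))
        return solve(union), solve(G)

For instance, instantiating alg with a maximum matching subroutine and solve with the maximum matching size tests Result 1 for matching on small inputs.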

1.1 Our Results

We show the existence of randomized composable coresets for matching and vertex cover.

Result 1.

There exist randomized coresets of size $\tilde{O}(n)$ that w.h.p. (over the random partitioning of the input) give an $O(1)$-approximation for maximum matching, and an $O(\log n)$-approximation for minimum vertex cover.

In contrast to the above result, when the graph is adversarially partitioned, the results of [10] show that the best approximation ratio conceivable for these problems in $\tilde{O}(n)$ space is $n^{\Omega(1)}$. We further remark that Result 1 can also be extended to the weighted version of these problems. Using the Crouch–Stubbs technique [22], one can extend our result to achieve a coreset for weighted matching (with a constant factor loss in approximation and an extra $O(\log n)$ factor in the space). Similar ideas of "grouping by weight" of edges can also be used to extend our coreset for weighted vertex cover with an $O(\log n)$ factor loss in approximation and space; we omit the details.

The space bound achieved by our coresets above is considered a "sweet spot" for graph streaming algorithms [54, 30], as many fundamental problems are provably intractable in $o(n)$ space (sometimes not even enough to store the answer) while admitting efficient solutions in $\tilde{O}(n)$ space. However, in the simultaneous model, these considerations imply only that the total size of all coresets must be $\Omega(n)$, leaving open the possibility that the coreset output by each machine may be as small as $\tilde{O}(n/k)$ in size (similar in spirit to the coresets of [52]). Our next result rules out this possibility and proves the optimality of our coreset size.


Result 2.

Any $\alpha$-approximation randomized coreset for the matching problem must have size $\Omega(n/\alpha^2)$, and any $\alpha$-approximation randomized coreset for the vertex cover problem must have size $\Omega(n/\alpha)$.

We now elaborate on some applications of our results.

Distributed Computation

We use the following distributed computation model in this paper, referred to as the coordinator model (see, e.g., [62]). The input is distributed across $k$ machines. There is also an additional party called the coordinator who receives no input. The machines are allowed to communicate only with the coordinator, not with each other. A protocol in this model is called a simultaneous protocol iff the machines simultaneously send a message to the coordinator and the coordinator then outputs the answer with no further interaction. The communication cost of a protocol in this model is the total number of bits communicated by all parties.

Result 1 can also be used to design simultaneous protocols for matching and vertex cover with $\tilde{O}(nk)$ total communication and the same approximation guarantees stated in Result 1, in the case where the input is partitioned randomly across the machines. Indeed, each machine only needs to compute a coreset of its input and send it to the coordinator, and the coordinator then computes an exact maximum matching or a $2$-approximate minimum vertex cover on the union of the coresets. We further prove that the communication costs of these protocols are essentially optimal.
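The following sketch (our own illustrative code, using the networkx library; the names are not from the paper) shows the resulting one-round protocol for matching:

    import networkx as nx

    def machine_message(edge_list):
        # Each machine's entire message: an arbitrary maximum matching of its subgraph.
        H = nx.Graph(edge_list)
        return list(nx.max_weight_matching(H, maxcardinality=True))

    def coordinator_matching(messages):
        # The coordinator computes a maximum matching on the union of the k coresets.
        union = nx.Graph()
        for matching in messages:
            union.add_edges_from(matching)
        return nx.max_weight_matching(union, maxcardinality=True)

Since each message consists of at most $n/2$ edges, the total communication is $O(nk \log n)$ bits, in line with the bound stated above.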


Result 3.

Any $\alpha$-approximation simultaneous protocol for the maximum matching problem, resp. the vertex cover problem, requires total communication of $\Omega(nk/\alpha^2)$ bits, resp. $\Omega(nk/\alpha)$ bits, even when the input is partitioned randomly across the machines.

Result 3 is a strengthening of Result 2; it rules out any representative summary (not necessarily a randomized coreset) of size $o(n/\alpha^2)$ (resp. $o(n/\alpha)$) that can be used for $\alpha$-approximation of matching (resp. vertex cover) when the input is partitioned randomly.

For the matching problem, it was shown previously in [35] that when the input is adversarially partitioned in the coordinator model, any protocol (not necessarily simultaneous) requires $\Omega(nk/\alpha^2)$ bits of communication to achieve an $\alpha$-approximation of the maximum matching. Result 3 extends this to the case of randomly partitioned inputs, albeit only for simultaneous protocols.

MapReduce Framework

We show how to use our randomized coresets to obtain improved MapReduce algorithms for matching and vertex cover in the MapReduce computation model formally introduced in [42, 46]. Let $k$ be the number of machines, each with a memory of size $\tilde{O}(nk)$; we show that two rounds of MapReduce suffice to obtain an $O(1)$-approximation for matching and an $O(\log n)$-approximation for vertex cover (see the sketch below). In the first round, each machine randomly partitions the edges assigned to it across the $k$ machines; this results in a random $k$-partitioning of the graph across the machines. In the second round, each machine sends a randomized composable coreset of its input to a designated central machine $M^*$; as there are $k$ machines and each machine is sending a coreset of size $\tilde{O}(n)$, the input received by $M^*$ is of size $\tilde{O}(nk)$ and hence can be stored entirely on that machine. Finally, $M^*$ computes the answer by combining the coresets (similar to the case in the coordinator model). Note that if the input were distributed randomly in the first place, we could implement this algorithm in only one round of MapReduce (see [52] for details on when this assumption applies).
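A schematic rendering of the two rounds (a sketch under the same assumptions and helper names as the earlier snippets):

    def mapreduce_round_one(local_edges, k, seed):
        # Each machine re-shuffles its own edges uniformly across the k machines,
        # realizing a random k-partitioning of the entire input.
        return random_k_partitioning(local_edges, k, seed)

    def mapreduce_round_two(shuffled_parts, k):
        # Machine i now holds the i-th part of the random partition and sends its
        # coreset (a maximum matching of its subgraph) to the central machine M*.
        parts = [sum((shuf[i] for shuf in shuffled_parts), []) for i in range(k)]
        messages = [machine_message(p) for p in parts]
        return coordinator_matching(messages)  # executed on M*

Here shuffled_parts collects the outputs of round one from all machines; in an actual MapReduce implementation, the grouping at the start of round two is performed by the shuffle step rather than explicitly.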

Our MapReduce algorithm outperforms the previous algorithms of [46] for matching and vertex cover in terms of the number of rounds it uses, albeit with a larger approximation guarantee. In particular, [46] achieved a $2$-approximation to both matching and vertex cover in $O(\log n)$ rounds of MapReduce when using similar space as ours on each machine (and the number of rounds of this algorithm is always at least a constant larger than two even if we allow $n^{1+\Omega(1)}$ space per machine). The improvement in the number of rounds is significant in this context; the transition between different rounds in a MapReduce computation is usually the dominant cost of the computation [46], and hence minimizing the number of rounds is an important goal in the MapReduce framework.

1.2 Our Techniques

Randomized Coreset for Matching

Greedy and local search algorithms are the typical choices for composable coresets (see, e.g., [36, 52]). It is then natural to consider the greedy algorithm for the maximum matching problem as a randomized coreset: the one that computes a maximal matching. However, one can easily show that this choice of coreset performs poorly in general; there are simple instances in which choosing an arbitrary maximal matching in each machine's subgraph results in only an $\omega(1)$-approximation.

Somewhat surprisingly, we show that a simple change in strategy results in an efficient randomized coreset: any maximum matching of the graph can be used as an $O(1)$-approximate randomized coreset for the maximum matching problem. Unlike the previous work in [52, 36] that relied on analyzing a specific algorithm (or a specific family of algorithms) for constructing a coreset, we prove this result by exploiting structural properties of the maximum matching (i.e., the optimal solution) directly, independent of the algorithm that computes it. As a consequence, our coreset construction requires no prior coordination (such as the consistent tie-breaking rules used in [52]) between the machines, and in fact each machine can use a different algorithm for computing the maximum matching required by the coreset.

Randomized Coreset for Vertex Cover

In light of our coreset for the matching problem, one might wonder whether a minimum vertex cover of a graph can also be used as its randomized coreset. However, it is easy to show that the answer is negative here; there are simple instances (e.g., a star on $n$ vertices) on which this leads to an $\Omega(n)$ approximation ratio. Indeed, the feasibility constraint in the vertex cover problem depends heavily on the input graph as a whole and not only on the coreset computed by each machine, unlike the case for matching and in fact most problems that admit a composable coreset [12, 36, 52]. This suggests the necessity of using edges in the coreset to certify the feasibility of the answer. On the other hand, only sending edges seems too restrictive: a vertex of sufficiently high degree can safely be assumed to be in a near-optimal vertex cover, but to certify this, one needs to essentially communicate all the edges incident on it. This naturally motivates a slightly more general notion of coresets: the coreset contains both a subset of vertices (to be always included in the final vertex cover) and a subset of edges (to guide the choice of additional vertices in the vertex cover).

To obtain a randomized coreset for vertex cover, we employ an iterative "peeling" process where in each iteration we remove the vertices with the highest residual degree (and add them to the final vertex cover) and continue until the residual graph is sufficiently sparse, in which case we can return this subgraph as the coreset. The process itself is a modification of the algorithm of Parnas and Ron [59]; we point out that other modifications of this algorithm have also been used previously for matching and vertex cover [58, 38, 16].

However, to employ this algorithm as a coreset we need to argue that the set of vertices peeled across different machines is not too large, as these vertices are added directly to the final vertex cover. The intuition behind this is that random partitioning of the edges of the graph should result in vertices having essentially the same (scaled) degree across the machines, and hence each machine should peel the same set of vertices in each iteration. But this intuition runs into a technical difficulty: the peeling process is quite sensitive to the exact degrees of vertices, and even slight changes in degree result in moving vertices between different iterations, potentially leading to a cascading effect. To address this, we design a hypothetical peeling process (which is aware of the actual minimum vertex cover in $G$) and show that our actual peeling process is in fact "sandwiched" between two applications of this hypothetical process with different degree thresholds for peeling vertices. We then use this to argue that the set of all vertices peeled across the machines is always contained in the solution of the hypothetical peeling process, which in turn can be shown to be a relatively small set.

Lower Bounds for Randomized Coresets.

Our lower bound results for randomized coresets for matching are based on the following simple distribution: the input graph consists of the union of two bipartite graphs, one of which, $E_D$, is a random graph of degree roughly $k$, while the other, $E_M$, is a perfect matching on the remaining vertices. Thus the input graph almost certainly contains a large matching, and any $\alpha$-approximate solution must collect $\Omega(n/\alpha)$ edges of $E_M$ overall, i.e., $\Omega(n/\alpha k)$ edges of $E_M$ from each machine on average. After random partitioning, the input given to each machine is essentially a matching consisting of a $1/k$ fraction of $E_M$ together with an induced matching of size $\Theta(n)$ from $E_D$. The local information at each machine is not sufficient to differentiate between the edges of $E_M$ and $E_D$, and thus any coreset that aims to include the edges of $E_M$ cannot reduce the input size by much. Somewhat similar ideas can also be shown to work for the vertex cover problem.

Communication Complexity Lower Bounds

We briefly highlight the ideas used in obtaining the lower bounds described in Result 3. We focus on the vertex cover problem to describe our techniques. Our lower bound result is based on analyzing (a variant of) the following distribution: the input graph consists of a bipartite graph $H$ plus a single edge $e^*$. The graph $H$ is a bipartite graph between vertex sets $A$ and $B$, with each vertex in $A$ connected to random neighbors in $B$, and $e^*$ is an edge chosen uniformly at random between the remaining vertices. This way, the graph admits a minimum vertex cover of size at most $|A| + 1$. However, when this graph is randomly partitioned, the input to each machine is essentially a matching chosen from the graph $H$, with possibly one more edge, namely $e^*$ (in exactly one machine chosen uniformly at random). The local information at the machine receiving $e^*$ is not sufficient to differentiate between the edges of $H$ and the edge $e^*$, and thus if the message sent by this machine is much smaller than its input size (in bits), it most likely does not "convey enough information" to the coordinator about the identity of $e^*$. This in turn forces the coordinator to use many additional vertices in order to cover $e^*$, resulting in an approximation factor larger than $\alpha$.

Making this intuition precise is complicated by the fact that the inputs across the players are highly correlated, and hence the message sent by one player can also reveal extra information about the input of another (e.g., a relatively small amount of communication from the players is enough for the coordinator to learn the identity of the entire graph $H$). To overcome this, we show that by conditioning on proper parts of the input, we can limit the correlation in the inputs of the players and then use the symmetrization technique of [62] to reduce the simultaneous $k$-player vertex cover problem to a one-way two-player problem named the hidden vertex problem (HVP). Loosely speaking, in HVP, Alice and Bob are given two sets $S_A$ and $S_B$, each of size roughly $n$, with the promise that $|S_A \setminus S_B| = 1$, and their goal is to find a set of size $O(n/\alpha)$ which contains the single element in $S_A \setminus S_B$. We prove a lower bound of $\Omega(n/\alpha)$ bits for this problem using a subtle reduction from the well-known set disjointness problem. In this reduction, Alice and Bob use the protocol for HVP on "non-legal" instances (i.e., ones for which HVP is not well-defined) to reduce the original disjointness instance between sets on a universe of size $N$ to a lopsided disjointness instance in which one of the sets is much smaller than the other, and then solve this new instance with $o(N)$ communication (using the Håstad–Wigderson protocol [34]), contradicting the $\Omega(N)$ lower bound on the communication complexity of disjointness.

The lower bound for the matching problem is proven along similar lines (over the hard distribution mentioned earlier for this problem), using a careful combinatorial argument instead of the reduction from the disjointness problem.

1.3 Further Related Work

Maximum matching and minimum vertex cover are among the most studied problems in the context of massive graphs, including in dynamic graphs [55, 64, 14, 58], sub-linear algorithms [59, 33, 56, 57, 66], streaming algorithms [48, 30, 26, 27, 5, 31, 44, 6, 3, 32, 37, 38, 22, 21, 49, 4, 29, 43, 10, 20, 51, 28, 9, 61], MapReduce computation [5, 46], and different distributed computation models [35, 8, 24, 32]. Most relevant to our work are the linear sketches of [20] for computing an exact minimum vertex cover or maximum matching in $\tilde{O}(\text{opt}^2)$ space (where opt is the size of the solution), and the linear sketches of [10, 20] for $n^{\varepsilon}$-approximating maximum matching in $\tilde{O}(n^{2-3\varepsilon})$ space. These results are proven to be tight by [21] and [10], respectively. Finally, [10] also studied the simultaneous communication complexity of bipartite matching in the vertex-partition model and proved that obtaining better than an $O(\sqrt{n})$-approximation in this model requires strictly more than $\tilde{O}(n)$ communication from each player (see [10] for more details on this model).

Coresets, composable coresets, and randomized composable coresets were introduced, respectively, in [2], [36], and [52]. Composable coresets have been used previously in the context of nearest neighbor search [1], diversity maximization [36], clustering [12, 15], and submodular maximization [36, 52, 11]. Moreover, while not explicitly termed a composable coreset, the "merge and reduce" technique in the graph streaming literature (see [49], Section 2.2) is identical to composable coresets. Ideas similar to randomized coresets for optimization problems have also been used in random arrival streams [44, 38]. Moreover, communication complexity lower bounds have also been studied previously under random partitioning of the input [39, 19].

2 Preliminaries

Notation.

For any integer $t$, $[t] := \{1, \ldots, t\}$. Let $G(V,E)$ be a graph; $\mu(G)$ denotes the maximum matching size in $G$ and $\text{vc}(G)$ denotes the minimum vertex cover size. We assume throughout that these quantities are $\Omega(k \cdot \text{polylog}(n))$. (Otherwise, we can use the algorithm of [20] to obtain exact coresets of size $\tilde{O}(\text{opt}^2)$, as mentioned in Section 1.3.) For a set $U \subseteq V$ and a vertex $v \in V$, $N_U(v)$ denotes the neighbors of $v$ in the set $U$. For an edge set $E' \subseteq E$, we use $V(E')$ to refer to the vertices incident on $E'$.

Useful Concentration of Measure Inequalities.

We use the following standard version of the Chernoff bound (see, e.g., [25]) throughout.

Proposition 1 (Chernoff bound).

Let $X_1, \ldots, X_m$ be independent random variables taking values in $[0,1]$ and let $X := \sum_{i=1}^{m} X_i$. Then, for any $\delta \in (0,1]$,
\[
\Pr\big[\, |X - \mathbb{E}[X]| > \delta \cdot \mathbb{E}[X] \,\big] \;\le\; 2\exp\left(-\frac{\delta^2 \cdot \mathbb{E}[X]}{3}\right).
\]

We also need the method of bounded differences in our proofs. A function $f(x_1, \ldots, x_m)$ satisfies the Lipschitz property with constant $d$ iff for all $i \in [m]$, $|f(a) - f(a')| \le d$ whenever $a$ and $a'$ differ only in the $i$-th coordinate.

Proposition 2 (Method of bounded differences).

If $f$ satisfies the Lipschitz property with constant $d$ and $X_1, \ldots, X_m$ are independent random variables, then,
\[
\Pr\big[\, |f(X_1, \ldots, X_m) - \mathbb{E}[f(X_1, \ldots, X_m)]| > t \,\big] \;\le\; 2\exp\left(-\frac{2t^2}{m \cdot d^2}\right).
\]

A proof of this proposition can be found in [25] (see Section 5).
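To illustrate how these concentration bounds are used throughout the paper, consider a fixed vertex $v$ of degree $d(v)$ in $G$ and one graph $G^{(i)}$ of a random $k$-partitioning; the following calculation (a representative example, not a statement from the paper) is the prototypical application. The degree $d_{G^{(i)}}(v)$ of $v$ in $G^{(i)}$ is a sum of $d(v)$ independent indicator variables, each equal to $1$ w.p. $1/k$, so $\mathbb{E}[d_{G^{(i)}}(v)] = d(v)/k$ and Proposition 1 gives, for any $\delta \in (0,1]$,

\[
\Pr\Big[\, \big|d_{G^{(i)}}(v) - \tfrac{d(v)}{k}\big| > \delta \cdot \tfrac{d(v)}{k} \,\Big] \;\le\; 2\exp\Big(-\frac{\delta^2 \cdot d(v)}{3k}\Big),
\]

which is $1/\text{poly}(n)$ whenever $d(v)/k = \Omega(\log n)$; a union bound over all $n$ vertices then shows that every such degree is concentrated simultaneously w.h.p.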

Communication Complexity

We prove our lower bounds for distributed protocols using the framework of communication complexity, in particular the multi-party simultaneous communication model and the two-player one-way communication model.

Formally, in the multi-party simultaneous communication model, the input is partitioned across $k$ players $P_1, \ldots, P_k$. All players have access to an infinite shared string of random bits, referred to as public randomness (or public coins). The goal is for the players to compute a specific function of the input by simultaneously sending a message to a central party called the coordinator (or the referee). The coordinator then needs to output the answer using the messages received from the players. We refer to the case when the input is partitioned randomly as the random partition model.

In the two-player one-way communication model, the input is partitioned across two players, namely Alice and Bob. The players again have access to public randomness, and the goal is for Alice to send a single message to Bob, so that Bob can compute a function of the joint input. The communication cost of a protocol in both models is the total length of the messages sent by the players. In Section 5.3.1, we also consider the general two-player communication model, which allows two-way communication, i.e., both Alice and Bob can send messages to each other. We refer the reader to the excellent text by Kushilevitz and Nisan [45] for more details.

3 Randomized Coresets for Matching and Vertex Cover

We present our randomized composable coresets for matching and vertex cover in this section.

3.1 An $O(1)$-Approximation Randomized Coreset for Matching

The following theorem formalizes Result 1 for matching.

Theorem 1.

Any maximum matching of a graph $G$ is an $O(1)$-approximation randomized composable coreset of size $O(n)$ for the maximum matching problem.

We remark that our main interest in Theorem 1 is to achieve some constant approximation factor for randomized composable coresets of the matching problem, and as such we did not optimize the constant in the approximation ratio. Nevertheless, our result already shows that the approximation ratio of this coreset is at most $9$ (in fact, with a bit more care, we can reduce this factor down to $3$; however, as this is not the main contribution of this paper, we omit the details).

Let $G(V,E)$ be any graph and $G^{(1)}, \ldots, G^{(k)}$ be a random $k$-partitioning of $G$. To prove Theorem 1, we describe a simple process for combining the maximum matchings (i.e., the coresets) of the $G^{(i)}$'s, and prove that this process results in a constant factor approximation of the maximum matching of $G$. We remark that this process is only required for the analysis, i.e., to show that there exists a large matching in the union of the coresets; in principle, any (approximation) algorithm for computing a maximum matching can be applied to obtain a large matching from the coresets.

Consider the following greedy process for computing an approximate matching in $G$:

GreedyMatch$(G^{(1)}, \ldots, G^{(k)})$:

  1. Let $M_0 := \emptyset$. For $i = 1$ to $k$:

  2.   Let $M_i$ be the maximal matching obtained by adding to $M_{i-1}$ the edges of an arbitrary maximum matching of $G^{(i)}$ that do not violate the matching property.

  3. Return $M := M_k$.
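A minimal Python rendering of this process (our own illustrative code; any maximum matching subroutine, e.g., from the networkx library, can be plugged in):

    import networkx as nx

    def greedy_match(machine_graphs):
        # Combine per-machine maximum matchings into a single maximal matching M.
        M = set()         # current matching, as a set of edges
        matched = set()   # vertices covered by M
        for G_i in machine_graphs:
            # An arbitrary maximum matching of machine i's subgraph.
            max_matching = nx.max_weight_matching(G_i, maxcardinality=True)
            for (u, v) in max_matching:
                if u not in matched and v not in matched:
                    M.add((u, v))
                    matched.update((u, v))
        return M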

Lemma 1.

GreedyMatch is an $O(1)$-approximation algorithm for the maximum matching problem w.h.p. (over the randomness of the edge partitioning).

Before proving Lemma 1, we show that Theorem 1 easily follows from this lemma.

Proof of Theorem 1.

Let ALG be any algorithm that, given a graph $H$, outputs an arbitrary maximum matching $\text{ALG}(H)$ of $H$. It is immediate to see that to implement GreedyMatch, we only need to compute a maximal matching over the outputs of ALG on the graphs $G^{(1)}, \ldots, G^{(k)}$, where the $G^{(i)}$'s form a random $k$-partitioning of $G$. Consequently, since GreedyMatch outputs an $O(1)$-approximate matching (by Lemma 1), the graph $\text{ALG}(G^{(1)}) \cup \cdots \cup \text{ALG}(G^{(k)})$ contains an $O(1)$-approximate matching as well. We emphasize here that the use of GreedyMatch for finding a large matching in this graph is only for the purpose of analysis.       

In the rest of this section, we prove Lemma 1. Recall that $\text{opt} := \mu(G)$ denotes the maximum matching size in the input graph $G$. To prove Lemma 1, we will show that w.h.p. $|M| = \Omega(\text{opt})$, where $M$ is the output of GreedyMatch. Notice that the matchings $M_i$ (for $i \in [k]$) constructed by GreedyMatch are random variables depending on the random $k$-partitioning.

Our general approach for the proof of Lemma 1 is as follows. Suppose at the beginning of the $i$-th step of GreedyMatch, the matching $M_{i-1}$ is of size at most $\text{opt}/10$. It is easy to see that in this case, there is a matching of size $\Omega(\text{opt})$ in $G$ that is entirely incident on vertices of $G$ that are not matched by $M_{i-1}$. We can further show that in fact $\Omega(\text{opt}/k)$ edges of this matching appear in $G^{(i)}$, even when we condition on the assignment of the edges in the first $i-1$ graphs. The next step is then to argue that the existence of these edges forces any maximum matching of $G^{(i)}$ to match $\Omega(\text{opt}/k)$ edges in $G^{(i)}$ between the vertices that are not matched by $M_{i-1}$; these edges can always be added to the matching $M_{i-1}$ to form $M_i$. This ensures that while the maximal matching in GreedyMatch is of size at most $\text{opt}/10$, we can increase its size by $\Omega(\text{opt}/k)$ edges in each of the first $k/2$ steps, hence obtaining a matching of size $\Omega(\text{opt})$ at the end. The following key lemma formalizes this argument.

Lemma 2.

For any $i \le k/2$, if $|M_{i-1}| \le \text{opt}/10$, then, w.p. $1 - 1/\text{poly}(n)$, $|M_i| \ge |M_{i-1}| + \Omega\big(\tfrac{\text{opt}}{k}\big)$.

To continue, we define some notation. Let $M^*$ be an arbitrary maximum matching of $G$. For any $i \in [k]$, we define $M^*_{\le i}$ as the part of $M^*$ assigned to the first $i$ graphs in the random $k$-partitioning, i.e., the graphs $G^{(1)}, \ldots, G^{(i)}$. We have the following simple concentration result.

Claim 3.

W.p. $1 - 1/\text{poly}(n)$, for any $i \in [k]$, $|M^*_{\le i}| = (1 \pm o(1)) \cdot \frac{i}{k} \cdot \text{opt}$.

Proof.

Fix an $i \in [k]$; each edge in $M^*$ is assigned to one of $G^{(1)}, \ldots, G^{(i)}$ w.p. $i/k$, hence, in expectation, the size of $M^*_{\le i}$ is $\frac{i}{k} \cdot \text{opt}$. The claim now follows from a standard application of the Chernoff bound, together with a union bound over all $i \in [k]$ (recall that, throughout the paper, we assume $\text{opt} = \Omega(k \cdot \text{polylog}(n))$).       

We now prove Lemma 2.

Proof of Lemma 2.

Fix an $i \le k/2$ and the set of edges assigned to the graphs $G^{(1)}, \ldots, G^{(i-1)}$; this also fixes the matching $M_{i-1}$, while the set of edges in $G^{(i)}$ together with the matching $M_i$ are still random variables. We further assume that $|M^*_{\le i-1}| = (1 \pm o(1)) \cdot \frac{i-1}{k} \cdot \text{opt}$ after fixing the edges in $G^{(1)}, \ldots, G^{(i-1)}$, which happens w.p. $1 - 1/\text{poly}(n)$ by Claim 3.

We first define some notation. Let $V_{\text{old}}$ be the set of vertices incident on $M_{i-1}$ and $V_{\text{new}} := V \setminus V_{\text{old}}$ be the remaining vertices. Let $E_i$ be the set of edges in $G^{(i)}$. We partition $E_i$ into two parts: $E_{\text{old}}$, the set of edges with at least one endpoint in $V_{\text{old}}$, and $E_{\text{new}}$, the set of edges incident entirely on $V_{\text{new}}$. Our goal is to show that w.h.p. any maximum matching of $G^{(i)}$ matches $\Omega(\text{opt}/k)$ vertices in $V_{\text{new}}$ to each other by using the edges in $E_{\text{new}}$; the lemma then follows easily from this.

Notice that the edges in the graph $G^{(i)}$ are chosen by independently assigning each remaining edge of $G$ to $G^{(i)}$ w.p. at least $1/k$. (This is true even when we condition on the size of $M^*_{\le i-1}$, since this event does not depend on the choice of edges in $G^{(i)}$.) This independence allows us to treat the edges in $E_{\text{old}}$ and $E_{\text{new}}$ separately; we can fix the set of sampled edges in $E_{\text{old}}$ without changing the distribution of the edges in $E_{\text{new}}$. Let $\mu_{\text{old}}$ be the maximum number of edges that can be matched in $G^{(i)}$ using only the edges in $E_{\text{old}}$. In the following, we show that w.h.p. there exists a matching of size $\mu_{\text{old}} + \Omega(\text{opt}/k)$ in $G^{(i)}$; by the definition of $\mu_{\text{old}}$, this implies that any maximum matching of $G^{(i)}$ has to use at least $\Omega(\text{opt}/k)$ edges in $E_{\text{new}}$, proving the lemma.

Let $M_{\text{old}}$ be any arbitrary maximum matching of size $\mu_{\text{old}}$ in $(V, E_{\text{old}})$. Let $U$ be the set of vertices in $V_{\text{new}}$ that are incident on $M_{\text{old}}$. We show that there is a large matching in $G$ between the vertices of $V_{\text{new}}$ that avoids $U$.

Claim 4.

There exists a matching of size $\Omega(\text{opt})$ in $G \setminus (G^{(1)} \cup \cdots \cup G^{(i-1)})$ that is incident entirely on $V_{\text{new}}$ and avoids the vertices of $U$.

Proof.

We first bound the size of $U$. Since any edge in $M_{\text{old}}$ has at least one endpoint in $V_{\text{old}}$, we have $|U| \le |M_{\text{old}}| \le |V_{\text{old}}| = 2 \cdot |M_{i-1}|$. By the assertion of the lemma, $|M_{i-1}| \le \text{opt}/10$, and hence $|U| \le \text{opt}/5$.

Moreover, by the assumption that $|M_{i-1}| \le \text{opt}/10$, there is a matching of size at least $\text{opt} - 2\,|M_{i-1}| \ge 4\,\text{opt}/5$ in $M^*$ that is entirely incident on $V_{\text{new}}$. By removing the edges of this matching that are either incident on $U$ or belong to $M^*_{\le i-1}$, at most $|U| + |M^*_{\le i-1}| \le \text{opt}/5 + (1 + o(1)) \cdot \text{opt}/2$ edges are removed (using $i \le k/2$). The remaining matching is of size $\Omega(\text{opt})$, is entirely contained in $V_{\text{new}}$, is not assigned to the first $i-1$ graphs, and also avoids $U$, hence proving the claim.       

We are now ready to finalize the proof. Let $M'$ be the matching guaranteed by Claim 4. Each edge in this matching is chosen in $G^{(i)}$ w.p. at least $1/k$, independent of the other edges; hence, by the Chernoff bound (and the assumption that $\text{opt} = \Omega(k \cdot \text{polylog}(n))$), w.h.p. there is a matching of size

\[
\Omega\Big(\frac{|M'|}{k}\Big) = \Omega\Big(\frac{\text{opt}}{k}\Big)
\]

among the edges of $M'$ that appear in $G^{(i)}$. This matching can be directly added to the matching $M_{\text{old}}$, implying the existence of a matching of size $\mu_{\text{old}} + \Omega(\text{opt}/k)$ in $G^{(i)}$. As argued before, this ensures that any maximum matching of $G^{(i)}$ contains at least $\Omega(\text{opt}/k)$ edges in $E_{\text{new}}$. These edges can always be added to $M_{i-1}$ to form $M_i$, hence proving the lemma.       

Having proved Lemma 2, we can easily conclude Lemma 1.

Proof of Lemma 1.

Recall that $M$ is the output matching of GreedyMatch. For the first $k/2$ steps of GreedyMatch, if at any step we obtain a matching of size $\text{opt}/10$, then we are already done (the matchings $M_i$ only grow in size). Otherwise, at each step, by Lemma 2, w.p. $1 - 1/\text{poly}(n)$, we increase the size of the maximal matching by $\Omega(\text{opt}/k)$ edges; consequently, by taking a union bound over the first $k/2$ steps, w.p. $1 - k/\text{poly}(n)$, the size of the maximal matching would be $\frac{k}{2} \cdot \Omega(\text{opt}/k) = \Omega(\text{opt})$. In either case, the matching computed by GreedyMatch is of size $\Omega(\text{opt})$, proving the lemma.       

3.2 An $O(\log n)$-Approximation Randomized Coreset for Vertex Cover

The following theorem formalizes Result 1 for vertex cover.

Theorem 2.

There exists an $O(\log n)$-approximation randomized composable coreset of size $\tilde{O}(n)$ for the vertex cover problem.

Let $G(V,E)$ be a graph and $G^{(1)}, \ldots, G^{(k)}$ be a random $k$-partitioning of $G$; we propose the following coreset for computing an approximate vertex cover of $G$. This coreset construction is a modification of the algorithm for vertex cover first proposed by [59].

VC-Coreset$(G^{(i)})$. An algorithm for computing a composable coreset of each $G^{(i)}$.

  1. Let $\Delta$ be the smallest integer such that $\tau_\Delta \le \log^2 n$, where $\tau_j := n/2^j$, and let $G^{(i)}_1 := G^{(i)}$.

  2. For $j = 1$ to $\Delta$, let:
     $V^{(i)}_j$ := the set of vertices with degree at least $\tau_j$ in $G^{(i)}_j$, and $G^{(i)}_{j+1} := G^{(i)}_j \setminus V^{(i)}_j$.

  3. Return $V^{(i)}_{\text{peel}} := V^{(i)}_1 \cup \cdots \cup V^{(i)}_{\Delta}$ as a fixed solution plus the graph $G^{(i)}_{\Delta+1}$ as the coreset.

In VC-Coreset we allow the coreset to, in addition to returning a subgraph, identify a set of vertices (i.e., $V^{(i)}_{\text{peel}}$) to be added directly to the final vertex cover. In other words, to compute a vertex cover of the graph $G$, we compute a vertex cover of the graph $G^{(1)}_{\Delta+1} \cup \cdots \cup G^{(k)}_{\Delta+1}$ and return it together with the vertices $\bigcup_{i \in [k]} V^{(i)}_{\text{peel}}$. It is easy to see that this set of vertices indeed forms a vertex cover of $G$: any edge in $G$ belongs to some $G^{(i)}$ and is either incident on some vertex of $V^{(i)}_{\text{peel}}$, and hence is covered by it, or is present in $G^{(i)}_{\Delta+1}$, and hence is covered by the vertex cover of the union of the coresets.
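A minimal sketch of the peeling procedure in Python (the threshold schedule follows our presentation above; names are ours and assume integer vertex labels):

    import math

    def vc_coreset(adj, n):
        # adj: dict mapping each vertex of G^(i) to the set of its neighbors.
        # Returns the fixed solution (peeled vertices) and the residual edge set.
        peeled = set()
        tau = n / 2.0
        while tau > math.log(n) ** 2:  # thresholds tau_j = n / 2^j, j = 1..Delta
            # Peel, simultaneously, all vertices of residual degree >= tau_j.
            high = {v for v, nbrs in adj.items()
                    if v not in peeled and len(nbrs - peeled) >= tau}
            peeled |= high
            tau /= 2.0
        residual = [(u, v) for u, nbrs in adj.items() if u not in peeled
                    for v in nbrs if v not in peeled and u < v]
        return peeled, residual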

In the remainder of this section, we bound the approximation ratio of this coreset. To do this, we need to prove that $\big|\bigcup_{i \in [k]} V^{(i)}_{\text{peel}}\big| = O(\log n) \cdot \text{vc}(G)$. The bound on the approximation ratio then follows, as the vertex cover of the union of the coresets can be computed to within a factor of $2$.

It is easy to prove (and follows from [59]) that each set $V^{(i)}_{\text{peel}}$ is of size $O(\log n) \cdot \text{vc}(G)$; however, using this fact directly to bound the size of $\bigcup_i V^{(i)}_{\text{peel}}$ only implies an approximation ratio of $O(k \log n)$, which is far worse than our goal of achieving an $O(\log n)$-approximation. In order to obtain the $O(\log n)$ bound, we need to argue that not only is each set relatively small, but also that these sets all intersect in many vertices. In order to do so, we introduce a hypothetical algorithm (similar to VC-Coreset) on the graph $G$ and argue that the set output by VC-Coreset$(G^{(i)})$ is, with high probability, essentially contained in the output of this hypothetical algorithm. This allows us to then bound the size of the union of the sets $V^{(i)}_{\text{peel}}$ for $i \in [k]$.

Let $V^*$ denote the set of vertices in an arbitrary optimum vertex cover of $G$ and $\bar{V}^* := V \setminus V^*$. Consider the following process on the original graph $G$ (defined only for analysis):

  1. Let $\hat{G}_1$ be the bipartite graph obtained from $G$ by removing the edges between the vertices in $V^*$.

  2. For $j = 1$ to $\Delta$, let:
     $A_j$ := the set of vertices in $V^*$ with degree at least $k \cdot \tau_j / 4$ in $\hat{G}_j$,
     $B_j$ := the set of vertices in $\bar{V}^*$ with degree at least $k \cdot \tau_j / 4$ in $\hat{G}_j$, and
     $\hat{G}_{j+1} := \hat{G}_j \setminus (A_j \cup B_j)$.

We first prove that the sets $A_j$ and $B_j$ in this process form an $O(\log n)$-approximation of the minimum vertex cover of $G$, and then show that VC-Coreset$(G^{(i)})$ (for any $i \in [k]$) mimics this hypothetical process, in the sense that the set $V^{(i)}_{\text{peel}}$ is essentially contained in the union of the $A_j$'s and $B_j$'s.

Lemma 5.

$\big|\bigcup_{j \in [\Delta]} (A_j \cup B_j)\big| = O(\log n) \cdot \text{vc}(G)$.

Proof.

Fix any $j \in [\Delta]$; we prove that $|B_j| = O(\text{vc}(G))$. The lemma follows from this since there are at most $\Delta = O(\log n)$ different sets $B_j$, and the union of the sets $A_j$ is a subset of $V^*$ (of size $\text{vc}(G)$).

Consider the graph $\hat{G}_j$. The maximum degree in this graph is at most $k \cdot \tau_{j-1}/4$ by the definition of the process (for $j = 1$, use the trivial bound of $n$ on the maximum degree). Since all the edges in the graph $\hat{G}_j$ are incident on at least one vertex of $V^*$, there can be at most $\text{vc}(G) \cdot k\tau_{j-1}/4$ edges between the vertices of $V^*$ and $\bar{V}^*$ in $\hat{G}_j$. Moreover, any vertex in $B_j$ has degree at least $k\tau_j/4 = k\tau_{j-1}/8$ by definition, and hence there can be at most $2 \cdot \text{vc}(G)$ vertices in $B_j$, proving the claim.       

We now prove the main relation between the sets $A_j$ and $B_j$ defined above and the intermediate sets $V^{(i)}_j$ computed by VC-Coreset. The following lemma is the heart of the proof.

Lemma 6.

Fix an $i \in [k]$, and let $P_j := \bigcup_{j' \le j} (A_{j'} \cup B_{j'})$ and $P^{(i)}_j := \bigcup_{j' \le j} V^{(i)}_{j'}$. With probability $1 - 1/\text{poly}(n)$, for any $j \in [\Delta]$:

  1. Every vertex with degree at least $2k \cdot \tau_j$ in $\hat{G}_j$ belongs to $P^{(i)}_j$.

  2. $P^{(i)}_j \cap \bar{V}^* \subseteq P_j$.

Proof.

To simplify the notation, for a vertex $v$ and $j \in [\Delta]$, we write $d_j(v)$ for the degree of $v$ in $\hat{G}_j$ and $d^{(i)}_j(v)$ for the number of edges of $\hat{G}_j$ incident on $v$ that are sampled in $G^{(i)}$. We also use $N_U(v)$ to denote the neighbor-set of the vertex $v$ in a set $U$.

Note that the vertex sets of the graphs $G$ and $G^{(i)}$ are the same and we can "project" the sets $A_j$ and $B_j$ on the graph $G^{(i)}$ as well; in other words, we say a vertex in $G^{(i)}$ belongs to $A_j$ iff it belongs to $A_j$ in the original graph $G$. In the following claim, we crucially use the fact that the graph $G^{(i)}$ is obtained from $G$ by sampling each edge w.p. $1/k$ to prove that the degrees of vertices across the different sets $A_j$ (and $B_j$) in $G^{(i)}$ are essentially the same as in $G$, up to a scaling factor of $1/k$.

Claim 7.

For any $j \in [\Delta]$:

  • For any vertex $v$ with $d_j(v) \ge 2k \cdot \tau_j$: $d^{(i)}_j(v) \ge \tau_j$ w.p. $1 - 1/\text{poly}(n)$.

  • For any vertex $v$ with $d_j(v) < k \cdot \tau_j/4$: $d^{(i)}_j(v) < \tau_j$ w.p. $1 - 1/\text{poly}(n)$.

Proof.

Fix any $j \in [\Delta]$ and a vertex $v$ with $d_j(v) \ge 2k \cdot \tau_j$. Since each edge of $\hat{G}_j$ is sampled in $G^{(i)}$ w.p. $1/k$, we have $\mathbb{E}\big[d^{(i)}_j(v)\big] \ge 2\tau_j$. Moreover, by the choice of $\Delta$, $\tau_j = \Omega(\log^2 n)$, and hence by the Chernoff bound, w.p. $1 - 1/\text{poly}(n)$, $d^{(i)}_j(v) \ge \tau_j$.

Similarly, for a vertex $v$ with $d_j(v) < k \cdot \tau_j/4$, we have $\mathbb{E}\big[d^{(i)}_j(v)\big] < \tau_j/4$. Using a similar argument as before, by the Chernoff bound, w.p. $1 - 1/\text{poly}(n)$, $d^{(i)}_j(v) < \tau_j$.       

By using a union bound over the vertices in $V$ and the $\Delta = O(\log n)$ iterations, the statements in Claim 7 hold simultaneously for all vertices of $V$ w.p. $1 - 1/\text{poly}(n)$; in the following we condition on this event. We now prove Lemma 6 by induction on $j$.

Let $v$ be a vertex with $d_1(v) \ge 2k \cdot \tau_1$; we prove that $v$ belongs to the set $V^{(i)}_1$ of VC-Coreset, i.e., $v \in P^{(i)}_1$. By Claim 7 (for $j = 1$), the degree of $v$ among the sampled edges of $\hat{G}_1$ in $G^{(i)}$ is at least $\tau_1$. Note that in $G^{(i)}$, $v$ may also have edges to other vertices in $V^*$ (the edges removed in $\hat{G}_1$), but this can only increase the degree of $v$. This implies that $v$ also belongs to $V^{(i)}_1$ by the threshold chosen in VC-Coreset. Similarly, let $v$ be a vertex in $\bar{V}^*$ that does not belong to $B_1$ (i.e., with $d_1(v) < k \cdot \tau_1/4$); we show that $v$ is not chosen in $V^{(i)}_1$, implying that $V^{(i)}_1 \cap \bar{V}^*$ can only contain vertices in $B_1$. By Claim 7, the degree of $v$ in $G^{(i)}$ is less than $\tau_1$; note that since $V^*$ is a vertex cover, $v$ has no edges in $G$ other than the ones to $V^*$, all of which are present in $\hat{G}_1$. This implies that $v$ does not belong to $V^{(i)}_1$. In summary, both parts of the lemma hold for $j = 1$.

Now consider some $j > 1$ and let $v$ be a vertex with $d_j(v) \ge 2k \cdot \tau_j$. By induction, the neighbors of $v$ peeled by VC-Coreset in the first $j-1$ iterations are contained among the vertices already removed in $\hat{G}_j$. This implies that the degree of $v$ in $G^{(i)}_j$ is at least as large as its degree among the sampled edges of $\hat{G}_j$. Consequently, by Claim 7 (for $j$), the degree of $v$ in the graph $G^{(i)}_j$ is at least $\tau_j$ and hence $v$ also belongs to $P^{(i)}_j$. Similarly, fix a vertex $v \in \bar{V}^*$ that does not belong to $P_j$. By induction, the neighbors of $v$ removed in $\hat{G}_j$ have also been peeled by VC-Coreset, and hence the degree of $v$ in $G^{(i)}_j$ is at most as large as its degree among the sampled edges of $\hat{G}_j$; note that since $V^*$ is a vertex cover, $v$ does not have any other edge in $G$ except for the ones to $V^*$. We can now argue as before that $v$ does not belong to $V^{(i)}_j$.       

We are now ready to prove Theorem 2.

Proof of Theorem 2.

The bound on the coreset size follows immediately from the fact that the graph $G^{(i)}_{\Delta+1}$ has maximum degree less than $\tau_\Delta \le \log^2 n$, and hence contains at most $n \log^2 n$ edges, and that the size of $V^{(i)}_{\text{peel}}$ is at most $n$. As argued before, to prove the bound on the approximation ratio, we only need to show that $\bigcup_{i \in [k]} V^{(i)}_{\text{peel}}$ is of size $O(\log n) \cdot \text{vc}(G)$. By Lemma 6 (part 2, for $j = \Delta$), w.h.p. each $V^{(i)}_{\text{peel}} \subseteq V^* \cup P_\Delta$. Consequently, $\big|\bigcup_{i \in [k]} V^{(i)}_{\text{peel}}\big| \le |V^*| + |P_\Delta| = O(\log n) \cdot \text{vc}(G)$, where the last inequality is by Lemma 5.       

4 Lower Bounds for Randomized Coresets

We formalize Result 2 in this section. As argued earlier, Result 2 is a special case of Result 3 and hence follows from that result; however, as the proof of Result 3 is rather technical and complicated, we also provide a self-contained proof of Result 2 as a warm-up to Result 3.

4.1 A Lower Bound for Randomized Composable Coresets of Matching

We prove a lower bound on the size of any randomized composable coreset for the matching problem, formalizing Result 2 for matching.

Theorem 3.

For any $k$ and any $\alpha \ge 1$, any $\alpha$-approximation randomized composable coreset of the maximum matching problem is of size $\Omega(n/\alpha^2)$.

By Yao's minimax principle [65], to prove the lower bound in Theorem 3, it suffices to analyze the performance of deterministic algorithms over a fixed (hard) distribution. We propose the following distribution for this task. For simplicity of exposition, in the following, we prove an $\Omega(n)$ lower bound for $O(1)$-approximation algorithms; a straightforward scaling of the parameters proves the lower bound for general $\alpha$-approximation.

Distribution $\mathcal{D}_M$. A hard input distribution for the matching problem.

  • Let $G(V_1 \cup V_2, E)$ (with $|V_1| = |V_2| = n$) be constructed as follows:

    1. Pick $P \subseteq V_1$ and $Q \subseteq V_2$, each of size $n/2$, uniformly at random.

    2. Define $E_D$ as a set of edges between $P$ and $Q$, chosen by picking each edge in $P \times Q$ w.p. $2k/n$.

    3. Define $E_M$ as a random perfect matching between $V_1 \setminus P$ and $V_2 \setminus Q$.

    4. Let $E := E_D \cup E_M$.

  • Let $E^{(1)}, \ldots, E^{(k)}$ be a random $k$-partitioning of $E$ and let the input to player $i$ be the graph $G^{(i)}(V, E^{(i)})$.

Let $G(V,E)$ be a graph sampled from the distribution $\mathcal{D}_M$. Notice first that the graph $G$ always has a matching of size at least $n/2$, i.e., the matching $E_M$. Additionally, it is easy to see that any matching of size more than $n/2$ in $G$ uses edges from $E_M$: the edges in $E_D$ can only form a matching of size $n/2$ by construction. Since w.h.p. $E_D$ itself contains a matching of size $(1-o(1)) \cdot n/2$, any $O(1)$-approximate solution (for a sufficiently small constant approximation factor) requires recovering $\Omega(n)$ edges from $E_M$. In the following, we prove that this is only possible if the coresets of the players are sufficiently large.
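For concreteness, here is a small sampler for our rendering of this distribution (the exact parameters are our reconstruction and should be treated as illustrative; assumes $n$ is even):

    import random

    def sample_hard_matching_instance(n, k, seed=0):
        # Union of a sparse random bipartite graph E_D (expected degree ~ k)
        # and a hidden perfect matching E_M on the remaining vertices.
        rng = random.Random(seed)
        P, Q = range(0, n // 2), range(n, n + n // 2)
        E_D = [(u, v) for u in P for v in Q if rng.random() < 2 * k / n]
        left = list(range(n // 2, n))
        right = list(range(n + n // 2, 2 * n))
        rng.shuffle(right)
        E_M = list(zip(left, right))
        return E_D, E_M

After partitioning $E_D \cup E_M$ across $k$ machines (e.g., with random_k_partitioning from Section 1), each machine's subgraph consists mostly of isolated edges, and the $E_M$ edges are hidden among them.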

For any $i \in [k]$, define the induced matching $\mathcal{M}^{(i)}$ as the unique matching in $G^{(i)}$ consisting of the edges incident only on vertices of degree exactly one, i.e., both endpoints of each edge in $\mathcal{M}^{(i)}$ have degree one in $G^{(i)}$. We emphasize that the notion of induced matching is with respect to the entire graph $G^{(i)}$ and not only with respect to the vertices included in the induced matching. We have the following crucial lemma on the size of $\mathcal{M}^{(i)}$. The proof is technical and is deferred to Appendix A.

Lemma 1.

W.p. $1 - o(1)$, for all $i \in [k]$, $|\mathcal{M}^{(i)}| = \Theta(n)$.

We are now ready to prove Theorem 3.

Proof of Theorem 3.

Fix any randomized composable coreset (algorithm) for the matching problem that has size $s = o(n)$. We show that such a coreset cannot achieve better than a (sufficiently small) constant factor approximation over the distribution $\mathcal{D}_M$. As argued earlier, to prove this, we need to show that the union of the coresets only contains $o(n)$ edges from $E_M$ in expectation.

Fix any player $i$, and let $E^{(i)}_M$ be the subset of the matching $E_M$ assigned to $G^{(i)}$. It is clear that $E^{(i)}_M \subseteq \mathcal{M}^{(i)}$ by the definition of $\mathcal{D}_M$ (both endpoints of any edge in $E_M$ have degree one already in $G$). Moreover, define $X_i$ as the random variable denoting the number of edges from $E^{(i)}_M$ that belong to the coreset sent by player $i$. Notice that $X_i$ is clearly an upper bound on the number of edges of $E_M$ that are in the final matching of the coordinator and also belong to the input graph of player $i$. In the following, we show that

\[
\mathbb{E}[X_i] = o\Big(\frac{n}{k}\Big). \tag{1}
\]

Having proved this, we have that the expected size of the matching output by the coordinator is at most $n/2 + \sum_{i \in [k]} \mathbb{E}[X_i] = n/2 + o(n)$, a contradiction.

We now prove Eq (1). In the following, we condition on the events that $|E^{(i)}_M| = \Theta(n/k)$ and that $|\mathcal{M}^{(i)}| = \Theta(n)$; by the Chernoff bound (for the first part, since $|E_M| = n/2 = \Omega(k \cdot \text{polylog}(n))$) and Lemma 1 (for the second part), this event happens with probability $1 - o(1)$. As such, this conditioning can only change $\mathbb{E}[X_i]$ by an additive $o(n/k)$ term, which we ignore in the following.

A crucial property of the distribution $\mathcal{D}_M$ is that the edges in $E^{(i)}_M$ and the remaining edges of the induced matching $\mathcal{M}^{(i)}$ are indistinguishable in $G^{(i)}$. More formally, for any edge $e \in \mathcal{M}^{(i)}$,

\[
\Pr\big[\, e \in E_M \mid G^{(i)} \,\big] = \frac{|E^{(i)}_M|}{|\mathcal{M}^{(i)}|}.
\]

On the other hand, for a fixed input to player $i$, the computed coreset is always the same (as the coreset is a deterministic function of the player's input). Hence,

\[
\mathbb{E}[X_i] \;\le\; s \cdot \frac{|E^{(i)}_M|}{|\mathcal{M}^{(i)}|} \;=\; o(n) \cdot \frac{\Theta(n/k)}{\Theta(n)} \;=\; o\Big(\frac{n}{k}\Big),
\]

where the second equality is by the assumption that the size of the coreset, i.e., $s$, is $o(n)$. This finalizes the proof.       

4.2 A Lower Bound for Randomized Composable Coresets of Vertex Cover

In this section, we prove that the size of the coreset for the vertex cover problem in Theorem 2 is indeed optimal. The following is a formal statement of Result 2 for the vertex cover problem.

Theorem 4.

For any $k$ and any $\alpha \ge 1$, any $\alpha$-approximation randomized composable coreset of the minimum vertex cover problem is of size $\Omega(n/\alpha)$.

By Yao's minimax principle [65], to prove the lower bound in Theorem 4, it suffices to analyze the performance of deterministic algorithms over a fixed (hard) distribution. We propose the following distribution for this task. (We point out that simpler versions of this distribution suffice for proving the lower bound in this section; however, as we would like this proof to also act as a warm-up to the proof of Theorem 6, we use the same distribution that is used to prove that theorem.) For simplicity of exposition, in the following, we prove a lower bound for $c$-approximation algorithms (for some constant $c > 1$); a straightforward scaling of the parameters proves the lower bound for $\alpha$-approximation as well.

Distribution $\mathcal{D}_{VC}$. A hard input distribution for the vertex cover problem.

  • Construct $G(V, E)$ (with $|V| = 2n$) as follows:

    1. Pick a set $A \subseteq V$ of size $n/2$ uniformly at random, and let $B := V \setminus A$.

    2. Let $E_H$ be a set of edges chosen by picking each edge in $A \times B$ w.p. $k/n$.

    3. Pick a single vertex uniformly at random from