Fully Dynamic Approximate Matchings
Abstract
We present the first data structures that maintain near-optimal maximum cardinality and maximum weighted matchings on sparse graphs in sublinear time per update. Our main result is a data structure that maintains a $(1+\epsilon)$ approximation of maximum matching under edge insertions/deletions in worst case $O(\sqrt{m}\,\epsilon^{-2})$ time per update. This improves the $3/2$ approximation given in [Neiman, Solomon, STOC 2013], which runs in similar time. The result is based on two ideas. The first is to rerun a static algorithm after a chosen number of updates to ensure approximation guarantees. The second is to judiciously trim the graph to a smaller equivalent one whenever possible.
We also study extensions of our approach to the weighted setting, and combine it with known frameworks to obtain arbitrary approximation ratios. For a constant $\epsilon$ and for graphs with edge weights between $1$ and $N$, we design an algorithm that maintains a $(1+\epsilon)$ approximate maximum weighted matching in $O(\sqrt{m}\,\epsilon^{-2-O(1/\epsilon)}\log N)$ time per update. The only previous result for maintaining weighted matchings on dynamic graphs has an approximation ratio of 4.9108, and was shown in [Anand, Baswana, Gupta, Sen, FSTTCS 2012, arXiv 2012].
1 Introduction
The problem of computing maximum or near-maximum matchings in a graph has played a central role in the study of combinatorial optimization [LovaszP86, PapadimitriouS82]. A matching is a set of vertex-disjoint edges in a graph, and two variants of the problem are finding the maximum cardinality matching in an unweighted graph, and finding the matching of maximum weight in a weighted graph. The problem is appealing for several reasons: it has a simple description; matchings sometimes need to be improved by highly non-local steps; and certifying the optimality of a matching yields a surprising amount of structural information about a graph. On static graphs, the current best algorithms for maximum cardinality matching run in $O(m\sqrt{n})$ time, on bipartite graphs by Hopcroft and Karp [hopcroft1971n5], and on general graphs by Micali and Vazirani [micali1980v]. In the weighted case, algorithms with similar running times were given by Gabow and Tarjan [gabow1991faster], and by Duan et al. [DuanPS11].
A natural question from a data structure perspective is whether on a dynamically changing graph the solution to an optimization problem can be maintained faster than recomputing it from scratch after each update. For maximum cardinality matching, an $O(m)$ time algorithm follows by executing one phase of the static algorithm described by Tarjan [Tarjan83]. For dense graphs, a faster running time of $O(n^{1.495})$ has been shown by Sankowski [sankowski2007faster], and to date this is the only known result that gives sublinear time per update. For trees, Gupta and Sharma [gupta2009log] gave an algorithm based on top trees that takes $O(\log n)$ time per update.
On static graphs, a nearly-optimal matching can be computed much faster than finding the optimum matching. So it stands to reason that the same should apply in the dynamic case. Ivković and Lloyd [ivkovi1994fully] gave the first result in this direction: an algorithm that maintains a maximal matching with $O((n+m)^{0.7072})$ update time. Recently there has been a growing interest in designing efficient dynamic algorithms for approximate matching. Onak and Rubinfeld designed a randomized algorithm that maintains a $c$-approximation of maximum matching in $O(\log^2 n)$ update time [onak2010maintaining], where $c$ is a large unspecified constant. Baswana, Gupta and Sen [baswana2011fully] showed that maximal matching, which is a $2$ approximation of maximum matching, can be maintained in a dynamic graph in $O(\log n)$ amortized update time with high probability. Subsequently, Anand et al. [AnandBGS12, AnandBGSArxiv12] extended this work to the weighted case, and showed how to maintain a matching with weight that is expected to be at least $1/4.9108$ of the optimum.
These results show that a large matching can be maintained very efficiently in dynamic graphs, but leave open the question of maintaining a matching closer to the optimum matching. Recently, Neiman and Solomon [neiman12deterministic] showed that a matching of size at least $2/3$ of the size of the optimum matching can be maintained in $O(\sqrt{m})$ time per update in general graphs, as well as in $O(\log n/\log\log n)$ amortized time per update on bounded arboricity graphs. A similar result of maintaining 3/2-approximate matchings was obtained independently by Anand [Anand12]. This leads to the following question: can we maintain a matching close to the maximum matching (say, a $(1+\epsilon)$ approximate matching) in a dynamic weighted or unweighted graph? We answer this question in the affirmative by designing the first data structures that maintain arbitrary quality approximate max-cardinality and max-weighted matchings in sublinear time on sparse graphs.
Our algorithm differs significantly from previous ones in that we do not maintain strict invariants. Baswana et al. [baswana2011fully] maintained a maximal matching, which ensures no edge has both endpoints unmatched; and the approximate algorithm designed by Neiman and Solomon [neiman12deterministic] removes all length-three augmenting paths in the graph at each update step. Our approach makes crucial use of the fact that the optimization objectives involving matchings are stable. That is, a single update can change the value of the optimum matching by at most $1$. So if we find a matching close to the maximum matching at some update step, it remains close to maximum even after several updates to the graph. In case the current matching ceases to be a good approximation of the maximum matching, we rerun the static algorithm to get a matching that is close to optimum. This approach of occasionally rerunning an expensive routine is a common technique in dynamic graph data structures [HenzingerK99, HolmLT01, BaswanaKS12]. It is particularly powerful for approximating matchings since the stability property gives us freedom in choosing when to rerun the static algorithms. But rerunning the static algorithm occasionally works well only when the maximum matching in the graph is large. To deal with graphs having a small maximum matching, we introduce the concept of a core subgraph, which is the central concept of our paper. A core subgraph is a subgraph of a graph having the following two properties: first, its size is considerably smaller than the entire graph; second, the size of the maximum matching in the core subgraph is the same as the size of the maximum matching in the entire graph. We will crucially use these two properties in designing a dynamic algorithm for approximate matching. A detailed description of our algorithm, as well as other components of our data structure, is presented in Section 3 and Appendix LABEL:sec:improvementsdetails.
The main result for approximating the maximum cardinality matching can be stated as follows:
Theorem 1.1.
For any constant $\epsilon > 0$, there exists an algorithm which maintains a $(1+\epsilon)$ approximate matching in an unweighted dynamic graph in $O(\sqrt{m}\,\epsilon^{-2})$ worst case update time.
It can be argued that the stability property of matchings that we rely on is rare among optimization problems. For most other problems, such as shortest paths and minimum spanning trees, there exist updates that require immediate changes in the approximate solution maintained. For matchings, such updates exist in the weighted version, where the objective is the sum of weights over edges in the matching. Direct extensions of our approach have linear dependencies on $N$ in the update time, where $N$ is the maximum weight of an edge. This dependency can in fact be viewed as a quantitative measurement of the decrease in stability as we allow larger weights.
As a result, we investigate rounding/bucketing based approaches, which have logarithmic dependency on $N$, in Section LABEL:sec:weightedscaling. This was first studied for maintaining dynamic matchings by Anand et al. [AnandBGS12], who used dynamic maximal matchings as a subroutine in their algorithm. Directly substituting our result for maximum cardinality matching leads to immediate improvements in the approximation ratio, which is the second result in this paper.
Theorem 1.2.
For any constant $\epsilon > 0$, there exists an algorithm that maintains a $(3+\epsilon)$ approximate maximum weighted matching in a graph where edges have weights between $1$ and $N$ in $O(\sqrt{m}\,\epsilon^{-2}\log N)$ worst case update time.
Our $(3+\epsilon)$ approximation algorithm is derived from known schemes which bucket edges based on their weights. The rounding scheme we use in this algorithm is based on the algorithm designed by Anand et al. [AnandBGS12]. It is not clear whether any extension of this bucketing scheme will lead to a $(1+\epsilon)$ approximate matching. To do this, we devise a new rounding scheme which obtains arbitrarily good approximations of the maximum weighted matching, albeit at the cost of a much higher dependency on $\epsilon^{-1}$ in the running time.
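As a rough illustration of the generic bucketing idea (the function name and the exact bucketing rule below are our own simplification for exposition, not the scheme of Anand et al.), edge weights in $[1, N]$ can be grouped into geometrically growing levels, of which there are only $O(\log N/\epsilon)$:

```python
import math

def weight_level(w, eps):
    """Level of an edge of weight w when weights are grouped into
    geometric buckets [(1+eps)^k, (1+eps)^(k+1)). A sketch of the
    generic bucketing idea, not the exact scheme of Anand et al."""
    return math.floor(math.log(w, 1 + eps))

# With integer weights in [1, N] there are O(log N / eps) distinct levels.
eps, N = 0.5, 1024
levels = {weight_level(w, eps) for w in range(1, N + 1)}
assert len(levels) <= math.ceil(math.log(N, 1 + eps)) + 1
```

Rounding every weight down to its bucket boundary loses at most a $(1+\epsilon)$ factor per edge, which is what lets a cardinality subroutine be run per level.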
Theorem 1.3.
For any constant $\epsilon > 0$, there exists an algorithm that maintains a $(1+\epsilon)$ approximate maximum weighted matching in a graph where edges have weights between $1$ and $N$ in $O(\sqrt{m}\,\epsilon^{-2-O(1/\epsilon)}\log N)$ worst case update time.
As with the algorithm by Neiman and Solomon [neiman12deterministic], our algorithms are deterministic, and the update times guaranteed by them are worst case. However, for simplicity of presentation we will often start by describing the simpler amortized variants.
2 Preliminaries
We start by stating the notation that we will use, and reviewing some well-known results on matchings. An undirected graph is represented by $G = (V, E)$, where $V$ represents the set of vertices and $E$ represents the set of edges in the graph. We will use $n$ to denote the number of vertices $|V|$, and $m$ to denote the number of edges $|E|$.
A matching in a graph is a set of independent edges in the graph. Specifically, a subset of edges $M \subseteq E$ is a matching if no vertex of the graph is incident on more than one edge in $M$. A vertex is called unmatched if it is not incident on any edge in $M$; otherwise it is matched. Similarly, an edge is called matched if it is in $M$, and free otherwise. A vertex cover is a set of vertices in a graph such that each edge has at least one of its endpoints in the vertex cover.
The maximum cardinality matching (MCM) in a graph is the matching of maximum size. Similarly, given a set of edge weights $w$, we denote the weight of a matching $M$ by $w(M) = \sum_{e \in M} w(e)$. The maximum weight matching (MWM) in a graph is in turn the matching of maximum weight. We will use $M^*$ to denote an optimum matching for either of these two objectives, depending on context.
For measuring the quality of an approximate matching, we will use the notion of $\alpha$-approximation, which indicates that the objective (either cardinality or weight) given by the current solution is at least $1/\alpha$ of the optimum. Specifically, a matching $M$ is called an $\alpha$-MCM if $\alpha\cdot|M| \ge |M^*|$, and an $\alpha$-MWM if $\alpha\cdot w(M) \ge w(M^*)$.
Finding or approximating MCMs and MWMs in the static setting has been intensely studied. Nearly-linear time algorithms have been developed for finding $(1+\epsilon)$ approximations, and we will make crucial use of these algorithms in our data structure. For maximum cardinality matching, such an algorithm for bipartite graphs was introduced by Hopcroft and Karp [hopcroft1971n5], and extended to general graphs by Micali and Vazirani [micali1980v, Vazirani12].
Lemma 2.1.
There exists an algorithm ApproxMCM that, when given a graph with $m$ edges along with a parameter $\epsilon$, returns a $(1+\epsilon)$-MCM in $O(m\,\epsilon^{-1})$ time.
For approximate MWM, there has been some recent progress. Duan et al. [DuanP10, DuanPS11] designed an algorithm that finds a $(1+\epsilon)$ approximate maximum weighted matching in $O(m\,\epsilon^{-1}\log(\epsilon^{-1}))$ time.
Lemma 2.2.
[DuanP10, DuanPS11] There exists an algorithm ApproxMWM that, when given a graph with $m$ edges along with a parameter $\epsilon$, returns a $(1+\epsilon)$-MWM in $O(m\,\epsilon^{-1}\log(\epsilon^{-1}))$ time.
All logarithms in this paper are with base 2 unless mentioned otherwise.
3 MCMs Using Lazy Updates
3.1 Overview
To maintain a $(1+\epsilon)$ approximate matching, we exploit the stability of matchings and run the static matching algorithm ApproxMCM periodically. Our starting point is the observation that the size of the maximum matching changes by at most $1$ per update. This means that if we have a large matching that is close to the maximum, it will remain close to the maximum matching over a large number of updates. So we use the following approach: find a matching at a certain update step and keep it for a certain number of updates, during which it remains a good approximation of the maximum matching. This approach works well if the maximum matching is large to begin with. But if the maximum matching itself is small, we still need to run the static algorithm many times.
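The stability observation can be sanity-checked on a toy instance. The brute-force helper below is purely illustrative (it takes exponential time and is not any algorithm from this paper); it confirms that one edge insertion changes the optimum by at most one:

```python
from itertools import combinations

def max_matching_size(edges):
    """Brute-force maximum matching size on a tiny graph (demo only)."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            verts = [v for e in subset for v in e]
            if len(verts) == len(set(verts)):  # vertex-disjoint => a matching
                return k
    return 0

# A 4-cycle: maximum matching has size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
before = max_matching_size(edges)

# A single edge update (an insertion) changes the optimum by at most 1.
after = max_matching_size(edges + [(4, 5)])
assert abs(after - before) <= 1
```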
To overcome this, we show that instead of finding a maximum matching on the entire graph, we can use a small special subgraph such that the size of the maximum matching in this subgraph is the same as the size of the maximum matching in the entire graph. We call this subgraph a core subgraph, and it is the central idea of our $(1+\epsilon)$ approximate algorithm. As this subgraph is considerably smaller, the time needed to find a maximum matching on it is considerably lower. We will show that this core subgraph can be formed using a vertex cover of the entire graph. Specifically, we take the vertex-induced subgraph formed by the cover, along with some specially chosen edges out of vertices belonging to the cover.
But this leads to another question: how do we maintain a vertex cover in a dynamic graph? For this, we can use the algorithm of Neiman and Solomon [neiman12deterministic]. One of the invariants in this algorithm is that there are no edges between unmatched vertices, which means the set of matched vertices forms a $2$ approximate minimum vertex cover. Therefore reporting these vertices suffices for a vertex cover at any update step. However, note that our dependence on the above algorithm is not critical. Specifically, we design another simple algorithm, which does not depend on the algorithm of Neiman and Solomon [neiman12deterministic], for finding the core subgraph. A description of this, as well as modifications for handling edges with weights in a small range and for obtaining worst case bounds, is in Section LABEL:subsec:improvements, with details deferred to Appendix LABEL:sec:improvementsdetails.
3.2 Algorithm
We start with some notation that we will use in this section. We number the updates starting from $1$ and use the following notation:

$G_i$: the graph after the $i$-th update.

$M_i$: a matching computed on $G_i$.

$M_{i,j}$: let $D_{i,j}$ denote the set of all edges in $M_i$ that are deleted from the graph between update steps $i$ and $j$. We define $M_{i,j}$ to be $M_i \setminus D_{i,j}$, i.e., $M_{i,j}$ consists of all the edges in the matching $M_i$ that are not deleted between update steps $i$ and $j$.
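As a small illustration of this notation (the edge names below are hypothetical, chosen only for the example):

```python
# M_i is the matching computed at step i; D plays the role of D_{i,j},
# the matched edges deleted between steps i and j; M_{i,j} keeps the rest.
M_i = {("a", "b"), ("c", "d"), ("e", "f")}
D = {("c", "d")}          # deleted between update steps i and j
M_ij = M_i - D            # M_{i,j} = M_i \ D_{i,j}
assert M_ij == {("a", "b"), ("e", "f")}
```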
Also, we will use $M^*_i$ to denote the optimal matching at step $i$. The approximation guarantee of $M_{i,j}$ is as follows:
Lemma 3.1.
If $j - i \le \epsilon|M_i|$ and $M_i$ is a $(1+\epsilon)$-MCM in $G_i$, then for $i \le j$, $M_{i,j}$ is a $\frac{1+2\epsilon}{1-\epsilon}$-MCM in $G_j$.
Proof.
Suppose there were $k_1$ insertions and $k_2$ deletions among the updates between update steps $i$ and $j$. The assumption about $j - i$ implies that $k_1 + k_2 \le \epsilon|M_i|$. Since each insertion can increase the size of the maximum matching by at most $1$, we have $|M^*_j| \le |M^*_i| + k_1 \le (1+\epsilon)|M_i| + \epsilon|M_i|$. Also, each deletion can remove at most one edge from $M_i$, so $|M_{i,j}| \ge |M_i| - k_2 \ge (1-\epsilon)|M_i|$. The approximation ratio is then at most:
\[ \frac{|M^*_j|}{|M_{i,j}|} \le \frac{(1+2\epsilon)|M_i|}{(1-\epsilon)|M_i|} = \frac{1+2\epsilon}{1-\epsilon}. \]
∎
This fact has immediate algorithmic consequences for situations where the maximum matching is large. Suppose we computed a $(1+\epsilon)$-MCM $M_i$ for $G_i$; then $M_{i,j}$ is a $(1+O(\epsilon))$ approximate maximum matching as long as $j - i \le \epsilon|M_i|$. The $O(m\,\epsilon^{-1})$ cost of the call to ApproxMCM (given by Lemma 2.1) can then be charged to the next $\epsilon|M_i|$ updates, giving $O(m\,\epsilon^{-2}/|M_i|)$ time per update. When $|M_i|$ is large, this cost is fairly small. On the other hand, when $|M_i|$ is of constant size, this approach will make a call to ApproxMCM almost every update.
For graphs with a small maximum matching, we introduce the concept of a core subgraph. As mentioned previously, a core subgraph can be found by using a vertex cover $V'$.
Definition 3.2.
Given a graph $G = (V, E)$ and a vertex cover $V'$, a core subgraph consists of:

All edges between vertices in $V'$.

For each vertex $u \in V'$, the $|V'| + 1$ edges of maximum weight from $u$ to vertices in $V \setminus V'$ (or all of them, if fewer exist). In the case of an unweighted graph, these edges can be chosen arbitrarily.
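The construction above can be sketched in a few lines. This is a minimal illustration under our own assumptions (the helper name `core_subgraph`, the adjacency-map input, and the tiny example graph are all hypothetical, not from the paper):

```python
def core_subgraph(adj, cover, weight=None):
    """Sketch of the core subgraph construction: keep every edge inside
    the cover, plus for each cover vertex its |cover|+1 heaviest edges to
    non-cover vertices (chosen arbitrarily in the unweighted case).
    `adj` maps each vertex to its neighbor set; edges are frozensets."""
    k = len(cover) + 1
    edges = set()
    for u in cover:
        outside = [v for v in adj[u] if v not in cover]
        if weight is not None:
            outside.sort(key=lambda v: weight[frozenset((u, v))], reverse=True)
        for v in outside[:k]:                 # up to |cover|+1 outgoing edges
            edges.add(frozenset((u, v)))
        for v in adj[u]:
            if v in cover:                    # edge with both ends in the cover
                edges.add(frozenset((u, v)))
    return edges

# Star on center "a" plus the cover edge "a"-"b" (names hypothetical).
adj = {"a": {"b", 1, 2, 3, 4}, "b": {"a"},
       1: {"a"}, 2: {"a"}, 3: {"a"}, 4: {"a"}}
H = core_subgraph(adj, {"a", "b"})
# |cover| + 1 = 3 edges leaving "a", plus the internal edge {"a", "b"}.
assert len(H) == 4
```

Note that the subgraph has at most $|V'|^2 + |V'|(|V'|+1)$ edges, independent of $m$.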
An illustration of a core subgraph is shown in Figure 1. It can be used algorithmically as follows.
Lemma 3.3.
Let $H$ be a core subgraph of $G$ formed using a vertex cover $V'$. If $M$ is a $(1+\epsilon)$-MCM in $H$, then it is also a $(1+\epsilon)$-MCM in $G$.
Proof.
We first show that the size of the maximum matching in $H$ is the same as the size of the maximum matching in $G$. Among all maximum matchings in $G$, let $M^*$ be one that uses the maximum number of edges in $H$. For the sake of contradiction, suppose $M^*$ contains an edge $uv \notin H$. Since $V'$ is a vertex cover, one of $u$ or $v$ is in $V'$; without loss of generality assume it is $u$. By the construction rule, for $uv$ to not be included in $H$, there exist $|V'| + 1$ neighbors of $u$ in $H$ that are in $V \setminus V'$; let them be $w_1, \ldots, w_{|V'|+1}$. As the maximum matching in $G$ has size at most $|V'|$ and there are no edges with both endpoints in $V \setminus V'$, at most $|V'|$ vertices in $V \setminus V'$ can be matched. Therefore there exists an unmatched vertex $w_k$. Substituting $uv$ with $uw_k$ gives a maximum matching that uses one more edge in $H$, giving a contradiction.
Combining this with the fact that $H \subseteq G$ implies that the sizes of the maximum matchings in $H$ and $G$ are the same. Therefore any $(1+\epsilon)$-MCM in $H$ is also a $(1+\epsilon)$-MCM in $G$. ∎
As mentioned previously, we can find a vertex cover $V'$ in the graph by using the algorithm of Neiman and Solomon [neiman12deterministic]. Their algorithm maintains a 3/2 approximate matching in $O(\sqrt{m})$ update time in the worst case, which is less than the bound we are claiming. Whenever we need a vertex cover, we can report all the matched vertices in the 3/2 approximate matching. From now on we will assume oracle access to a vertex cover at any update step. A more detailed treatment of maintaining a small cover can be found in Appendix LABEL:subsec:vcover.
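The underlying fact, that the matched vertices of any maximal matching form a vertex cover of at most twice the minimum size, can be sketched as follows. Greedy matching here is only a self-contained stand-in, not the dynamic algorithm of Neiman and Solomon:

```python
def greedy_maximal_matching(edges):
    """Greedy maximal matching; its matched vertices form a vertex cover
    of size at most twice the minimum (a standard 2-approximation)."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching, matched

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
matching, cover = greedy_maximal_matching(edges)
# Maximality means no edge has both endpoints unmatched, so every edge
# has at least one endpoint in `cover`: it is a vertex cover.
assert all(u in cover or v in cover for u, v in edges)
```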
Any vertex cover $V'$ formed out of the matched vertices of a valid matching has the following property: $|V'| \le 2|M^*|$. This is because the size of any valid matching is at most the maximum matching size $|M^*|$. Therefore when $|M^*|$ is small, we only need to run the static algorithm given by Lemma 2.1 on a core subgraph of $G$. We can construct this subgraph in $O(|V'|^2)$ time by examining up to $|V'| + 1$ neighbors of each vertex in $V'$. Using Lemma 2.1, we can find a $(1+\epsilon)$ approximate matching in this subgraph in $O(|V'|^2\,\epsilon^{-1})$ time. Furthermore, Lemma 3.1 allows us to charge this time to the next $\epsilon|M_i|$ updates. Therefore, the cost charged per update can be bounded by $O(|V'|^2\,\epsilon^{-2}/|M_i|)$, which is small for small values of $|M_i|$. Our data structure maintains the following global states:

A matching $M$.

A counter $t$ indicating the number of updates until we make the next call to ApproxMCM.

A vertex cover $V'$ (maintained using the algorithm of Neiman and Solomon [neiman12deterministic]).
Upon initialization, $M$ is obtained by running the static algorithm on $G_0$, or can be empty if the graph starts empty. $t$ can be initialized to $\epsilon|M|$. Since we handle insertions and deletions in almost symmetric ways, we present them as a single routine Update, shown in Figure LABEL:fig:lazysimple.
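The lazy update loop can be sketched as follows. This is a hedged, simplified sketch under our own assumptions: greedy maximal matching stands in for ApproxMCM so the code is self-contained, the core-subgraph step is omitted, and the state layout and counter handling are illustrative choices, not the paper's exact routine:

```python
import math

def static_matching(edges):
    """Stand-in for ApproxMCM: a greedy maximal matching. The paper's
    subroutine is a (1+eps) approximation; greedy is used here only so
    that this sketch runs on its own."""
    used, M = set(), set()
    for u, v in edges:
        if u not in used and v not in used:
            M.add((u, v))
            used.update((u, v))
    return M

def update(state, edges, edge, is_insertion, eps):
    """Sketch of the single Update routine: apply the edge update,
    decrement the rebuild counter, and rerun the static algorithm once
    the counter runs out."""
    M, counter = state
    if is_insertion:
        edges.append(edge)
    else:
        edges.remove(edge)
        M = {e for e in M if e != edge}   # a deletion may drop one matched edge
    counter -= 1
    if counter <= 0:                      # matching may have drifted too far
        M = static_matching(edges)
        counter = max(1, math.floor(eps * len(M)))  # wait ~eps*|M| updates
    return M, counter

# Drive a few insertions; the maintained matching stays vertex-disjoint.
edges, state = [], (set(), 1)
for e in [(0, 1), (2, 3), (4, 5)]:
    state = update(state, edges, e, True, 0.5)
M, _ = state
verts = [v for e in M for v in e]
assert len(verts) == len(set(verts))
```

The key design point the sketch preserves is that no work beyond bookkeeping is done on most updates; the expensive recomputation happens only every $\Theta(\epsilon|M|)$ updates, which is what the charging argument of Lemma 3.1 pays for.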