We consider the problems of maintaining an approximate maximum matching and an approximate minimum vertex cover in a dynamic graph undergoing a sequence of edge insertions/deletions. Starting with the seminal work of Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. Very recently, extending the framework of Baswana, Gupta and Sen [FOCS 2011], Solomon [FOCS 2016] gave a randomized dynamic algorithm for this problem that has an approximation ratio of and an amortised update time of with high probability. This algorithm requires the assumption of an oblivious adversary, meaning that the future sequence of edge insertions/deletions in the graph cannot depend in any way on the algorithm’s past output. A natural way to remove the assumption on oblivious adversary is to give a deterministic dynamic algorithm for the same problem in update time. In this paper, we resolve this question.
We present a new deterministic fully dynamic algorithm that maintains a approximate minimum vertex cover and maximum fractional matching, with an amortised update time of . Previously, the best deterministic algorithm for this problem was due to Bhattacharya, Henzinger and Italiano [SODA 2015]; it had an approximation ratio of and an amortised update time of . Our results also extend to a fully dynamic approximate algorithm with amortized update time for the hypergraph vertex cover and fractional hypergraph matching problem where every hyperedge has at most vertices.
1 Introduction
Computing a maximum cardinality matching is a fundamental problem in computer science with applications, for example, in operations research, computer science, and computational chemistry. In many of these applications the underlying graph can change. Thus, it is natural to ask how quickly a maximum matching can be maintained after a change in the graph. As nodes usually change less frequently than edges, dynamic matching algorithms usually study the problem where edges are inserted and deleted, which is called the (fully) dynamic matching problem^{1}^{1}1Node updates are usually handled through the insertion and deletion of isolated nodes, but there has been also some work on the node insertionsonly or node deletionsonly problem [7].. The goal of a dynamic matching algorithm is to maintain either an actual matching (called the matching version) or the value of the matching (called the value version) as efficiently as possible.
Unfortunately, the problem of maintaining even just the value of the maximum cardinality matching is hard: There is a conditional lower bound that shows that no (deterministic or randomized) algorithm can achieve at the same time an amortized update time of and a query (for the size of the matching) time of for any small [10] (see [1] for conditional lower bounds using different assumptions). The best upper bound is Sankowski’s randomized algorithm [16] that solves the value problem in time per update and per query. Thus, it is natural to study the dynamic approximate maximum matching problem, and there has been a large body [13, 2, 12, 9, 5, 6, 17] of work on it and its dual, the approximate vertex cover problem, in the last few years.
Dynamic algorithms can be further classified into two types: Algorithms that require an oblivious (aka nonadaptive) adversary, i.e., an adversary that does not base future updates and queries on the answers to past queries, and algorithms that work even for an adaptive adersary. Obviously, the earlier kind of algorithms are less general than the later. Unfortunately, all randomized dynamic approximate matching and vertex cover algorithms so far either are not known to work with an adaptive adversary [13] or do not work for an adaptive adversary [2, 17]. Solomon [17] gives the best such randomized algorithm: It achieves amortized update time (with high probability) and query time for maintaining a approximate maximum matching and a approximate minimum vertex cover. He also extends this result to the dynamic distributed setting (à la Parter, Peleg, and Solomon [14]) with the same approximation ratio and update cost.
In this paper we present the first deterministic algorithm that maintains an approximation to the size of the maximum matching in amortized update time and query time. We also maintain an approximate vertex cover in the same update time. Note that this is the first deterministic dynamic algorithm with constant update time for any nontrivial dynamic graph problem. This is significant as for other dynamic problems such as the dynamic connectivity problem or the dynamic planarity testing problem there are nonconstant lower bounds in the cell probe model on the time per operation [11, 15]. Thus, our result shows that no such lower bound can exist for the dynamic approximate matching problem.
There has been prior work on deterministic algorithms for dynamic approximate matching, but they all have update time: One line of work concentrated on reducing the approximation ratio as much as possible, or at least below 2: Neiman and Solomon [12] achieved an update time for maintaining a approximate maximum matching and approximate minimum vertex cover. This result was improved by Gupta and Peng [9] that gave an algorithm with update time for maintaining a approximate maximum matching. Recently, Bernstein and Stein [3] gave an algorithm with amortised update time for maintaining a approximate maximum matching. Another line of work, and this paper fits in this line, concentrated on getting a constant approximation while reducing the update time to polylogarithmic: Bhattacharya, Henzinger and Italiano [5] achieved an update time for maintaining a approximate maximum fractional matching and a approximate minimum vertex cover. Note that any fractional matching algorithm solves the value version of the dynamic matching problem while degrading the approximation ratio by a factor of . Thus, the algorithm in [5] maintains a approximation of the value of the maximum matching. The fractional matching in this algorithm was later “determinically rounded” by Bhattacharya, Henzinger and Nanongkai [6] to achieve a update time for maintaining a approximate maximum matching.
Our method also generalizes to the hypergraph vertex (set) cover and hypergraph fractional matching problem which was considered by [4]. In this problem the hyperedges of a hypergraph are inserted and deleted over time. indicates the maximum cardinality of any hyperedge. The objective is to maintain a hypergraph vertex cover, that is, a set of vertices that hit every hyperedge. Similarly a fractional matching in the hypergraph is a fractional assignment (weights) to the hyperedges so that the total weight faced by any vertex is at most . We give an approximate algorithm with amortized update time.
1.1 Our Techniques
Our algorithm builds and simplifies the framework of hierarchical partitioning of vertices proposed by Onak and Rubinfeld [13], which was later enhanced by Bhattacharya, Henzinger and Italiano [5] to give a deterministic fullydynamic approximate vertex cover and maximum matching in amortized update time. The hierarchical partition divides the vertices into many levels and maintains a fractional matching and vertex cover. To prove that the approximation factor is good, Bhattacharya et. al.[5] also maintain approximate complementary slackness conditions. An edge insertion or deletion can disrupt these conditions (and indeed at times the feasibility of the fractional matching), and a fixing procedure maintains various invariants. To argue that the update time is bounded, [5] give a rather involved potential function argument which proves that the update time bounded by , the number of levels, and is thus . It seems unclear whether the update time can be argued to be a constant or not.
Our algorithm is morally similar to that in Bhattacharya et. al.[5], except we are a bit stricter when we fix nodes. As in [5], whenever an edge insertion or deletion or a previous update violates an invariant condition, we move nodes across the partitioning (incurring update costs), but after a node is fixed we often ensure it satisfies a stronger condition than what the invariant requires. For example, suppose a node violates the upper bound of a fractional matching, that is, the total fractional weight it faces becomes larger than , then the fixing subroutine will at the end ensure that the final weight the node faces is significantly less than . Morally, this slack allows us to make an charging argument of the following form – if this node violates the upper bound again, then a lot of “other things” must have occurred to increase its weight (for instance, maybe edge insertions have occurred). Such a charging argument, essentially, allows us to bypass the update time to an update time. The flip side of the slack is that our complementary slackness conditions become weak, and therefore instead of a approximation we can only ensure an approximation. The same technique easily generalizes to the hypergraph setting. It would be interesting to see other scenarios where approximation ratios can be slightly traded in for huge improvements in the update time.
Remark.
Very recently, and independently of our work, Gupta et al. [8] obtained a approximation algorithm for maximum fractional matching and minimum vertex cover in a hypergraph in amortized update time.
2 Notations and Preliminaries
Since the hypergraph result implies the graph result, henceforth we consider the former problem. The input hypergraph has nodes. Initially, the set of hyperegdes is empty, i.e., . Subsequently, an adversary inserts or deletes hyperedges in the hypergraph . The nodeset remains unchanged with time. Each hyperedge contains at most nodes. We say that is the maximum frequency of a hyperedge. If a hyperedge has a node as one of its endpoints, then we write . For every node , we let denote the set of hyperedges that are incident on . In this fully dynamic setting, our goal is to maintain an approximate maximum fractional matching and an approximate minimum vertex cover in . The main result of this paper is summarized in the theorem below.
Theorem 2.1.
We can maintain an approximate maximum fractional matching and an approximate minimum vertex cover in the input hypergraph in amortized update time.
To prove Theorem 2.1, throughout the rest of this paper we fix two parameters and as follows.
(1) 
We will maintain a hierarchical partition of the nodeset into levels , where . We let denote the level of a node . We define the level of a hyperedge to be the maximum level among its endpoints, i.e., . The levels of nodes (and therefore hyperedges) induce the following weights on hyperedges: for every hyperedge . For all nodes , let be the total weight received by from its incident hyperedges. We will satisfy the following invariant after processing a hyperedge insertion or deletion.
Invariant 2.2.
Every node at level has weight . Every node at level has weight .
Corollary 2.3.
Under Invariant 2.2, the nodes in levels form a vertex cover in .
Proof.
Suppose that there is a hyperedge with for all . Then we also have and . So for every node , we get: . This violates Invariant 2.2. ∎
Invariant 2.2 ensures that ’s form a fractional matching satisfying approximate complementary slackness conditions with the vertex cover defined in Corollary 2.3. This gives the following theorem.
Theorem 2.4.
In our algorithm, the hyperedge weights form a approximate maximum fractional matching, and the nodes in levels form a approximate minimum vertex cover.
Proof.
(Sketch) Say that a fractional matching, which assigns a weight to every hyperedge , is maximal iff for every hyperedge there is some node such that . Let be the set of all tight nodes in this fractional matching. Clearly, the set of nodes form a vertex cover in . It is well known that the sizes of such a fractional matching and the corresponding vertex cover are within a factor of each other. The key observation is that under Invariant 2.2, the fractional matching is approximately maximal, meaning that for every hyperedge there is some node such that . Further, the set of nodes in levels are approximately tight, since each of them has weight at least . ∎
We introduce some more notations. For any vertex , let be the total upweight received by , that is, weight from those incident hyperedges whose levels are strictly greater than . For all levels , we let and respectively denote the values of and if the node were to go to level and the levels of all the other nodes were to remain unchanged. More precisely, for every hyperedge and node , we define to be the maximum level among the endpoints of that are distinct from . Then we have: and . Our algorithm maintains a notion of time such that in each time step the algorithm performs one elementary operation. We let denote the weight (resp, upweight) faced by right before the operation at time . Similarly define , and .
Different states of a node.
Before the insertion/deletion of a hyperedge in , all nodes satisfy Invariant 2.2. When a hyperedge is inserted (resp. deleted), it increases (resp. decreases) the weights faced by its endpoints. Accordingly, one or more endpoints can violate Invariant 2.2 after the insertion/deletion of a hyperedge. Our algorithm fixes these nodes by changing their levels, which may lead to new violations, and so on and so forth. To describe the algorithm, we need to define certain states of the nodes.
Definition 2.5.
A node is DownDirty iff and . A node is UpDirty iff either {} or {}. A node is Dirty if it is either DownDirty or UpDirty .
Invariant 2.2 is satisfied if and only if no node is Dirty. We need another definition of SuperClean nodes which will be crucial.
Definition 2.6.
A node is SuperClean iff one of the following conditions hold: (1) We have and , or (2) We have , , and .
Note that a SuperClean node with has a stronger upper bound on the weight it faces and also an even stronger upper bound on the upweight it faces. At a high level, one of our subroutines will lead to SuperClean nodes, and the slack in the parameters is what precisely allows us to perform an amortized analysis in the update time.
Data Structures.
For all nodes and levels , let denote the set of hyperedges incident on that are at level . Note that for all . We will maintain the following data structures. (1) For every level and node , we store the set of hyperedges as a doubly linked list, and also maintain a counter that stores the number of hyperedges in . (2) For every node , we store the weights and , its level and an indicator variable for each of the states defined above. (3) For each hyperedge , we store the values of its level and therefore its weight . Finally, using appropriate pointers, we ensure that a hyperedge can be inserted into or deleted from any given linked list in constant time. We now state two lemmas that will be useful in analysing the update time of our algorithm.
Lemma 2.7.
Suppose that a node is currently at level and we want to move it to some level . Then it takes time to update the relevant data structures.
Proof.
If a hyperedge is not incident on the node , then the data structures associated with are not affected as moves up from level to level . Further, among the hyperedges , only the ones with get affected (i.e., the data structures associated with them need to be changed) as moves up from level to level . Finally, for every hyperedge that gets affected, we need to spend time to update the data structures for its endpoints. ∎
Lemma 2.8.
Suppose that a node is currently at level and we want to move it down to some level . Then it takes time to update the relevant data structures.
Proof.
If a hyperedge is not adjacent to the node , then the data structures associated with are not affected as moves down from level to level . Further, among the hyperedges , only the ones with get affected (i.e., the data structures associated with them need to be changed) as moves down from level to level . Finally, for every hyperedge that gets affected, we need to spend time to update the data structures for its endpoints. ∎
3 The algorithm: Handling the insertion/deletion of a hyperedge in the input graph
Initially, the graph is empty, every node is at level , and Invariant 2.2 holds. By induction, we will ensure that the following property is satisfied just before the insertion/deletion of a hyperedge.
Property 3.1.
No node is Dirty.
Insertion of a hyperedge . When a hyperedge is inserted into the input graph, it is assigned a level and a weight . The hyperedge gets inserted into the linked lists for all nodes . Furthermore, for every node , the weights increases by . For every endpoint , if , then the weight increases by . As a result of these operations, one or more endpoints of can now become UpDirty and Property 3.1 might no longer be satisfied. Hence, in order to restore Property 3.1 we call the subroutine described in Figure 1.
Deletion of a hyperedge . When a hyperedge is deleted from the input graph, we erase all the data structures associated with it. We remove the hyperedge from the linked lists for all , and erase the values and . For every node , the weight decreases by . Further, for every endpoint , if , then we decrease the weight by . As a result of these operations, one or more endpoints of can now become DownDirty, and Property 3.1 might get violated. Hence, in order to restore Property 3.1 we call the subroutine described in Figure 1.
The algorithms is simple – as long as some Dirty node remains, it runs either FIXUPDIRTY or FIXDOWNDIRTY to take care of UpDirty and DownDirty nodes respectively. One crucial aspect is that we prioritize UpDirty nodes over DownDirty ones.
FIXDOWNDIRTY(): Suppose that when the subroutine is called at time . By definition, we have and . We need to increase the value of if we want to ensure that no longer remains Dirty. This means that we should decrease the level of , so that some of the hyperedges incident on can increase their weights. Accordingly, we find the largest possible level such that , and move the node down to this level . If no such level exists, that is, if even , then we move the node down to level . Note that in this case there is no hyperedge with for such a hyperedge would have when is moved to level . In particular, we get .
Claim 3.2.
FIXDOWNDIRTY() makes the node SuperClean .
Proof.
Suppose node was at level when FIXDOWNDIRTY() was called at time and it ended up in level . If , then , and so becomes SuperClean after time . Henceforth assume . Since is the maximum level where , we have . Now note that since weights of hyperedges can increase by at most a factor when one end point drops exactly one level. This implies . Together we get that after time when is fixed to level , we have .
Now we argue about the upweights. Note that every hyperedge that contributes to must have . The weight of such a hyperedge remains unchanged as moves from level to . We infer that . Therefore after time when is fixed at level , we have In sum, becomes SuperClean after time . ∎
FIXUPDIRTY(): Suppose that when the subroutine is called at time . At this stage, we have either {} or {}. We need to increase the level of so as to reduce the weight faced by it. Accordingly, we find the smallest possible level where and move up to level . Such a level always exists because .
Claim 3.3.
After a call to the subroutine FIXUPDIRTY() at time , we have .
Proof.
Suppose that the node moves up from level to level . We now consider four possible cases.

Case 1. We have .
Since is the minimum possible level where , and since , we infer that . As the node moves up from level to level , the weight it faces can drop by at most a factor of . Hence, we get: . Therefore after time when the node moves to level , we have .

Case 2. We have , and there is an edge with at time . In this case we have in the beginning of timestep , since the edge with has weight . The rest of the proof is exactly similar to Case 1.

Case 3. We have , there is no edge with at time , and .
The value of does not change as moves up from level to level . Thus, we get: , for the node is UpDirty at level at time . Since the node does not move further up than level , we get: .

Case 4. We have , there is no edge with at time , and .
Since the node does not stop at level , we get: . Hence, we infer that is the minimum possible level where . As the node moves up from level to level , the weight it faces can drop by at most a factor of . Hence, we get: . Therefore, we have .
∎
It is clear that if and when FIXDIRTY() terminates, we are in a state which satisfies Invariant 2.2. In the next section we show that after hyperedge insertions and deletions, the total update time is indeed and so our algorithm has amortized update time.
4 Analysis of the algorithm
Starting from an empty graph , fix any sequence of updates. The term “update” refers to the insertion or deletion of a hyperedge in . We show that the total time taken by our algorithm to handle this sequence of updates is . We also show that our algorithm has an approximation ratio of .
Relevant counters.
We define three counters and . The first two counters account for the time taken to update the data structures while the third accounts for the time taken to find the index in both FIXDOWNDIRTY() and FIXUPDIRTY(). Initially, when the input graph is empty, all the three counters are set to zero. Subsequently, we increment these counters as follows.

Suppose node moves from level to level upon a call of FIXUPDIRTY(). Then for every hyperedge with , we increment by one.

Suppose node moves from level to level upon a call of FIXDOWNDIRTY(). Then for every hyperedge with , we increment the value of by one. Furthermore, we increment the value of by .
The next lemma upper bounds the total time taken by our algorithm in terms of the values of these counters. The proof of Lemma 4.1 appears in Section 4.5.
Lemma 4.1.
Our algorithm takes time to handle a sequence of updates.
We will show that and , which will imply an amortized update time of for our algorithm. Towards this end, we now prove three lemmas that relate the values of these three counters.
Lemma 4.2.
We have: .
Lemma 4.3.
We have: .
Lemma 4.4.
We have: .
4.1 Epochs, jumps and phases
Fix any node . An epoch of this node is a maximal timeinterval during which the node stays at the same level. An epoch ends when either (a) the node moves up to a higher level due to a call to FIXUPDIRTY, or (b) the node moves down to a lower level due to a call to the subroutine FIXDOWNDIRTY. These events are called jumps. Accordingly, there are UpJumps and DownJumps. Next, we define a phase of a node to be a maximal sequence of consecutive epochs where the levels of the node keep on increasing. The phase of a node is denoted by . Suppose that a phase consists of consecutive epochs of at levels . Then we have: . By definition, the epoch immediately before must have level larger than implying FIXDOWNDIRTY() landed at level . Similarly, the epoch subsequent to is smaller than implying FIXDOWNDIRTY() is called again.
4.2 Proof of Lemma 4.2
Suppose that a node moves down from (say) level to level at time (say) due to a call to the subroutine FIXDOWNDIRTY. Let and respective denote the increase in the counters and due to this event. We will show that , which will conclude the proof of the lemma. By definition, we have:
(4) 
Let denote the set of hyperedges incident on that contribute to the increase in due to the DownJump of at time . Specifically, we have: . Each edge contributes a weight towards the nodeweight . Thus, we get: . The last inequality holds since is DownDirty in the beginning of timestep . Rearranging the terms, we get: . The lemma now follows from equation (4).
4.3 Proof of Lemma 4.3
Suppose we call FIXDOWNDIRTY() at some time . Let just before the call, and let be the epoch with level of being . Let at time . By definition, increases by during the execution of FIXDOWNDIRTY(); let us call this increase . Thus, we have:
(5) 
Consider the time between and let us address how can decrease in this time while ’s level is fixed at . Either some hyperedge incident on is deleted, or some hyperedge incident on it decreases its weight. In the latter case, the level of such an hyperedge must increase above . Let denote the number of hyperedge deletions incident on during the timeinterval . Let denote the increase in the value of during the timeinterval due to the hyperedges incident on . Specifically, at time , we have . Subsequently, during the timeinterval , we increase the value of by one each time we observe that a hyperedge increases its level to something larger than . Note that throughout the timeinterval . Hence, each time we observe an unit increase in , this decreases the value of by at most . Just before time , the node made either an UpJump, or a DownJump. Hence, Claims 3.3 and 3.2 imply that . As at time , we infer that has dropped by at least during the timeinterval . In order to account for this drop in , the value of must have increased by at least . Since at time , at time we get: . Hence, (5) gives us:
(6) 
Each time the value of increases due to FIXDOWNDIRTY on some node, inequality (6) applies. If we sum all these inequalities, then the left hand side (LHS) will be exactly equal to the final value of , and the right hand side (RHS) will be at most . The factor appears in front of because each hyperedge deletion can contribute times to the sum , once for each of its endpoints. Similarly, the factor appears in front of because whenever the level of an hyperedge moves up due to the increase in the level of some endpoint , this contributes at most times to the sum , once for every other endpoint . Since LHS RHS, we get: . This concludes the proof of the lemma.
4.4 Proof of Lemma 4.4
Fix a node and consider a phase where goes through levels . Thus, the node enters the level at time (say) due to a call to FIXDOWNDIRTY. For , the node performs an UpJump at time (say) from the level to the level , due to a call to FIXUPDIRTY. This implies that . The phase ends, say, at time when the node again performs a DownJump from the level due to a call to FIXDOWNDIRTY.
Let denote the total increase in the value of the counter due to the phase . For , let denote the increase in the value of the counter due to the UpJump of at time . Thus, we have:
(7) 
We define two more counters: , . The former counter equals the number of hyperedge insertions/deletions incident on during the timeinterval . The latter counter equals the increase in the value of due to the hyperedges incident on during the timeinterval . Alternately, these two counters can be defined as follows. At time , we set and . Subsequently, whenever at any time , a hyperedge incident on gets inserted into or deleted from the input graph, we increment the value of by one. Further, whenever at any time , a hyperedge incident on gets its level decreased because of a DownJump of some node , we increment the value of by one.
Since enters the level at time due to a call to FIXDOWNDIRTY, Claim 3.2 implies that:
(8) 
Our main goal is to upper bound in terms of the final values of the counters and .
Claim 4.6.
For , .
Proof.
By Claim 3.3 we have , that is, the total weight incident on after it has gone through FIXUPDIRTY at time is at most . Now, each hyperedge which contributes to has weight, right after time , precisely . Putting together, we get . ∎
Using the above claim, we get the following upper bound on the sum of all but the last .
Claim 4.7.
We have: .
Proof.
If , then we have an empty sum , and hence the claim is trivially true. For the rest of the proof, we suppose that , which implies that . Thus, we get:
(9) 
To continue with the proof, summing over the inequalities from Claim 4.6, we get:
(10) 
Since the node performs an UpJump at time from level (see equation (9)), the node must be UpDirty at that time. It follows that . From equation (8), we have . Thus, during the time interval the value of increases by at least . This can be either due to (a) some hyperedge incident to being inserted, or (b) some hyperedge gaining its weight because of some endpoint going down. The former increases and the latter increases . Furthermore, the increase in due to every such hyperedge is at most . This gives us the following lower bound.
(11) 
The claim follows from equation (11). ∎
Claim 4.8.
We have: .
From equations 1, 7 and Claims 4.7, 4.8, we get:
(12) 
Using equation (12), now we can prove Lemma 4.4. For every phase of a node , as per equation (12) we can charge the increase in to the increase in corresponding to hyperedges incident of . Summing up over all nodes and phases, the LHS gives while the RHS gives . The coefficient before comes from the fact that every hyperedge insertion can contribute times to the RHS, once for each of its endpoints. The coefficient before comes from the fact that whenever the level of a hyperedge decreases due to the DownJump of a node , this event contributes at most times to the RHS: once for every other endpoint . Thus, we get:
4.4.1 Proof of Claim 4.8
We fork into two cases.
Case 1:
.
Case 2:
.
We start by noting that the weight of a node is always less than at every time.
Claim 4.9.
We have: at every time .
Proof.
The crucial observation is that fixing an UpDirty node never increases the weight of any node. Furthermore, a DownDirty node gets fixed only if no other node is UpDirty (see Figure 1).
In the beginning of timestep , the input graph is empty, and we clearly have . By induction, suppose that in the beginning of some timestep . Now, during timestep , the weight can increase only if one of the following events occur:

(a) A hyperedge containing gets inserted into the graph. This increase the value of by at most one. Thus, we have .

(b) We call the subroutine FIXDOWNDIRTY for some node . Note that fixing a DownDirty node can increase the weight by at most one, and hence this can increase the weight of a neighbour of also by at most one. It again follows that .
If , then we are back in the same situation as in timestep . Otherwise, if , then is UpDirty in the beginning of timestep . In this case, no DownDirty node gets fixed (and no hyperedge gets inserted) until we ensure that becomes smaller than one. Hence, the value of always remains smaller than . ∎
Claim 4.10.
We have: .
Proof.
While making the UpJump at time , the node does not stop at level . The claim follows. ∎
Claim 4.11.
We have: .
Proof.
Suppose that the claim does not hold. Then we get:
(15) 
The first inequality holds since the weights of the hyperedges with get scaled by at least a factor of when moves from level to , and the rest can only go down. The second inequality holds since by Claim 4.9 and the assumption . The last inequality holds since by equation (1). However, equation (15) contradicts Claim 4.10. ∎
Claim 4.11 states that . Since , equation (8) implies that . Thus during the timeinterval , the value of increases by at least . This increase can occur in three ways: (1) a hyperedge is inserted with before the UpJump at time (which contributes to ), (2) some hyperedge gains weight due to a DownJump of some node (say) , and after the DownJump (which contributes to ), and (3) some hyperedge had