Random Walks that Find Perfect Objects
and the Lovász Local Lemma
Abstract
We give an algorithmic local lemma by establishing a sufficient condition for the uniform random walk on a directed graph to reach a sink quickly. Our work is inspired by Moser’s entropic method proof of the Lovász Local Lemma (LLL) for satisfiability and completely bypasses the Probabilistic Method formulation of the LLL. In particular, our method works when the underlying state space is entirely unstructured. Similarly to Moser’s argument, the key point is that the inevitability of reaching a sink is established by bounding the entropy of the walk as a function of time.
1 Introduction
Let Ω be a (large) set of objects and let F = {f_1, f_2, …, f_m} be a collection of subsets of Ω, each subset comprising objects sharing some (negative) feature. We will refer to each subset f_i as a flaw and, following linguistic rather than mathematical convention, say that f_i is present in σ if σ ∈ f_i. We will say that an object σ ∈ Ω is flawless (perfect) if no flaw is present in σ. For example, given a CNF formula on n variables with clauses c_1, c_2, …, c_m, we can define a flaw f_i for each clause c_i, comprising the subset of truth assignments violating c_i.
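As a toy illustration of this correspondence for satisfiability, the sketch below (the clause encoding and all names are ours, not the paper's) builds one flaw per clause as the set of assignments violating it:

```python
from itertools import product

# A literal (v, s) asks variable v to take Boolean value s.
n = 3
clauses = [[(0, True), (1, False)],      # x0 OR (NOT x1)
           [(1, True), (2, True)]]       # x1 OR x2

omega = list(product([False, True], repeat=n))   # the object set

def violates(sigma, clause):
    # A clause is violated iff every literal disagrees with sigma.
    return all(sigma[v] != s for (v, s) in clause)

# Flaw f_i = subset of assignments violating clause c_i.
flaws = [{sigma for sigma in omega if violates(sigma, c)} for c in clauses]

def flawless(sigma):
    return not any(sigma in f for f in flaws)
```

Here a satisfying assignment is exactly a flawless object: it lies outside every flaw set.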
Given Ω and F we can often prove the existence of flawless objects using the Probabilistic Method. Indeed, in many interesting cases this is the only way we know how to do so. To employ the Probabilistic Method we introduce a probability measure μ on Ω and consider the collection A = {A_1, …, A_m} of (“bad”) events corresponding to the flaws (one event per flaw). The existence of flawless objects is then equivalent to the intersection of the complements of the bad events having strictly positive probability. Clearly, such positivity always holds if the events in A are independent and none of them has measure 1. One of the most powerful tools of the Probabilistic Method is the Lovász Local Lemma (LLL), asserting that such positivity also holds under a condition of limited dependence among the events in A. The idea of the Local Lemma was first circulated by Lovász in the early 1970s in an unpublished note. It was published by Erdős and Lovász in [10]. The general form below is also due in unpublished form to Lovász and was given by Spencer in [27].
General LLL.
Let A = {A_1, A_2, …, A_m} be a set of events and for each i ∈ [m] let D(i) ⊆ [m] \ {i} denote the set of indices of the dependency set of A_i, i.e., A_i is mutually independent of all events in {A_j : j ∉ D(i) ∪ {i}}. If there exist positive real numbers x_1, …, x_m, each less than 1, such that for all i ∈ [m],

Pr[A_i] ≤ x_i ∏_{j ∈ D(i)} (1 − x_j)   (1)

then the probability that none of the events in A occurs is at least ∏_{i=1}^{m} (1 − x_i) > 0.
Remark 1.
In [11], Erdős and Spencer noted that one can replace the LLL’s requirement that each bad event be dependent with few other bad events with the weaker requirement that each bad event be negatively correlated with few other bad events. That is, for each bad event A_i there should only be few other bad events whose non-occurrence may boost A_i’s probability of occurring; the non-occurrence of any subset of the remaining events should leave A_i either unaffected, or make it less likely. A natural setting for the lopsided LLL arises when one seeks a collection of permutations satisfying a set of constraints and considers the uniform measure on them. While the bad events (constraint violations) are now typically densely dependent (as fixing the image of even just one element affects all others), one can often establish sufficient negative correlation among the bad events to apply the lopsided LLL.
Lopsided LLL ([11]).
Let A = {A_1, A_2, …, A_m} be a set of events. For each i ∈ [m], let L(i) be a subset of [m] \ {i} such that Pr[A_i | ⋂_{j ∈ S} Ā_j] ≤ Pr[A_i], for every S ⊆ [m] \ (L(i) ∪ {i}). If there exist positive real numbers x_1, …, x_m, each less than 1, such that for all i ∈ [m],

Pr[A_i] ≤ x_i ∏_{j ∈ L(i)} (1 − x_j)   (2)

then the probability that none of the events in A occurs is at least ∏_{i=1}^{m} (1 − x_i) > 0.
In the context of the general LLL it is natural to define the dependence digraph of a collection of events A as having a vertex for each event and an arc i → j iff j ∈ D(i), noting that there exist systems of events whose dependence digraph contains the arc i → j but not the arc j → i. The lopsided dependence digraph is the sparsification of the dependence digraph wherein each event points only to the events that may boost it, i.e., the elements of the set L(i). Let G be the undirected graph that results by ignoring arc direction in the lopsided dependence digraph. Also, observe that condition (2) can be trivially rewritten (expanded) as

Pr[A_i] ≤ ψ_i / ∑_{S ⊆ {i} ∪ L(i)} ∏_{j ∈ S} ψ_j ,   (3)

where ψ_i = x_i / (1 − x_i).
Relatively recently, Bissacot et al. [4] improved the lopsided LLL when the graph G is not triangle-free. Specifically, they showed that the conclusion of the lopsided LLL remains valid if the summation in (3) is restricted to those sets S which are independent in G.
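The restricted summation is mechanical to evaluate on small instances. The hedged sketch below (function and variable names are ours) sums ∏_{j∈S} ψ_j over only those subsets of a neighborhood that are independent in G:

```python
from itertools import combinations
from math import prod

def sum_over_independent_subsets(nbhd, E, psi):
    """Sum of prod(psi[j] for j in S) over subsets S of nbhd
    containing no edge of E (edges given as frozensets)."""
    total = 0.0
    for r in range(len(nbhd) + 1):
        for S in combinations(nbhd, r):
            if all(frozenset(p) not in E for p in combinations(S, 2)):
                total += prod(psi[j] for j in S)   # empty product = 1
    return total
```

On a triangle with ψ ≡ 1/2, only the empty set and the three singletons survive, so the restricted sum is 1 + 3·(1/2), whereas the unrestricted sum would be (1 + 1/2)^3.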
1.1 Constructive Versions
As one can imagine, after proving that contains flawless objects via the LLL it is natural to ask if some flawless object can be found efficiently. Making the LLL constructive has been a long quest, starting with the work of Beck [3], with subsequent works of Alon [2], Molloy and Reed [20], Czumaj and Scheideler [6], Srinivasan [28] and others. Each such work established a method for finding flawless objects efficiently, but in all cases under significant additional conditions relative to the LLL. The breakthrough was made by Moser [21] who showed that a shockingly simple algorithm nearly matches the LLL condition for CNF formulas. Very shortly afterwards, Moser and Tardos in a landmark paper [22] made the general LLL constructive for all product measures over explicitly presented variables.
Specifically, in the so-called variable setting of [22], each event A_i is associated with the set of variables vbl(A_i) that determine it, so that j ∈ D(i) iff vbl(A_i) ∩ vbl(A_j) ≠ ∅. Moser and Tardos proved that if the condition (1) of the general LLL holds, then repeatedly selecting any occurring event (flaw present) and resampling every variable in vbl(A_i) independently of all others, leads to an elementary event where no event in A holds (flawless object) after a polynomial number of resamplings. Pegden [25] strengthened the result of Moser and Tardos [22] by showing that its conclusion still holds if the condition (1) of the general LLL is replaced by (3), where the summation is restricted to independent sets, i.e., under the condition of Bissacot et al. [4] mentioned above. Kolipaka and Szegedy in [17] showed that the algorithm of Moser and Tardos, in fact, converges in polynomial time under the criterion of Shearer [26], the most generous condition on the probabilities of the events under which the conclusion of the LLL holds for symmetric dependency graphs. As the criterion of Shearer is not efficiently verifiable, Kolipaka, Szegedy and Xu [16] gave a series of intermediate conditions, between the general LLL and Shearer’s criterion, for the algorithm of [22] to terminate, most notably the efficiently verifiable Clique LLL. On the other hand, with the notable exception of CNF-SAT, none of these results applies to the lopsided LLL, which remained non-constructive.
Very recently Harris and Srinivasan [14] made the lopsided LLL constructive for the uniform measure on Cartesian products of permutations. Among other results this yielded an efficient algorithm for constructing Latin Squares when each color appears at most 27n/256 times, matching the best non-constructive bound due to Bissacot et al. [4] (who improved the original bound of Erdős and Spencer [11] by exploiting the local density of the lopsided dependency graph). Harris and Srinivasan [14] pointed out that while the permutation setting is the most common use case, the lopsided LLL has been gainfully applied to other settings [18, 19] including hypergraph matchings, set partitions and spanning trees, and asked if their results can be extended beyond permutations. In particular, they left as a canonical open problem whether the results of Dudek, Frieze and Ruciński [9] regarding Hamilton Cycles in edge-colored hypergraphs can be made constructive.
2 A New Framework
Inspired by the breakthrough of Moser [21] we take a more direct approach to finding flawless objects, bypassing the probabilistic formulation of the existence question. Specifically, we replace the measure on Ω by a directed graph D on Ω and we seek flawless objects by taking random walks on D. With this in mind, we refer to the elements of Ω as states. As in Moser’s work [21], each state transformation (step of the walk) will be taken to address a flaw present at the current state σ. Naturally, a step may eradicate other flaws beyond the one addressed but may also introduce new flaws (and, in fact, may fail to eradicate the addressed flaw). By replacing the measure with a directed graph we achieve two main effects:

1. Both the set of objects Ω and every flaw f ∈ F can be entirely amorphous. That is, Ω does not need to have product form D_1 × D_2 × ⋯ × D_n, as in the work of Moser and Tardos [22], or any form of symmetry, as in the work of Harris and Srinivasan [14]. For example, Ω can be the set of all Hamiltonian cycles of a graph, a set of very high complexity.

2. The set of transformations for addressing a flaw f can differ arbitrarily among the different states σ ∈ f, allowing the actions to adapt to the “environment”. This is in sharp contrast with all past algorithmic versions of the LLL, where either no or very minimal adaptivity was possible. As we discuss in Section 4, this moves the Local Lemma from the Probabilistic Method squarely within the purview of Algorithm Design.
Concretely, for each σ ∈ Ω, let U(σ) = {f ∈ F : σ ∈ f}, i.e., U(σ) is the set of flaws present in σ. For each flaw f_i and each state σ ∈ f_i we require a set A(i, σ) ⊆ Ω that must contain at least one element other than σ, which we refer to as the set of possible actions for addressing f_i in state σ. To address f_i in state σ we select uniformly at random an element σ′ ∈ A(i, σ) and walk to state σ′, noting that possibly σ′ ∈ f_i. Our main point of departure is that now the set of actions for addressing a flaw in each state can depend arbitrarily on the state, σ, itself.
We represent the set of all possible state transformations as a multidigraph D on Ω formed as follows: for each state σ, for each flaw f_i ∈ U(σ), for each state σ′ ∈ A(i, σ), place an arc σ →^{f_i} σ′ in D, i.e., an arc labeled by the flaw being addressed. Thus, D may contain pairs of states σ, σ′ with multiple arcs from σ to σ′, each such arc labeled by a different flaw f_i, each such flaw having the property that moving to σ′ is one of the actions for addressing f_i at σ, i.e., σ′ ∈ A(i, σ). Since we require that the set A(i, σ) contains at least one element other than σ for every flaw f_i present in σ, we see that a vertex of D is a sink iff it is flawless.
We focus on digraphs D satisfying the following condition.
Atomicity.
D is atomic if for every flaw f_i and every state σ′ there is at most one arc incoming to σ′ labeled by f_i.
The purpose of atomicity is to capture “accountability of action”. In particular, note that if D is atomic, then every walk on D can be reconstructed from its final state and the sequence of labels on the arcs traversed, as atomicity allows one to trace the walk backwards unambiguously. To our pleasant surprise, in all applications we have considered so far we have found atomicity to be “a feature, not a bug”, serving as a very valuable aid in the design of flaws and actions, i.e., of algorithms. A fruitful way to think about atomicity is to consider the case where Ω and F have product structure over a set of variables, e.g., a Constraint Satisfaction Problem. In that case the following suffice to imply atomicity:

1. Each constraint (flaw) forbids exactly one joint value assignment to its underlying variables.

2. Each state transition modifies only the variables of the violated constraint (flaw) that it addresses.
Condition 1 expresses a purely syntactic requirement: compound constraints must be broken down into constituent parts akin to satisfiability constraints. So, for example, to encode graph colorability we must write one constraint (flaw) per edge per color. Decomposing constraints in this manner enables a uniform treatment at no additional cost. In many cases it is, in fact, strictly advantageous as it affords a more refined accounting of conflict between constraints. Condition 2, on the other hand, is a genuine restriction reflecting the idea of “focusing” introduced by Papadimitriou [23], i.e., that every state transformation should be the result of attempting to eradicate some specific flaw.
To see that Conditions 1 and 2 imply atomicity, imagine that there exist arcs σ_1 →^{f} σ′ and σ_2 →^{f} σ′, i.e., two state transformations addressing the same flaw f leading to the same state σ′. Since f must be present in both σ_1 and σ_2, Condition 1 implies that if σ_1 ≠ σ_2, then there exists at least one variable v not bound by f which takes different values in σ_1 and σ_2. In that case, though, Condition 2 implies that v will have the same value before and after each of the two transformations, leading to a contradiction.
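Atomicity is mechanical to verify on an explicitly given labeled digraph. A minimal sketch (the triple representation of arcs is our own choice):

```python
from collections import defaultdict

def is_atomic(arcs):
    """arcs: iterable of (src, label, dst) triples, where label is the
    index of the flaw being addressed.  D is atomic iff no state has two
    incoming arcs carrying the same flaw label."""
    incoming = defaultdict(set)          # (label, dst) -> set of sources
    for src, label, dst in arcs:
        incoming[(label, dst)].add(src)
    return all(len(srcs) <= 1 for srcs in incoming.values())
```

For instance, two distinct states both moving to `c` while addressing the same flaw violate atomicity, while arcs into the same state under different labels do not.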
Having defined the multidigraph D on Ω we will now define a digraph C on the set of flaws F, reflecting some of the structure of D.
Potential Causality.
For each arc σ →^{f_i} σ′ in D and each flaw f_j present in σ′, we say that f_i causes f_j if f_i = f_j or f_j is not present in σ. If D contains any arc in which f_i causes f_j, we say that f_i potentially causes f_j.
Potential Causality Digraph.
The digraph C of the potential causality relation, i.e., the digraph on {1, …, m} where i → j iff f_i potentially causes f_j, is called the potential causality digraph. The neighborhood of a flaw f_i is Γ(i) = {j : the arc i → j exists in C}.
In the interest of brevity we will call C the causality digraph, instead of the potential causality digraph. It is important to note that C contains an arc i → j if there exists even one state transition aimed at addressing f_i that causes f_j to appear in the new state. In that sense, C is a “pessimistic” estimator of causality (or, alternatively, a lossy compression of D). This pessimism is both the strength and the weakness of our approach. On one hand, it makes it possible to extract results about algorithmic progress without tracking the evolution of the state. On the other hand, it only gives good results when C can remain sparse even in the presence of such stringent arc inclusion. We feel that this tension is meaningful: maintaining the sparsity of C requires that the actions for addressing each flaw across different states are coherent with respect to the flaws they cause.
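Given an explicit listing of the labeled arcs of D and a flaw-membership predicate, the causality digraph can be computed directly from the definition above. A hedged sketch (all names are ours):

```python
def causality_digraph(arcs, present, flaw_indices):
    """arcs: (src, flaw_index, dst) triples of D.
    present(j, sigma): whether flaw j is present in state sigma.
    Arc i -> j is added if some transition addressing f_i leads to a
    state where f_j is present and f_j was absent before (or i == j)."""
    C = set()
    for src, i, dst in arcs:
        for j in flaw_indices:
            if present(j, dst) and (i == j or not present(j, src)):
                C.add((i, j))
    return C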
Without loss of generality (and to avoid certain trivialities), we can assume that C is strongly connected, implying Γ(i) ≠ ∅ for every i. To see this, let C_1, C_2, …, C_k be the strongly connected components of C and consider the DAG with vertices c_1, …, c_k, where for i ≠ j, c_i points to c_j iff there exist f ∈ C_i and g ∈ C_j such that the arc f → g exists in C. If we have a sufficient condition for finding flawless objects when the causality digraph is strongly connected, then we can take any source vertex c_i in the DAG and repeatedly address flaws in C_i until we reach a state σ in which no flaw of C_i is present, at which point we remove c_i from the DAG. If σ has other flaws, we select a new source vertex c_j and repeat the same idea continuing from σ. The actions that will be taken to address flaws in C_j will never introduce flaws in C_i, etc.
So far we have not discussed which flaw to address in each flawed state, demanding instead a non-empty set of actions for each flaw present in a state. We discuss the reason for this in Section 4.3. For now, suffice it to say that we consider algorithms which employ an arbitrary ordering π of F and in each flawed state σ address the greatest flaw according to π in a subset of U(σ).
Definition 1.
If π is any ordering of F, let max_π be the function mapping each non-empty subset of F to its greatest element according to π. We will sometimes abuse notation and, for a state σ, write max_π(σ) for max_π(U(σ)), and also write π(σ) for max_π(σ) when π is clear from context.
Definition 2.
Let D_π be the result of retaining for each state σ only the outgoing arcs of D with label max_π(σ).
The next definition reflects that since actions are selected uniformly, the number of actions available to address a flaw, i.e., the breadth of the “repertoire”, is important.
Amenability.
The amenability of a flaw f_i is

a_i = min_{σ ∈ f_i} |A(i, σ)| .   (4)
The amenability of a flaw f_i will be used to bound from below the amount of randomness consumed every time f_i is addressed. (The minimum in (4) is often inoperative, with |A(i, σ)| being the same for all σ ∈ f_i.)
3 Statement of Results
Our first result concerns the simplest case where, after choosing a single fixed permutation π of the flaws, in each flawed state σ the algorithm simply addresses the greatest flaw present in σ according to π, i.e., the algorithm is the uniform random walk on D_π.
Theorem 1.
If for every flaw ,
then for any ordering π of F and any σ_1 ∈ Ω, the uniform random walk on D_π starting at σ_1 reaches a sink within t_0 + s steps with probability at least 1 − 2^{−s}, where t_0 is an explicit quantity depending on |Ω| and U(σ_1).
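The walk of Theorem 1 is simple to state operationally. A hedged sketch (the interface — `present`, `actions`, and the permutation `pi` as a priority list — is our own; the paper specifies the walk abstractly on D_π):

```python
import random

def uniform_walk(sigma, present, actions, pi, max_steps=10**6, rng=random):
    """present(s): set of flaw indices present at state s (U(s)).
    actions(i, s): the non-empty action set A(i, s).
    pi: list of flaw indices; earlier entries are 'greater' flaws."""
    for _ in range(max_steps):
        flaws_here = present(sigma)
        if not flaws_here:
            return sigma                       # reached a sink: flawless
        i = min(flaws_here, key=pi.index)      # greatest flaw under pi
        sigma = rng.choice(sorted(actions(i, sigma)))  # uniform action
    raise RuntimeError("no sink reached within max_steps")
```

Each iteration addresses exactly one flaw, selecting among its actions uniformly at random, and stops as soon as no flaw is present.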
Theorem 1 has three features worth discussing, shared by all our further results below.
Arbitrary initial state. The fact that σ_1 can be arbitrary means that any foothold on Ω suffices to apply the theorem, without needing to be able to sample from Ω according to some measure. While sampling from Ω has generally not been an issue in existing applications of the LLL, as we discuss in Section 4, this has only been true precisely because the sets Ω and the measures considered have been highly structured.
Arbitrary number of flaws. The running time depends only on the number of flaws present in the initial state, |U(σ_1)|, not on the total number of flaws |F|. This has an implication analogous to the result of Haeupler, Saha, and Srinivasan [12] on core events: even when |F| is very large, e.g., superpolynomial in the problem’s encoding length, we can still get an efficient algorithm if we can show that |U(σ_1)| is small, e.g., by proving that in every state only polynomially many flaws may be present. This feature provides great flexibility in the design of flaws, as demonstrated in one of our applications, presented in Section 10.
Cutoff phenomenon. The bound on the running time is sharper than a typical high-probability bound, being instead akin to a mixing-time cutoff bound [7], wherein the distance to the stationary distribution drops from near 1 to near 0 in a very small number of steps past a critical point. In our setting, the walk first makes a bounded number of steps without any guarantee of progress, but from that point on every single step has constant probability of being the last step. While, pragmatically, a high-probability bound would be just as useful, the fact that our bound naturally takes this form suggests a potential deeper connection with the theory of Markov chains.
Theorem 1 follows from the following significantly more general result. We present the derivation of Theorem 1 from Theorem 2 in Section 6. Observe the similarity between the condition of Theorem 2 and the condition (1) of the general LLL, with Γ(i) replacing D(i).
Theorem 2 (Main result).
If there exist positive real numbers x_1, …, x_m, each less than 1, such that for every flaw f_i,

1/a_i ≤ x_i ∏_{j ∈ Γ(i)} (1 − x_j)   (5)

then for any ordering π of F and any σ_1 ∈ Ω, the uniform random walk on D_π starting from σ_1 reaches a sink within t_0 + s steps with probability at least 1 − 2^{−s}, where t_0 is an explicit quantity depending on |Ω|, U(σ_1), and the numbers x_i.
Remark 2.
In applications, typically, and .
3.1 Dense Neighborhoods
In a number of applications the subgraph induced by the neighborhood of each flaw in the causality graph contains several arcs. We improve Theorem 2 in such settings by employing a recursive algorithm. This has the effect that the flaw addressed in each step depends on the entire trajectory up to that point, not just the current state, i.e., the walk is non-Markovian. It is for this reason that we required a non-empty set of actions for every flaw present in a state, and why the definition of the causality digraph does not involve flaw choice. Specifically, for any ordering π of F and any σ_1 ∈ Ω, the recursive walk is the non-Markovian random walk on D that occurs by invoking procedure Eliminate below. Observe that if in line 8 we did not intersect with the neighborhood of the flaw being addressed, the recursion would be trivialized and the walk would be the uniform random walk on D_π. This is because the first time any “while” condition would be satisfied, causing the corresponding recursive call to return, would be when U(σ) = ∅.
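The pseudocode for Eliminate does not survive in this copy; the following is a hedged Python reconstruction inferred purely from the surrounding description (the nesting of "while" conditions, the crucial intersection with the neighborhood Γ(f)), and may differ in details from the paper's procedure:

```python
import random

def recursive_walk(sigma, present, actions, gamma, pi, rng=random):
    """present(s): U(s); actions(f, s): A(f, s); gamma(f): the causality
    neighborhood of flaw f; pi: priority list of flaw indices."""
    state = [sigma]

    def address(f):
        state[0] = rng.choice(sorted(actions(f, state[0])))
        while True:
            hot = present(state[0]) & gamma(f)   # the crucial intersection
            if not hot:
                return
            address(min(hot, key=pi.index))      # greatest flaw under pi

    while True:
        flaws_here = present(state[0])
        if not flaws_here:
            return state[0]                      # sink: flawless state
        address(min(flaws_here, key=pi.index))
```

Dropping `& gamma(f)` would make every inner "while" condition equivalent to `present(state[0])`, so no recursive call could return before U(σ) = ∅, collapsing the procedure to the uniform walk, exactly as noted above.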
Definition 3.
Let G be the undirected graph on F where f_i ∼ f_j iff both arcs i → j and j → i exist in the causality digraph C. For any S ⊆ {1, …, m}, let Ind(S) = {T ⊆ S : no two elements of T are adjacent in G}.
Observe that, trivially, the condition of Theorem 2 can be restated as requiring that for every flaw f_i,

∑_{S ⊆ {i} ∪ Γ(i)} ∏_{j ∈ S} ψ_j ≤ a_i ψ_i ,   (6)

where ψ_i = x_i / (1 − x_i) and, throughout, we use the convention that a product devoid of factors equals 1, i.e., ∏_{j ∈ ∅} ψ_j = 1.
Theorem 3.
If there exist positive real numbers ψ_1, …, ψ_m such that for every flaw f_i,

∑_{S ∈ Ind({i} ∪ Γ(i))} ∏_{j ∈ S} ψ_j ≤ a_i ψ_i   (7)

then for any ordering π of F and any σ_1 ∈ Ω, the recursive walk on D starting at σ_1 reaches a sink within t_0 + s steps with probability at least 1 − 2^{−s}, where t_0 is an explicit quantity depending on |Ω|, U(σ_1), and the numbers ψ_i.
Remark 3.
Remark 4.
Theorem 3 can be strengthened by introducing for each flaw f_i a permutation π_i of Γ(i) and replacing, in line 9 of the Recursive Walk, max_π with max_{π_i}. With this change, in (7) it suffices to sum only over sets S satisfying the following: if the subgraph of the causality digraph induced by S contains an arc j → k, then f_k precedes f_j in π_i. As such a subgraph cannot contain both j → k and k → j, this restriction can only shrink the summation.
3.2 A Left-Handed Algorithm
While Theorems 1–3 do not care about the flaw ordering π, inspired by the so-called Left-Handed version of the LLL introduced by Pegden [24], we give a condition under which the flaw order can be chosen in a provably beneficial way. This is done by organizing the flaws in an order akin to an elimination sequence. Specifically, the idea is to seek a permutation π and a “responsibility digraph” R, derived from the causality digraph C, so as to “shift responsibility” from flaws failing to satisfy condition (5) of Theorem 2, to flaws that have slack.
Definition 4.
For an ordered set of vertices, say that an arc f_i → f_j is forward if f_i precedes f_j in the order, and backward if f_j precedes f_i. Given a causality digraph C and a permutation π ordering the vertices of C, we say that R is a responsibility digraph for C with respect to π if:

1. Every forward arc and self-loop of C exists in R.

2. If a backward arc f_i → f_j of C does not exist in R, then for each f_k such that f_j → f_k exists in C, the arc f_i → f_k exists in R as well.
The neighborhood of a flaw f_i in a responsibility digraph R is Γ_R(i) = {j : the arc i → j exists in R}.
For any permutation π of F, any responsibility digraph R with respect to π, and any σ_1 ∈ Ω, the left-handed walk is the random walk induced on D by modifying the Recursive Walk as follows.
Theorem 4.
For any permutation π of F and any responsibility digraph R with respect to π, if there exist positive real numbers x_1, …, x_m, each less than 1, such that for every flaw f_i,

1/a_i ≤ x_i ∏_{j ∈ Γ_R(i)} (1 − x_j) ,

then for any σ_1 ∈ Ω, the left-handed walk on D starting at σ_1 reaches a sink within t_0 + s steps with probability at least 1 − 2^{−s}, where t_0 is an explicit quantity depending on |Ω|, U(σ_1), and the numbers x_i.
4 Prior Work: Comparison to Our Work
Besides dispensing with the need for Ω to have product structure (variables) or symmetry (permutations), our setting has two additional benefits.
4.1 State-Dependent Transformations
The LLL, framed as a result in probability, begins with a probability measure on the set of objects Ω. In terms of proving the existence of flawless objects, its value lies in that it delivers strong results even when the measure is chosen without any consideration of the flaws (bad events). Indeed, most LLL applications simply employ the uniform measure on Ω, a property that can render the LLL indistinguishable from magic. It is worth noting that in the presence of variables, the uniform measure is nothing but the product measure generated by sampling each variable according to the uniform measure on its domain.
All algorithmic versions of the LLL up to now can be seen as walks on Ω constrained by the measure. For product measures, i.e., in the setting of Moser and Tardos [22], this means that the only transformation allowed is resampling all variables of a bad event, with each variable resampled independently of all others, using the same distribution every time the variable is resampled, i.e., obliviously to the current state. The partial resampling framework of Harris and Srinivasan [13] refines this to allow resampling a subset of an event’s variables, but again only independently of one another and obliviously to the current state. Similarly for the uniform measure on permutations [14]: the permuted elements whose images form a violated constraint must be reshuffled in a very specific and state-oblivious way, mandated by consistency with the uniform measure.
In contrast, our framework dispenses with the measure on Ω altogether, allowing the set of transformations for addressing each flaw to depend arbitrarily on the current state. This has three distinct effects:

1. It allows us to deal with settings in which both the set of objects Ω and the set of flaws F are amorphous, as in the case of rainbow Hamilton cycles and rainbow perfect matchings, something not possible with any previous algorithmic LLL results.

2. In the case of permutations, where some structure is present, we derive the same main results as [14] with dramatically simpler proofs. Moreover, we have far greater freedom in the choice of algorithms, since there is no constraint imposed by some measure.

3. Finally, for the variable setting of Moser and Tardos [22] we gain “adaptivity to state”. This allows us to address one of the oldest and most vexing concerns about the LLL (see the survey of Szegedy [29]), exemplified by the LLL’s inability to establish the elementary fact that a graph with maximum degree Δ can be colored with Δ + 1 colors. Specifically, imagine that to recolor a monochromatic edge e we select an endpoint v of e arbitrarily and assign a new color to v. When the new color must be selected uniformly among all q colors, as mandated when using the uniform measure in the variable setting, the obliviousness of the choice necessitates the use of a large number of colors relative to Δ in order for new violations to become sufficiently rare for the method to terminate. Specifically, the LLL can only work when q is a constant factor larger than Δ. On the other hand, in our setting, the new color can be selected uniformly among the available colors for v, i.e., the colors not appearing in v’s neighborhood, by taking the set of actions to be precisely the set of states that result by assigning available colors to v. Thus, as soon as q ≥ Δ + 1, the causality digraph becomes empty and rapid termination follows trivially.
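The state-dependent recoloring walk just described can be sketched directly; the code below (all names are ours) picks a monochromatic edge, recolors one endpoint with a color absent from its neighborhood, and repeats. With q ≥ Δ + 1 colors the available set is never empty and no recoloring ever creates a new monochromatic edge, so the walk terminates:

```python
import random

def recolor_walk(adj, coloring, num_colors, rng=random):
    """adj: dict vertex -> set of neighbors; coloring: dict vertex -> color.
    Assumes num_colors >= max degree + 1."""
    def bad_edges(col):
        return [(u, v) for u in adj for v in adj[u]
                if u < v and col[u] == col[v]]
    while True:
        bad = bad_edges(coloring)
        if not bad:
            return coloring                 # proper coloring: flawless
        u, v = bad[0]                       # address a monochromatic edge
        used = {coloring[w] for w in adj[u]}
        avail = [c for c in range(num_colors) if c not in used]
        coloring[u] = rng.choice(avail)     # uniform over AVAILABLE colors
```

Each step removes every monochromatic edge at u and introduces none, so the number of bad edges strictly decreases, mirroring the empty causality digraph.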
4.2 Dependencies vs. Actions
Unlike the variable setting of Moser and Tardos [22], where the dependency relation between events is symmetric, our causality relation, similarly to the lopsided LLL, is not. We consider asymmetry a significant structural feature of our work since, as is well-known [29], the directed setting is strictly stronger than the undirected setting. For example, there exist systems of events that admit a lopsided dependence digraph sparser than any undirected dependence graph. Moreover, asymmetry is essential in our development of structured clause choice in the left-handed version of our theorem.
At a high level, our results capture the directedness of the lopsided LLL, but with the far more flexible causality digraph replacing the lopsided dependence digraph. Concretely, our framework replaces the limited negative dependence condition of the lopsided LLL, which can be highly nontrivial to establish [18], with limited causality under atomicity, a condition that is both significantly less restrictive and far easier to check. Moreover, as mentioned earlier and to our pleasant surprise, in all applications we have considered so far we have found atomicity to be a very valuable aid in the design of flaws and actions.
For example, in Section 8 we give the first efficient algorithm for finding rainbow Hamilton cycles in hypergraphs, guaranteed to exist by the non-constructive results of [9, 8]. When we tried to determine flaws and actions for this setting, to our delight we realized that we could use one of the main technical propositions of [8] as is, since it is equivalent to proving that for each flaw f and each state σ ∈ f there exists a set of actions A(f, σ) such that the corresponding digraph D is atomic. As [8] is completely independent of our work, we consider this “coincidence” a nice testament to the naturalness of atomicity.
In a different direction, in Section 10 we give an application regarding the Color-Blind Index of graphs. That setting highlights the importance of the directedness of the causality digraph and demonstrates how directedness readily enables the formulation of “obvious” flaws and actions. Finally, our Theorem 3 combines the benefits of directedness with the improvement of Bissacot et al., by restricting the summation to independent sets. For example, in Section 11 we show how Theorem 3 allows us to also give an efficient algorithm for Latin Transversals, matching the bound of [14]. (Theorem 3 can also benefit the application to rainbow matchings in Section 9, but we chose to use Theorem 1 to keep the exposition simple.)
While the main contribution of our framework lies in providing freedom in the design of the set of actions for addressing each flaw in each state (and thus going beyond the LLL), its main limitation is that we are restricted to performing uniform random walks on the corresponding directed graph D. That means, for example, that our framework does not capture applications of the variable setting in which the product measure is not uniform over the domain of each variable, whereas the variable setting of Moser and Tardos [22] captures these cases. We leave closing this gap as future work.
4.3 Flaw Selection
As mentioned earlier, in Theorems 1–3 the necessary condition is independent of the flaw order and, therefore, if the condition is met the algorithm reaches a sink quickly for every permutation π. As this is an unnecessarily luxurious conclusion, it is natural to try to sharpen the results by selecting the flaw order first, so that the causality digraph is the image of the (much) sparser D_π instead of D. However, since an arc i → j will exist in the causality digraph as long as there is even one transition addressing f_i that causes f_j, it is not at all clear that sparsifying D using a generic π helps significantly. At the same time, if there exists a “special” π that does help significantly, coming up with it is non-trivial. For example, in the setting of satisfiability, if c_i, c_j are clauses that share a variable x with opposite signs, then not having the arc c_i → c_j requires either that addressing c_i should never involve flipping x, cutting the number of actions by half, or finding a permutation π of the clauses such that in every state in which c_i is the greatest violated clause, c_j is satisfied by some variable other than x. The only non-trivial case we know where the latter can be done is when the formula is satisfiable by the pure literal heuristic.
As far as we know, the method by which a bad event (flaw) is selected in each step does not affect the performance of any of the algorithmic extensions of the LLL, even though in the setting of [22] this choice can be arbitrary. The only use we know of this freedom lies in enabling parallelization when Ω is a structured set, i.e., when Ω has product structure [22, 17, 5], or is a set of permutations [14]. Since we allow Ω to be completely amorphous, it is not readily clear how to approach parallelization in our setting.
Finally, we note that flaw choice in our framework is not really restricted to using a single permutation. For example, in the non-recursive setting, before beginning the walk we can select an arbitrary infinite sequence of permutations π_1, π_2, … of F and in the i-th step of the walk address the greatest flaw present according to π_i. If all the π_i are equal we are back to the single-permutation setting, while if, for example, each π_i is an independent uniformly random permutation, the algorithm addresses a uniformly random flaw present in each step. At the same time, we must make clear that our framework does not accommodate arbitrary flaw selection functions and, in fact, we do not see how to extend it beyond permutation-based choice. To keep the presentation of our results uniform (and compact) we have stated both Theorems 2 and 4 in terms of a single permutation. We do point out the one place in our proofs that changes (trivially) to handle multiple permutations.
5 Mapping Bad Trajectories to Forests
We prove Theorems 2–4 in three parts. In the first part, carried out in this section, we show how to represent each sequence of t steps that does not reach a sink as a forest with t vertices, where the forests have different characteristics for each of the walks of Theorems 2–4. Then, in Section 6, we state a general lemma for bounding the running time of different walks in terms of properties of their corresponding forests and show how it readily implies each of Theorems 2–4. Finally, in Section 7 we prove the lemma itself. On a first reading the reader may want to skip Section 5.3 (and, perhaps, also Section 5.2). The sections can be read later, in order, after the material of Section 5.1 has been absorbed.
In the following, to lighten notation, we will assume that σ_1 is fixed but arbitrary.
Definition 5.
A walk σ_1 →^{w_1} σ_2 →^{w_2} ⋯ →^{w_t} σ_{t+1} on D is called a trajectory. A trajectory is bad if it only goes through flawed states. Let Bad(σ_1) be the set of bad trajectories starting at σ_1.
Our first step is the same as Moser’s [21], generalized to the notion of atomicity. It amounts to defining an almost 1-to-1 map from bad trajectories to sequences of flaws. While the map is not 1-to-1, crucially, it becomes 1-to-1 with the addition of a piece of information whose size is independent of the length of the trajectory.
Definition 6.
If is a bad trajectory, the sequence , i.e., the sequence of flaws labeling the arcs , is the witness of .
Claim 1.
If is atomic, then the map from bad trajectories is one-to-one.
Proof.
The atomicity of implies that is the unique state in with an arc . Applying the same argument repeatedly recovers each earlier state, and thus the entire trajectory. ∎
Thus, is bounded by the number of possible witness sequences multiplied by .
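The reconstruction behind Claim 1 can be sketched as follows. All names here are illustrative: `arcs` maps a pair (flaw label, head state) to the tail state of the arc, and atomicity is precisely the statement that this map is well defined, i.e., each state has at most one incoming arc per flaw label.

```python
# A minimal sketch of the Claim 1 reconstruction on a toy atomic
# digraph. `arcs[(flaw, head)] = tail` encodes the unique arc labeled
# by `flaw` entering `head`; both the states and flaw names are
# hypothetical.

def reconstruct(final_state, witness, arcs):
    """Recover a bad trajectory from its final state and its witness,
    i.e., the sequence of flaws labeling its arcs, by walking backwards
    through the unique predecessors guaranteed by atomicity."""
    states = [final_state]
    for flaw in reversed(witness):
        states.append(arcs[(flaw, states[-1])])
    return list(reversed(states))

# Toy atomic digraph on states 0..3 with flaws "f" and "g":
arcs = {("f", 1): 0, ("g", 2): 1, ("f", 3): 2}
```

With this toy data, `reconstruct(3, ["f", "g", "f"], arcs)` recovers the trajectory `[0, 1, 2, 3]`, which is exactly why the number of bad trajectories is bounded by the number of witnesses times the number of possible final states.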
5.1 Forests of the Uniform Walk (Theorem 2)
Recall that for any ordering of the flaws we denote by the digraph that results from if at each state we only retain the outgoing arcs labeled by . To analyze the uniform random walk on we will represent witnesses as sequences of sets reflecting causality.
Let be the set of flaws “introduced” by the th step of the walk, where a flaw is said to “introduce itself” if it remains present after an action from is taken. Formally,
Definition 7.
Let . For , let .
Let comprise those flaws addressed in the course of the trajectory. Thus, , where comprises any flaws in that were eradicated “collaterally” by an action taken to address some other flaw, and comprises any flaws in that remained present in every subsequent state after their introduction without being addressed. Formally,
Definition 8.
The Break Sequence of a bad trajectory is , where for ,
Given we can determine inductively, as follows. Define , while for ,
(8) 
By construction, the set is guaranteed to contain . Since returns the greatest flaw in its input according to , it must be that . (If instead of we had a sequence of permutations , we would simply use to determine from .) We note that this is the only place we ever make use of the fact that the function is derived from an ordering of the flaws, thus guaranteeing that for every and , if then .
We next give another 1-to-1 map, mapping each Break Sequence to a vertex-labelled rooted forest. Specifically, the Break Forest of a bad trajectory has trees and vertices, each vertex labelled by an element of . To construct it we first lay down vertices as roots and then process the sets in order, each set becoming the progeny of an already existing vertex (empty sets, thus, giving rise to leaves).
Observe that even though neither the trees, nor the nodes inside each tree of the Break Forest are ordered, we can still reconstruct since the set of labels of the vertices in equals for all .
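The construction and reconstruction above can be sketched concretely. The FIFO "attach to the earliest unprocessed vertex" policy below is an illustrative choice, not the paper's exact rule; the text only requires that each set of the Break Sequence become the progeny of an already existing vertex (empty sets yielding leaves) and that the map be invertible from the label sets.

```python
from collections import deque

# Hedged toy sketch of the Break Forest: the first set labels the
# roots; each subsequent set becomes the progeny of the next vertex in
# order of creation. All flaw names are hypothetical.

def build_forest(break_sets):
    nodes = []                      # nodes[v] = [label, list of child ids]
    queue = deque()
    for f in break_sets[0]:         # lay down the roots
        queue.append(len(nodes))
        nodes.append([f, []])
    for bset in break_sets[1:]:     # attach each set to the next vertex
        v = queue.popleft()
        for f in bset:
            queue.append(len(nodes))
            nodes[v][1].append(len(nodes))
            nodes.append([f, []])
    return nodes

def recover_sets(nodes):
    """Invert build_forest: root labels first, then the children labels
    of each vertex in creation (= index) order."""
    child_ids = {c for _, kids in nodes for c in kids}
    roots = [v for v in range(len(nodes)) if v not in child_ids]
    sets = [[nodes[v][0] for v in roots]]
    for v in range(len(nodes)):
        sets.append([nodes[c][0] for c in nodes[v][1]])
    return sets
```

For example, `recover_sets(build_forest(B)) == B` for the toy sequence `B = [["a", "b"], ["c"], [], ["d"], []]`, mirroring the observation that the vertex label sets alone determine the Break Sequence.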
5.2 Forests of the Recursive Walk (Theorem 3)
We will represent each bad trajectory, , of the Recursive Walk as a vertex-labeled unordered rooted forest, having one tree per invocation of procedure address by procedure eliminate. Specifically, to construct the Recursive Forest we add a root vertex per invocation of address by eliminate and one child to every vertex for each (recursive) invocation of address that it makes. As each vertex corresponds to an invocation of address (step of the walk) it is labeled by the invocation’s flaw-argument. Observe now that (the invocations of address corresponding to) both the roots of the trees and the children of each vertex appear in in their order according to . Thus, given the unordered rooted forest we can order its trees and the progeny of each vertex according to and recover as the sequence of vertex labels in the preorder traversal of the resulting ordered rooted forest.
Recall the definition of graph on from Definition 3. We will prove that the flaws labeling the roots of a Recursive Forest are independent in and that the same is true for the flaws labelling the progeny of every vertex of the forest. To do this we first prove the following.
Proposition 1.
If address() returns at state , then .
Proof.
Let be any state subsequent to the address() invocation. If any flaw in is present at , the “while” condition in line 8 of the Recursive Walk prevents address() from returning. On the other hand, if is present in , then there must have existed an invocation address(), subsequent to invocation address(), wherein addressing caused . Consider the last such invocation. If is the state when this invocation returns, then , for otherwise the invocation could not have returned, and by the choice of invocation, is not present in any subsequent state between and . ∎
Let denote the argument of the th invocation of address by eliminate. By Proposition 1, is a decreasing sequence of sets. Thus, the claim regarding the root labels follows trivially: for each , the flaws in are not present in and, therefore, are not present in , for any . The proof for the children of each node is essentially identical. If a node corresponding to an invocation address() has children corresponding to (recursive) invocations with arguments , then the sequence of sets is decreasing. Thus, the flaws in are not present in and, therefore, not present in , for any .
5.3 Forests of the Left-Handed Walk (Theorem 4)
Recall that is an arbitrary permutation of and that the Left-handed Walk is the Recursive Walk modified by replacing with in line 8, where is a responsibility graph for with respect to . We map the bad trajectories of the Left-Handed Walk into vertex-labeled unordered rooted forests, exactly as we did for the bad trajectories of the Recursive Walk, i.e., one tree per invocation of address by eliminate, one child per recursive invocation of address, all vertices labeled by the flaw-argument of the invocation. The challenge for the Left-handed Walk is to prove that the labels of the roots are distinct and, similarly, that the labels of the children of each node are distinct. (For Break Forests both properties were true automatically; for Recursive Forests we established the stronger property that each of these sets of flaws is independent.) To do this we first prove the following analogue of Proposition 1.
Definition 9.
Let denote the set of flaws strictly greater than according to . For a state and a flaw , let .
Proposition 2.
If address returns at state , then and .
Proof.
The execution of address generates a recursion tree, each node labeled by its flaw-argument. Thus, the root is labelled by and each child of the root is labelled by a flaw in . Let . For a state , let be the set of flaws in that are present in . We claim that if and address terminates at , then . This suffices to prove the proposition, as:

By the claim, any flaw in must be introduced by the action taken by the original invocation address. Thus, .

All flaws in introduced by are in , since contains all forward edges and self-loops of . Thus, . In particular, can only be present in if .

No flaw in can be present in since address returned at .
To prove the claim, consider the recursion tree of address. If and , then there has to be a path from the root of the recursion tree of address to a node such that but for each . To see this, notice that since was absent in but is present in , it must have been introduced by some flaw addressed during the execution of address. But if belonged to the neighborhood with respect to of any of the flaws on the path from the root to , the algorithm would not have terminated. However, such a path cannot exist, as it would require all of the following to be true, violating the definition of responsibility digraphs (let for notational convenience): (i) , (ii) , (iii) , and (iv) . ∎
To establish the distinctness of the root labels, observe that each time procedure eliminate is invoked at a state , by definition of , we have . By Proposition 2, if the invocation returns at state , then neither nor any greater flaws are present in . Therefore, eliminate invokes address at most once for each . To see the distinctness of the labels of the children of each node, consider an invocation of address(). Whenever this invocation recursively invokes address(), where , by definition of , every flaw in is absent from . By Proposition 2, whenever each such invocation returns neither nor any of the flaws in are present implying that address() invokes address() at most once for each .
6 A General Forest Lemma and Proof of Theorems 1–4
Recall that we are considering random walks on the multidigraph on which has an arc for each , flaw , and . Recall also that the different walks of Theorems 2–4 differ only in which flaw to address among those present in the current state . Having chosen to address a flaw , all three walks proceed in exactly the same manner, selecting the next state uniformly at random. In Section 5 we saw how to map the bad trajectories of the different walks into unordered rooted forests so that, given a trajectory’s forest and final state, we can reconstruct it.
Next we will formulate and prove a general tool for bounding the running time of different walks on .
Lemma 1 (Witness Forests).
Consider any random walk on which (i) in every flawed state , after choosing (arbitrarily) which flaw to address, selects the next state uniformly at random, and (ii) whose bad trajectories can be mapped into unordered rooted forests satisfying the following properties, so that given a trajectory’s forest we can reconstruct the sequence of flaws addressed along the trajectory:

Each vertex of the forest is labeled by a flaw .

The flaws labeling the roots of the forest are distinct and, as a set, belong in the set .

The flaws labeling the children of each vertex are distinct.

If a vertex is labelled by flaw , the labels of its children, as a set, belong in the set .
If there exist positive real numbers such that for every flaw ,
then for any , a walk started at reaches a sink within steps with probability at least , where , and
6.1 Proof of Theorems 2–4 from Lemma 1
6.2 Proof of Theorem 1 from Theorem 2
Let be the least common multiple of the integers . Let . Observe that since for every , and that for any set ,
(9) 
7 Proof of Lemma 1
7.1 Versions of Flaws
Recall that we are considering random walks on the multidigraph on which has an arc for each , flaw , and . For the proof it will be convenient to transform to another multidigraph as described below. The transformation is trivial from an algorithmic point of view, but helps with the eventual counting. Let be the least common multiple of the integers .
To form the multidigraph we replace each arc in with arcs from to , carrying labels . We refer to each such label as a version of flaw . To move in from a state , exactly as in , the walk first determines which flaw to address and then chooses uniformly at random. The only difference is that, having done so, the walk now also consumes an additional amount of randomness to “choose a version” of , i.e., to choose one of the arcs from to . Thus, the probability distribution on sequences of states of the walk in is identical to the one in (indeed the two walks can be coupled so that the sequences are always equal).
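A single step of the versioned walk can be sketched as below. The state space and action sets are toy assumptions; the point of the construction is that splitting each arc into (lcm / action-set size) versions makes every step a uniform choice among exactly lcm-many equally likely arcs, regardless of which flaw is addressed.

```python
import math
import random

# Toy action sets A(flaw, state); both flaws are available at state 0.
actions = {("f", 0): [1, 2], ("g", 0): [1, 2, 3]}

# l = least common multiple of the action-set sizes (here lcm(2, 3) = 6).
l = math.lcm(*(len(a) for a in actions.values()))

def versioned_step(state, flaw, rng):
    """One step of the walk on the versioned multidigraph: choose the
    next state uniformly, then consume extra randomness for a version."""
    acts = actions[(flaw, state)]
    nxt = rng.choice(acts)                   # uniform next state, as in D
    version = rng.randrange(l // len(acts))  # one of l/|A(f,s)| versions
    return nxt, version

# Every (next state, version) pair is one of exactly l equiprobable arcs:
assert len(actions[("f", 0)]) * (l // len(actions[("f", 0)])) == l
assert len(actions[("g", 0)]) * (l // len(actions[("g", 0)])) == l
```

Because every step is uniform over the same number of arcs, each versioned trajectory of a given length has the same probability, which is what makes the subsequent counting argument go through.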
Definition 10.
A trajectory on where is a version of the flaw addressed at the step is called a versioned trajectory. A versioned trajectory is bad if it goes only through flawed states. Let be the set of all versioned bad trajectories.
Observe that to move in from any flawed state to the next state the walk must select among
(10) 
possibilities, implying that every versioned bad trajectory has probability at most . Having a uniform upper bound on the probability as a function of length is precisely why we introduced versioned flaws.
To prove Lemma 1 we will give such that the probability that a versioned trajectory on is bad is exponentially small in . Per our discussion above, to prove this it suffices to prove that is exponentially small in for . Since is atomic, we can reconstruct any bad versioned trajectory from and the sequence of versioned flaws addressed. We are thus left to count the number of possible sequences of versioned flaws.
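The counting strategy of the preceding paragraphs can be summarized in the following hedged sketch, where the symbols are ours rather than the paper's: write \(\ell\) for the lcm of the action-set sizes and \(W_t\) for the number of possible sequences of \(t\) versioned flaws.

```latex
% Each versioned bad trajectory of length t has probability at most
% \ell^{-t}, and the reconstruction argument shows there are at most
% W_t of them (up to the choice of final state), so
\Pr[\text{the walk is still at a flawed state after } t \text{ steps}]
  \;\le\; W_t \cdot \ell^{-t}.
% Showing W_t grows slower than \ell^{t} makes this exponentially small.
```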
Per the hypothesis of Lemma 1, each bad trajectory on is associated with a rooted labeled witness forest with vertices such that given the forest we can reconstruct the sequence of flaws addressed along the trajectory. To count sequences of versioned flaws we relabel the vertices of the witness forest to carry not only the flaw addressed, but also the integer denoting its version (in the corresponding walk on ). We refer to the resulting object as the versioned witness forest. Recall that neither the trees, nor the nodes inside each tree in the witness forest are ordered. To facilitate counting we fix an arbitrary ordering of and map each versioned witness forest into the unique ordered forest that results from ordering the trees in the forest according to the labels of their roots and similarly ordering the progeny of each vertex according to (recall that both the flaws labeling the roots and the flaws labeling the children of each vertex are distinct).
Having induced this ordering for the purpose of counting, we will encode each versioned witness forest as a rooted, ordered ary forest with exactly nodes, where (recall that is the least common multiple of the integers ). In a rooted, ordered ary forest both the roots and the at most children of each vertex are ordered. We think of the root of as having reserved for each flaw a group of slots, where the th group of slots corresponds to the th largest flaw in according to . If is the th largest flaw in according to and its version in is , then we fill the th slot of the th group of slots (recall that the flaws labeling the roots of the witness forest are distinct and that, as a set, belong in the set ).
Each node of corresponds to a node of the witness forest and therefore to a flaw that was addressed at some point in the trajectory of the algorithm. Recall now that each node in the witness forest that is labelled by a flaw has children labelled by distinct flaws in . We thus think of each node of as having precisely slots reserved for each flaw (and, thus, at most reserved slots in total). For each whose version is , we fill the th slot reserved for and make it a child of in . Thus, from we can reconstruct the sequence of versioned flaws addressed by the algorithm.
At this point we could proceed and bound by the number of all ary ordered forests. Indeed, doing so would yield Theorem 2. Such a counting, though, would ignore the fact that the set of flaws labelling the progeny of a node labelled by is not an arbitrary element of but an element of . Thus, not every ordered ary forest is a possible versioned witness forest. To quantify this observation, we use ideas from [25]. Specifically, we introduce a branching process that produces only ordered ary forests that correspond to versioned witness forests and bound by analyzing it. Before describing the branching process, we introduce some conventions and definitions regarding versions of flaws:

For each we will denote by the set formed by replacing each by its versions . For example, contains every version of every flaw.

For each flaw , we define to be the set that results by replacing each by the