An efficient strongly connected components algorithm
in the fault tolerant model
Abstract
In this paper we study the problem of maintaining the strongly connected components of a graph in the presence of failures. In particular, we show that given a directed graph G = (V, E) with n = |V| and m = |E|, and an integer value k ≥ 1, there is an algorithm that computes in O(2^k · n log^2 n) time, for any set F of size at most k, the strongly connected components of the graph G \ F. The running time of our algorithm is almost optimal, since the time for outputting the SCCs of G \ F is at least Ω(n). The algorithm uses a data structure that is computed in a preprocessing phase in polynomial time and is of size O(2^k · n^2).
Our result is obtained using a new observation on the relation between strongly connected components (SCCs) and reachability. More specifically, one of the main building blocks in our result is a restricted variant of the problem in which we only compute the strongly connected components that intersect a certain path. Restricting our attention to a path allows us to implicitly compute reachability between the path vertices and the rest of the graph in time that depends logarithmically rather than linearly on the size of the path. This new observation alone, however, is not enough, since we also need an efficient way to cover the strongly connected components using paths. For this purpose we use classical techniques such as the heavy path decomposition of Sleator and Tarjan [29] and the classical Depth-First-Search (DFS) algorithm. Although these are by now standard techniques, we are not aware of any usage of them in the context of dynamic maintenance of SCCs. Therefore, we expect that our new insights and mixture of new and old techniques will be of independent interest.
1 Introduction
Computing the strongly connected components (SCCs) of a directed graph G = (V, E), where |V| = n and |E| = m, is one of the most fundamental problems in computer science. There are several classical algorithms that compute the SCCs in O(m + n) time and are taught in any standard undergraduate algorithms course [9].
In this paper we study the following natural variant of the problem in dynamic graphs. What is the fastest algorithm to compute the SCCs of G \ F, where F is any set of at most k edges or vertices? The algorithm can use a polynomial size data structure computed for G in polynomial time during a preprocessing phase.
The main result of this paper is:
Theorem 1.1
There is an algorithm that computes the SCCs of G \ F, for any set F of at most k edges or vertices, in O(2^k · n log^2 n) time. The algorithm uses a data structure of size O(2^k · n^2) computed for G in polynomial time during a preprocessing phase.
Since the time for outputting the SCCs of G \ F is at least Ω(n), the running time of our algorithm is optimal (up to a polylogarithmic factor) for any fixed value of k.
This dynamic model is usually called the fault tolerant model, and its most important parameter is the time that it takes to compute the output in the presence of faults. It is an important theoretical model, as it can be viewed as a restriction of the deletion-only (decremental) model, in which edges (or vertices) are deleted one after another and queries are answered between deletions. The fault tolerant model is especially useful in cases where the worst case update time in the more general decremental model is high.
There is a wide literature on the problem of decremental SCCs. Recently, in a major breakthrough, Henzinger, Krinninger and Nanongkai [18] presented a randomized algorithm with total update time O(m · n^{0.9+o(1)}) and broke the barrier of O(mn) for the problem. Even more recently, Chechik et al. [7] obtained an improved total running time of Õ(m√n).
However, these algorithms, and in fact all the previous algorithms, have an Ω(m) worst case update time for a single edge deletion. This is not a coincidence. Recent developments in conditional lower bounds by Abboud and V. Williams [1] and by Henzinger, Krinninger, Nanongkai and Saranurak [19] showed that unless a major breakthrough happens, the worst case update time of a single operation in any algorithm for decremental SCCs is Ω(m^{1−o(1)}). Therefore, in order to obtain further theoretical understanding of the problem of decremental SCCs, and in particular of the worst case update time, it is only natural to focus on the restricted fault tolerant model.
In the past decade several researchers used the fault tolerant model to study the worst case update time per operation for dynamic connectivity in undirected graphs. Pǎtraşcu and Thorup [26] presented connectivity algorithms that support edge deletions in this model. Their result was improved by the recent polylogarithmic worst case update time algorithm of Kapron, King and Mountjoy [21]. Duan and Pettie [13, 14] used this model to obtain connectivity algorithms that support vertex deletions.
In directed graphs, very recently, Georgiadis, Italiano and Parotsidis [16] considered the problem of SCCs, but only for a single edge or a single vertex failure, that is, k = 1. They showed that it is possible to compute the SCCs of G \ {e} for any edge e (or of G \ {v} for any vertex v) in O(n) time using a data structure of size O(n) that was computed for G in a preprocessing phase. Our result is the first general result for an arbitrary fixed k. This comes with the price of an extra 2^k log^2 n factor in the running time, a slower preprocessing time and a larger data structure. In [16], Georgiadis, Italiano and Parotsidis also considered the problem of answering strong connectivity queries after one failure. They show a construction of an O(n) size oracle that can answer in constant time whether any two given vertices of the graph are strongly connected after the failure of a single edge or a single vertex.
In a recent result [2] we considered the problem of finding a sparse subgraph that preserves single source reachability. More specifically, given a directed graph G and a vertex s, a subgraph H of G is said to be a k-Fault Tolerant Reachability Subgraph (k-FTRS) for (G, s) if for any set F of at most k edges (or vertices), a vertex v is reachable from s in G \ F if and only if v is reachable from s in H \ F. In [2] we proved that there exists a k-FTRS with at most 2^k · n edges.
Using the FTRS structure, it is relatively straightforward to obtain a data structure that, for any pair of vertices u, v ∈ V and any set F of size at most k, answers in O(2^k · n) time queries of the form:
“Are u and v in the same SCC of G \ F?”
For every vertex w ∈ V, let FTRS(w) and FTRS^R(w) denote a k-FTRS with respect to source w in G and in the reverse graph G^R, respectively. The data structure consists of FTRS(w) and FTRS^R(w) for every w ∈ V. It is easy to see that u and v are in the same SCC of G \ F if and only if v is reachable from u in G \ F and u is reachable from v in G \ F. So the query can be answered by checking, using graph traversals, whether v is reachable from u in FTRS(u) \ F and whether v is reachable from u in FTRS^R(u) \ F (with the edges of F reversed). The cost of these two graph traversals is O(2^k · n), as each of the two subgraphs has at most 2^k · n edges. The size of the data structure is O(2^k · n^2).
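For concreteness, the query procedure above can be sketched in a few lines, with each prestored FTRS given as an adjacency-list dictionary. This is an illustrative rendering, not the paper's construction; the function names are ours.

```python
from collections import deque

def reachable(adj, src, dst, failed):
    """BFS from src in the graph `adj`, skipping the edges in `failed`.

    `adj` maps a vertex to its list of out-neighbours; `failed` is a set
    of (u, v) pairs.  Returns True iff dst is reached."""
    seen = {src}
    queue = deque([src])
    while queue:
        x = queue.popleft()
        if x == dst:
            return True
        for y in adj.get(x, []):
            if (x, y) not in failed and y not in seen:
                seen.add(y)
                queue.append(y)
    return False

def same_scc_query(ftrs_u_out, ftrs_u_in, u, v, failed):
    """u and v are strongly connected in G \\ F iff v is reachable from u
    in FTRS(u) \\ F and u is reachable from v, i.e. v is reachable from u
    in FTRS^R(u) with the failed edges reversed."""
    failed_rev = {(b, a) for (a, b) in failed}
    return (reachable(ftrs_u_out, u, v, failed) and
            reachable(ftrs_u_in, u, v, failed_rev))
```

Since every prestored subgraph has at most 2^k · n edges, each BFS costs O(2^k · n), matching the query bound above.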
This problem, however, is much easier, since the vertices in the query reveal which two FTRS we need to scan. In the challenge that we address in this paper, all the SCCs of G \ F, for an arbitrary set F, have to be computed. Using the same data structure as before, it is not clear a priori which of the FTRS we need to scan.
We note that our algorithm uses the FTRS, which seems to be an essential tool, but it is far from being a sufficient one, and more involved ideas are required. As an example of such a relation between a new result and an old tool, one can take the deterministic algorithm of Łącki [23] for decremental SCCs, in which the classical algorithm of Italiano [20] for decremental reachability trees in directed acyclic graphs is used. The main contribution of Łącki [23] is a new graph decomposition that made it possible to use Italiano’s algorithm [20] efficiently.
1.1 An overview of our result
We obtain our O(2^k · n log^2 n) time algorithm using several new ideas. One of the main building blocks is, surprisingly, the following restricted variant of the problem.
Given any set F of at most k failed edges and any path P which is intact in G \ F, output all the SCCs of G \ F that intersect P (i.e. contain at least one vertex of P).
To solve this restricted version, we implicitly solve the problem of reachability from x (and to x) in G \ F, for every vertex x of P. Though it is trivial to do so in O(2^k · n · |P|) time using the FTRS of each vertex on P, our goal is to perform this computation in O(2^k · n log |P|) time, that is, in a running time that depends logarithmically rather than linearly on the length of P. For this we use a careful insight into the structure of reachability between P and the rest of the graph. Specifically, if a vertex v is reachable from x, then v is also reachable from any predecessor of x on P, and if v is not reachable from x, then it cannot be reachable from any successor of x as well. Let z be any vertex on P, and let R be the set of vertices reachable from z in G \ F. Then we can split P at z to obtain two paths: a prefix P1 ending just before z and a suffix P2 starting at z. We already know that every vertex of P1 has a path to z and hence to all of R, so for the vertices of R it only remains to resolve reachability from the vertices of P2. Also, the set of vertices reachable from any vertex on P2 must be a subset of R, so for P2 we only need to focus on the set R. This suggests a divide-and-conquer approach which, along with some more insight into the structure of FTRS, helps us to design an efficient algorithm for computing all the SCCs that intersect P.
In order to use the above result to compute all the SCCs of G \ F, we need a clever partitioning of the graph into a set of vertex disjoint paths. A Depth-First-Search (DFS) tree plays a crucial role here, as follows. Let P be any path from the root to a leaf in a DFS tree T. If we compute the SCCs intersecting P and remove them, then each of the remaining SCCs must be contained in one of the subtrees hanging from the path P. So to compute the remaining SCCs we do not need to work on the entire graph; instead, we need to work on each subtree separately. In order to pursue this approach efficiently, we need to select the path P in such a manner that the subtrees hanging from it are of small size. The heavy path decomposition of Sleator and Tarjan [29] helps to achieve this objective.¹ (¹We note that the heavy path decomposition was also used in the fault tolerant model in the STACS’10 paper [22], but in a completely different way and for a different problem.)
Our algorithm and data structure can be extended to support insertions as well. More specifically, we can report the SCCs of a graph that is updated by both insertions and deletions of edges, in the same running time.
1.2 Related work
The problem of maintaining the SCCs of a graph was studied in the decremental model. In this model the goal is to maintain the SCCs of a graph whose edges are being deleted by an adversary. The main parameters in this model are the worst case update time per edge deletion and the total update time from the first edge deletion until the last. Frigioni et al. [15] presented an algorithm that has an expected total update time of O(mn) if all the deleted edges are chosen at random. Roditty and Zwick [27] presented a Las-Vegas algorithm with an expected total update time of O(mn). Łącki [23] presented a deterministic algorithm with a total update time of O(mn), and thus solved the open problem posed by Roditty and Zwick in [27]. However, the worst case update time per a single edge deletion of his algorithm can be as large as its total update time. Roditty [28] improved the worst case update time of a single edge deletion to O(m). Recently, in a major breakthrough, Henzinger, Krinninger and Nanongkai [18] presented a randomized algorithm with O(m · n^{0.9+o(1)}) total update time. Very recently, Chechik et al. [7] obtained a total update time of Õ(m√n). Note that all the previous works on decremental SCCs have an Ω(m) worst case update time, whereas our result directly implies an Õ(n) worst case update time as long as the total number of updates is constant.
Most of the previous work in the fault tolerant model is on variants of the shortest path problem. Demetrescu, Thorup, Chowdhury and Ramachandran [10] designed an O(n^2 log n) size data structure that can report, for any u, v, x ∈ V, the distance from u to v avoiding x in O(1) time. Bernstein and Karger [3] improved the preprocessing time of [10] to Õ(mn). Duan and Pettie [12] designed such a data structure for two vertex faults, of size Õ(n^2). Weimann and Yuster [31] considered the question of optimizing the preprocessing time using Fast Matrix Multiplication (FMM) for graphs with integer weights from the range [−M, M]. Grandoni and Vassilevska Williams [17] improved the result of [31], based on a novel algorithm for computing all the replacement paths from a given source vertex in the same running time as solving APSP in directed graphs.
For the problem of single source shortest paths, Parter and Peleg [25] showed that there is a subgraph with O(n^{3/2}) edges that supports one fault. They also showed a matching lower bound. Recently, Parter [24] extended this result to two faults with O(n^{5/3}) edges for undirected graphs. She also showed a lower bound of Ω(n^{5/3}).
Baswana and Khanna [22] showed that there is a subgraph with O(n log n) edges that preserves the distances from s up to a multiplicative stretch of 3 upon failure of any single vertex. For the case of edge failures, sparse fault tolerant subgraphs exist for general k. Bilò et al. [4] showed that we can compute a subgraph with O(kn) edges that preserves distances from s up to a multiplicative stretch of 2k + 1 upon failure of any k edges. They also showed that we can compute a compact data structure that is able to report these stretched distances from s efficiently.
1.3 Organization of the paper
We describe notations, terminologies, some basic properties of DFS, heavy path decomposition, and FTRS in Section 2. In Section 3, we describe the fault tolerant algorithm for computing the strongly connected components intersecting any given path. We present our main algorithm for handling k failures in Section 4. In Section 5, we show how to extend our algorithm and data structure to also support insertions.
2 Preliminaries
Let G = (V, E) denote the input directed graph on n = |V| vertices and m = |E| edges. We assume that G is strongly connected; if that is not the case, then we may apply our result to each strongly connected component of G. We first introduce some notations that will be used throughout the paper.

- T : A DFS tree of G.
- T(v) : The subtree of T rooted at a vertex v.
- path(u, v) : The tree path from u to v in T. Here u is assumed to be an ancestor of v.
- depth(v) : The depth of vertex v in T.
- G^R : The graph obtained by reversing all the edges in graph G.
- G(S) : The subgraph of a graph G induced by the vertices of subset S.
- G \ F : The graph obtained by deleting the edges in set F from graph G.
- In(v, G) : The set of all incoming edges to v in graph G.
- P[a, b] : The subpath of path P from vertex a to vertex b, assuming a and b are in P and a precedes b.
- P1 :: P2 : The path formed by concatenating paths P1 and P2. Here it is assumed that the last vertex of P1 is the same as the first vertex of P2.
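The graph notations above can be made concrete with a few helpers over adjacency-list dictionaries. This encoding is illustrative only and is not taken from the paper.

```python
def reverse_graph(adj):
    """G^R: reverse every edge of G."""
    radj = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            radj.setdefault(v, []).append(u)
    return radj

def induced_subgraph(adj, S):
    """G(S): keep only the edges with both endpoints in S."""
    return {u: [v for v in adj.get(u, []) if v in S] for u in S}

def delete_edges(adj, F):
    """G \\ F: remove the edge set F."""
    return {u: [v for v in nbrs if (u, v) not in F] for u, nbrs in adj.items()}
```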
Our algorithm for computing SCCs in a fault tolerant environment crucially uses the concept of a fault tolerant reachability subgraph (FTRS), which is a sparse subgraph that preserves reachability from a given source vertex even after the failure of at most k edges in G. A k-FTRS is formally defined as follows.
Definition 2.1 (k-FTRS)
Let s ∈ V be any designated source. A subgraph H of G is said to be a k-Fault Tolerant Reachability Subgraph (k-FTRS) of G with respect to s if for any subset F ⊆ E of at most k edges, a vertex v ∈ V is reachable from s in G \ F if and only if v is reachable from s in H \ F.
In [2], we present the following result for the construction of a k-FTRS for any k ≥ 1.
Theorem 2.1 ([2])
There exists a polynomial time algorithm that, for any given integer k ≥ 1 and any given directed graph G on n vertices, m edges and a designated source vertex s, computes a k-FTRS of G with respect to s with at most 2^k · n edges. Moreover, the in-degree of each vertex in this k-FTRS is bounded by 2^k.
Our algorithm will require the knowledge of the vertices reachable from a vertex as well as the vertices that can reach it. So, for any source vertex w, we denote by FTRS(w) a k-FTRS of G with respect to w, and by FTRS^R(w) a k-FTRS of the reverse graph G^R with respect to w.
The following lemma states that the subgraph of a k-FTRS induced by a vertex set A can serve as a k-FTRS for the induced subgraph G(A), given that A satisfies certain properties.
Lemma 2.1
Let s be any designated source and let H be a k-FTRS of G with respect to s. Let A be a subset of V containing s such that every path from s to any vertex in A is contained in G(A). Then H(A) is a k-FTRS of G(A) with respect to s.
Proof:
Let F be any set of at most k failing edges, and let v be any vertex reachable from s in
G(A) \ F. Since v is reachable from s in G \ F and H is a k-FTRS of G,
v must be reachable from s in H \ F as well. Let Q be any path from s to v
in H \ F. Then (i) all the edges of Q are present in H, and (ii) none of the edges of F
appears on Q. Since it is given that every path from s to any vertex in A is contained
in G(A), the path Q must be present in G(A). So every vertex of Q belongs to A. This fact,
combined with the inferences (i) and (ii), implies that Q must be present in H(A) \ F.
Hence H(A) is a k-FTRS of G(A) with respect to s.
The next lemma is an adaptation of Lemma 10 from Tarjan’s classical paper on Depth First Search [30] to our needs.
Lemma 2.2
Let T be a DFS tree of G. Let u, v be two vertices without any ancestor-descendant relationship in T, and assume that u is visited before v in the DFS traversal of G corresponding to the tree T. Every path from u to v in G must pass through a common ancestor of u and v in T.
Proof:
Let us assume on the contrary that there exists a path Q from u to v in G that does not
pass through any common ancestor of u, v in T. Let w be the LCA of u, v in T, and let w′ be
the child of w lying on path(w, u). See Figure 1.
Let A be the set of vertices that are visited by the DFS traversal before it finishes the subtree T(w′)
(note that this set contains all of T(w′)), and let B be the set of vertices that are visited only after the traversal of T(w′) is finished.
Thus u belongs to the set A, and v belongs to the set B.
Let x be the last vertex on Q that lies in the set A, and let y be the successor of x on the path Q.
Since none of the vertices of Q is a common ancestor of u and v, the vertex x is not a proper ancestor of w′, and hence the DFS traversal finishes x no later than it finishes the subtree T(w′).
So the following relationship must hold true:
y is visited for the first time only after the DFS traversal of x is finished.
But such a relationship is not possible, since all the out-neighbors of x must be visited before the
DFS traversal finishes the vertex x. Hence we get a contradiction.
2.1 A heavy path decomposition
The heavy path decomposition of a tree was designed by Sleator and Tarjan [29] in the context of dynamic trees. This decomposition has been used in a variety of applications since then. Given any rooted tree T, this decomposition splits T into a set 𝒫 of vertex disjoint paths with the property that any path from the root to a leaf in T can be expressed as a concatenation of at most log n subpaths of paths in 𝒫. The decomposition is carried out as follows. Starting from the root, we follow the path downward such that once we are at a node, say v, the next node traversed is the child of v whose subtree is of maximum size, where the size of a subtree is the number of nodes it contains. We terminate upon reaching a leaf. Let P be the path obtained in this manner. If we remove P from T, we are left with a collection of subtrees, each of size at most n/2. Each of these subtrees hangs from P through an edge in T. We carry out the decomposition of these subtrees recursively. The following lemma is immediate from the construction of a heavy path decomposition.
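The construction just described can be sketched directly from the definition. The following is an illustrative implementation over a tree given as a child-list dictionary, not code from the paper.

```python
def subtree_sizes(children, root):
    """Number of nodes in each subtree, via a top-down order and a
    bottom-up accumulation (avoids deep recursion)."""
    order = [root]
    for v in order:                      # list grows while we iterate: BFS order
        order.extend(children.get(v, []))
    size = {}
    for v in reversed(order):            # children are processed before parents
        size[v] = 1 + sum(size[c] for c in children.get(v, []))
    return size

def heavy_path_decomposition(children, root):
    """Split the rooted tree into vertex-disjoint heavy paths.  Any
    root-to-leaf path crosses O(log n) of the returned paths."""
    size = subtree_sizes(children, root)
    paths, stack = [], [root]
    while stack:
        v = stack.pop()
        path = [v]
        while children.get(v):
            # descend into the child with the largest subtree (heavy child)
            heavy = max(children[v], key=lambda c: size[c])
            for c in children[v]:
                if c != heavy:
                    stack.append(c)      # light children start new paths
            path.append(heavy)
            v = heavy
        paths.append(path)
    return paths
```

Each removed heavy path leaves hanging subtrees of at most half the size, which is exactly the property Lemma 2.3 below rests on.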
Lemma 2.3
For any vertex v, the number of paths in 𝒫 that start from either v or an ancestor of v in T is at most log n.
We now introduce the notion of ancestor path.
Definition 2.2
A path P1 ∈ 𝒫 is said to be an ancestor path of a path P2 ∈ 𝒫 if the first vertex of P1 is an ancestor of the first vertex of P2 in T.
In this paper, we describe the algorithm for computing the SCCs of the graph after any k edge failures. Vertex failures can be handled by simply splitting a vertex v into an edge (v_in, v_out), where the incoming and outgoing edges of v are directed into v_in and out of v_out, respectively.
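The standard vertex-splitting reduction can be sketched as follows; the (vertex, 'in'/'out') pair encoding is ours and purely illustrative.

```python
def split_vertices(edges, vertices):
    """Reduce vertex failures to edge failures: each vertex v becomes an
    internal edge ((v,'in'), (v,'out')); original edges (u, v) are rerouted
    from (u,'out') to (v,'in').  Failing vertex v then corresponds to
    failing the single edge ((v,'in'), (v,'out'))."""
    new_edges = [((v, 'in'), (v, 'out')) for v in vertices]
    for (u, v) in edges:
        new_edges.append(((u, 'out'), (v, 'in')))
    return new_edges
```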
3 Computation of SCCs intersecting a given path
Let F be a set of at most k failing edges, and let P = (x_1, …, x_t) be any path in G which is intact in G \ F. In this section, we present an algorithm that outputs in O(2^k · n log n) time the SCCs of G \ F that intersect P.
For each v ∈ V, let w1(v) be the vertex of P of minimum index (if it exists) that is reachable from v in G \ F. Similarly, let w2(v) be the vertex of P of maximum index (if it exists) that has a path to v in G \ F. (See Figure 2).
We start by proving certain conditions that must hold for a vertex v if its SCC in G \ F intersects P.
Lemma 3.1
For any vertex v ∈ V, the SCC that contains v in G \ F intersects P if and only if the following two conditions are satisfied.
(i) Both w1(v) and w2(v) are defined, and
(ii) Either w1(v) = w2(v), or w1(v) appears before w2(v) on P.
Proof: Consider any vertex v ∈ V. Let C be the SCC in G \ F that contains v, and assume C intersects P. Let a and b be the first and the last vertices of P, respectively, that are in C. Since a and v are in C, there is a path from v to a in G \ F. Moreover, v cannot reach a vertex of P that precedes a, since such a vertex would be in C as well, and this would contradict the definition of a. Therefore, w1(v) = a. Similarly, we can prove that w2(v) = b. Since a and b are defined to be the first and the last vertices of C on P, respectively, it follows that either a = b, or a precedes b on P. Hence conditions (i) and (ii) are satisfied.
Now assume that conditions (i) and (ii) are true.
The definition of w1(v) and w2(v) implies that there is a path from v
to w1(v), and a path from w2(v) to v. Also, condition (ii) implies that there is a path from w1(v)
to w2(v), namely a subpath of P. Thus v, w1(v) and w2(v) are in the same SCC, and this SCC intersects P.
The following lemma states the condition under which any two vertices lie in the same SCC, given that their SCCs intersect P.
Lemma 3.2
Let u, v be any two vertices in V whose SCCs in G \ F intersect P. Then u and v lie in the same SCC if and only if w1(u) = w1(v) and w2(u) = w2(v).
Proof:
In the proof of Lemma 3.1, we showed that if the SCC of a vertex v intersects P,
then w1(v) and w2(v) are precisely the first and the last vertices on P that lie in the SCC of v.
Since the SCCs form a partition of V, the vertices u and v lie in the same SCC if and only if
w1(u) = w1(v) and w2(u) = w2(v).
It follows from the above two lemmas that in order to compute the SCCs in G \ F that intersect P, it suffices to compute w1(v) and w2(v) for all the vertices v ∈ V. In fact, it suffices to focus on the computation of w2(·) only, since w1(·) can be computed in an analogous manner by just looking at the graph G^R. For each x_i ∈ P, let R_i denote the set of vertices reachable from x_i in G \ F; note that w2(v) is simply x_i for the maximum index i such that v ∈ R_i. One trivial approach is to compute each set R_i by performing a BFS or DFS traversal of the graph FTRS(x_i) \ F. Using this straightforward approach, it takes O(2^k · n · |P|) time to complete the task of computing w2(v) for every v ∈ V, while our target is to do so in O(2^k · n log n) time.
Observe the nested structure underlying the sets R_i, that is, R_t ⊆ R_{t−1} ⊆ ⋯ ⊆ R_1. Consider any index m. The nested structure implies for every v ∈ R_m that w2(v) must lie on the portion P[x_m, x_t]. Similarly, it implies for every v ∉ R_m that w2(v), if it exists, must lie on the portion P[x_1, x_{m−1}]. This suggests a divide-and-conquer approach to efficiently compute w2(·). We first compute the sets R_1 and R_t in O(2^k · n) time each. For each v ∉ R_1, we assign NULL to w2(v), as v is not reachable from any vertex on P; and for each v ∈ R_t we set w2(v) to x_t. For the vertices in the set R_1 \ R_t, w2(·) is computed by calling the function BinarySearch(R_1 \ R_t, 1, t−1). See Algorithm 1.
In order to explain the function BinarySearch, we first state an assertion that holds true for each recursive call of the function BinarySearch. We prove this assertion in the next subsection.
Assertion 1:
If BinarySearch(S, l, r) is called, then S is precisely the set of those vertices v whose w2(v) lies on the path P[x_l, x_r].
We now explain the execution of the function BinarySearch(S, l, r). If l = r, then we assign x_l to w2(v) for each v ∈ S, as justified by Assertion 1. Let us consider the case when l < r. In this case we first compute the index m = ⌈(l + r)/2⌉. Next we compute the set S1 consisting of all the vertices in S that are reachable from x_m in G \ F. This set is computed using the function Reach(x_m, S), which is explained later in Subsection 3.2. As follows from Assertion 1, w2(v) for each vertex v ∈ S must lie on the path P[x_l, x_r]. Thus, w2(v) for all v ∈ S1 must lie on the path P[x_m, x_r], and w2(v) for all v ∈ S \ S1 must lie on the path P[x_l, x_{m−1}]. So for computing w2(·) for the vertices in S1 and S \ S1, we invoke the functions BinarySearch(S1, m, r) and BinarySearch(S \ S1, l, m−1), respectively.
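The divide-and-conquer just described can be sketched as follows. This is an illustrative rendering, not the paper's Algorithm 1; the function `reach(i, S)` abstracts the function Reach (implemented in Subsection 3.2) as a black box returning the subset of S reachable from the i-th path vertex, and the returned dictionary stores, for each vertex, the index of the last path vertex that reaches it.

```python
def compute_w2(path, reach):
    """For every vertex, compute the maximum index i (0-based) such that
    path[i] reaches it in G \\ F.  `reach(i, S)` must return the subset of
    S reachable from path[i]; `reach(i, None)` means no restriction."""
    t = len(path)
    all_v = reach(0, None)              # R_1: vertices reachable from x_1
    last = reach(t - 1, all_v)          # R_t: vertices reachable from x_t
    w2 = {v: t - 1 for v in last}

    def binary_search(S, l, r):
        # Invariant (Assertion 1): S = vertices whose w2-index lies in [l, r].
        if not S:
            return
        if l == r:
            for v in S:
                w2[v] = l
            return
        m = (l + r + 1) // 2            # ceil((l + r) / 2)
        S1 = reach(m, S)                # vertices of S reachable from path[m]
        binary_search(S1, m, r)
        binary_search(S - S1, l, m - 1)

    binary_search(all_v - last, 0, t - 2)
    return w2                           # vertices reached by no path vertex are absent
```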
3.1 Proof of correctness of algorithm
In this section we prove that Assertion 1 holds for each call of the function BinarySearch. We also show how this assertion implies that w2(v) is correctly computed for every v ∈ V.
Let us first see how Assertion 1 implies the correctness of our algorithm. It follows from the description of the algorithm that for each vertex v ∈ R_1 \ R_t, the function BinarySearch(S, l, l) is eventually invoked for some set S containing v and some index l. Assertion 1 implies that S must be the set of all those vertices u such that w2(u) = x_l. As can be seen, the algorithm in this case correctly sets w2(u) to x_l for each u ∈ S.
We now show that Assertion 1 holds true in each call of the function BinarySearch. It is easy to see that Assertion 1 holds true for the first call BinarySearch(R_1 \ R_t, 1, t−1). Consider any intermediate recursive call BinarySearch(S, l, r), where l < r. It suffices to show that if Assertion 1 holds true for this call, then it also holds true for the two recursive calls that it invokes. Thus let us assume that S is the set of those vertices whose w2(·) lies on the path P[x_l, x_r]. Recall that we compute an index m lying between l + 1 and r, and find the set S1 consisting of all those vertices in S that are reachable from x_m. From the nested structure of the sets R_i, it follows that w2(v) for all v ∈ S1 must lie on the path P[x_m, x_r], and w2(v) for all v ∈ S \ S1 must lie on the path P[x_l, x_{m−1}]. That is, S1 is precisely the set of those vertices whose w2(·) lies on the path P[x_m, x_r], and S \ S1 is precisely the set of those vertices whose w2(·) lies on the path P[x_l, x_{m−1}]. Thus Assertion 1 holds true for the recursive calls BinarySearch(S1, m, r) and BinarySearch(S \ S1, l, m−1) as well.
3.2 Implementation of function Reach
The main challenge left now is to find an efficient implementation of the function Reach, which has to compute the vertices of its input set S that are reachable from a given vertex x_m in G \ F. The function Reach can be easily implemented by a standard graph traversal initiated from x_m in the graph FTRS(x_m) \ F (recall that FTRS(x_m) is a k-FTRS of x_m in G). This, however, will take O(2^k · n) time per call, which is not good enough for our purpose, as the total running time of BinarySearch in this case will become O(2^k · n · |P|). Our aim is to implement the function Reach in O(2^k · |S|) time. In general, for an arbitrary set S this might not be possible. This is because S might contain a vertex that is reachable from x_m only via paths whose intermediate vertices are not in S; therefore, the algorithm must explore edges incident to vertices that are not in S as well. However, the following lemma, which exploits Assertion 1, shows that in our case, as the call to Reach is made while running the function BinarySearch, we can restrict ourselves to the set S only.
Lemma 3.3
If BinarySearch(S, l, r) is called with l < r and m = ⌈(l + r)/2⌉, then for each path Q from x_m to a vertex of S in the graph G \ F, all the vertices of Q must be in the set S.
Proof:
Assertion 1 implies that S is precisely the set of those vertices in V which are
reachable from x_l but not reachable from x_{r+1} in G \ F.
Consider any vertex z on a path Q from x_m to some vertex y ∈ S in G \ F. Observe that z is reachable from x_l by the path P[x_l, x_m] :: Q[x_m, z].
Moreover, z is not reachable from x_{r+1}, because otherwise y would also be reachable from x_{r+1} (by following Q[z, y]),
which is not possible since y ∈ S. Thus the vertex z lies in the set S.
Lemma 3.3 and Lemma 2.1 imply that in order to find the vertices in S that are reachable from x_m, it suffices to perform a traversal from x_m in the graph (FTRS(x_m))(S ∪ {x_m}) \ F, that is, the subgraph of FTRS(x_m) induced by the vertices of S ∪ {x_m}, with the edges of F removed. Since the in-degree of every vertex in FTRS(x_m) is at most 2^k (Theorem 2.1), this induced subgraph has O(2^k · |S|) edges. Therefore, based on the above discussion, Algorithm 2 given below is an implementation of the function Reach that takes O(2^k · |S|) time.
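A minimal sketch of this restricted traversal is given below, assuming the prestored FTRS of the source is an adjacency-list dictionary. The names are illustrative and the code is not the paper's Algorithm 2.

```python
def restricted_reach(ftrs_adj, source, S, failed):
    """Vertices of S reachable from `source`, exploring only edges of the
    prestored FTRS of `source` whose endpoints lie in S ∪ {source} and
    which do not belong to the failed set F.  The work is proportional to
    the number of surviving edges, i.e. O(2^k |S|) when in-degrees are ≤ 2^k."""
    allowed = set(S) | {source}
    seen = {source}
    stack = [source]
    while stack:
        u = stack.pop()
        for v in ftrs_adj.get(u, []):
            if v in allowed and v not in seen and (u, v) not in failed:
                seen.add(v)
                stack.append(v)
    return seen & set(S)
```

By Lemma 3.3, every relevant path stays inside S, so restricting the traversal to `allowed` loses no reachable vertex of S.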
The following lemma analyzes the running time of BinarySearch.
Lemma 3.4
The total running time of BinarySearch is O(2^k · n log n).
Proof:
The time complexity of BinarySearch is dominated by the total time taken by
all the invocations of the function Reach.
Let us consider the recursion tree associated with the initial call BinarySearch(R_1 \ R_t, 1, t−1). It can be seen that
this tree is of height O(log t) = O(log n).
In each call of BinarySearch, the input set S is partitioned into two disjoint sets.
As a result, the input sets associated with all the recursive calls at any single level of
the recursion tree form a disjoint partition of R_1 \ R_t.
Since the time taken by Reach on an input set S is O(2^k · |S|), the total time taken by all the invocations
of Reach at any single level is O(2^k · n).
As there are at most O(log n) levels in the recursion tree, the
total time taken by BinarySearch is O(2^k · n log n).
We conclude with the following theorem.
Theorem 3.1
Let F be any set of at most k failed edges, and let P be any path which is intact in G \ F. If we have prestored the graphs FTRS(x) and FTRS^R(x) for each vertex x on P, then we can compute all the SCCs of G \ F which intersect P in O(2^k · n log n) time.
4 Main Algorithm
In the previous section we showed that given any path P, we can compute all the SCCs intersecting P efficiently, if P is intact in G \ F. In the case that P contains a ≥ 1 failed edges from F, P is decomposed into at most a + 1 subpaths which are intact in G \ F, and we can apply Theorem 3.1 to each of these subpaths separately to get the following theorem:
Theorem 4.1
Let P be any given path in G. Then there exists an O(2^k · n · |P|) size data structure that, for any arbitrary set F of at most k edges, computes the SCCs of G \ F that intersect the path P in O(2^k · (a + 1) · n log n) time, where a (≤ k) is the number of edges of F that lie on P.
Now in order to use Theorem 4.1 to design a fault tolerant algorithm for SCCs, we need to find a family of paths, say 𝒫, such that for any F, each SCC of G \ F intersects at least one path in 𝒫. As described in Subsection 1.1, a heavy path decomposition of a DFS tree T serves as a good choice for 𝒫. Choosing a DFS tree helps us for the following reason: let P be any root-to-leaf path in T, and suppose we have already computed the SCCs in G \ F intersecting P. Then each of the remaining SCCs must be contained in some subtree hanging from the path P. The following lemma formally states this fact.
Lemma 4.1
Let F be any set of at most k failed edges, and let P ∈ 𝒫 be a path whose first vertex is p. Let C be any SCC in G \ F that intersects P but does not intersect any path that is an ancestor path of P in 𝒫. Then all the vertices of C must lie in the subtree T(p).
Proof:
Consider a vertex x on P whose SCC C in G \ F
is not completely contained in the subtree T(p).
We show that C must contain an ancestor of p in T, thereby
proving that C intersects an ancestor path of P in 𝒫.
Let y be any vertex in C that is not in the subtree T(p). If y is an ancestor of x in T, then y itself is an ancestor of p, and we are done; so assume that x and y have no ancestor-descendant relationship in T.
Let Q1 and Q2 be paths from x to y
and from y to x, respectively, in G \ F.
From Lemma 2.2 it follows that either Q1 or Q2
must pass through a common ancestor of x and y in T. Let this ancestor be z.
Notice also that since Q1 and Q2 form a cycle, all their vertices are in C. Therefore,
z and x are in the same SCC in G \ F.
Moreover, since x lies in T(p) and y lies outside T(p), their common ancestor z in T is an ancestor of p.
Since z ∈ C and z is an ancestor of p in T, the lemma follows.
Lemma 4.1 suggests that if we process the paths from 𝒫 in non-decreasing order of their depths, then in order to compute the SCCs intersecting a path P ∈ 𝒫 with first vertex p, it suffices to focus on the subgraph induced by the vertices in T(p) only. This is because the SCCs intersecting P that do not completely lie in T(p) would have already been computed during the processing of some ancestor path of P.
We preprocess the graph as follows. We first compute a heavy path decomposition 𝒫 of the DFS tree T. Next, for each path P ∈ 𝒫 with first vertex p, we use Theorem 4.1 to construct the data structure for the path P and the subgraph of G induced by the vertices in T(p). We use the notation D(P) to denote this data structure. Our algorithm for reporting the SCCs in G \ F uses the collection of these data structures associated with the paths in 𝒫, as follows.
Let C denote the collection of SCCs in G \ F, initialized to ∅. We process the paths from 𝒫 in non-decreasing order of their depths. Let P be any path in 𝒫 and let p be its first vertex. We use the data structure D(P) to compute the SCCs of G(T(p)) \ F intersecting P. Let these be C_1, …, C_j. Note that some of these SCCs might be a part of some bigger SCC computed earlier. We can detect this by keeping a set U of all the vertices for which we have already computed their SCCs. So if C_i ∩ U ≠ ∅, then we discard C_i; otherwise we add C_i to the collection C. Algorithm 3 gives the complete pseudocode of this algorithm.
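The outer loop just described can be sketched as follows, with the per-path query of Theorem 4.1 abstracted as a callback; `scc_on_path` is a hypothetical stand-in for querying D(P), not the paper's Algorithm 3.

```python
def report_sccs(paths_by_depth, scc_on_path):
    """Process the heavy paths in non-decreasing order of depth.  For a
    path P, `scc_on_path(P)` returns the SCCs of G(T(p)) \\ F intersecting
    P.  SCCs whose vertices were already covered by an earlier (bigger)
    SCC are discarded."""
    collection = []
    covered = set()                      # vertices whose SCC is already known
    for path in paths_by_depth:
        for scc in scc_on_path(path):
            if covered.isdisjoint(scc):  # not part of a previously found SCC
                collection.append(set(scc))
                covered |= set(scc)
    return collection
```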
Note that, in the above explanation, we only used the fact that T is a DFS tree, and 𝒫 could have been any path decomposition of T. We now show that the fact that 𝒫 is a heavy path decomposition is crucial for the efficiency of our algorithm. Consider any vertex v ∈ V. The number of times v is processed in Algorithm 3 is equal to the number of paths in 𝒫 that start from either v or an ancestor of v. For this number to be small for each v ∈ V, we choose 𝒫 to be a heavy path decomposition of T, so that, by Lemma 2.3, it is at most log n. On applying Theorem 4.1, this immediately gives that the total time taken by Algorithm 3 is O(2^k · (k + 1) · n log^2 n). In the next subsection, we do a more careful analysis and show that this bound can be improved to O(2^k · n log^2 n).
4.1 Analysis of time complexity of Algorithm 3
For any path P ∈ 𝒫 and any set F of failing edges, let a_P denote the number of edges of F that lie on P, and let n_P denote the number of vertices in T(p), where p is the first vertex of P. It follows from Theorem 4.1 that the time spent in processing P by Algorithm 3 is O(2^k · (a_P + 1) · n_P · log n). Hence the time complexity of Algorithm 3 is of the order of 2^k log n · Σ_{P ∈ 𝒫} (a_P + 1) · n_P.
In order to calculate this sum, we define a notation χ(e, P) as follows: χ(e, P) = 1 if e lies on P, and χ(e, P) = 0 otherwise, for each e ∈ F and P ∈ 𝒫. So the time complexity of Algorithm 3 becomes O(2^k log n · Σ_{P ∈ 𝒫} (1 + Σ_{e ∈ F} χ(e, P)) · n_P).
Observe that for any vertex v and any path P ∈ 𝒫, the vertex v contributes to n_P exactly when P starts from either v or an ancestor of v; otherwise its contribution is zero. Consider any vertex v. We now show that the total contribution of v to the above sum is at most k + log n. Let 𝒫_v denote the set of those paths in 𝒫 that start from either v or an ancestor of v; the contribution of v is then Σ_{P ∈ 𝒫_v} (1 + Σ_{e ∈ F} χ(e, P)). Note that Σ_{P ∈ 𝒫_v} Σ_{e ∈ F} χ(e, P) is at most k, since the paths in 𝒫 are edge disjoint and |F| ≤ k, and Lemma 2.3 implies that the number of paths in 𝒫_v is at most log n. This shows that the contribution of v is at most k + log n, which is O(log n), since we may assume k ≤ log n (otherwise 2^k ≥ n, and one can simply recompute the SCCs of G \ F from scratch within the claimed time bound).
Hence the time complexity of Algorithm 3 becomes $O\big(\log n \cdot n \cdot (2^k + \log n)\big) = O(2^k n \log n + n \log^2 n)$, which is $O(2^k n \log^2 n)$. We thus conclude with the following theorem.
Theorem 4.2
For any $n$-vertex directed graph $G$, there exists an $O(2^k n^2)$ size data structure that, given any set $F$ of at most $k$ failing edges, can report all the SCCs of $G \setminus F$ in $O(2^k n \log^2 n)$ time.
5 Extension to handle insertion as well as deletion of edges
In this section we extend our algorithm to incorporate insertion as well as deletion of edges. That is, we describe an algorithm for reporting the SCCs of a directed graph when there are at most $k$ edge insertions and at most $k$ edge deletions.
Let $\mathcal{D}$ denote the $O(2^k n^2)$ size data structure, described in Section 4, for handling $k$ edge failures. In addition to $\mathcal{D}$, we store the two fault tolerant reachability subgraphs (FTRS) of each vertex $v$ in $G$: $\mathrm{FTRS}(v)$, which preserves reachability from $v$ under at most $k$ failures, and $\mathrm{FTRS}^{-1}(v)$, which preserves reachability to $v$. Thus the space used remains the same, i.e., $O(2^k n^2)$. Now let $(F, I)$ be the ordered pair of updates, with $F$ being the set of failing edges and $I$ being the set of newly inserted edges. Also let $|F| \leq k$ and $|I| \leq k$.
Our first step is to compute the collection $\mathcal{C}$, consisting of the SCCs of the graph $G \setminus F$. This can be easily done in $O(2^k n \log^2 n)$ time using the data structure $\mathcal{D}$. Now, on addition of the set $I$, some of the SCCs in $\mathcal{C}$ may get merged into bigger SCCs. Let $B$ be the subset of $V$ consisting of the endpoints of the edges in $I$. Note that if the SCC of a vertex $v$ gets altered on addition of $I$, then its new SCC must contain at least one edge from $I$, and thus also a vertex from the set $B$. Therefore, in order to compute the SCCs of $(G \setminus F) \cup I$, it suffices to recompute only the SCCs of the vertices lying in the set $B$.
Lemma 5.1
Let $G_B$ be the graph consisting of the edge set $I$, together with the subgraphs $\mathrm{FTRS}(v)$ and $\mathrm{FTRS}^{-1}(v)$, for each $v \in B$. Then the SCC of $v$ in $G_B \setminus F$ is equal to the SCC of $v$ in $(G \setminus F) \cup I$, for each $v \in B$.
Proof: Consider a vertex $v \in B$. Since every edge of $G_B \setminus F$ is also an edge of $(G \setminus F) \cup I$, the SCC of $v$ in $G_B \setminus F$ is contained in the SCC of $v$ in $(G \setminus F) \cup I$. We show that it is indeed equal to it.
Let $w$ be any vertex reachable from $v$ in $(G \setminus F) \cup I$ by a path, say $Q$. Our aim is to show that $w$ is reachable from $v$ in $G_B \setminus F$ as well. Notice that we can write $Q$ as $Q_0 :: e_1 :: Q_1 :: e_2 :: \cdots :: e_t :: Q_t$, where $e_1, \ldots, e_t$ are edges in $I$ and $Q_0, \ldots, Q_t$ are the segments of $Q$ obtained after removal of the edges of the set $I$. Thus $Q_0, \ldots, Q_t$ lie in $G \setminus F$. For $i = 0$ to $t$, let $a_i$ and $b_i$ be respectively the first and last vertices of the segment $Q_i$. Since $a_0 = v \in B$ and $a_1, \ldots, a_t$ are endpoints of edges in $I$, the FTRS of all the vertices $a_0$ to $a_t$ are contained in $G_B$. Thus for $i = 0$ to $t$, the vertex $b_i$ must be reachable from $a_i$ by some path, say $R_i$, in the graph $G_B \setminus F$. Hence $R_0 :: e_1 :: R_1 :: \cdots :: e_t :: R_t$ is a path from $v$ to $w$ in the graph $G_B \setminus F$.
In a similar manner, using the subgraphs $\mathrm{FTRS}^{-1}(\cdot)$, we can show that if a vertex $w$ has a path to $v$ in the graph $(G \setminus F) \cup I$, then $w$ will also have a path to $v$ in the graph $G_B \setminus F$. Thus the SCC of $v$ in $G_B \setminus F$ must be equal to the SCC of $v$ in $(G \setminus F) \cup I$.
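The decomposition of the path $Q$ at the inserted edges, used in the proof above, is mechanical; the helper below makes it concrete. It is an illustration only: paths are represented as lists of consecutive edges, an assumption made for this sketch.

```python
def split_at_inserted_edges(Q, I):
    """Split a path (a list of consecutive edges) into the pieces
    Q_0, e_1, Q_1, ..., e_t, Q_t from the proof of Lemma 5.1: the e_i
    are the edges of Q belonging to I, and the Q_i are the maximal
    stretches of Q avoiding I (each such stretch lies in G \ F)."""
    segments, inserted, cur = [], [], []
    for e in Q:
        if e in I:            # an inserted edge ends the current segment
            segments.append(cur)
            inserted.append(e)
            cur = []
        else:
            cur.append(e)
    segments.append(cur)      # trailing segment Q_t (possibly empty)
    return segments, inserted
```

With $t$ inserted edges on $Q$, the helper returns $t{+}1$ segments, matching the indexing in the proof.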
So we compute the auxiliary graph $G_B$ as described in Lemma 5.1. Note that $G_B$ contains only $O(2^k k n)$ edges, since each FTRS has $O(2^k n)$ edges and $|B|$ is at most $2k$. Next we compute the SCCs of the graph $G_B \setminus F$ using any standard algorithm [9] that runs in time linear in the number of edges and vertices. This algorithm will take $O(2^k k n)$ time. Finally, for each $v \in B$, we check if the SCC of $v$ in $G_B \setminus F$ appears in $\mathcal{C}$ broken into smaller SCCs; if so, then we merge all of them into a single SCC. We can accomplish this entire task in $O(kn)$ time in total. This completes the description of our algorithm. For the pseudocode see Algorithm 4.
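The final merge step can be sketched as follows. This is a hedged illustration, not the authors' pseudocode: `scc_in_GB` stands for the SCCs already computed in $G_B \setminus F$, and it relies on the fact that adding edges only merges SCCs, so every old SCC is either entirely inside a new SCC or disjoint from it.

```python
def merge_affected(C, B, scc_in_GB):
    """Merge step of the update algorithm (sketch).

    C          -- list of SCCs (sets of vertices) of G \ F
    B          -- endpoints of the inserted edges
    scc_in_GB  -- maps each v in B to its SCC (a frozenset) in
                  G_B \ F, which by Lemma 5.1 equals its SCC in
                  (G \ F) + I
    Returns the SCCs of (G \ F) + I: each new SCC of a vertex in B
    replaces the old SCCs it spans; all other SCCs are unchanged.
    """
    result = []
    absorbed = set()   # vertices already placed in a merged SCC
    for v in B:
        S = scc_in_GB[v]
        if not (S & absorbed):     # first time we see this new SCC
            result.append(set(S))
            absorbed |= S
    for scc in C:
        if not (scc & absorbed):   # untouched by any insertion
            result.append(scc)
    return result
```

Because each vertex is inspected a constant number of times per vertex of $B$, this realizes the merging within the time bound claimed above.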
We conclude with the following theorem.
Theorem 5.1
For any $n$-vertex directed graph $G$, there exists an $O(2^k n^2)$ size data structure that, given any set $I$ of at most $k$ edge insertions and any set $F$ of at most $k$ edge deletions, can report the SCCs of the graph $(G \setminus F) \cup I$ in $O(2^k n \log^2 n)$ time.
References
 [1] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, pages 434–443, 2014.
 [2] Surender Baswana, Keerti Choudhary, and Liam Roditty. Fault tolerant subgraph for single source reachability: generic and optimal. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 509–518, 2016.
 [3] Aaron Bernstein and David Karger. A nearly optimal oracle for avoiding failed vertices and edges. In STOC’09: Proceedings of the 41st annual ACM symposium on Theory of computing, pages 101–110, New York, NY, USA, 2009. ACM.
 [4] Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti. Multiple-edge-fault-tolerant approximate shortest-path trees. In 33rd Symposium on Theoretical Aspects of Computer Science, STACS 2016, February 17-20, 2016, Orléans, France, pages 18:1–18:14, 2016.
 [5] Shiri Chechik. Fault-tolerant compact routing schemes for general graphs. Inf. Comput., 222:36–44, 2013.
 [6] Shiri Chechik, Sarel Cohen, Amos Fiat, and Haim Kaplan. (1 + epsilon)-approximate f-sensitive distance oracles. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 1479–1496, 2017.
 [7] Shiri Chechik, Thomas Dueholm Hansen, Giuseppe F. Italiano, Jakub Lacki, and Nikos Parotsidis. Decremental single-source reachability and strongly connected components in õ(mn) total update time. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 315–324, 2016.
 [8] Shiri Chechik, Michael Langberg, David Peleg, and Liam Roditty. f-sensitivity distance oracles and routing schemes. Algorithmica, 63(4):861–882, 2012.
 [9] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms (3. ed.). MIT Press, 2009.
 [10] Camil Demetrescu, Mikkel Thorup, Rezaul Alam Chowdhury, and Vijaya Ramachandran. Oracles for distances avoiding a failed node or link. SIAM J. Comput., 37(5):1299–1318, 2008.
 [11] Michael Dinitz and Robert Krauthgamer. Fault-tolerant spanners: better and simpler. In Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing, PODC 2011, San Jose, CA, USA, June 6-8, 2011, pages 169–178, 2011.
 [12] Ran Duan and Seth Pettie. Dual-failure distance and connectivity oracles. In SODA'09: Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 506–515, Philadelphia, PA, USA, 2009. Society for Industrial and Applied Mathematics.
 [13] Ran Duan and Seth Pettie. Connectivity oracles for failure prone graphs. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 465–474, 2010.
 [14] Ran Duan and Seth Pettie. Connectivity oracles for graphs subject to vertex failures. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 490–509, 2017.
 [15] Daniele Frigioni, Tobias Miller, Umberto Nanni, and Christos D. Zaroliagis. An experimental study of dynamic algorithms for transitive closure. ACM Journal of Experimental Algorithmics, 6:9, 2001.
 [16] Loukas Georgiadis, Giuseppe F. Italiano, and Nikos Parotsidis. Strong connectivity in directed graphs under failures, with applications. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 1880–1899, 2017.
 [17] Fabrizio Grandoni and Virginia Vassilevska Williams. Improved distance sensitivity oracles via fast single-source replacement paths. In 53rd Annual IEEE Symposium on Foundations of Computer Science, FOCS 2012, New Brunswick, NJ, USA, October 20-23, 2012, pages 748–757, 2012.
 [18] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Sublinear-time decremental algorithms for single-source reachability and shortest paths on directed graphs. In Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 674–683, 2014.
 [19] Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol Saranurak. Unifying and strengthening hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 21–30, 2015.
 [20] Giuseppe F. Italiano. Finding paths and deleting edges in directed acyclic graphs. Inf. Process. Lett., 28(1):5–11, 1988.
 [21] Bruce M. Kapron, Valerie King, and Ben Mountjoy. Dynamic graph connectivity in polylogarithmic worst case time. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 1131–1142, 2013.
 [22] Neelesh Khanna and Surender Baswana. Approximate shortest paths avoiding a failed vertex: Optimal size data structures for unweighted graphs. In 27th International Symposium on Theoretical Aspects of Computer Science, STACS 2010, March 4-6, 2010, Nancy, France, pages 513–524, 2010.
 [23] Jakub Lacki. Improved deterministic algorithms for decremental transitive closure and strongly connected components. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1438–1445, 2011.
 [24] Merav Parter. Dual failure resilient BFS structure. In Proceedings of the 2015 ACM Symposium on Principles of Distributed Computing, PODC 2015, Donostia-San Sebastián, Spain, July 21-23, 2015, pages 481–490, 2015.
 [25] Merav Parter and David Peleg. Sparse fault-tolerant BFS trees. In Algorithms - ESA 2013 - 21st Annual European Symposium, Sophia Antipolis, France, September 2-4, 2013. Proceedings, pages 779–790, 2013.
 [26] Mihai Patrascu and Mikkel Thorup. Planning for fast connectivity updates. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2007), October 20-23, 2007, Providence, RI, USA, Proceedings, pages 263–271, 2007.
 [27] L. Roditty and U. Zwick. Improved dynamic reachability algorithms for directed graphs. SIAM J. Comput., 37(5):1455–1471, 2008.
 [28] Liam Roditty. Decremental maintenance of strongly connected components. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 1143–1150, 2013.
 [29] Daniel D. Sleator and Robert E. Tarjan. A data structure for dynamic trees. Journal of Computer and System Sciences, 26:362–391, 1983.
 [30] Robert Endre Tarjan. Depth-first search and linear graph algorithms. SIAM J. Comput., 1(2):146–160, 1972.
 [31] Oren Weimann and Raphael Yuster. Replacement paths and distance sensitivity oracles via fast matrix multiplication. ACM Transactions on Algorithms, 9(2):14, 2013.