A General Framework for Multi-level Subsetwise Graph Sparsifiers


Reyan Ahmed    Keaton Hamm    Mohammad Javad Latifi Jebelli    Stephen Kobourov    Faryad Darabi Sahneh    Richard Spence   
Abstract

Given an undirected weighted graph , a subsetwise sparsifier over a terminal set is a subgraph having a certain structure which connects the terminals. Examples are Steiner trees (minimal-weight trees spanning ) and subsetwise spanners (subgraphs such that for given , for ). Multi-level subsetwise sparsifiers are generalizations in which terminal vertices require different levels or grades of service.

This paper gives a flexible approximation algorithm for several multi-level subsetwise sparsifier problems, including multi-level graph spanners, Steiner trees, and –connected subgraphs. The algorithm relies on computing an approximation to the single level instance of the problem. For the subsetwise spanner problem, there are few existing approximation algorithms even for a single level; consequently, we give a new polynomial time algorithm for computing a subsetwise spanner for a single level. Specifically, we show that for , , and , there is a subsetwise –spanner with total weight , where is the weight of the Steiner tree of over the subset . This is the first algorithm and corresponding weight guarantee for a multiplicative subsetwise spanner for nonplanar graphs.

We also generalize a result of Klein to give a constant approximation to the multi-level subsetwise spanner problem for planar graphs. Additionally, we give a polynomial-size ILP for optimally computing pairwise spanners of arbitrary distortion (beyond linear distortion functions), and provide experiments to illustrate the performance of our algorithms.

Graph spanners, Steiner trees, Subsetwise spanners, Multi-level graph representation

University of Arizona, Tucson, United States

1 Introduction

Graph sparsifiers and sketches have become a fundamental object of study due to the breadth of applications which benefit from them and the power that such techniques have for data compression, computational speedup, network routing, and many other tasks [25]. Some canonical examples of graph sparsifiers are spanners [15], spectral sparsifiers [27], spanning trees, and Steiner trees [19]. Sparsifier constructions attempt to delete as much edge weight from the graph as possible while maintaining certain properties of the underlying graph; for instance, spanners are subgraphs which approximately preserve the shortest path distances in the initial graph, whereas spectral sparsifiers maintain the properties of the graph Laplacian matrix and are useful in fast solvers for linear systems [28].

Multi-level graph representations have been increasingly utilized, as many networks contain within them a notion of priority of nodes. Indeed, displaying a map of a road network with varying detail based on zoom level is an instance where vertices (intersections) are given priority based on the size of the roads. Alternatively, following a natural disaster, a city might designate priorities for rebuilding infrastructure to connect buildings; buildings with higher priority will be connected first, for example the city will first ensure that routes from major hubs to hospitals are repaired. Multi-level variants of the Steiner tree problem and the multiplicative spanner problem were studied in [2] and [3], respectively.

In this paper, we give some general and flexible algorithms for finding certain types of graph sparsifiers for multi-level graphs (which may also be cast as grade-of-service problems). The class of sparsifiers includes many variants of Steiner trees and spanners, among others. In the spanner case, the algorithms presented here necessitate a solution to the subsetwise spanner problem in which one seeks to find a subgraph in which distances between certain prescribed vertices are preserved up to some small distortion factor. Specifically, given a distortion function satisfying (typically for ), and a subset , a subsetwise spanner of is a subgraph such that for all , where denotes the length of the shortest - path in . We describe a compact integer linear program (ILP) formulation of this problem, and also give a novel and simple polynomial time approximation algorithm and prove corresponding bounds on the weight of multiplicative subsetwise spanners (the two theorems of Section 3.1); such bounds have thus far not been considered in the literature. These weight bounds are both in terms of the optimal solution to the problem and of the weight of the Steiner tree over the given subset. In particular, for , , and , we obtain a subsetwise –spanner (i.e. and ) with weight times the weight of the Steiner tree over (which is the minimum-weight tree spanning the vertices ). In the subsetwise spanner case, this is a stronger notion of lightness than previous spanner guarantees in terms of the weight of the minimal spanning tree of the graph. Moreover, this weight bound is for arbitrary graphs, whereas the only previous bound of this type was for planar graphs [22].

We also provide numerical experiments to illustrate our algorithms in the appendix, and show the effect on the single level subroutines used in our main algorithm.

1.1 Problem Statement

First, we quantify the kinds of sparsifiers that our algorithms are flexible enough to handle. In what follows, graphs will be weighted, connected, and undirected. We say that is an admissible graph sparsifier of over terminals provided is a connected subgraph of and . We typically assume that has some sort of structure, such as being a tree or approximately maintaining distances between terminals. Moreover, we assume that for a given type of sparsifier, there exists a merging operation such that if terminals are not disjoint, and and are admissible sparsifiers of over and , respectively, then is an admissible sparsifier of over . The multi-level sparsifier problem under consideration is as follows.

Problem 1 (Multi-level Admissible Graph Sparsifier (MLAGS) Problem).

Given a weighted, connected, undirected graph , a nested sequence of terminals , and an edge cost function for each level , (typically is a function of and ), compute a minimum-cost sequence of admissible sparsifiers where is an admissible sparsifier (of the same type) for over . The cost of a solution is defined to be

There is an equivalent formulation of this problem in terms of grades of service; see [3] for a full explanation.

1.2 Examples

One example of an admissible sparsifier over a set of terminals is a Steiner tree. A Steiner tree of over is a subtree of that spans , possibly including other vertices, and is denoted . The classical Steiner tree problem (ST) is to find a minimum-weight edge set that spans all terminals. ST is one of Karp’s initial NP–hard problems [19], is APX–hard [7], and cannot be approximated within a factor of unless P = NP [12]. The edge-weighted ST problem admits a simple 2-approximation [17] by computing a minimum spanning tree on the metric closure of . (Given , the metric closure of is the complete graph , where edge weights equal the lengths of shortest paths in .) The LP–based approximation algorithm of Byrka et al. [9] guarantees a ratio of . Several other variants of the ST problem on graphs are admissible sparsifiers, including –dense ST, prize-collecting ST, and bottleneck ST. Details about these ST problem variants can be found in the online compendium and references therein [18].
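The classical 2-approximation just described can be sketched as follows. This is a minimal sketch, assuming the graph is given as an adjacency dictionary with positive edge weights; it is not the implementation used in our experiments. Closure edges chosen by Kruskal's algorithm are expanded back into shortest paths of the original graph.

```python
import heapq

def dijkstra(adj, src):
    # Single-source shortest paths: returns (distances, predecessors).
    dist, pred, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v], pred[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, pred

def steiner_tree_2approx(adj, terminals):
    """2-approximate Steiner tree: MST of the metric closure over the
    terminals, with each closure edge expanded to a shortest path in G."""
    info = {t: dijkstra(adj, t) for t in terminals}
    # Sorted edge list of the metric closure (complete graph on terminals).
    closure = sorted((info[a][0][b], a, b)
                     for i, a in enumerate(terminals)
                     for b in terminals[i + 1:])
    parent = {t: t for t in terminals}
    def find(x):  # union-find for Kruskal's algorithm
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_edges = set()
    for d, a, b in closure:
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        parent[ra] = rb
        # Expand the closure edge (a, b) into its shortest a-b path in G.
        v = b
        while v != a:
            u = info[a][1][v]
            tree_edges.add((min(u, v), max(u, v)))
            v = u
    return tree_edges
```

On a star graph whose center joins three terminals by unit-weight edges, the returned edge set is exactly the optimal Steiner tree (the three star edges).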

Another example of an admissible sparsifier is the –connected subgraph [24], in which (similar to the ST problem) a set of terminal vertices is given, and the goal is to find the minimum-weight subgraph in which each pair of terminals is connected by at least vertex-disjoint paths.

Another example of an admissible sparsifier is the pairwise spanner with arbitrary distortion. Multiplicative spanners, initially introduced by Peleg and Schäffer [15], are subgraphs which approximately preserve distances up to a multiplicative factor, i.e., for all . The most general type of spanner is the pairwise spanner, where one is given a set of pairs , and a distortion function satisfying , and one attempts to find the subgraph of smallest weight or minimal number of edges such that

Any subgraph satisfying this condition is called a pairwise spanner of with distortion , or a pairwise –spanner for short. Common distortion functions are (multiplicative spanners), (additive spanners), or (linear, or –, spanners). In the case that , we simply use the term spanner, whereas if for some , then we use the term subsetwise spanner.
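The defining condition is easy to check directly. A minimal checker, assuming graphs are adjacency dictionaries and the distortion is an arbitrary callable f:

```python
import heapq

def shortest_dist(adj, src, dst):
    # Dijkstra distance from src to dst; infinity if unreachable.
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float('inf')

def is_pairwise_spanner(g, h, pairs, f):
    """Check the defining condition: d_H(u, v) <= f(d_G(u, v)) for all pairs."""
    return all(shortest_dist(h, u, v) <= f(shortest_dist(g, u, v))
               for u, v in pairs)
```

For instance, dropping one edge of a unit-weight triangle leaves a multiplicative 2-spanner of the pair across the dropped edge, but not a distance preserver.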

The merging operator is sparsifier specific. For example, in the case of spanners and –connected subgraphs, it is simply the union of the subgraphs, while for Steiner trees it is to take the union and prune edges to enforce the tree structure.

Note that spectral sparsifiers are not admissible, since they need not be subgraphs: edges absent from the original graph may be added to obtain a spectral sparsifier.

1.3 Related Work

There are several results known about multi-level or grade-of-service Steiner tree problems for weighted graphs. Balakrishnan et al. [6] give a –approximation algorithm for the 2–level network design problem with proportional edge costs where . Charikar et al. [10] describe a simple –approximation for the Quality-of-Service (QoS) Multicast Tree problem with proportional edge costs (termed the rate model), which was later improved to by randomized doubling. Karpinski et al. [20] use an iterative contraction scheme to obtain a -approximation. Ahmed et al. [2] have further improved the approximation ratio to , by combining top-down and bottom-up strategies.

Among the few discussions of multi-level graph spanners is [3], where level-dependent approximation guarantees are given assuming a single level subsetwise spanner oracle. Replacing the oracle in [3] with a genuine approximation algorithm for subsetwise spanners would therefore be a significant improvement. Coppersmith et al. [13] study the subsetwise distance preserver problem (the case ) and show that given an undirected weighted graph and a subset of size , one can construct a linear size preserver in polynomial time. Cygan et al. [14] give polynomial time algorithms to compute subsetwise and pairwise additive spanners for unweighted graphs and show that there exists an additive pairwise spanner of size with additive stretch. They also show how to construct size subsetwise additive 2–spanners. Abboud et al. [1] improved that result by showing how to construct size subsetwise additive 2–spanners. Kavitha [21] shows that there is a polynomial time algorithm which constructs subsetwise spanners of size and for additive stretch 4 and 6, respectively. Bodwin et al. [8] give an upper bound on the size of subsetwise spanners with polynomial additive stretch factor. To the authors’ knowledge, there are no existing guarantees for multiplicative subsetwise spanners except those of Klein [22], who gives a polynomial time algorithm that computes a subsetwise multiplicative spanner of an edge weighted planar graph for a constant stretch factor with constant approximation ratio.

Hardness of approximation of multi-level spanners follows from the single level case. Peleg and Schäffer [15] show that determining if there exists a –spanner of with or fewer edges is NP–complete. Further, it is NP–hard to approximate the (unweighted) –spanner problem for to within a factor of even when restricted to bipartite graphs [23].

2 A General Rounding Up Approach

Here, we give a very general and flexible extension of the rounding approach of Charikar et al. [11] for designing Steiner trees, which we apply to compute approximate solutions to the MLAGS problem based on solving the problem at a subset of levels. Furthermore, we also consider a more general cost model compared to [3] and [2] for multi-level design. We assume the cost of an edge on a given level can be decomposed as for all , where is a cost scaling function. That is, the cost scales uniformly over edges for a given level. Hence our cost model is more general than the case considered in [3], as is an arbitrary non-decreasing function.

Additionally, consider a rounding up function, or “level quantizer,”

which satisfies . Suppose also that there exists a positive constant

(1)

and there exists a constant such that if appears on level , then

(2)

The latter can also be written as .

Our general rounding up algorithm for computing a MLAGS is as follows. For each , compute an admissible sparsifier over terminals , and let be the subgraphs returned. Then for we will set as follows:

(3)

where is defined as the largest element in less than or equal to . In other words, the graph on level is the merging (in the sense of the operator ) of all computed sparsifiers on higher levels, as well as the computed sparsifier . For example, if and , then , , and . The rounding up approach of Charikar et al. [11] is a special case where .

for  do
      an –approximation to the optimal admissible sparsifier of over
end for
for  do
      (as in (3))
end for
return
Algorithm 1 Rounding Approximation Algorithm for MLAGS()
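The merging rule (3) can be sketched as follows. This is a sketch only, assuming the merging operator is plain edge-set union (as for spanners and –connected subgraphs) and that solve_single is a caller-supplied single-level solver returning an edge set; the rounding set Y must contain level 1.

```python
def round_up_merge(levels, rounding_set, solve_single):
    """Sketch of the merging step (3) of Algorithm 1: the sparsifier used
    at level l is the union of the single-level solutions computed at all
    rounded levels y >= floor_Y(l), where floor_Y(l) is the largest element
    of the rounding set Y that is at most l."""
    ys = sorted(rounding_set)
    sols = {y: set(solve_single(y)) for y in ys}
    result = {}
    for l in range(1, levels + 1):
        floor_y = max(y for y in ys if y <= l)
        result[l] = set().union(*(sols[y] for y in ys if y >= floor_y))
    return result
```

For example, with rounding set {1, 3} and four levels, this reproduces the assignment described above: levels 3 and 4 use the level-3 solution, while levels 1 and 2 use the union of the level-1 and level-3 solutions.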
{theorem}

Given a graph , terminal sets , a level cost function , and a rounding function with rounding set which satisfies conditions (1) and (2), Algorithm 1 yields an –approximation to the MLAGS problem, where .

Proof.

By assumption (1), if we consider to be the optimal solution to the rounded MLAGS problem where edges are assigned levels instead of , then an edge costs at most times what it would have in the optimal solution for the unrounded MLAGS problem. By summing over , this implies that , where OPT is the cost of the optimal solution to the full MLAGS problem.

Again consider the optimal solution to the rounded problem with cost . Given any edge in this solution, if its level is , then we may replace the edge with edges on levels . The total cost of these edges is then

by (2). At each rounded level, Algorithm 1 computes an –approximation to the optimal admissible sparsifier; it follows that the sparsifier returned by the algorithm has cost no worse than , which is at most by the previous analysis, and the proof is complete. ∎

Note that the proof of Theorem 2 is independent of the type of sparsifier desired, and hence the algorithm is quite flexible. A similar approach was used to approximate minimum-cost multi-level Steiner trees in [2], but the above analysis shows this approach also works for spanners and –connected subgraphs, for example.

Now we can compute the approximation guarantee given . Let denote the cost of a minimum sparsifier of over . As , we have . Then the following holds ( is the cost of the output of Algorithm 1 for a given ).

{lemma}

For any set , we have , where .

Proof.

This follows from the merging bound; note that the edges of the subgraph appear on all levels. The edges from the subgraph appear on levels, namely 1, 2, …, . ∎

Then applying Lemma 2 for a particular , we have

where we have used the observation that . Therefore, the right-hand side of the above inequality provides an approximation guarantee assuming knowledge of . We can also find a generic bound by looking at the worst case scenario for . Without loss of generality, we may assume that , so that . Since is an increasing function, the worst case of level costs will be of the form and for some . Therefore, the general approximation guarantee (regardless of costs of the subset sparsifiers of each level) is

2.1 Examples

A natural example is to take , i.e., a linear cost growth along levels. Following Charikar et al. [11], we may take . In this case, using an oracle to compute the subsetwise spanner at each level yields a –approximation to the MLAGS problem for a multiplicative –spanner. The same approximation holds in this case for multi-level Steiner trees [2]. Indeed, , whence we may choose , and if edge gets its rate rounded to , then

whence , thus yielding a 4–approximation. The following corollary improves on Theorems 1 and 3 in [3], as the approximation ratio is independent of the number of levels. {corollary} Let . If an oracle which computes the optimal subsetwise multiplicative –spanner of over terminals is used in Algorithm 1, and , then Algorithm 1 produces a –approximation to the optimal multi-level graph spanner problem. Note that the use of an oracle such as the ILP given in Appendix A is costly; this issue will be addressed in Section 3.

It is of interest to note that choosing for some other base does not improve the approximation ratio. In this case, one can show that , and is approximately , and thus the approximation ratio is which is minimized when .

Using a coarser quantizer instead yields a worse approximation. Consider the coarsest quantizer which sets for all and , with as before. In this case, , and , which means that the best one can do is an –approximation. This approach corresponds to the Bottom Up approach described in [3].

If no rounding is done (i.e., ), then one computes a sparsifier at each level and merges them together according to the operation going down the levels. This can be considered a Top Down approach to the problem, and yields an upper bound on the approximation ratio of .

2.2 Composite Algorithm

One way to find a good approximation algorithm is to run Algorithm 1 for all subsets containing 1 and then choose the multi-level sparsifier with the smallest cost. This requires sparsifier computations followed by finding the minimum over solutions. Consequently, this method is costly to implement; however, it provides the lowest cost and the best guarantees possible using single-level solvers. We call this the composite algorithm.

for all with  do
     Compute MLAGS() via Algorithm 1
end for
return solution with minimum cost.
Algorithm 2 Composite()
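The composite algorithm can be sketched as follows, under the decomposed cost model where an edge on level l costs f(l) times its weight. The callbacks solve_single, weight, and f are assumptions supplied by the caller, and merging is again taken to be edge-set union.

```python
from itertools import combinations

def composite(levels, solve_single, weight, f):
    """Sketch of the composite algorithm: run the rounding scheme for every
    rounding set Y containing level 1 and keep the cheapest multi-level
    solution, with per-level cost f(l) * (total edge weight at level l)."""
    sols = {y: set(solve_single(y)) for y in range(1, levels + 1)}
    rest = list(range(2, levels + 1))
    best_cost, best = float('inf'), None
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):  # all Y with 1 in Y
            ys = [1] + list(extra)
            cost, solution = 0, {}
            for l in range(1, levels + 1):
                floor_y = max(y for y in ys if y <= l)
                graph = set().union(*(sols[y] for y in ys if y >= floor_y))
                solution[l] = graph
                cost += f(l) * sum(weight(e) for e in graph)
            if cost < best_cost:
                best_cost, best = cost, solution
    return best_cost, best
```

The subset enumeration makes the exponential cost in the number of levels explicit; the ILP of Section 2.3 avoids this enumeration.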

We can find the approximation guarantee of the composite algorithm using the following linear program (LP):

Find subject to

The optimization problem suggested above is similar to that in [2]. In particular, for linear costs () and , the solution returned by the composite algorithm has cost no worse than , assuming an oracle that computes sparsifiers optimally. This approach uses sparsifier computations, and computes the minimum over candidate solutions.

2.3 Computing the Best Rounding Set

Suppose that, having computed all single-level solutions, we wish to determine which would provide the cheapest multi-level solution. We formulate this as a minimization problem. Define binary variables such that and otherwise. For example, if and , then and .

{lemma}

Given a vector , the choice of from the following ILP minimizes , where is the th smallest element of and .

Proof.

Using the indicator variables, the objective function can be expressed as
because and the other ’s are zero. In the above formulation, the first constraint indicates that for every given or , at most one is equal to one. The second constraint indicates that for a given , if for some , then there is also a such that . In other words, the result determines a proper choice of levels by ensuring continuity of intervals. The last constraint guarantees that . ∎

3 Metric Closure Multi-level Spanners

Observe that when the admissible sparsifier is a subsetwise spanner, the composite algorithm proposed above necessitates having a good algorithm for computing a subsetwise spanner on a single level with a given distortion. Unfortunately, there are precious few algorithms that provably approximate the subsetwise spanner problem. In this section, the cost at each level is assumed to be .

{definition}

[Metric Closure] Given a graph and a set of terminals , the metric closure of over is defined as the complete weighted graph on vertices with the weight of edge given by .
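A direct implementation of this definition, assuming the graph is an adjacency dictionary (a sketch; distances are computed by running Dijkstra's algorithm from each terminal):

```python
import heapq

def metric_closure(adj, S):
    """Metric closure of the graph over terminal list S: the complete graph
    on S whose edge weights are shortest-path distances in the input."""
    def dij(src):
        dist, pq = {src: 0}, [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue
            for v, w in adj[u].items():
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist
    dists = {s: dij(s) for s in S}
    # One entry per unordered terminal pair.
    return {(a, b): dists[a][b] for i, a in enumerate(S) for b in S[i + 1:]}
```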

Here, we give a new and simple algorithm for computing a subsetwise spanner with general distortion function . We will use an –spanner subroutine in this algorithm, which must work for weighted graphs. Note that spanner algorithms for weighted graphs are typically of the multiplicative type.

3.1 Metric Closure Subsetwise Spanners

Algorithm 3 describes a subsetwise spanner construction; later we will generalize this to multi-level spanners in Algorithm 4, but even for the case of a single level this provides a novel approximation for a subsetwise spanner.

metric closure of over
spanner of with distortion
edges in corresponding to
return
Algorithm 3 Metric Closure Subsetwise Spanner()
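For the multiplicative case, Algorithm 3 can be sketched as follows, using a greedy spanner in the style of Althöfer et al. [5] on the metric closure and expanding each surviving closure edge back to a shortest path. This is an illustrative sketch under the assumptions that graphs are adjacency dictionaries and the stretch factor t is a constant.

```python
import heapq

def dijkstra(adj, src):
    # Single-source shortest paths: returns (distances, predecessors).
    dist, pred, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float('inf')):
                dist[v], pred[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, pred

def greedy_spanner(vertices, edges, t):
    """Althofer-style greedy t-spanner over an edge list of (w, u, v)."""
    span = {v: {} for v in vertices}
    for w, u, v in sorted(edges):
        # Keep the edge only if it is not already t-approximated.
        if dijkstra(span, u)[0].get(v, float('inf')) > t * w:
            span[u][v] = span[v][u] = w
    return span

def subsetwise_spanner(adj, S, t):
    """Sketch of Algorithm 3: greedy t-spanner of the metric closure of S,
    with each closure edge expanded to a shortest path in the input graph."""
    info = {s: dijkstra(adj, s) for s in S}
    closure = [(info[a][0][b], a, b) for i, a in enumerate(S) for b in S[i + 1:]]
    span = greedy_spanner(S, closure, t)
    edges = set()
    for a in S:
        for b in span[a]:
            v = b  # walk the shortest a-b path via predecessors
            while v != a:
                u = info[a][1][v]
                edges.add((min(u, v), max(u, v)))
                v = u
    return edges
```

On the star graph with three terminals and stretch t = 2, the greedy stage discards one closure edge and the expansion returns the three star edges, which form a valid subsetwise 2-spanner.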

Note that the spanner obtained in the second step can come from any known algorithm which computes a spanner for a weighted graph, which adds an element of flexibility. This also allows us to obtain some new results on the weight of multiplicative subsetwise spanners.

It is known (see [4]) that if is a positive integer, then one can construct a –spanner for a given graph (over all vertices, not a subset), in time, which has weight , where is a minimum spanning tree of . Using this in the second stage of Algorithm 3, we can conclude the following.

{theorem}

Let , , , and be given. Using the spanner construction of [4] as a subroutine, Algorithm 3 yields a subsetwise –spanner of in time with total weight . Moreover,

Proof.

Let be the metric closure of over , and let be the minimum weight subsetwise –spanner for , where is any distortion function. Note that must contain a tree, , which spans , whence

(4)

by definition of Steiner trees. From the approximation result for Steiner trees (see [29]) we have

(5)

It follows that . Now, using the results of [4] for the special case where and , we can construct a –spanner of which satisfies

(6)

Combining these we have the desired estimate

Finally, follows from (5). ∎

Both bounds given in Theorem 3.1 are interesting for different reasons. The first stated bound shows that Algorithm 3 yields an –approximation to the optimal solution. The second bound gives a better notion of lightness of a subsetwise spanner. When , the minimal spanning tree and the Steiner tree over are the same, and hence lightness bounds for spanners in this case are stated in terms of the weight of . However, for it is not generally true that , but by definition the Steiner tree has smaller weight. Thus, this notion of lightness for subsetwise spanners stated in terms of the weight of the Steiner tree is more natural. Klein [22] uses this notion of lightness for subsetwise spanners of planar graphs, but the results presented here are the first for general graphs.

The following gives a precise (as opposed to asymptotic) bound for a subsetwise spanner by utilizing the greedy algorithm of Althöfer et al. [5].

{theorem}

Let , , and be given. Using the greedy spanner algorithm as a subroutine, Algorithm 3 yields a subsetwise –spanner of in time with total weight . Moreover, .

Proof.

The greedy algorithm takes time, and the rest is similar to the proof of the previous theorem. ∎

Note that in the greedy spanner algorithm of Althöfer et al. [5], the minimum spanning tree is a subgraph of the solution. However, the optimal Steiner tree need not be a subgraph of the final solution. Nevertheless, the produced subsetwise spanner will include a Steiner tree with cost at most twice the optimal one.

3.2 Multi-level Metric Closure Spanner

Here, we propose the multi-level version of the metric closure spanner as Algorithm 4. Note that this is different from using Algorithm 3 as a subroutine in Algorithm 1.

metric closure of over
spanner of with distortion
edges in corresponding to
for  do
     For any add the shortest path in to
end for
return
Algorithm 4 Metric Closure Multilevel Spanner()

Here we provide a general bound which is also tight in certain cases and can be derived easily.

{proposition}

Given a graph and terminals , and a multiplicative stretch factor (so the distortion is ), let be the total cost of the multilevel –spanner of Algorithm 4. Then . Moreover, if , then .

Proof.

The bounds follow from the fact that and . ∎

Now, we show that the bound is tight for the case . Consider a graph with terminal sets such that there is a unique shortest path of length between any pairs of terminals. Assume that we do not have any other vertex in beyond the ones appearing on these shortest paths. The diameter of is clearly and we need shortest paths to be copied across all levels. Therefore, we end up with the cost for the multilevel spanner.

4 Polynomial Time Approximation Algorithms

Solving a single level ILP takes much less time than solving a multi-level ILP, especially as the number of levels increases. This fact was one of the motivations behind the rounding algorithm. If a single level ILP is used as an oracle subroutine as in Corollary 2.1 then a constant approximation ratio is obtained, but at the expense of the subroutine being exponential time.

It is natural to consider what happens when the subsetwise spanner Algorithm 3 is used as the approximation algorithm in Algorithm 1. By Theorem 3.1, Algorithm 3 is a –approximation on any single level, and hence combining this with Theorem 2, we find that Algorithm 1 yields an –approximation to the multi-level graph spanner problem with stretch factor for all levels.

If the input graph is planar then instead of using Algorithm 3 we can use the algorithm provided by Klein [22] to compute a subsetwise spanner for the set of levels we get from the rounding up algorithm. The polynomial time algorithm in [22] has constant approximation ratio, assuming that the stretch factor is constant. Hence, we have the following Corollary.

{corollary}

Let be a weighted planar graph with terminals, . Let be a constant, and suppose the level cost function is . Then there exists a polynomial time, constant-factor approximation algorithm (independent of the number of levels) for the optimal multi-level -spanner problem.

For additive spanners, there exist algorithms to compute subsetwise spanners of size , and for additive stretch 2, 4, and 6, respectively [1, 21]. If we use these algorithms in Algorithm 1 to compute subsetwise spanners for different levels, then we have the following Corollary.

{corollary}

Let be an unweighted graph with exponentially decreasing terminals, such that . Then there exist polynomial time algorithms to compute multi-level graph spanners with additive stretch 2, 4 and 6, of size , , and , respectively.

5 Experimental results

To evaluate the performance of different variants of Algorithm 1, we performed several experiments. This requires optimal single-level solvers for Algorithm 1, which we obtain using an ILP formulation of the problem; see Appendix A. We generate graphs using the Erdős–Rényi random graph model [16]; more details of the experiments are in Appendix B.

We then consider the multiplicative graph spanner version of the MLAGS problem, and analyze the effect of using an ILP vs. the metric closure subsetwise spanner Algorithm 3 for the single level approximation therein. Figure 1 shows the impact of different parameters (number of vertices , number of levels , and stretch factors ) using box plots for the two approaches, corresponding to 3 trials for each parameter. In general, the performance of both variants decreases as the number of levels and the stretch factor increase. It is notable that the metric closure approximation algorithm yields results very similar to those from the ILP solver.

Figure 1: Performance of composite Algorithm 2 using ILP and metric closure subsetwise spanner Algorithm 3 as the single level solver in Algorithm 1 on Erdős–Rényi graphs w.r.t. the number of vertices, the number of levels, and the stretch factors.

In the Appendix, we consider three variants for the rounding set: , which we call bottom-up (BU); , which we call top-down (TD); and the optimal , which we call composite (CMP). Varying the size of the input graph, the number of levels, and the stretch factor , we test the performance of Algorithm 1. If the ILP is used as the single level solver, we call these the oracle TD, BU, and CMP solutions; otherwise we call them the metric closure TD, BU, and CMP solutions.

Figures are shown in Appendix B; we only summarize the results here. Generally, both the runtime and the approximation ratio for the multi-level graph spanner problem increase as the parameters grow for the oracle TD, BU, and CMP algorithms. We cannot compute the exact solution for large graphs, but the metric closure variant scales well, so for large graphs we can only estimate the approximation ratio. In this case, the runtime increases with larger input graphs, but interestingly, increasing the number of levels and the stretch factor does not have a strong impact on performance.

6 Conclusions and Future Work

We have given a general framework for solving multi-level graph sparsification problems utilizing single level approximation algorithms as a subroutine. When an oracle is used for the single level instances, our algorithm can yield a constant approximation to the optimal multi-level solution that is independent of the number of levels. Using the metric closure subsetwise spanner algorithm as a subroutine for the multi-level spanner algorithm, we derive an approximation algorithm which depends on the size of the terminal set but not the number of levels. It would be natural to look for multi-level algorithms that do not rely on single level instances of the problem but build the solution simultaneously on all levels.

Additionally, we gave a new algorithm for finding subsetwise spanners of weighted graphs via the metric closure over the subset, and gave novel weight bounds in the case of multiplicative spanners. It would be worthwhile to explore new approximation algorithms for the subsetwise spanner problem, which would strengthen the results in this paper.

References

  • [1] A. Abboud and G. Bodwin (2016) Lower bound amplification theorems for graph spanners. In Proceedings of the 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 841–856.
  • [2] A. R. Ahmed, P. Angelini, F. D. Sahneh, A. Efrat, D. Glickenstein, M. Gronemann, N. Heinsohn, S. Kobourov, R. Spence, J. Watkins and A. Wolff (2018) Multi-level Steiner trees. In 17th International Symposium on Experimental Algorithms (SEA), pp. 15:1–15:14.
  • [3] R. Ahmed, K. Hamm, M. J. L. Jebelli, S. Kobourov, F. D. Sahneh and R. Spence (2019) Approximation algorithms and an integer program for multi-level graph spanners. arXiv:1904.01135.
  • [4] S. Alstrup, S. Dahlgaard, A. Filtser, M. Stöckel and C. Wulff-Nilsen (2017) Constructing light spanners deterministically in near-linear time. arXiv preprint.
  • [5] I. Althöfer, G. Das, D. Dobkin and D. Joseph (1990) Generating sparse spanners for weighted graphs. In SWAT 90, J. R. Gilbert and R. Karlsson (Eds.), Berlin, Heidelberg, pp. 26–37.
  • [6] A. Balakrishnan, T. L. Magnanti and P. Mirchandani (1994) Modeling and heuristic worst-case performance analysis of the two-level network design problem. Management Sci. 40 (7), pp. 846–867.
  • [7] M. Bern and P. Plassmann (1989) The Steiner problem with edge lengths 1 and 2. Inform. Process. Lett. 32 (4), pp. 171–176.
  • [8] G. Bodwin and V. V. Williams (2016) Better distance preservers and additive spanners. In Proceedings of the 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 855–872.
  • [9] J. Byrka, F. Grandoni, T. Rothvoß and L. Sanità (2013) Steiner tree approximation via iterative randomized rounding. J. ACM 60 (1), pp. 6:1–6:33.
  • [10] M. Charikar, J. Naor and B. Schieber (2004) Resource optimization in QoS multicast routing of real-time multimedia. IEEE/ACM Transactions on Networking 12 (2), pp. 340–348.
  • [11] M. Charikar, J. Naor and B. Schieber (2004) Resource optimization in QoS multicast routing of real-time multimedia. IEEE/ACM Trans. Networking 12 (2), pp. 340–348.
  • [12] M. Chlebík and J. Chlebíková (2008) The Steiner tree problem on graphs: inapproximability results. Theoret. Comput. Sci. 406 (3), pp. 207–214.
  • [13] D. Coppersmith and M. Elkin (2006) Sparse sourcewise and pairwise distance preservers. SIAM Journal on Discrete Mathematics 20 (2), pp. 463–501.
  • [14] M. Cygan, F. Grandoni and T. Kavitha (2013) On pairwise spanners. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013), LIPIcs, Vol. 20, Dagstuhl, Germany, pp. 209–220.
  • [15] D. Peleg and A. A. Schäffer (1989) Graph spanners. Journal of Graph Theory 13 (1), pp. 99–116.
  • [16] P. Erdős and A. Rényi (1959) On random graphs, I. Publicationes Mathematicae (Debrecen) 6, pp. 290–297.
  • [17] E. N. Gilbert and H. O. Pollak (1968) Steiner minimal trees. SIAM J. Appl. Math. 16 (1), pp. 1–29.
  • [18] M. Hauptmann and M. Karpiński (2013) A compendium on Steiner tree problems. Inst. für Informatik.
  • [19] R. M. Karp (1972) Reducibility among combinatorial problems. In Complexity of Computer Computations, R. E. Miller, J. W. Thatcher and J. D. Bohlinger (Eds.), pp. 85–103.
  • [20] M. Karpinski, I. I. Mandoiu, A. Olshevsky and A. Zelikovsky (2005) Improved approximation algorithms for the quality of service multicast tree problem. Algorithmica 42 (2), pp. 109–120.
  • [21] T. Kavitha (2017) New pairwise spanners. Theory of Computing Systems 61 (4), pp. 1011–1036.
  • [22] P. N. Klein (2006) A subset spanner for planar graphs, with application to subset TSP. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC), pp. 749–756.
  • [23] G. Kortsarz (2001) On the hardness of approximating spanners. Algorithmica 30 (3), pp. 432–450.
  • [24] B. Laekhanukit (2011) An improved approximation algorithm for minimum-cost subset k-connectivity. In International Colloquium on Automata, Languages, and Programming (ICALP), pp. 13–24.
  • [25] Y. Liu, T. Safavi, A. Dighe and D. Koutra (2018) Graph summarization methods and applications: a survey. ACM Computing Surveys 51 (3), pp. 62.
  • [26] M. Sigurd and M. Zachariasen (2004) Construction of minimum-weight spanners. In Algorithms – ESA 2004, S. Albers and T. Radzik (Eds.), Berlin, Heidelberg, pp. 797–808.
  • [27] D. A. Spielman and S. Teng (2011) Spectral sparsification of graphs. SIAM Journal on Computing 40 (4), pp. 981–1025.
  • [28] D. A. Spielman and S. Teng (2014) Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications 35 (3), pp. 835–885.
  • [29] B. Y. Wu and K. Chao (2004) Spanning trees and optimization problems. CRC Press.

Appendix A Integer Programming Formulation for Pairwise Spanners

Here we describe an ILP formulation which gives a minimum cost solution to the pairwise spanner problem with an arbitrary distortion function $f$. This may be used as an oracle subroutine in Algorithm 1.

Sigurd and Zachariasen [26] give an integer linear programming (ILP) formulation for the minimum cost pairwise $t$-spanner problem, where individual paths are decision variables (hence the ILP has exponentially many parameters in terms of the number of edges). In [3], a compact flow-based formulation for the minimum-cost pairwise $t$-spanner problem using polynomially many variables and constraints is given; however, it turns out that the ILP is valid for generic distortion functions. Let $P \subseteq V \times V$ be the subset of vertex pairs on which distances are desired to be preserved, and let $f$ be any function satisfying $f(u,v) \ge d_G(u,v)$ for all $(u,v) \in P$ (note the function need not be continuous). Let $x_e = 1$ if $e$ is included in the spanner, and $x_e = 0$ otherwise. Given $G = (V, E)$, let $G' = (V, E')$ be the bidirection graph of $G$ obtained by replacing every edge $e = \{u, v\}$ with two directed edges $(u, v)$ and $(v, u)$, each of weight $w_e$ (thus $|E'| = 2|E|$). Given a directed edge $(i, j) \in E'$ and an unordered pair of vertices $(u, v) \in P$, define indicator variables $x_{(i,j)}^{uv}$ by $x_{(i,j)}^{uv} = 1$ if edge $(i, j)$ is included in the selected $u$–$v$ path in the spanner, and 0 otherwise. Let $\mathrm{In}(v)$ and $\mathrm{Out}(v)$ denote the sets of incoming and outgoing edges of $v$ in $G'$, respectively.
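The bidirection step described above can be sketched directly; `bidirect` is our own helper name, and the weighted edge-list representation is an assumption for illustration:

```python
def bidirect(edges):
    """Replace each undirected weighted edge {u, v} with the two
    directed arcs (u, v) and (v, u), keeping the edge weight."""
    arcs = []
    for u, v, w in edges:
        arcs.append((u, v, w))
        arcs.append((v, u, w))
    return arcs

# |E'| = 2|E|: each undirected edge contributes two arcs.
arcs = bidirect([(0, 1, 2.5), (1, 2, 1.0)])
```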

Next we select a total order of all vertices so that the path constraints (9)–(10) are well-defined. In (8)–(12) we assume $u < v$ in the total order, so spanner paths are directed from $u$ to $v$. The ILP is as follows.

minimize $\sum_{e \in E} w_e x_e$ (7)

subject to

$\sum_{(i,j) \in E'} w_{(i,j)}\, x_{(i,j)}^{uv} \le f(u,v)$ for all $(u,v) \in P$ (8)
$\sum_{(i,j) \in \mathrm{In}(j)} x_{(i,j)}^{uv} - \sum_{(j,k) \in \mathrm{Out}(j)} x_{(j,k)}^{uv} = 0$ for all $(u,v) \in P$ and $j \in V \setminus \{u, v\}$ (9)
$\sum_{(u,j) \in \mathrm{Out}(u)} x_{(u,j)}^{uv} = 1$ and $\sum_{(j,v) \in \mathrm{In}(v)} x_{(j,v)}^{uv} = 1$ for all $(u,v) \in P$ (10)
$x_{(i,j)}^{uv} + x_{(j,i)}^{uv} \le x_e$ for all $(u,v) \in P$ and $e = \{i, j\} \in E$ (11)
$x_e \in \{0, 1\}$ and $x_{(i,j)}^{uv} \in \{0, 1\}$ for all $e \in E$, $(i,j) \in E'$, $(u,v) \in P$ (12)

The path variables $x_{(i,j)}^{uv}$ contribute $2|E|\,|P|$ binary variables in addition to the $|E|$ edge variables $x_e$, or $O(|E|\,|V|^2)$ variables in the full spanner problem where $P = V \times V$. Note that if $u$ and $v$ are connected by multiple paths of length at most $f(u,v)$ in the spanner, we need only set $x_{(i,j)}^{uv} = 1$ for edges along one such path. The following is the main theorem for this ILP.

{theorem}

Given a graph $G = (V, E)$, a subset $P \subseteq V \times V$, and any distortion function $f$ which satisfies $f(u,v) \ge d_G(u,v)$ for all $(u,v) \in P$, the solution to the ILP given by (7)–(12) is an optimally light (or sparse, if $G$ is unweighted) pairwise spanner for $P$ with distortion $f$.

Proof.

Let $H^*$ denote an optimal pairwise spanner of $G$ with distortion $f$, and let OPT denote the cost of $H^*$ (number of edges or total weight if $G$ is unweighted or weighted, respectively). Let $\mathrm{OPT}_{\mathrm{ILP}}$ denote the minimum cost of the objective function in the ILP (7). First we notice that from the minimum cost spanner $H^*$, a solution to the ILP can be constructed as follows: for each edge $e \in E(H^*)$, set $x_e = 1$. Then for each unordered pair $(u, v) \in P$ with $u < v$, compute a shortest path $\pi_{uv}$ from $u$ to $v$ in $H^*$, set $x_{(i,j)}^{uv} = 1$ for each directed edge $(i, j)$ along this path, and $x_{(i,j)}^{uv} = 0$ if $(i, j)$ is not on $\pi_{uv}$.

As each shortest path necessarily has cost at most $f(u,v)$, constraint (8) is satisfied. Constraints (9)–(10) are satisfied as $\pi_{uv}$ is a simple path. Constraint (11) also holds, as $\pi_{uv}$ cannot traverse the same edge twice in opposite directions. In particular, every edge in $H^*$ appears on some shortest path; otherwise, removing such an edge yields a pairwise spanner of lower cost. Hence $\mathrm{OPT}_{\mathrm{ILP}} \le \mathrm{OPT}$.

Conversely, an optimal solution to the ILP induces a feasible pairwise spanner with distortion $f$. Indeed, consider an unordered pair $(u, v) \in P$ with $u < v$, and the set of decision variables satisfying $x_{(i,j)}^{uv} = 1$. By (9) and (10), these edges form a simple path from $u$ to $v$. The sum of the weights of these edges is at most $f(u,v)$ by (8). Then by (11), the chosen edges corresponding to $(u, v)$ appear in the spanner $H$, which is induced by the set of edges with $x_e = 1$. Hence $\mathrm{OPT} \le \mathrm{OPT}_{\mathrm{ILP}}$.

Combining the above observations, we see that $\mathrm{OPT} = \mathrm{OPT}_{\mathrm{ILP}}$, and the proof is complete. ∎

In the multiplicative spanner case, the number of ILP variables can be significantly reduced (see [3] for more details). These reductions are somewhat specific to multiplicative spanners, and so it would be interesting to determine if other simplifications are possible for more general distortion.

Note that the distortion $f$ does not have to be continuous, which allows considerable flexibility in the types of pairwise spanners the above ILP can produce.
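The feasibility condition certified by the theorem (every pair in $P$ is connected within its distortion bound) can be checked directly on a candidate subgraph. The following is a minimal sketch using only the standard library; the helper names and the toy graph are our own, not the paper's:

```python
import heapq

def shortest_dist(edges, source, target):
    """Dijkstra over an undirected weighted edge list [(u, v, w), ...]."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def is_pairwise_spanner(subgraph_edges, pairs, f):
    """Check that for every (u, v) in pairs, the distance in the
    subgraph is at most the distortion bound f(u, v)."""
    return all(shortest_dist(subgraph_edges, u, v) <= f(u, v)
               for u, v in pairs)

# Toy check: a 4-cycle with one chord; multiplicative stretch t = 2
# relative to precomputed distances d_G in the original graph.
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
H = [(0, 1, 1.0), (1, 2, 1.0), (3, 0, 1.0)]  # a spanning tree of G
d_G = {(0, 2): 1.0, (0, 3): 1.0, (1, 3): 2.0}
ok = is_pairwise_spanner(H, d_G, lambda u, v: 2 * d_G[(u, v)])
# ok is True: every pair distance in H is within twice its d_G value.
```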

Appendix B Experiments

B.1 Setup

We use the Erdős–Rényi model [16] to generate random graphs. Given a number of vertices $n$ and a probability $p$, the $G(n, p)$ model assigns an edge to any given pair of vertices with probability $p$. An instance of $G(n, p)$ is connected with high probability when $p > \frac{(1+\varepsilon) \ln n}{n}$ [16]. For our experiments we allow $n$ to range from 5 to 300, and set $p$ above this connectivity threshold.
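The generation procedure can be sketched with the standard library alone; `gnp_random_graph` is our own helper name, not an external dependency:

```python
import random

def gnp_random_graph(n, p, seed=None):
    """Sample an Erdos-Renyi G(n, p) graph as an edge list.

    Each of the n*(n-1)/2 vertex pairs is included independently
    with probability p.
    """
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

# Example: n = 20 vertices with p comfortably above the
# (1 + eps) * ln(n) / n connectivity threshold.
edges = gnp_random_graph(20, 0.5, seed=1)
```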

For experimentation, we consider only the multiplicative graph spanner version of the MLAGS problem, hence we abbreviate this as MLGS; for similar experimental results on multi-level Steiner trees, see [2]. An instance of the MLGS problem is characterized by four parameters: the graph generator, the number of vertices $n$, the number of levels $\ell$, and the stretch factor $t$. As there is randomness involved, we generated 3 instances for every choice of parameters.

We generated MLGS instances with 1 to 6 levels ($1 \le \ell \le 6$), where terminals on each level $i$ are selected by randomly sampling vertices so that the sizes of the terminal sets decrease linearly with the level. As the terminal sets are nested, $T_i$ can be selected by sampling from $T_{i-1}$ (or from $V$ if $i = 1$). We used four different stretch factors in our experiments. Edge weights are randomly selected from a fixed range.
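The nested sampling described above can be sketched as follows; the exact linear size schedule is our own illustrative choice, not necessarily the one used in the experiments:

```python
import random

def nested_terminals(vertices, num_levels, seed=None):
    """Sample nested terminal sets T_1 >= T_2 >= ... >= T_l whose
    sizes decrease roughly linearly with the level."""
    rng = random.Random(seed)
    n = len(vertices)
    terminals = []
    pool = list(vertices)  # level 1 samples directly from V
    for level in range(1, num_levels + 1):
        # Target size shrinks linearly; level 1 is the largest set.
        size = max(1, round(n * (num_levels - level + 1) / (num_levels + 1)))
        size = min(size, len(pool))
        pool = rng.sample(pool, size)  # T_level is drawn from T_{level-1}
        terminals.append(set(pool))
    return terminals

levels = nested_terminals(range(30), 3, seed=7)
```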

B.2 Algorithms and outputs

We implemented several variants of Algorithm 1, which yield different results based on the rounding set as well as the single level approximation algorithm. In our experiments we used three setups for the rounding set: bottom-up (BU), top-down (TD), and composite (CMP), which selects the optimal set of levels as in Section 2. We used Python 3.5, and ran all experiments on the same high-performance computer (a Lenovo NeXtScale nx360 M5 system with 400 nodes). When using an oracle for single levels in Algorithm 1, we use the ILP formulation provided in Appendix A, solved with CPLEX 12.6.2.

For each instance of the MLGS problem, we compute the costs of the MLGS returned by the BU, TD, and CMP approaches, and also compute the minimum cost MLGS using the ILP in Appendix A. For the first set of experiments, we use the ILP as an oracle to find the minimum weight spanner for each level; in this case we refer to the results as Oracle BU, TD, and CMP. In the second set of experiments, we use the metric closure subsetwise spanner (Algorithm 3) as the single level subroutine, which we refer to as Metric Closure BU, TD, and CMP. We show the performance ratio for each heuristic on the $y$-axis (defined as the heuristic cost divided by OPT), and how the ratio depends on the input parameters (the number of vertices $n$, the number of levels $\ell$, and the stretch factor $t$).
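The metric closure subroutine referenced above can be sketched with repeated Dijkstra runs from each terminal; this is a stdlib sketch of the general idea, not the paper's exact Algorithm 3, and the helper names are ours:

```python
import heapq
from collections import defaultdict

def dijkstra(adj, source):
    """Shortest-path distances from source in a weighted graph given
    as an adjacency dict: adj[u] = [(v, w), ...]."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def metric_closure(adj, terminals):
    """Complete weighted graph on the terminals, with edge weights
    equal to shortest-path distances in the original graph."""
    closure = {}
    for s in terminals:
        dist = dijkstra(adj, s)
        for t in terminals:
            if t != s and t in dist:
                closure[frozenset((s, t))] = dist[t]
    return closure

# Toy example: path graph 0 - 1 - 2 - 3 with unit weights.
adj = defaultdict(list)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u].append((v, 1.0))
    adj[v].append((u, 1.0))
closure = metric_closure(adj, [0, 2, 3])
# d(0,2) = 2, d(0,3) = 3, d(2,3) = 1
```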

Finally, we discuss the running times of the algorithms. All box plots show the minimum, interquartile range, and maximum, aggregated over all instances sharing the value of the parameter being compared.

B.3 Results

Figures 2–5 show the results of the Oracle TD, BU, and CMP approaches. We show the impact of the different parameters (the number of vertices $n$, the number of levels $\ell$, and the stretch factor $t$) using line plots for the three approaches separately in Figures 2–4. Figure 5 shows the performance of the three variants together in box plots. In Figure 2 we can see that all variants perform better as the size of the vertex set increases. Figure 3 shows that all variants perform worse as the number of levels increases. In Figure 4 we see that, in general, performance decreases as the stretch factor increases.

(a) Bottom up
(b) Top down
(c) Composite
Figure 2: Performance of oracle bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the number of vertices. Ratio is defined as the cost of the returned MLGS divided by OPT.
(a) Bottom up
(b) Top down
(c) Composite
Figure 3: Performance of oracle bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the number of levels.
(a) Bottom up
(b) Top down
(c) Composite
Figure 4: Performance of oracle bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the stretch factors.
Figure 5: Performance of oracle bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the number of vertices, the number of levels, and the stretch factors.

The most time consuming part of the experiment is the execution of the ILP for solving MLGS instances optimally. Hence, we first show the running times of the exact solution of the MLGS instances in Figure 6 with respect to the number of vertices $n$, the number of levels $\ell$, and the stretch factor $t$. For all parameters, the running time tends to increase with the size of the parameter; in particular, the running time with stretch factor 4 (Fig. 6, right) was much worse. We can reduce the size of the ILP by removing some constraints using techniques discussed in [3]; however, these size reduction techniques become less effective as the stretch factor increases. We show the running times of computing the Oracle bottom-up, top-down, and composite solutions in Figure 7. Notice that, although the running time of composite should be the worst, top-down sometimes takes more time. The reason is an additional edge-pruning step performed after computing each subsetwise spanner: in top-down, every level incurs this pruning step, which adds computation time, especially when the graph is large.

Figure 6: Experimental running times for computing exact solutions w.r.t. the number of vertices, the number of levels, and the stretch factors.
Figure 7: Experimental running times for computing oracle bottom-up, top-down and combined solutions w.r.t. the number of vertices, the number of levels, and the stretch factors.

The ILP is too computationally expensive for larger input sizes, and this is where the heuristics can be particularly useful. We now consider a similar experiment using the metric closure algorithm to compute subsetwise spanners, as described in Section 3. We show the impact of the different parameters in Figures 8–10. Figure 11 shows the performance of the three algorithms together in box plots. We can see that the heuristics perform very well in practice.

(a) Bottom up
(b) Top down
(c) Composite
Figure 8: Performance of heuristic bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the number of vertices.
(a) Bottom up
(b) Top down
(c) Composite
Figure 9: Performance of heuristic bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the number of levels.
(a) Bottom up
(b) Top down
(c) Composite
Figure 10: Performance of heuristic bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the stretch factors.
Figure 11: Performance of heuristic bottom-up, top-down and composite on Erdős–Rényi graphs w.r.t. the number of vertices, the number of levels, and the stretch factors.

Our final experiments test the heuristic performance on a set of larger graphs, generated using the Erdős–Rényi model with larger values of $n$. We evaluated more levels, with the same stretch factors as before. Here, the ratio is determined by dividing the BU, TD, and CMP costs by min(BU, TD, CMP), as computing the optimal MLGS would be too time consuming. Figure 12 shows the performance of the bottom-up, top-down, and composite algorithms with respect to $n$, $\ell$, and $t$. Figure 13 shows the aggregated running times per instance, which worsen significantly as $n$ increases. The results indicate that while running times increase with larger input graphs, the number of levels and the stretch factors seem to have little impact on performance. Notably, when the metric closure algorithm is used in place of the ILP for the single level solver (Fig. 13), the running times decrease for larger stretch factors.

Figure 12: Performance of heuristic bottom-up, top-down and composite on large Erdős–Rényi graphs w.r.t. the number of vertices, the number of levels, and the stretch factors. The ratio is determined by dividing each cost by the objective value of the combined (min(BU, TD, CMP)) heuristic.
Figure 13: Experimental running times for computing heuristic bottom-up, top-down and composite solutions on large Erdős–Rényi graphs w.r.t. the number of vertices, the number of levels, and the stretch factors.