Fixed-Parameter Approximations for k-Center Problems in Low Highway Dimension Graphs

I would like to thank Jochen Könemann for reading an early draft of this paper. Also I would like to thank two anonymous reviewers for their insightful remarks that helped to greatly improve the paper. A preliminary version appeared at the 42nd International Colloquium on Automata, Languages, and Programming (ICALP 2015). This work was supported by ERC Starting Grant PARAMTIGHT (No. 280152), project CEITI (GAČR no. P202/12/G061) of the Czech Science Foundation, and by the Center for Foundations of Modern Computer Science (Charles Univ. project UNCE/SCI/004).
Abstract
We consider the k-Center problem and some generalizations. For k-Center a set of k center vertices needs to be found in a graph G with edge lengths, such that the maximum distance from any vertex of G to its nearest center is minimized. This problem naturally occurs in transportation networks, and therefore we model the inputs as graphs with bounded highway dimension, as proposed by Abraham et al. [SODA 2010].
We show both approximation and fixed-parameter hardness results, and how to overcome them using fixed-parameter approximations, where the two paradigms are combined. In particular, we prove that for any ε > 0 computing a (2 − ε)-approximation is W[2]-hard for parameter k, and NP-hard for graphs with highway dimension O(log² n). The latter does not rule out fixed-parameter approximations for the highway dimension parameter h, but implies that such an algorithm must have at least doubly exponential running time in √h if it exists, unless ETH fails. On the positive side, we show how to get below the approximation factor of 2 by combining the parameters k and h: we develop a fixed-parameter 3/2-approximation with running time 2^{O(kh log h)} · n^{O(1)}. Additionally we prove that, unless P = NP, our techniques cannot be used to compute fixed-parameter (2 − ε)-approximations for only the parameter h.
We also provide similar fixed-parameter approximations for the weighted k-Center and (k, F)-Partition problems, which generalize k-Center.
1 Introduction
In this paper we consider the k-Center problem and some of its generalizations. For the problem, k locations need to be found in a network, so that every node in the network is close to a location. More formally, the input is specified by an integer k and a graph G = (V, E) with positive edge lengths. A feasible solution to the problem is a set of centers C ⊆ V such that |C| ≤ k. The aim is to minimize the maximum distance between any vertex and its closest center. That is, let dist(u, v) denote the shortest-path distance between two vertices u and v of G according to the edge lengths, and let B_v(ρ) = {u ∈ V : dist(u, v) ≤ ρ} be the ball of radius ρ around v. We need to minimize the cost of the solution C, which is the smallest value ρ for which ⋃_{v ∈ C} B_v(ρ) = V. We say that a center v ∈ C covers a vertex u if u ∈ B_v(ρ). Hence we can see the problem as finding k centers covering all vertices of G with balls of minimum radius.
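The cost of a candidate center set can be computed with a multi-source Dijkstra search, as in the following Python sketch (the adjacency-list encoding and the function names are illustrative choices of ours, not taken from the paper):

```python
import heapq

def dijkstra(adj, sources):
    """Multi-source Dijkstra: distance from every vertex to its nearest source.
    `adj` maps each vertex to a list of (neighbour, edge_length) pairs."""
    dist = {v: float("inf") for v in adj}
    for s in sources:
        dist[s] = 0.0
    pq = [(0.0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def center_cost(adj, centers):
    """Cost of a k-Center solution: the smallest radius rho such that the
    balls of radius rho around the centers cover every vertex."""
    dist = dijkstra(adj, centers)
    return max(dist.values())
```

With this helper, checking feasibility of a guessed radius amounts to comparing `center_cost` against the guess.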
The k-Center problem naturally arises in transportation networks, where, for instance, it models the need to find locations for manufacturing plants, hospitals, police stations, or warehouses under a budget constraint. Unfortunately it is NP-hard to solve the problem in general [29], and the same holds true in various models for transportation networks, such as planar graphs [28] and metrics using Euclidean (L2), Manhattan (L1), or Chebyshev (L∞) distance measures [14]. A more recent model for transportation networks uses the highway dimension, which was introduced as a graph parameter by Abraham et al. [1]. The intuition behind its definition comes from the empirical observation [7, 8] that in a road network, starting from any point and travelling to a sufficiently far point along the quickest route, one is bound to pass through some member of a sparse set of “access points”. There are several formal definitions for the highway dimension that differ slightly [1, 3, 2, 16]. All of them, however, imply the existence of locally sparse shortest path covers. Therefore, in this paper we consider this as a generalization of the original highway dimension definitions.
Definition 1.
Given a graph G with edge lengths and a scale r > 0, let P_r contain all vertex sets given by shortest paths in G of length more than r and at most 2r. A shortest path cover SPC(r) ⊆ V is a hitting set for the set system P_r, i.e., P ∩ SPC(r) ≠ ∅ for each P ∈ P_r. We call the vertices in SPC(r) hubs. A hub set is called locally h-sparse if for every vertex v ∈ V the ball of radius 2r around v contains at most h vertices from SPC(r). The highway dimension of G is the smallest integer h such that there is a locally h-sparse shortest path cover SPC(r) for every scale r in G.
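The two conditions of this definition — the hitting-set property and local sparsity — can be checked directly once the shortest paths and pairwise distances are available. The following is a naive sketch; the input encoding and the function names are our own:

```python
def hits_all(paths, hubs, r):
    """Hitting-set check: every shortest path of length in (r, 2r] must
    contain a hub.  `paths` is a list of (vertex_list, length) pairs
    enumerating the shortest paths of the graph."""
    return all(set(p) & hubs
               for p, length in paths if r < length <= 2 * r)

def local_sparsity(dist, hubs, r):
    """Largest number of hubs in any ball of radius 2r, given an
    all-pairs distance table `dist`.  The hub set is locally h-sparse
    if and only if this value is at most h."""
    return max(sum(1 for z in hubs if dist[v][z] <= 2 * r) for v in dist)
```

Such checks are only meant to illustrate the definition; the algorithms later in the paper never enumerate all shortest paths explicitly.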
Abraham et al. [1] introduced the highway dimension in order to explain the fast running times of various shortest-path heuristics. However, they also note that “conceivably, better algorithms for other [optimization] problems can be developed and analysed under the small highway dimension assumption”. In this paper we investigate the k-Center problem and focus on graphs with low highway dimension as a model for transportation networks. One advantage of using such graphs is that they not only capture road networks but also networks with transportation links given by air traffic or railroads. For instance, introducing connections due to airplane traffic will render a network non-planar, while it can still be argued to have low highway dimension: longer flight connections tend to be served by bigger but sparser airports, which act as hubs. This can, for instance, be of interest in applications where warehouses need to be placed to store and redistribute goods of globally operating enterprises. Unfortunately however, in this paper we show that the k-Center problem also remains NP-hard on graphs with low highway dimension.
Two popular and well-studied ways of coping with NP-hard problems are to devise approximation [29, 30] and parameterized [13, 11] algorithms. For the former we demand polynomial running times but allow the computed solution to deviate from the optimum cost. That is, we compute an α-approximation, which is a feasible solution with a cost that is at most α times worse than the best possible for the given instance. A problem that allows a polynomial-time α-approximation for any input is α-approximable, and α is called the approximation factor of the corresponding algorithm. The rationale behind parameterized algorithms is that some parameter k of the input is small and we can therefore afford running times that are super-polynomial in k, while, however, we demand optimum solutions. That is, we compute a solution with optimum cost in time f(k) · n^{O(1)} for some computable function f that is independent of the input size n. A problem that has a fixed-parameter algorithm for a parameter k is called fixed-parameter tractable (FPT) for k. What, however, if a problem is neither satisfactorily approximable nor FPT? In this case it may be possible to overcome the complexity by combining these two paradigms. In particular, the objective becomes to develop fixed-parameter α-approximation (FPA) algorithms that compute an α-approximation in time f(k) · n^{O(1)} for a parameter k.
The idea of combining the paradigms of approximation and fixed-parameter tractability has been suggested before. However, only few results are known in this setting (cf. [26]). In this paper we show that for the k-Center problem it is possible to overcome lower bounds for its approximability and its fixed-parameter tractability using parameterized approximations. For many different input classes, such as planar graphs [28], and L1 and L∞ metrics [14], the k-Center problem is 2-approximable via the algorithm for general metrics of Hochbaum and Shmoys [19], but not (2 − ε)-approximable for any ε > 0, unless P = NP. We show that, unless FPT = W[2], for general graphs there is no (2 − ε)-FPA algorithm for the parameter k. Additionally, we prove that, unless P = NP, k-Center is not (2 − ε)-approximable on graphs with highway dimension O(log² n). This does not rule out FPA algorithms for the highway dimension parameter h, and we leave this as an open problem. However, the result implies that if such an algorithm exists, then its running time must be enormous. In particular, unless the exponential time hypothesis (ETH) [21, 22] fails, there can be no (2 − ε)-FPA algorithm whose running time is less than doubly exponential in √h.
In the face of these hardness results, it seems tough to beat the approximation factor of 2 for k-Center, even when considering fixed-parameter approximations for either the parameter k or the highway dimension h. Our main result, however, is that we can obtain a significantly better approximation factor for k-Center when combining these two parameters. Such an algorithm is useful when aiming for high-quality solutions, for instance, in a setting where only few warehouses should be built in a transportation network, since warehouses are expensive or stored goods should not be too dispersed for logistical reasons.
It is known [2] that locally O(h log h)-sparse shortest path covers can be computed in polynomial time for graphs of highway dimension h, if each shortest path is unique. We will assume that the latter is always the case, since we can slightly perturb the edge lengths. In particular, using a folklore method we may distort distances such that any α-approximation in the perturbed instance also is an α-approximation in the original instance. In the following theorem summarizing our main result, the first given running time assumes such approximate shortest path covers. In general it is NP-hard to compute the highway dimension [16], but it is unknown whether this problem is FPT. If this is the case and the running time is sufficiently small, such an algorithm can be used as an oracle in our algorithm.
Theorem 1.
For any graph with n vertices and highway dimension h, there is an algorithm that computes a 3/2-approximation to the k-Center problem in time 2^{O(kh log h)} · n^{O(1)}. If locally h-sparse shortest path covers are given by an oracle, the running time is 2^{O(kh)} · n^{O(1)}.
We leave open whether approximation factors better than 3/2 can be obtained for the combined parameter (k, h). It was recently proved [15] that k-Center is W[1]-hard for this combined parameter, but no inapproximability is implied by this result. We note that a recent result by Becker et al. [10] obtains a fixed-parameter approximation scheme for k-Center in low highway dimension graphs, i.e., an algorithm computing a (1 + ε)-approximation in time f(k, h, ε) · n^{O(1)} for any ε > 0. However, this result needs a more restrictive definition of the highway dimension than used in this paper. In particular, there are graphs that have bounded highway dimension according to the definition used here, but for which the algorithm by Becker et al. [10] is not applicable (for a more detailed discussion on the relation between different definitions of highway dimension we refer to [16, Section 9]). Although we also leave open whether FPA algorithms exist for the parameter h alone, we are able to prove that the techniques we use for Theorem 1 cannot omit using both k and h as parameters. To obtain a (2 − ε)-FPA algorithm with running time f(h) · n^{O(1)} for any function f independent of n and k, a lot more information of the input would need to be exploited than the algorithm of Theorem 1 does. To explain this, we now turn to the used techniques.
1.1 Used techniques
A crucial observation for our algorithm is that at any scale r, a graph of low highway dimension is structured in the following way (see Figure 1). We will prove that the vertices are either at distance at most r from some hub, or they lie in clusters of diameter at most r that are at distance more than 2r from each other. Hence, for the cost ρ of the optimum k-Center solution, at scale ρ/2 a center that resides in a cluster cannot cover any vertices of some other cluster. In this sense the clusters are “independent” of each other. At the same time we are able to bound the number of hubs of scale ρ/2 in terms of k and the highway dimension. Roughly, this is comparable to graphs with small vertex cover, since the vertices that are not part of a vertex cover form an independent set. In this sense the highway dimension is a generalization of the vertex cover number (this is in fact the reason why computing the highway dimension is NP-hard [16]).
At the same time the k-Center problem is a generalization of the Dominating Set problem. This problem is W[2]-hard [13], which, as we will show, is also why k-Center is W[2]-hard to (2 − ε)-approximate for parameter k. However, Dominating Set is FPT using the vertex cover number as the parameter [5]. This is one of the reasons why combining the two parameters k and h yields an FPA algorithm for k-Center. In fact the similarity seems so striking at first that one is tempted to reduce the problem of finding a 3/2-approximation for k-Center on low highway dimension graphs to solving Dominating Set on a graph of low vertex cover number. However, it is unclear how this can be made to work. Instead we devise an involved algorithm that is driven by the intuition that the two problems are similar.
The algorithm will guess the cost ρ of the optimum solution in order to exploit the structure of the graph given by the locally sparse shortest path cover SPC(ρ/2) for scale ρ/2. In particular, the shortest path covers of other scales do not need to be locally sparse in order for the algorithm to succeed. We will show that there are graphs for which k-Center is not (2 − ε)-approximable, unless P = NP, and for which the shortest path cover for scale ρ/2 is locally sparse. Hence our techniques, which only consider the shortest path cover of scale ρ/2, cannot yield a (2 − ε)-FPA algorithm for the parameter h. The catch is though that the reduction produces graphs which do not have locally sparse shortest path covers for scales significantly larger than ρ/2. Hence an FPA algorithm for parameter h might still exist. However, such an algorithm would have to take larger scales into account than just ρ/2, and, as mentioned above, it would have to have at least doubly exponential running time in √h.
Proving that no (2 − ε)-FPA algorithm for parameter k exists for k-Center, unless FPT = W[2], is straightforward given the original reduction of Hsu and Nemhauser [20] from the W[2]-hard Dominating Set problem. For the highway dimension parameter h, however, we develop some more advanced techniques. For the reduction we show how to construct a graph of low highway dimension given a metric of low doubling dimension (see Section 4 for a formal definition), so that distances between vertices are approximately preserved. The doubling dimension [18] is a parameter that captures the bounded volume growth of metrics, such as those given by Euclidean and Manhattan distances. Since k-Center is not (2 − ε)-approximable in L∞ metrics [14], unless P = NP, and these have constant doubling dimension, we are able to conclude that the hardness translates to graphs of highway dimension O(log² n).
1.2 Generalizations
In addition to k-Center, in Section 5 we obtain similar positive results for two generalizations of the problem by appropriately modifying our techniques. For the weighted k-Center problem, the vertices have integer weights and the objective is to choose centers of total weight at most k to cover all vertices with balls of minimum radius. This problem is 3-approximable [19, 29] and no better approximation factor is known. However, we are able to modify our techniques to obtain an FPA algorithm with a better factor for the combined parameter (k, h).
An alternative way to define the k-Center problem is in terms of finding a star cover of size k in a metric, where the cost of the solution is the longest star edge in the solution. More generally, in their seminal work Hochbaum and Shmoys [19] defined the (k, F)-Partition problem. Here a family F of (unweighted) graphs is given and the aim is to partition the vertices of a metric into k sets and connect the vertices of each set by a graph from the family F. The solution cost is measured by the “bottleneck”, which is the longest distance between any two vertices of the metric that are connected by an edge in a graph from the family F. The case when F contains only stars is exactly the k-Center problem, given the shortest-path metric as input. The (k, F)-Partition problem is 2δ-approximable [19], where δ is the largest diameter of any graph in F. We show that an FPA algorithm for the combined parameter (k, h) exists, whose approximation factor depends on the largest radius of any graph in F instead of on δ. Hence for graph families in which the radius is small compared to the diameter this improves on the general algorithm by Hochbaum and Shmoys [19]. This is for example the case when F contains “stars of paths”, i.e., stars for which each edge is replaced by a path of length at most t. The diameter of such a graph can be 2t, while the radius is at most t, and hence at most half the diameter.
1.3 Related work
Given its applicability to various problems in transportation networks, but also in other contexts such as image processing and data compression, the k-Center problem has been extensively studied in the past. We only mention closely related results here that were not mentioned before. For the parameters clique-width and treewidth, Katsikarelis et al. [23] show that k-Center is W[1]-hard, but they also give fixed-parameter approximation schemes for each of these parameters. For the treedepth parameter, they show that the problem is FPT. For unweighted planar and map graphs the k-Center problem is FPT [12] for the combined parameter (k, ρ), where ρ is the cost of the optimum solution. Note though that k and ρ are somewhat opposing parameters in the sense that typically if k is small then ρ will be large, and vice versa. A very recent result [24] gives an efficient polynomial-time approximation scheme (EPTAS) for k-Center on weighted planar graphs, which approximates both the optimum cost and the number of centers k. That is, in time f(ε) · n^{O(1)} the algorithm computes a (1 + ε)-approximation that uses at most (1 + ε)k centers, for any ε > 0. Interestingly, this immediately implies a fixed-parameter approximation scheme for parameters k and ε on weighted planar graphs: setting the accuracy to a value smaller than 1/k forces the algorithm to compute a solution with at most k centers (since the number of centers is an integer), while the cost is within a (1 + ε) factor of the optimum. Marx and Pilipczuk [27] prove that in planar graphs an optimum k-Center solution can be computed in time n^{O(√k)}. On the other hand, a recent result [15] shows that k-Center is W[1]-hard in planar graphs with constant doubling dimension, for the combined parameter (k, h, t), where h is the highway dimension and t the treewidth of the input graph. Thus this problem remains hard, even when assuming that it abides to all aforementioned models of transportation networks at once. For geometric L_p metrics a (1 + ε)-FPA algorithm for the combined parameter (k, ε, d) can be obtained [4], where d is the dimension of the geometric space. This can also be generalized [15] to an FPA algorithm for the combined parameter (k, ε, d), where d is the doubling dimension.
Abraham et al. [1] introduced the highway dimension, and study it in several papers [1, 3, 2]. Their main interest is in explaining the good performance of various shortest-path heuristics assuming low highway dimension. In [2] they show that a locally O(h log h)-sparse shortest path cover can be computed in polynomial time for any scale r if the highway dimension of the input graph is h, and each shortest path is unique. Feldmann et al. [16] consider computing approximations for various other problems that naturally arise in transportation networks. They show that quasi-polynomial time approximation schemes can be obtained for problems such as Travelling Salesman, Steiner Tree, or Facility Location, if the highway dimension is constant. For this however a more restrictive definition of the highway dimension than used here is needed (see [16, Section 9] for more details). The algorithms are obtained by probabilistically embedding a low highway dimension graph into a bounded treewidth graph while introducing arbitrarily small distortions of distances. Known algorithms to compute optimum solutions on low treewidth graphs then imply the approximation schemes. It is interesting to note that this approach does not work for the k-Center problem since, in contrast to the above mentioned problems, its objective function is not linear in the edge lengths. As noted before however, a recent result by Becker et al. [10] obtains a fixed-parameter approximation scheme for k-Center for the combined parameter (k, h, ε) using a deterministic embedding, building on the results in [16]. But again, for this the more restrictive definition of highway dimension also used in [16] is needed. The only other theoretical results on the highway dimension that we are aware of at this point are by Bauer et al. [9] and by Kosowski and Viennot [25]. Bauer et al. [9] show that for any graph G there exist edge lengths such that the highway dimension is lower bounded in terms of the pathwidth of G.
Kosowski and Viennot [25] introduce the skeleton dimension of a graph and compare it to the highway dimension in the context of shortest path heuristics.
2 k-Center and highway dimension versus Dominating Set and vertex covers
We begin by observing that the vertices of a low highway dimension graph are highly structured for any scale r: the vertices that are far from any hub of a shortest path cover SPC(r) for scale r are clustered into sets of small diameter and large inter-cluster distance (see Figure 1). A similar observation was already made in [16], where clusters were called towns. We need a slightly different definition of clusters than in [16] however, which is why we do not use the same terminology here. For a set W ⊆ V let dist(v, W) = min{dist(v, u) : u ∈ W} be the shortest-path distance from v to the closest vertex in W.
Definition 2.
Fix a scale r and a shortest path cover SPC(r) for scale r in a graph G. We call an inclusion-wise maximal set C ⊆ V with dist(u, v) ≤ r and dist(v, SPC(r)) > r for all u, v ∈ C a cluster, and we denote the set of all clusters by 𝒞. The non-cluster vertices are those which are not contained in any cluster of 𝒞.
Note that the set 𝒞 is specific to the scale r and the hub set SPC(r). The following lemma summarizes the structure of the clusters and non-cluster vertices. Here we let dist(W1, W2) = min{dist(u, v) : u ∈ W1, v ∈ W2} be the minimum distance between vertices of two sets W1 and W2.
Lemma 1.
Let 𝒞 be the cluster set for a scale r and a shortest path cover SPC(r). For each non-cluster vertex v we have dist(v, SPC(r)) ≤ r. The diameter of any cluster C ∈ 𝒞 is at most r, and for any distinct pair of clusters C1, C2 ∈ 𝒞 we have dist(C1, C2) > 2r.
Proof.
The first two claims follow immediately from the definition of the clusters. For the third claim let W = {v ∈ V : dist(v, SPC(r)) > r}, so that any cluster is a subset of W. We first argue that there are no vertices u, v ∈ W for which r < dist(u, v) ≤ 2r. If these existed, by the definition of the shortest path cover there would be a hub z ∈ SPC(r) hitting the shortest path between them. However, this path would have length more than 2r since u and v are both at distance more than r from z, contradicting our assumption that dist(u, v) ≤ 2r.
As a consequence, for any three vertices u, v, w ∈ W with dist(u, v) ≤ r and dist(v, w) ≤ r we have dist(u, w) ≤ 2r, and since we know that no distance between vertices of W lies in (r, 2r], this implies that in fact dist(u, w) ≤ r. Hence the relation of being at distance at most r in W is transitive, and it is obviously also symmetric and reflexive, i.e., it is an equivalence relation on W. Moreover, any two vertices u, v ∈ W that do not belong to the same equivalence class, i.e. with dist(u, v) > r, must be at distance more than 2r, as no distance between vertices of W lies in (r, 2r]. By the definition of clusters the clusters are exactly the equivalence classes of W, and so dist(C1, C2) > 2r for any two distinct clusters C1, C2 ∈ 𝒞. ∎
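The proof suggests a direct way to compute the cluster set: take W, the vertices at distance more than r from every hub, and split it into the classes of the “distance at most r” relation, which the lemma shows to be an equivalence relation. A Python sketch, assuming a precomputed all-pairs distance table (the naming is ours):

```python
from itertools import combinations

def clusters(dist, hubs, r):
    """Partition the vertices far from all hubs into clusters:
    W = {v : dist(v, SPC(r)) > r}, and the clusters are the classes of
    the 'distance at most r' equivalence relation on W."""
    W = [v for v in dist if min(dist[v][z] for z in hubs) > r]
    # union-find over W, merging vertices at distance at most r
    parent = {v: v for v in W}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in combinations(W, 2):
        if dist[u][v] <= r:
            parent[find(u)] = find(v)
    classes = {}
    for v in W:
        classes.setdefault(find(v), set()).add(v)
    return list(classes.values())
```

The quadratic pairwise loop is enough for an illustration; by the lemma, merging along any chain of distance-r steps yields the same classes.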
A vertex cover is a subset W ⊆ V of vertices such that every edge is incident to some vertex of W. In particular, if all edges have unit length, then for any scale r ∈ [1/2, 1) the set P_r contains exactly the edges of the graph, so that a shortest path cover SPC(r) is a vertex cover. Hence shortest path covers are generalizations of vertex covers. A dominating set is a subset D ⊆ V of vertices such that every vertex either belongs to D or is adjacent to some vertex of D. In a graph with unit edge lengths, a feasible k-Center solution of cost 1 is a dominating set of size at most k. In this sense the k-Center problem is a generalization of the Dominating Set problem, for which a dominating set of minimum size needs to be found. The Dominating Set problem is W[2]-hard [13] for its canonical parameter (i.e., the size of the optimum dominating set), but it is FPT [5] for the parameter given by the vertex cover number, which is the size of the smallest vertex cover of a given graph. As the following simple lemma shows, if ρ is the cost of the optimum k-Center solution, the number of hubs of the shortest path cover SPC(ρ/2) is bounded in terms of k and the local sparsity of SPC(ρ/2). Thus our setting generalizes the Dominating Set problem on graphs with bounded vertex cover number. It is interesting to note that in contrast to the Dominating Set problem being FPT for the vertex cover number [5], our more general setting is W[1]-hard [15].
Lemma 2.
Let ρ be the optimum cost of the k-Center problem in a given instance G. If a shortest path cover SPC(ρ/2) of G for scale ρ/2 is locally h-sparse, then |SPC(ρ/2)| ≤ hk.
Proof.
The optimum k-Center solution covers the whole graph with k balls of radius ρ each. By the local sparsity of the hub set there are at most h hubs of SPC(ρ/2) in each such ball. ∎
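The counting argument of the proof can be written out as a one-line calculation in the notation introduced above:

```latex
\[
  |\mathrm{SPC}(\rho/2)|
    \;=\; \Big|\bigcup_{c \in C^*} \big(B_c(\rho) \cap \mathrm{SPC}(\rho/2)\big)\Big|
    \;\le\; \sum_{c \in C^*} \big|B_c(\rho) \cap \mathrm{SPC}(\rho/2)\big|
    \;\le\; hk,
\]
```

since V = ⋃_{c ∈ C*} B_c(ρ) for the optimum center set C* with |C*| ≤ k, and local h-sparsity bounds the number of hubs in each ball B_c(ρ) = B_c(2 · ρ/2) by h.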
We are able to exploit this intuition for our algorithm in Section 3. On a high level, our algorithm follows the lines of the following simple procedure to solve Dominating Set on graphs with bounded vertex cover number. As a subroutine we will solve an instance of the Set Cover problem, for which a collection S of subsets of a universe U is given together with a subset R ⊆ U of the universe.¹ A set cover for R is a collection of the sets in S covering R, i.e., a subcollection S' ⊆ S with R ⊆ ⋃_{S ∈ S'} S. The aim is to compute a minimum-sized set cover for the given R. Given an input graph G and a vertex cover W of small size (which can, for instance, be an approximation), we perform the following three steps, in each of which we find a respective subset D1, D2, D3 of the optimum dominating set D of G. (¹ Usually R = U, but for convenience we define the problem in slightly more general form here.)

1. Guess the subset D1 = D ∩ W of vertices in the vertex cover that belong to the dominating set D.

2. Since the vertices not in the vertex cover form an independent set, any vertex of V ∖ W which is not adjacent to a vertex in D1 must be in D. Thus we can let D2 consist of all such vertices from V ∖ W.

3. By our choice of D2, if there are any vertices left that are not dominated by D1 ∪ D2, they must be in W. Furthermore these vertices must be adjacent to some vertices of V ∖ W contained in D, by our choice of D1. We can thus solve an instance of Set Cover, where R is given by the subset of vertices in W that are not dominated by D1 ∪ D2, and the set system S is given by the neighbourhoods of vertices in V ∖ W restricted to W. The remaining set D3 consists of the vertices of V ∖ W whose neighbourhoods form a smallest solution of this Set Cover instance.
For the first step of the above algorithm there are 2^{|W|} possible guesses for D1. For each such guess, the second step can be performed in polynomial time. For the third step we need to solve Set Cover for an instance with a small universe U = W. This can be done in time 2^{|U|} · (|U| + |S|)^{O(1)} using the algorithm of Fomin et al. [17]. Since in our case U = W and |S| ≤ n, this amounts to a running time of 2^{|W|} · n^{O(1)} per guess. This Set Cover algorithm is based on dynamic programming. During its execution the smallest set cover for every subset of the universe is computed, and these optimum solutions are stored in a table. Therefore, instead of running an algorithm for Set Cover for each guess of D1 in the third step above, we may run the algorithm of Fomin et al. [17] only once beforehand: we set the universe to all of W, and the set system will contain all neighbourhood sets of vertices in V ∖ W restricted to W. This way the needed optimum solution for the corresponding subset of W can be retrieved in constant time in the third step of our procedure. As we need to retrieve an optimum set cover for every guess of D1, this improves the overall running time, which is now 2^{|W|} · n^{O(1)}.
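The table-filling step can be sketched as a textbook dynamic program over subsets: for every bitmask m of the universe it records the minimum number of sets covering m. This is a generic stand-in for the algorithm of Fomin et al. [17], not their actual implementation:

```python
def set_cover_table(universe, sets):
    """For EVERY subset of `universe` (encoded as a bitmask) compute the
    minimum number of sets from `sets` covering it, in time 2^{|U|} * poly.
    Returns the DP tables plus the bitmask encoding of the sets."""
    index = {u: i for i, u in enumerate(universe)}
    masks = [sum(1 << index[u] for u in s if u in index) for s in sets]
    INF = float("inf")
    best = [INF] * (1 << len(universe))
    choice = [-1] * (1 << len(universe))  # last set used to cover this mask
    best[0] = 0
    for m in range(1, 1 << len(universe)):
        for j, sm in enumerate(masks):
            if sm & m and best[m & ~sm] + 1 < best[m]:
                best[m] = best[m & ~sm] + 1
                choice[m] = j
    return best, choice, masks, index

def recover_cover(best, choice, masks, m):
    """Indices of an optimum cover for the subset encoded by mask m."""
    sol = []
    while m and choice[m] != -1:
        j = choice[m]
        sol.append(j)
        m &= ~masks[j]
    return sol
```

After one call to `set_cover_table`, an optimum cover for any subset R of the universe is looked up from `best`/`choice` without re-running the exponential computation, which is exactly the precomputation trick described above.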
In our k-Center algorithm we will use the same method of precomputing a table containing all optimum Set Cover solutions for subsets of a universe. We summarize the properties of the needed Set Cover algorithm in the following theorem.

Theorem 2.
For any Set Cover instance with universe U and set system S, optimum set covers for all subsets R ⊆ U can be computed and stored in a table in total time 2^{|U|} · (|U| + |S|)^{O(1)}.
3 The fixed-parameter approximation algorithm
We begin with a brief high-level description of the algorithm. As observed in Section 2, we can think of solving k-Center in a low highway dimension graph as a generalization of solving Dominating Set in a graph with bounded vertex cover number. Our algorithm (see Algorithm 1) is driven by this intuition. After guessing the optimum k-Center cost ρ and computing SPC(ρ/2) together with its cluster set 𝒞, we will see how the algorithm computes three approximate center sets C1, C2, and C3 (analogous to the three respective sets D1, D2, D3 for Dominating Set). For the first set the algorithm guesses a subset of the hubs of SPC(ρ/2) that are close to the optimum center set. This can be done in time exponential in k and the local sparsity of the hub set, because by the bound of Section 2 there are at most that many hubs for scale ρ/2. We will observe that by the cluster structure of Section 2 an optimum center lying in a cluster cannot cover any vertices that are part of another cluster. This makes it easy to determine a second set of approximate centers, each of which will lie in a cluster that must contain an optimum center. The third set of centers will consist of cluster vertices that cover the remaining vertices not yet covered by C1 and C2. These remaining uncovered vertices will all be non-cluster vertices, and we find C3 by solving a Set Cover instance, similar to the third step in our procedure for Dominating Set.
More concretely, consider an input graph G with an optimum k-Center solution C* of cost ρ. In the first lines of Algorithm 1 we try candidate values for ρ in increasing order, to guess the correct cost of C*; note that ρ is one of the O(n²) interpoint distances of G. For each guessed value of ρ the algorithm computes a shortest path cover SPC(ρ/2) together with its cluster set 𝒞. By [2], locally O(h log h)-sparse shortest path covers are computable in polynomial time if the input graph has highway dimension h. We therefore set the sparsity bound λ to the bound O(h log h) on the local sparsity guaranteed in [2] (if locally h-sparse shortest path covers are given by an oracle, we may at this point set λ = h). In order to keep the running time low, the algorithm checks that the number of hubs is not too large: since by Section 2 we have |SPC(ρ/2)| ≤ λk, we can dismiss any shortest path cover containing more hubs.
Assume that the correct ρ was found. In the following, for an index i ∈ {1, 2, 3} we denote by Ri* and Ri the regions covered by some set Ci* of optimum centers (with balls of radius ρ) and approximate centers Ci (with balls of radius 3ρ/2), respectively. The algorithm guesses a minimum-sized set X of hubs in SPC(ρ/2), such that the balls of radius ρ/2 around hubs in X cover all optimum non-cluster centers. That is, if C1* denotes the set of optimum non-cluster centers, each of which is at distance at most ρ/2 from some hub by the structure lemma of Section 2, then C1* ⊆ ⋃_{x ∈ X} B_x(ρ/2) and X is a minimum-sized such set. We choose this set of hubs as the first set of centers C1 = X for our approximate solution. Note that due to the minimality of X we have |C1| ≤ |C1*|. Also R1* ⊆ R1 since for any center in C1* there is a center (i.e., a hub) at distance at most ρ/2 in C1.
The next step is to compute a set C2 of centers so that all clusters of the cluster set 𝒞 of SPC(ρ/2) are covered. Some of the clusters are already covered by the first set of centers C1, and thus in this step we want to take care of all remaining uncovered clusters, i.e., those not fully contained in R1. By the definition of C1*, any remaining optimum center in C* ∖ C1* must lie in a cluster. Furthermore, the distance between clusters of 𝒞 is more than 2 · ρ/2 = ρ by the structure lemma of Section 2, so that a center of C* in a cluster C cannot cover any vertices of another cluster C'. Hence if we guessed X correctly, we can be sure that each remaining cluster must contain a center of C*. For each such remaining cluster we thus pick an arbitrary vertex and declare it a center of the second set C2 for our approximate solution. Hence if C2* is the set of optimum centers lying in these remaining clusters, we have |C2| ≤ |C2*| (if some cluster of 𝒞 contains more than one optimum center in order to cover different parts of the non-cluster vertices, |C2*| may be larger than |C2|). Moreover, since the diameter of each cluster is at most ρ/2 by the structure lemma, we get R2* ⊆ R2.
At this time we know that all clusters in 𝒞 are covered by the region R1 ∪ R2. Hence if any uncovered vertices remain for our current approximate solution, they must be non-cluster vertices. Just as C2*, by our definition of C1*, every remaining optimum center in C3* = C* ∖ (C1* ∪ C2*) lies in some cluster. Since R1* ⊆ R1 and R2* ⊆ R2, any remaining uncovered vertex of G must be in the region R3* covered by the centers in C3*. Next we show how to compute a set C3 such that the region R1 ∪ R2 ∪ R3 includes all remaining vertices of the graph and |C3| ≤ |C3*|. Note that the latter means that the number of centers in C1 ∪ C2 ∪ C3 is at most k, since C1*, C2*, and C3* are disjoint.
To control the size of C3 we will compute the smallest number of centers that cover parts of SPC(ρ/2) with balls of radius ρ. In particular, the algorithm guesses the set Y of hubs that lie in the region R3* (note that we exclude hubs of X from this set). We then compute a center set C3 of minimum size such that Y ⊆ ⋃_{v ∈ C3} B_v(ρ). For this we reduce the problem of computing centers covering Y to the Set Cover problem with fixed universe size. This reduction is performed before entering the loops guessing X and Y, to optimize the running time. The universe U of the Set Cover instance consists of all hubs in the shortest path cover SPC(ρ/2), while the set system S of the instance is obtained by restricting the balls of radius ρ around cluster vertices to the hubs, i.e., S contains the set B_v(ρ) ∩ SPC(ρ/2) for every cluster vertex v. By Theorem 2 there is an algorithm that computes the optimum Set Cover solution for every subset of the universe. This algorithm is called once in Algorithm 1 to fill a lookup table with these optima. We can thus retrieve the optimum Set Cover solution for the subset R = Y from the table, and let C3 contain each cluster vertex v for which the set B_v(ρ) ∩ SPC(ρ/2) of hubs contained in the ball around v is part of the optimum solution covering Y, which is stored in the corresponding entry of the table. As the next lemma shows, we obtain the required properties for C3.
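Putting the three phases together, one iteration of the procedure (for a fixed guess of ρ, X, and Y) can be sketched as follows; here `cover_of` stands in for the precomputed Set Cover table lookup, and all names are illustrative rather than taken from Algorithm 1:

```python
def try_guess(dist, cls, rho, X, Y, cover_of):
    """One iteration of the sketched 3/2-approximation: `dist` is an
    all-pairs distance table, `cls` the cluster set of SPC(rho/2), X the
    guessed hubs near optimum non-cluster centers, Y the guessed hubs in
    the region of the remaining optimum centers, and `cover_of(Y)` a
    minimum family of cluster vertices whose radius-rho balls cover Y.
    Returns a center set if it covers V with radius 3*rho/2, else None."""
    def covered(v, centers, rad):
        return any(dist[v][c] <= rad for c in centers)
    C1 = set(X)
    # one arbitrary vertex per cluster not already covered by C1
    C2 = {next(iter(C)) for C in cls
          if not all(covered(v, C1, 1.5 * rho) for v in C)}
    C3 = set(cover_of(Y))
    centers = C1 | C2 | C3
    if all(covered(v, centers, 1.5 * rho) for v in dist):
        return centers
    return None
```

The enclosing algorithm would call this for every candidate ρ and every guess of X and Y, and report the first feasible center set found.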
Lemma.
Assume the algorithm guessed the correct scale and the correct sets and . The set is of size at most and .
Proof.
The second property clearly follows since the sets in form a set cover for , such that every hub in is at distance at most from some . To see that , it suffices to show that the vertices in correspond to a feasible Set Cover solution for . If was guessed correctly, this set contains only hubs in the region . As is covered by balls of radius around the centers in , the union contains . Moreover, these sets are contained in the set system , since all centers of are contained in clusters by definition of . Thus the sets form a set cover for in the instance . ∎
It remains to show that the three computed center sets , , and cover all vertices of , which we do in the following lemma. In particular, the union will pass the feasibility test in Algorithm 1 of the algorithm.
Lemma.
Assume the algorithm guessed the correct scale and the correct sets and . The approximate center sets , , and cover all vertices of , i.e., .
Proof.
The proof is by contradiction: assume there is a that is not covered by the computed approximate center sets. The idea is to identify a hub on the shortest path between and an optimum center covering . We will show that this hub must however be in and therefore is in fact in , since also turns out to be close to .
To show the existence of , we begin by arguing that the closest hub to is neither in nor in . We know that each cluster of is in , so that must be a non-cluster vertex. Thus by Section 2, . The region in particular contains all vertices that are at distance at most from any hub in . Since and , this means that . From we can also conclude that as follows. By Section 3, covers all hubs of with balls of radius . Hence if then would be at distance at most from a center of , i.e., .
From we can conclude the existence of as follows. Consider an optimum center that covers , i.e., . Recall that and . Since , this means that is neither in nor in so that . By definition of , any hub at distance at most from a center in is in , unless it is in . Hence, as , the distance between and must be more than . Since , we get . We also know that , because covers . Hence the shortest path cover must contain a hub that lies on the shortest path between and . In particular, and . Analogous to the argument used for above, in particular contains all vertices at distance at most from , so that since . However, then the distance bound for and yields , as .
Since contains all non-cluster centers but , by Section 2 we get , which implies . But then is contained in the ball , which we know is part of the third region since . This contradicts the assumption that was not covered by the approximate center set. ∎
Note that the proof of Section 3 does not imply that , as was the case for and . It suffices though to establish the correctness of the algorithm. Finally, we conclude the proof of Theorem 1 by analysing the runtime of the algorithm.
Proof of Theorem 1.
By Section 2 and Section 3, if Algorithm 1 correctly guesses the cost and the two hub sets and , then and . By Section 2, so that the correct value for will not be skipped in Algorithm 1. Hence by trying all possible values for in increasing order, Algorithm 1 will terminate with a feasible solution that covers all vertices with balls of radius , due to Algorithm 1. To prove Theorem 1 it remains to bound the running time.
There are at most possible values for that need to be tried by the outermost loop, one for every pair of vertices. Hence the only steps of Algorithm 1 that incur exponential running times are when guessing and and when filling the table of the dynamic program for the Set Cover problem. These steps are only performed for shortest path covers of size at most due to Algorithm 1. Since we explicitly exclude the hubs in when choosing , each hub of a shortest path cover can either be in , in , or in none of them when trying all possibilities. Hence this gives possible outcomes. Filling the table takes time according to Theorem 2, while retrieving an optimum solution for in Algorithm 1 can be done in constant time. Thus the total running time to compute a approximation is . If the input graph has highway dimension , Abraham et al. [2] show how to compute approximations of shortest path covers in polynomial time if shortest paths have unique lengths. The latter can be assumed by slightly perturbing the edge lengths in such a way that any approximation in the perturbed instance is also a approximation in the original instance. Therefore we can set during the execution of our algorithm. If there is an oracle that gives locally sparse shortest path covers for each scale, then we can set instead. Thus the claimed running times follow. ∎
4 Hardness results
We begin by observing that the original reduction of Hsu and Nemhauser [20] for $k$-Center also implies that there are no FPA algorithms with approximation factor better than $2$.
Theorem 3.
It is W[2]-hard for parameter $k$ to compute a $(2-\varepsilon)$-approximation to the $k$-Center problem for any $\varepsilon>0$.
Proof (cf. [20, 29]).
The reduction is from the Dominating Set problem, which is W[2]-hard [13] for the standard parameter, i.e., the size $k$ of the smallest dominating set of the input graph $G$. The reduction simply introduces unit lengths for each edge of $G$, guesses the size $k$ of the smallest dominating set, and asks for a center set of this size. Any feasible center set of cost $1$ corresponds to a dominating set, and vice versa. On the other hand, a center set has cost at least $2$ if and only if it is not a dominating set. Hence if the size of the dominating set is guessed in increasing order starting from $1$, the guess must be equal to $k$ the first time a $(2-\varepsilon)$-approximation of cost $1$ is obtained by an algorithm for $k$-Center. By guessing the size of the dominating set in increasing order, this would result in an $f(k)\cdot n^{O(1)}$ time algorithm to compute the optimum dominating set if there was a $(2-\varepsilon)$-FPA algorithm for parameter $k$ for $k$-Center. ∎
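The correspondence the reduction relies on can be checked directly: under unit edge lengths a center set has cost at most $1$ exactly when it is a dominating set. A minimal sketch, with illustrative names and assuming a connected input graph:

```python
from collections import deque

def cover_radius(adj, centers):
    """Cost of a center set under unit edge lengths: the maximum
    BFS distance from any vertex to its nearest center
    (assumes the graph `adj` is connected)."""
    dist = {v: None for v in adj}
    queue = deque()
    for c in centers:
        dist[c] = 0
        queue.append(c)
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if dist[w] is None:
                dist[w] = dist[v] + 1
                queue.append(w)
    return max(dist.values())

def is_dominating(adj, centers):
    """A set dominates the graph if every vertex is in it or adjacent to it."""
    centers = set(centers)
    return all(v in centers or centers & set(adj[v]) for v in adj)
```

On the path 0-1-2-3, for instance, the set {1, 3} is dominating and has cover radius 1, while {1} is neither dominating nor of radius below 2.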
We now turn to proving that $(2-\varepsilon)$-approximations are hard to compute on graphs with low highway dimension. For this we introduce a general reduction from low doubling dimension metrics to low highway dimension graphs in the next lemma. A metric has doubling dimension $d$ if for every $r>0$, each set of diameter $2r$ is the union of at most $2^d$ sets of diameter $r$. The aspect ratio $\alpha$ of a metric on a vertex set $X$ with distance function $\mathrm{dist}$ is the maximum distance between any two vertices of $X$ divided by the minimum distance, i.e., $\alpha=\max_{u,v\in X}\mathrm{dist}(u,v)/\min_{u\neq v}\mathrm{dist}(u,v)$.
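As a small illustration of the aspect ratio (the helper and its names are ours, not the paper's):

```python
def aspect_ratio(points, dist):
    """Aspect ratio of a finite metric: the maximum pairwise
    distance divided by the minimum pairwise distance."""
    pairs = [(u, v) for i, u in enumerate(points) for v in points[i + 1:]]
    return max(dist(u, v) for u, v in pairs) / min(dist(u, v) for u, v in pairs)
```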
Lemma \thelem.
Given any metric with constant doubling dimension and aspect ratio , for any there is a graph of highway dimension on the same vertex set such that for all , Furthermore, can be computed in polynomial time from the metric.
Proof.
First off, by scaling we may assume w.l.o.g. that the minimum distance of the given metric is $1$. In particular this means that the maximum distance is $\alpha$. A fundamental property [18] of low doubling dimension metrics is that any set of points with aspect ratio $\alpha$ contains at most $\alpha^{O(d)}$ points. The proof of this property is a simple recursive application of the doubling dimension definition. For each scale where we will identify a sparse set , which in any ball of radius has aspect ratio . The idea is to use the vertices of as hubs in a shortest path cover for scale , which then are locally sparse in any such ball. We will make sure that there is an index with a hub set for any possible distance between vertex pairs in the resulting graph . We need to make sure, though, that the shortest path for any pair of vertices passes through a corresponding hub of some . We achieve this by adding edges between the hubs in , which act as shortcuts. That is, the edges of will be slightly longer than the distances in the metric given by , and we will make the distances shorter with increasing scales in order to guarantee that the shortest paths pass through corresponding hubs.
More concretely, consider any set $Z$ of vertices. A subset $Y\subseteq Z$ is an $r$-cover of $Z$ if for every $z\in Z$ there is a $y\in Y$ such that $\mathrm{dist}(y,z)\leq r$, and $Y$ is an $r$-packing of $Z$ if $\mathrm{dist}(y,y')>r$ for all distinct $y,y'\in Y$. An $r$-net of $Z$ is a set that is both an $r$-cover and an $r$-packing of $Z$. It is easy to see that such a net can be computed greedily in polynomial time. We will use sets that form a hierarchy of nets as hubs. In particular, and is a net of for each , where is the index of the largest scale. Note that due to the triangle inequality of the metric, each is a cover of .
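The greedy net computation can be sketched as follows (names are illustrative; the construction above applies it once per scale to build the hierarchy):

```python
def greedy_net(points, dist, r):
    """Greedy r-net: keep a point only if it is farther than r from
    every point kept so far. The result is an r-packing by
    construction, and an r-cover because every discarded point lies
    within distance r of some kept point. Runs in O(n^2) time."""
    net = []
    for p in points:
        if all(dist(p, q) > r for q in net):
            net.append(p)
    return net
```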
In , for each we connect two vertices by an edge of length . If a vertex pair is contained in several sets of different scales, we only add the shortest edge according to the above rule, i.e., the edge for the largest index . Hence the distance in between any is at most . In particular, for any since . Note also that and hence .
To bound the highway dimension of , consider any pair , and let be such that . Recall that the minimum distance according to is (as ), while the maximum distance is . Accordingly, in all distances lie in so the index exists. We show that the shortest path between and passes through a hub of . We do this by upper bounding in terms of using a path that contains vertices of . Then we lower bound the length of any path that does not pass through and show that it is longer than the shortest path.
Let be the closest hub to and let be the closest hub to . We begin by determining some distance bounds for these vertices. Since is a cover of in the metric according to , the distances in from to and from to are at most each. It also means that , since we can get from to through and in the metric. We know that and thus we have . Using these bounds we get
We now show that every path that does not use any hub of is longer than . Since the hub sets of different scales form a hierarchy, any hub of scale with is also a hub for scale . Hence if does not pass through any hub of , it also cannot contain any vertex of where . Thus, if where and , any edge on will be of length at least . The sum