Subexponential parameterized algorithms for graphs of polynomial growth

The research of D. Marx leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement no. 280152. The research of M. Pilipczuk is supported by Polish National Science Centre grant UMO-2013/09/B/ST6/03136. Part of the research was done while the authors were participating in the “Fine-grained complexity and algorithm design” program at the Simons Institute for the Theory of Computing in Berkeley.
We show that for a number of parameterized problems for which only $2^{O(k)} n^{O(1)}$-time algorithms are known on general graphs, subexponential parameterized algorithms with running time $2^{O(k^{1-\frac{1}{1+d}} \log^2 k)} n^{O(1)}$ are possible for graphs of polynomial growth with growth rate (degree) $d$, that is, if we assume that every ball of radius $r$ contains only $O(r^d)$ vertices. The algorithms use the technique of low-treewidth pattern covering, introduced by Fomin et al. [19, 18] for planar graphs; here we show how this strategy can be made to work for graphs of polynomial growth.
Formally, we prove that, given a graph $G$ of polynomial growth with growth rate $d$ and an integer $k$, one can in randomized polynomial time find a subset $A \subseteq V(G)$ such that on one hand the treewidth of $G[A]$ is $O(k^{1-\frac{1}{1+d}} \log k)$, and on the other hand for every set $X \subseteq V(G)$ of size at most $k$, the probability that $X \subseteq A$ is $2^{-O(k^{1-\frac{1}{1+d}} \log^2 k)}$. Together with standard dynamic programming techniques on graphs of bounded treewidth, this statement gives subexponential parameterized algorithms for a number of subgraph search problems, such as Long Path or Steiner Tree, in graphs of polynomial growth.
We complement the algorithm with an almost tight lower bound for Long Path: unless the Exponential Time Hypothesis fails, no parameterized algorithm with running time $2^{O(k^{1-\frac{1}{d}-\varepsilon})} n^{O(1)}$ is possible for any $\varepsilon > 0$ and any integer $d \geq 3$.
In recent years, research on parameterized algorithms has had a strong focus on understanding the optimal form of the dependence on the parameter in the running time of parameterized algorithms. For many of the classic algorithmic problems on graphs, algorithms with running time $2^{O(k)} n^{O(1)}$ exist, and we know that this form of running time is best possible, assuming the Exponential-Time Hypothesis (ETH) [23, 8, 27]. This means that we have an essentially tight understanding of these problems when considering graphs in their full generality, but it does not rule out the possibility of improved algorithms when restricted to some class of graphs. Indeed, many of these problems become significantly easier on certain important graph classes. The most well-studied form of this improvement is the so-called “square root phenomenon” on planar graphs (and some of its generalizations): there is a large number of parameterized problems that admit $2^{O(\sqrt{k})} n^{O(1)}$-time algorithms on planar graphs [9, 10, 11, 12, 13, 14, 16, 17, 21, 32, 20, 15, 24, 25, 7, 30, 31]. Many of these positive results can be explained by the theory of bidimensionality and rely, explicitly or implicitly, on the relation between treewidth and grid minors.
Very recently, a superset of the present authors showed a new technique to obtain subexponential algorithms in planar graphs for problems related to the Subgraph Isomorphism problem [19, 18], such as the Long Path problem of finding a simple path of length $k$ in the input graph. The approach of [19, 18] can be summarized as follows: a randomized polynomial-time algorithm is shown that, given a planar graph $G$ and an integer $k$, selects a random induced subgraph of treewidth sublinear in $k$ in such a manner that, for every connected $k$-vertex subgraph $H$ of $G$, the probability that $H$ survives in the selected subgraph is inverse-subexponential in $k$. Such a statement, dubbed low-treewidth pattern covering, together with standard dynamic programming techniques on graphs of bounded treewidth, gives subexponential algorithms for a much wider range of Subgraph Isomorphism-type problems than bidimensionality; for example, while bidimensionality provides a subexponential algorithm for Long Path in undirected graphs, it seems that the new approach of [19, 18] is needed for directed graphs.
The proof of the low-treewidth pattern covering theorem of [19, 18] involves a number of different partitioning techniques for planar graphs. In this work, we take one of these techniques — called the clustering procedure, based on the metric decomposition tool of Linial and Saks [26] and the recursive decomposition used in the construction of Bartal’s HSTs [3] — and observe that it is perfectly suited to tackle the so-called graphs of polynomial growth.
To explain this concept formally, let us introduce some notation. All graphs in this paper are unweighted, and the distance function $\mathrm{dist}_G(u, v)$ measures the minimum possible number of edges on a path from $u$ to $v$ in $G$. For a graph $G$, integer $r$, and vertex $v \in V(G)$, by $B_G(v, r)$ we denote the set of vertices that are within distance less than $r$ from $v$ in $G$, that is, $B_G(v, r) = \{u \in V(G) : \mathrm{dist}_G(u, v) < r\}$, while by $\partial B_G(v, r)$ we denote the set of vertices within distance exactly $r$, that is, $\partial B_G(v, r) = \{u \in V(G) : \mathrm{dist}_G(u, v) = r\}$. We omit the subscript if the graph is clear from the context.
Definition 1 (polynomial growth, growth rate).
We say that a graph $G$ (or a graph class $\mathcal{G}$) has polynomial growth of degree (growth rate) $d$ if there exists a universal constant $c$ such that for (every graph $G \in \mathcal{G}$ and) every radius $r \geq 1$ and every vertex $v \in V(G)$ we have $|B_G(v, r)| \leq c \cdot r^d$.
The algorithmic consequences of this definition (and of some of its variants) have been studied in the literature in various contexts (see, for example, [2, 22, 4, 1]). A standard example of a graph of polynomial growth with degree $d$ is a $d$-dimensional grid. Graph classes of polynomial growth include graphs of bounded doubling dimension (with unit-weight edges), a popular assumption restricting the growth of a metric space in approximation algorithms or in routing in networks (cf. the thesis of Chan [5] and references therein).
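As a sanity check of the definition, the following Python sketch (illustrative only; the function names `grid_graph` and `ball_size` are ours, not from any library) verifies by BFS that every ball in a 2-dimensional grid obeys the bound with degree $d = 2$ and constant $c = 2$:

```python
from collections import deque
from itertools import product

def grid_graph(n, d):
    """Adjacency lists of a d-dimensional grid of side length n."""
    verts = list(product(range(n), repeat=d))
    adj = {v: [] for v in verts}
    for v in verts:
        for i in range(d):
            if v[i] + 1 < n:
                w = v[:i] + (v[i] + 1,) + v[i + 1:]
                adj[v].append(w)
                adj[w].append(v)
    return adj

def ball_size(adj, v, r):
    """|B(v, r)|: the number of vertices at distance less than r from v."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] + 1 >= r:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return len(dist)

# In the 2-dimensional grid, B(v, r) is (a subset of) a diamond with
# 2r^2 - 2r + 1 vertices, so c = 2 witnesses growth rate d = 2.
adj = grid_graph(10, 2)
assert all(ball_size(adj, v, r) <= 2 * r ** 2
           for v in adj for r in range(1, 8))
```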
The clustering procedure, or the metric decomposition tool of Linial and Saks [26], can be described as follows. As long as the analysed graph is not empty, we carve out a new cluster as follows. We pick any vertex $v$ as a center of the new cluster, and set its radius $r = 1$. Iteratively, with some chosen probability $p$, we accept the current radius, and with the remaining probability we increase $r$ by one and repeat. That is, we choose $r$ with geometric distribution with success probability $p$. Once a radius $r$ is accepted, we set $B(v, r)$ as a new cluster, and delete $B(v, r) \cup \partial B(v, r)$ from the graph. In this manner, $B(v, r)$ is carved out as a separated cluster, at the cost of sacrificing $\partial B(v, r)$. A typical usage would be as follows: if one chooses $p$ of the order of $1/k$, then a simple analysis shows that every cluster has radius $O(k \log n)$ w.h.p., while a fixed set of size $k$ is fully retained in the union of clusters with constant probability. By a careful two-step application of the clustering procedure, we show the following low-treewidth pattern covering statement for graphs of polynomial growth.
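A minimal sketch of the carving loop described above, in Python (the helper names `distances_within` and `carve_clusters` are ours; this is an illustration, not the exact procedure used in the proofs):

```python
import random
from collections import deque

def distances_within(adj, center, alive):
    """BFS distances from `center` inside the surviving vertex set `alive`."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w in alive and w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def carve_clusters(adj, p, rng=random):
    """Repeatedly pick a center, draw a geometric radius r (success
    probability p), keep the open ball B(v, r) as a cluster, and delete
    the sphere at distance exactly r (the sacrificed vertices)."""
    alive = set(adj)
    clusters, sacrificed = [], set()
    while alive:
        v = min(alive)  # any surviving vertex may serve as the next center
        r = 1
        while rng.random() >= p:  # accept the current radius with prob. p
            r += 1
        dist = distances_within(adj, v, alive)
        cluster = {u for u, x in dist.items() if x < r}
        sphere = {u for u, x in dist.items() if x == r}
        clusters.append(cluster)
        sacrificed |= sphere
        alive -= cluster | sphere
    return clusters, sacrificed
```

The clusters are pairwise disjoint and, together with the sacrificed spheres, partition the vertex set.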
For every graph class $\mathcal{G}$ of polynomial growth with growth rate $d$, there exists a polynomial-time randomized algorithm that, given a graph $G \in \mathcal{G}$ and an integer $k$, outputs a subset $A \subseteq V(G)$ with the following properties:
the treedepth of $G[A]$ is $O(k^{1-\frac{1}{1+d}} \log k)$;
for every set $X \subseteq V(G)$ of size at most $k$, the probability that $X \subseteq A$ is $2^{-O(k^{1-\frac{1}{1+d}} \log^2 k)}$.
Note that Theorem 2 uses the notion of treedepth, a much more restrictive graph measure than treewidth (cf. [29]), which in particular implies the same bound on treewidth. Thus, together with standard dynamic programming techniques on graphs of bounded treewidth, Theorem 2 gives the following.
There exist randomized parameterized algorithms with running time bound $2^{O(k^{1-\frac{1}{1+d}} \log^2 k)} n^{O(1)}$ for Long Path, Vertex Cover Local Search, and Steiner Tree parameterized by the size of the solution tree, when restricted to a graph class of polynomial growth with growth rate $d$.
We refer to the introduction of [19, 18] for a broader discussion of applications of low-treewidth pattern covering statements.
We complement the algorithmic statement of Theorem 2 with the following lower bound.
If there exists an integer $d \geq 3$, a real $\varepsilon > 0$, and an algorithm that decides whether a given subgraph of a $d$-dimensional grid of side length $n$ contains a Hamiltonian path in time $2^{O(n^{d-1-\varepsilon})}$, then the ETH fails.
2 Upper bound: proof of Theorem 2
In this section we prove Theorem 2. Without loss of generality, we assume .
The algorithm works in two steps. In the first one, the goal is to chop the graph into components of small radius, which — by the polynomial growth property — are of size polynomial in $k$. Then, in the second phase, we consider every component independently, sparsifying it further. These two steps are described in the subsequent two sections.
2.1 Chopping the graph into parts of polynomial size
The goal of the first step is to delete a number of vertices from the graph so that, on one hand, every connected component of the remaining graph has small radius, and, on the other hand, the probability of deleting a vertex from an unknown pattern of size at most $k$ is small. Formally, we show the following lemma.
Let be as in Theorem 2. There exists a constant and a polynomial-time randomized algorithm that, given a graph and positive integer , outputs a subset such that
every connected component of is of radius at most ;
for every set of size at most , the probability that is at least .
For a constant to be fixed later, we perform the following iterative process. We start with the whole graph and an empty deleted set. In the $i$-th iteration, we consider the current graph. If the graph is empty, we stop. Otherwise, we pick an arbitrary vertex $v_i$ and pick a radius $r_i$ according to the geometric distribution with success probability $p$, capped at a value $R$ (i.e., if the choice of the radius is greater than $R$, we set $r_i = R$). For further analysis, we would like to look at the choice of the radius as the following iterative process: we start with $r_i = 1$ and iteratively accept the current radius with probability $p$ or increase it by one and repeat with probability $1 - p$, stopping unconditionally at radius $R$. Given $v_i$ and $r_i$, we remove from the graph all vertices within distance at most $r_i$ from $v_i$, while retaining in the output only those that are within distance less than $r_i$.
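The capped radius sampling used by the process above — geometric with some success probability, stopped unconditionally at the cap — can be sketched as follows (an illustration; the name `capped_geometric` and the interface are ours):

```python
import random

def capped_geometric(p, cap, rng=random):
    """Radius with geometric distribution (success probability p), capped
    at `cap`: start at r = 1, accept the current radius with probability p,
    otherwise increase it by one; stop unconditionally at r = cap."""
    r = 1
    while r < cap and rng.random() >= p:
        r += 1
    return r
```

Every sample lies between $1$ and the cap, and the cap is hit with probability $(1-p)^{\text{cap}-1}$.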
Clearly, as we remove a vertex from the graph at every step, the process stops after at most $|V(G)|$ steps. Let $\ell$ be the index of the last iteration. Recall that in the $i$-th step we put the carved-out ball into the output, but remove from the working graph not only this ball but also its boundary sphere. Consequently, the vertex sets of the connected components of the output are exactly the carved-out balls for $i = 1, 2, \ldots, \ell$. Since the radii are capped, every connected component of the output has radius bounded by the cap value.
We now claim the following.
For every of size at most , the probability that is at least .
Fix of size at most . Note that only if at some iteration , some vertex is exactly within distance from in the graph . We now bound the probability that this happens, split into two subcases: either or .
Case 1: hitting a vertex within distance . Let . Note that if is exactly within distance from in the graph , then necessarily . On the other hand, by the polynomial growth property,
We consider ourselves lucky if whenever , we have , that is, the process choosing does not hit the cap of for every center in . Note that, for a fixed iteration , we have
Thus, for sufficiently large constant (depending only on and ), we have that
We infer that, for such a choice of , the probability that we are not lucky is at most .
Case 2: hitting a vertex within distance . It is convenient to think here of the choice of the radius as an iterative process that starts from , accepts the current radius with probability , or increases it by one and repeats with probability . For a fixed iteration and a choice of , consider a potential radius when there is a vertex within distance exactly from in . If we do not accept this radius (which happens with probability ), the vertex is included in and is surely included in . Consequently, in the whole process we care about not accepting a given radius only times, at most once for every vertex . We infer that the probability that for some iteration there is a vertex within distance exactly from and is at most .
Considering both cases, by union bound, the probability that is at least
The last estimate uses the assumption .
2.2 Handling a component of polynomial size
In this section we show the following lemma.
Let the graph class be as in Theorem 2. For every constant there exists a constant and a polynomial-time randomized algorithm that, given a positive integer and a connected graph of radius , outputs a subset such that
treedepth of is ;
for every set of size at most , the probability that is at least .
We emphasize here the linear dependency on in the exponent of the probability bound. This dependency, similarly as in the analysis of [19, 18], allows us to easily analyse independent runs of the algorithm on multiple connected components.
Proof of Lemma 7.
The random process we employ is similar to the one of the previous section, but more involved. Let be a constant to be fixed later.
We start with , and . In the -th iteration of the process, we consider the graph . If the graph is empty, we stop. Otherwise, we pick an arbitrary vertex and pick a radius according to the geometric distribution with success probability , capped at value (i.e., as before, if the choice of the radius is greater than , we set ). In other words, we start with and iteratively accept the current radius with probability or increase it by one and repeat with the remaining probability, stopping unconditionally at radius .
As before, we set and . However, now, as the radii are smaller, we may want to retain some vertices of , as they can be part of the pattern ; for this, we use the sets . With probability we put and . With the remaining probability, we proceed as follows. Uniformly at random, we choose a number and a set of vertices of (or all of them, if there are less than vertices in this set). We put .
Let be the index of the last iteration. If , then we output . Otherwise, we output . Let us now verify that has the desired properties.
The treedepth of is .
The claim is trivial if , so assume otherwise; in particular, . We use the following inductive definition of treedepth: the treedepth of an empty graph is $0$; the treedepth of a disconnected graph is the maximum of the treedepths of its connected components; and for any connected graph $G$ on at least one vertex we have that $\operatorname{td}(G) = 1 + \min_{v \in V(G)} \operatorname{td}(G - v)$.
Upon deleting from the at most vertices of , we are left with . Similarly as in the previous section, every connected component of is of radius at most . Consequently, every connected component of is of size at most . The claim follows.
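The inductive definition of treedepth used in this claim can be evaluated directly on small graphs. The following brute-force Python sketch (our own illustration, exponential-time and suitable only for tiny graphs) follows that definition literally:

```python
def treedepth(adj):
    """Brute-force treedepth via the inductive definition: the empty graph
    has treedepth 0; a disconnected graph has the maximum treedepth of its
    components; a connected graph G has 1 + min over v of td(G - v)."""
    def components(verts):
        left, comps = set(verts), []
        while left:
            v = left.pop()
            comp, stack = {v}, [v]
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w in left:
                        left.remove(w)
                        comp.add(w)
                        stack.append(w)
            comps.append(frozenset(comp))
        return comps

    def td(verts):
        if not verts:
            return 0
        comps = components(verts)
        if len(comps) > 1:
            return max(td(c) for c in comps)
        return 1 + min(td(verts - {v}) for v in verts)

    return td(frozenset(adj))
```

For example, a path on four vertices has treedepth $3$, while a star has treedepth $2$ (delete the center first).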
For every set of size at most , the probability that is at least for some constant depending only on , , and .
Fix a pattern . The claim is trivial for so assume otherwise. In particular, if , then we can estimate the desired probability as
Consider a fixed iteration , and the moment when, knowing , we choose the radius . Given and , we say that a radius is bad if
Let be a sequence of bad radii. First, note that , and thus . Furthermore, as for every we have , we have
Since , we infer that
We are interested in the following event : every chosen radii is not bad and is smaller than (i.e., we did not hit the cap of ). Recall the iterative interpretation of the choice of the radii : we start with , accept the current radius with probability , or increase by one and repeat with the remaining probability. Thus, we are interested in the intersection of the following two events: we do not accept any bad radius, but we accept some good radius before the cap .
Whenever we do not accept a bad radius , a vertex of is included in . Consequently, in the whole algorithm we encounter at most bad radii; each is independently accepted with probability .
By (2), in a fixed iteration there are at most bad radii. Consequently, if we count only acceptance of good radii, the probability that the radius reaches the bound is at most
Consequently, since , by choosing large enough, we can ensure that the probability that there exists a radius equal to is at most . Since the choices of acceptance of different radii are independent, we infer that the probability of the event is at least
for some positive constant . Here, we have used the fact that and .
Assume that the event happens, and let us fix one choice of and . Note that these choices determine the sets and the graphs ; the only remaining random choices are whether to include some vertices into the sets .
For an iteration , define . We are now considering the following event : in every iteration we have . Note that if happens, then . Thus, we need to estimate the probability of the event .
If , then we guess so with probability . As there are at most iterations, with probability at least we will make the correct decision in all iterations for which .
Consider now an iteration for which . Since the radius is good, we have
In particular, , and thus there are at most such iterations. Furthermore,
In every such iteration , we need to correctly guess that is nonempty ( success probability), correctly guess (at least success probability) and correctly guess (at least success probability). All these choices are independent. Since is bounded polynomially in , the probability of the event is at least
for some constant depending on , , and . This finishes the proof of the claim.
Let us now wrap up the proof of Theorem 2, using Lemmata 5 and 7. We first apply the algorithm of Lemma 5 to the input graph and integer , obtaining a set . Then, we apply the algorithm of Lemma 7 independently to every connected component of , obtaining a set ; recall that every such component is of radius at most . As the output, we return the union of the returned sets. Clearly, the treedepth bound holds. If we denote for a component , we have that the probability that is at least
This finishes the proof of Theorem 2.
3 Lower bound: proof of Theorem 4
In this section we prove Theorem 4. The reduction is heavily inspired by the reduction for $d$-dimensional Euclidean TSP by Marx and Sidiropoulos [28]. In particular, our starting point is the same CSP pivot problem.
Theorem 10 ([28]).
For every fixed $d$, there is a constant $\lambda$ such that, for every constant $\varepsilon > 0$, the existence of an algorithm solving in time $2^{O(n^{d-1-\varepsilon})}$ CSP instances with binary constraints, domain size at most $\lambda$, and Gaifman graph being a $d$-dimensional grid of side length $n$ would refute ETH.
Let us recall that a binary CSP instance consists of a domain $D$, a set of variables $V$, and a set of constraints. Every constraint is a binary relation $C \subseteq D \times D$ that binds two variables $x, y \in V$. The goal is to find an assignment $\phi : V \to D$ that satisfies every constraint; a constraint $C$ binding $x$ and $y$ is satisfied if $(\phi(x), \phi(y)) \in C$. The Gaifman graph of a binary CSP instance has vertex set $V$ and an edge $xy$ for every constraint binding $x$ and $y$.
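For illustration, a binary CSP in this sense can be solved by brute force in time $|D|^{|V|}$; the Python sketch below (function names are ours) also extracts the edge set of the Gaifman graph:

```python
from itertools import product

def gaifman_edges(constraints):
    """One edge for every pair of variables bound by a constraint."""
    return {frozenset(pair) for pair in constraints}

def solve_csp(variables, domain, constraints):
    """Brute-force search for a satisfying assignment of a binary CSP.
    `constraints` maps an ordered pair (x, y) of variables to the set
    of allowed value pairs (phi(x), phi(y))."""
    for values in product(domain, repeat=len(variables)):
        phi = dict(zip(variables, values))
        if all((phi[x], phi[y]) in allowed
               for (x, y), allowed in constraints.items()):
            return phi
    return None
```

Theorem 10 states that, on grid-like Gaifman graphs, even far smarter algorithms cannot beat the stated running time under ETH.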
Similarly as in [28], our goal is to take a CSP instance as in Theorem 10 and turn it into a Hamiltonian path instance by local gadgets. That is, we are going to replace every variable of the CSP instance with a constant-size gadget (i.e., with size depending only on $d$ and the domain size); the way the gadget is traversed by the Hamiltonian path indicates the choice of the value of the variable. The neighboring gadgets are wired up to ensure that the constraint binding them is satisfied.
The base gadget of the construction is a 2-chain, as presented in Figure 1. A direct check shows that there are two ways in which a 2-chain can be traversed by a Hamiltonian path, as depicted in the figure.
Figure 2 shows a gadget present at both endpoints of a 2-chain. As shown in the figure, it allows choosing how the 2-chain is traversed.
We will refer to the two depicted Hamiltonian paths of a 2-chain as modes of the chain. Given one of the horizontal edges of the 2-chain, a mode is consistent with this edge if the corresponding Hamiltonian path traverses the edge in question, and inconsistent otherwise.
We will attach various gadgets to 2-chains via one of the horizontal edges. To maintain the properties of the 2-chains, in particular the effectively two ways of traversing a 2-chain, we need to space out the attached gadgets. More formally, we partition every 2-chain into sufficiently long chunks (chunks of constant length are more than sufficient), and allow gadgets to attach only to one of the two middle horizontal edges on one side of the chain (see Figure 3), with at most one gadget per chunk. A gadget is always attached to an edge $e$ by adding two new vertices $a$ and $b$ near the edge $e$, in the same 2-dimensional plane as the 2-chain itself, such that the endpoints of $e$, together with $a$ and $b$, form a square. Properties of such an attachment can be summarized in the following straightforward claim.
Consider a chunk of a 2-chain, and a gadget attached to an edge in this chunk. Then every Hamiltonian path traverses the chunk in one of the following three ways (see Figure 3):
In particular, Claim 11 allows us to formally speak about a mode of a 2-chain, even if multiple gadgets are attached to it.
3.2 Placing 2-chains
For every variable of the input CSP instance, we create one 2-chain of length $L$ (to be determined later) for each domain value. They are positioned in parallel in the following fashion (see Figure 4): we choose an arbitrary 2-dimensional subspace of the whole grid, and place the 2-chains so that the $i$-th 2-chain occupies a translate of the first one. The edges indicated as attachment points for gadgets are on the same side of all chains.
All chains, for all variables, are wired up into one Hamiltonian path: for every variable, we connect the constructed 2-chains into a path in a straightforward fashion; then we take an arbitrary Hamiltonian path of the Gaifman graph of the input CSP instance (which is a $d$-dimensional grid, and thus trivially admits a Hamiltonian path), and connect the endpoints of the variables’ paths in the same order using simple paths. This is straightforward to perform if we space out the variable gadgets enough.
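For concreteness, a $d$-dimensional grid trivially admits a Hamiltonian path via the boustrophedon (“snake”) order; the following Python sketch (our own illustration, not part of the construction) generates it:

```python
def snake(n, d):
    """Hamiltonian path of the d-dimensional grid {0,...,n-1}^d in
    boustrophedon order: consecutive tuples differ by exactly 1 in
    exactly one coordinate."""
    if d == 0:
        yield ()
        return
    for i, tail in enumerate(snake(n, d - 1)):
        order = range(n) if i % 2 == 0 else range(n - 1, -1, -1)
        for x in order:
            yield (x,) + tail
```

Successive vertices differ in a single coordinate by one, so the enumeration is indeed a Hamiltonian path of the grid.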
Since all constructed 2-chains are isomorphic, we designate one mode of a 2-chain as the low mode, and the other one as the high mode. Our goal is to introduce gadgets that (i) ensure that, for every variable, exactly one of the corresponding 2-chains is in high mode, indicating the choice of the value for this variable; and (ii) for every two variables that are bound by a constraint, for every pair of values that is forbidden by the constraint, ensure that the two variables in question do not attain these values at the same time, that is, that the corresponding two 2-chains are not both in high mode at the same time.
The construction of 2-chains allows us to implement a simple “OR” constraint on two 2-chains. Consider two 2-chains $C_1$ and $C_2$, and two horizontal edges $e_1$ and $e_2$ on $C_1$ and $C_2$, respectively. By attaching an OR-check to these edges we mean the following construction:
we create vertices $a_1$ and $b_1$ near $e_1$, as well as $a_2$ and $b_2$ near $e_2$, as in the description of gadget attachment;
we connect $a_1$ to $a_2$ by a path and $b_1$ to $b_2$ by a path.
If the 2-chains are spaced out enough, it is straightforward to implement the above construction such that the resulting graph is a subgraph of a $d$-dimensional grid.
Claim 11 allows us to observe the following.
If $C_1$ is traversed in a way consistent with $e_1$, then one can modify the Hamiltonian path traversing $C_1$ so that it visits the OR gadget: replace $e_1$ with a path traversing first the path from $a_1$ to $a_2$, the edge $a_2 b_2$, and then the path from $b_2$ to $b_1$. A symmetrical claim holds if $C_2$ is traversed in a way consistent with $e_2$.
In the other direction, there is no Hamiltonian path that traverses both $C_1$ and $C_2$ in a way inconsistent with $e_1$ and $e_2$, respectively.
We now observe that, by attaching OR-checks in a straightforward manner, we can ensure that:
for every variable $x$, at most one 2-chain corresponding to $x$ is in high mode (we wire up every pair of 2-chains corresponding to $x$ with an OR-check forbidding two high modes at the same time);
for every two variables $x$ and $y$ that are bound by a constraint $C$, and for every pair of values $(\alpha, \beta)$ that is forbidden by $C$, the $\alpha$-th 2-chain of $x$ and the $\beta$-th 2-chain of $y$ are not in high mode at the same time.
We are left with ensuring that, for every variable, at least one of the corresponding 2-chains is in high mode. This is the aim of the next gadget.
3.4 Tube gadget
Fix a variable $x$. Without loss of generality, we can assume that the first chunk of every 2-chain for $x$ has not been used by the OR-checks introduced previously. Let $e^i$ be the attachment edge in the first chunk of the $i$-th 2-chain that is consistent with the high mode of the 2-chain; note that the edges $e^i$ lie next to each other (see Figure 5).
We create a grid, called henceforth a tube gadget, placed near the edges $e^i$, such that every edge $e^i$ can be attached to an edge of the grid in the standard way discussed earlier. See Figure 5 for an illustration.
Since such a grid admits a Hamiltonian cycle that traverses every edge in one of the “short” directions, if the $i$-th chain is traversed in high mode for some $i$, we can replace $e^i$ on the Hamiltonian path with a traversal along the aforementioned Hamiltonian cycle. This observation, together with Claim 11, proves the following claim.
If there exists an index $i$ such that the $i$-th 2-chain is traversed in high mode, then the Hamiltonian path of this 2-chain can be altered to visit every vertex of the grid.
On the other hand, any Hamiltonian path of the entire graph needs to traverse at least one 2-chain in high mode in order to visit the vertices of the grid.
The tube gadgets ensure that, for every variable, at least one corresponding 2-chain is in high mode. The first type of attached OR-checks ensures that at most one such 2-chain is in high mode. Thus, effectively, the gadgets introduced for a single variable can be in one state per domain value, determined by choosing the 2-chain that is in high mode; this corresponds to the choice of the value for the variable in an assignment.
The second type of attached OR-checks ensures that the values of neighboring variables satisfy the constraint that binds them, completing the proof of the correctness of the reduction.
To conclude, let us observe that every 2-chain is attached to one tube gadget and a constant number of OR-checks, and the whole gadget replacing a single variable takes part in a constant number of OR-checks (the domain size and the maximum degree of the Gaifman graph are constants). Thus, taking the length of the 2-chains to be a sufficiently large constant suffices. By leaving enough space between consecutive variable gadgets, we can ensure more than enough room for all connections. Consequently, the constructed graph is a subgraph of a $d$-dimensional grid of side length $O(n)$, and admits a Hamiltonian path if and only if the input CSP instance is satisfiable. This finishes the proof of Theorem 4.
We have shown a low-treewidth pattern covering statement for graphs of polynomial growth, with the subexponential term being $2^{O(k^{1-\frac{1}{1+d}} \log^2 k)}$, where $d$ is the growth rate of the graph class. An almost tight lower bound shows that, assuming ETH, one should not hope for a better term than $2^{O(k^{1-\frac{1}{d}})}$.
Two natural questions arise. The first one is to close the gap between $k^{1-\frac{1}{1+d}}$ and $k^{1-\frac{1}{d}}$; we conjecture that our lower bound is tight, and that the term $k^{1-\frac{1}{1+d}}$ in the running time bound of Theorem 2 is only a shortfall of our algorithmic techniques. The second one is to derandomize the algorithms of this work and of [19, 18]. The clustering step is the only step of the algorithm of [19, 18] that we do not know how to derandomize, despite its resemblance to the construction of Bartal’s HSTs [3], which were subsequently derandomized [6].
-  I. Abraham, C. Gavoille, A. V. Goldberg, and D. Malkhi. Routing in networks with low doubling dimension. In 26th IEEE International Conference on Distributed Computing Systems (ICDCS 2006), 4-7 July 2006, Lisboa, Portugal, page 75. IEEE Computer Society, 2006.
-  I. Abraham and D. Malkhi. Name independent routing for growth bounded networks. In P. B. Gibbons and P. G. Spirakis, editors, SPAA 2005: Proceedings of the 17th Annual ACM Symposium on Parallelism in Algorithms and Architectures, July 18-20, 2005, Las Vegas, Nevada, USA, pages 49–55. ACM, 2005.
-  Y. Bartal. On approximating arbitrary metrices by tree metrics. In J. S. Vitter, editor, Proceedings of the Thirtieth Annual ACM Symposium on the Theory of Computing, Dallas, Texas, USA, May 23-26, 1998, pages 161–168. ACM, 1998.
-  V. Blondel, K. Jung, P. Kohli, and D. Shah. Partition-merge: Distributed inference and modularity optimization. CoRR, abs/1309.6129, 2013.
-  T. H. H. Chan. Approximation Algorithms for Bounded Dimensional Metric Spaces. PhD thesis, Carnegie Mellon University, 2007. Available at http://i.cs.hku.hk/~hubert/thesis/thesis.pdf.
-  M. Charikar, C. Chekuri, A. Goel, S. Guha, and S. A. Plotkin. Approximating a finite metric by a small number of tree metrics. In 39th Annual Symposium on Foundations of Computer Science, FOCS ’98, November 8-11, 1998, Palo Alto, California, USA, pages 379–388. IEEE Computer Society, 1998.
-  R. H. Chitnis, M. Hajiaghayi, and D. Marx. Tight bounds for Planar Strongly Connected Steiner Subgraph with fixed number of terminals (and extensions). In SODA 2014, pages 1782–1801, 2014.
-  M. Cygan, F. V. Fomin, L. Kowalik, D. Lokshtanov, D. Marx, M. Pilipczuk, M. Pilipczuk, and S. Saurabh. Parameterized Algorithms. Springer, 2015.
-  E. D. Demaine, F. V. Fomin, M. T. Hajiaghayi, and D. M. Thilikos. Bidimensional parameters and local treewidth. SIAM J. Discrete Math., 18(3):501–511, 2004.
-  E. D. Demaine, F. V. Fomin, M. T. Hajiaghayi, and D. M. Thilikos. Fixed-parameter algorithms for -Center in planar graphs and map graphs. ACM Transactions on Algorithms, 1(1):33–47, 2005.
-  E. D. Demaine, F. V. Fomin, M. T. Hajiaghayi, and D. M. Thilikos. Subexponential parameterized algorithms on bounded-genus graphs and -minor-free graphs. J. ACM, 52(6):866–893, 2005.
-  E. D. Demaine and M. Hajiaghayi. The bidimensionality theory and its algorithmic applications. Comput. J., 51(3):292–302, 2008.
-  E. D. Demaine and M. Hajiaghayi. Linearity of grid minors in treewidth with applications through bidimensionality. Combinatorica, 28(1):19–36, 2008.
-  E. D. Demaine and M. T. Hajiaghayi. Fast algorithms for hard graph problems: Bidimensionality, minors, and local treewidth. In Graph Drawing, pages 517–533, 2004.
-  F. Dorn, F. V. Fomin, D. Lokshtanov, V. Raman, and S. Saurabh. Beyond bidimensionality: Parameterized subexponential algorithms on directed graphs. In STACS 2010, pages 251–262, 2010.
-  F. Dorn, F. V. Fomin, and D. M. Thilikos. Subexponential parameterized algorithms. Computer Science Review, 2(1):29–39, 2008.
-  F. Dorn, E. Penninkx, H. L. Bodlaender, and F. V. Fomin. Efficient exact algorithms on planar graphs: Exploiting sphere cut decompositions. Algorithmica, 58(3):790–810, 2010.
-  F. V. Fomin, D. Lokshtanov, D. Marx, M. Pilipczuk, M. Pilipczuk, and S. Saurabh. Subexponential parameterized algorithms for planar and apex-minor-free graphs via low treewidth pattern covering. In FOCS, 2016. To appear.
-  F. V. Fomin, D. Lokshtanov, D. Marx, M. Pilipczuk, M. Pilipczuk, and S. Saurabh. Subexponential parameterized algorithms for planar and apex-minor-free graphs via low treewidth pattern covering. CoRR, abs/1604.05999, 2016.
-  F. V. Fomin, D. Lokshtanov, V. Raman, and S. Saurabh. Subexponential algorithms for partial cover problems. Inf. Process. Lett., 111(16):814–818, 2011.
-  F. V. Fomin and D. M. Thilikos. Dominating sets in planar graphs: Branch-width and exponential speed-up. SIAM J. Comput., 36(2):281–309, 2006.
-  R. Gummadi, K. Jung, D. Shah, and R. S. Sreenivas. Computing the capacity region of a wireless network. In INFOCOM 2009. 28th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 19-25 April 2009, Rio de Janeiro, Brazil, pages 1341–1349. IEEE, 2009.
-  R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci., 63(4):512–530, 2001.
-  P. N. Klein and D. Marx. Solving planar -terminal cut in time. In Proceedings of the 39th International Colloquium of Automata, Languages and Programming (ICALP), volume 7391 of Lecture Notes in Comput. Sci., pages 569–580. Springer, 2012.
-  P. N. Klein and D. Marx. A subexponential parameterized algorithm for Subset TSP on planar graphs. In SODA 2014, pages 1812–1830, 2014.
-  N. Linial and M. E. Saks. Low diameter graph decompositions. Combinatorica, 13(4):441–454, 1993.
-  D. Lokshtanov, D. Marx, and S. Saurabh. Lower bounds based on the exponential time hypothesis. Bulletin of the EATCS, 105:41–72, 2011.
-  D. Marx and A. Sidiropoulos. The limited blessing of low dimensionality: when 1-1/d is the best possible exponent for d-dimensional geometric problems. In S. Cheng and O. Devillers, editors, 30th Annual Symposium on Computational Geometry, SOCG’14, Kyoto, Japan, June 08 - 11, 2014, page 67. ACM, 2014.
-  J. Nešetřil and P. O. de Mendez. Sparsity - Graphs, Structures, and Algorithms, volume 28 of Algorithms and combinatorics. Springer, 2012.
-  M. Pilipczuk, M. Pilipczuk, P. Sankowski, and E. J. van Leeuwen. Subexponential-time parameterized algorithm for Steiner Tree on planar graphs. In STACS 2013, pages 353–364, 2013.
-  M. Pilipczuk, M. Pilipczuk, P. Sankowski, and E. J. van Leeuwen. Network sparsification for Steiner problems on planar and bounded-genus graphs. In FOCS 2014, pages 276–285. IEEE Computer Society, 2014.
-  D. M. Thilikos. Fast sub-exponential algorithms and compactness in planar graphs. In ESA 2011, pages 358–369, 2011.