
# Confronting Intractability via Parameters

We would like to dedicate this paper to the 60th birthday of Michael R. Fellows. He is the alma mater of the area of parameterized complexity. The core ideas of this field germinated from the work of Langston and Fellows in the late 80's (such as FellowsL88nonc (); FellowsL89 (); FellowsL94 ()) and of Abrahamson, Fellows, Ellis, and Mata AbrahamsonFEM89 (), and then from a serendipitous meeting of Downey with Fellows in December 1990. But there is no doubt that much of its current vigor comes from Mike's vision and irrepressible energy. Happy 60th Birthday Mike!

Rodney G. Downey (School of Mathematics, Statistics and Operations Research, Victoria University, New Zealand; supported by the Marsden Fund of New Zealand) and Dimitrios M. Thilikos (Department of Mathematics, National & Kapodistrian University of Athens, Panepistimioupolis, GR-15784, Athens, Greece).
###### Abstract

One approach to confronting computational hardness is to try to understand the contribution of various parameters to the running time of algorithms and the complexity of computational tasks. Almost no computational tasks in real life are specified by their size alone. It is not hard to imagine that some parameters contribute more intractability than others and it seems reasonable to develop a theory of computational complexity which seeks to exploit this fact. Such a theory should be able to address the needs of practitioners in algorithmics. The last twenty years have seen the development of such a theory. This theory has a large number of successes in terms of a rich collection of algorithmic techniques, both practical and theoretical, and a fine-grained intractability theory. Whilst the theory has been widely used in a number of areas of application, including computational biology, linguistics, VLSI design, learning theory and many others, knowledge of the area is highly varied. We hope that this article will both show the basic theory and point to the wide array of techniques available. Naturally the treatment is condensed, and the reader who wants more should go to the texts of Downey and Fellows DowneyF99para (), Flum and Grohe FlumG06para (), Niedermeier Niedermeier06invi (), and the upcoming undergraduate text of Downey and Fellows DowneyF12fund ().

###### keywords:
Parameterized Complexity, Parameterized Algorithms
journal: Computer Science Review

## 1 Introduction

### 1.1 Preamble

There is little question that the computer has caused a profound change in society. At our fingertips are devices that are performing billions of computations a second, and the same is true of embedded devices in all manner of things. In the same way that much of mathematics was developed to understand the physics of the world around us, developing mathematics to understand computation is an imperative. It is little wonder that the famous P vs. NP question is one of the Clay Prize questions and is regarded by some as the single most important problem in mathematics. This question is one in complexity theory, which seeks to understand the resources (such as time or space) needed for computation.

The goal of this paper is to describe the methods and development of an area of complexity theory called parameterized complexity. This is a fine-grained complexity analysis which is often more attuned to analyzing computational questions which arise in practice than traditional worst case analysis. The idea here is that if you are working in some computational science and wish to understand what is feasible in your area, then perhaps this is the methodology you should know.

As articulated by the first author in Downey03para (), people working in the area tend to be a bit schizophrenic in that, even in computer science, paper reviews range from saying that “parameterized complexity is now well known so why are you including this introductory stuff?”, to the other extreme where reviewers say that they have never heard about it.

Whilst the use of parameterized complexity in applications has been growing at a very fast rate, there still remain significant groups who seem unaware of the techniques and ideas of the area. Everyone, apparently, is aware of NP-completeness and seems vaguely aware of randomized or approximation techniques. However, this seems not to be the case yet for parameterized complexity and/or algorithms. Moreover, it is certainly not the case that parameterized complexity has become part of the standard curriculum of theoretical computer science as something all researchers in complexity should know.

### 1.2 The issue

In the first book on the subject by Downey and Fellows DowneyF99para (), much is made of the issue "what is the role of complexity theory?". After the work of Alan Turing and others in the mid-20th century, we understand what it means for something to be computable. But for actual computations, it is not enough to know that the problem is computable in theory. If we need to compute something, it is not good if the running time is so large that no computer ever conceived will be able to run the computation in the lifetime of the universe. Hence, the goal is to understand what is feasible. The point is that for any combinatorial problem, we will deal with only a small finite fraction of the universe, so how do we quantify this notion of feasibility? Classically, this is achieved by identifying feasibility with polynomial-time computability.

When we first learn of this, many of us immediately think of polynomials with enormous degrees and say "surely that is not feasible!". We tend to forget that, because of such intuitive objections, the original suggestions that asymptotic analysis was a reasonable way to measure complexity and polynomial time was a reasonable measure of feasibility, were initially quite controversial. The reasons that the now central ideas of asymptotic polynomial time (P-time) and NP-completeness have survived the test of time are basically two. Firstly, the methodologies associated with polynomial-time and P-time reductions have proved to be readily amenable to mathematical analysis in most situations. In a word, P has good closure properties. It offers a readily negotiable mathematical currency. Secondly, and even more importantly, the ideas seem to work in practice: for "natural" problems the universe seems kind in the sense that (at least historically) if a "natural" computational problem is in P then usually we can find an algorithm having a polynomial running time with small degree and small constants. This last point, while seeming a religious, rather than a mathematical, statement, seems the key driving force behind the universal use of P as a classification tool for "natural" problems.

Even granting this, the use of things like NP-completeness is a very coarse tool. We are given some problem and show that it is NP-complete. What to do? As pointed out in Garey and Johnson GareyJ79comp (), this is only an initial foray. All such a result says is that our hope for an exact algorithm for the general problem which is feasible is likely in vain.

The problem with the standard approach of showing NP-completeness is that it offers no methodology of seeking practical and feasible algorithms for more restricted versions of the problem which might be both relevant and feasible. As we will see, parameterized complexity seeks to have a conversation with the problem which enables us to do just that.

### 1.3 The idea

The idea behind parameterized complexity is that we should look more deeply into the actual structure of the problem in order to seek some kind of hidden feasibility. Classical complexity views a problem as an instance and a question. The running time is specified by the input’s size.

Question: When will the input of a problem coming from "real life" have no more structure than its size?

For real-world computations we always know more about the problem. The problem is planar, the problem has small width, the problem only concerns small values of the parameters. Cities are regular, objects people construct tend to be built in some comprehensible way from smaller ones constructed earlier, people's minds do not tend to work with more than a few alternations of quantifiers (as anyone who has taught analysis will know!), etc. Thus, why not have a complexity theory which exploits these structural parameters? Why not have a complexity theory more fine-tuned to actual applications?

Before we launch into examples and definitions, we make some remarks about what this article offers. The area of parameterized complexity (as an explicit discipline) has been around for about 20 years. The researchers who have adopted the methodology most strongly are those (particularly in Europe) who are working in applications. What has emerged is a very rich collection of distinctive positive techniques which should be known to researchers in computational applications in areas as diverse as linguistics, biology, cognition, physics, etc. The point here is that once you focus on an extended "computational dialog" with the problem, new techniques emerge around it. These techniques vary from simple (and practical) local reduction rules (for example, Karsten Weihe's solution to the "Europe train station problem" Weihe98cove ()) to some which are highly theoretical and use some very deep mathematics (for example, the recent proof that topological embedding is fixed-parameter tractable by Grohe, Kawarabayashi, Marx, and Wollan GroheKMW10find (), which uses most of the structure theory of the Graph Minors project of Robertson and Seymour RobertsonS85 ()).

The second part of this article looks at these positive techniques systematically, and we hope that the reader will find useful tools there. This area is still rapidly developing, and there are many internal contests amongst researchers to develop techniques to beat existing bounds. For further details, the reader can see the web site http://fpt.wikidot.com/.

The first part of the article is devoted to limitations. It is worth mentioning that the area is concerned with tractability within polynomial time, hence classical methods do not readily apply. The question is "how does the problem reside in polynomial time?". To illustrate this, we begin with the basic motivating examples.

Vertex Cover (current practice in the area is often to write this as Para-Vertex Cover, or sometimes p-Vertex Cover, but this seems unnecessarily notational; we will usually suppress the aspect of the problem being regarded as a parameter, or even that the problem is considered as a parameterized one when the context is clear, as we believe that this will aid the general readability of the paper)
Instance:
A graph G.
Parameter: A positive integer k.
Question:  Does G have a vertex cover of size k? (A vertex cover of a graph G is a set V′ ⊆ V(G) such that for all edges uv of G, either u ∈ V′ or v ∈ V′.)

Dominating Set
Instance: A graph G.
Parameter: A positive integer k.
Question: Does G have a dominating set of size k? (A dominating set is a set D ⊆ V(G) where, for each v ∈ V(G) \ D, there is a u ∈ D such that uv ∈ E(G).)
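To make the two definitions concrete, here is a minimal sketch (ours, not from the paper) that checks them directly, assuming a graph represented as a dict mapping each vertex to its set of neighbours:

```python
def is_vertex_cover(graph, cover):
    """Every edge uv has at least one endpoint in `cover`."""
    return all(u in cover or v in cover
               for u in graph for v in graph[u])

def is_dominating_set(graph, dom):
    """Every vertex is in `dom` or has a neighbour in `dom`."""
    return all(v in dom or graph[v] & dom for v in graph)

# The 4-cycle a-b-c-d-a separates the two notions:
C4 = {"a": {"b", "d"}, "b": {"a", "c"},
      "c": {"b", "d"}, "d": {"a", "c"}}

assert is_vertex_cover(C4, {"a", "c"})       # opposite corners cover every edge
assert not is_vertex_cover(C4, {"a", "b"})   # edge cd is left uncovered
assert is_dominating_set(C4, {"a", "b"})     # yet every vertex touches {a, b}
```

The last two assertions show that a dominating set need not be a vertex cover, even on so small a graph.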

Of course both of these problems (without the parameter) are famously NP-complete by the work of Karp Karp74 (). With the parameter k fixed, both of the problems are in polynomial time, simply by trying all of the roughly n^k subsets of size k, where n is the number of vertices of the input graph G. What we now know is that Vertex Cover has an algorithm running in time O(1.2738^k + kn) (see ChenKX10impr ()) (i.e., linear time for a fixed k, and with an additive component that is mildly exponential in k), whereas the only known algorithms for Dominating Set essentially try all possibilities, which takes time more or less n^{k+1}. Moreover, we remark that the methods for solving Vertex Cover are simple reduction rules which are "industrial strength" in that they run extremely well in practice (e.g. CDRST03 (); LangstonPSSV08inno ()), even for values of k beyond what would seem reasonable. The reader might wonder: does this matter? Table 1 from the original book DowneyF99para () illustrates the difference between a running time of n^{k+1} (that is, where exhaustive search is necessary) and a running time of 2^k ⋅ n. The latter has been achieved for several natural parameterized problems. (In fact, as we have seen above, the constant can sometimes be significantly improved and its contribution can sometimes even be additive.)
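The contrast can be seen in a few lines of code. Below is a minimal sketch of the simplest 2^k search-tree algorithm for Vertex Cover (not the best known algorithm cited above): pick any remaining edge; one of its two endpoints must be in any cover, so branch on both choices. The tree has at most 2^k leaves, so for a fixed k the running time is linear in the number of edges, in contrast with trying all subsets of size k.

```python
def has_vertex_cover(edges, k):
    """Decide whether the given edge list has a vertex cover of size <= k.

    Bounded search tree: any cover must contain u or v for the first
    remaining edge (u, v), giving a binary tree of depth at most k,
    i.e. O(2^k * m) time for m edges.
    """
    if not edges:
        return True            # nothing left to cover
    if k == 0:
        return False           # edges remain but the budget is spent
    u, v = edges[0]
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

# A 5-cycle needs 3 vertices to cover all of its edges:
C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
assert not has_vertex_cover(C5, 2)
assert has_vertex_cover(C5, 3)
```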

Even before we give the formal definitions, the intent of the definitions will be clear. If we have algorithms running in time f(k) ⋅ n^c, for some constant c that is independent of k, we regard this as being (fixed-parameter) tractable, and if the exponent of n increases with k, then this problem is regarded as being intractable. In the latter case, we cannot usually prove intractability outright, as that would separate P from NP, as it would in the case of Dominating Set, for example. But we can have a completeness programme in the same spirit as NP-completeness.

### 1.4 Connections with classical complexity

We mention that the methodology has deep connections with classical complexity. For example, one of the more useful assumptions for establishing lower bounds in classical complexity is what is called the exponential time hypothesis (ETH), which states that not only is n-variable 3SAT not in polynomial time, but in fact it does not have an algorithm running in subexponential time 2^{o(n)} (Impagliazzo, Paturi and Zane ImpagliazzoPZ01whic ()). With this hypothesis, many lower bounds can be made rather sharp. Recently Chen and Grohe demonstrated an isomorphism between subexponential time complexity and parameterized complexity ChenG07anis (). The connection between subexponential time complexity and parameterized complexity was noticed long ago by Abrahamson, Downey and Fellows AbrahamsonDF95fixed (). Another notable connection is with polynomial time approximation schemes (PTAS's). Here the idea is to give a solution which is within a factor of (1 + ε) of the correct solution. Often the PCP theorem allows us to show that no such approximation scheme exists unless P = NP. But sometimes approximation schemes do exist, yet have awful running times. For example, here is a list from Downey Downey03para ():

• Arora Arora96poly () gave a PTAS for Euclidean Tsp

• Chekuri and Khanna ChekuriK00apta () gave a PTAS for Multiple Knapsack

• Shamir and Tsur ShamirT98them () gave a PTAS for Maximum Subforest

• Chen and Miranda ChenM01apol () gave a PTAS for General Multiprocessor Job Scheduling

• Erlebach et al. ErlebachJS05poly () gave a PTAS for Maximum Independent Set for geometric graphs.

Table 2 below calculates some running times for these PTAS’s with a 20% error.

Downey Downey03para () argues as follows:

"By anyone's measure, such running times are bad, and some are far worse. The optimist would argue that these examples are important in that they prove that PTAS's exist, and are but a first foray. The optimist would also argue that with more effort and better combinatorics, we will be able to come up with practical PTAS's for the problems. For example, Arora Arora97near () also came up with another PTAS for Euclidean Tsp, but this time it was nearly linear and practical.

But this situation is akin to P vs NP. Why not argue that some exponential algorithm is just the first one, and that with more effort and better combinatorics we will find a feasible algorithm for Satisfiability? What if a lot of effort is spent in trying to find a practical PTAS without success? As with P vs NP, what is desired is either an efficient PTAS (an Efficient Polynomial-Time Approximation Scheme (EPTAS) is a (1 + ε)-approximation algorithm that runs in f(1/ε) ⋅ n^c steps for a constant c; if, additionally, f is a polynomial function then we say that we have a Fully Polynomial-Time Approximation Scheme (FPTAS)), or a proof that no such PTAS exists (the same issue can also be raised if we consider FPTAS's instead of EPTAS's). A primary use of NP-completeness is to give compelling evidence that many problems are unlikely to have better than exponential algorithms generated by complete search."

The methods of parameterized complexity allow us to address this issue. Clearly, the bad running times are caused by the presence of 1/ε in the exponent of the polynomial. What we could do is parameterize the problem by taking the parameter to be k = 1/ε and then relate it to a kind of core problem (see Subsection 3.5). If we can do this, then not only is the particular algorithm infeasible, but moreover, there cannot be a feasible algorithm unless something unlikely (like a miniature P = NP) occurs.

### 1.5 Other coping strategies

Finally, before we move to formal definitions, we mention other strategies which attempt to understand what is feasible. One that springs to mind is the theory of average-case complexity. This has not really been widely applied, as the relevant input distributions seem rather difficult to identify. Similar comments apply to the theory of smoothed analysis.

In fact, one of the reasons that parameterized complexity has been used so often in practice is that it is widely applicable. We also mention that it can often be used to explain the unexpected tractability of algorithms. Sometimes they seem to work because of underlying hidden parameters in the input. For example, the number of "lets" in some structured programming languages is in practice usually bounded by some constant, and sometimes engineering considerations ensure that, for example, the number of wafers in VLSI design is small. It is even conceivable that there might be a parametric explanation of the tractability of the Simplex Algorithm.

If the reader finds this all useful, then we refer him/her to two recent issues of The Computer Journal Downey08thec () devoted to aspects and applications of parameterized complexity, to the survey of Downey and McCartin RodneyM04some () on parameterized algorithms, the (somewhat dated) survey of Downey Downey03para () for issues in complexity, and other articles such as DowneyFS99thev (); DowneyFS97survey (); Fell0ows02survey (); Fellows01para (); Fellows03survey (); DowneyFS99para (), as well as the books by Downey and Fellows DowneyF99para (), Niedermeier Niedermeier06invi () and Flum and Grohe FlumG04para ().

### 1.6 Organization and notation

The paper is organized as follows:

In Section 2 we will give the basic definitions and some examples to show the kinds of parameterizations we can look at. We consider it fortunate that a problem can have different parameterizations with different complexities.

In Section 3, we will introduce some of the basic hardness classes, and in particular the main standard of hardness, the class W[1]. The gold standard is established by an analog of the Cook-Levin Theorem discussed in Subsection 3.2. Parameterized reductions are more refined than the corresponding classical ones, and, for instance, it would appear that natural parameterized versions of 3-CNF Sat and CNF Sat do not have the same parameterized complexity. In Subsection 3.3 we see how this gives rise to a hierarchy based on logical depth, the W-hierarchy. We also mention the Flum-Grohe hierarchy, which is another parameterized hierarchy of problems based on another measure of logical depth. In Subsections 3.5 and 3.6 we look at PTAS's, approximation, and lower bounds based on strong parameterized hypotheses. In Subsection 3.7 we look at other applications to classical questions which are sensitive to combinatorics in polynomial time, and in Subsections 3.8 and 3.9 we look at other parameterized classes such as counting classes and the notion of parameterized approximation. (The latter seeks, for example, an FPT-algorithm which on input (G, k) either delivers a "no dominating set of size k" answer or produces a dominating set of size at most g(k) for some fixed function g.) Subsection 3.10 deals with an important recent development concerning one of the most important practical techniques: kernelization. Here one takes a problem specified by (x, k) and produces, typically in polynomial time, a small version of the problem: (x′, k′) such that (x, k) is a yes-instance iff (x′, k′) is a yes-instance, and moreover |x′| ≤ g(k) and usually k′ ≤ k. This technique is widely used in practice as it usually relies on a number of easily implementable reduction rules, as we will discuss in Subsection 4.4. We will look at recent techniques which say when this technique can be used to give (x′, k′) as above with |x′| polynomially bounded in k. The final part of the complexity section deals with some of the other classes we have left out.
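As a concrete taste of kernelization, here is a sketch of the classical reduction rules of Buss for Vertex Cover (a standard textbook example, not specific to this paper): a vertex of degree greater than k must belong to any cover of size at most k, and once no such vertex remains, k vertices can cover at most k² edges.

```python
def buss_kernel(edges, k):
    """Kernelize a Vertex Cover instance (edges, k).

    Returns an equivalent instance (edges', k') with at most k'^2 <= k^2
    edges, or None when the answer is already known to be 'no'.
    """
    edges = {frozenset(e) for e in edges}
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high or k == 0:
            break
        # A vertex of degree > k must be in every cover of size <= k.
        edges = {e for e in edges if high[0] not in e}
        k -= 1
    if len(edges) > k * k:
        return None            # k low-degree vertices cannot cover them all
    return edges, k

# Five disjoint edges cannot be covered by 2 vertices; the kernel detects this:
assert buss_kernel([(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)], 2) is None
```

Whatever remains after kernelization can be handed to any exact method; its size depends on k alone, not on the original n.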

In Section 4, we turn to techniques for the design of parameterized algorithms. We will focus mainly on graphs; for lack of space we do not discuss too many applications, except en passant.

The principal contribution to high running times in classical algorithms comes from branching and large search trees. For this reason we begin by looking at methods which restrict the branching in Subsection 4.1, including bounded search trees and greedy localization. Then in Subsection 4.1.3 we look at the use of automata and logic in the design of parameterized algorithms.

In turn this leads to meta-theoretical methods such as applications of Courcelle's theorem on graphs of bounded treewidth, and other methods such as local treewidth and First Order Logic (FOL). This is discussed in Subsections 4.2.1 and 4.2.2.

In Subsections 4.2.3 and 4.2.4 we look at the highly impractical, but powerful, methods emerging from the Graph Minors project. Later, we examine how these methods can be sped up.

Having worked in the stratosphere of algorithm design we return to focus on singly-exponential algorithm design techniques such as iterative compression, bidimensionality theory, and then in Subsection 4.4 move to kernelization, and finish with variations.

Of course, often we will use all of this in combination. For example, we might kernelize, then begin a search tree of some bounded size, and then re-kernelize the smaller graphs. It is often the case that this method is provably faster, and certainly this is how it is often done in practice. However, we do not have space in this already long paper to discuss such refinements in more detail.

## 2 Preliminaries

### 2.1 Basic definitions

The first thing to do is to define a proper notion of tractability for parameterized problems. This induces the definition of the parameterized complexity class FPT, namely the class of fixed-parameter tractable problems.

###### Definition 2.1.

A parameterized problem (also called a parameterized language) is a subset L of Σ* × ℕ, where Σ is some alphabet. In an input (x, k) of a parameterized problem, we call x the main part of the input and k the parameter of the input. We also agree that n = |x|. We say that L is fixed parameter tractable if there exists a function f and an algorithm deciding whether (x, k) ∈ L in

 O(f(k) ⋅ n^c)

steps, where c is a constant not depending on the parameter of the problem. We call such an algorithm an FPT-algorithm or, more concretely, to visualize the choice of f and c, we say that L ∈ f(k) ⋅ n^c-FPT. We define the parameterized class FPT as the one containing all parameterized problems that can be solved by an FPT-algorithm.

Observe that an apparently more demanding definition of an FPT-algorithm would ask for algorithms running in O(f(k) + n^c) steps, since then the exponential part would be additive rather than multiplicative. However, this would not define a different parameterized complexity class. To see this, suppose that some parameterized problem L can be solved by an algorithm that can decide whether some (x, k) belongs in L in f(k) ⋅ n^c steps. In case f(k) ≤ n, the same algorithm requires at most n^{c+1} steps, while if f(k) > n, the algorithm runs in less than f(k)^{c+1} steps. In both cases, it solves L in at most g(k) + n^{c′} steps where g(k) = f(k)^{c+1} and c′ = c + 1.
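The argument amounts to the one-line estimate (with g(k) = f(k)^{c+1} and c′ = c + 1):

```latex
f(k)\cdot n^{c}\;\le\;\max\{f(k),\,n\}^{c+1}\;\le\;f(k)^{c+1}+n^{c+1}\;=\;g(k)+n^{c'},
```

so the multiplicative and additive forms of the definition yield the same class FPT.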

Time bounds for parameterized algorithms have two parts. The f(k) part is called the parameter dependence and is typically a super-polynomial function. The n^c part is a polynomial function and we will call it the polynomial part. While in classic algorithm design there is only the polynomial part to improve, in FPT-algorithms it appears to be more important to improve the parameter dependence. Clearly, for practical purposes, a 2^k ⋅ n-step FPT-algorithm is more welcome than one running in 2^{2^k} ⋅ n steps.

### 2.2 Nomenclature of parameterized problems

Notice that a problem of classic complexity whose input contains several integers has several parameterizations, depending on which one is chosen to be the parameter. We complement the name of a parameterized problem so as to indicate the parameterization that we choose. In many cases, the parameterization refers to a property of the input. As a driving example we consider the following problem:

Dominating Set
Instance:
a graph G and an integer k.
Question: does G have a dominating set of size at most k?

Dominating Set has several parameterizations. The most popular one is the following one:

k-Dominating Set
Instance:
a graph G and an integer k.
Parameter: k.
Question: does G have a dominating set of size at most k?

Moreover, one can define parameterizations that do not depend on integers appearing explicitly in the input of the problem. For this, one may set up a “promise” variant of the problem based on a suitable restriction of its inputs. That way, we may define the following parameterization of Dominating Set:

Δ-Dominating Set
Instance:
a graph G with maximum degree Δ and an integer k.
Parameter: Δ.
Question: does G have a dominating set of size at most k?

In the above problem the promise-restriction is "with maximum degree Δ". In general, we often omit this restriction as it becomes clear from the chosen parameterization. Finally, we stress that we can define a parameterization by combining a promise restriction with parameters that appear in the input. As an example, we can define the following parameterization of Dominating Set:

k-Δ-Dominating Set
Instance:
a graph G with maximum degree Δ and an integer k.
Parameter: k + Δ.
Question: does G have a dominating set of size at most k?

Finally, the promise-restriction can be just a property of the main part of the input. A typical example is the following parameterized problem.

k-Planar Dominating Set
Instance:
a planar graph G and an integer k.
Parameter: k.
Question: does G have a dominating set of size at most k?

Certainly, different parameterizations may belong to different parameterized complexity classes. For instance, k-Δ-Dominating Set belongs to FPT, using the bounded search tree method presented in Subsection 4.1.1. Also, as we will see in Subsection 4.3.3, k-Planar Dominating Set belongs to FPT. On the other side, Δ-Dominating Set is not in FPT, unless P = NP. This follows from the well-known fact that Dominating Set is NP-complete for graphs of bounded maximum degree, and therefore not even an algorithm with running time depending arbitrarily on Δ but polynomially on n is expected to exist for this problem. The parameterized complexity of k-Dominating Set requires the definition of the W-hierarchy (defined in Subsection 3.3). While the problem can be solved in roughly n^{k+1} steps, it is known to be complete for the second level W[2] of the W-hierarchy. This indicates that an FPT-algorithm is unlikely to exist for this parameterization.
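The bounded-search-tree argument for k-Δ-Dominating Set can be sketched as follows (a minimal illustration, assuming adjacency is given as a dict of neighbour sets): any undominated vertex v must be dominated by some vertex of its closed neighbourhood N[v], which has at most Δ + 1 elements, so branching on these choices gives a tree of size at most (Δ + 1)^k.

```python
def has_dominating_set(adj, k):
    """Search-tree test for a dominating set of size <= k in a graph of
    maximum degree Delta; the search tree has at most (Delta + 1)^k leaves."""
    def solve(dominated, budget):
        undominated = [v for v in adj if v not in dominated]
        if not undominated:
            return True
        if budget == 0:
            return False
        v = undominated[0]
        # Some vertex of the closed neighbourhood N[v] must be chosen.
        for u in {v} | adj[v]:
            if solve(dominated | {u} | adj[u], budget - 1):
                return True
        return False
    return solve(set(), k)

# On the 4-cycle, no single vertex dominates everything, but two do:
C4 = {"a": {"b", "d"}, "b": {"a", "c"},
      "c": {"b", "d"}, "d": {"a", "c"}}
assert not has_dominating_set(C4, 1)
assert has_dominating_set(C4, 2)
```

For Δ-Dominating Set this branching is of no help, since the budget k is not part of the parameter.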

## 3 Parameterized complexity

### 3.1 Basics

In this section we will look at some basic methods of establishing apparent parameterized intractability. We begin with the class W[1] and the W-hierarchy, and later look at variations, including the A- and M-hierarchies, connections with approximation, bounds on kernelization and the like.

The role of the theory of NP-completeness is to give some kind of outer boundary for tractability. That is, if we identify P with “feasible”, then showing that a problem is NP-complete would suggest that the problem is computationally intractable. Moreover, we would believe that a deterministic algorithm for the problem would require worst-case exponential time.

However, showing that some problem is in P does not say that the problem is feasible. Good examples are the standard parameterizations of Dominating Set or Independent Set, for which we know of no algorithm significantly better than trying all possibilities. For a fixed k, trying all possibilities takes time O(n^{k+1}), which is infeasible for large n and reasonable k, in spite of the fact that the problem is in P for each fixed k. Of course, we would like to prove that there is no FPT algorithm for such a problem, but, as with classical complexity, the best we can do is to formulate some sort of completeness/hardness program. Showing that k-Dominating Set is not in FPT would also show, as a corollary, that P ≠ NP.

A hardness program needs three things. First, it needs a notion of easiness, which we have: FPT. Second, it needs a notion of reduction, and third, it needs some core problem which we believe to be intractable.

Following naturally from the concept of fixed-parameter tractability is an appropriate notion of reducibility that expresses the fact that two parameterized problems have comparable parameterized complexity. That is, if problem (language) L reduces to problem (language) L′, and problem L′ is fixed-parameter tractable, then so too is problem L.

###### Definition 3.1 (Downey and Fellows DowneyF95fixe-I (); DowneyF95fixe-II ()-Parameterized reduction).

A parameterized reduction (strictly speaking, this is a parameterized many-one reduction, the analog of the classical Karp reduction; other variations, such as parameterized Turing reductions, are possible, and in non-uniform versions the function g can be arbitrary rather than computable; we give the reduction most commonly met) from a parameterized language L to a parameterized language L′ (symbolically L ≤_fpt L′) is an algorithm that computes, from input consisting of a pair (x, k), a pair (x′, k′) such that:

1. (x, k) ∈ L if and only if (x′, k′) ∈ L′,

2. k′ = g(k), where g is a computable function depending only on k, and

3. the computation is accomplished in time f(k) ⋅ n^c, where n = |x| is the size of the main part of the input (x, k), k is the parameter, c is a constant (independent of both n and k), and f is an arbitrary function depending only on k.

If L ≤_fpt L′ and L′ ≤_fpt L, then we say that L and L′ are FPT-equivalent, and write L ≡_fpt L′.

A simple example of an FPT reduction is the fact that k-Independent Set ≡_fpt k-Clique. (Henceforth, we will usually drop the parameter from the name of problems when the parameter is implicit from the context.) Namely, G has a clique of size k iff the complement of G has an independent set of size k. A simple non-example is the classical reduction of Independent Set to Vertex Cover: G has an independent set of size k iff G has a vertex cover of size n − k, where n is the number of vertices of G. The point of this last example is that the parameter n − k is not fixed.
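As an illustration (a sketch with an assumed representation: vertices 0..n−1, edges as 2-element frozensets), the reduction between k-Clique and k-Independent Set is just graph complementation with the parameter left untouched, so k′ depends only on k and the computation is plainly polynomial:

```python
def clique_to_independent_set(n, edges, k):
    """Map a k-Clique instance (G, k) to a k-Independent Set instance
    (complement of G, k): a set of vertices is a clique in G iff it is
    independent in the complement, and the parameter is unchanged."""
    edges = {frozenset(e) for e in edges}
    complement = {frozenset((u, v))
                  for u in range(n) for v in range(u + 1, n)
                  if frozenset((u, v)) not in edges}
    return n, complement, k

# The path 0-1-2 has complement consisting of the single edge {0, 2}:
n, comp, k = clique_to_independent_set(3, [(0, 1), (1, 2)], 2)
assert comp == {frozenset((0, 2))} and k == 2
```

By contrast, the Independent Set to Vertex Cover map would have to output the parameter n − k, which grows with the main part of the input and so violates condition 2.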

### 3.2 An analog of the Cook-Levin Theorem

We need the final component for our program to establish the apparent parameterized intractability of computational problems: the identification of a “core” problem to reduce from.

In classical NP-completeness this is the heart of the Cook-Levin Theorem: the argument that a nondeterministic Turing machine is such an opaque object that it does not seem reasonable that we can determine in polynomial time whether it has an accepting path from amongst the exponentially many possible paths. Building on earlier work of Abrahamson, Ellis, Fellows and Mata AbrahamsonFEM89 (), the idea of Downey and Fellows was to define reductions and certain core problems which have this property. In the fundamental papers DowneyF95fixe-I (); DowneyF95fixe-II (), the core problem was a parameterized version of Circuit Acceptance. The classic version of this problem has as instance a boolean circuit, and the question is whether some value assignment to the input variables leads to output yes. As is well known, this corresponds to Turing machine acceptance, at least classically. Downey and Fellows DowneyF95fixe-II () combined with Cai, Chen, Downey and Fellows CaiCDF97onth () allows for a Turing machine core problem:

Short Non-Deterministic Turing Machine Acceptance
Instance:
A nondeterministic Turing machine M (of arbitrary degree of non-determinism).
Parameter: A positive integer k.
Question:  Does M have a computation path accepting the empty string in at most k steps?

In the same sense that NP-completeness of n^c-Step Non-deterministic Turing Machine Acceptance, where n^c is a polynomial in the size of the input, provides us with very strong evidence that no NP-complete problem is likely to be solvable in polynomial time, using Short Non-Deterministic Turing Machine Acceptance as a hardness core provides us with very strong evidence that no parameterized language L, with Short Non-Deterministic Turing Machine Acceptance ≤_fpt L, is likely to be fixed-parameter tractable. That is, if we accept the idea behind the basis of NP-completeness, then we should also accept that the Short Non-Deterministic Turing Machine Acceptance problem is not solvable in time f(k)·n^c for any fixed c. Our intuition would again be that all computation paths would need to be tried.

We remark that the hypothesis “Short Non-Deterministic Turing Machine Acceptance is not in FPT” is somewhat stronger than P ≠ NP. Furthermore, connections between this hypothesis and classical complexity have recently become apparent. If Short Non-Deterministic Turing Machine Acceptance is in FPT, then we know that the Exponential Time Hypothesis (ETH), which states that n-variable 3Sat is not in subexponential time (DTIME(2^{o(n)})), fails. See Impagliazzo, Paturi and Zane ImpagliazzoPZ01whic (), Cai and Juedes CaiJ03onth (), and Estivill-Castro, Downey, Fellows, Prieto-Rodriguez and Rosamond DowneyCFPR03cutt () (and our later discussion of M[1]) for more details. As we will later see, the ETH is a bit stronger than the hypothesis that Short Non-Deterministic Turing Machine Acceptance is not in FPT, but is equivalent to an apparently stronger hypothesis that “FPT ≠ M[1]”. The precise definition of M[1] will be given later, but the idea here is that, as most researchers believe, not only is P ≠ NP, but problems like Non-Deterministic Turing Machine Acceptance require substantial search of the available search space, and hence do not have algorithms running in deterministic subexponential time such as 2^{o(n)}.

The class of problems that are FPT-reducible to Short Non-Deterministic Turing Machine Acceptance is called W[1], for reasons discussed below. The parameterized analog of the classical Cook-Levin theorem (that CNF Sat is NP-complete) uses the following parameterized version of CNF Sat:

Weighted CNF Sat
Instance:
A CNF formula X (i.e., a formula in Conjunctive Normal Form).
Parameter: A positive integer k.
Question:  Does X have a satisfying assignment of weight k?

Here the weight of an assignment is its Hamming weight, that is, the number of variables set to be true.
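For intuition, the obvious decision procedure for Weighted CNF Sat tries all weight-k assignments, of which there are C(n,k) = O(n^k). The sketch below is our own illustration (with a DIMACS-style clause encoding as an assumption); its n^k behaviour is exactly the kind of dependence that hardness for the W-classes suggests cannot be improved to f(k)·n^c.

```python
from itertools import combinations

def weighted_cnf_sat(clauses, n_vars, k):
    """Decide Weighted CNF Sat by trying every weight-k assignment.

    A clause is a list of nonzero ints: +i means variable i, -i its
    negation (DIMACS-style).  The search over all k-subsets of the
    variables gives an O(n^k) algorithm, but nothing like f(k)*n^c.
    """
    for true_vars in combinations(range(1, n_vars + 1), k):
        truth = set(true_vars)  # variables set to true; all others false
        if all(any((lit > 0) == (abs(lit) in truth) for lit in clause)
               for clause in clauses):
            return True
    return False
```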

Similarly, we can define Weighted q-CNF Sat, where the clauses have only q variables and q is some number fixed in advance. Weighted q-CNF Sat, for any fixed q ≥ 2, is complete for W[1].

###### Theorem 3.2 (Downey and Fellows DowneyF95fixe-II () and Cai, Chen, Downey and Fellows CaiCDF97onth ()).

For any fixed q ≥ 2, Weighted q-CNF Sat ≡_fpt Short Non-Deterministic Turing Machine Acceptance.

As we will see, there are many well-known problems hard for W[1]. For example, Clique and Independent Set are basic W[1]-complete problems. An example of a W[1]-hard problem is Subset Sum, which classically has as input a set X of integers, a positive integer k and an integer s, and asks if there is a set of members of X which add to s. In parametric form, the question asks whether there exist k members of X which add to s. A similar problem is Exact Cheap Tour, which asks for a weighted graph G whether there is a tour through k nodes of G of total weight at most S. Another example is Finite State Automata Intersection, which has parameters k and m and asks, for a set of finite state automata A_1, …, A_m over an alphabet Σ, whether there is a string of length k accepted by each of the A_i, for i ≤ m. There are a number of problems related to the Longest Common Subsequence which are hard according to various parameterizations, and notably a W[1]-hard parameterized version of Steiner Tree. Here, and later, we will refer the reader to the various monographs and compendia, such as the one currently maintained by Marco Cesati.

The original proof of Theorem 3.2 involves a generic simulation of a Turing machine by a circuit and then in the other direction involves combinatorial arguments to have parametric reductions from certain kinds of circuits (“weft 1”, see below) to Independent Set.

Since the original Downey-Fellows work, hundreds of problems have been shown to be W[1]-hard or W[1]-complete. A classical graph-theoretical problem which is W[1]-complete (see DowneyF99para (); DowneyF95fixe-II ()) is the following.

Red-Blue Nonblocker
Instance:
A 2-coloured graph G = (V, E) of (fixed) maximum degree d, and a positive integer k.
Parameter: A positive integer k.
Question: Is there a set N of red vertices of cardinality k such that each blue vertex has at least one neighbour not belonging to N?

This problem seems to be useful for proving hardness results, such as the proof that various coding problems are hard for W[1] in Downey, Fellows, Vardy, and Whittle DowneyFVW99thep (). In that paper, using a reduction from Red-Blue Nonblocker, Downey et al. show the following natural problem is hard for W[1].

Weight Distribution
Instance: A binary matrix A.
Parameter: An integer k.
Question: Is there a set of at most k columns of A that sum to the all-zero vector (over GF(2))?
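A brute-force decision procedure for Weight Distribution makes the parameterized search space explicit; the sketch below is our own illustration, not an algorithm from the coding-theory literature, and again the exponent depends on k.

```python
from itertools import combinations

def weight_distribution(matrix, k):
    """Is there a nonempty set of at most k columns of a binary matrix
    summing to the all-zero vector over GF(2)?  Brute force over column
    subsets; the exponent of the running time grows with k, which is
    the behaviour W[1]-hardness predicts cannot be avoided."""
    n_cols = len(matrix[0])
    for size in range(1, k + 1):
        for cols in combinations(range(n_cols), size):
            if all(sum(row[c] for c in cols) % 2 == 0 for row in matrix):
                return True
    return False
```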

As a consequence, the related problem of Maximum Likelihood Decoding, where there is a target vector to be attained, is also hard for W[1]. Two notorious open questions remain in this area.

Shortest Vector
Instance: A basis for a lattice L.
Parameter: An integer k.
Question: Is there a nonzero vector x ∈ L such that ‖x‖ ≤ k?

Even Set (also known as Minimum Distance)
Instance: A red/blue graph G.
Parameter: A positive integer k.
Question: Is there a nonempty set S of at most k red vertices such that each blue vertex has an even number of neighbours in S?

Both of these are conjectured in DowneyF99para () to be W[1]-hard. The unparameterized version of the latter is known to be NP-complete, and that of the former is famously open, a question of Peter van Emde Boas from 1980. We refer the reader to DowneyFVW99thep () for more details and other related problems such as Theta Series.

### 3.3 The W-hierarchy

The original theorems and hardness classes were first characterized in terms of boolean circuits of a certain structure. These characterizations lend themselves to easier membership proofs, as we now see. This uses the model of a decision circuit. This has boolean variables as inputs, and a single output. It has and-gates, or-gates and inverters. We designate a gate as large or small depending on the fan-in allowed, where small will be some fixed number. For example, a 3CNF formula can be modeled by a circuit consisting of input variables (of unbounded fanout), one for each formula variable, possibly inverters below the variables, and a large and-gate of small or-gates (of size 3) with a single output line. For a decision circuit, the depth is the maximum number of gates on any path from the input variables to the output line, and the weft is the “large-gate depth”. More precisely, the weft is defined to be the maximum number of large gates on any path from the input variables to the output line, where a gate is called large if its fan-in exceeds some pre-determined bound.
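The weft can be read off the circuit DAG directly. The following sketch is our own illustration (the dict-based circuit encoding is an assumption, not a standard format): it computes the maximum number of large gates on any input-output path.

```python
def circuit_weft(circuit, output, small_bound=3):
    """Weft = maximum number of large gates (fan-in > small_bound) on any
    path from an input to `output`.  `circuit` maps each gate name to its
    list of predecessors; input variables do not appear as keys.
    Assumes the circuit is a DAG."""
    memo = {}

    def large_gates_above(node):
        if node not in circuit:          # an input variable: no gates yet
            return 0
        if node not in memo:
            is_large = len(circuit[node]) > small_bound
            memo[node] = int(is_large) + max(
                large_gates_above(p) for p in circuit[node])
        return memo[node]

    return large_gates_above(output)
```

On the 3CNF example from the text (a large and-gate of fan-in-3 or-gates), this returns weft 1; if the or-gates are also large, it returns 2.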

The weight of an assignment to the input variables of a decision circuit is the Hamming weight, i.e., the number of variables set to true by the assignment.

Let F be a family of decision circuits. Associated with F is a basic parameterized language

 L_F = {⟨C_i, k⟩ : C_i has a weight k satisfying assignment}.

We will denote by L_F(t,h) the parameterized language associated with the family F(t,h) of weft t, depth h, decision circuits.

###### Definition 3.3 (W[t] – Downey and Fellows DowneyF92 ()).

We define a language L to be in the class W[t] if there is a parameterized reduction from L to L_F(t,h), for some h.

We think of the inputs of Weighted CNF Sat as circuits consisting of conjunctions of disjunctions of literals. Hence Weighted CNF Sat is in W[2]. Extending this idea, a typical example of a formula in W[3] would be a conjunction of disjunctions of conjunctions of literals. More generally, we can define Weighted t-Normalized Sat as the weighted satisfiability problem for a formula X, where X is a conjunction of disjunctions of conjunctions of disjunctions … with t alternations.

This allows for the following basic result.

###### Theorem 3.4 (Downey and Fellows DowneyF95fixe-I ()).

For all t ≥ 2, Weighted t-Normalized Sat is complete for W[t].

There are problems complete for other levels, such as Dominating Set being complete for W[2], but as with many situations in logic and computer science, natural problems at levels above W[2] tend to be rare. Cesati Cesati03thet () gave machine-based problems for various levels and his approach (via Turing machines) can allow for easier proofs, such as his proof of the W[1]-completeness of Perfect Code (Cesati02perf ()).

As an illustrative example, we will give one of the basic reductions.

###### Theorem 3.5 (Downey and Fellows DowneyF95fixe-I ()).

Dominating Set ≡_fpt Weighted CNF Sat.

###### Proof.

We sketch the proof of the hardness direction, that Weighted CNF Sat ≤_fpt Dominating Set.

Let X be a Boolean expression in conjunctive normal form consisting of clauses C_1, …, C_m over the set of variables x_1, …, x_n. We show how to produce in polynomial time, by local replacement, a graph G = (V, E) that has a dominating set of size 2k if and only if X is satisfied by a truth assignment of weight k.

A diagram of the gadget used in the reduction is given in Figure 1. The idea of the proof is as follows. There are k of the gadgets arranged in a circle, where we regard them as ordered from first to last. Each of the gadgets has 3 main parts. Taken clockwise from top to bottom, these are the truth setting clique, the gap selection part (achieved by the gap selection cliques) and the gap enforcement part (achieved by the gap enforcement line).

The pigeonhole principle combined with the so-called enforcers are used to force one vertex from each of the truth cliques and one vertex from each of the next set of cliques, which form the gap selection part. The intuition is that the truth selection cliques represent a choice of a particular variable to be selected to be true, and the gap selection represents the gap till the next selected true variable. The interconnections between the truth setting cliques and the gap selection mean that they align, and the gap enforcement line makes all the selections consistent. Finally, because of the clause vertices, which also need to be dominated, we will ensure that the dominating set corresponds to a truth assignment.

In more detail, the truth selection component is a clique and the gap selection consists of cliques which we call columns. Our first action is to ensure that in any dominating set of 2k elements, we must pick one vertex from each of these two components. This goal is achieved by the sets of enforcers (which are independent sets). For example, for each truth selection clique, enforcers are connected to every vertex in this clique and nowhere else, and then it will follow by the pigeonhole principle that in any size 2k dominating set for the final graph, to dominate these enforcers, some vertex in the truth selection clique must be chosen. Similarly, it follows that we must pick exactly one vertex from each of the truth selection cliques, and one from each of the gap selection components, to dominate the enforcers.

The truth selection component of the i-th gadget is denoted by A(i), 1 ≤ i ≤ k. Each of these components consists of a clique of n vertices labeled a[i,1], …, a[i,n]. The intention is that if the vertex labeled a[i,r] is picked, this represents variable x_r being set to true in the formula X. We denote by B(i) the gap selection part of the i-th gadget, 1 ≤ i ≤ k. As explained above, this part consists of n columns (cliques) where we index the columns by 1, …, n. The intention is that column r corresponds to the choice of variable x_r in the preceding A(i). The idea then is the following. We join the vertex a[i,r] corresponding to variable x_r in A(i), to all vertices in B(i) except those in column r. This means that the choice of a[i,r] in A(i) will cover all vertices of B(i) except those in this column. It follows, since we have only 2k to spend, that we must choose the dominating element from this column and nowhere else. (There are no connections from column to column.) The columns are meant to be the gap selection saying how many false variables there will be till the next positive choice of a variable. We finally need to ensure that if we choose variable x_r in A(i) and the vertex for next choice x_s in column r of B(i), then we need to pick x_s in A(i+1). This is fulfilled by the gap enforcement component which consists of a set of n vertices. We denote by d[i,1], …, d[i,n] the set of vertices in this gap-enforcement line in the i-th gadget.

For 1 ≤ i < k, the method is to connect the vertex for choice s in column r of B(i) to all of the vertices d[i,1] to d[i,n] except d[i,s], provided that s > r. (For i = k, simply connect the vertex for choice s in column r of B(k) to all of the vertices except d[k,s], since this will need to “wrap around” to A(1).) The first point of this process is that if we choose a vertex in column r with s ≤ r, then none of the vertices in the enforcement line are dominated by it. Since there is only a single edge from each enforcement vertex to the corresponding vertex in A(i+1), there cannot possibly be a size 2k dominating set for such a choice. It follows that we must choose some vertex with s > r in any dominating set of size 2k. The main point is that if we choose the vertex for choice s in column r, we will dominate all of the d[i,j] except d[i,s]. Since we will only connect a[i+1,s] additionally to d[i,s] and nowhere else, to choose an element of A(i+1) and still dominate all of the d[i,j] we must actually choose a[i+1,s].

Thus the above provides a selection gadget that chooses k true variables, with the gaps representing false ones. We enforce that the selection is consistent with the clauses of X via clause vertices c_j, one for each clause C_j. These are connected in the obvious ways. One connects a choice in A(i) or B(i) corresponding to making a clause C_j true to the vertex c_j. Then if we dominate all the clause vertices too, we must have either chosen in some A(i) a positive occurrence of a variable in C_j, or we must have chosen in B(i) a gap corresponding to a negative occurrence of a variable in C_j, and conversely. The formal details can be found in Downey and Fellows DowneyF95fixe-I (); DowneyF99para ().∎

There are notable problems which are W[t]-hard for all t, such as Bandwidth, below.

Bandwidth
Instance:
A graph G = (V, E).
Parameter: A positive integer k.
Question: Is there a 1-1 layout f : V → {1, …, |V|} such that uv ∈ E implies |f(u) − f(v)| ≤ k?
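For intuition only, here is a brute-force check of the Bandwidth question (our own sketch, usable only on tiny graphs): try every layout and test the edge condition.

```python
from itertools import permutations

def bandwidth_at_most(vertices, edges, k):
    """Is there a 1-1 layout f : V -> {1..|V|} with |f(u)-f(v)| <= k for
    every edge uv?  Checks all |V|! layouts, so illustration only."""
    for perm in permutations(range(1, len(vertices) + 1)):
        f = dict(zip(vertices, perm))
        if all(abs(f[u] - f[v]) <= k for u, v in edges):
            return True
    return False
```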

The W[t]-hardness (for all t) of Bandwidth was proven by Bodlaender, Fellows and Hallett BodlaenderFH04beyo () via a rather complex reduction. It is unknown what, say, Bandwidth ∈ W[t] for some t (or even Bandwidth ∈ W[P]) would imply either parametrically or classically. On general grounds, it seems unlikely that such a containment is possible.

At even higher levels, we can define Weighted Sat to be the weighted satisfiability problem where inputs correspond to unrestricted Boolean formulas and finally Weighted Circuit Sat to be the most general problem whose inputs are all polynomial sized circuits.

Notice that, in Theorem 3.2, we did not say that Weighted CNF Sat is W[1]-complete. The reason for this is that we do not believe that it is! In fact, we believe that W[1] ≠ W[2].

That is, classically, using a padding argument, we know that CNF Sat ≡_P 3CNF Sat. However, the classical reduction does not define a parameterized reduction from Weighted CNF Sat to Weighted 3CNF Sat; it is not structure-preserving enough to ensure that parameters map to parameters. In fact, it is conjectured DowneyF95fixe-I () that there is no parameterized reduction at all from Weighted CNF Sat to Weighted 3CNF Sat. If the conjecture is correct, then Weighted CNF Sat is not in the class W[1].

The point here is that parameterized reductions are more refined than classical ones, and hence we believe that we get a wider variety of apparent hardness behaviour when intractable problems are classified according to this more fine-grained analysis.

These classes form part of the basic hierarchy of parameterized problems below.

 FPT ⊆ W[1] ⊆ W[2] ⊆ ⋯ ⊆ W[t] ⊆ ⋯ ⊆ W[SAT] ⊆ W[P] ⊆ AW[∗] ⊆ AW[P] ⊆ XP

This sequence is commonly termed “the W-hierarchy”. The complexity class W[1] can be viewed as the parameterized analog of NP, since it suffices for the purpose of establishing likely parameterized intractability.

The classes W[SAT] and W[P], and the classes AW[∗] and AW[P], were introduced by Abrahamson, Downey and Fellows in AbrahamsonDF95fixed (). The class W[SAT] is the collection of parameterized languages FPT-reducible to Weighted Sat. The class W[P] is the collection of parameterized languages FPT-equivalent to Weighted Circuit Sat, the weighted satisfiability problem for a decision circuit that is unrestricted. A standard translation of Turing machines into circuits shows that k-Weighted Circuit Sat is the same as the problem of deciding whether or not a deterministic Turing machine accepts an input of weight k. It is conjectured that the containment W[SAT] ⊆ W[P] is proper DowneyF99para ().

Another way to view W[P] is the following. Consider the problem Short Circuit Sat defined as follows.

Short Circuit Sat
Instance:
A decision circuit C with at most n gates and at most k·log n inputs.
Parameter: A positive integer k.
Question: Is there a setting of the inputs making C true?

###### Theorem 3.6 (Abrahamson, Downey and Fellows AbrahamsonDF95fixed ()).

Short Circuit Sat is W[P]-complete.

###### Proof.

The proof of this result uses the “k·log n” trick introduced by Abrahamson, Downey and Fellows AbrahamsonDF95fixed (). To see that the problem is W[P]-hard, take an instance C of Weighted Circuit Satisfiability with parameter k and inputs x_1, …, x_n. Let y_1, …, y_{k·log n} be new variables. Using lexicographic order, and in polynomial time, we have a surjection from the assignments to this set to the k-element subsets of {x_1, …, x_n}. Representing this as a circuit and putting this on the top of the circuit for C defines our new circuit C′. The converse is equally easy.∎
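To see why k·⌈log₂ n⌉ bits suffice, here is a sketch of one such decoding map (our own illustration, not the exact encoding of AbrahamsonDF95fixed ()): read the input bits in k blocks, each block naming one of the n original variables, so every set of at most k variables is hit.

```python
from math import ceil, log2

def decode(bits, k, n):
    """Map k*ceil(log2 n) input bits to a set of at most k indices in
    {0..n-1}: read k blocks of ceil(log2 n) bits, each naming one index
    (taken mod n).  Every k-subset arises from some bit string, so the
    map is a surjection onto the candidate weight-k assignments."""
    b = ceil(log2(n))
    assert len(bits) == k * b
    idxs = set()
    for i in range(k):
        block = bits[i * b:(i + 1) * b]
        idxs.add(int(''.join(map(str, block)), 2) % n)
    return idxs
```

Composing a weight-k circuit with (a circuit computing) this decoder yields a circuit whose number of inputs, and hence whose search space 2^{k·log n} = n^k, involves n only logarithmically per input bit.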

AW[P] captures the notion of alternation: it is the collection of parameterized languages FPT-reducible to Parameterized Quantified Circuit Sat, the weighted satisfiability problem for an unrestricted decision circuit that applies alternating quantifiers to the inputs, defined here. (Restricting to weft t circuits gives the classes AW[t].)

Parameterized Quantified Circuit Sat

Instance:  A weft t decision circuit C whose inputs correspond to a sequence s_1, …, s_r of pairwise disjoint sets of variables.
Parameter: r, k_1, k_2, …, k_r.
Question:  Is it the case that there exists a size k_1 subset t_1 of s_1, such that for every size k_2 subset t_2 of s_2, there exists a size k_3 subset t_3 of s_3, such that … (alternating quantifiers) such that, when the variables of t_1 ∪ t_2 ∪ ⋯ ∪ t_r are set to true, and all other variables are set to false, C is satisfied?

The idea here is to look at the analog of PSPACE. The problem is that in the parameterized setting there seems to be no natural analog of Savitch's Theorem or the proof that QBFSat is PSPACE-complete, and it remains an interesting problem to formulate a true analog of parameterized space.

The approach taken by AbrahamsonDF95fixed () was to look at the parameterized analog of QBFSat stated above.

One of the fundamental theorems proven here is that the choice of t is irrelevant:

###### Theorem 3.7 (Abrahamson, Downey and Fellows AbrahamsonDF95fixed ()).

AW[t] = AW[1] for all t ≥ 1.

Many parameterized analogs of game problems are complete for the class AW[∗], such as the parameterized analog of Geography.

Short Geography
Instance: A directed graph D = (V, A) and a specified vertex v_0.
Parameter: A positive integer k.
Question: Does player 1 have a winning strategy of at most k moves in the following game? Each player alternately chooses a new arc from A. The first arc must have its tail at v_0, and each arc subsequently chosen must have as its tail the head of the previously chosen arc. The first player unable to choose loses.
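A direct game-tree search decides Short Geography; the sketch below is our own illustration (the "within k total moves" reading of "short" is our assumption), and its running time is exponential in k, which is the behaviour AW[∗]-completeness leads us to expect.

```python
def player1_wins(arcs, start, k):
    """Can player 1 force the opponent to be unable to move, within k
    moves in total?  Exhaustive search over positions (current vertex,
    set of used arcs, remaining move budget)."""
    arcs = list(arcs)

    def moves_from(pos, used):
        return [i for i, (tail, _) in enumerate(arcs)
                if tail == pos and i not in used]

    def wins(pos, used, budget):
        # The player to move here wins within `budget` further moves.
        if budget == 0:
            return False
        return any(loses(arcs[i][1], used | {i}, budget - 1)
                   for i in moves_from(pos, used))

    def loses(pos, used, budget):
        # The player to move here is stuck now, or loses within `budget`.
        options = moves_from(pos, used)
        if not options:
            return True
        if budget == 0:
            return False
        return all(wins(arcs[i][1], used | {i}, budget - 1)
                   for i in options)

    return wins(start, frozenset(), k)
```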

The class XP, introduced in DowneyF99para (), is the collection of parameterized languages L such that the k-th slice of L (the instances of L having parameter k) is a member of P. XP is provably distinct from FPT and seems to be the parameterized class corresponding to the classical class EXP (exponential time). There are problems complete for XP, including the game of k-Cat and Mice. The problem here is played on a directed graph and begins with a distinguished vertex called the cheese, a token on one vertex called the cat, and k tokens (called mice) on other vertices. The cat and the team of mice play alternately, moving one token at a time. A player can move a token along an arc at each stage. The team of mice wins if one mouse can reach the cheese (by occupying it, even if the cat is already there) before the cat can eat one of the mice by occupying the same vertex.

It is conjectured that all of the containments given so far are proper, but all that is currently known is that FPT is a proper subset of XP.

There are hundreds of complete problems for the levels of the hierarchy. Here is a short list. The reader is referred to Downey03para () for references and definitions. As stated, XP has k-Cat and Mouse Game and many other games; W[P] has Linear Inequalities, Short Satisfiability, Weighted Circuit Satisfiability and Minimum Axiom Set. There are a number of quite important problems from combinatorial pattern matching which are hard for all W[t]: Longest Common Subsequence (parameterized by the number of sequences, or by two parameters, the number of sequences and the length of the common subsequence), Feasible Register Assignment, Triangulating Colored Graphs, Bandwidth, Topological Bandwidth, Proper Interval Graph Completion, Domino Treewidth and Bounded Persistence Pathwidth. Some concrete problems complete for W[2] include Weighted Integer Programming, Dominating Set, Tournament Dominating Set, Unit Length Precedence Constrained Scheduling (hard), Shortest Common Supersequence (hard), Maximum Likelihood Decoding (hard), Weight Distribution in Linear Codes (hard), Nearest Vector in Integer Lattices (hard), Short Permutation Group Factorization (hard). Finally, a collection of W[1]-complete problems: k-Step Derivation for Context Sensitive Grammars, Short NTM Computation, Short Post Correspondence, Square Tiling, Weighted q-CNF Satisfiability, Vapnik–Chervonenkis Dimension, Longest Common Subsequence (parameterized by the length of the common subsequence), Clique, Independent Set, and Monotone Data Complexity for Relational Databases. This list is merely representative, and new areas of application are being found all the time. There are currently good compendia of hardness and completeness results available on the web.

For older material, see the appendix of the monograph by Downey and Fellows DowneyF99para (), as well as the many surveys and the recent issue of the Computer Journal Downey08thec ().

There remain several important structural questions associated with the W-hierarchy, such as how it relates to the A-hierarchy below, and whether any collapse may propagate.

Open questions: Does W[1] = FPT imply W[2] = FPT? Does W[t] = W[t+1] imply W[t+1] = W[t+2]?

### 3.4 The A-hierarchy and the Flum-Grohe approach

There have been several attempts towards simplification of this material, notably by Flum and Grohe FlumG04para (). Their method is to try to make the use of logical depth and logic more explicit. To do this, Flum and Grohe take a detour through the logic of finite model theory. Close inspection of their proofs reveals that similar combinatorics are hidden. Their view is that model checking should be taken as the fundamental notion of complexity.

To this end, for a first-order formula φ with a free s-ary relation variable, we can define p-WD(φ) as the problem:

Instance: A structure 𝒜 with domain A and an integer k.
Parameter: k.
Question: Is there a relation S ⊆ A^s with |S| = k such that 𝒜 ⊨ φ(S)?

This idea naturally extends to classes of formulae Φ. Then we define [p-WD(Φ)]^FPT to be the class of parameterized problems FPT-reducible to p-WD(φ) for some φ ∈ Φ. (Strictly, we should also record the logic concerned, but the formulae considered here are first order.) Then to recast the classical W-hierarchy at the finite levels, the idea is to define, for t ≥ 1,

 W[t] = [p-WD(Π_t)]^FPT,

where, given a parameterized problem P, [P]^FPT denotes the class of parameterized problems that are FPT-reducible to P.

Flum and Grohe have similar logic-based formulations of the other W-hierarchy classes. We refer the reader to FlumG04para () for more details.

We remark that the model checking approach leads to other hierarchies. One important hierarchy found by Flum and Grohe is the A-hierarchy, which is also based on alternation like the AW-hierarchy but works differently. For a class Φ of formulae, we can define the following parameterized problem p-MC(Φ).

Instance: A structure 𝒜 and a formula φ ∈ Φ.
Parameter: |φ|.
Question: Decide if 𝒜 ⊨ φ, where this denotes the evaluation of φ in 𝒜.

Then Flum and Grohe define

 A[t] = [p-MC(Σ_t)]^FPT.

For instance, for t = 1, k-Clique can be defined by

 clique_k = ∃x_1 … ∃x_k ( ⋀_{1 ≤ i < j ≤ k} (x_i ≠ x_j ∧ E(x_i, x_j)) )

in the language of graphs, and the interpretation of the formula in a graph G would be that G has a clique of size k. Thus the mapping (G, k) ↦ (G, clique_k) is an FPT reduction showing that parameterized Clique is in A[1]. Flum and Grohe populate various levels of the A-hierarchy and show the following.
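Naive model checking of the sentence clique_k makes the connection to A[1] concrete; the following is our own sketch (adjacency-dict encoding assumed), ranging over all |V|^k tuples exactly as the p-MC(Σ₁) definition suggests.

```python
from itertools import product

def models_clique_k(adj, k):
    """Evaluate the Sigma_1 sentence clique_k in a graph given as an
    adjacency dict: do there exist x_1, ..., x_k, pairwise distinct and
    pairwise adjacent?  Naive evaluation over all |V|^k tuples."""
    return any(
        all(xs[i] != xs[j] and xs[j] in adj[xs[i]]
            for i in range(k) for j in range(i + 1, k))
        for xs in product(list(adj), repeat=k))
```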

###### Theorem 3.8 (Flum and Grohe FlumG04para ()).

The following hold:

• A[1] = W[1].

Clearly W[t] ⊆ A[t], but no other containment with respect to other classes of the W-hierarchy is known. It is conjectured by Flum and Grohe that no containments other than those given exist, but this is not apparently related to any other conjecture.

If we compare classical and parameterized complexity it is evident that the framework provided by parameterized complexity theory allows for more finely-grained complexity analysis of computational problems. It is deeply connected with algorithmic heuristics and exact algorithms in practice. We refer the reader to either the survey FlumG04para (), or those in two recent issues of The Computer Journal Downey08thec () for further insight.

We can consider many different parameterizations of a single classical problem, each of which leads to either a tractable, or (likely) intractable, version in the parameterized setting. This allows for an extended dialog with the problem at hand. This idea towards the solution of algorithmic problems is explored in, for example, DowneyFS99thev (). A nice example of this extended dialog can be found in the work of Iris van Rooij and her co-authors, as discussed in van Rooij and Wareham vanRooijW08para ().

### 3.5 Connection with PTASs

The reader may note that parameterized complexity is addressing intractability within polynomial time. In this vein, the parameterized framework can be used to demonstrate that many classical problems that admit a PTAS do not, in fact, admit any PTAS with a practical running time, unless W[1] = FPT. The idea here is that if a PTAS has a running time such as O(n^{1/ε}), where ε is the error ratio, then the PTAS is unlikely to be useful. For example, the running time is already n to the 10th power for an error of 10%. What we could do is regard 1/ε as a parameter and show that the problem is W[1]-hard with respect to that parameterization. In that case there would likely be no method of removing the 1/ε from the exponent in the running time, and hence no efficient PTAS, a method first used by Bazgan Bazgan95sche (). For many more details of the method we refer the reader to the survey Downey03para ().
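The arithmetic behind this complaint is easy to make explicit. The toy functions below are our own illustration (the choice f(1/ε) = 2^{1/ε} for the efficient PTAS is an assumption, made only to have something concrete to compare):

```python
def ptas_cost(n, eps):
    """Work for a PTAS with running time n**(1/eps): the error ratio
    sits in the exponent of n."""
    return n ** (1 / eps)

def eptas_cost(n, eps, c=2):
    """Work for an EPTAS with running time f(1/eps) * n**c, taking
    f(x) = 2**x for illustration: eps no longer touches the exponent."""
    return 2 ** (1 / eps) * n ** c
```

At n = 100 and a 10% error, the first shape already costs 100^10 = 10^20 steps, while the second is around 10^7.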

A notable application of this technique is due to Cai et al. CaiFJR07thec (), who showed that the method of using planar formulae tends to give PTASs that are never practical. The exact calibration of PTASs and parameterized complexity comes through yet another hierarchy called the M-hierarchy. The breakthrough was the realization by Cai and Juedes CaiJ01sube () that the collapse of a basic sub-hierarchy of the M-hierarchy was deeply connected with approximation.

The base level M[1] of the hierarchy is the problem defined by the core problem below.

Instance: A CNF circuit (or, equivalently, a CNF formula) X of size at most k·log n.
Parameter: A positive integer k.
Question: Is X satisfiable?

That is, we are parameterizing the size of the problem rather than some aspect of the problem. The idea naturally extends to higher levels so that, for example, M[2] would be defined via a product-of-sums-of-products formula of size k·log n, and we are asking whether it is satisfiable.

The basic result is that FPT ⊆ M[1] ⊆ W[1]. The hypothesis FPT ≠ M[1] is equivalent to the ETH. In fact, as proven in ChenG07anis (), there is an isomorphism, the so-called miniaturization, between exponential time complexity (endowed with a suitable notion of reductions) and XP (endowed with FPT reductions) such that the respective notions of tractability correspond, that is, subexponential time on the one side and FPT on the other. The reader might wonder why we have not used a Turing machine of size k·log n in the input, rather than a CNF circuit. The reason is that whilst we can get reductions for miniaturized versions of problems, such as Mini-Vertex Cover and the like, to have the same complexity as the circuit problem above, we do not know how to do this for Turing machines. It remains an open question whether the k·log n-sized circuit problem and the k·log n-sized Turing machine problem have the same complexity.

For more on this topic and other connections with classical exponential algorithms we refer the reader to the survey of Flum and Grohe FlumG04para ().

### 3.6 FPT and XP optimality

Related to the material of the last section are emerging programmes devoted to proving tight lower bounds on parameterized problems, assuming various non-collapses of the parameterized hierarchies. For this section, it is useful to use the O^*() notation for parameterized algorithms: O^*(f(k)) records the part of the running time which is exponential, suppressing polynomial factors. For example, a running time of O(2^k · n^3) would be written as O^*(2^k).

One example of a lower bound was the original paper of Cai and Juedes CaiJ01sube (); CaiJ03onth () who proved the following definitive result.

###### Theorem 3.9.

k-Planar Vertex Cover, k-Planar Independent Set, k-Planar Dominating Set, and k-Planar Red/Blue Dominating Set cannot be in O^*(2^{o(√k)})-FPT unless FPT = M[1] (or, equivalently, unless the ETH fails).

The optimality of Theorem 3.9 follows from the fact that all the above problems have been classified in O^*(2^{O(√k)})-FPT, as proved in AlberBFKN02 (); KloksLL02newa (); FominT06domi (); KoutsonasT10plan () (see also Subsection 4.3.3).

We can ask for similar optimality results for any FPT problem. See, for example, Chen and Flum ChenF06onmi (). We will meet another approach to FPT optimality in Subsection 3.11, where we look at classes like EPT meant to capture how problems are placed in FPT via another kind of completeness programme.

Another example of such optimality programmes can be found in exciting recent work on XP optimality. This programme represents a major step forward in the sense that it regards classes like W[1] as artifacts of the basic problem of proving hardness under reasonable assumptions, and strikes at membership of XP.

Here are some examples. We know that Independent Set and Dominating Set are in XP.

###### Theorem 3.10 (Chen et al. ChenCFHJKX05tigh ()).

The following hold:

• Independent Set cannot be solved in time f(k)·n^{o(k)}, for any function f, unless FPT = M[1].

• Dominating Set cannot be solved in time f(k)·n^{o(k)}, for any function f, unless FPT = M[1].

A beautiful development in this area is the recent paper by Marx on the Closest Substring problem. We refer to Marx Marx08clos () for more details, and to Chen and Meng ChenM08onpa () for other related results.

There remains a lot of work to be done here and these programmes appear to be exciting developments, see e.g., FedorGLS09cliq (); FominGLS10algo (); LokshtanovMS11know (); LokshtanovMS10know ().

### 3.7 Other classical applications

Similar techniques have been used to solve a significant open question about techniques for formula evaluation, when they were used to show that “resolution is not automatizable” unless W[P] is tractable (Alekhnovich and Razborov AlekhnovichR01reso (), Eickmeyer, Grohe and Grüber EickmeyerGG08appr ()). Parameterized complexity assumptions can also be used to show that the large hidden constants (various towers of twos) in the running times of generic algorithms obtained through the use of algorithmic meta-theorems cannot be improved upon (see FlumG06para ()). One illustration is obtained through the use of local treewidth (which we will discuss in Subsection 4.2.2). The notions of treewidth and branchwidth are by now ubiquitous in algorithmic graph theory (the definition of branchwidth is given in Subsection 4.2.1). Suffice it to say that it is a method of decomposing graphs to measure how “treelike” they are, and if they have small treewidth/branchwidth, as we see in Subsection 4.2.1, we can run dynamic programming algorithms upon them. The local treewidth of a class C of graphs is called bounded iff there is a function f such that for all graphs G ∈ C, all vertices v of G, and all r, the neighborhood of v of distance at most r has treewidth at most f(r) (see Subsection 4.2.2). Examples include planar graphs and graphs of bounded maximum degree. The point is that a class of graphs of bounded local treewidth is automatically FPT for a wide class of properties.

###### Theorem 3.11 (Frick and Grohe FrickG99deci ()).

Deciding first-order statements is FPT for every fixed class of graphs of bounded local treewidth.

One problem with this algorithmic meta-theorem is that the algorithm obtained for a fixed first-order statement can have a running time involving towers of two’s whose height depends on the quantifier complexity of the statement, in the same way that this happens for Courcelle’s theorem on decidability of monadic second order statements (as discussed in Subsection 4.2.1) for graphs of bounded treewidth. What Frick and Grohe FrickG04thec () showed is that such towers of two’s cannot be removed unless FPT = W[1].

Another use of parameterized complexity is to give an indirect approach to proving likely intractability of problems which are not known to be NP-complete. A classic example of this is the following problem.

Precedence Constrained k-Processor Scheduling
Instance: A set T of unit-length jobs and a partial ordering ⪯ on T, a positive deadline D, and a number of processors k.
Parameter: A positive integer k.
Question: Is there a mapping f : T → {1, …, D} such that for all t ∈ {1, …, D}, |f⁻¹(t)| ≤ k, and for all u, v ∈ T, u ≺ v implies f(u) < f(v)?

In general this problem is NP-complete and is known to be in P for 2 processors. The question is what happens for 3 processors. For us the question becomes whether the problem is in XP when parameterized by the number k of processors. This remains one of the open questions from Garey and Johnson’s famous book GareyJ79comp () (Open Problem 8), but we have the following.

###### Theorem 3.12 (Bodlaender, Fellows and Hallett BodlaenderFH04beyo ()).

Precedence Constrained k-Processor Scheduling is W[2]-hard.

The point here is that even if  Precedence Constrained k-Processor Scheduling is in XP, there seems no way that it will be feasible for large k. Researchers in the area of parameterized complexity have long wondered whether this approach might be applied to other problems like Composite Number or Graph Isomorphism. For example, Luks Lucks82isom () has shown that Graph Isomorphism can be solved in time O(n^{g(d)}) for graphs of maximum degree d, but any proof that this parameterized problem is W[1]-hard would clearly imply that the general problem is not feasible. We know that Graph Isomorphism is almost certainly not NP-complete, since proving that would collapse the polynomial hierarchy to 2 or fewer levels (see the work of Schöning in Schoning87grap ()). Similar comments apply to graphs of bounded treewidth by Bodlaender Bodlaender90poly ().

### 3.8 Other parameterized classes

There have been other parameterized analogs of classical complexity analyzed. For example McCartin McCartin06para () and Flum and Grohe FlumG02thep () each analyzed parameterized counting complexity. Here we can define the class #W[1] for instance (with the core problem being counting the number of accepting paths of length k in a nondeterministic Turing machine), and show that counting the number of k-cliques is #W[1]-complete. Notably Flum and Grohe proved the following analog to Valiant’s Theorem on the permanent.

###### Theorem 3.13 (Flum and Grohe FlumG02thep ()).

Counting the number of cycles of size k in a bipartite graph is #W[1]-complete.

One of the hallmark theorems of classical complexity is Toda’s Theorem, which states that P^{#P} contains the polynomial time hierarchy. There is no known analog of this result for parameterized complexity. One of the problems is that all known proofs of Toda’s Theorem filter through probabilistic classes. Whilst there are known analogs of the Valiant–Vazirani Theorem (Downey, Fellows and Regan DowneyFR98para (), and Müller Muller06rand (); Muller08para ()), there is no known method of “small” probability amplification. (See Montoya Montoya08thep () for a thorough discussion of this problem.) This seems the main problem, and there is really no satisfactory treatment of probability amplification in parameterized complexity. For example, suppose we wanted an analog of the operator calculus for parameterized complexity. Consider, say, ⊕W[P] as an analog of ⊕P: membership of (x,k) in a ⊕W[P]-language is witnessed by a polynomial sized circuit with (e.g.) k·log n inputs for which the number of accepting inputs is odd. We would then need that there is a language L in BP·⊕W[P] iff there is a language L′ in ⊕W[P] such that for all (x,k),

 (x,k)∈L iff, for a randomly chosen k′, (x,k,k′)∈L′.

A problem appears as soon as we try to prove the analog of the basic step in Toda’s Theorem. The first step in the usual proof of Toda’s Theorem, which can be emulated, is to use some kind of random hashing to produce a circuit with either no accepting inputs, or exactly one accepting input, the latter with nonzero probability. So far things work out okay. However, the next step in the usual proof is to amplify this probability: that is, repeat a polynomial number of times independently and take a majority vote to get the result in the class. The problem is that if this amplification uses many independently chosen instances, then the number of input variables goes up and the result is no longer a W[P]-circuit, since we think of such a circuit as polynomial sized with only k·log n many inputs. There is a fundamental question: Is it possible to perform amplification with only such limited nondeterminism?

Notable here are the following:

###### Theorem 3.14 (Downey, Fellows and Regan DowneyFR98para ()).

For all t ≥ 1, there is a randomized FPT-reduction from W[t] to Unique W[t]. (Analog of the Valiant–Vazirani Theorem.)

###### Theorem 3.15 (Müller Muller08vali ()).

BPFPT (an analog of BPP) has weakly uniform derandomization (“weakly uniform” is an appropriate technical condition) iff there is a polynomial time computable unbounded function h with a corresponding derandomization of BPP[h·log], where BPP[h·log] denotes BPP with only access to h(n)·log n nondeterministic bits.

Moritz Müller’s result says that, more or less, parameterized derandomization implies nontrivial classical derandomization. Other interesting work on randomization in parameterized complexity is to be found in the work of Müller. For instance, in Muller06rand (), he showed that there is a Valiant–Vazirani type lemma for most W[t]-complete problems, including e.g. Unique Dominating Set. The only other work in this area is in the papers of Montoya, such as Montoya08thep (), where Montoya showed that it is in a certain sense unlikely that BPW[P], an analogue of the classical Arthur–Merlin class, allows probability amplification. (That is, amplification with k·log n bits of nondeterminism is in a sense unlikely.) Much remains to be done here.

Perhaps due to the delicacy of the material, or because of the focus on practical computation, there is only a little work on what could be called parameterized structural complexity. By this we mean analogs of results like Ladner’s Theorem (that if P ≠ NP then there are sets of intermediate complexity) (see DowneyF93fixe ()), Mahaney’s Theorem (that if there is a sparse NP-complete set then P = NP) (Cesati and Fellows CesatiF96spar ()), the PCP Theorem, Toda’s Theorem, etc. There is a challenging research agenda here.

### 3.9 Parameterized approximation

One other fruitful area of research has been parameterized approximation, beginning with three papers at the same conference! (Cai and Huang CaiH06fixe (), Chen, Grohe and Grüber ChenGG06onpa (), and Downey, Fellows and McCartin DowneyFM06para ()). Parameterized approximation was part of the folklore for some time, originating with the Dominating Set question originally asked by Fellows. For parameterized approximation, one inputs an instance with parameter k and asks for either a solution of size g(k) or a statement that there is no solution of size k. This idea was originally suggested by Downey and Fellows, inspired by earlier work of Robertson and Seymour on approximations to treewidth. Of course, we need to assume that g(k) ≥ k for this to make sense. A classical example taken from Garey and Johnson GareyJ79comp () is Bin Packing, where the First Fit algorithm either says that no packing into k bins exists or gives one using at most 2k bins. As observed by Downey, Fellows and McCartin DowneyFM06para (), most hard problems do not have approximations with an additive factor (i.e. g(k) = k + c for a constant c) unless unlikely parameterized collapses occur. One surprise from that paper is the following.
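The First Fit guarantee can be sketched in a few lines. This is an illustrative rendering, not code from the cited papers; the function names and the unit-capacity representation of bins are our own choices:

```python
def first_fit(items, capacity=1.0):
    """Pack items greedily: each item goes into the first bin that fits it."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-9:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

def approx_bin_packing(items, k, capacity=1.0):
    """Parameterized approximation: report either that no packing into k bins
    exists (None), or exhibit a packing into at most 2k bins."""
    bins = first_fit(items, capacity)
    if len(bins) > 2 * k:
        # First Fit leaves at most one bin at most half full, so it uses at
        # most 2*OPT bins; more than 2k bins therefore certifies OPT > k.
        return None
    return bins
```

The correctness of the refusal branch rests on the classical observation that no two First Fit bins can both be at most half full.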

###### Theorem 3.16 (Downey, Fellows, McCartin DowneyFM06para ()).

The problem Independent Dominating Set, which asks if there is a dominating set of size k which is also an independent set, has no g(k)-parameterized approximation algorithm for any computable function g unless FPT = W[2].

Subsequently, Eickmeyer, Grohe and Grüber showed the following in EickmeyerGG08appr ().

###### Theorem 3.17.

If L is a “natural” W[P]-complete language then L has no parameterized approximation algorithm unless FPT = W[P].

The notion of “natural” here is indeed quite natural and covers all of the known W[P]-complete problems, say, in the appendix of Downey and Fellows DowneyF99para (). We refer the reader to EickmeyerGG08appr () for more details.

One open question asks whether there is any multiplicative FPT approximation for Dominating Set. This question of Mike Fellows has been open for nearly 20 years and asks, in its oldest incarnation, whether there is an algorithm which, on input (G, k), either says that there is no size k dominating set, or produces one of size at most 2k.

### 3.10 Limits on kernelization

There has been important recent work concerning limitations of parameterized techniques. One of the most important techniques is that of kernelization. We will see in Subsection 4.4 that this is one of the basic techniques of the area, and it remains one of the most practical, since usually kernelization is based around simple reduction rules which are both local and easy to implement. (See, for instance, Abu-Khzam et al. Abu-KhzamCFLSS04kern (), Flum and Grohe FlumG04para (), or Guo and Niedermeier GuoN07invi ().) The idea, of course, is that we shrink the problem using some polynomial time reduction to a small one whose size depends only on the parameter, and then do exhaustive search on that kernel. Naturally the latter step is the most time consuming, and the problem becomes how to find small kernels. An important question is therefore: when is it possible to show that a problem has no polynomial kernel? A formal definition of kernelization is the following:

###### Definition 3.18 (Kernelization).

A kernelization algorithm, or in short, a kernel, for a parameterized problem Π ⊆ Σ* × ℕ is an algorithm that, given (x, k) ∈ Σ* × ℕ, outputs in time polynomial in |x| + k a pair (x′, k′) ∈ Σ* × ℕ such that

• (x, k) ∈ Π iff (x′, k′) ∈ Π,

• |x′|, k′ ≤ g(k),

where g is an arbitrary computable function. Any function g as above is referred to as the size of the kernel. We frequently use the term “kernel” for the outputs (x′, k′) of the kernelization algorithm and, in case g is a polynomial (resp. linear) function, we say that we have a polynomial (resp. linear) kernelization algorithm or, simply, a polynomial (resp. linear) kernel.
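As a concrete illustration of this definition, the classical Buss reduction rules for k-Vertex Cover yield a kernel with at most k² edges: a vertex of degree greater than k must be in every small cover, and once no such vertex remains, more than k² edges force a NO answer. The following sketch is ours; the names and the edge-set representation are illustrative choices:

```python
from collections import defaultdict

def buss_kernel(edges, k):
    """Buss kernelization for k-Vertex Cover: returns ('no', None) if the
    instance is decided outright, else ('kernel', (edges', k')) where the
    reduced instance has at most k'*k' edges."""
    edges = {frozenset(e) for e in edges}
    while True:
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        high = [v for v in deg if deg[v] > k]
        if not high:
            break
        # A vertex of degree > k must belong to every vertex cover of size <= k,
        # so put it in the cover: delete it and decrement the budget.
        v = high[0]
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return ('no', None)
    # Every remaining vertex has degree <= k, so a size-k cover covers <= k*k edges.
    if len(edges) > k * k:
        return ('no', None)
    return ('kernel', (edges, k))
```

Note that this is a polynomial time reduction whose output size depends only on k, exactly as Definition 3.18 requires.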

Clearly, if P = NP, then all problems have constant size kernels, so some kind of complexity theoretic hypothesis is needed to show that something does not have a small kernel. The key thing that the reader should realize is that the reduction to the kernel is a polynomial time reduction for both the problem and parameter, and not an FPT algorithm. Limiting results on kernelization mean that this often used practical method cannot be used to get FPT algorithms. Sometimes showing that problems do not have small kernels (modulo some hypothesis) can be extracted from work on approximation, since a small kernel is, itself, an approximation. Thus, whilst we know of a 2k kernel for k-Vertex Cover NemT75vert (), we know that unless P = NP, we cannot do better than size 1.36k using Dinur and Safra DinurS02thei (), since the PCP theorem provides a 1.36 lower bound on approximation (see also GuoN07invi ()).

To show that certain problems do not have polynomial kernels modulo a reasonable hypothesis, we will need the following definition, which belongs to classical complexity. (It is similar to another definition of Harnik and Naor HarnikN06onth ().)

###### Definition 3.19 (Or-Distillation).

An Or-distillation algorithm for a classical problem L ⊆ Σ* is an algorithm that

• receives as input a sequence x_1, …, x_t, with x_i ∈ Σ* for each 1 ≤ i ≤ t,

• uses time polynomial in |x_1| + ⋯ + |x_t|,

• and outputs a string y ∈ Σ* with

1. y ∈ L iff x_i ∈ L for some 1 ≤ i ≤ t.

2. |y| is polynomial in max_{1 ≤ i ≤ t} |x_i|.

We can similarly define And-distillation by replacing the second-to-last item by “y ∈ L iff x_i ∈ L for all 1 ≤ i ≤ t.” Bodlaender, Downey, Fellows and Hermelin BodlaenderDFH09onpr () showed that one NP-complete problem has an Or-distillation algorithm iff all of them do. On general Kolmogorov complexity grounds, it seems very unlikely that NP-complete problems have either kind of distillation algorithm. Following a suggestion of Bodlaender, Downey, Fellows and Hermelin, Fortnow and Santhanam related Or-distillation to the polynomial time hierarchy as follows.

###### Lemma 3.20 (FortnowS11infe ()).

If any NP-complete problem has an Or-distillation algorithm then NP ⊆ coNP/poly and hence the polynomial time hierarchy collapses to 3 or fewer levels.

At the time of writing, there is no known version of Lemma 3.20 for And-distillation, and it remains an important open question whether NP-complete problems having And-distillation algorithms implies any classical collapse of the polynomial time hierarchy. This material all relates to kernelization as follows.

###### Definition 3.21 (Composition).

An Or-composition algorithm for a parameterized problem Π ⊆ Σ* × ℕ is an algorithm that

• receives as input a sequence (x_1, k), …, (x_t, k), with (x_i, k) ∈ Σ* × ℕ for each 1 ≤ i ≤ t,

• uses time polynomial in |x_1| + ⋯ + |x_t| + k,

• and outputs (y, k′) with

1. (y, k′) ∈ Π iff (x_i, k) ∈ Π for some 1 ≤ i ≤ t.

2. k′ is polynomial in k.

Again we may similarly define And-composition. The key lemma relating the two concepts is the following, which has an analogous statement for the And-distillation case:

###### Lemma 3.22 (Bodlaender, Downey, Fellows and Hermelin BodlaenderDFH09onpr ()).

Let Π be an Or-compositional parameterized problem whose unparameterized version is NP-complete. If Π has a polynomial kernel, then its unparameterized version is also Or-distillable.

Distillation of one problem within another has also been discussed by Chen, Flum and Müller ChenFM09lowe ().

Using Lemma 3.22, Bodlaender, Downey, Fellows and Hermelin BodlaenderDFH09onpr () proved that a large class of graph-theoretic FPT problems including k-Path, k-Cycle, various problems for graphs of bounded treewidth, etc., all have no polynomial-sized kernels unless the polynomial-time hierarchy collapses to three or fewer levels.
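For intuition, the Or-composition behind several of these results is simply a disjoint union: a union of graphs contains a path on k vertices iff some component does, and the parameter is unchanged. A minimal sketch under this observation (the edge-list encoding and function name are our own):

```python
def or_compose_kpath(instances):
    """Or-composition for k-Path: all instances share the same parameter k.
    Take the disjoint union of the graphs (relabelling vertices so that the
    graphs do not overlap).  The union has a k-vertex path iff some input
    graph does, and the output parameter k is trivially polynomial in k."""
    k = instances[0][1]
    assert all(ki == k for _, ki in instances)
    union, offset = [], 0
    for edges, _ in instances:
        n = 1 + max((max(e) for e in edges), default=-1)
        union.extend((u + offset, v + offset) for u, v in edges)
        offset += n
    return union, k
```

Running time is linear in the total input size, well within the polynomial bound of Definition 3.21.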

For And-composition, tying the distillation to something like the Fortnow–Santhanam material would be important, since it would say that important FPT problems like Treewidth and Cutwidth would likely not have polynomial size kernels, and would perhaps suggest why algorithms such as Bodlaender’s linear time FPT algorithm Bodlaender96alin () (and other treewidth algorithms) are so hard to run. Since the original paper BodlaenderDFH09onpr (), there have been a number of developments, such as the use by Bodlaender, Thomassé, and Yeo BodlaenderTY08anal () of reductions to extend the arguments above to wider classes of problems such as Disjoint Paths, Disjoint Cycles, and Hamilton Cycle Parameterized by Treewidth. In the same direction, Chen, Flum and Müller ChenFM09lowe () used this methodology to further explore the possible sizes of kernels. One fascinating application was by Fernau et al. FernauFLRSV09kern (). They showed that the following problem is in FPT, by exhibiting a polynomial size kernel.

Rooted k-OutLeaf Branching
Instance:
A directed graph D with exactly one vertex r of indegree 0, called the root.
Parameter: A positive integer k.
Question: Is there an oriented subtree of D rooted at r with exactly k leaves spanning D?

However, for the unrooted version they used the machinery from  BodlaenderDFH09onpr () to demonstrate that it has no polynomial size kernel unless some collapse occurs; but clearly, by invoking the rooted version independently for each choice of root, the problem has a polynomial Turing kernel. We know of no method of dealing with the non-existence of polynomial Turing kernels and it seems an important programmatic question.

All of the material on lower bounds for Or-distillation tends to use reductions to things like large disjoint unions of graphs, which have Or-compositions. Thus, problems sensitive to such unions seemed not readily approachable using the machinery. Very recently, Stefan Kratsch Kratsch11cono () discovered a way around this problem using the work of Dell and van Melkebeek DellM10sati ().

###### Theorem 3.23 (Kratsch Kratsch11cono ()).

The FPT problem k-Ramsey, which asks if a graph has either an independent set or a clique of size k, has no polynomial kernel unless NP ⊆ coNP/poly.

The point here is that a large collection of disjoint graphs would have a large independent set, hence new ideas were definitely needed. The ideas in Kratsch’s proof use co-nondeterminism in compositions and communication complexity in quite novel ways. Perhaps they might allow us to attack other such problems.

Finally, we remark in passing that, as an important spinoff of this question from parameterized complexity, the Fortnow–Santhanam Lemma has been used by Buhrman and Hitchcock BuhrmanH08npha () to show that unless the polynomial-time hierarchy collapses, NP-hard languages must be exponentially dense (meaning that hard instances must occur very often), perhaps suggesting a connection between parameterized complexity and the density of hard instances.

For further techniques on lower bounds on kernels, see DomLS09inco ().

### 3.11 Bounded parameterized complexity

Another direction exploring the fine structure of FPT was taken by Flum, Grohe and Weyer FlumGW06boun (), who suggested that the important problems were those where the constants were small. If Π is FPT then the question “(x,k) ∈ Π?” is decidable in time f(k)·|x|^{O(1)}, for some computable function f. But as we have seen, f could be anything. Flum, Grohe and Weyer argue that the correct classes to focus upon are those with membership algorithms running in time 2^{o(k)}·|x|^{O(1)}, 2^{O(k)}·|x|^{O(1)}, and 2^{k^{O(1)}}·|x|^{O(1)}. These classes are called SUBEPT, EPT and EXPT, respectively. As a group they are referred to as bounded FPT. Interestingly, the reductions may be different for the different classes, because the idea is to give the maximum power possible to the reductions and yet still remain within the class. For a treatment of this idea in a general framework, we mention Downey, Flum, Grohe and Weyer FlumGW06boun ().

Of course, any reduction which is polynomial in both variables will keep the problem within the relevant class. As an example, one of the most useful reductions here is the EPT reduction, which asks that there is a function (x,k) ↦ (x′,k′) computable in time 2^{O(k)}·|x|^{O(1)} and a constant c such that k′ ≤ c·k. It is easy to see that EPT is closed under EPT reductions.

Concentrating on EPT as a representative example, the next step is to introduce classes akin to the W-hierarchy for the bounded theory. For example, a nondeterministic 2^{O(k)}-restricted Turing machine is one which, on input (x,k), performs at most 2^{O(k)}·|x|^{O(1)} many steps, only a bounded number of which (in terms of k and log|x|) are nondeterministic. Using the acceptance problem for such machines as the core problem, and closing under EPT reductions, defines the class EW[P]. More generally, it is possible to extend this definition to define another hierarchy akin to the W-hierarchy, called the EW-hierarchy. It is, of course, easy to show that the classes EPT, EXPT, SUBEPT are all distinct by diagonalization. But what the new hierarchy allows for is to show that various problems which are all W[1]-hard, say, differ on the new hierarchy, aligning with our intuition that they should. As an illustration, there is a problem from computational learning theory called the Vapnik–Chervonenkis Dimension which is proven to be W[1]-complete by combining Downey, Evans and Fellows DowneyEF93para () and Downey and Fellows DowneyF95survey (). The hardness proof in DowneyEF93para () used explicitly non-polynomial FPT reductions. It turns out that Vapnik–Chervonenkis Dimension is complete for EW[3], and yet the problem Tournament Dominating Set, which is W[2]-complete, is complete for the class EW[2].

Little is known here. It would be fascinating if a central FPT problem whose current algorithms are infeasible, such as Treewidth, could be shown to be complete for, say, a level of the EW-hierarchy, which would suggest that no reasonable FPT algorithm is possible. The current algorithms only put Treewidth into EXPT.

From FlumGW06boun (), one example of this phenomenon is First-Order Model-Checking over Words: it is in FPT but even EAW[*]-hard (a bounded analog of AW[*]).

### 3.12 Things left out

In such a short article, we do not really have space to devote to the many areas of application of this hardness theory. Suffice it to say, it has been applied in relational databases, phylogeny, linguistics, VLSI design, graph structure theory, cognitive science, Nash equilibria, voting schemes, operations research, etc. We can only point at the various survey articles, the books by Niedermeier Niedermeier06invi (), Fernau Fernau05para (), and Flum and Grohe FlumG06para (), as well as the Computer Journal issues mentioned earlier Downey08thec ().

Also, we have not had space to devote to the many other natural hierarchies based on various aspects of logical depth, such as the A- and AW-hierarchies, nor to issues like parameterized parallelism. Here we refer the reader to Flum and Grohe FlumG06para (). Finally, we have not really developed all of the models which have been used.

What we hope the reader has gotten is the general flavor of the parameterized complexity subject and some appreciation of the techniques.

## 4 Parameterized algorithms

The diversity of problems and areas where parameterized algorithms have been developed is enormous. In this section we give an, unavoidably incomplete, presentation of the main techniques and results in the design of parameterized algorithms. To facilitate our description, we mainly focus on problems on graphs.

Some basic notational conventions on graphs follow. Given a graph G, we denote its vertex and edge sets by V(G) and E(G) respectively. Given a vertex v ∈ V(G), we denote the set of its neighbors by N(v). When a graph G is the input of a parameterized problem, we always denote by n the number of its vertices.

### 4.1 De-nondeterminization

Parameterized algorithm design requires a wider viewpoint than the classic one of polynomial algorithm design. The reason is that we now permit time complexities that are super-polynomial. However, we have to make sure that this super-polynomial overhead depends only on the parameter. A wide family of techniques in parameterized algorithm design can be seen as ways to turn polynomial non-deterministic algorithms into deterministic ones where the resulting super-polynomial overhead is independent of the main part of the problem.

#### 4.1.1 Bounded search trees

The most ubiquitous de-nondeterminization technique is the bounded search tree technique. We present it on one of the most extensively studied problems in parameterized algorithms and complexity:

k-Vertex Cover

Instance:  A graph G and a non-negative integer k.
Parameter: k.
Question:  Does G have a vertex set of size at most k that intersects all the edges of G?

This problem can be solved by the following non-deterministic algorithm:

 1. set S←∅ and i←k, 2. while E(G)≠∅ and i>0, consider some edge e={v,u} of G, guess non-deterministically one, say x, of the two endpoints of e, set S←S∪{x}, G←G∖x, i←i−1 3. if E(G)=∅ then return YES 4. if i=0, then return NO

Clearly, this algorithm is based on the fact that, for each edge, one of its endpoints must be in every vertex cover. It makes at most k non-deterministic choices, each requiring a polynomial number of deterministic steps. This polynomial non-deterministic algorithm can be converted to an (exponential) deterministic one as follows.

 Algorithm algvc(G,k) 1. if |E(G)|=0, then return YES 2. if k=0, then return NO 3. choose (arbitrarily) an edge e={v,u}∈E(G) and return algvc(G−v,k−1)⋁algvc(G−u,k−1)

Notice that the above algorithm makes at most 2^k recursive calls and the depth of the recursion is at most k. Therefore it takes O(2^k·n) steps and is an FPT-algorithm. This implies that k-Vertex Cover ∈ FPT. Notice that the algorithm is based on the transformation of a non-deterministic algorithm into a deterministic one in a way that the exponential blow-up (i.e., the size of the search tree) depends exclusively on the parameter k. This idea traces back to the paper of Buss and Goldsmith BussG93nond () and the corresponding technique is called the bounded search tree technique.
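The recursive procedure above can be rendered as a short runnable sketch; the edge-list representation and function name are our own choices:

```python
def algvc(edges, k):
    """Bounded search tree for k-Vertex Cover.  Branch on the two endpoints
    of an arbitrary remaining edge; the search tree has depth at most k, so
    the running time is O(2^k * |E|)."""
    if not edges:
        return True   # no edges left: the chosen vertices form a cover
    if k == 0:
        return False  # edges remain but the budget is exhausted
    u, v = edges[0]
    # every vertex cover must contain u or v, so try both
    return (algvc([e for e in edges if u not in e], k - 1)
            or algvc([e for e in edges if v not in e], k - 1))
```

Deleting a chosen endpoint is simulated by dropping all edges incident to it, which is all the recursion needs.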

Can we do better? A positive answer requires a better combination of non-deterministic guesses. As an example, instead of an edge one may pick a path P with vertices a, b, and c. Then every vertex cover of G will contain some of the pairs {a,b}, {b,c}, {a,c}. That way, each recursive call now makes 3 calls but also guesses 2 vertices, and therefore the depth of the recursion is at most k/2. In case G does not contain a path with 3 vertices, then G is a forest of stars and in such a case k-Vertex Cover can be solved in linear time. Summing all this together, we have an O(3^{k/2}·n) step FPT-algorithm for k-Vertex Cover which improves the previous one, as 3^{1/2} ≈ 1.733 < 2. An even faster algorithm can be designed if we exploit the fact that k-Vertex Cover can be solved in linear time for graphs of maximum degree 2. Then, as long as there is a vertex v with at least 3 neighbors, we know that a vertex cover should contain v or all its neighbors. An elementary analysis implies that the size T(k) of the search tree satisfies the relation T(k) ≤ T(k−1) + T(k−3). As the biggest root of the characteristic polynomial is less than 1.466, we have an FPT-algorithm for k-Vertex Cover that runs in O(1.466^k·n) steps.
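For the degree-3 branching (take v, so the parameter drops by 1, or take its at least 3 neighbors, so it drops by at least 3), the search tree size obeys T(k) ≤ T(k−1) + T(k−3), and the base of the exponential bound is the largest root of the characteristic polynomial x³ = x² + 1. A small bisection sketch (the helper name and interface are ours) verifies this numerically:

```python
def branching_number(shifts, lo=1.0, hi=2.0, iters=60):
    """Largest root of x^d = sum of x^(d - s) over s in shifts, found by
    bisection: the base of the exponential growth of a search tree whose
    branches reduce the parameter k by the amounts in `shifts`."""
    d = max(shifts)
    f = lambda x: x**d - sum(x**(d - s) for s in shifts)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For shifts (1, 3) this computes the root of x³ − x² − 1, roughly 1.4656, matching the 1.466 bound; for the naive two-way branch, shifts (1, 1), it returns 2, matching the O(2^k) analysis.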

Especially for k-Vertex Cover, there is a long sequence of improvements of the running time, based on even more refined search trees. The first non-trivial result dates back to the O(kn + 1.324718^k·k²) step algorithm of Balasubramanian, Fellows, and Raman BalasubramanianFR98anim (). This result was improved in ChenKJ01vert (); NiedermeierR99uppe () and, currently, the fastest parameterized algorithm for k-Vertex Cover runs in O(1.2738^k + kn) steps, by Chen, Kanj, and Xia ChenKX10impr (). For applications of the same technique to restricted versions of the same problem, see ChenLJ00impr (); NiedermeierR03onef ().

We stress that the bounded search tree technique is strongly linked to the design of exact algorithms, as FPT-algorithms can be seen as exact algorithms where the parameter is no longer restricted. For instance, the