Limits of Preprocessing
Research funded by the ERC (COMPLEX REASON, Grant Reference 239962).
We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems.
Keywords: Fixed-Parameter Tractability, Constraint Satisfaction, Global Constraints, Satisfiability, Bayesian Networks, Normal Logic Programs, Computational Complexity.
1 Introduction
Many important computational problems that arise in various areas of AI are intractable. Nevertheless, AI research has been very successful in developing and implementing heuristic solvers that work well on real-world instances. An important component of virtually every solver is a powerful polynomial-time preprocessing procedure that reduces the problem input. For instance, preprocessing techniques for the propositional satisfiability problem are based on Boolean Constraint Propagation (see, e.g., ?, ?), CSP solvers make use of various local consistency algorithms that filter the domains of variables (see, e.g., ?, ?); similar preprocessing methods are used by solvers for Nonmonotonic and Bayesian reasoning problems (see, e.g., ?, ?, ?, ?, respectively).
Until recently, no provable performance guarantees for polynomial-time preprocessing methods had been obtained, and so preprocessing was only the subject of empirical studies. A possible reason for the lack of theoretical results is a certain inadequacy of the P vs NP framework for such an analysis: if we could reduce in polynomial time an instance of an NP-hard problem just by one bit, then we could solve the entire problem in polynomial time by repeating the reduction step a polynomial number of times, and P = NP follows.
With the advent of parameterized complexity (?), a new theoretical framework became available that provides suitable tools to analyze the power of preprocessing. Parameterized complexity considers a problem in a two-dimensional setting, where in addition to the input size n, a problem parameter k is taken into consideration. This parameter can encode a structural aspect of the problem instance. A problem is called fixed-parameter tractable (FPT) if it can be solved in time f(k)·p(n) where f is a function of the parameter k and p is a polynomial of the input size n. Thus, for FPT problems, the combinatorial explosion can be confined to the parameter and is independent of the input size. It is known that a problem is fixed-parameter tractable if and only if every problem input can be reduced by polynomial-time preprocessing to an equivalent input whose size is bounded by a function of the parameter (?). The reduced instance is called the problem kernel, the preprocessing is called kernelization. The power of polynomial-time preprocessing can now be benchmarked in terms of the size of the kernel. Once a small kernel is obtained, we can apply any method of choice to solve the kernel: brute-force search, heuristics, approximation, etc. (?). Because of this flexibility a small kernel is generally preferable to a less flexible branching-based fixed-parameter algorithm. Thus, small kernels provide an additional value that goes beyond bare fixed-parameter tractability.
In general the size of the kernel is exponential in the parameter, but many important NP-hard optimization problems such as Minimum Vertex Cover, parameterized by solution size, admit polynomial kernels, see, e.g., (?) for references.
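For illustration, the classical Buss kernelization for Vertex Cover, parameterized by solution size k, yields a kernel with at most k² edges. The following is a minimal sketch (the function name and edge-set representation are ours):

```python
def buss_kernel(edges, k):
    """Buss-style kernelization for Vertex Cover (illustrative sketch).

    A vertex of degree > k must belong to every size-k cover, so we
    take it and decrement k.  Once no such vertex remains, any
    yes-instance has at most k*k edges; larger instances are rejected.
    Returns (reduced_edges, k') or None for a definite no-instance.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:  # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None          # no vertex cover of size <= k exists
    return edges, k          # kernel with at most k^2 edges
```

The quadratic bound on the kernel is exactly the kind of performance guarantee discussed above.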
In previous research several NP-hard AI problems have been shown to be fixed-parameter tractable. We list some important examples from various areas:
Constraint satisfaction problems (CSP) over a fixed universe of values, parameterized by the induced width (?).
Consistency and generalized arc consistency for intractable global constraints, parameterized by the cardinalities of certain sets of values (?).
Propositional satisfiability (SAT), parameterized by the size of backdoors (?).
Positive inference in Bayesian networks with variables of bounded domain size, parameterized by the size of loop cutsets (?; ?).
Nonmonotonic reasoning with normal logic programs, parameterized by feedback width (?).
However, only exponential kernels are known for these fundamental AI problems. Can we hope for polynomial kernels?
Our results are negative throughout. We provide strong theoretical evidence that none of the above fixed-parameter tractable AI problems admits a polynomial kernel. More specifically, we show that a polynomial kernel for any of these problems causes a collapse of the Polynomial Hierarchy to its third level, which is considered highly unlikely by complexity theorists.
Our results are general: The kernel lower bounds are not limited to a particular preprocessing technique but apply to any clever technique that could be conceived in future research. Hence the results contribute to the foundations of AI.
Our results suggest the investigation of alternative approaches to polynomial-time preprocessing; for instance, preprocessing that produces in polynomial time a Boolean combination of polynomially sized kernels instead of one single kernel.
2 Formal Background
A parameterized problem P is a subset of Σ* × ℕ for some finite alphabet Σ. For a problem instance (x, k) ∈ Σ* × ℕ we call x the main part and k the parameter. We assume the parameter is represented in unary. For the parameterized problems considered in this paper, the parameter is a function of the main part, i.e., k = π(x) for a function π. We then denote the problem as P(π), e.g., U-CSP(width) denotes the problem U-CSP parameterized by the width of the given tree decomposition.
A parameterized problem P is fixed-parameter tractable if there exists an algorithm that solves any input (x, k) ∈ Σ* × ℕ in time f(k)·p(|x|) where f is an arbitrary computable function of k and p is a polynomial in the input size |x|.
A kernelization for a parameterized problem P ⊆ Σ* × ℕ is an algorithm that, given (x, k) ∈ Σ* × ℕ, outputs in time polynomial in |x| + k a pair (x′, k′) ∈ Σ* × ℕ such that (i) (x, k) ∈ P if and only if (x′, k′) ∈ P and (ii) |x′| + k′ ≤ g(k), where g is an arbitrary computable function, called the size of the kernel. In particular, for constant k the kernel has constant size g(k). If g is a polynomial then we say that P admits a polynomial kernel.
Every fixed-parameter tractable problem admits a kernel. This can be seen by the following argument due to ? (?). Assume we can decide instances (x, k) of problem P in time f(k)·|x|^{O(1)}. We kernelize an instance (x, k) as follows. If |x| ≤ f(k) then we already have a kernel of size at most f(k). Otherwise, if |x| > f(k), then f(k)·|x|^{O(1)} = |x|^{O(1)} is a polynomial; hence we can decide the instance in polynomial time and replace it with a small decision-equivalent instance. Thus we always have a kernel of size at most f(k). However, f(k) is super-polynomial for NP-hard problems (unless P = NP), hence this generic construction does not provide polynomial kernels.
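The folklore argument above can be phrased as a three-line procedure. The sketch below is ours; `fpt_solve` stands for the assumed FPT algorithm, and `yes_stub`/`no_stub` for fixed trivial yes/no instances of the problem:

```python
def generic_kernel(instance, k, f, fpt_solve, yes_stub, no_stub):
    """Sketch of the folklore 'FPT implies kernel' argument.

    If |instance| > f(k), the FPT running time f(k)*poly(|x|) is itself
    polynomial, so we may solve the instance outright and output a
    constant-size equivalent stub; otherwise the instance itself is
    already a kernel of size at most f(k).
    """
    if len(instance) <= f(k):
        return instance, k            # already a kernel of size <= f(k)
    return (yes_stub, k) if fpt_solve(instance, k) else (no_stub, k)
```

The output is always decision-equivalent to the input and of size at most f(k), exactly as in the argument above.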
We understand preprocessing for an NP-hard problem as a polynomial-time procedure that transforms an instance of the problem into a (possibly smaller) solution-equivalent instance of the same problem. Kernelization is such a preprocessing with a performance guarantee, i.e., we are guaranteed that the preprocessing yields a kernel whose size is bounded in terms of the parameter of the given problem instance. Different forms of preprocessing have also been considered in the literature. An important one is knowledge compilation, a two-phase approach to reasoning problems where in a first phase a given knowledge base is (possibly in exponential time) preprocessed ("compiled"), such that in a second phase various queries can be answered in polynomial time (?).
3 Tools for Kernel Lower Bounds
In the sequel we will use recently developed tools to obtain kernel lower bounds. Our kernel lower bounds are subject to the widely believed complexity theoretic assumption NP ⊈ coNP/poly (or, equivalently, coNP ⊈ NP/poly). In other words, the tools allow us to show that a parameterized problem does not admit a polynomial kernel unless the Polynomial Hierarchy collapses to its third level (see, e.g., ?, ?).
A composition algorithm for a parameterized problem P ⊆ Σ* × ℕ is an algorithm that receives as input a sequence (x_1, k), …, (x_t, k) ∈ Σ* × ℕ, uses time polynomial in Σ_{i=1}^t |x_i| + k, and outputs (y, k′) ∈ Σ* × ℕ with (i) (y, k′) ∈ P if and only if (x_i, k) ∈ P for some 1 ≤ i ≤ t, and (ii) k′ is polynomial in k. A parameterized problem is compositional if it has a composition algorithm. With each parameterized problem P we associate a classical problem
P^u = { x#1^k : (x, k) ∈ P },
where 1 denotes an arbitrary symbol from Σ and # is a new symbol not in Σ. We call P^u the unparameterized version of P.
The following result is the basis for our kernel lower bounds.
Theorem 1 (Bodlaender et al. 2009; Fortnow and Santhanam 2008).
Let P be a parameterized problem whose unparameterized version P^u is NP-complete. If P is compositional, then P does not admit a polynomial kernel unless NP ⊆ coNP/poly, i.e., the Polynomial Hierarchy collapses.
Let P and Q be parameterized problems. We say that P is polynomial parameter reducible to Q if there exists a polynomial-time computable function K: Σ* × ℕ → Σ* × ℕ and a polynomial p, such that for all (x, k) ∈ Σ* × ℕ we have (i) (x, k) ∈ P if and only if K(x, k) = (x′, k′) ∈ Q, and (ii) k′ ≤ p(k). The function K is called a polynomial parameter transformation.
The following theorem allows us to transform kernel lower bounds from one problem to another.
Theorem 2 (Bodlaender, Thomassé, and Yeo 2009).
Let P and Q be parameterized problems such that P^u is NP-complete, Q^u is in NP, and there is a polynomial parameter transformation from P to Q. If Q has a polynomial kernel, then P has a polynomial kernel.
4 Constraint Networks
Constraint networks have proven successful in modeling everyday cognitive tasks such as vision, language comprehension, default reasoning, and abduction, as well as in applications such as scheduling, design, diagnosis, and temporal and spatial reasoning (?). A constraint network is a triple N = (X, U, C) where X is a finite set of variables, U is a finite universe of values, and C is a set of constraints. Each constraint is a pair (S, R) where S = (x_1, …, x_r) is a list of variables of length r called the constraint scope, and R ⊆ U^r is an r-ary relation over U, called the constraint relation. The tuples of R indicate the allowed combinations of simultaneous values for the variables in S. A solution is a mapping τ: X → U such that for each constraint (S, R) ∈ C with S = (x_1, …, x_r) we have (τ(x_1), …, τ(x_r)) ∈ R. A constraint network is satisfiable if it has a solution.
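For concreteness, the definitions above can be rendered as a brute-force satisfiability check; the data layout (scope/relation pairs) is ours and the search is of course exponential:

```python
from itertools import product

def satisfiable(variables, universe, constraints):
    """Decide satisfiability of a constraint network N = (X, U, C) by
    exhaustive search over all assignments (illustrative only).

    `constraints` is a list of pairs (scope, relation) where scope is
    a tuple of variables and relation is a set of value tuples over U.
    """
    for values in product(universe, repeat=len(variables)):
        tau = dict(zip(variables, values))
        if all(tuple(tau[x] for x in scope) in rel
               for scope, rel in constraints):
            return True   # tau is a solution
    return False
```

A tree-decomposition-based algorithm replaces this global enumeration by dynamic programming over bags, which is what makes the problem fixed-parameter tractable in the width.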
With a constraint network N = (X, U, C) we associate its constraint graph G(N) = (X, E) where E contains an edge between two variables if and only if they occur together in the scope of a constraint. A width-w tree decomposition of a graph G is a pair (T, χ) where T is a tree and χ is a labeling of the nodes of T with sets of vertices of G such that the following properties are satisfied: (i) every vertex of G belongs to χ(t) for some node t of T; (ii) every edge of G is contained in χ(t) for some node t of T; (iii) for each vertex v of G, the set of all tree nodes t with v ∈ χ(t) induces a connected subtree of T; (iv) |χ(t)| ≤ w + 1 holds for all tree nodes t. The treewidth of G is the smallest w such that G has a width-w tree decomposition. The induced width of a constraint network is the treewidth of its constraint graph (?). We note in passing that the problem of finding a tree decomposition of width w is NP-hard but fixed-parameter tractable in w.
Let U be a fixed universe containing at least two elements. We consider the following parameterized version of the constraint satisfaction problem (CSP).
U-CSP(width)
Instance: A constraint network N = (X, U, C) and a width-w tree decomposition T of the constraint graph of N.
Parameter: The integer w.
Question: Is N satisfiable?
It is well known that U-CSP(width) is fixed-parameter tractable for any fixed universe U (?; ?) (for generalizations see ?, ?). We contrast this classical result and show that it is unlikely that U-CSP(width) admits a polynomial kernel, even in the simplest case where U = {0, 1}.
Theorem 3. U-CSP(width) does not admit a polynomial kernel unless the Polynomial Hierarchy collapses.
Proof. We show that U-CSP(width) is compositional; we write 0 and 1 for two fixed distinct elements of U. Let (N_i, T_i), 1 ≤ i ≤ t, be a given sequence of instances of U-CSP(width) where N_i = (X_i, U, C_i) is a constraint network and T_i is a width-w tree decomposition of the constraint graph of N_i. We may assume, w.l.o.g., that X_i ∩ X_j = ∅ for 1 ≤ i < j ≤ t (otherwise we can simply change the names of variables). We form a new constraint network N = (X, U, C) as follows. We put X = X_1 ∪ ⋯ ∪ X_t ∪ {z_0, …, z_t} where z_0, …, z_t are new variables. We define the set C of constraints in three groups.
(1) For each 1 ≤ i ≤ t and each constraint (S, R) ∈ C_i with S = (x_1, …, x_r) we add to C a new constraint (S′, R′) where S′ = (x_1, …, x_r, z_{i−1}, z_i) and R′ = (R × {(0, 1)}) ∪ (U^r × {(0, 0), (1, 1)}).
(2) We add ternary constraints ((z_{i−1}, z_i, z_{i+1}), R_≤) where R_≤ = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)}, for 1 ≤ i ≤ t − 1.
(3) Finally, we add two unary constraints ((z_0), {(0)}) and ((z_t), {(1)}) which force the values of z_0 and z_t to 0 and 1, respectively.
Let G_i = G(N_i) and G = G(N) be the constraint graphs of N_i and N, respectively. Fig. 1 shows an illustration of G for t = 3.
We observe that z_0, …, z_t are cut vertices of G. Removing these vertices separates G into independent parts G_1, …, G_t where G_i is isomorphic to G(N_i). By standard techniques (see, e.g., ?, ?), we can put the given width-w tree decompositions T_1, …, T_t and a trivial constant-width tree decomposition of the part induced by z_0, …, z_t together to a width-(w + 2) tree decomposition T of G (it suffices to add z_{i−1} and z_i to every bag of T_i). Clearly (N, T) can be obtained from (N_i, T_i), 1 ≤ i ≤ t, in polynomial time.
We claim that N is satisfiable if and only if at least one of the N_i is satisfiable. This claim can be verified by means of the following observations. The constraints in groups (2) and (3) provide that for any satisfying assignment there is some 1 ≤ i ≤ t such that z_0, …, z_{i−1} are all set to 0 and z_i, …, z_t are all set to 1; consequently (z_{i−1}, z_i) is set to (0, 1), and (z_{j−1}, z_j) is set to (0, 0) or (1, 1) for every j ≠ i. The constraints in group (1) provide that if we set (z_{i−1}, z_i) to (0, 1), then we obtain from (S′, R′) the original constraint (S, R); if we set z_{j−1} and z_j to equal values, then we obtain a constraint that is satisfied by all values of the remaining variables. We conclude that U-CSP(width) is compositional.
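For concreteness, the composition step can be sketched in Python. The encoding below (selector variables z_0, …, z_t, a monotone chain, and extended relations over universe {0, 1}) is one way to realize such a construction; names and data layout are illustrative:

```python
from itertools import product

def compose(networks):
    """OR-composition of binary constraint networks (illustrative).

    Each network is a pair (variables, constraints), constraints as
    (scope, relation) pairs; variable sets must be pairwise disjoint.
    Selectors z0..zt form a monotone 0/1 chain; the unique position i
    with z_{i-1}=0, z_i=1 activates network N_i, all others become
    trivially satisfiable.
    """
    t = len(networks)
    zs = [("z", i) for i in range(t + 1)]
    variables, constraints = list(zs), []
    for i, (xs, cs) in enumerate(networks, start=1):
        variables += list(xs)
        for scope, rel in cs:
            r = len(scope)
            ext = {tup + (0, 1) for tup in rel}               # active
            ext |= {tup + (a, a)                              # disabled
                    for tup in product((0, 1), repeat=r) for a in (0, 1)}
            constraints.append((tuple(scope) + (zs[i - 1], zs[i]), ext))
    mono = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)
            if a <= b <= c}
    for i in range(1, t):
        constraints.append(((zs[i - 1], zs[i], zs[i + 1]), mono))
    constraints.append(((zs[0],), {(0,)}))   # force z0 = 0
    constraints.append(((zs[t],), {(1,)}))   # force zt = 1
    return variables, constraints
```

The composed network is satisfiable exactly when some input network is, and its size is polynomial in the total input size.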
In order to apply Theorem 1, it remains to establish that the unparameterized version of U-CSP(width) is NP-complete. Deciding whether a constraint network over the universe {0, 1} is satisfiable is well known to be NP-complete (say, by a reduction from 3-SAT). To a constraint network N on n variables we can always add a trivial width-(n − 1) tree decomposition of its constraint graph (taking a single tree node t where χ(t) contains all variables of N). Hence the unparameterized version of U-CSP(width) is NP-complete. ∎
5 Satisfiability
The propositional satisfiability problem (SAT) was the first problem shown to be NP-hard (?). Despite its hardness, SAT solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification, automatic test pattern generation, planning, scheduling, and even challenging problems from algebra (?). SAT solvers are capable of exploiting the hidden structure present in real-world problem instances. The concept of backdoors, introduced by ? (?), provides a means for making the vague notion of hidden structure explicit. Backdoors are defined with respect to a "sub-solver," which is a polynomial-time algorithm that correctly decides satisfiability for a class of CNF formulas. More specifically, ? (?) define a sub-solver A to be an algorithm that takes as input a CNF formula F and has the following properties: (i) Trichotomy: A either rejects the input F or determines F correctly as unsatisfiable or satisfiable; (ii) Efficiency: A runs in polynomial time; (iii) Trivial Solvability: A can determine if F is trivially satisfiable (has no clauses) or trivially unsatisfiable (contains only the empty clause); (iv) Self-Reducibility: if A determines F, then for any variable x and value ε ∈ {0, 1}, A determines F[x = ε]. Here F[τ] denotes the formula obtained from F by applying the partial assignment τ, i.e., satisfied clauses are removed and false literals are removed from the remaining clauses.
We identify a sub-solver A with the class of CNF formulas whose satisfiability can be determined by A. A strong A-backdoor set (or A-backdoor, for short) of a CNF formula F is a set B of variables such that for each possible truth assignment τ to the variables in B, the satisfiability of F[τ] can be determined by the sub-solver A. Hence, if we know an A-backdoor of F of size k, we can decide the satisfiability of F by running A on the 2^k instances F[τ], and so SAT decision is fixed-parameter tractable in the backdoor size. In particular, the following problem is fixed-parameter tractable for every sub-solver A.
SAT(A-backdoor)
Instance: A CNF formula F and an A-backdoor B of F of size k.
Parameter: The integer k.
Question: Is F satisfiable?
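For the concrete case A = Horn, the backdoor-based decision procedure can be sketched as follows (a minimal forward-chaining Horn solver plus enumeration of the 2^|B| backdoor assignments; helper names and the DIMACS-style clause encoding are ours):

```python
from itertools import product

def horn_sat(clauses):
    """Minimal Horn-SAT via forward chaining (clauses are lists of
    nonzero ints; at most one positive literal per clause assumed)."""
    true_vars, changed = set(), True
    while changed:
        changed = False
        for clause in clauses:
            if any(l > 0 and l in true_vars for l in clause):
                continue  # clause already satisfied
            if all(-l in true_vars for l in clause if l < 0):
                pos = [l for l in clause if l > 0]
                if not pos:
                    return False  # all literals falsified
                true_vars.add(pos[0])
                changed = True
    return True

def sat_via_backdoor(clauses, backdoor):
    """Decide SAT by trying all assignments to the backdoor variables;
    each reduced formula is assumed to be Horn (B is a Horn-backdoor)."""
    for bits in product([False, True], repeat=len(backdoor)):
        tau = dict(zip(backdoor, bits))
        reduced, conflict = [], False
        for clause in clauses:
            rest, sat = [], False
            for l in clause:
                if abs(l) in tau:
                    sat = sat or ((l > 0) == tau[abs(l)])
                else:
                    rest.append(l)
            if sat:
                continue
            if not rest:
                conflict = True   # empty clause under this assignment
                break
            reduced.append(rest)
        if not conflict and horn_sat(reduced):
            return True
    return False
```

The running time is O(2^k) sub-solver calls, mirroring the fixed-parameter tractability argument above.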
We are concerned with the question of whether instead of trying all possible partial assignments we can reduce the instance to a polynomial kernel. We will establish a very general result that applies to all possible sub-solvers.
Theorem 4. SAT(A-backdoor) does not admit a polynomial kernel for any sub-solver A unless the Polynomial Hierarchy collapses.
Proof. We devise a polynomial parameter transformation from the following parameterized problem, which is known to be compositional (?) and therefore unlikely to admit a polynomial kernel.
SAT(vars)
Instance: A propositional formula F in CNF on n variables.
Parameter: The integer n.
Question: Is F satisfiable?
Let F be a CNF formula and B the set of all variables of F. Due to property (iii) of a sub-solver (Trivial Solvability), B is an A-backdoor set of F for any sub-solver A. Hence mapping F (as an instance of SAT(vars)) to (F, B) (as an instance of SAT(A-backdoor)) provides a (trivial) polynomial parameter transformation from SAT(vars) to SAT(A-backdoor). Since the unparameterized versions of both problems are clearly NP-complete, the result follows by Theorem 2. ∎
Let 3SAT(π) (where π is an arbitrary parameterization) denote the problem SAT(π) restricted to 3CNF formulas, i.e., to CNF formulas where each clause contains at most three literals. In contrast to SAT(vars), the parameterized problem 3SAT(vars) has a trivial polynomial kernel: if we remove duplicate clauses, then any 3CNF formula on n variables contains O(n³) clauses, and so is a polynomial kernel. Hence the easy proof of Theorem 4 does not carry over to 3SAT(A-backdoor). We therefore consider the cases 3SAT(Horn-backdoor) and 3SAT(2CNF-backdoor) separately; these cases are important since the detection of Horn- and 2CNF-backdoors is fixed-parameter tractable (?).
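The trivial kernel argument can be made concrete (function name is ours; clauses are sets of signed integers):

```python
def kernelize_3sat(clauses):
    """The trivial kernel for 3SAT(vars): deduplicate clauses.

    Over n variables there are fewer than (2n + 1)^3 distinct clauses
    with at most three literals, so after removing duplicates the
    formula has O(n^3) clauses -- a polynomial kernel in n.
    """
    return {frozenset(clause) for clause in clauses}
```

This is why a lower bound for 3SAT needs the more involved composition given below, rather than the trivial transformation used for SAT(vars).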
Theorem 5. Neither 3SAT(Horn-backdoor) nor 3SAT(2CNF-backdoor) admits a polynomial kernel unless the Polynomial Hierarchy collapses.
Proof. Let A ∈ {Horn, 2CNF}. We show that 3SAT(A-backdoor) is compositional. Let (F_i, B_i), 1 ≤ i ≤ t, be a given sequence of instances of 3SAT(A-backdoor) where F_i is a 3CNF formula and B_i is an A-backdoor set of F_i of size k. We distinguish two cases.
Case 1: t > 2^k. Let n = Σ_{i=1}^t |F_i|. Whether F_i is satisfiable can be decided in time O(2^k |F_i|), since the satisfiability of a Horn or 2CNF formula can be decided in linear time. Hence we can check whether at least one of the formulas F_1, …, F_t is satisfiable in time O(2^k n) = O(tn), which is polynomial in the input size. If some F_i is satisfiable, we output (F_i, B_i); otherwise we output (F_1, B_1), which is unsatisfiable. Hence we have a composition algorithm.
Case 2: t ≤ 2^k. This case is more involved. We construct a new instance of 3SAT(A-backdoor) as follows.
Let s = ⌈log₂ t⌉. Since t ≤ 2^k, s ≤ k follows.
Let X_i denote the set of variables of F_i. We may assume, w.l.o.g., that B_1 = ⋯ = B_t = B and that X_i ∩ X_j = B for all 1 ≤ i < j ≤ t, since otherwise we can change the names of variables accordingly. We may further assume that every clause of F_i contains at least one variable from X_i \ B (a clause C entirely over B can be replaced by the two clauses (C \ {ℓ}) ∪ {y} and (¬y ∨ ℓ) for some literal ℓ ∈ C and a new variable y; this preserves satisfiability, the 3CNF property, and the backdoor B). In a first step we obtain from every F_i a CNF formula F_i′ as follows. For each variable x ∈ X_i \ B we take s + 1 new variables x_i^0, …, x_i^s. We replace each positive occurrence of x in F_i with the literal x_i^0 and each negative occurrence of x with the literal ¬x_i^s. We add all clauses of the form (¬x_i^{j−1} ∨ x_i^j) for 1 ≤ j ≤ s; we call these clauses "connection clauses." Let F_i′ be the formula obtained from F_i in this way. We observe that F_i and F_i′ are SAT-equivalent, since for each variable x the connection clauses form an implication chain from x_i^0 to x_i^s. Since the connection clauses are both Horn and 2CNF, B is also an A-backdoor of F_i′.
We take a set W = {w_1, …, w_s} of new variables. Let c_1, …, c_{2^s} be the sequence of all possible clauses (modulo permutation of literals within a clause) containing exactly one literal for each of the s variables in W. Consequently we can write each c_i as c_i = (ℓ_1^i ∨ ⋯ ∨ ℓ_s^i) where ℓ_j^i ∈ {w_j, ¬w_j}.
For 1 ≤ i ≤ t we add to each connection clause (¬x_i^{j−1} ∨ x_i^j) of F_i′ the literal ℓ_j^i. Let F_i″ denote the 3CNF formula obtained from F_i′ in this way.
For t < i ≤ 2^s we define 3CNF formulas G_i as follows. If s ≤ 3 then G_i consists just of the clause c_i. If s > 3 then we take new variables y_1^i, …, y_{s−3}^i and let G_i consist of the clauses (ℓ_1^i ∨ ℓ_2^i ∨ y_1^i), (¬y_j^i ∨ ℓ_{j+2}^i ∨ y_{j+1}^i) for 1 ≤ j ≤ s − 4, and (¬y_{s−3}^i ∨ ℓ_{s−1}^i ∨ ℓ_s^i). Finally, we let F be the 3CNF formula containing all the clauses from F_1″, …, F_t″, G_{t+1}, …, G_{2^s}. The formulas G_i ensure that any satisfying assignment of F must, on W, falsify some clause c_i with i ≤ t; the connection clauses of this F_i″ then form unbroken implication chains, and so F_i must be satisfied. Conversely, any assignment to W that satisfies c_i can be extended to an assignment that satisfies F_i″, since such an assignment satisfies at least one connection clause in every chain of F_i″, and so the chain of implications from x_i^0 to x_i^s is broken.
It is not difficult to verify the following two claims: (i) F is satisfiable if and only if at least one of the formulas F_1, …, F_t is satisfiable; (ii) B ∪ W is an A-backdoor of F of size k + s ≤ 2k. Hence we have a composition algorithm also in Case 2, and thus 3SAT(A-backdoor) is compositional. Clearly the unparameterized version of 3SAT(A-backdoor) is NP-complete, hence the result follows from Theorem 1. ∎
6 Global Constraints
The success of today’s constraint solvers relies heavily on efficient algorithms for special purpose global constraints (?). A global constraint specifies a pattern that frequently occurs in real-world problems, for instance, it is often required that variables must all take different values (e.g., activities requiring the same resource must all be assigned different times). The AllDifferent global constraint efficiently encodes this requirement.
More formally, a global constraint is defined for a set X of variables; each variable x ∈ X ranges over a finite domain D(x) of values. An instantiation is an assignment α such that α(x) ∈ D(x) for each x ∈ X. A global constraint defines which instantiations are legal and which are not. A global constraint is consistent if it has at least one legal instantiation, and it is domain consistent (or hyper-arc consistent) if for each variable x ∈ X and each value v ∈ D(x) there is a legal instantiation α with α(x) = v. For all global constraints considered in this paper, domain consistency can be reduced to a quadratic number of consistency checks, hence we will focus on consistency. We assume that the size of a representation of a global constraint is polynomial in Σ_{x∈X} |D(x)|.
For several important types of global constraints, the problem of deciding whether a constraint of type C is consistent (in symbols C-Cons) is NP-hard. Examples of such intractable types of constraints are NValue, Disjoint, and Uses (?). An NValue constraint over a set X of variables and an integer N requires from a legal instantiation α that |{α(x) : x ∈ X}| = N; AllDifferent is the special case where N = |X|. The global constraints Disjoint and Uses are specified by two sets of variables X and Y; Disjoint requires that α(x) ≠ α(y) for each pair x ∈ X and y ∈ Y; Uses requires that for each y ∈ Y there is some x ∈ X such that α(x) = α(y). For a set X of variables we write D(X) = ⋃_{x∈X} D(x).
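The legality conditions just defined can be stated directly in code (a minimal sketch; `alpha` maps variables to values, and the function names are ours):

```python
def nvalue_legal(alpha, xs, n):
    """NValue: the instantiation uses exactly n distinct values on xs."""
    return len({alpha[x] for x in xs}) == n

def disjoint_legal(alpha, xs, ys):
    """Disjoint: no variable of xs shares its value with one of ys."""
    return all(alpha[x] != alpha[y] for x in xs for y in ys)

def uses_legal(alpha, xs, ys):
    """Uses: every value taken on ys is also taken on xs."""
    return all(any(alpha[x] == alpha[y] for x in xs) for y in ys)
```

Consistency asks whether some instantiation passes the check, which is where the NP-hardness lies; checking a single given instantiation is easy.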
? (?) considered |D(X)| as parameter for NValue, |D(X) ∩ D(Y)| as parameter for Disjoint, and |D(Y)| as parameter for Uses. They showed that consistency checking is fixed-parameter tractable for these constraints under the respective parameterizations, i.e., the problems NValue-Cons(|D(X)|), Disjoint-Cons(|D(X) ∩ D(Y)|), and Uses-Cons(|D(Y)|) are fixed-parameter tractable. We show that it is unlikely that their results can be improved in terms of polynomial kernels.
Theorem 6. The problems NValue-Cons(|D(X)|), Disjoint-Cons(|D(X) ∩ D(Y)|), and Uses-Cons(|D(Y)|) do not admit polynomial kernels unless the Polynomial Hierarchy collapses.
Proof. We devise polynomial parameter reductions from SAT(vars). We use a construction of ? (?). Let F be a CNF formula with clauses C_1, …, C_m over variables x_1, …, x_n. We consider the clauses and variables of F as the variables of a global constraint, with domains D(C_j) = { ℓ : ℓ is a literal of C_j } and D(x_i) = {x_i, ¬x_i}. Now F can be encoded as an NValue constraint with X = {C_1, …, C_m, x_1, …, x_n} and N = n (clearly F is satisfiable if and only if the constraint is consistent). Since |D(X)| ≤ 2n, we have a polynomial parameter reduction from SAT(vars) to NValue-Cons(|D(X)|). Similarly, as observed by ? (?), F can be encoded as a Disjoint constraint with X = {C_1, …, C_m} and Y = {x_1, …, x_n} (|D(X) ∩ D(Y)| ≤ 2n), or as a Uses constraint with X = {x_1, …, x_n} and Y = {C_1, …, C_m} (|D(Y)| ≤ 2n). Since the unparameterized problems are clearly NP-complete, the result follows by Theorem 2. ∎
Further results on kernels for global constraints have been obtained by ? (?).
7 Bayesian Reasoning
Bayesian networks (BNs) have emerged as a general representation scheme for uncertain knowledge (?). A BN models a set of stochastic variables, the independencies among these variables, and a joint probability distribution over these variables. For simplicity we consider the important special case where the stochastic variables are Boolean. The variables and independencies are modelled in the BN by a directed acyclic graph (DAG) D; the joint probability distribution is given by a table for each node x of D, defining the probability of each value of x conditional on every possible instantiation of the parents of x in D. The probability of a complete instantiation of the variables of D is given by the product of the corresponding table entries over all variables x. We consider the problem Positive-BN-Inference, which takes as input a Boolean BN and a variable x, and asks whether Pr(x = 1) > 0. The problem is NP-complete (?) and moves from NP to #P if we ask to compute Pr(x = 1) (?). The problem can be solved in polynomial time if the BN is singly connected, i.e., if there is at most one undirected path between any two variables (?). It is natural to parameterize the problem by the number of variables one must delete in order to make the BN singly connected (the deleted variables form a loop cutset). In fact, Positive-BN-Inference(loop cutset size) is easily seen to be fixed-parameter tractable, as we can determine whether Pr(x = 1) > 0 by checking whether Pr(x = 1 ∧ τ) > 0 for some of the 2^k possible instantiations τ of the k cutset variables, each of which requires the processing of a singly connected network. However, although fixed-parameter tractable, it is unlikely that the problem admits a polynomial kernel.
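The product formula for the joint distribution gives a brute-force baseline, which conditioning on a loop cutset improves upon; a minimal sketch (data layout is ours, enumeration is exponential and for illustration only):

```python
from itertools import product

def prob_true(nodes, parents, cpt, target):
    """Brute-force Pr(target = 1) in a Boolean Bayesian network.

    `nodes` must be topologically ordered; parents[x] is a tuple of
    parent nodes and cpt[x][parent_values] = Pr(x = 1 | parents).
    The probability of each complete instantiation is the product of
    the corresponding table entries, summed over instantiations with
    target = 1.
    """
    total = 0.0
    for values in product([0, 1], repeat=len(nodes)):
        inst = dict(zip(nodes, values))
        p = 1.0
        for x in nodes:
            p1 = cpt[x][tuple(inst[u] for u in parents[x])]
            p *= p1 if inst[x] == 1 else 1.0 - p1
        if inst[target] == 1:
            total += p
    return total
```

The loop-cutset algorithm replaces the outer enumeration over all variables by an enumeration over the cutset variables only, solving a singly connected network for each of the 2^k cases.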
Theorem 7. Positive-BN-Inference(loop cutset size) does not admit a polynomial kernel unless the Polynomial Hierarchy collapses.
Proof (Sketch). We give a polynomial parameter transformation from SAT(vars) and apply Theorem 2. The reduction is based on the reduction from 3SAT given by ? (?). However, we need to allow clauses with an arbitrary number of literals since, as observed above, 3SAT(vars) has a polynomial kernel. Let F be a CNF formula on n variables. We construct a BN B such that for a variable y of B we have Pr(y = 1) > 0 if and only if F is satisfiable. Cooper uses input nodes for representing the variables of F, clause nodes for representing the clauses of F, and conjunction nodes for representing the conjunction of the clauses. We proceed similarly; however, we cannot represent a clause of large size with a single clause node, as the required table would be of exponential size. Therefore we split clauses containing more than 3 literals into several clause nodes, as indicated in Figure 2.
It remains to observe that the set of input nodes is a loop cutset of size n of the constructed BN; hence we have indeed a polynomial parameter transformation from SAT(vars) to Positive-BN-Inference(loop cutset size). The result follows by Theorem 2. ∎
8 Nonmonotonic Reasoning
Logic programming with negation under the stable model semantics is a well-studied form of nonmonotonic reasoning (?; ?). A (normal) logic program P is a finite set of rules r of the form
h ← b_1, …, b_m, not b_{m+1}, …, not b_n,
where h, b_1, …, b_n are atoms; h forms the head and b_1, …, b_m, not b_{m+1}, …, not b_n form the body of r. We write H(r) = {h}, B⁺(r) = {b_1, …, b_m}, and B⁻(r) = {b_{m+1}, …, b_n}. Let M be a finite set of atoms. The GL reduct of a logic program P under M is the program P^M obtained from P by removing all rules r with B⁻(r) ∩ M ≠ ∅, and removing from the body of each remaining rule all literals not b with b ∈ B⁻(r). M is a stable model of P if M is a minimal model of P^M, i.e., if (i) for each rule r of P^M with B⁺(r) ⊆ M we have H(r) ⊆ M, and (ii) there is no proper subset of M with property (i). The undirected dependency graph U(P) of P is formed as follows. We take the atoms of P as vertices and add an edge between two atoms a and b if there is a rule r with a ∈ H(r) and b ∈ B⁺(r), and we add a path a − v_{r,b} − b if a ∈ H(r) and b ∈ B⁻(r) (v_{r,b} is a new vertex of degree 2). The feedback width of P is the size of a smallest set V of atoms such that every cycle of U(P) runs through an atom in V.
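The GL reduct and the stable-model test can be sketched directly (data layout is ours: rules as triples of head, positive body, negative body):

```python
def is_stable(program, M):
    """Check whether the atom set M is a stable model of a normal
    logic program, given as triples (head, pos_body, neg_body).

    Builds the GL reduct of the program under M, computes the least
    model of the resulting negation-free program by fixpoint
    iteration, and compares it with M.
    """
    reduct = [(h, pos) for h, pos, neg in program if not set(neg) & M]
    least, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if set(pos) <= least and h not in least:
                least.add(h)
                changed = True
    return least == set(M)
```

Since the reduct is negation-free, its minimal model is unique and the fixpoint iteration computes it; Stable Model Existence asks whether any M passes this test.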
A fundamental computational problem is Stable Model Existence (SME), which asks whether a given normal logic program has a stable model. This problem is well known to be NP-complete (?). ? (?) showed that SME(feedback width) is fixed-parameter tractable (see ? (?) for generalizations). We show that this result cannot be strengthened with respect to a polynomial kernel.
Theorem 8. SME(feedback width) does not admit a polynomial kernel unless the Polynomial Hierarchy collapses.
Proof (Sketch). We give a polynomial parameter transformation from SAT(vars) to SME(feedback width) using a construction of ? (?). Given a CNF formula F on variables x_1, …, x_n, we construct a logic program P as follows. For each variable x of F we take two atoms x and x̄ and include the rules x ← not x̄ and x̄ ← not x; for each clause c of F we take an atom c and include for each positive literal x of c the rule c ← x, and for each negative literal ¬x of c the rule c ← x̄; finally, we take two atoms p and q and include the rule p ← q, not p and for each clause c of F the rule q ← not c. Now F is satisfiable if and only if P has a stable model (?). It remains to observe that each cycle of U(P) runs through a vertex in {x_1, x̄_1, …, x_n, x̄_n, p}; hence the feedback width of P is at most 2n + 1. Hence we have a polynomial parameter transformation from SAT(vars) to SME(feedback width). The result follows by Theorem 2. ∎
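One way to realize such a reduction in code is sketched below; the atom names and rule encoding (head, positive body, negative body) are ours, with DIMACS-style clauses over variables 1..n:

```python
def sat_to_program(clauses, n):
    """Transform a CNF formula into a normal logic program along the
    lines of the reduction sketched above: F is satisfiable iff the
    produced program has a stable model (under this reconstruction).
    """
    rules = []
    for i in range(1, n + 1):
        rules.append((f"x{i}", [], [f"nx{i}"]))   # x_i  <- not nx_i
        rules.append((f"nx{i}", [], [f"x{i}"]))   # nx_i <- not x_i
    for j, clause in enumerate(clauses):
        for l in clause:
            atom = f"x{l}" if l > 0 else f"nx{-l}"
            rules.append((f"c{j}", [atom], []))   # c_j <- chosen literal
        rules.append(("q", [], [f"c{j}"]))        # q <- not c_j
    rules.append(("p", ["q"], ["p"]))             # p <- q, not p
    return rules
```

The final rule p ← q, not p eliminates every candidate model containing q, i.e., every model in which some clause atom is underivable.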
9 Conclusion
We have established super-polynomial kernel lower bounds for a wide range of important AI problems, providing firm limitations for the power of polynomial-time preprocessing for these problems. We conclude from these results that, in contrast to many optimization problems (see Section 1), typical AI problems do not admit polynomial kernels. Our results suggest the consideration of alternative approaches. For example, it might still be possible that some of the considered problems admit polynomially sized Turing kernels, i.e., a polynomial-time preprocessing to a Boolean combination of a polynomial number of polynomial kernels. In the area of optimization, parameterized problems are known that do not admit polynomial kernels but admit polynomial Turing kernels (?). This suggests a theoretical and empirical study of Turing kernels for the AI problems considered.
References
- [Bessière et al., 2004] Bessière, C.; Hebrard, E.; Hnich, B.; and Walsh, T. 2004. The complexity of global constraints. In McGuinness, D. L., and Ferguson, G., eds., Proceedings of the Nineteenth National Conference on Artificial Intelligence, July 25-29, 2004, San Jose, California, USA, 112–117. AAAI Press / The MIT Press.
- [Bessière et al., 2008] Bessière, C.; Hebrard, E.; Hnich, B.; Kiziltan, Z.; Quimper, C.-G.; and Walsh, T. 2008. The parameterized complexity of global constraints. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008, 235–240. AAAI Press.
- [Bessière et al., 2009] Bessière, C.; Hebrard, E.; Hnich, B.; Kiziltan, Z.; and Walsh, T. 2009. Range and roots: Two common patterns for specifying and propagating counting and occurrence constraints. Artificial Intelligence 173(11):1054–1078.
- [Bessière, 2006] Bessière, C. 2006. Constraint propagation. In Rossi, F.; van Beek, P.; and Walsh, T., eds., Handbook of Constraint Programming. Elsevier. chapter 3.
- [Bidyuk and Dechter, 2007] Bidyuk, B., and Dechter, R. 2007. Cutset sampling for Bayesian networks. J. Artif. Intell. Res. 28:1–48.
- [Bodlaender et al., 2009] Bodlaender, H. L.; Downey, R. G.; Fellows, M. R.; and Hermelin, D. 2009. On problems without polynomial kernels. J. of Computer and System Sciences 75(8):423–434.
- [Bodlaender, Thomassé, and Yeo, 2009] Bodlaender, H. L.; Thomassé, S.; and Yeo, A. 2009. Kernel bounds for disjoint cycles and disjoint paths. In Fiat, A., and Sanders, P., eds., Algorithms - ESA 2009, 17th Annual European Symposium, Copenhagen, Denmark, September 7-9, 2009. Proceedings, volume 5757 of Lecture Notes in Computer Science, 635–646. Springer Verlag.
- [Bolt and van der Gaag, 2006] Bolt, J., and van der Gaag, L. 2006. Preprocessing the MAP problem. In Studený, M., and Vomlel, J., eds., Proceedings of the Third European Workshop on Probabilistic Graphical Models, Prague, 51–58.
- [Cadoli et al., 2002] Cadoli, M.; Donini, F. M.; Liberatore, P.; and Schaerf, M. 2002. Preprocessing of intractable problems. Information and Computation 176(2):89–120.
- [Cook, 1971] Cook, S. A. 1971. The complexity of theorem-proving procedures. In Proc. 3rd Annual Symp. on Theory of Computing, 151–158.
- [Cooper, 1990] Cooper, G. F. 1990. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42(2-3):393–405.
- [Dechter and Pearl, 1989] Dechter, R., and Pearl, J. 1989. Tree clustering for constraint networks. Artificial Intelligence 38(3):353–366.
- [Dechter, 2010] Dechter, R. 2010. Constraint satisfaction. In The CogNet Library: References Collection. MIT Press. http://cognet.mit.edu/library/erefs/mitecs/dechter.html.
- [Downey, Fellows, and Stege, 1999] Downey, R.; Fellows, M. R.; and Stege, U. 1999. Parameterized complexity: A framework for systematically confronting computational intractability. In Contemporary Trends in Discrete Mathematics: From DIMACS and DIMATIA to the Future, volume 49 of AMS-DIMACS, 49–99. American Mathematical Society.
- [Eén and Biere, 2005] Eén, N., and Biere, A. 2005. Effective preprocessing in SAT through variable and clause elimination. In Bacchus, F., and Walsh, T., eds., Theory and Applications of Satisfiability Testing, 8th International Conference, SAT 2005, St. Andrews, UK, June 19-23, 2005, Proceedings, volume 3569 of Lecture Notes in Computer Science, 61–75. Springer Verlag.
- [Fernau et al., 2009] Fernau, H.; Fomin, F. V.; Lokshtanov, D.; Raible, D.; Saurabh, S.; and Villanger, Y. 2009. Kernel(s) for problems with no kernel: On out-trees with many leaves. In Albers, S., and Marion, J.-Y., eds., 26th International Symposium on Theoretical Aspects of Computer Science, STACS 2009, February 26-28, 2009, Freiburg, Germany, Proceedings, volume 3 of LIPIcs, 421–432. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Germany.
- [Fichte and Szeider, 2011] Fichte, J. K., and Szeider, S. 2011. Backdoors to tractable answer-set programming. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI 2011, 863–868. AAAI Press/IJCAI.
- [Fortnow and Santhanam, 2008] Fortnow, L., and Santhanam, R. 2008. Infeasibility of instance compression and succinct PCPs for NP. In Dwork, C., ed., Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, 133–142. ACM.
- [Gaspers and Szeider, 2011] Gaspers, S., and Szeider, S. 2011. Kernels for global constraints. In Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI 2011, 540–545. AAAI Press/IJCAI.
- [Gebser et al., 2008] Gebser, M.; Kaufmann, B.; Neumann, A.; and Schaub, T. 2008. Advanced preprocessing for answer set solving. In Ghallab, M.; Spyropoulos, C. D.; Fakotakis, N.; and Avouris, N. M., eds., ECAI 2008 - 18th European Conference on Artificial Intelligence, Patras, Greece, July 21-25, 2008, Proceedings, volume 178 of Frontiers in Artificial Intelligence and Applications, 15–19. IOS Press.
- [Gelfond and Lifschitz, 1988] Gelfond, M., and Lifschitz, V. 1988. The stable model semantics for logic programming. In Kowalski, R. A., and Bowen, K. A., eds., Logic Programming, Proceedings of the Fifth International Conference and Symposium, Seattle, Washington, August 15-19, 1988, 1070–1080. MIT Press.
- [Gomes et al., 2008] Gomes, C. P.; Kautz, H.; Sabharwal, A.; and Selman, B. 2008. Satisfiability solvers. In Handbook of Knowledge Representation, volume 3 of Foundations of Artificial Intelligence. Elsevier. 89–134.
- [Gottlob, Scarcello, and Sideri, 2002] Gottlob, G.; Scarcello, F.; and Sideri, M. 2002. Fixed-parameter complexity in AI and nonmonotonic reasoning. Artificial Intelligence 138(1-2):55–86.
- [Guo and Niedermeier, 2007] Guo, J., and Niedermeier, R. 2007. Invitation to data reduction and problem kernelization. ACM SIGACT News 38(2):31–45.
- [Kloks, 1994] Kloks, T. 1994. Treewidth: Computations and Approximations. Berlin: Springer Verlag.
- [Marek and Truszczyński, 1991] Marek, W., and Truszczyński, M. 1991. Autoepistemic logic. J. of the ACM 38(3):588–619.
- [Marek and Truszczyński, 1999] Marek, V. W., and Truszczyński, M. 1999. Stable models and an alternative logic programming paradigm. In The Logic Programming Paradigm: a 25-Year Perspective. Springer. 169–181.
- [Niemelä, 1999] Niemelä, I. 1999. Logic programs with stable model semantics as a constraint programming paradigm. Ann. Math. Artif. Intell. 25(3-4):241–273.
- [Nishimura, Ragde, and Szeider, 2004] Nishimura, N.; Ragde, P.; and Szeider, S. 2004. Detecting backdoor sets with respect to Horn and binary clauses. In Proceedings of SAT 2004 (Seventh International Conference on Theory and Applications of Satisfiability Testing, 10–13 May, 2004, Vancouver, BC, Canada), 96–103.
- [Papadimitriou, 1994] Papadimitriou, C. H. 1994. Computational Complexity. Addison-Wesley.
- [Pearl, 1988] Pearl, J. 1988. Probabilistic reasoning in intelligent systems: networks of plausible inference. The Morgan Kaufmann Series in Representation and Reasoning. San Mateo, CA: Morgan Kaufmann.
- [Pearl, 2010] Pearl, J. 2010. Bayesian networks. In The CogNet Library: References Collection. MIT Press. http://cognet.mit.edu/library/erefs/mitecs/pearl.html.
- [Roth, 1996] Roth, D. 1996. On the hardness of approximate reasoning. Artificial Intelligence 82(1-2):273–302.
- [Samer and Szeider, 2010] Samer, M., and Szeider, S. 2010. Constraint satisfaction with bounded treewidth revisited. J. of Computer and System Sciences 76(2):103–114.
- [van Hoeve and Katriel, 2006] van Hoeve, W.-J., and Katriel, I. 2006. Global constraints. In Rossi, F.; van Beek, P.; and Walsh, T., eds., Handbook of Constraint Programming. Elsevier. chapter 6.
- [Williams, Gomes, and Selman, 2003] Williams, R.; Gomes, C.; and Selman, B. 2003. On the connections between backdoors, restarts, and heavy-tailedness in combinatorial search. In Informal Proc. of the Sixth International Conference on Theory and Applications of Satisfiability Testing, S. Margherita Ligure - Portofino, Italy, May 5-8, 2003 (SAT 2003), 222–230.