A Parameterized Complexity View on Description Logic Reasoning
Abstract
Description logics are knowledge representation languages that have been designed to strike a balance between expressivity and computational tractability. Many different description logics have been developed, and numerous computational problems for these logics have been studied for their computational complexity. However, essentially all complexity analyses of reasoning problems for description logics use the one-dimensional framework of classical complexity theory. The multidimensional framework of parameterized complexity theory is able to provide a much more detailed image of the complexity of reasoning problems.
In this paper we argue that the framework of parameterized complexity has a lot to offer for the complexity analysis of description logic reasoning problems—when one takes a progressive and forward-looking view on parameterized complexity tools. We substantiate our argument by means of three case studies. The first case study is about the problem of concept satisfiability for the logic ALC with respect to nearly acyclic TBoxes. The second case study concerns concept satisfiability for ALC concepts parameterized by the number of occurrences of union operators and the number of occurrences of full existential quantification. The third case study offers a critical look at data complexity results from a parameterized complexity point of view. These three case studies are representative of the wide range of uses for parameterized complexity methods for description logic problems.
Ronald de Haan Institute for Logic, Language and Computation University of Amsterdam me@ronalddehaan.eu
Copyright © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Introduction
Description logics have been designed as knowledge representation formalisms that have good computational properties (?). Correspondingly, there has been a lot of research into the computational complexity of reasoning problems for different description logics. This research has, however, focused entirely on the framework of classical complexity theory to study the computational complexity (see, e.g., ? ?; ? ?).
The more fine-grained and multidimensional framework of parameterized complexity theory has hardly been applied to study the complexity of reasoning problems for description logics. Only a few works have used the framework of parameterized complexity to study description logic problems (?; ?; ?; ?; ?; ?; ?). Moreover, these works all use the framework in a traditional way, focusing purely on one commonly used notion of tractability (namely, that of fixed-parameter tractability).
Parameterized complexity is designed to address the downside of classical complexity theory that it is largely ignorant of structural properties of problem inputs that can potentially be exploited algorithmically. It does so by distinguishing a problem parameter k, in addition to the input size n, and measuring running times in terms of both of these. The parameter can be used to measure various types of structure that are present in the problem input. Parameterized complexity theory has grown into a large and thriving research community over the last few decades (see, e.g., ? ?; ? ?; ? ?). Most results and techniques in parameterized complexity theory revolve around the notion of fixed-parameter tractability—a relaxation of polynomial-time solvability based on running times of the form f(k)·n^{O(1)}, for some computable function f (possibly exponential or worse).
Because reasoning problems related to description logics are typically of high complexity (e.g., complete for classes like PSPACE and EXPTIME), it is unsurprising that one would need very restrictive parameters to obtain fixed-parameter tractability results for such problems. It has been proposed recently that the investigation of problems that are of higher complexity can also benefit from the parameterized complexity point of view (?; ?; ?; ?; ?)—using tools and methods that overstep the traditional focus on fixed-parameter tractability as the only type of positive result.
In this paper, we show how the complexity study of description logic problems can benefit from using the framework of parameterized complexity and all the tools and methods that it offers. We do so using three case studies: (1) parameterized results for concept satisfiability for ALC with respect to nearly acyclic TBoxes, (2) parameterized results for concept satisfiability for fragments of ALC that are close to ALE, ALU, and AL, respectively, and (3) parameterized results addressing the notion of data complexity for instance checking and conjunctive query entailment. The complexity results that we obtain are summarized in Tables 1, 3 and 4—at the end of the sections where we present the case studies.
Outline.
We begin by giving an overview of the theory of parameterized complexity—including commonly used (and more traditional) concepts and tools, as well as more progressive notions. Then we present our three case studies in three separate sections, before sketching directions for future research and concluding.
Parameterized Complexity Theory
We begin by introducing relevant concepts from the theory of parameterized complexity. For more details, we refer to textbooks on the topic (?; ?). We introduce both concepts that are used commonly in parameterized complexity analyses in the literature and less commonly used concepts that play a role in this paper.
FPT and XP.
The core notion in parameterized complexity is that of fixed-parameter tractability, which is a relaxation of the traditional notion of polynomial-time solvability. Fixed-parameter tractability is a property of parameterized problems. A parameterized problem Q is a subset of Σ* × ℕ, for some finite alphabet Σ. An instance of a parameterized problem is a pair (x, k) where x is the main part of the instance, and k is the parameter. Intuitively, the parameter captures some type of structure of the instance that could potentially be exploited algorithmically—the smaller the value of the parameter k, the more structure there is in the instance. (When considering multiple parameters, we take their sum as a single parameter.) A parameterized problem Q is fixed-parameter tractable if instances (x, k) of the problem can be solved by a deterministic algorithm that runs in time f(k)·|x|^{O(1)}, where f is a computable function of k. Algorithms running within such time bounds are called fpt-algorithms. FPT denotes the class of all parameterized problems that are fixed-parameter tractable.
Intuitively, the idea behind fixed-parameter tractability is that whenever the parameter value is small, the overall running time is reasonably small—assuming that the constant hidden in the O-notation is small. In fact, for every fixed parameter value k, the running time of an fpt-algorithm is polynomial, where the order of the polynomial is constant (it does not depend on k).
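Fixed-parameter tractability is easiest to see in a concrete algorithm. The following sketch (a standard textbook illustration, not taken from this paper; the function name is ours) decides Vertex Cover by branching on an endpoint of an uncovered edge. The search tree has depth at most k, giving a running time of the form 2^k·|x|^{O(1)}—an fpt-algorithm:

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of
    size at most k, by branching on an uncovered edge.  The search tree
    has depth at most k, so the running time is O(2^k * |edges|)."""
    if not edges:          # no edges left: the empty set covers them
        return True
    if k == 0:             # edges remain, but no budget left
        return False
    u, v = edges[0]        # one endpoint of this edge must be in the cover
    rest_u = [e for e in edges if u not in e]
    rest_v = [e for e in edges if v not in e]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)
```

Note that the exponential factor 2^k depends only on the parameter, so the algorithm remains usable on large graphs as long as k is small.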
A related parameterized complexity class is XP, which consists of all parameterized problems for which instances (x, k) can be solved in time |x|^{f(k)}, for some computable function f. Algorithms running within such time bounds are called xp-algorithms. That is, a parameterized problem Q is in XP if there is an algorithm that solves Q in polynomial time for each fixed value of the parameter—where the order of the polynomial may grow with k. It holds that FPT ⊆ XP. Intuitively, if a parameterized problem is in XP but not in FPT, it is not likely to be efficiently solvable in practice. Suppose, for example, that a problem is solvable in time n^k in the worst case. Then already for moderate values of n and k, it could take ages to solve this problem (see, e.g., ? ?).
Completeness Theory.
Parameterized complexity also offers a completeness theory, similar to the theory of NP-completeness, that provides a way to obtain evidence that a parameterized problem is not fixed-parameter tractable. Hardness for parameterized complexity classes is based on fpt-reductions, which are many-one reductions where the parameter of one problem maps into the parameter of the other. More specifically, a parameterized problem Q is fpt-reducible to another parameterized problem Q' if there is a mapping R that maps instances (x, k) of Q to instances (x', k') of Q' such that (i) (x, k) ∈ Q if and only if (x', k') ∈ Q', (ii) k' ≤ g(k) for a computable function g, and (iii) R can be computed in time f(k)·|x|^c for a computable function f and a constant c. A problem Q is hard for a parameterized complexity class K if every problem in K can be fpt-reduced to Q. A problem Q is complete for a parameterized complexity class K if Q ∈ K and Q is K-hard.
Central to the completeness theory are the classes of the Weft hierarchy. We will not define the classes W[t] in detail (for details, see, e.g., ? ?). It suffices to note that it is widely believed that FPT ≠ W[1].¹ Thus, showing that a problem Q is W[1]-hard gives evidence that Q is not fixed-parameter tractable.

¹In fact, it holds that FPT ≠ W[1], assuming that n-variable 3SAT cannot be solved in subexponential time, that is, in time 2^{o(n)} (?; ?; ?).
An example of a W[1]-complete parameterized problem is Clique (?; ?). Instances for this problem consist of (G, k), where G is an undirected graph, and k ∈ ℕ. The parameter is k, and the question is to decide whether G contains a clique of size k.
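For contrast with fpt-algorithms, Clique has an obvious xp-algorithm that simply checks all size-k vertex subsets. The sketch below (our illustration, not from the paper) runs in time roughly n^k, so the order of the polynomial grows with the parameter:

```python
from itertools import combinations

def has_clique(vertices, edges, k):
    """Naive xp-algorithm for Clique: check all C(n, k) vertex subsets.
    The running time is roughly n^k -- polynomial for each fixed k, but
    the order of the polynomial grows with the parameter k."""
    adj = {frozenset(e) for e in edges}
    return any(
        all(frozenset(pair) in adj for pair in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )
```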
ParaK.
For each classical complexity class K, we can construct a parameterized analogue paraK (?). Let K be a classical complexity class, e.g., NP. The parameterized complexity class paraK is then defined as the class of all parameterized problems Q for which there exist a computable function f and a problem Q' in K, such that for all instances (x, k) it holds that (x, k) ∈ Q if and only if (x, f(k)) ∈ Q'. Intuitively, the class paraK consists of all problems that are in K after a precomputation that only involves the parameter. A common example of such parameterized analogues of classical complexity classes is the parameterized complexity class paraNP. Another example is paraPSPACE.
If (the unparameterized variant of) a parameterized problem Q is in the class K, then Q ∈ paraK. Also, if Q is already K-hard for a finite set of parameter values, then Q is paraK-hard (?).
Using the classes paraK and the notion of fpt-reductions, one can also provide evidence that certain parameterized problems are not fixed-parameter tractable. If a paraK-hard parameterized problem is fixed-parameter tractable, then P = K. For example, a paraNP-hard parameterized problem is not fixed-parameter tractable, unless P = NP.
ParaNP and paracoNP.
The classes paraNP and paracoNP are parameterized analogues of the classes NP and coNP. The class paraNP can alternatively be defined as the class of parameterized problems that are solvable in fpt-time by a nondeterministic algorithm (?). Similarly, paracoNP can be defined using fpt-algorithms using universal nondeterminism—i.e., nondeterministic fpt-algorithms that reject the input if at least one sequence of nondeterministic choices leads the algorithm to reject. It holds that FPT ⊆ paraNP ∩ paracoNP.
Another alternative definition of the class paraNP—that can be motivated by the amazing practical performance of SAT solving algorithms (see, e.g., ? ?)—is using the following parameterized variant of the propositional satisfiability problem (?; ?; ?; ?). Let SAT_1 be the problem SAT with the constant parameter value 1. The class paraNP consists of all problems that can be fpt-reduced to SAT_1. In other words, paraNP can be seen as the class of all parameterized problems that can be solved by (1) a fixed-parameter tractable encoding into SAT, and (2) using a SAT solving algorithm to then decide the problem. The class paracoNP can be characterized in a similar way, using UNSAT instead of SAT. Consequently, problems in paracoNP can also be solved using the combination of an fpt-time encoding and a SAT solving algorithm.
ParaPSPACE.
The class paraPSPACE can alternatively be defined as the class of all parameterized problems Q for which there exists a (deterministic or nondeterministic) algorithm deciding whether (x, k) ∈ Q using space f(k)·|x|^{O(1)}, for some computable function f. It holds that paraNP ∪ paracoNP ⊆ paraPSPACE.
Another alternative characterization of paraPSPACE is using a parameterized variant of TQBF—the problem of deciding whether a given quantified Boolean formula is true. Let TQBF_1 be the problem TQBF with the constant parameter value 1. The class paraPSPACE consists of all problems that can be fpt-reduced to TQBF_1. In other words, paraPSPACE can be seen as the class of all parameterized problems that can be solved by (1) an fpt-time encoding into TQBF, and (2) using a TQBF solver to then decide the problem (see, e.g., ? ?).
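To make the TQBF characterization concrete, the following sketch (our illustration; all names are ours) evaluates a quantified Boolean formula by recursing over the quantifier prefix. Existential quantifiers behave like existential nondeterminism and universal quantifiers like the universal nondeterminism discussed below, and the recursion keeps only one branch in memory at a time, which is why TQBF is solvable in polynomial space:

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a quantified Boolean formula.  `prefix` is a list of
    (quantifier, variable) pairs, with quantifier 'E' (existential) or
    'A' (universal); `matrix` maps a complete assignment (a dict) to a
    Boolean.  Only one branch of the quantifier tree is kept in memory
    at any time, so the evaluation uses polynomial space."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    (q, x), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, {**assignment, x: val})
                for val in (False, True))
    return any(branches) if q == 'E' else all(branches)
```

For example, ∀x∃y (x ↔ y) is true, while ∃x∀y (x ↔ y) is false.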
Yet another characterization of paraPSPACE uses alternating Turing machines (ATMs). An ATM is a nondeterministic Turing machine where the states are partitioned into existential and universal states (see, e.g., ? ?, Appendix A.1). A configuration of the ATM with an existential state is accepting if at least one successor configuration is accepting, and a configuration with a universal state is accepting if all successor configurations are accepting. Intuitively, an ATM can alternate between existential and universal nondeterminism. The class paraPSPACE consists of all parameterized problems that can be decided by an ATM in fixed-parameter tractable time.
ParaEXPTIME.
The class paraEXPTIME can be defined as the class of all parameterized problems Q for which there exists a deterministic algorithm deciding whether (x, k) ∈ Q in time f(k)·2^{|x|^{O(1)}}, for some computable function f. It holds that paraPSPACE ⊆ paraEXPTIME and that XP ⊆ paraEXPTIME.
For an overview of all parameterized complexity classes that feature in this paper—and their relation—see Figure 1.
Case Study 1: Concept Satisfiability for ALC with respect to Nearly Acyclic TBoxes
In this section, we provide our first case study to illustrate how parameterized complexity can be used to obtain a more detailed image of the computational complexity of description logic reasoning. In particular, we consider the problem of concept satisfiability for the description logic ALC with respect to general TBoxes. This problem is EXPTIME-complete in general. We consider two parameters for this problem. One of these parameters does not help to reduce the complexity of the problem—that is, for this parameter the problem is paraEXPTIME-complete. The other of the two parameters does help to reduce the complexity of the problem—that is, for this parameter the problem is paraPSPACE-complete.
We begin by revisiting the description logic ALC, the problem of concept satisfiability with respect to acyclic and general TBoxes, and classical complexity results for this problem. We then discuss our parameterized complexity results, and how to interpret these results.
The Description Logic ALC
Let N_C, N_R and N_I be sets of atomic concepts, roles, and individuals, respectively. The triple (N_C, N_R, N_I) is called the signature. (We will often omit the signature if this is clear from the context.)
Concepts are defined by the following grammar in Backus–Naur form, for A ∈ N_C and r ∈ N_R:

C, D ::= A | ⊤ | ⊥ | ¬C | C ⊓ D | C ⊔ D | ∃r.C | ∀r.C
An interpretation I = (Δ^I, ·^I) over a signature (N_C, N_R, N_I) consists of a nonempty set Δ^I called the domain, and an interpretation function ·^I that maps (1) every individual a ∈ N_I to an element a^I ∈ Δ^I, (2) every atomic concept A ∈ N_C to a subset A^I ⊆ Δ^I, and (3) every role r ∈ N_R to a subset r^I ⊆ Δ^I × Δ^I. The interpretation function is extended to arbitrary concepts such that:
⊤^I = Δ^I; ⊥^I = ∅;
(¬C)^I = Δ^I \ C^I;
(C ⊓ D)^I = C^I ∩ D^I;
(C ⊔ D)^I = C^I ∪ D^I;
(∃r.C)^I = { d ∈ Δ^I : there exists some e ∈ Δ^I such that (d, e) ∈ r^I and e ∈ C^I }; and
(∀r.C)^I = { d ∈ Δ^I : for each e ∈ Δ^I such that (d, e) ∈ r^I it holds that e ∈ C^I }.
A general concept inclusion (GCI) is a statement of the form C ⊑ D, where C, D are concepts. We write I ⊨ C ⊑ D (and say that I satisfies C ⊑ D) if C^I ⊆ D^I. A (general) TBox is a finite set of GCIs. A concept definition is a statement of the form A ≡ C, where A is an atomic concept, and C is a concept. We write I ⊨ A ≡ C (and say that I satisfies A ≡ C) if A^I = C^I. An acyclic TBox T is a finite set of concept definitions such that (1) T does not contain two different concept definitions A ≡ C and A ≡ C' for any atomic concept A, and (2) T contains no (direct or indirect) cyclic definitions—that is, the graph with vertex set N_C that contains an edge (A, A') if and only if T contains a concept definition A ≡ C where A' occurs in C is acyclic. An interpretation I satisfies a (general or acyclic) TBox T if I satisfies all GCIs or concept definitions in T.
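Condition (2) amounts to cycle detection on the dependency graph of the concept definitions. The sketch below (our illustration; the representation of an acyclic TBox candidate as a dict from each defined atomic concept to the set of atomic concepts occurring in its definition is an assumption) flags direct and indirect cyclic definitions:

```python
def is_acyclic_tbox(defs):
    """Check condition (2) for an acyclic TBox.  `defs` maps each
    defined atomic concept A to the set of atomic concepts occurring in
    its defining concept (for the definition A = C); condition (1),
    uniqueness of definitions, is enforced by the dict itself.  Cyclic
    (direct or indirect) definitions are detected by depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {a: WHITE for a in defs}

    def visit(a):
        colour[a] = GREY
        for b in defs[a]:
            status = colour.get(b, BLACK)  # undefined concepts are leaves
            if status == GREY:             # back edge: cyclic definition
                return False
            if status == WHITE and not visit(b):
                return False
        colour[a] = BLACK
        return True

    return all(visit(a) for a in defs if colour[a] == WHITE)
```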
A concept assertion is a statement of the form C(a), where a ∈ N_I and C is a concept. A role assertion is a statement of the form r(a, b), where a, b ∈ N_I and r ∈ N_R. We write I ⊨ C(a) (and say that I satisfies C(a)) if a^I ∈ C^I. Moreover, we write I ⊨ r(a, b) (and say that I satisfies r(a, b)) if (a^I, b^I) ∈ r^I. An ABox is a finite set of concept and role assertions.
Classical Complexity Results
An important reasoning problem for description logics is the problem of concept satisfiability. In this decision problem, the input consists of a concept C and a TBox T, and the question is whether C is satisfiable with respect to T—that is, whether there exists an interpretation I such that C^I ≠ ∅ and I ⊨ T. The problem of concept satisfiability is PSPACE-complete, both for the case where T is empty and for the case where T is an acyclic TBox. For the case where T is a general TBox, the problem is EXPTIME-complete.
Proposition 1 (? ?; ? ?).
Concept satisfiability for the logic ALC with respect to general TBoxes is EXPTIME-complete.
Proposition 2 (? ?; ? ?).
Concept satisfiability for the logic ALC with respect to acyclic TBoxes is PSPACE-complete.
Parameterized Complexity Results
We consider a parameterized variant of the problem of concept satisfiability for ALC where the parameter captures the distance to acyclicity for the given TBox. That is, for this parameterized problem, the input consists of a concept C, an acyclic TBox T_1, and a general TBox T_2. The parameter is |T_2|, and the question is whether C is satisfiable with respect to T_1 ∪ T_2—that is, whether there exists an interpretation I such that C^I ≠ ∅, I ⊨ T_1, and I ⊨ T_2.
Parameterizing by the size of T_2 does not offer an improvement in the complexity of the problem—that is, this parameter leads to paraEXPTIME-completeness.
Theorem 3.
Concept satisfiability for ALC with respect to both an acyclic TBox T_1 and a general TBox T_2 is paraEXPTIME-complete when parameterized by |T_2|.
Proof.
Membership in paraEXPTIME follows from the fact that the unparameterized version of the problem is in EXPTIME. To show paraEXPTIME-hardness, it suffices to show that the problem is already EXPTIME-hard for a constant value of the parameter (?). We do so by giving a reduction from the problem of concept satisfiability for ALC with respect to general TBoxes.
Let C be a concept and let T be a general TBox. We construct an acyclic TBox T_1 and a general TBox T_2 such that C is satisfiable with respect to T if and only if it is satisfiable with respect to T_1 ∪ T_2. Let B be a fresh atomic concept. We let T_1 = { B ≡ ⊓_{(D ⊑ E) ∈ T} (¬D ⊔ E) }, and we let T_2 = { ⊤ ⊑ B }. It is straightforward to verify that C is satisfiable with respect to T if and only if it is satisfiable with respect to T_1 ∪ T_2. Moreover, |T_2| is constant. From this, we can conclude that the problem is paraEXPTIME-hard. ∎
Intuitively, restricting only the number (and size) of the GCIs in the general TBox T_2 does not restrict the problem, as we can encode a general TBox of arbitrary size in the acyclic TBox (together with a small general TBox). If we restrict the number of concepts impacted by the general TBox, however, we do get an improvement in the complexity of the problem.
Let T_1 be an acyclic TBox and let T_2 be a general TBox. We define the set of concepts impacted by T_2 (w.r.t. T_1) as the smallest set S of concepts that is closed under (syntactic) subconcepts and that satisfies that (A) whenever (C ⊑ D) ∈ T_2, then C, D ∈ S, and (B) whenever (A ≡ C) ∈ T_1 and A ∈ S, then C ∈ S. If we parameterize the problem of concept satisfiability with respect to both an acyclic TBox and a general TBox by the number of concepts impacted by T_2, the complexity of the problem jumps down to paraPSPACE.
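The set of impacted concepts can be computed by a straightforward fixpoint iteration. The sketch below (our illustration; the encoding of concepts as nested tuples, of T_1 as a dict from atomic concept names to concepts, and of T_2 as a list of pairs of concepts, are assumptions) closes the set under subconcepts and under rules (A) and (B):

```python
def subconcepts(c):
    """Yield all (syntactic) subconcepts of a concept given as nested
    tuples, e.g. ('and', C, D), ('exists', r, C), ('atom', 'A')."""
    yield c
    if c[0] in ('and', 'or'):
        yield from subconcepts(c[1])
        yield from subconcepts(c[2])
    elif c[0] in ('exists', 'forall'):
        yield from subconcepts(c[2])
    elif c[0] == 'not':
        yield from subconcepts(c[1])

def impacted(t1, t2):
    """Compute the smallest set of concepts closed under subconcepts
    that contains (A) both sides of every GCI in the general TBox t2
    (a list of pairs of concepts), and (B) the definition of every
    defined atomic concept already in the set (t1 maps atomic concept
    names to their defining concepts)."""
    result = set()
    todo = [c for gci in t2 for c in gci]
    while todo:
        for d in subconcepts(todo.pop()):
            if d not in result:
                result.add(d)
                if d[0] == 'atom' and d[1] in t1:  # rule (B): unfold definition
                    todo.append(t1[d[1]])
    return result
```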
Theorem 4.
Concept satisfiability for ALC with respect to both an acyclic TBox T_1 and a general TBox T_2 is paraPSPACE-complete when parameterized by the number of concepts that are impacted by T_2 (w.r.t. T_1).
Proof.
Hardness for paraPSPACE follows directly from the fact that the problem is already PSPACE-hard when T_2 is empty (Proposition 2)—and thus the number of concepts impacted by T_2 is 0. We show membership in paraPSPACE by exhibiting a nondeterministic algorithm to solve the problem that runs in space f(k)·|x|^{O(1)}, for some computable function f. Let C be a concept, let T_1 be an acyclic TBox, and let T_2 be a general TBox. We may assume without loss of generality that all concepts occurring in C, T_1 and T_2 are in negation normal form—that is, negations occur only directly in front of atomic concepts. If this were not the case, we could straightforwardly transform the input into one that does have this property in polynomial time, by introducing new atomic concepts for negated concepts.
The algorithm that we use is the usual tableau algorithm (with static blocking) for ALC—see, e.g., (?). That is, it aims to construct a tree that can be used to construct an interpretation satisfying C, T_1 and T_2. For each node in the tree, it first exhaustively applies the rules for the ⊓ and ⊔ operators, the rules for the concept definitions in T_1, and the rules for the GCIs in T_2 (for each GCI D ⊑ E adding the concept ¬D ⊔ E to a node), before applying the rules for the ∃ and ∀ operators. Moreover, it applies all rules exhaustively to one node of the tree before moving to another node. Additionally, the algorithm uses the following usual blocking condition (subset blocking): the rule for the ∃ operator cannot be applied to a node that has a predecessor in the tree that is labelled with all concepts that the node is labelled with (and possibly more). It is straightforward to verify that this tableau algorithm correctly decides the problem.
We argue that this algorithm requires space f(k)·|x|^{O(1)}, where k is the number of concepts impacted by T_2 and |x| denotes the input size. It is straightforward to verify that there is a polynomial p such that each node in the tree constructed by the tableau algorithm that is more than p(|x|) steps away from the root of the tree is only labelled with concepts that are impacted by T_2. Since there are only k concepts that are impacted by T_2, we know that in each branch of the tree, the blocking condition applies at depth at most p(|x|) + 2^k, and thus that each branch is of length at most p(|x|) + 2^k. From this, it follows that this algorithm requires space f(k)·|x|^{O(1)}, and thus that the problem is in paraPSPACE. ∎
Interpretation of the Results
The results in this section are summarized in Table 1. The parameterized results of Theorems 3 and 4 show that parameterized complexity theory can make a distinction between the complexity of the two variants of the problem that classical complexity theory is blind to. Classically, both variants are EXPTIME-complete, but one parameter can be used to get a polynomial-space algorithm (when an additional space factor depending only on the parameter is allowed), whereas the other parameter requires exponential space, no matter what additional factor is allowed. The paraPSPACE result of Theorem 4 also yields an algorithm solving the problem using (1) an fpt-time encoding into the problem TQBF, and then (2) using a TQBF solver to decide the problem (see, e.g., ? ?).
parameter | complexity of concept satisfiability w.r.t. T_1 and T_2
– | EXPTIME-c (Proposition 1)
|T_2| | paraEXPTIME-c (Theorem 3)
# of concepts impacted by T_2 (w.r.t. T_1) | paraPSPACE-c (Theorem 4)

Table 1: The complexity of concept satisfiability for ALC with respect to an acyclic TBox T_1 and a general TBox T_2, for various parameters.
Case Study 2: Concept Satisfiability for ALE, ALU, and AL
In this section, we provide our second case study to illustrate how parameterized complexity can be used to obtain a more detailed image of the computational complexity of description logic reasoning. In particular, we consider the problem of concept satisfiability for the description logic ALC. This problem is PSPACE-complete in general. We consider several parameters that measure the distance to the logics ALE and ALU. The logics ALE and ALU are obtained from ALC by disallowing concept union and full existential quantification, respectively. The parameters that we consider both help to reduce the complexity of the problem. One parameter renders the problem paracoNP-complete. The other parameter renders the problem paraNP-complete. The combination of both parameters renders the problem fixed-parameter tractable.
We begin by revisiting the description logics ALE and ALU (and their intersection AL), and classical complexity results for the problem of concept satisfiability for these logics. We then discuss our parameterized complexity results, and how to interpret these results.
The Description Logics ALE, ALU and AL
In order to obtain the description logics ALE, ALU and AL, we consider a (syntactic) variant of the logic ALC where all concepts are in negation normal form. That is, negations occur only immediately in front of atomic concepts. Put differently, we consider concepts that are defined as follows, for A ∈ N_C and r ∈ N_R:

C, D ::= A | ¬A | ⊤ | ⊥ | C ⊓ D | C ⊔ D | ∃r.C | ∀r.C
One can transform any ALC concept into negation normal form in linear time (see, e.g., ? ?). The semantics of this variant of ALC is defined exactly as described in the previous section. Throughout this section, we will only consider this variant of ALC.
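The negation-normal-form transformation can be sketched as follows (our illustration; the encoding of concepts as nested tuples such as ('and', C, D) or ('exists', r, C) is an assumption). It pushes negations inward in linear time using the standard dualities ¬(C ⊓ D) = ¬C ⊔ ¬D, ¬∃r.C = ∀r.¬C, and so on:

```python
def nnf(c):
    """Push negations inward so that they occur only in front of
    atomic concepts, using the usual dualities (linear time)."""
    op = c[0]
    if op in ('atom', 'top', 'bot'):
        return c
    if op in ('and', 'or'):
        return (op, nnf(c[1]), nnf(c[2]))
    if op in ('exists', 'forall'):
        return (op, c[1], nnf(c[2]))
    # op == 'not': dispatch on the shape of the negated concept
    d = c[1]
    if d[0] == 'atom':
        return c                            # already in NNF
    if d[0] == 'top':
        return ('bot',)
    if d[0] == 'bot':
        return ('top',)
    if d[0] == 'not':                       # double negation
        return nnf(d[1])
    if d[0] == 'and':                       # de Morgan
        return ('or', nnf(('not', d[1])), nnf(('not', d[2])))
    if d[0] == 'or':
        return ('and', nnf(('not', d[1])), nnf(('not', d[2])))
    if d[0] == 'exists':                    # not-exists r.C = forall r.(not C)
        return ('forall', d[1], nnf(('not', d[2])))
    if d[0] == 'forall':                    # not-forall r.C = exists r.(not C)
        return ('exists', d[1], nnf(('not', d[2])))
```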
The description logic ALE is obtained from the logic ALC by forbidding any occurrence of the operator ⊔. The description logic ALU is obtained from the logic ALC by requiring that for every occurrence ∃r.C of the existential quantifier it holds that C = ⊤; that is, only limited existential quantification is allowed. The description logic AL contains those concepts that are concepts in both ALE and ALU—that is, AL is the intersection of ALE and ALU.
Thus, the logics ALE and ALU are obtained from ALC by means of two orthogonal restrictions: disallowing concept union and replacing full existential quantification by limited existential quantification, respectively. The logic AL is obtained from ALC by using both of these restrictions.
Classical Complexity Results
In this section, we consider the problem of concept satisfiability with respect to empty TBoxes. In this decision problem, the input consists of a concept C, and the question is whether C is satisfiable—that is, whether there exists an interpretation I such that C^I ≠ ∅. This problem is PSPACE-complete for ALC, coNP-complete for ALE, NP-complete for ALU, and polynomial-time solvable for AL.
Proposition 5 (? ?).
Concept satisfiability for the logic ALC is PSPACE-complete.
Proposition 6 (? ?).
Concept satisfiability for the logic ALE is coNP-complete.
Proposition 7 (? ?).
Concept satisfiability for the logic ALU is NP-complete.
Proposition 8 (? ?).
Concept satisfiability for the logic AL is polynomial-time solvable.
The ⊓-rule
Condition: A contains (C ⊓ D)(a), but it does not contain both C(a) and D(a).
Action: A' = A ∪ {C(a), D(a)}.

The ⊔-rule
Condition: A contains (C ⊔ D)(a), but it contains neither C(a) nor D(a).
Action: A' = A ∪ {C(a)}, A'' = A ∪ {D(a)}.

The ∃-rule
Condition: A contains (∃r.C)(a), but there is no individual b such that r(a, b) and C(b) are in A.
Action: A' = A ∪ {r(a, b), C(b)}, where b is an arbitrary individual not occurring in A.

The ∀-rule
Condition: A contains (∀r.C)(a) and r(a, b), but it does not contain C(b).
Action: A' = A ∪ {C(b)}.

Table 2: Transformation rules of the tableau algorithm.
Parameterized Complexity Results
In order to conveniently describe the parameterized complexity results that we will establish in this section, we first describe an algorithm for deciding concept satisfiability for ALC in polynomial space (see, e.g., ? ?, Chapter 2). To use this algorithm to prove the parameterized complexity results in this section, we describe a variant of the algorithm that can be implemented by a polynomial-time alternating Turing machine—i.e., a nondeterministic Turing machine that can alternate between existential and universal nondeterminism.
The algorithm uses ABoxes as data structures, and works by extending these ABoxes by means of several transformation rules. These rules are described in Table 2—however, not all rules are applied in the same fashion. The ⊓-rule and the ∀-rule are used as deterministic rules, and are applied greedily whenever they apply. The ⊔-rule and the ∃-rule are nondeterministic rules, but are used in different fashions. The ⊔-rule transforms an ABox A into one of two different ABoxes A' or A'' nondeterministically. The ⊔-rule is implemented using existential nondeterminism—i.e., the algorithm succeeds if at least one choice of A' or A'' ultimately leads the algorithm to accept. (For more details on existential and universal nondeterminism and alternating Turing machines, see, e.g., ? ?, Appendix A.1.) The ∃-rule, on the other hand, transforms an ABox A into a unique next ABox A', but it is a nonmonotonic rule that can be applied in several ways—the condition can be instantiated in different ways, and these instantiations are not all possible anymore after having applied the rule. The ∃-rule is implemented using universal nondeterminism—i.e., the algorithm succeeds if all ways of instantiating the condition of the rule (and applying the rule accordingly) ultimately lead the algorithm to accept.
The tableau algorithm works as follows. Let C be an ALC concept for which we want to decide satisfiability. We construct an initial ABox A_0 = { C(a) }, where a is an arbitrary individual. We proceed in two alternating phases: (I) and (II)—starting with phase (I).
In phase (I), we apply the deterministic rules (the ⊓-rule and the ∀-rule) and the nondeterministic ⊔-rule exhaustively, until none of these rules is applicable anymore. For the ⊔-rule we use existential nondeterminism to choose which of A' and A'' to use. When none of these rules is applicable anymore, we proceed to phase (II). In phase (II), we apply the ∃-rule once, using universal nondeterminism to choose how to instantiate the condition (and we apply the rule accordingly). Then, we go back to phase (I).
Throughout the execution of the algorithm, there is always a single current ABox A. Whenever it holds that {B(a), ¬B(a)} ⊆ A for some atomic concept B and some individual a, the algorithm rejects. If at some point no rule is applicable anymore—that is, if at some point we are in phase (II) and the ∃-rule is not applicable—the algorithm accepts.
This algorithm essentially works the same way as known tableau algorithms for concept satisfiability (see, e.g., ? ?, Chapter 2). The only difference is that in the algorithm described above the implementation of the ⊔-rule using existential nondeterminism and the implementation of the ∃-rule using universal nondeterminism is built in. In the literature, descriptions of tableau algorithms typically leave freedom for different implementations of the way in which the search tree is traversed. One can think of the algorithm described above as traversing a search tree that is generated by the different (existential and universal) nondeterministic choices that are made in the execution of the algorithm. This search tree is equivalent to the search tree of the usual tableau algorithm for concept satisfiability. Thus, we get that the algorithm is correct. In fact, this algorithm is a reformulation of the standard algorithm known from the literature (?).
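For intuition, here is a drastically simplified satisfiability checker for NNF concepts without a TBox (our illustration, not the paper's algorithm; the tuple encoding of concepts is an assumption). The existential nondeterminism of the ⊔-rule becomes backtracking, and the universal nondeterminism of the ∃-rule becomes a loop that checks each role successor separately:

```python
def alc_sat(concepts):
    """Decide satisfiability of a set of NNF ALC concepts (no TBox),
    mirroring the two-phase tableau: saturate with the deterministic
    'and'-rule, backtrack over 'or'-choices, then recurse into one
    role successor per existential constraint."""
    concepts = set(concepts)
    changed = True
    while changed:                 # phase (I): 'and'-rule, deterministically
        changed = False
        for c in list(concepts):
            if c[0] == 'and' and not {c[1], c[2]} <= concepts:
                concepts |= {c[1], c[2]}
                changed = True
    for c in concepts:             # clash check: A together with not-A
        if c[0] == 'bot' or (c[0] == 'not' and c[1] in concepts):
            return False
    # 'or'-rule: existential nondeterminism, simulated by backtracking
    for c in concepts:
        if c[0] == 'or' and c[1] not in concepts and c[2] not in concepts:
            return alc_sat(concepts | {c[1]}) or alc_sat(concepts | {c[2]})
    # phase (II): one successor per existential constraint; the
    # 'forall'-rule pushes value restrictions into that successor
    for c in concepts:
        if c[0] == 'exists':
            succ = {c[2]} | {d[2] for d in concepts
                             if d[0] == 'forall' and d[1] == c[1]}
            if not alc_sat(succ):
                return False
    return True
```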
Proposition 9 (? ?).
The tableau algorithm described above, implemented on an alternating polynomial-time Turing machine, correctly decides concept satisfiability for ALC.
We will now consider several parameterized variants of the problem of concept satisfiability for ALC. These parameters, in a sense, measure the distance of an ALC concept to the logics ALE, ALU and AL, respectively. We will make use of the tableau algorithm described above to establish upper bounds on the complexity of these problems. Lower bounds follow directly from Propositions 6–8.
We begin with the parameterized variant of concept satisfiability where the parameter measures the distance to ALE.
Theorem 10.
Concept satisfiability for the logic ALC, parameterized by the number of occurrences of the union operator ⊔ in the input concept C, is paracoNP-complete.
Proof.
Hardness for paracoNP follows from the fact that ALE concept satisfiability is coNP-complete (Proposition 6). Any ALE concept is an ALC concept with zero occurrences of the union operator ⊔. Therefore, the problem of ALC concept satisfiability parameterized by the number of occurrences of the union operator ⊔ in C is already coNP-hard for the parameter value 0. From this, it follows that the parameterized problem is paracoNP-hard (?).
To show that the parameterized problem is also contained in paracoNP, we describe an algorithm that can be implemented by an alternating Turing machine that only makes use of universal nondeterminism and that runs in fixed-parameter tractable time. This algorithm is similar to the tableau algorithm for ALC described above, with the only difference that the ⊔-rule is now not implemented using existential nondeterminism. Instead, we deterministically iterate over all possible choices that can be made in executions of the ⊔-rule. That is, whenever the ⊔-rule is applied, resulting in two possible next ABoxes A' and A'', we first continue the algorithm with A', and if the continuation of the algorithm with A' failed, we then continue the algorithm with A'' instead.
Let k be the number of occurrences of the union operator ⊔ in C. For each occurrence, the ⊔-rule is applied at most once. Therefore, the total number of possible choices resulting from executions of the ⊔-rule is at most 2^k. Therefore, this modification of the algorithm can be implemented by an alternating Turing machine that only uses universal nondeterminism and that runs in time 2^k·|x|^{O(1)}. In other words, the problem is in paracoNP, and thus is paracoNP-complete. ∎
Theorem 11.
Concept satisfiability for the logic ALC, parameterized by the number of occurrences of full existential quantification ∃r.C in the input concept, is para-NP-complete.
Proof.
Hardness for para-NP follows from the fact that ALU concept satisfiability is NP-complete (Proposition 7). Any ALU concept is an ALC concept with zero occurrences of full existential quantification ∃r.C. Therefore, the problem of ALC concept satisfiability parameterized by the number of occurrences of full existential quantification is already NP-hard for the parameter value 0. From this, it follows that the parameterized problem is para-NP-hard (?).
To show membership in para-NP, we modify the tableau algorithm for ALC, similarly to the way we did in the proof of Theorem 10. In particular, we describe an algorithm that can be implemented by an alternating Turing machine that makes use of existential nondeterminism only and that runs in fixed-parameter tractable time. We do so by executing the (∃)-rule deterministically, instead of using universal nondeterminism. That is, instead of using universal nondeterminism to choose which instantiation of the condition of the (∃)-rule to use, we iterate over all possibilities deterministically.
Let k be the number of occurrences of full existential quantification ∃r.C in the input concept. At each point, there are at most k different ways of instantiating the (∃)-rule. Moreover, after the (∃)-rule has been applied at most k times, it is not applicable anymore. Therefore, the total number of possible choice sequences to iterate over is at most k^k. Consequently, this modification of the algorithm can be implemented by an alternating Turing machine that only uses existential nondeterminism and that runs in time k^k · p(n) for some polynomial p, where n is the input size. In other words, the problem is in para-NP, and thus is para-NP-complete. ∎
Theorem 12.
Concept satisfiability for the logic ALC, parameterized by both (i) the number of occurrences of the union operator ⊔ and (ii) the number of occurrences of full existential quantification ∃r.C in the input concept, is fixed-parameter tractable.
Proof (sketch).
We can modify the alternating polynomial-time tableau algorithm for ALC concept satisfiability to work in deterministic fpt-time by implementing both the (⊔)-rule and the (∃)-rule deterministically, iterating sequentially over all possible choices that can be made for these rules. That is, we combine the ideas behind the proofs of Theorems 10 and 11. We omit the details of this fpt-time algorithm. ∎
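The idea of running all tableau rules deterministically can be illustrated with a toy implementation. The following sketch makes several simplifying assumptions not taken from the paper: it decides plain ALC concept satisfiability without TBoxes, encodes concepts as nested tuples, and omits blocking. The two alternatives of the (⊔)-rule are tried one after the other, and the successors demanded by existential restrictions are checked one after the other.

```python
# A toy, fully deterministic tableau procedure for ALC concept
# satisfiability WITHOUT TBoxes. The tuple encoding of concepts is our
# own assumption: ('atom', A), ('not', C), ('and', C, D), ('or', C, D),
# ('exists', r, C), ('forall', r, C).

def nnf(c, neg=False):
    """Push negation down to concept names (negation normal form)."""
    op = c[0]
    if op == 'atom':
        return ('not', c) if neg else c
    if op == 'not':
        return nnf(c[1], not neg)
    if op in ('and', 'or'):
        op2 = {'and': 'or', 'or': 'and'}[op] if neg else op
        return (op2, nnf(c[1], neg), nnf(c[2], neg))
    # op in ('exists', 'forall')
    op2 = {'exists': 'forall', 'forall': 'exists'}[op] if neg else op
    return (op2, c[1], nnf(c[2], neg))

def consistent(s):
    """Is the set s of NNF concepts jointly satisfiable at one node?"""
    s = set(s)
    changed = True                       # saturate with the (and)-rule
    while changed:
        changed = False
        for c in list(s):
            if c[0] == 'and' and not ({c[1], c[2]} <= s):
                s |= {c[1], c[2]}
                changed = True
    if any(('not', c) in s for c in s if c[0] == 'atom'):
        return False                     # clash: A and not-A
    for c in s:                          # (or)-rule: try both choices
        if c[0] == 'or' and c[1] not in s and c[2] not in s:
            return consistent(s | {c[1]}) or consistent(s | {c[2]})
    for c in s:                          # (exists)-rule: one successor
        if c[0] == 'exists':             # per existential restriction
            succ = {c[2]} | {d[2] for d in s
                             if d[0] == 'forall' and d[1] == c[1]}
            if not consistent(succ):
                return False
    return True

def alc_satisfiable(concept):
    return consistent({nnf(concept)})
```

With k occurrences of ⊔ along a branch, the (or)-rule opens at most 2^k alternatives, which is the source of the running-time bounds used in the proofs above; handling general TBoxes is deliberately out of scope for this sketch.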
Interpretation of the Results
The results in this section are summarized in Table 3. As for the first case study, the results for the second case study show that parameterized complexity theory can make distinctions that classical complexity theory does not see. The problems studied in Theorems 10, 11 and 12 are all PSPACE-complete classically, yet from a parameterized point of view their complexity goes down to para-coNP, para-NP and FPT, respectively. The para-coNP- and para-NP-completeness results of Theorems 10 and 11 also yield algorithms that (1) first use an fpt-encoding to an instance of SAT and (2) then use a SAT solver to decide the problem (see, e.g., ? ?).
parameter                          complexity of ALC concept satisfiability
–                                  PSPACE-complete (Proposition 5)
# of occurrences of ⊔              para-coNP-complete (Theorem 10)
# of occurrences of ∃r.C           para-NP-complete (Theorem 11)
# of occurrences of ⊔ and ∃r.C     FPT (Theorem 12)
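The two-step pattern mentioned above—(1) an fpt-time encoding into SAT, (2) a call to a SAT solver—can be sketched generically. In the sketch below the problem-specific encoding step is left abstract, and a bare-bones DPLL procedure of our own (not an industrial solver) plays the role of the SAT solver; CNF formulas are lists of sets of signed integer literals.

```python
# Step (2) of the pattern: decide a CNF formula by depth-first
# splitting on literals (a minimal DPLL procedure). In practice this
# call would go to an off-the-shelf SAT solver; the fpt-time encoding
# produced in step (1) is problem-specific and omitted here.

def dpll(clauses):
    if not clauses:
        return True                       # every clause satisfied
    if any(not c for c in clauses):
        return False                      # empty clause: conflict
    lit = next(iter(clauses[0]))          # split on some literal
    return any(
        dpll([c - {-val} for c in clauses if val not in c])
        for val in (lit, -lit)
    )

# Example: (x1 or x2) and (not x1) and (not x2) is unsatisfiable,
# while (x1 or not x2) and (x2) is satisfiable.
```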
Case Study 3: A Parameterized Complexity View on Data Complexity
In this section, we provide our third case study illustrating the use of parameterized complexity for the analysis of description logic reasoning. This third case study is about refining the complexity analysis for cases where one part of the input is much smaller than another part. Typically, such cases occur when there is a small TBox and a small query, but a large database of facts (in the form of an ABox). What is often done is that the size of the TBox and the size of the query are treated as fixed constants—and the resulting complexity results are grouped under the name of “data complexity.”
Concretely, we will look at two polynomial-time data complexity results for the description logic ELI. Even though the data complexity view gives the same outlook on the complexity of these problems, we will use the viewpoint of parameterized complexity theory to argue that these two problems in fact have a different complexity. One of these problems is more efficiently solvable than the other.
We chose the example of ELI to illustrate our point because it is technically straightforward. More intricate fixed-parameter tractability results for conjunctive query answering in description logics have been obtained in the literature (?; ?; ?).
We begin by reviewing the description logic ELI and the two reasoning problems for this logic that we will look at (instance checking and conjunctive query entailment). We will review the classical complexity results for these two problems, including the data complexity results. We will then use results from the literature to give a parameterized complexity analysis for these two problems, and argue why the parameterized complexity perspective gives a more accurate view on the complexity of these problems.
The Description Logic ELI
To define the logic ELI, we first consider the logic EL. The description logic EL is obtained from the logic ALC by forbidding any use of the negation operator (¬), the empty concept (⊥), the union operator (⊔), and universal quantification (∀r.C). The description logic ELI is obtained from the logic EL by introducing inverse roles. That is, ELI concepts are defined by the following grammar in Backus–Naur form, for concept names A and role names r:

C ::= ⊤ | A | C ⊓ C | ∃r.C | ∃r⁻.C
Interpretations for ELI are defined as interpretations for EL, with the following addition:

(∃r⁻.C)^I = { d ∈ Δ^I : there exists some e ∈ C^I such that (e, d) ∈ r^I }.
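On finite interpretations, this semantics can be computed directly. The following sketch—using a tuple encoding of concepts that is our own assumption, not from the paper—evaluates an ELI concept to its extension; the last case implements the inverse-role clause above, collecting those elements with an r-predecessor in C.

```python
# Evaluate an ELI concept to its extension C^I over a finite
# interpretation I = (domain, conc_ext, role_ext), where conc_ext maps
# concept names to sets of elements and role_ext maps role names to
# sets of pairs. Hypothetical concept encoding: ('top',), ('name', A),
# ('and', C, D), ('exists', r, C), and ('exists_inv', r, C) for the
# inverse role.

def extension(c, domain, conc_ext, role_ext):
    op = c[0]
    if op == 'top':
        return set(domain)
    if op == 'name':
        return set(conc_ext.get(c[1], ()))
    if op == 'and':
        return (extension(c[1], domain, conc_ext, role_ext)
                & extension(c[2], domain, conc_ext, role_ext))
    pairs = role_ext.get(c[1], set())
    targets = extension(c[2], domain, conc_ext, role_ext)
    if op == 'exists':      # d with some r-successor e in C^I
        return {d for d in domain
                if any((d, e) in pairs for e in targets)}
    # op == 'exists_inv':   # d with some r-predecessor e in C^I
    return {d for d in domain
            if any((e, d) in pairs for e in targets)}
```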
Classical Complexity Results
We consider two reasoning problems for the logic ELI. The first problem that we consider is the problem of instance checking. In this problem, the input consists of an ABox A, a (general) TBox T, an individual name a and a concept C, and the question is whether A, T |= C(a)—that is, whether for each interpretation I such that I |= A and I |= T it holds that a^I ∈ C^I.
The second problem that we consider is the problem of conjunctive query entailment (which can be seen as a generalization of the problem of instance checking). A conjunctive query q is a set of atoms of the form C(x) and r(x, x′), where C is a concept, where r is a role name, and where x, x′ are variables. Let Var(q) denote the set of variables occurring in q. Let I be an interpretation and let π be a mapping from Var(q) to Δ^I. We write I, π |= C(x) if π(x) ∈ C^I, we write I, π |= r(x, x′) if (π(x), π(x′)) ∈ r^I, we write I, π |= q if I, π |= α for all atoms α ∈ q, and we write I |= q if I, π |= q for some mapping π. For any ABox A and TBox T, we write A, T |= q if I |= q for each interpretation I such that I |= A and I |= T. In the problem of conjunctive query entailment, the input consists of a (general) TBox T, an ABox A, and a conjunctive query q, and the question is to decide whether A, T |= q.
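The definition of I |= q suggests a direct algorithm: enumerate all mappings π from the query variables to the domain and test each atom. The sketch below—with concept atoms restricted to concept names, and an atom encoding that is our own assumption—makes the running time of this step visible: the number of candidate mappings is exponential in the number of query variables, which is exactly the dependence on the query size that the parameterized analysis in this case study isolates.

```python
from itertools import product

# Brute-force check of I |= q: try every mapping pi from the query
# variables to the domain. Atoms are encoded (hypothetically) as
# ('C', A, x) for concept-name atoms A(x) and ('r', r, x, y) for role
# atoms r(x, y). The loop runs through |domain| ** |Var(q)| candidate
# mappings: polynomial for a fixed query, exponential in |Var(q)|.

def models_query(domain, conc_ext, role_ext, query):
    variables = sorted({v for atom in query for v in atom[2:]})
    for values in product(domain, repeat=len(variables)):
        pi = dict(zip(variables, values))
        if all((pi[a[2]] in conc_ext.get(a[1], set())) if a[0] == 'C'
               else ((pi[a[2]], pi[a[3]]) in role_ext.get(a[1], set()))
               for a in query):
            return True   # this pi satisfies every atom of q
    return False
```

Note that this only checks a single interpretation; deciding A, T |= q additionally requires reasoning over all models, but the exponential factor in the number of query variables already appears at this matching step.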
Both the problem of instance checking and the problem of conjunctive query entailment for ELI are EXPTIME-complete in general.
Proposition 13 (? ?; ?).
Instance checking for ELI is EXPTIME-complete.
Corollary 14 (? ?; ?).
Conjunctive query entailment for ELI is EXPTIME-complete.
The results of Proposition 13 and Corollary 14 are typically called “combined complexity” results—meaning that all elements of the problem statement are given as inputs for the problem. To study how the complexity of these problems changes when the size of the ABox grows—while the size of the TBox and the size of the query remain the same—one often studies different variants of the problems. In these variants, the TBox T and the query q are fixed (and thus not part of the problem input), and only the ABox is given as problem input. That is, there is a variant of the problem for each choice of T and q. The computational complexity of these problem variants is typically called the “data complexity” of the problem.
From a data complexity perspective, the problems of instance checking and conjunctive query entailment for the logic ELI are both polynomial-time solvable. In other words, from a data complexity point of view these problems are of the same complexity.
Claim 15 (? ?).
Instance checking for ELI is polynomial-time solvable regarding data complexity.
Claim 16 (? ?).
Conjunctive query entailment for ELI is polynomial-time solvable regarding data complexity.
Parameterized Complexity Results
We will argue that the computational complexity of the problems of instance checking and conjunctive query entailment for ELI—when only the ABox grows in size—is of a vastly different nature. We will do so by using the parameterized complexity methodology. Concretely, we will take (the size of) the TBox and (for the case of conjunctive query entailment) the query as parameters, and observe that the parameterized complexity of these two problems differs.
We begin by observing that the algorithm witnessing polynomial-time data complexity for the problem of instance checking for ELI corresponds to an fpt-algorithm for the problem when parameterized by the size |T| of the TBox.
Observation 17.
Instance checking for ELI is fixed-parameter tractable when parameterized by |T|.
Proof.
The algorithm to solve the problem of instance checking for ELI described by Krisnadhi (?, Proposition 4.3) runs in time f(|T|) · p(|A|) for some computable function f and polynomial p, i.e., in fpt-time for the parameter |T|. ∎
The polynomial-time data complexity algorithm for the problem of conjunctive query entailment, on the other hand, does not translate to an fpt-algorithm, but to an xp-algorithm instead—when the parameter is (the sum of) the size |T| of the TBox and the size |q| of the query.
Observation 18.
Conjunctive query entailment for ELI is in XP when parameterized by |T| and |q|.
Proof.
The algorithm to solve the problem of conjunctive query entailment for ELI described by Krisnadhi and Lutz (?, Theorem 4) runs in time polynomial in |A|, where the degree of the polynomial depends on |T| and |q|, i.e., in xp-time for the parameter |T| + |q|. ∎
For this parameter, the problem of conjunctive query entailment for ELI is in fact W[1]-hard—and thus not fixed-parameter tractable, assuming the widely believed conjecture that FPT ≠ W[1]. This follows immediately from the W[1]-hardness of conjunctive query answering over relational databases when parameterized by the size of the query (?, Theorem 1).
Corollary 19 (? ?).
Conjunctive query entailment for ELI is W[1]-hard when parameterized by |T| and |q|.
                                  instance checking     conjunctive query entailm.
combined complexity               EXPTIME-c (Prop 13)   EXPTIME-c (Cor 14)
data complexity                   in P (Claim 15)       in P (Claim 16)
combined complexity
with parameter |T| (and |q|)      in FPT (Obs 17)       in XP (Obs 18), W[1]-h (Cor 19)
Interpretation of the Results
The results in this section are summarized in Table 4. Observation 17 and Corollary 19 show that parameterized complexity can give a more accurate view on data complexity results than classical complexity theory. From a classical complexity perspective, the data complexity variants of both problems are polynomial-time solvable, whereas the parameterized variants of the problems differ in complexity. Both problems are solvable in polynomial time when only the ABox grows in size. However, for instance checking the degree of the polynomial is constant (Observation 17), while for conjunctive query entailment the degree of the polynomial grows with the size of the query (Corollary 19). This difference has enormous effects on the practicality of algorithms solving these problems (see, e.g., ? ?).
Directions for Future Research
The results in this paper are merely an illustrative exposition of the type of parameterized complexity results that are possible for description logic reasoning problems when using less commonly studied concepts (e.g., the classes para-NP, para-coNP and para-PSPACE). We hope that this paper sparks a structured investigation of the parameterized complexity of different reasoning problems for the wide range of description logics that have been studied. For this, it would be interesting to consider a large assortment of different parameters that could reasonably be expected to have small values in applications. It would also be interesting to investigate to what extent, say, para-NP-membership results can be used to develop practical algorithms based on the combination of fpt-time encodings into SAT and SAT solving algorithms.
Conclusion
We showed how the complexity study of description logic problems can benefit from using the framework of parameterized complexity and all the tools and methods that it offers. We did so using three case studies. The first addressed the problem of concept satisfiability for ALC with respect to nearly acyclic TBoxes. The second was about the problem of concept satisfiability for fragments of ALC that are close to the logics ALE, ALU, and AL, respectively. The third case study concerned a parameterized complexity view on the notion of data complexity for instance checking and conjunctive query entailment for ELI. Moreover, we sketched some directions for future research, applying (progressive notions from) parameterized complexity theory to the study of description logic reasoning problems.
Acknowledgments.
This work was supported by the Austrian Science Fund (FWF), project J4047.
References
 [2001] Baader, F., and Sattler, U. 2001. An overview of tableau algorithms for description logics. Studia Logica 69(1):5–40.
 [2003] Baader, F.; Calvanese, D.; McGuinness, D. L.; Nardi, D.; and Patel-Schneider, P. F. 2003. The Description Logic Handbook: Theory, Implementation and Applications. Cambridge University Press.
 [2005] Baader, F.; Lutz, C.; Miličić, M.; Sattler, U.; and Wolter, F. 2005. Integrating description logics and action formalisms for reasoning about web services.
 [2005] Baader, F.; Brandt, S.; and Lutz, C. 2005. Pushing the envelope. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI 2005), 364–369.
 [2008] Baader, F.; Brandt, S.; and Lutz, C. 2008. Pushing the envelope further. In Proceedings of the OWLED 2008 DC Workshop on OWL: Experiences and Directions.
 [2008] Baader, F.; Horrocks, I.; and Sattler, U. 2008. Description logics. In Handbook of Knowledge Representation, volume 3 of Foundations of Artificial Intelligence. Elsevier. 135–179.
 [2017a] Bienvenu, M.; Kikot, S.; Kontchakov, R.; Podolskii, V. V.; Ryzhikov, V.; and Zakharyaschev, M. 2017a. The complexity of ontology-based data access with OWL 2 QL and bounded treewidth queries. In Proceedings of the 36th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems (PODS 2017), 201–216. ACM.
 [2017b] Bienvenu, M.; Kikot, S.; Kontchakov, R.; Ryzhikov, V.; and Zakharyaschev, M. 2017b. On the parameterised complexity of tree-shaped ontology-mediated queries in OWL 2 QL. In Proceedings of the 30th International Workshop on Description Logics (DL 2017).
 [2009] Biere, A.; Heule, M.; van Maaren, H.; and Walsh, T., eds. 2009. Handbook of Satisfiability, volume 185 of Frontiers in Artificial Intelligence and Applications. IOS Press.
 [2012] Bodlaender, H. L.; Downey, R.; Fomin, F. V.; and Marx, D., eds. 2012. The Multivariate Algorithmic Revolution and Beyond. Springer Verlag.
 [2014] Ceylan, I. I., and Peñaloza, R. 2014. The Bayesian description logic BEL. In Proceedings of the 7th International Joint Conference on Automated Reasoning (IJCAR 2014), 480–494.
 [2012] Chen, J., and Kanj, I. A. 2012. Parameterized complexity and subexponentialtime computability. In Bodlaender, H. L.; Downey, R.; Fomin, F. V.; and Marx, D., eds., The Multivariate Algorithmic Revolution and Beyond, 162–195.
 [2005] Chen, J.; Chor, B.; Fellows, M.; Huang, X.; Juedes, D.; Kanj, I. A.; and Xia, G. 2005. Tight lower bounds for certain parameterized NPhard problems. Information and Computation 201(2):216–231.
 [2000] Donini, F. M., and Massacci, F. 2000. EXPTIME tableaux for ALC. Artificial Intelligence 124(1):87–138.
 [1992] Donini, F. M.; Lenzerini, M.; Nardi, D.; Hollunder, B.; Nutt, W.; and Spaccamela, A. M. 1992. The complexity of existential quantification in concept languages. Artificial Intelligence 53(2–3):309–327.
 [1997] Donini, F. M.; Lenzerini, M.; Nardi, D.; and Nutt, W. 1997. The complexity of concept languages. Information and Computation 134(1):1–58.
 [1995] Downey, R. G., and Fellows, M. R. 1995. Fixed-parameter tractability and completeness II: On completeness for W[1]. Theoretical Computer Science 141(1–2):109–131.
 [2013] Downey, R. G., and Fellows, M. R. 2013. Fundamentals of Parameterized Complexity. Springer Verlag.
 [2012] Downey, R. 2012. A basic parameterized complexity primer. In Bodlaender, H. L.; Downey, R.; Fomin, F. V.; and Marx, D., eds., The Multivariate Algorithmic Revolution and Beyond, 91–128.
 [2003] Flum, J., and Grohe, M. 2003. Describing parameterized complexity classes. Information and Computation 187(2):291–319.
 [2006] Flum, J., and Grohe, M. 2006. Parameterized Complexity Theory. Springer Verlag.
 [2014a] de Haan, R., and Szeider, S. 2014a. Fixedparameter tractable reductions to SAT. In Egly, U., and Sinz, C., eds., Proceedings of the 17th International Symposium on the Theory and Applications of Satisfiability Testing (SAT 2014), 85–102.
 [2014b] de Haan, R., and Szeider, S. 2014b. The parameterized complexity of reasoning problems beyond NP. In Baral, C.; De Giacomo, G.; and Eiter, T., eds., Proceedings of the 14th International Conference on the Principles of Knowledge Representation and Reasoning (KR 2014). AAAI Press.
 [2016] de Haan, R., and Szeider, S. 2016. Parameterized complexity results for symbolic model checking of temporal logics. In Proceedings of the 15th International Conference on the Principles of Knowledge Representation and Reasoning (KR 2016), 453–462. AAAI Press.
 [2017] de Haan, R., and Szeider, S. 2017. Parameterized complexity classes beyond paraNP. J. of Computer and System Sciences 87:16–57.
 [2016] de Haan, R. 2016. Parameterized Complexity in the Polynomial Hierarchy. Ph.D. Dissertation, Technische Universität Wien.
 [2011] Kikot, S.; Kontchakov, R.; and Zakharyaschev, M. 2011. On (in)tractability of OBDA with OWL 2 QL. In Proceedings of the 24th International Workshop on Description Logics (DL 2011).
 [2007] Krisnadhi, A., and Lutz, C. 2007. Data complexity in the EL family of description logics. In Proceedings of the 14th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2007), 333–347.
 [2007] Krisnadhi, A. A. 2007. Data complexity of instance checking in the EL family of description logics. Master’s thesis, Technische Universität Dresden, Germany.
 [2012] Motik, B. 2012. Parameterized complexity and fixedparameter tractability of description logic reasoning. In Bjørner, N., and Voronkov, A., eds., Proceedings of the 18th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2012), 13–14.
 [1999] Papadimitriou, C. H., and Yannakakis, M. 1999. On the complexity of database queries. J. of Computer and System Sciences 58(3):407–427.
 [1991] Schild, K. 1991. A correspondence theory for terminological logics: Preliminary report. In Mylopoulos, J., and Reiter, R., eds., Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI 1991), 466–471.
 [1991] SchmidtSchauß, M., and Smolka, G. 1991. Attributive concept descriptions with complements. Artificial Intelligence 48(1):1–26.
 [2014] Simančík, F.; Motik, B.; and Horrocks, I. 2014. Consequence-based and fixed-parameter tractable reasoning in description logics. Artificial Intelligence 209:29–77.
 [2011] Simančík, F.; Motik, B.; and Krötzsch, M. 2011. Fixed parameter tractable reasoning in DLs via decomposition. In Proceedings of the 24th International Workshop on Description Logics (DL 2011), 400–410.