Answer Set Solving with Bounded Treewidth Revisited††thanks: This is the author's self-archived copy including detailed proofs. A preliminary version of the paper was presented at the workshop TAASP'16. Research was supported by the Austrian Science Fund (FWF), Grant Y698.
Parameterized algorithms are a way to solve hard problems more efficiently, given that a specific parameter of the input is small. In this paper, we apply this idea to the field of answer set programming (ASP). To this end, we propose two kinds of graph representations of programs to exploit their treewidth as a parameter. Treewidth roughly measures to which extent the internal structure of a program resembles a tree. Our main contribution is the design of parameterized dynamic programming algorithms, which run in linear time if the treewidth and weights of the given program are bounded. Compared to previous work, our algorithms handle the full syntax of ASP. Finally, we report on an empirical evaluation that shows good runtime behaviour for benchmark instances of low treewidth, especially for counting answer sets.
Parameterized algorithms [14, 5] have attracted considerable interest in recent years and allow us to tackle hard problems by directly exploiting a small parameter of the input problem. One particular goal in this field is to find guarantees that the runtime is exponential exclusively in the parameter and polynomial in the input size (so-called fixed-parameter tractable algorithms). A parameter that has been researched extensively is treewidth [16, 2]. Generally speaking, treewidth measures the closeness of a graph to a tree, based on the observation that problems on trees are often easier to solve than problems on arbitrary graphs. A parameterized algorithm exploiting small treewidth takes a tree decomposition, which is an arrangement of a graph into a tree, and evaluates the problem in parts via dynamic programming (DP) on the tree decomposition.
ASP [3, 13] is a logic-based declarative modelling language and problem solving framework where solutions, so-called answer sets, of a given logic program directly represent the solutions of the modelled problem. Jakl et al. give a DP algorithm for disjunctive rules only, whose runtime is linear in the input size of the program and double exponential in the treewidth of a particular graph representation of the program structure. However, modern ASP systems allow for an extended syntax that includes, among others, weight rules and choice rules. Pichler et al. investigated the complexity of programs with weight rules. They also presented DP algorithms for programs with cardinality rules (i.e., a restricted version of weight rules), but without disjunction.
In this paper, we propose DP algorithms for finding answer sets that are able to directly treat all kinds of ASP rules. While such rules can be transformed into disjunctive rules, our algorithms avoid the resulting polynomial overhead. In particular, we present two approaches based on two different types of graphs representing the program structure. Firstly, we consider the primal graph, which allows for an intuitive algorithm that also treats the extended ASP rules. While for a given disjunctive program the treewidth of the primal graph may be larger than the treewidth of the graph representation used by Jakl et al., our algorithm uses simpler data structures and lays the foundations for understanding how the extended rules can be handled. Our second graph representation is the incidence graph, a generalization of the representation used by Jakl et al. Algorithms for this graph representation are more sophisticated, since weight and choice rules can no longer be completely evaluated in a single computation step. Our algorithms yield upper bounds that are linear in the program size, double-exponential in the treewidth, and single-exponential in the maximum weights. We extend both algorithms to count optimal answer sets. For this particular task, experiments show that we are able to outperform existing systems on benchmarks from multiple domains, given input instances of low treewidth, both randomly generated and obtained from real-world graphs of traffic networks. Our system is publicly available on GitHub (see https://github.com/daajoe/dynasp).
2 Formal Background
2.1 Answer Set Programming (ASP)
ASP is a declarative modeling and problem solving framework; for a full introduction, see, e.g., [3, 13]. State-of-the-art ASP grounders support the full ASP-Core-2 language and output the smodels input format, which we will use for our algorithms. Let $\ell$, $m$, $n$ be non-negative integers such that $\ell \leq m \leq n$, let $a_1, \ldots, a_n$ be distinct propositional atoms, and let $w, w_1, \ldots, w_n$ be non-negative integers. A choice rule is an expression of the form $\{a_1; \ldots; a_\ell\} \leftarrow a_{\ell+1}, \ldots, a_m, \neg a_{m+1}, \ldots, \neg a_n$, a disjunctive rule is of the form $a_1 \vee \cdots \vee a_\ell \leftarrow a_{\ell+1}, \ldots, a_m, \neg a_{m+1}, \ldots, \neg a_n$, and a weight rule is of the form $a_1 \leftarrow w \leq \{a_2 = w_2, \ldots, a_m = w_m, \neg a_{m+1} = w_{m+1}, \ldots, \neg a_n = w_n\}$. Finally, an optimization rule is an expression of the form $\mathrm{minimize}\{a_1 = w_1, \ldots, a_m = w_m, \neg a_{m+1} = w_{m+1}, \ldots, \neg a_n = w_n\}$. A rule is either a disjunctive, a choice, a weight, or an optimization rule.
For a choice, disjunctive, or weight rule $r$, let $H_r := \{a_1, \ldots, a_\ell\}$, $B^+_r := \{a_{\ell+1}, \ldots, a_m\}$, and $B^-_r := \{a_{m+1}, \ldots, a_n\}$. For a weight rule $r$, let $\mathrm{wght}(r, a_i)$ map atom $a_i$ to its corresponding weight $w_i$ in rule $r$ if $a_i \in B_r$ and to $0$ otherwise, let $\mathrm{wght}(r, A) := \sum_{a \in A} \mathrm{wght}(r, a)$ for a set $A$ of atoms, and let $\mathrm{bnd}(r) := w$ be its bound. For an optimization rule $r$, let $H_r := \emptyset$ and, for $1 \leq i \leq m$, let $a_i \in B^+_r$ and $\mathrm{wght}(r, a_i) := w_i$; for $m < i \leq n$, let $a_i \in B^-_r$ and $\mathrm{wght}(r, a_i) := w_i$. For a rule $r$, let $\mathrm{at}(r) := H_r \cup B^+_r \cup B^-_r$ denote its atoms and $B_r := B^+_r \cup B^-_r$ its body. A program $\Pi$ is a set of rules. Let $\mathrm{at}(\Pi) := \bigcup_{r \in \Pi} \mathrm{at}(r)$, and let $\Pi_{\mathrm{ch}}$, $\Pi_{\mathrm{disj}}$, $\Pi_{\mathrm{opt}}$, and $\Pi_{\mathrm{weight}}$ denote the sets of all choice, disjunctive, optimization, and weight rules in $\Pi$, respectively.
A set $M \subseteq \mathrm{at}(\Pi)$ satisfies a rule $r$ if (i) $(H_r \cup B^-_r) \cap M \neq \emptyset$ or $B^+_r \setminus M \neq \emptyset$ for $r \in \Pi_{\mathrm{disj}}$, (ii) $H_r \cap M \neq \emptyset$ or $\mathrm{wght}(r, (B^+_r \cap M) \cup (B^-_r \setminus M)) < \mathrm{bnd}(r)$ for $r \in \Pi_{\mathrm{weight}}$, or (iii) $r \in \Pi_{\mathrm{ch}} \cup \Pi_{\mathrm{opt}}$. $M$ is a model of $\Pi$, denoted by $M \models \Pi$, if $M$ satisfies every rule $r \in \Pi$. Further, let $\mathrm{Mod}(\Pi') := \{M \subseteq \mathrm{at}(\Pi') : M \models \Pi'\}$ for a program $\Pi'$.
The reduct $r^M$ (i) of a choice rule $r$ is the set $\{a \leftarrow B^+_r : a \in H_r \cap M,\ B^-_r \cap M = \emptyset\}$ of rules, (ii) of a disjunctive rule $r$ is the singleton $\{H_r \leftarrow B^+_r\}$ if $B^-_r \cap M = \emptyset$ and $\emptyset$ otherwise, and (iii) of a weight rule $r$ is the singleton $\{a_1 \leftarrow w' \leq \{a_2 = w_2, \ldots, a_m = w_m\}\}$ where $w' := \mathrm{bnd}(r) - \mathrm{wght}(r, B^-_r \setminus M)$. $\Pi^M := \bigcup_{r \in \Pi \setminus \Pi_{\mathrm{opt}}} r^M$ is called the GL reduct of $\Pi$ with respect to $M$. A set $M \subseteq \mathrm{at}(\Pi)$ is an answer set of program $\Pi$ if (i) $M \models \Pi$ and (ii) there is no $M' \subsetneq M$ such that $M' \models \Pi^M$, that is, $M$ is subset minimal with respect to $\Pi^M$.
We call $\mathrm{cst}(\Pi, M) := \sum_{r \in \Pi_{\mathrm{opt}}} \mathrm{wght}(r, (B^+_r \cap M) \cup (B^-_r \setminus M))$ the cost of model $M$ for $\Pi$ with respect to the set $\Pi_{\mathrm{opt}}$. An answer set $M$ of $\Pi$ is optimal if its cost is minimal over all answer sets of $\Pi$.
Let $\Pi = \{\, \{a\} \leftarrow;\ b \vee c \leftarrow a;\ d \leftarrow 1 \leq \{b = 1, c = 1\} \,\}$. Then, the sets $\emptyset$, $\{a, b, d\}$, and $\{a, c, d\}$ are answer sets of $\Pi$.
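To make the semantics above concrete, the following brute-force sketch enumerates answer sets directly from the definitions: guess a model, build the GL reduct, and check subset minimality. The rule encoding, function names, and example program are our own illustration, not the paper's data structures or algorithms.

```python
from itertools import combinations

# Hypothetical rule encoding:
#   ("choice", head_set, pos_body, neg_body)
#   ("disj",   head_set, pos_body, neg_body)
#   ("weight", head_atom, bound, pos_weights, neg_weights)
# where pos_weights/neg_weights map atoms to their weights.

def satisfies(M, r):
    if r[0] == "choice":                    # choice rules are always satisfied
        return True
    if r[0] == "disj":
        _, head, bpos, bneg = r
        return bool((head | bneg) & M) or bool(bpos - M)
    _, head, bound, pw, nw = r              # weight rule
    w = sum(v for a, v in pw.items() if a in M)
    w += sum(v for a, v in nw.items() if a not in M)
    return head in M or w < bound

def reduct(Pi, M):
    """GL reduct as a list of (head_set, pos_body, bound, pos_weights);
    bound is None for rules derived from choice or disjunctive rules."""
    red = []
    for r in Pi:
        if r[0] == "choice":
            _, head, bpos, bneg = r
            if not (bneg & M):
                red += [({a}, bpos, None, None) for a in head & M]
        elif r[0] == "disj":
            _, head, bpos, bneg = r
            if not (bneg & M):
                red.append((head, bpos, None, None))
        else:                               # weight: shift neg. part into bound
            _, head, bound, pw, nw = r
            w2 = bound - sum(v for a, v in nw.items() if a not in M)
            red.append(({head}, set(pw), w2, pw))
    return red

def models_reduct(M2, red):
    for head, bpos, bound, pw in red:
        if bound is None:
            if bpos <= M2 and not (head & M2):
                return False
        elif sum(v for a, v in pw.items() if a in M2) >= bound and not (head & M2):
            return False
    return True

def answer_sets(Pi, atoms):
    subsets = [set(c) for k in range(len(atoms) + 1)
               for c in combinations(sorted(atoms), k)]
    result = []
    for M in subsets:                       # (i) M must be a model of Pi
        if all(satisfies(M, r) for r in Pi):
            red = reduct(Pi, M)             # (ii) no proper subset models Pi^M
            if not any(M2 < M and models_reduct(M2, red) for M2 in subsets):
                result.append(M)
    return result
```

On the program $\{a\} \leftarrow$; $b \vee c \leftarrow a$; $d \leftarrow 1 \leq \{b = 1, c = 1\}$, this enumeration yields exactly the three answer sets $\emptyset$, $\{a, b, d\}$, and $\{a, c, d\}$.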
Given a program $\Pi$, we consider the problems of computing an answer set (called AS) and outputting the number of optimal answer sets (called #AspO).
Next, we show that under standard complexity-theoretic assumptions #Asp is strictly harder than #SAT.
#Asp for programs without optimization rules is $\#\cdot\mathrm{coNP}$-complete.
Observe that programs containing choice and weight rules can be compiled into disjunctive programs without these rule types (normalization, see ) using a polynomial number of rules (in the original program size). Membership follows from the fact that, given such a normalized program $\Pi$ and an interpretation $M$, checking whether $M$ is an answer set of $\Pi$ is coNP-complete, see e.g., . Hardness is a direct consequence of $\#\cdot\mathrm{coNP}$-hardness of the problem of counting the subset-minimal models of a CNF formula, since answer sets of negation-free programs and subset-minimal models of CNF formulas are essentially the same objects. ∎
The counting complexity of #Asp including optimization rules (i.e., where only optimal answer sets are counted) is slightly higher; exact results can be established employing hardness results from other sources .
2.2 Tree Decompositions
Let $G = (V, E)$ be a graph, $T = (N, F)$ a rooted tree, and $\chi: N \rightarrow 2^V$ a function that maps each node $t \in N$ to a set of vertices. We call the sets $\chi(t)$ bags and $N$ the set of nodes. Then, the pair $\mathcal{T} = (T, \chi)$ is a tree decomposition (TD) of $G$ if the following conditions hold: (i) all vertices occur in some bag, that is, for every vertex $v \in V$ there is a node $t \in N$ with $v \in \chi(t)$; (ii) all edges occur in some bag, that is, for every edge $e \in E$ there is a node $t \in N$ with $e \subseteq \chi(t)$; and (iii) the connectedness condition: for any three nodes $t_1, t_2, t_3 \in N$, if $t_2$ lies on the unique path from $t_1$ to $t_3$, then $\chi(t_1) \cap \chi(t_3) \subseteq \chi(t_2)$. We call $\max_{t \in N} |\chi(t)| - 1$ the width of the TD. The treewidth of a graph $G$ is the minimum width over all possible TDs of $G$.
Note that each graph has a trivial TD consisting of the tree $(\{t\}, \emptyset)$ and the mapping $\chi(t) := V$. It is well known that the treewidth of a tree is $1$, and a graph containing a clique of size $k$ has treewidth at least $k - 1$. For some arbitrary but fixed integer $k$ and a graph of treewidth at most $k$, we can compute a TD of width at most $k$ in time linear in the number of vertices (but exponential in $k$) . Given a TD $\mathcal{T} = (T, \chi)$ with $T = (N, F)$, for a node $t \in N$ we say that $\mathrm{type}(t)$ is leaf if $t$ has no children; join if $t$ has children $t'$ and $t''$ with $t' \neq t''$ and $\chi(t) = \chi(t') = \chi(t'')$; int ("introduce") if $t$ has a single child $t'$, $\chi(t') \subseteq \chi(t)$, and $|\chi(t)| = |\chi(t')| + 1$; rem ("removal") if $t$ has a single child $t'$, $\chi(t) \subseteq \chi(t')$, and $|\chi(t')| = |\chi(t)| + 1$. If every node $t \in N$ has at most two children, $\mathrm{type}(t) \in \{\text{leaf}, \text{join}, \text{int}, \text{rem}\}$, and the bags of leaf nodes and the root are empty, then the TD is called nice. For every TD, we can compute a nice TD in linear time without increasing the width . In our algorithms, we will traverse a TD bottom up; therefore, let $\mathrm{post\text{-}order}(T, t)$ be the sequence of nodes in post-order of the induced subtree of $T$ rooted at $t$.
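The three TD conditions can be checked mechanically. The following sketch, with our own helper names and input conventions (a graph as an edge set, a TD as a tree edge set plus a bag map), verifies them:

```python
from collections import deque

def is_tree_decomposition(graph_edges, tree_edges, bags):
    """Check the three TD conditions: vertex coverage, edge coverage,
    and connectedness of {t : v in bags[t]} for every vertex v."""
    vertices = {v for e in graph_edges for v in e}
    # (i) every vertex occurs in some bag
    if not all(any(v in b for b in bags.values()) for v in vertices):
        return False
    # (ii) every edge occurs in some bag
    if not all(any(set(e) <= b for b in bags.values()) for e in graph_edges):
        return False
    # (iii) the nodes whose bags contain v induce a connected subtree
    adj = {t: set() for t in bags}
    for s, t in tree_edges:
        adj[s].add(t)
        adj[t].add(s)
    for v in vertices:
        S = {t for t in bags if v in bags[t]}
        start = next(iter(S))
        seen, queue = {start}, deque([start])
        while queue:                      # BFS restricted to S
            u = queue.popleft()
            for w in adj[u]:
                if w in S and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != S:
            return False
    return True

def width(bags):
    """Width of a TD: largest bag size minus one."""
    return max(len(b) for b in bags.values()) - 1
```

For instance, the path graph $a$-$b$-$c$ with bags $\{a,b\}$ and $\{b,c\}$ on a two-node tree passes the check and has width $1$; duplicating a vertex in two non-adjacent bags violates condition (iii).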
2.3 Graph Representations of Programs
In order to use TDs for ASP solving, we need dedicated graph representations of ASP programs. The primal graph $P(\Pi)$ of a program $\Pi$ has the atoms of $\Pi$ as vertices and an edge $ab$ if there exists a rule $r \in \Pi$ with $a, b \in \mathrm{at}(r)$. The incidence graph $I(\Pi)$ of $\Pi$ is the bipartite graph that has the atoms and rules of $\Pi$ as vertices and an edge $ar$ if $a \in \mathrm{at}(r)$ for some rule $r \in \Pi$. These definitions adapt similar concepts from SAT .
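The two representations can be sketched in a few lines; the rule encoding below (a rule as a name plus its atom set, which is all the graphs depend on) is our own illustration:

```python
from itertools import combinations

def primal_graph(rules):
    """Vertices: atoms; edge ab whenever a and b occur together in a rule."""
    edges = set()
    for _, atoms in rules:
        edges |= {frozenset(p) for p in combinations(sorted(atoms), 2)}
    return edges

def incidence_graph(rules):
    """Bipartite: atoms and rule names; edge (a, r) iff atom a occurs in rule r."""
    return {(a, name) for name, atoms in rules for a in atoms}
```

For two rules over atoms $\{a,b,c\}$ and $\{c,d\}$, the primal graph has the four edges $ab$, $ac$, $bc$, $cd$ (each rule induces a clique), while the incidence graph has one edge per atom occurrence.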
Let $\mathcal{T} = (T, \chi)$ be a nice TD of a graph representation of a program $\Pi$. Further, let $T = (N, F)$ and $t \in N$. The bag-rules are defined as $\Pi_t := \{r \in \Pi : \mathrm{at}(r) \subseteq \chi(t)\}$ if the representation is the primal graph and as $\Pi_t := \Pi \cap \chi(t)$ if it is the incidence graph. Further, the set $a_{\leq t} := \mathrm{at}(\Pi) \cap \bigcup_{t' \in \mathrm{post\text{-}order}(T, t)} \chi(t')$ is called atoms below $t$, the program below $t$ is defined as $\Pi_{\leq t} := \bigcup_{t' \in \mathrm{post\text{-}order}(T, t)} \Pi_{t'}$, and the program strictly below $t$ is $\Pi_{<t} := \Pi_{\leq t} \setminus \Pi_t$. It holds that $\Pi_{\leq \mathrm{root}(T)} = \Pi$ and $a_{\leq \mathrm{root}(T)} = \mathrm{at}(\Pi)$.
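For the primal-graph case, the bag-rules and the "below" notions can be sketched as follows; rules are encoded as a name plus atom set and the TD as a child map, all of which are our own illustrative conventions:

```python
def bag_rules_primal(rules, bag):
    """Bag-rules for the primal representation: rules whose atoms the bag covers."""
    return [r for r in rules if r[1] <= bag]

def below(td_children, bags, rules, t):
    """Return (atoms below t, program below t) for the primal representation."""
    nodes, stack = [], [t]
    while stack:                          # collect the subtree rooted at t
        u = stack.pop()
        nodes.append(u)
        stack.extend(td_children.get(u, []))
    atoms = set().union(*(bags[u] for u in nodes))
    program = [r for r in rules if any(r[1] <= bags[u] for u in nodes)]
    return atoms, program
```

At the root of the TD, this returns all atoms and the full program, matching the identities stated above.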
3 ASP via Dynamic Programming on TDs
In the next two sections, we propose two dynamic programming (DP) algorithms, and , for ASP without optimization rules based on two different graph representations, namely the primal and the incidence graph. Both algorithms make use of the fact that answer sets of a given program $\Pi$ are (i) models of $\Pi$ and (ii) subset minimal with respect to $\Pi^M$. Intuitively, our algorithms compute, for each TD node $t$, (i) sets of atoms—(local) witnesses—representing parts of potential models of $\Pi$, and (ii) for each local witness, subsets of it—(local) counterwitnesses—representing subsets of potential models of $\Pi$ which (locally) contradict that the witness can be extended to an answer set of $\Pi$. We give the basis of our algorithms in Algorithm 1, which sketches the general DP scheme for ASP solving on TDs. Roughly, the algorithm splits the search space based on a given nice TD and evaluates the input program in parts. The results are stored in so-called tables, that is, sets of all possible tuples of witnesses and counterwitnesses for a given TD node. To this end, we define the table algorithms and , which compute tables for a node of the TD using the primal graph and incidence graph, respectively. To be more concrete, given a table algorithm , algorithm visits every node $t$ in post-order; then, based on , computes a table for node $t$ from the tables of the children of $t$, and stores the result in Tables[$t$].
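The traversal scheme underlying Algorithm 1 can be sketched generically: visit the nice TD in post-order and let a table algorithm, passed as a callback, compute each node's table from its children's tables. The function names and signatures below are our own sketch, not the paper's pseudocode; the actual table algorithms are far more involved.

```python
def dp_on_tree_decomposition(td_children, bags, root, table_algorithm):
    """td_children: child map of the nice TD; bags: node -> bag.
    table_algorithm(t, bag, child_tables) -> table for node t."""
    tables = {}
    # iterative post-order traversal of the rooted tree
    order, stack = [], [root]
    while stack:
        t = stack.pop()
        order.append(t)
        stack.extend(td_children.get(t, []))
    for t in reversed(order):          # children are processed before parents
        child_tables = [tables[c] for c in td_children.get(t, [])]
        tables[t] = table_algorithm(t, bags[t], child_tables)
    return tables
```

Plugging in a trivial callback (e.g., one that counts leaves) already exercises the bottom-up order; the real table algorithms dispatch on the node type (leaf, int, rem, join) instead.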
3.1 Using Decompositions of Primal Graphs
In this section, we present our algorithm in two parts: (i) finding models of $\Pi$ and (ii) finding models which are subset minimal with respect to $\Pi^M$. For the sake of clarity, we first present only the first tuple positions (red parts) of Algorithm 2 to solve (i). We call the resulting table algorithm .
Consider program $\Pi$ from Example 1 and in Figure 2 (left) a TD of the primal graph of $\Pi$, together with the tables , , , which illustrate the computation results obtained during the post-order traversal of the TD by . Table  as . Since , we construct table  from  by taking  and  for each  (corresponding to a guess on ). Then,  introduces  and  introduces . , but since  we have  for . In consequence, for each  of table , we have , since  enforces satisfiability of  in node . We derive tables  to  similarly. Since , we remove atom  from all elements in  to construct . Note that we have already seen all rules in which  occurs and hence  can no longer affect witnesses during the remaining traversal. We similarly construct . Since , we construct table  by taking the intersection . Intuitively, this combines witnesses agreeing on . Node  is again of type rem. By the definition of the primal graph and of TDs, for every rule the atoms of that rule occur together in at least one common bag. Hence,  and since , we can construct a model of $\Pi$ from the tables. For example, we obtain the model .
Let $\Pi$ be a program and $\mathcal{T} = (T, \chi)$ a TD of the primal graph of $\Pi$. Then, for every rule $r \in \Pi$ there is at least one bag $\chi(t)$ of $\mathcal{T}$ containing all atoms of $r$, that is, $\mathrm{at}(r) \subseteq \chi(t)$.
By definition, the primal graph contains a clique on all atoms participating in a rule $r$. Since a TD must contain each edge of the original graph in some bag and satisfies the connectedness condition, it follows from the well-known fact that every clique of a graph is completely contained in some bag of any of its TDs that there is at least one bag containing all (clique) atoms of $r$. ∎
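The lemma can be sanity-checked on concrete instances. The tiny helper below (our own illustration, not part of the paper's algorithms) tests whether some bag covers all atoms of a rule; for any valid TD of the primal graph it must succeed for every rule:

```python
def rule_covered(bags, rule_atoms):
    """True iff some bag of the TD contains all atoms of the rule."""
    return any(rule_atoms <= bag for bag in bags.values())
```

For example, with bags $\{a,b,c\}$ and $\{c,d\}$, the rules over $\{a,b,c\}$ and $\{c,d\}$ are each covered, while a hypothetical rule over $\{a,d\}$ is not, correctly signalling that the decomposition would be invalid for a program containing such a rule.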