# Practical Access to Dynamic Programming on Tree Decompositions

###### Abstract

Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code.

The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle’s celebrated theorem, which states that every MSO-definable problem can be efficiently solved if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model-checker for a small fragment of MSO. This fragment is powerful enough to describe many natural problems, and our model-checker turns out to be very competitive against similar state-of-the-art tools.

Max Bannach, Institute for Theoretical Computer Science, Universität zu Lübeck, Lübeck, Germany, bannach@tcs.uni-luebeck.de, https://orcid.org/0000-0002-6475-5512

Sebastian Berndt, Department of Computer Science, Kiel University, Kiel, Germany, seb@informatik.uni-kiel.de, https://orcid.org/0000-0003-4177-8081

## 1 Introduction

Parameterized algorithms aim to solve intractable problems on instances where some parameter tied to the complexity of the instance is small. This line of research has seen enormous growth in the last decades and produced a wide range of algorithms [9]. More formally, a problem is fixed-parameter tractable (in FPT) if every instance $x$ can be solved in time $f(\kappa(x))\cdot |x|^{O(1)}$ for a computable function $f$, where $\kappa(x)$ is the parameter of $x$. While the impact of parameterized complexity on the theory of algorithms and complexity cannot be overstated, its practical component is much less understood. Very recently, the investigation of the practicability of fixed-parameter tractable algorithms for real-world problems has started to become an important subfield (see e.g. [18, 11]). We investigate the practicability of dynamic programming on tree decompositions, one of the most fundamental techniques of parameterized algorithms. A general result explaining the usefulness of tree decompositions was given by Courcelle [8], who showed that every property that can be expressed in monadic second-order logic is fixed-parameter tractable when parameterized by treewidth. By combining this result (known as Courcelle's Theorem) with the algorithm of Bodlaender [7] to compute an optimal tree decomposition in FPT-time, a wide range of graph-theoretic problems is known to be solvable on such tree-like graphs. Unfortunately, both ingredients of this approach are very expensive in practice.

One of the major achievements concerning practical parameterized algorithms was the discovery of a practically fast algorithm for treewidth due to Tamaki [19]. Concerning Courcelle's Theorem, there are currently two contenders for efficient implementations: D-Flat, an Answer Set Programming (ASP) solver for problems on tree decompositions [1]; and Sequoia, an MSO solver based on model-checking games [17]. Both solvers can handle very general problems, and the corresponding overhead may thus be large compared to a straightforward implementation of the dynamic program for a specific problem.

#### Our Contributions

In order to study the practicability of dynamic programs on tree decompositions, we expand our tree decomposition library Jdrasil with an easy-to-use interface for such programs: The user only needs to specify the update rules for the different kinds of nodes within the tree decomposition. The remaining work, namely computing a suitably optimized tree decomposition and performing the actual run of the dynamic program, is done by Jdrasil. This allows users to implement a wide range of algorithms within very few lines of code and, thus, gives the opportunity to test the practicability of these algorithms quickly. This interface is presented in Section 3.

While D-Flat and Sequoia solve very general problems, the experimental results of Section 5 show that naïve implementations of dynamic programs may be much more efficient. In order to balance the generality of MSO solvers against the speed of direct implementations, we introduce in Section 4 a small MSO fragment that avoids quantifier alternation. By concentrating on this fragment, we are able to build a model-checker, called Jatatosk, that runs nearly as fast as direct implementations of the dynamic programs. To show the feasibility of our approach, we compare the running times of D-Flat, Sequoia, and Jatatosk for various problems. It turns out that Jatatosk is competitive against the other solvers and, furthermore, its behaviour is much more consistent (i.e. it does not fluctuate greatly on similar instances). We conclude that concentrating on just a small fragment of MSO gives rise to practically fast solvers, which are still able to solve a large class of problems on graphs of bounded treewidth.

## 2 Preliminaries

All graphs considered in this paper are undirected, that is, they consist of a set $V$ of vertices and a symmetric edge relation $E\subseteq V\times V$. We assume the reader to be familiar with basic graph-theoretic terminology, see for instance [10]. A tree decomposition of a graph $G=(V,E)$ is a tuple $(T,\iota)$ consisting of a rooted tree $T$ and a mapping $\iota$ from nodes of $T$ to sets of vertices of $G$ (which we call bags) such that (1) for every $v\in V$ there is a node $n$ in $T$ with $v\in\iota(n)$, (2) for every edge $\{v,w\}\in E$ there is a node $m$ in $T$ with $\{v,w\}\subseteq\iota(m)$, and (3) for every $v\in V$ the set of nodes $n$ with $v\in\iota(n)$ is connected in $T$. The width of a tree decomposition is the maximum size of one of its bags minus one, and the treewidth of $G$, denoted by $\mathrm{tw}(G)$, is the minimum width any tree decomposition of $G$ must have.

In order to describe dynamic programs over tree decompositions, it turns out to be helpful to transform a tree decomposition into a more structured one. A nice tree decomposition is a triple $(T,\iota,\eta)$ where $(T,\iota)$ is a tree decomposition and $\eta$ is a labeling of the nodes of $T$ such that (1) nodes labeled "leaf" are exactly the leaves of $T$, and the bags of these nodes are empty; (2) nodes $n$ labeled "introduce" or "forget" have exactly one child $m$ such that there is exactly one vertex $v$ with either $v\notin\iota(m)$ and $\iota(n)=\iota(m)\cup\{v\}$, or $v\in\iota(m)$ and $\iota(n)=\iota(m)\setminus\{v\}$, respectively; (3) nodes $n$ labeled "join" have exactly two children $m_1,m_2$ with $\iota(n)=\iota(m_1)=\iota(m_2)$. A very nice tree decomposition is a nice tree decomposition that additionally has exactly one node labeled "edge" for every edge $\{v,w\}\in E$, which virtually introduces the edge to the bag; i.e., whenever we introduce a vertex, we assume it to be "isolated" in the bag until its incident edges are introduced. It is well known that any tree decomposition can efficiently be transformed into a very nice one without increasing its width (essentially, traverse the tree and "pull apart" bags) [9]. Whenever we talk about tree decompositions in the rest of the paper, we actually mean very nice tree decompositions. However, we want to stress that all our interfaces also support "just" nice tree decompositions.

We assume the reader to be familiar with basic logic terminology and give just a brief overview of the syntax and semantics of monadic second-order logic (MSO); see for instance [13] for a detailed introduction. A vocabulary (or signature) $\tau$ is a set of relational symbols $R_i$, each of some arity $a_i\geq 1$. A $\tau$-structure $S$ is a set $U$, called universe, together with an interpretation $R_i^S\subseteq U^{a_i}$ of the relational symbols. Let $x_1,x_2,\dots$ be a sequence of first-order variables and $X_1,X_2,\dots$ be a sequence of second-order variables, each of some arity $\mathrm{ar}(X_i)$. The atomic $\tau$-formulas are $x_i=x_j$ for two first-order variables $x_i$ and $x_j$, and $R(x_{i_1},\dots,x_{i_a})$, where $R$ is either a relational symbol or a second-order variable of arity $a$. The set of $\tau$-formulas is inductively defined by (1) the set of atomic $\tau$-formulas; (2) Boolean connections $\neg\alpha$, $\alpha\land\beta$, and $\alpha\lor\beta$ of $\tau$-formulas $\alpha$ and $\beta$; (3) quantified formulas $\exists x\,\alpha$ and $\forall x\,\alpha$ for a first-order variable $x$ and a $\tau$-formula $\alpha$; (4) quantified formulas $\exists X\,\alpha$ and $\forall X\,\alpha$ for a second-order variable $X$ and a $\tau$-formula $\alpha$. The set of free variables of a formula $\varphi$ consists of the variables that appear in $\varphi$ but are not bound by a quantifier. We denote a formula $\varphi$ with free variables $x_1,\dots,x_k,X_1,\dots,X_\ell$ as $\varphi(x_1,\dots,x_k,X_1,\dots,X_\ell)$. Finally, we say a $\tau$-structure $S$ with universe $U$ is a model of a formula $\varphi(x_1,\dots,x_k,X_1,\dots,X_\ell)$ if there are elements $u_1,\dots,u_k\in U$ and relations $U_1,\dots,U_\ell$ with $U_i\subseteq U^{\mathrm{ar}(X_i)}$ such that $\varphi(u_1,\dots,u_k,U_1,\dots,U_\ell)$ is true in $S$. We write $S\models\varphi(u_1,\dots,u_k,U_1,\dots,U_\ell)$ in this case.

Graphs can be modeled as $\{E\}$-structures with a symmetric interpretation of the binary relation $E$. Properties such as "is 3-colorable" can then be described by formulas as:

$$\varphi_{\text{3col}}=\exists R\,\exists G\,\exists B\;\Big(\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\Big)\land\Big(\forall x\forall y\;E(x,y)\rightarrow\neg\big((R(x)\land R(y))\lor(G(x)\land G(y))\lor(B(x)\land B(y))\big)\Big).$$

A graph is a model of $\varphi_{\text{3col}}$ exactly if it is 3-colorable. We write $\dot\varphi$ whenever a more refined version of $\varphi$ will be given later on.

The model-checking problem asks, given a logical structure $S$ and a formula $\varphi$, if $S\models\varphi$ holds. A model-checker is a program that solves this problem and outputs an assignment to the free and bound variables of $\varphi$ if $S\models\varphi$ holds.

## 3 An Interface for Dynamic Programming on Tree Decompositions

It will be convenient to recall a classical viewpoint of dynamic programming on tree decompositions to illustrate why our interface is designed the way it is. We will do so by the guiding example of 3-coloring: Is it possible to color the vertices of a given graph with three colors such that adjacent vertices never share the same color? Intuitively, a dynamic program for 3-coloring will work bottom-up on a very nice tree decomposition and manage a set of possible colorings per node. Whenever a vertex is introduced, the program "guesses" a color for this vertex; if a vertex is forgotten, we remove it from the bag and identify configurations that thereby become equal; for join bags we keep exactly the configurations that are present in both children; and for edge bags we reject colorings in which both endpoints of the introduced edge have the same color. To formalize this vague algorithmic description, we view it from the perspective of automata theory.

### 3.1 The Tree Automaton Perspective

Classically, dynamic programs on tree decompositions are described in terms of tree automata [13]. Recall that in a very nice tree decomposition the tree $T$ is rooted and binary; we assume that the children of every node are ordered. The mapping $\iota$ can then be seen as a function that maps the nodes of $T$ to symbols from some alphabet $\Sigma$. A naïve approach to manage the bags would yield a huge alphabet (depending on the size of the graph). We thus define the so-called tree-index, which is a map $\mathrm{idx}\colon V\to\{0,\dots,k\}$ (where $k$ is the maximum bag size) such that no two vertices that appear in the same bag share a common tree-index. The existence of such an index follows directly from the property that every vertex is forgotten exactly once: We can simply traverse $T$ from the root to the leaves and assign a free index to a vertex $v$ when it is forgotten, and release the used index once we reach the introduce bag for $v$. The symbols of $\Sigma$ then only contain the information for which tree-index there is a vertex in the bag. From a theoretician's perspective this means that $|\Sigma|$ depends only on the treewidth; from a programmer's perspective the tree-index makes it much easier to manage the data structures used by the dynamic program.

###### Definition (Tree Automaton).

A nondeterministic bottom-up tree automaton is a tuple $A=(Q,\Sigma,\Delta,F)$ where $Q$ is a set of states with a subset $F\subseteq Q$ of accepting states, $\Sigma$ is an alphabet, and $\Delta\subseteq(Q\cup\{\bot\})\times(Q\cup\{\bot\})\times\Sigma\times Q$ is a transition relation in which $\bot\notin Q$ is a special symbol to treat nodes with fewer than two children. The automaton is deterministic if for every pair $p,q\in Q\cup\{\bot\}$ and every $\sigma\in\Sigma$ there is exactly one $r\in Q$ with $(p,q,\sigma,r)\in\Delta$.

###### Definition (Computation of a Tree Automaton).

The computation of a tree automaton $A=(Q,\Sigma,\Delta,F)$ on a labeled tree $(T,\lambda)$ with $\lambda\colon V(T)\to\Sigma$ and root $r$ is an assignment $q\colon V(T)\to Q$ such that for all $n\in V(T)$ we have (1) $(q(x),q(y),\lambda(n),q(n))\in\Delta$ if $n$ has two children $x$ and $y$; (2) $(q(x),\bot,\lambda(n),q(n))\in\Delta$ or $(\bot,q(x),\lambda(n),q(n))\in\Delta$ if $n$ has one child $x$; (3) $(\bot,\bot,\lambda(n),q(n))\in\Delta$ if $n$ is a leaf. The computation is accepting if $q(r)\in F$.

#### Simulating Tree Automata

A dynamic program for a decision problem can be formulated as a nondeterministic tree automaton that works on the decomposition; see the left side of Figure 1 for a detailed example. Observe that a nondeterministic tree automaton will process a labeled tree with $n$ nodes in time $O(n)$. When we simulate such an automaton deterministically, one might think that a running time of the form $O(2^{|Q|}\cdot n)$ is sufficient, as the automaton could be in any potential subset of the states at some node of the tree. However, there is a pitfall: For every node we have to compute the set of potential states of the automaton depending on the sets of potential states of the children of that node, leading to a quadratic dependency on the size of these sets. This can be avoided for transitions of the form $(q,\bot,\sigma,p)$, $(\bot,q,\sigma,p)$, and $(\bot,\bot,\sigma,p)$, as we can collect the potential successors of every state of the child and compute the new set of states in time linear in the cardinality of the set. However, transitions of the form $(p,q,\sigma,r)$ with $p,q\in Q$ are difficult, as we now have to merge two sets of states. In detail, let $n$ be a node with children $x$ and $y$, and let $S_x$ and $S_y$ be the sets of potential states in which the automaton may eventually be at these nodes. To determine $S_n$ we have to check for every $p\in S_x$ and every $q\in S_y$ whether there is an $r\in Q$ such that $(p,q,\lambda(n),r)\in\Delta$. Note that the number of states can be quite large even for moderately sized parameters $k$, as $Q$ is typically of size $2^{\Omega(k)}$, and we will thus try to avoid this quadratic blow-up.

###### Observation 3.1.

A tree automaton can be simulated in time $O(4^{|Q|}\cdot n)$.

Unfortunately, the quadratic factor in the simulation cannot be avoided in general, as the automaton may very well contain a transition $(p,q,\sigma,r)$ for all possible pairs of states $p$ and $q$. However, there are some special cases in which we can circumnavigate the increase in the running time.

###### Definition (Symmetric Tree Automaton).

A symmetric nondeterministic bottom-up tree automaton is a nondeterministic bottom-up tree automaton $A=(Q,\Sigma,\Delta,F)$ in which all transitions $(p,q,\sigma,r)\in\Delta$ satisfy either $p=\bot$, $q=\bot$, or $p=q$.

Assume as before that we wish to compute the set of potential states $S_n$ for a node $n$ with children $x$ and $y$. Observe that in a symmetric tree automaton it is sufficient to consider the set $S_x\cap S_y$, and that the intersection of two sets can be computed in linear time if we take some care in the design of the underlying data structures.

###### Observation 3.2.

A symmetric tree automaton can be simulated in time $O(2^{|Q|}\cdot n)$.

The right side of Figure 1 illustrates the deterministic simulation of a symmetric tree automaton. The massive time difference between simulating general tree automata and symmetric tree automata significantly influenced the design of the algorithms in Section 4, in which we try to construct an automaton that (1) is "as symmetric as possible" and (2) allows us to take advantage of the "symmetric parts" even if the automaton is not completely symmetric.
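In code, the gain from symmetry is immediate: the join of two state sets collapses to a set intersection. The following minimal sketch illustrates this under the assumption that states are stored in hash sets; it is not Jdrasil's actual implementation.

```java
import java.util.HashSet;
import java.util.Set;

public class SymmetricJoin {
    // For a symmetric tree automaton, the states reachable at a join node
    // are exactly those reachable at *both* children, so a set intersection
    // suffices -- no pairwise comparison of states is needed.
    static <Q> Set<Q> join(Set<Q> left, Set<Q> right) {
        Set<Q> result = new HashSet<>(left);
        result.retainAll(right); // expected linear time with hashing
        return result;
    }

    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Set.of(1, 2, 3));
        Set<Integer> b = new HashSet<>(Set.of(2, 3, 4));
        System.out.println(join(a, b)); // the common states {2, 3}
    }
}
```

With hashing, the intersection costs expected linear time in the size of the smaller set, which is exactly the advantage exploited by symmetric joins.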

### 3.2 The Interface

We introduce a simple Java interface to our library Jdrasil, which was originally developed for the computation of tree decompositions only. The interface is built around two classes: StateVectorFactory and StateVector. The only job of the factory is to generate StateVector objects for the leaves of the tree decomposition, or in the terms of the previous section: "to define the initial states of the tree automaton". The StateVector class is meant to model a vector of potential states in which the nondeterministic tree automaton is at a specific node of the tree decomposition. Our interface does not define at all what a "state" is, or how a collection of states is managed (although most of the time, it will be a set). The only thing the interface requests a user to implement is the behaviour of the tree automaton when it reaches a node of the tree decomposition, i.e., given a StateVector (for some unknown node in the tree decomposition) and the information that the automaton reaches a certain node, what does the StateVector for this node look like? To this end, the interface contains the methods shown in Listing 1.
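The following sketch shows roughly what such an interface could look like; the method names and signatures here are illustrative assumptions rather than Jdrasil's exact API, and the trivial implementation only demonstrates how the callbacks compose.

```java
import java.util.Set;

public class InterfaceSketch {

    // Sketch of the factory: produces the initial states at the leaves.
    interface StateVectorFactory<T> {
        StateVector<T> createStateVectorForLeaf();
    }

    // Sketch of the state vector: one callback per node type of a
    // very nice tree decomposition.
    interface StateVector<T> {
        StateVector<T> introduce(T v, Set<T> bag);              // vertex v enters the bag
        StateVector<T> forget(T v, Set<T> bag);                 // vertex v leaves the bag
        StateVector<T> edge(T v, T w, Set<T> bag);              // edge {v,w} is introduced
        StateVector<T> join(StateVector<T> other, Set<T> bag);  // two subtrees meet
    }

    // A trivial implementation that merely counts the current bag size,
    // just enough to show how the callbacks chain together.
    static final class CountingVector implements StateVector<Integer> {
        final int size;
        CountingVector(int size) { this.size = size; }
        public StateVector<Integer> introduce(Integer v, Set<Integer> bag) { return new CountingVector(size + 1); }
        public StateVector<Integer> forget(Integer v, Set<Integer> bag) { return new CountingVector(size - 1); }
        public StateVector<Integer> edge(Integer v, Integer w, Set<Integer> bag) { return this; }
        public StateVector<Integer> join(StateVector<Integer> other, Set<Integer> bag) { return this; }
    }

    public static void main(String[] args) {
        StateVector<Integer> s = new CountingVector(0);
        s = s.introduce(1, Set.of(1)).introduce(2, Set.of(1, 2)).forget(1, Set.of(2));
        System.out.println(((CountingVector) s).size); // prints 1
    }
}
```

The key design choice is that the user never touches the tree itself: the library drives the traversal and calls the four methods in the order dictated by the decomposition.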

This already completes the description of the interface; everything else is done by Jdrasil. In detail, given a graph and an implementation of the interface, Jdrasil will compute a tree decomposition (see [6] for the concrete algorithms used by Jdrasil), transform this decomposition into a very nice tree decomposition, potentially optimize the tree decomposition for the following dynamic program, and finally traverse the tree decomposition and simulate the tree automaton described by the implementation of the interface. The result of this procedure is the StateVector object assigned to the root of the tree decomposition.

### 3.3 Example: 3-Coloring

Let us illustrate the usage of the interface with our running example of 3-coloring. A State of the automaton can be modeled as a simple integer array that stores a color (an integer) for every vertex in the bag. A StateVector stores a set of State objects, i.e., essentially a set of integer arrays. Introducing a vertex $v$ to a StateVector therefore means that three duplicates of each stored state have to be created, and in each duplicate a different color has to be assigned to $v$. Listing 2 illustrates how this operation could be realized in Java.
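A hedged sketch of such an introduce operation follows (this is illustrative, not the reference implementation from Listing 2; in particular, a raw `HashSet` over `int[]` compares arrays by identity, so a real State class would override `equals` and `hashCode` to deduplicate).

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class IntroduceSketch {
    // Introduce the vertex with tree-index idx: every stored coloring
    // spawns three successors, one per color 1..3 (0 means "not in the bag").
    static Set<int[]> introduce(Set<int[]> states, int idx) {
        Set<int[]> result = new HashSet<>();
        for (int[] state : states) {
            for (int color = 1; color <= 3; color++) {
                int[] copy = Arrays.copyOf(state, state.length);
                copy[idx] = color;
                result.add(copy); // all three copies are pairwise distinct
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Set<int[]> states = new HashSet<>();
        states.add(new int[4]); // the single "empty" state of a leaf
        System.out.println(introduce(states, 0).size()); // prints 3
    }
}
```

Note how the tree-index keeps the arrays small: their length is bounded by the width of the decomposition, not by the size of the graph.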

The three other methods can be implemented in a very similar fashion: in the forget-method we reset the color of $v$; in the edge-method we remove states in which both endpoints of the edge have the same color; and in the join-method we compute the intersection of the state sets of both StateVector objects. Note that when we forget a vertex $v$, multiple states may become identical; this is handled here by the implementation of the Java Set class, which takes care of duplicates automatically.
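For instance, the edge-method can be sketched as a simple filter (again an illustrative sketch, with tree-indices standing in for the endpoints):

```java
import java.util.HashSet;
import java.util.Set;

public class EdgeSketch {
    // Reject every coloring in which both endpoints of the new edge
    // (given by their tree-indices) received the same color.
    static Set<int[]> edge(Set<int[]> states, int idxV, int idxW) {
        Set<int[]> result = new HashSet<>();
        for (int[] state : states)
            if (state[idxV] != state[idxW]) result.add(state);
        return result;
    }

    public static void main(String[] args) {
        Set<int[]> states = new HashSet<>();
        states.add(new int[]{1, 1}); // invalid once the edge arrives
        states.add(new int[]{1, 2}); // still a proper coloring
        System.out.println(edge(states, 0, 1).size()); // prints 1
    }
}
```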

A reference implementation of this 3-coloring solver is publicly available [4], and a detailed description of it can be found in the manual of Jdrasil [5]. Note that this implementation is only meant to illustrate the interface and that we did not make any effort to optimize it. Nevertheless, this very simple implementation (the part of the program that is responsible for the dynamic program only contains about 120 lines of structured Java-code) performs surprisingly well, as the experiments in Section 5 indicate.

## 4 A Lightweight Model-Checker for a Small MSO-Fragment

Experiments with the coloring solver of the previous section show a huge performance difference between general solvers such as D-Flat and Sequoia and a concrete implementation of a tree automaton for a specific problem (see Section 5). This is not necessarily surprising, as a general solver needs to keep track of far more information. In fact, an MSO model-checker cannot (unless P = NP) run in time $f(|\varphi|)\cdot\mathrm{poly}(n)$ for any elementary function $f$ [14]. On the other hand, it is in general not clear what the concrete running time of such a solver is for a concrete formula or problem (see e.g. [16] for a sophisticated analysis of some running times in Sequoia). We seek to close this gap between (slow) general solvers and (fast) concrete algorithms. Our approach is to concentrate on a small fragment of MSO, which is powerful enough to express many natural problems, but restricted enough to allow model-checking in time that matches or is close to the running time of a concrete algorithm for the problem. As a bonus, we will be able to derive upper bounds on the running time of the model-checker directly from the syntax of the input formula.

Based on the interface of Jdrasil, we have implemented a publicly available prototype called Jatatosk [3]. In Section 5, we perform various experiments on different problems on multiple sets of graphs. It turns out that Jatatosk is competitive against the state-of-the-art solvers D-Flat and Sequoia. Arguably these two programs solve a more general problem and a direct comparison is not entirely fair. However, the experiments do reveal that it seems very promising to focus on smaller fragments of MSO (or perhaps any other description language) in the design of treewidth based solvers.

### 4.1 Description of the Fragment

We only consider vocabularies that contain the binary relation $E$, and we only consider $\tau$-structures with a symmetric interpretation of $E$, i.e., we only consider structures that contain an undirected graph (but may also contain further relations). The fragment of MSO that we consider is constituted by formulas of the form $\varphi=\exists X_1\dots\exists X_k\,\bigwedge_{i=1}^{m}\psi_i$, where the $X_j$ are second-order variables and the $\psi_i$ are first-order formulas of the form

$$\psi_i=\forall x_1\dots\forall x_{n_i}\,\chi_i\quad\text{or}\quad\psi_i=\exists x_1\dots\exists x_{n_i}\,\chi_i.$$

Here, the $\chi_i$ are quantifier-free first-order formulas in canonical normal form. It is easy to see that this fragment is already powerful enough to encode many classical problems such as 3-coloring ($\varphi_{\text{3col}}$ from the introduction is part of the fragment), or vertex-cover (we will discuss how to handle optimization in Section 4.4): $\varphi_{\mathrm{vc}}(S)=\forall x\forall y\;E(x,y)\rightarrow(S(x)\lor S(y))$.

### 4.2 A Syntactic Extension of the Fragment

Many interesting properties, such as connectivity, can easily be expressed in MSO, but not directly in the fragment that we study. Nevertheless, a lot of these properties can directly be checked by a model-checker if it "knows" what kind of properties it actually checks. We present a syntactic extension of our MSO-fragment which captures such properties. The extension consists of three new second-order quantifiers that can be used in place of the plain quantifiers $\exists X_i$.

The first extension is a partition quantifier, which quantifies over partitions of the universe:

$$\exists(X_1\uplus\dots\uplus X_p)\,\psi\;\equiv\;\exists X_1\dots\exists X_p\,\Big(\forall x\,\bigvee_{i=1}^{p}X_i(x)\Big)\land\Big(\forall x\,\bigwedge_{i\neq j}\neg\big(X_i(x)\land X_j(x)\big)\Big)\land\psi.$$

This quantifier has two advantages. First, formulas like $\varphi_{\text{3col}}$ can be simplified to

$$\dot\varphi_{\text{3col}}=\exists(R\uplus G\uplus B)\,\forall x\forall y\;E(x,y)\rightarrow\neg\big((R(x)\land R(y))\lor(G(x)\land G(y))\lor(B(x)\land B(y))\big),$$

and second, the model-checking problem for them can be solved more efficiently: the solver directly "knows" that a vertex must be added to exactly one of the sets.

We further introduce two quantifiers that work with respect to the symmetric relation $E$ (recall that we only consider structures that contain such a relation). The quantifier $\exists^{\mathrm{conn}}X$ guesses a set $X$ that is connected with respect to $E$ (in graph-theoretic terms), i.e., it quantifies over connected subgraphs. The quantifier $\exists^{\mathrm{forest}}F$ guesses a set $F$ that is acyclic with respect to $E$ (again in graph-theoretic terms), i.e., it quantifies over subgraphs that are forests. These quantifiers are quite powerful and allow us, for instance, to express that the graph induced by $X$ contains a triangle as a minor:

$$\exists^{\mathrm{conn}}X_1\,\exists^{\mathrm{conn}}X_2\,\exists^{\mathrm{conn}}X_3\;\Big(\forall x\,\bigwedge_{i\neq j}\neg\big(X_i(x)\land X_j(x)\big)\Big)\land\Big(\bigwedge_{i\neq j}\exists x\exists y\;E(x,y)\land X_i(x)\land X_j(y)\Big).$$

We can also express problems that usually require more involved formulas in a very natural way. For instance, the feedback-vertex-set problem can be described by the following formula (again, optimization will be handled in Section 4.4): $\varphi_{\mathrm{fvs}}(S)=\exists^{\mathrm{forest}}F\,\forall x\;S(x)\lor F(x)$.

### 4.3 Description of the Model-Checker

We describe our model-checker in terms of a nondeterministic tree automaton that works on a tree decomposition of the graph induced by the relation $E$ (note that, in contrast to other approaches in the literature, we do not work on the Gaifman graph). We define every state of the automaton as a bit-vector, and we stipulate that the initial state at every leaf is the zero-vector. For every quantifier and subformula, there is some area in the bit-vector reserved for that quantifier or subformula, and we describe how state transitions affect these bits. The "algorithmic idea" behind the implementation of these transitions is not new, and a reader familiar with folklore dynamic programs on tree decompositions (for instance for vertex-cover or steiner-tree) will probably recognize them. An overview of common techniques can be found in the standard textbooks [9, 13].

#### The Partition Quantifier

We start with a detailed description of the partition quantifier $\exists(X_1\uplus\dots\uplus X_p)$ (we do not implement an additional quantifier $\exists X$, as we can easily state $\exists(X\uplus\bar X)$): Let $k$ be the maximum bag-size of the tree decomposition. We reserve $k\cdot\lceil\log_2(p+1)\rceil$ bits in the state description, where each block of length $\lceil\log_2(p+1)\rceil$ indicates in which set $X_i$ the corresponding element of the bag is. On an introduce-bag (say for a vertex $v$), the nondeterministic automaton guesses an index $i\in\{1,\dots,p\}$ and sets the bits that are associated with the tree-index of $v$ to $i$. Correspondingly, these bits are cleared when the automaton reaches a forget-bag. As the partition is independent of any edges, an edge-bag does not change any of the bits reserved for the partition quantifier. Finally, on join-bags we may only join states that are identical on the bits describing the partition (as otherwise the vertices of the bag would be in different partitions), meaning this transition is symmetric with respect to these bits (in the terms of Section 3.1).
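The bit manipulation behind this description can be sketched as follows; the block layout (one fixed-width block per bag slot, part index 0 meaning "unused") is an assumption for illustration, not necessarily Jatatosk's exact encoding.

```java
public class PartitionBits {
    // Each bag slot (tree-index) owns a block of `width` bits storing the
    // 1-based index of the part the vertex was guessed into (0 = unused).
    static long setPart(long state, int slot, int width, int part) {
        long mask = ((1L << width) - 1) << (slot * width);
        return (state & ~mask) | ((long) part << (slot * width));
    }

    static int getPart(long state, int slot, int width) {
        return (int) ((state >>> (slot * width)) & ((1L << width) - 1));
    }

    public static void main(String[] args) {
        int width = 2;                        // enough for 3 parts, e.g. R, G, B
        long state = 0L;                      // the zero-vector of a leaf
        state = setPart(state, 0, width, 2);  // introduce: guess part 2 for slot 0
        state = setPart(state, 3, width, 1);  // another vertex goes into part 1
        state = setPart(state, 0, width, 0);  // forget: clear the bits of slot 0
        System.out.println(getPart(state, 3, width)); // prints 1
        System.out.println(getPart(state, 0, width)); // prints 0
    }
}
```

Packing the whole partition information of a bag into a single machine word is what makes the join at partition bits a cheap equality test.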

#### The Connected Quantifier

The next quantifier we describe is $\exists^{\mathrm{conn}}X$, which has to overcome the difficulty that an introduced vertex may not be connected to the rest of the bag at the moment it is introduced, but may become connected to it when further vertices "arrive". The solution to this dilemma is to manage a partition of the bag into connected components $P_1,\dots,P_k$, for which we reserve $k\cdot\lceil\log_2(k+1)\rceil$ bits in the state description. Whenever a vertex $v$ is introduced, the automaton either guesses that $v$ is not contained in $X$ and clears the corresponding bits, or it guesses that $v\in X$ and assigns a component to $v$. Since $v$ is isolated in the bag at the moment of its introduction (recall that we work on a very nice tree decomposition), it requires its own component and is therefore assigned to the smallest empty partition $P_i$. When a vertex $v$ is forgotten, there are four possible scenarios: (1) $v\notin X$, then the corresponding bits are already cleared and nothing happens; (2) $v\in P_i$ with $|P_i|>1$, then $v$ is just removed and the corresponding bits are cleared; (3) $v\in P_i$ with $|P_i|=1$ and there are other vertices in the bag that are contained in $X$, then the automaton rejects the configuration, as $v$ is the last vertex of $P_i$ and $P_i$ cannot become connected to any other component anymore; (4) $v$ is the last vertex of the bag that is contained in $X$, then the connected component is "done", the corresponding bits are cleared, and one additional bit is set to indicate that the connected component cannot be extended anymore. When an edge $\{v,w\}$ is introduced, components might need to be merged. Assume $v\in P_i$ and $w\in P_j$ with $i\neq j$ (otherwise, an edge-bag does not change the state); then we essentially perform a classical union-operation from the well-known union-find data structure: we assign all vertices that are assigned to $P_j$ to $P_i$. Finally, at a join-bag we may join two states that agree locally on the vertices that are in $X$ (i.e., they have assigned the same vertices of the bag to $X$); however, they do not have to agree in the way the vertices are assigned to the components $P_i$ (in fact, there does not have to be an isomorphism between these assignments). Therefore, the transition at a join-bag has to connect the corresponding components analogously to the edge-bags; in the terms of Section 3.1 this transition is not symmetric. The descriptions of the remaining quantifiers and subformulas are very similar and are presented in Appendix A.
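The union-operation performed at edge-bags can be sketched on a bag-local component labeling (an illustrative array representation; the bit-packed version works analogously):

```java
import java.util.Arrays;

public class ComponentMerge {
    // comp[i] is the component label of the bag vertex with tree-index i,
    // or 0 if that vertex was not guessed into X. Introducing the edge
    // {v, w} with different labels relabels all of w's component to v's,
    // i.e. a union operation on the bag-local partition.
    static int[] mergeOnEdge(int[] comp, int idxV, int idxW) {
        if (comp[idxV] == 0 || comp[idxW] == 0 || comp[idxV] == comp[idxW])
            return comp; // at least one endpoint outside X, or already merged
        int[] merged = comp.clone();
        int from = comp[idxW], to = comp[idxV];
        for (int i = 0; i < merged.length; i++)
            if (merged[i] == from) merged[i] = to;
        return merged;
    }

    public static void main(String[] args) {
        int[] comp = {1, 2, 2, 0}; // two components, one vertex outside X
        System.out.println(Arrays.toString(mergeOnEdge(comp, 0, 1))); // [1, 1, 1, 0]
    }
}
```

Since the labeling lives only on the bag, the relabeling loop runs over at most $k$ entries, so an edge-bag transition stays cheap.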

### 4.4 Extending the Model-Checker to Optimization Problems

As the example formulas from the previous section already indicate, performing model-checking alone does not suffice to express many natural problems. In fact, every graph is a model of the formula $\varphi_{\mathrm{vc}}(S)$ if $S$ simply contains all vertices. It is therefore a natural extension to consider an optimization version of the model-checking problem, which is usually formulated as follows [9, 13]: Given a logical structure $S$ with universe $U$, a formula $\varphi(X_1,\dots,X_\ell)$ of the MSO-fragment defined in the previous section with free unary second-order variables $X_1,\dots,X_\ell$, and weight functions $w_1,\dots,w_\ell$ with $w_i\colon U\to\mathbb{Z}$; find sets $U_1,\dots,U_\ell\subseteq U$ with $S\models\varphi(U_1,\dots,U_\ell)$ such that $\sum_{i=1}^{\ell}\sum_{u\in U_i}w_i(u)$ is minimized, or conclude that $S$ is not a model of $\varphi$ for any assignment of the free variables. We can now correctly express the (actually weighted) optimization version of vertex-cover as follows:

$$\varphi_{\mathrm{vc}}(S)=\forall x\forall y\;E(x,y)\rightarrow(S(x)\lor S(y)).$$

Similarly, we can describe the optimization version of dominating-set if we assume the input graph does not have isolated vertices (or is reflexive), and we can also fix the formula $\dot\varphi_{\mathrm{fvs}}$:

$$\varphi_{\mathrm{ds}}(S)=\forall x\exists y\;S(x)\lor\big(E(x,y)\land S(y)\big),\qquad\dot\varphi_{\mathrm{fvs}}(S)=\exists^{\mathrm{forest}}F\,\forall x\;S(x)\lor F(x).$$

We can also maximize the term $\sum_{i=1}^{\ell}\sum_{u\in U_i}w_i(u)$ by multiplying all weights with $-1$ and, thus, express problems such as independent-set:

$$\varphi_{\mathrm{is}}(S)=\forall x\forall y\;E(x,y)\rightarrow\neg\big(S(x)\land S(y)\big).$$

The implementation of such an optimization is straightforward: essentially, there is a partition quantifier for every free variable $X_i$ that partitions the universe into $X_i$ and its complement. We assign a current value to every state of the automaton, which is adapted whenever elements are "added" to some of the free variables at introduce nodes. Note that, since we optimize an affine function, this does not increase the state space: even if multiple computational paths lead to the same state with different values at some node of the tree, it is well defined which of these values is the optimal one. Therefore, the cost of optimization lies only in the partition quantifier, i.e., we pay with $k$ bits in the state description of the automaton per free variable, independently of the weights.
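The observation that an affine objective does not blow up the state space translates into a simple rule: when two computation paths reach the same state, keep only the better value. A minimal sketch (the map-based bookkeeping is an assumption for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class OptimizingStates {
    // Because the objective is affine, every state needs only one value:
    // when two computation paths reach the same state, the smaller value
    // dominates, so each state maps to the best value seen so far.
    static <Q> void update(Map<Q, Integer> states, Q state, int value) {
        states.merge(state, value, Math::min);
    }

    public static void main(String[] args) {
        Map<String, Integer> states = new HashMap<>();
        update(states, "q", 7); // first path reaching state q
        update(states, "q", 4); // cheaper path reaching the same state
        update(states, "q", 9); // more expensive path, discarded
        System.out.println(states.get("q")); // prints 4
    }
}
```

Maximization works the same way with `Math::max`, matching the weight-negation trick described above.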

### 4.5 Handling Symmetric and Non-Symmetric Joins

In Section 4.3 we have defined the states of our automaton with respect to a formula; the left side of Table 1 gives an overview of the number of bits we require for the different parts of the formula. Let $s(\varphi,k)$ be the number of bits that we have to reserve for a formula $\varphi$ and a tree decomposition of maximum bag size $k$, i.e., the sum over the required bits of each part of the formula. By Observation 3.1 this implies that we can simulate the automaton (and hence, solve the model-checking problem) in time $\tilde O(4^{s(\varphi,k)}\cdot n)$; or by Observation 3.2 in time $\tilde O(2^{s(\varphi,k)}\cdot n)$ if the automaton is symmetric (the $\tilde O$-notation suppresses polynomial factors). Unfortunately, this is not always the case; in fact, only the partition quantifier, the bits needed to optimize over free variables, as well as the formulas that do not require any bits yield a symmetric tree automaton. That means the simulation is wasteful if we consider a mixed formula (for instance, one that contains a partition and a connected quantifier). To overcome this issue, we partition the bits of the state description into two parts: first the "symmetric" bits of the partition quantifiers and the bits required for optimization, and second the "asymmetric" bits of all other elements of the formula. Let $s_{\mathrm{sym}}(\varphi,k)$ and $s_{\mathrm{asym}}(\varphi,k)$ be defined analogously to $s(\varphi,k)$. We implement the join of states as in the following lemma, allowing us to deduce the running time of the model-checker for concrete formulas. The right side of Table 1 provides an overview for the formulas presented here.

###### Lemma.

Let $n$ be a node of $T$ with children $x$ and $y$, and let $S_x$ and $S_y$ be the sets of states in which the automaton may be at $x$ and $y$. Then the set of states $S_n$ in which the automaton may be at node $n$ can be computed in time $\tilde O(2^{s_{\mathrm{sym}}(\varphi,k)}\cdot 4^{s_{\mathrm{asym}}(\varphi,k)})$.

###### Proof.

To compute $S_n$, we first split $S_x$ into buckets $S_x^1,\dots,S_x^q$ such that all elements in one bucket share the same "symmetric bits". This can be done in time $\tilde O(|S_x|)$ using bucket-sort. Note that we have $q\leq 2^{s_{\mathrm{sym}}(\varphi,k)}$ and $|S_x^i|\leq 2^{s_{\mathrm{asym}}(\varphi,k)}$. With the same technique we identify for every $S_x^i$ the corresponding bucket $S_y^i$ of elements of $S_y$ that share these symmetric bits. Finally, we compare every element of $S_x^i$ with the elements in $S_y^i$ to identify those pairs for which there is a transition in the automaton. This yields a running time of $\tilde O\big(\sum_{i=1}^{q}|S_x^i|\cdot|S_y^i|\big)\leq\tilde O(2^{s_{\mathrm{sym}}(\varphi,k)}\cdot 4^{s_{\mathrm{asym}}(\varphi,k)})$. ∎
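The bucket-based join from this proof can be sketched as follows (an illustrative sketch: states are represented as pairs of symmetric and asymmetric bits, and the actual transition check, which depends on the formula, is replaced by a placeholder combination of the asymmetric bits):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BucketJoin {
    // States are {symmetricBits, asymmetricBits} pairs. Group the right
    // child's states by their symmetric bits; only states in matching
    // buckets can possibly join, so pairwise work is confined per bucket.
    static List<long[]> join(List<long[]> left, List<long[]> right) {
        Map<Long, List<long[]>> buckets = new HashMap<>();
        for (long[] s : right)
            buckets.computeIfAbsent(s[0], k -> new ArrayList<>()).add(s);
        List<long[]> result = new ArrayList<>();
        for (long[] s : left)
            for (long[] t : buckets.getOrDefault(s[0], List.of()))
                // placeholder for the formula-specific transition check; here
                // we simply combine the asymmetric bits with a bitwise or
                result.add(new long[]{s[0], s[1] | t[1]});
        return result;
    }

    public static void main(String[] args) {
        List<long[]> left = List.of(new long[]{1, 4}, new long[]{2, 8});
        List<long[]> right = List.of(new long[]{1, 2}, new long[]{3, 1});
        System.out.println(join(left, right).size()); // prints 1, as only the
        // two states sharing symmetric bits 1 are combined
    }
}
```

As noted in Section 5, Jatatosk realizes this grouping with hashing rather than bucket-sort, trading the worst-case guarantee for practical speed.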

*Table 1: Overview of the number of bits required in the state description for the different quantifiers and formulas (left), and of the resulting running times for the formulas presented in this section (right).*

## 5 Applications and Experiments

In order to show the feasibility of our approach, we have performed experiments for widely investigated graph problems: 3-coloring, vertex-cover, dominating-set, independent-set, and feedback-vertex-set. All experiments were performed on an Intel Core processor containing four cores of 3.2 GHz each and 8 gigabytes of RAM. The machine runs Ubuntu 17.10. Jdrasil was used with Java 1.8, and both Sequoia and D-Flat were compiled with gcc 7.2. The implementation of Jatatosk uses hashing to realize Lemma 4.5, which has no constant-time worst-case guarantee but works well in practice. We use a data set that was assembled from three different sources and that contains graphs with 18 to 956 vertices and treewidth 3 to 13. The first source is a collection of publicly available transit graphs from GTFS-transit feeds [15] that was also used for experiments in [12], the second source are real-world instances collected in [2], and the last one are the publicly available graphs used in the PACE challenge [18] (we selected the ones with treewidth at most 11). For 3-coloring the results can be found in Experiment 1, and for the other problems in Appendix B.

| | D-Flat | Jdrasil-Coloring | Jatatosk | Sequoia |
|---|---|---|---|---|
| Average Time (s) | 478.19 | 36.52 | 42.63 | 714.73 |
| Standard Deviation | 733.90 | 77.8 | 81.82 | 866.34 |
| Median Time (s) | 3.5 | 21 | 24.5 | 20.5 |

## 6 Conclusion and Outlook

We investigated the practicability of dynamic programming on tree decompositions, which is arguably one of the cornerstones of parameterized complexity theory. We implemented a simple interface for such programs and demonstrated how it can be used to build a competitive graph coloring solver with just a few lines of code. We hope that the interface allows other researchers to implement and explore various dynamic programs on tree decompositions. The full power of such dynamic programs is well captured by Courcelle’s Theorem, which essentially states that there is an efficient version of such a program for every problem definable in monadic second-order logic. We took a step towards practice here as well, by implementing a “lightweight” version in the form of a model-checker for a small fragment of the logic. By clever syntactic extensions, the fragment turns out to be powerful enough to express many natural problems such as 3-coloring, feedback-vertex-set, and more.
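To illustrate the kind of interface described above, the following Java sketch captures a dynamic program on a nice tree decomposition purely by its update rules, instantiated for 3-coloring. This is a hypothetical sketch for illustration only, not Jdrasil's actual API; all names (`TreeDP`, `ThreeColoring`) and the state encoding (a map from bag vertices to colors) are our assumptions.

```java
import java.util.*;

// Hypothetical interface: a dynamic program on a nice tree
// decomposition is given purely by its update rules.
interface TreeDP<S> {
    Set<S> introduce(Set<S> states, int v);     // vertex v enters the bag
    Set<S> forget(Set<S> states, int v);        // vertex v leaves the bag
    Set<S> edge(Set<S> states, int u, int v);   // edge {u,v} is introduced
    Set<S> join(Set<S> left, Set<S> right);     // two subtrees are merged
}

// 3-coloring: a state maps each vertex of the current bag to a color.
class ThreeColoring implements TreeDP<Map<Integer, Integer>> {
    public Set<Map<Integer, Integer>> introduce(Set<Map<Integer, Integer>> states, int v) {
        Set<Map<Integer, Integer>> out = new HashSet<>();
        for (Map<Integer, Integer> s : states)
            for (int c = 0; c < 3; c++) {       // branch over the three colors
                Map<Integer, Integer> t = new HashMap<>(s);
                t.put(v, c);
                out.add(t);
            }
        return out;
    }
    public Set<Map<Integer, Integer>> forget(Set<Map<Integer, Integer>> states, int v) {
        Set<Map<Integer, Integer>> out = new HashSet<>();
        for (Map<Integer, Integer> s : states) {
            Map<Integer, Integer> t = new HashMap<>(s);
            t.remove(v);                        // v is now fully processed
            out.add(t);
        }
        return out;
    }
    public Set<Map<Integer, Integer>> edge(Set<Map<Integer, Integer>> states, int u, int v) {
        Set<Map<Integer, Integer>> out = new HashSet<>();
        for (Map<Integer, Integer> s : states)
            if (!s.get(u).equals(s.get(v)))     // endpoints must differ
                out.add(s);
        return out;
    }
    public Set<Map<Integer, Integer>> join(Set<Map<Integer, Integer>> left, Set<Map<Integer, Integer>> right) {
        Set<Map<Integer, Integer>> out = new HashSet<>(left);
        out.retainAll(right);                   // states must agree on the bag
        return out;
    }
}
```

With such an interface, the traversal of the decomposition and the bookkeeping of state sets can be handled once by the framework, so that a new problem only requires implementing the four update rules.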

## References

- [1] Michael Abseher, Bernhard Bliem, Günther Charwat, Frederico Dusberger, Markus Hecher, and Stefan Woltran. D-flat: progress report. DBAI, TU Wien, Tech. Rep. DBAI-TR-2014–86, 2014.
- [2] Michael Abseher, Frederico Dusberger, Nysret Musliu, and Stefan Woltran. Improving the efficiency of dynamic programming on tree decompositions via machine learning. In Proc. IJCAI, pages 275–282, 2015.
- [3] M. Bannach. Jatatosk. https://github.com/maxbannach/Jatatosk, 2018. [Online; accessed 22-04-2018].
- [4] M. Bannach. Jdrasil for Graph Coloring. https://github.com/maxbannach/Jdrasil-for-GraphColoring, 2018. [Online; accessed 22-04-2018].
- [5] M. Bannach, S. Berndt, and T. Ehlers. Jdrasil. https://github.com/maxbannach/Jdrasil, 2017. [Online; accessed 22-04-2018].
- [6] Max Bannach, Sebastian Berndt, and Thorsten Ehlers. Jdrasil: A modular library for computing tree decompositions. In 16th International Symposium on Experimental Algorithms, SEA 2017, June 21-23, 2017, London, UK, pages 28:1–28:21, 2017. doi:10.4230/LIPIcs.SEA.2017.28.
- [7] Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM Journal on Computing, 25(6):1305–1317, 1996.
- [8] Bruno Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and Computation, 85(1):12–75, 1990.
- [9] Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015. doi:10.1007/978-3-319-21275-3.
- [10] Reinhard Diestel. Graph Theory, 4th Edition, volume 173 of Graduate texts in mathematics. Springer, 2012.
- [11] M. R. Fellows. Parameterized complexity for practical computing. http://www.mrfellows.net/wordpress/wp-content/uploads/2017/11/FellowsToppforsk2017.pdf, 2018. [Online; accessed 22-04-2018].
- [12] Johannes Klaus Fichte, Neha Lodha, and Stefan Szeider. SAT-based local improvement for finding tree decompositions of small width. In Theory and Applications of Satisfiability Testing - SAT, pages 401–411, 2017.
- [13] J. Flum and M. Grohe. Parameterized Complexity Theory. Texts in Theoretical Computer Science. Springer, 2006. doi:10.1007/3-540-29953-X.
- [14] Markus Frick and Martin Grohe. The complexity of first-order and monadic second-order logic revisited. Annals of pure and applied logic, 130(1-3):3–31, 2004.
- [15] gtfs2graphs - A Transit Feed to Graph Format Converter. https://github.com/daajoe/gtfs2graphs. Accessed: 2018-04-20.
- [16] Joachim Kneis, Alexander Langer, and Peter Rossmanith. Courcelle’s theorem—a game-theoretic approach. Discrete Optimization, 8(4):568–594, 2011. doi:10.1016/j.disopt.2011.06.001.
- [17] Alexander Langer. Fast algorithms for decomposable graphs. PhD thesis, RWTH Aachen, 2013.
- [18] The Parameterized Algorithms and Computational Experiments Challenge (PACE). https://pacechallenge.wordpress.com/. Accessed: 2018-04-20.
- [19] Hisao Tamaki. Positive-instance driven dynamic programming for treewidth. In Proc. ESA, pages 68:1–68:13, 2017.

## Appendix A Technical Appendix: Description of the Fragment

#Bit: 
- Introduce: As for .
- Forget: Just clear the corresponding bits.
- Edge: As for , but reject if two vertices of the same component are connected.
- Join: As for , but track whether the join introduces a cycle.

#Bit: 0
- Introduce: –
- Forget: –
- Edge: Reject if  is not satisfied for the vertices of the edge.
- Join: –

#Bit: 
- Introduce: –
- Forget: Reject if the bit corresponding to  is not set.
- Edge: Set the bit of  if  is satisfied.
- Join: Compute the logical-or of the bits of both states.

#Bit: 
- Introduce: Set the corresponding bit.
- Forget: If the corresponding bit is set, set the additional bit.
- Edge: If  is not satisfied, clear the corresponding bit.
- Join: Compute the logical-and of all but the last bit; for the last bit, use a logical-or.

#Bit: 1
- Introduce: –
- Forget: –
- Edge: Set the bit if  is satisfied.
- Join: Compute the logical-or of the bit in both states.

() #Bit: 0 (1)
- Introduce: Test if  is satisfied and reject if not (set the bit if so).
- Forget: –
- Edge: –
- Join: – (Compute the logical-or of the bit in both states.)
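Two of the join rules above can be made concrete as bit operations. The following Java snippet is an illustration with a made-up bit layout (the constant `LAST` and the class name `JoinRules` are our assumptions, not part of the described fragment): states are bit masks stored in a `long`.

```java
// Illustration (with a hypothetical bit layout) of two join rules
// from the fragment description above.
class JoinRules {
    static final int LAST = 7; // assumed position of the "additional" bit

    // "Compute the logical-or of the bits of both states."
    static long joinOr(long a, long b) {
        return a | b;
    }

    // "Compute the logical-and of all but the last bit,
    //  for the last bit use a logical-or."
    static long joinAndButLast(long a, long b) {
        long lastMask = 1L << LAST;
        return ((a & b) & ~lastMask) | ((a | b) & lastMask);
    }
}
```

Because each rule is a constant number of word-level bit operations, the cost of a join is dominated by enumerating compatible state pairs, not by combining individual states.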

## Appendix B Technical Appendix: Further Experiments

We repeat the experiments from Section 5 for further problems. We ran every solver for a maximum of 600 seconds on every instance of every problem. It can be seen that Jatatosk is competitive (though not superior) against its competitors: on many instances it is faster than the faster of the two, and its average running time is at most twice that of the corresponding fastest algorithm. Jatatosk outperforms the others for 3-coloring, but is outperformed by Sequoia for vertex-cover. The same holds for independent-set, although the difference is much smaller in this case. For dominating-set the situation is more complex, as Jatatosk outperforms the others on about half of the instances and is outperformed on the other half. Interestingly, the difference is quite high in both directions.

| | D-Flat | Jatatosk | Sequoia |
|---|---|---|---|
| Average Time (s) | 451.68 | 59.02 | 33.95 |
| Standard Deviation | 213.08 | 128.45 | 92.45 |
| Median Time (s) | 597.5 | 30 | 6 |

| | D-Flat | Jatatosk | Sequoia |
|---|---|---|---|
| Average Time (s) | 420.14 | 102.48 | 114.92 |
| Standard Deviation | 265.14 | 157.80 | 196.67 |
| Median Time (s) | 600 | 44.5 | 20.5 |

| | D-Flat | Jatatosk | Sequoia |
|---|---|---|---|
| Average Time (s) | 229.18 | 16.98 | 15.32 |
| Standard Deviation | 272.64 | 18.17 | 45.53 |
| Median Time (s) | 13 | 14 | 1 |

| | D-Flat | Jatatosk | Sequoia |
|---|---|---|---|
| Average Time (s) | 587.84 | 303.6 | 384.16 |
| Standard Deviation | 77.72 | 292.76 | 282.28 |
| Median Time (s) | 600 | 548 | 600 |