Computing Treewidth on the GPU
Abstract
We present a parallel algorithm for computing the treewidth of a graph on a GPU. We implement this algorithm in OpenCL, and experimentally evaluate its performance. Our algorithm is based on an $O^*(2^n)$ time algorithm that explores the elimination orderings of the graph using a Held-Karp like dynamic programming approach. We use Bloom filters to detect duplicate solutions.
GPU programming presents unique challenges and constraints, such as constraints on the use of memory and the need to limit branch divergence. We experiment with various optimizations to see if it is possible to work around these issues. We achieve a very large speed up (up to 77x) compared to running the same algorithm on the CPU.
Tom C. van der Zanden and Hans L. Bodlaender. Subject classification: G.2.2 Graph algorithms.
1 Introduction
Treewidth is a well-known graph parameter that measures how ‘treelike’ a graph is. The fact that many otherwise hard graph problems are linear time solvable on graphs of bounded treewidth [6] has been exploited in many theoretical and practical applications. For such applications, it is important to have efficient algorithms that, given a graph, determine the treewidth and find tree decompositions of optimal (or near-optimal) width.
The interest in practical algorithms to compute treewidth and tree decompositions is also illustrated by the fact that both the PACE 2016 and PACE 2017 challenges [12] included treewidth as one of the two challenge topics. Remarkably, while most tracks in the PACE 2016 challenge attracted several submissions [13], there were no submissions for the call for GPU-based programs for computing treewidth. Current sequential exact algorithms for treewidth are only practical when the treewidth is small (up to 4, see [17]), or when the graph is small (see [16, 4, 25, 14, 24]). As computing treewidth is NP-hard, an exponential growth of the running time is to be expected; unfortunately, the exact FPT algorithms that are known for treewidth are believed to be impractical; e.g., the algorithm of [3] has a running time of $2^{O(k^3)} \cdot n$. This creates the need for good parallel algorithms, as parallelism can help to significantly speed up the algorithms, and thus deal with larger graph sizes.
In this paper, we consider a practical parallel exact algorithm to compute the treewidth of a graph and a corresponding tree decomposition. The starting point of our algorithm is a sequential algorithm by Bodlaender et al. [4]. This algorithm exploits a characterization of treewidth in terms of the width of an elimination ordering, and gives a dynamic programming algorithm with a structure that is similar to the textbook Held-Karp algorithm for TSP [18].
Prior work on parallel algorithms for treewidth is limited to one paper, by Yuan [24], who implements a branch and bound algorithm for treewidth on a CPU with a (relatively) small number of cores. With the advent of relatively inexpensive consumer GPUs that offer more than an order of magnitude more computational power than their CPU counterparts, it is very interesting to explore how exact and fixedparameter algorithms can take advantage of the unique capabilities of GPUs. We take a first step in this direction, by exploring how treewidth can be computed on the GPU.
Our algorithm is based on the elimination ordering characterization of treewidth. Given a graph $G$, we may eliminate a vertex $v$ from $G$ by removing $v$ and turning its neighborhood into a clique, thus obtaining a new graph. One way to compute treewidth is to find an order in which to eliminate all the vertices of $G$, such that the maximum degree of each vertex (at the time it is eliminated) is minimized. This formulation is used by e.g. [16] to obtain a (worst-case) $O^*(n!)$ time algorithm. However, it is easy to obtain an $O^*(2^n)$ time algorithm by applying Held-Karp style dynamic programming, as first observed by Bodlaender et al. [4]: given a set $S \subseteq V$, eliminating the vertices in $S$ from $G$ will always result in the same intermediate graph, regardless of the order in which the vertices are eliminated (and thus, the order in which we eliminate $S$ only affects the degrees encountered during its elimination). This optimization is used in the algorithms of for instance [15] and [24].
We explore the elimination ordering space in a breadth-first manner. This enables efficient parallelization of the algorithm: during each iteration, a wavefront of states (consisting of the sets of vertices of size $i$ for which there is a feasible elimination order) is expanded to the wavefront of the next level, with each thread of the GPU taking a set $S$ and considering which candidate vertices of the graph can be added to $S$. Since multiple threads may end up generating the same state, we then use a Bloom filter to detect and remove these duplicates.
To reduce the number of states explored, we experiment with using the minor-min-width heuristic [16], for which we also provide a GPU implementation. Whereas normally this heuristic would be computed by operating on a copy of the graph, we instead compute it using only the original graph and a smaller auxiliary data structure, which may be more suitable for the GPU. We also experiment with several techniques unique to GPU programming, such as using shared/local memory (which can best be likened to the cache of a CPU) and rewriting nested loops into a single loop to attempt to improve parallelism.
We provide an experimental evaluation of our techniques, on a platform equipped with an Intel Core i7-6700 CPU (3.40GHz) with 32GB of RAM (4x8GB DDR4), and an NVIDIA GeForce GTX 1060 with 6GB GDDR5 memory (manufactured by Gigabyte, part number GV-N1060WF2OC-6GD). Our algorithm is implemented in OpenCL (and thus highly portable). We achieve a very large speedup compared to running the same algorithm on the CPU.
2 Preliminaries
Treewidth.
For a detailed description of treewidth and its characterization, we refer to [11]. Our algorithm is based on the $O^*(2^n)$ time algorithm of Bodlaender et al. [4]. Though the characterization in terms of tree decompositions is more common, we recall only the characterization in terms of elimination orderings that is used by this algorithm:
Let $G = (V, E)$ be a graph with vertices $v_1, \ldots, v_n$. An elimination ordering $\pi$ is a permutation of the vertices of $G$. The treewidth of $G$ is defined as $\min_\pi \max_{v \in V} |R_\pi(v)|$, where $R_\pi(v)$ is the set of vertices $w$ with $\pi^{-1}(w) > \pi^{-1}(v)$ that are reachable from $v$ by paths whose internal vertices $u$ all have $\pi^{-1}(u) < \pi^{-1}(v)$, i.e., $R_\pi(v)$ is the subset of vertices of $G$ that occur later in the ordering and are reachable from $v$ by paths whose internal vertices occur earlier.
An alternative view of this definition is that given a graph $G$, we can eliminate a vertex $v$ by removing it from the graph, and turning its neighborhood into a clique. The treewidth of a graph is at most $k$, if there exists an elimination order such that all vertices have degree at most $k$ at the time they are eliminated.
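To make this characterization concrete, here is a small Python sketch (our own illustration, not part of the paper's OpenCL implementation; all function names are ours) of vertex elimination and of the width of an elimination ordering. `treewidth_bruteforce` is the naive search over all $n!$ orderings, practical only for tiny graphs.

```python
from itertools import permutations

def eliminate(adj, v):
    """Eliminate v: remove it and turn its neighbourhood into a clique."""
    nbrs = adj[v]
    new = {}
    for u in adj:
        if u == v:
            continue
        if u in nbrs:
            new[u] = (adj[u] | nbrs) - {u, v}  # u gains v's other neighbours
        else:
            new[u] = adj[u] - {v}
    return new

def width(adj, order):
    """Maximum degree of each vertex at the moment it is eliminated."""
    g, w = adj, 0
    for v in order:
        w = max(w, len(g[v]))
        g = eliminate(g, v)
    return w

def treewidth_bruteforce(adj):
    """Treewidth = minimum width over all elimination orderings."""
    return min(width(adj, order) for order in permutations(adj))
```

Note that eliminating a fixed set of vertices yields the same graph regardless of the order (the key observation behind the Held-Karp style dynamic programming): for a 4-cycle `C4`, `eliminate(eliminate(C4, 0), 1)` equals `eliminate(eliminate(C4, 1), 0)`.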
GPU Terminology.
Parallelism on a GPU is achieved by executing many threads in parallel. These threads are grouped into warps of 32 threads. The 32 threads that make up a warp do not execute independently: they share the same program counter, and thus must always execute the same “line” of code (thus, if different threads need to execute different branches in the code, this execution is serialized; this phenomenon, called branch divergence, should be avoided). The unit that executes a single thread is called a CUDA core.
We used a GTX1060 GPU, which is based on the Pascal architecture [20]. The GTX1060 has 1280 CUDA cores, which are distributed over 10 Streaming Multiprocessors (SMs). Each SM thus has 128 CUDA cores, which can execute up to 4 warps of 32 threads simultaneously. However, a larger number of warps may be assigned to an SM, enabling the SM to switch between executing different warps, for instance to hide memory latency.
Each SM has 256KiB (a kibibyte is $2^{10}$ bytes) of register memory (which is the fastest, but which registers are addressed must be known at compile time, and thus for example dynamically indexing an array stored in register memory is not possible), 96KiB of shared memory (which can be accessed by all threads executing within the same thread block) and 48KiB of L1 cache.
Furthermore, we have approximately 6GB of global memory available which can be written to and read from by all threads, but is very slow (though this is partially alleviated by caching and latency hiding). Shared memory can, in the right circumstances, be read and written much faster, but is still significantly slower than register memory. Finally, there is also texture memory (which we do not use) and constant memory (which is a cached section of the global memory) that can be used to store constants that do not change over the kernel’s execution (we use constant memory to store the adjacency lists of the graph).
Shared memory resides physically closer to the SM than global memory, and it would thus make sense to call it “local” memory (in contrast to the more remote global memory). Indeed, OpenCL uses this terminology. However, NVIDIA/CUDA confusingly use “local memory” to indicate a portion of the global memory dedicated to a single thread.
3 The Algorithm
3.1 Computing Treewidth
Our algorithm works with an iterative deepening approach: for increasing values of $k$, it repeatedly runs an algorithm that tests whether the graph has treewidth at most $k$. This means that our algorithm is in practice much more efficient than the worst-case behavior shown by [4], since only a small portion of the $2^n$ possible subsets may be feasible for the target treewidth $k$. A similar approach (of solving the decision version of the problem for increasing values of $k$) was also used by Tamaki [22], who refers to it as positive-instance driven dynamic programming.
This algorithm lends itself very well to parallelization, since the subsets can be evaluated (mostly) independently in parallel. This comes at the cost of slightly reduced efficiency (in terms of the number of states expanded) compared to a branch and bound approach (e.g. [14, 24, 25]), since the states that are feasible for lower candidate treewidths are expanded more than once. However, even a branch and bound algorithm needs to expand all of the states with treewidth less than $k$ before it can conclude that treewidth $k$ is optimal, so the main advantage of branch and bound is that it can settle on a solution with treewidth $k$ without expanding all such solutions (of width $k$).
To test whether the graph has treewidth at most $k$, we consider subsets $S \subseteq V$ of increasing size, such that the vertices of $S$ can be eliminated in some order without eliminating a vertex of degree more than $k$. For each size, the algorithm starts with an input list (that initially contains just the empty set) and then forms an output list by, for each set $S$ in the input list, attempting to add every vertex $v \notin S$ to $S$, which is feasible only if the degree of $v$ in the graph that remains after eliminating the vertices in $S$ is not too large. This is tested using a depth-first search. Then, the input and output lists are swapped and the process is repeated. If after $n$ iterations the output list is not empty, we can conclude that the graph has treewidth at most $k$. Otherwise, we proceed to test for treewidth $k+1$. Pseudocode for this algorithm is given in Listing 3.1.
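The decision procedure can be sketched as follows (a Python rendering of the level-by-level search, written by us for illustration; the actual implementation operates on bitset-encoded states in OpenCL, and the names are ours). The degree test is the depth-first search mentioned above: the degree of $v$ after eliminating $S$ equals the number of vertices outside $S$ reachable from $v$ by paths whose internal vertices lie in $S$.

```python
def degree_after_elimination(adj, S, v):
    """Degree of v once the vertices in S are eliminated, computed by a
    depth-first search on the ORIGINAL graph (no intermediate graph)."""
    seen, stack, deg = {v}, [v], 0
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in seen:
                continue
            seen.add(w)
            if w in S:
                stack.append(w)   # internal vertex: keep searching
            else:
                deg += 1          # a neighbour in the eliminated graph
    return deg

def has_treewidth_at_most(adj, k):
    """One wavefront per iteration; on the GPU, each thread of a level
    would expand one subset S of the input list."""
    current = {frozenset()}
    for _ in range(len(adj)):
        nxt = set()
        for S in current:
            for v in adj:
                if v not in S and degree_after_elimination(adj, S, v) <= k:
                    nxt.add(S | {v})
        current = nxt
        if not current:
            return False
    return True

def treewidth(adj):
    """Iterative deepening over the target treewidth k."""
    k = 0
    while not has_treewidth_at_most(adj, k):
        k += 1
    return k
```

This sketch omits the duplicate elimination, clique and disjoint-paths optimizations described below, but exhibits the same wavefront structure.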
We include three optimizations: first, if $S$ induces a clique, there is an elimination order that ends with the vertices in $S$ [4]. We can thus precompute a maximum clique $C$, and on line 7 of Listing 3.1, skip any vertices in $C$. Next, if $G$ has treewidth at most $k$ and there are at least $k+1$ vertex-disjoint paths between vertices $u$ and $v$, we may add the edge $uv$ to $G$ without increasing its treewidth [10]. Thus, we precompute for each pair of vertices the number of vertex-disjoint paths between them, and when testing whether the graph has treewidth at most $k$ we add edges between all pairs of vertices that have at least $k+1$ disjoint paths (note that this has diminishing returns, since in each iteration we can add fewer and fewer edges). Finally, if the graph has treewidth at least $k$, then the last $k+1$ vertices can be eliminated in any order, so we can terminate execution of the algorithm earlier.
We note that our algorithm does not actually compute a tree decomposition or elimination order, but could easily be modified to do so. Currently, the algorithm stores with each (partial) solution one additional integer, which indicates which four vertices were the last to be eliminated. To reconstruct the solution, one could either store a copy of (one in every four of) the output lists on disk, or repeatedly add the last four vertices to the precomputed clique $C$ (whose vertices are eliminated last) and rerun the algorithm to obtain the next four vertices (with each iteration taking less time than the previous, since the size of $C$ has increased).
3.2 Duplicate Elimination using Bloom Filters
Each set may be generated in multiple ways by adding different vertices to different subsets; if we do not detect whether a set is already in the output list when adding it, we risk the algorithm generating all $n!$ elimination orderings rather than $2^n$ sets. To detect whether a set is already in the output, we use a Bloom filter [2]: Bloom filters are a classical data structure in which an array of $m$ bits can be used to encode the presence of elements by means of $k$ hash functions. To insert an element $x$, we compute $k$ independent hash functions $h_1(x), \ldots, h_k(x)$, each of which indicates one position in the array, which should be set to $1$. If any of these bits was previously zero, then the element was not yet present in the filter, and otherwise, the probability of a false positive after $n$ insertions is approximately $(1 - e^{-kn/m})^k$.
In our implementation, we compute two 32-bit hashes using Murmur3 [1], which we then combine linearly to obtain $k$ hashes (which is nearly as good as using $k$ independent hash functions [19]).
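The scheme can be sketched compactly in Python (our own illustration: Python's built-in `hash` stands in for the two Murmur3 hashes, and the class and method names are ours):

```python
class BloomFilter:
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, item):
        # Two base hashes, combined linearly: h_i = h1 + i * h2.
        # (The real implementation uses two 32-bit Murmur3 hashes.)
        h1 = hash(("h1", item))
        h2 = hash(("h2", item)) | 1   # odd stride: positions stay distinct
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        """Insert item; return True iff it was (possibly) already present,
        i.e. all k bits were already set."""
        present = True
        for p in self._positions(item):
            byte, bit = divmod(p, 8)
            if not (self.bits[byte] >> bit) & 1:
                present = False
                self.bits[byte] |= 1 << bit   # atomic OR on the GPU
        return present
```

In the GPU implementation the `|=` on the bit array is the hardware atomic OR, and insertion of the same element by two threads is serialized via a mutex chosen by the initial hash value, as described below.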
In our experiments, we have chosen the number of hash functions and the size of the bit array so as to obtain a very low (theoretical) false positive probability. We note that the possibility of false positives results in a Monte Carlo algorithm (the algorithm may inadvertently decide that the treewidth is higher than it really is). Indeed, given that many millions of states are generated during the search, we are practically guaranteed that the Bloom filter will return some false positives; however, this does not immediately lead to incorrect results: it is still quite unlikely that all of the states leading to an optimal solution are pruned, since there are often multiple feasible elimination orders.
The Bloom filter is very suitable for implementation on a GPU, since our target architecture (and indeed, most GPUs) offers a very fast atomic OR operation [21]. We note that addressing a Bloom filter concurrently may also introduce false negatives if multiple threads attempt to insert the same element simultaneously. To avoid this, we use the initial hash value to pick one of 65,536 mutexes to synchronize access (this allows most operations to happen wait-free, and only a collision on the initial hash value causes one thread to wait for another).
3.3 Minor-Min-Width
Search algorithms for treewidth are often enhanced with various heuristics and pruning rules to speed up the computation. One very popular choice (used by e.g. [16, 24, 25]) is minor-min-width (MMW) [16] (also known as MMD+(min-d) [7]). MMW is based on the observation that the minimum degree of a vertex is a lower bound on the treewidth, and that contracting edges (i.e. taking minors) does not increase the treewidth. MMW repeatedly selects a minimum degree vertex, and then contracts it with a neighbor of minimum degree, in an attempt to obtain a minor with large minimum degree (if we encounter a minimum degree that exceeds our target treewidth, we know that we can discard the current state). As a slight improvement to this heuristic, the second smallest vertex degree is also a lower bound on the treewidth [7].
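For clarity, here is a straightforward Python sketch of the heuristic operating on an explicit copy of the graph (our own illustration with our own names; the paragraphs below explain why the GPU implementation avoids this explicit copy):

```python
def minor_min_width(adj):
    """Lower bound on treewidth: repeatedly contract a minimum-degree
    vertex with a minimum-degree neighbour, recording the largest
    minimum degree encountered."""
    g = {u: set(ns) for u, ns in adj.items()}   # explicit working copy
    best = 0
    while len(g) > 1:
        v = min(g, key=lambda u: len(g[u]))     # minimum-degree vertex
        best = max(best, len(g[v]))
        if not g[v]:                            # isolated: just drop it
            del g[v]
            continue
        u = min(g[v], key=lambda w: len(g[w]))  # min-degree neighbour
        # Contract u into v: v inherits u's neighbours.
        g[v] |= g[u]
        g[v] -= {u, v}
        for w in g[u]:
            g[w].discard(u)
            if w != v:
                g[w].add(v)
        del g[u]
    return best
```

(The simple version shown here returns the largest minimum degree; the refinement using the second smallest degree is omitted.)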
Given a subset $S$, we would like to lower bound the treewidth of the graph that remains after eliminating the vertices of $S$ from $G$. The most straightforward method is to explicitly create a copy of $G$, eliminate the vertices of $S$, and then repeatedly perform the contraction as described above. However, storing e.g. an adjacency list representation of these intermediate graphs would exceed the available shared memory and the size of the caches. As we would like to avoid transferring large amounts of data to and from global memory, we implemented a method to compute MMW without explicitly storing the intermediate graphs.
Our algorithm tracks the current degrees of the vertices (which, conveniently, we have already computed to determine which vertices can be eliminated). It is thus easy to select a minimum degree vertex $v$. Since we do not know which vertices $v$ is adjacent to (in the intermediate graph), we must select a minimum degree neighbor by using a depth-first search, similarly to how we compute the vertex degrees in Listing 3.1. Once we have found a minimum degree neighbor $u$, we run a second depth-first search to compute the number of neighbors $u$ has in common with $v$, allowing us to update the degree of the contracted vertex. To keep track of which vertices have been contracted, we use a disjoint set data structure.
The disjoint set structure and list of vertex degrees together use only two bytes per vertex (for a graph of up to 256 vertices), thus, they fit our memory constraints whereas an adjacency matrix or adjacency list (for dense graphs, noting that the graphs in question can quickly become dense as vertices are eliminated) would readily exceed it.
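The contraction bookkeeping can be sketched as follows (a Python simplification written by us: the eliminated set $S$ is ignored, super-vertex neighbourhoods are recovered from the original adjacency lists through the disjoint set structure, and all names are ours). The degree update uses the identity $\deg(vu) = \deg(v) + \deg(u) - |N(v) \cap N(u)| - 2$, and each common neighbour loses one degree:

```python
class DisjointSet:
    """Disjoint set structure; one small parent entry per vertex."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, keep, merge):
        self.parent[self.find(merge)] = self.find(keep)

def contracted_neighbours(adj, dsu, members, r):
    """Neighbour roots of super-vertex r, read off the ORIGINAL graph."""
    return {dsu.find(w) for u in members[r] for w in adj[u]} - {r}

def minor_min_width_dsu(adj):
    """MMW lower bound without storing intermediate graphs; adj maps
    vertices 0..n-1 to neighbour sets."""
    n = len(adj)
    dsu, members = DisjointSet(n), {v: [v] for v in range(n)}
    deg = {v: len(adj[v]) for v in range(n)}   # tracked incrementally
    alive, best = set(range(n)), 0
    while len(alive) > 1:
        v = min(alive, key=lambda x: deg[x])
        best = max(best, deg[v])
        nv = contracted_neighbours(adj, dsu, members, v)
        if not nv:
            alive.remove(v)
            continue
        u = min(nv, key=lambda x: deg[x])
        common = nv & contracted_neighbours(adj, dsu, members, u)
        dsu.union(v, u)                        # contract u into v
        members[v] += members[u]
        alive.remove(u)
        deg[v] = deg[v] + deg[u] - len(common) - 2
        for w in common:                       # lost one parallel edge each
            deg[w] -= 1
    return best
```

In the actual GPU kernel the neighbourhood queries are the two depth-first searches described above (which also skip over eliminated vertices), so only the degree array and the disjoint set structure need to reside in fast memory.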
4 Experiments
4.1 Instances
All instances were preprocessed using the preprocessing rules of our PACE submission [8], which split the graph using safe separators: we first split the graph into its connected components, then split on articulation points, then on articulation pairs (making the remaining components 3-connected) and finally, if we can establish that this is safe, on articulation triplets (resulting in the 4-connected components of the graph). We then furthermore try to detect (almost) clique separators in the graph, and split on those. For a more detailed treatment of these preprocessing rules, we refer to [5].
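As a minimal illustration of the first two splitting steps, the following Python sketch (our own naive quadratic version; the real preprocessing uses linear-time algorithms and also handles separator pairs, triplets and (almost-)clique separators [5, 8]) splits a graph at an articulation point:

```python
def connected_components(adj, removed=frozenset()):
    """Components of the graph with `removed` vertices deleted."""
    seen, comps = set(), []
    for s in adj:
        if s in seen or s in removed:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.add(u)
            for w in adj[u]:
                if w not in seen and w not in removed:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def split_on_articulation_point(adj):
    """Find a vertex whose removal disconnects the graph and return the
    resulting components, each together with the articulation point.
    Returns the whole vertex set if no articulation point exists."""
    base = len(connected_components(adj))
    for v in adj:
        comps = connected_components(adj, removed=frozenset({v}))
        if len(comps) > base:
            return [c | {v} for c in comps]
    return [set(adj)]
```

Each returned part can then be processed independently, and the treewidth of the original graph is the maximum over the parts.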
4.2 General Benchmark
We first present an experimental evaluation of our algorithm (without using MMW) on a set of benchmark graphs. Table 1 shows the number of vertices, computed treewidth, time taken (in seconds) on the GPU and the number of sets explored. Note that the time does not include the time taken for preprocessing, and that the vertex count is that of the preprocessed graph (and thus, the original graph may have been larger).
Name  |V|  tw  GPU time (s)  CPU time (s)  States expanded (millions)
1e0b_graph  55  24  778.8  -  1731.3
1fjl_graph*  57  26  1734.6  -  3683.0
1igd_graph  59  25  106.8  5116.6  260.7
1ku3_graph  60  22  234.9  -  542.2
1ubq*  47  11  1132.4  -  2296.0
8x6_torusGrid*  48  7  1106.4  -  2098.7
BN_97  48  18  1024.1  -  2306.8
BN_98  47  21  689.3  -  1589.0
contiki_dhcpc_handle_dhcp*  39  6  1490.4  -  2933.1
DoubleStarSnark  30  6  34.5  873.4  87.6
DyckGraph  32  7  280.5  -  638.9
HarborthGraph*  40  5  697.9  -  1535.7
KneserGraph_8_3*  56  24  1711.4  -  4126.0
McGeeGraph  24  7  1.298  25.267  3.9
myciel4  23  10  0.234  0.460  0.098
myciel5*  47  19  2000.8  70608.1  4003.6
NonisotropicUnitaryPolarGraph_3_3  63  53  1.158  60.444  1.6
queen5_5  25  18  0.212  0.023  0.003
queen6_6  36  25  0.254  0.389  0.036
queen7_7  49  35  0.966  43.491  1.9
queen8_8  64  45  26.284  2044.470  57.9
RandomBarabasiAlbert_100_2*  41  12  1609.6  -  3283.1
RandomBoundedToleranceGraph_60  59  30  0.274  0.635  0.056
SylvesterGraph  36  15  247.9  -  631.7
te*  62  7  1171.7  -  2163.1
water  21  9  0.197  0.006  0.001
The size of the input and output lists was limited by the memory available on our GPU. With the current configuration (limited to graphs of at most 64 vertices, though the code is written to be flexible and can easily be changed to support up to 256 vertices), these lists could hold at most 180 million states (i.e., subsets that have a feasible partial elimination order) each. If at any iteration this number was exceeded, the excess states were discarded. The algorithm was allowed to continue execution for the current treewidth $k$, but was terminated when trying the next higher treewidth $k+1$ (since we might have discarded a state that would have led to a solution with treewidth $k+1$, the answer would no longer be exact). The instances where the capacity of the lists was exceeded are marked with *; if the algorithm was terminated, then the treewidth is stricken through (and represents the candidate value for treewidth at which the algorithm was terminated, and not the treewidth of the graph, which is likely higher).
For instance, for graph 1ubq the capacity of the lists was first exceeded at a lower candidate treewidth, and the algorithm was terminated at treewidth 11 (and thus the actual treewidth is at least 11, but likely higher). For graph myciel5, the capacity of the lists was also exceeded, but still (despite discarding some states) a solution of treewidth 19 was nevertheless found (which we thus know is the exact treewidth).
For several graphs (those where the GPU version of the algorithm took at most 5 minutes), we also benchmarked a sequential version of the same algorithm on the CPU. In some cases, the algorithm achieves a very large speedup compared to the CPU version (up to 77x, in the case of queen8_8). Additionally, for myciel5, we also ran the CPU-based algorithm, which took more than 19 hours to finish; the GPU version only took 34 minutes.
The GPU algorithm can process a large number of states in a very short time. For example, for the graph 1fjl, 3680 million states were explored in just 1730 seconds, i.e., over 2 million states were processed each second (and processing each state involves a number of depth-first searches). The highest throughput (2.5 million states/sec.) is achieved on SylvesterGraph, but this graph has relatively few vertices.
We caution the reader that the graph names are somewhat ambiguous. For instance, the queen7_7 instance is from libtw and has treewidth 35. The 2016 PACE instances include a graph called dimacs_queen7_7 which only has treewidth 28. The instances used in our evaluation are available from our GitHub repository [9].
4.3 Work Size and Global vs. Shared Memory
In this section, we study the effect of work size and whether shared or global memory is used on the running time of our implementation.
Recall that shared memory is a small amount (in our case, 96KiB) of memory that is physically close to each Streaming Multiprocessor, and is therefore in principle faster than the (much larger, offchip) global memory. We would therefore expect that our implementation is faster when used with shared memory.
Each SM contains 128 CUDA cores, and thus 4 warps of 32 threads each can be executed simultaneously on each SM. The work size (which should be a multiple of 32) represents the number of threads we assign to each SM. If we set the work size larger than 128, more threads than can physically be executed at once are assigned to one SM. The SM can then switch between executing different warps, for instance to hide latency of memory accesses. If the work size is smaller than 128, a number of CUDA cores will be unutilized.
In Table 2, we present some experiments that show running times on several graphs, depending on whether shared memory or global memory is used, for several sizes of work group (which is the number of threads allocated to a single SM).
There is not much difference between running the program using shared or global memory. In most instances, the shared memory version is slightly faster. Surprisingly, it also appears that the work size used does not affect the running time significantly. This suggests that our program is limited by memory throughput, rather than being computationally bound.
Name  |V|  tw  GPU time (s)  CPU time (s)  States expanded (millions)
1e0b_graph  55  24  720.5  -  1731.3
1fjl_graph*  57  26  1595.0  -  3656.8
1igd_graph  59  25  98.2  5116.6  260.7
1ku3_graph  60  22  222.5  -  542.2
1ubq*  47  11  1038.9  -  2290.8
8x6_torusGrid*  48  7  1039.8  -  2077.2
BN_97  48  18  944.3  -  2306.8
BN_98  47  21  642.9  -  1589.0
contiki_dhcpc_handle_dhcp*  39  6  1353.0  -  2835.4
DoubleStarSnark  30  6  32.8  873.4  87.6
DyckGraph  32  7  266.3  -  638.9
HarborthGraph*  40  5  646.9  -  1532.6
KneserGraph_8_3*  56  24  1580.6  -  4103.6
McGeeGraph  24  7  1.235  25.267  3.9
myciel4  23  10  0.238  0.460  0.098
myciel5*  47  19  1845.9  70608.1  3994.7
NonisotropicUnitaryPolarGraph_3_3  63  53  1.029  60.444  1.6
queen5_5  25  18  0.179  0.023  0.003
queen6_6  36  25  0.241  0.389  0.036
queen7_7  49  35  0.875  43.491  1.9
queen8_8  64  45  24.522  2044.470  57.9
RandomBarabasiAlbert_100_2*  41  12  1473.4  -  3264.4
RandomBoundedToleranceGraph_60  59  30  0.263  0.635  0.056
SylvesterGraph  36  15  229.2  -  631.7
te*  62  7  1098.1  -  2138.6
water  21  9  0.207  0.006  0.001
4.4 Minor-Min-Width
In Table 4, we list the results obtained when using minor-min-width to prune states.
The computational expense of using MMW is comparable to that of the initial computation (for determining the degrees of vertices): the algorithm does a linear search for a minimum degree vertex (using the precomputed degree values), and then does a graph traversal to find a minimum degree neighbour (recall that we do not store the intermediate graph, and use only a single copy of the original graph). Once such a neighbour is found, the contraction is performed (by updating the disjoint set data structure) and another graph traversal is required (to compute the number of common neighbours, and thus update the degree of the contracted vertex).
The lower bound given by MMW does not appear to be very strong, at least for the graphs considered in our experiment: the reduction in the number of states expanded is not very large (for instance, from 1730 million states to 1660 million for 1e0b, or from 1590 million to 1480 million for BN_98). The largest reductions are visible for graphs on which we run out of memory (for instance, from 4130 million to 1330 million for KneserGraph_8_3), but this is likely because the search is terminated before we reach the actual treewidth (so we avoid the part of our search where using a heuristic is least effective). Moreover, there are no graphs on which we previously ran out of memory for which MMW allows us to determine the treewidth (the biggest improvement is that we are able to determine that te has treewidth at least 10, up from treewidth at least 7).
Consistent with the relatively small reduction in the number of states expanded, we see that the computation using MMW typically takes several times longer. On the graphs considered here, the reduction in search space offered by MMW does not offset the additional cost of computing it.
Again, the GPU version is significantly faster than executing the same algorithm on the CPU: we again observed a large speedup for queen8_8. Still, given what we observed in Section 4.3, it is not clear whether our approach of not storing the intermediate graphs explicitly is indeed the best approach. Our main motivation for taking this approach was to be able to store the required data structures entirely in shared memory, but our experiments indicate that for MMW, using global memory gives better performance than using shared memory. However, the relatively good performance of global memory might be (partially) due to caching and the small amount of data transferred, so it is an interesting open question to determine whether the additional memory cost of using more involved data structures is compensated by the potential speedup.
4.5 Loop Unnesting
Finally, we experimented with another technique, which aims to increase parallelism (and thus speedup) by limiting branch divergence. However, as the results were discouraging, we limit ourselves to a brief discussion.
The algorithm of Listing 3.1 consists of a loop (lines 5–22) over the (not yet eliminated) vertices, inside of which is a depth-first search (which computes the degree of the vertex, to determine whether it can be eliminated). The depth-first search in turn consists of a loop which runs until the stack becomes empty (lines 10–19), inside of which is a final loop over the neighbours of the current vertex (lines 12–18). This leads to two sources of branch divergence:

First, if the graph is irregular, all threads in a warp have to wait for the thread that is processing the highest degree vertex, even if they only have low-degree vertices.

Second, all threads in a warp have to wait for the longest of the searches to finish before they can start processing the next vertex.
To alleviate this, we propose a technique which we call loop unnesting: rather than have 3 nested loops, we have only one loop, which simulates a state machine with 3 states: (1) processing the adjacency list of a vertex, (2) having finished processing an adjacency list and being ready to pop a new vertex off the stack, or (3) having finished a search, and being ready to begin computing the degree of a new vertex.
We considered a slightly more general version of this idea: in an $(a, b)$-unnesting of our program, after every $a$ iterations of the inner loop (exploring neighbours of the current vertex) one iteration of the middle loop is executed (if exploring the adjacency list is finished, get a new vertex from the stack), and after every $b$ iterations of the middle loop, one iteration of the outer loop is executed (begin processing an entirely new vertex). Thus, a $(1, 1)$-unnesting corresponds to the state machine simulation described above, and an $(\infty, \infty)$-unnesting corresponds to the original program.
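The two extremes can be illustrated with the following Python sketch (written by us; the real code is an OpenCL kernel, and the names are ours): the nested version is the original three-loop structure, while the unnested version drives the same computation with a single loop simulating the three-state machine, so that every iteration executes the same "line" of code.

```python
def degrees_nested(adj, S, vertices):
    """Original structure: outer loop over vertices, middle loop over
    the DFS stack, inner loop over an adjacency list."""
    out = {}
    for v in vertices:                       # outer loop
        seen, stack, deg = {v}, [v], 0
        while stack:                         # middle loop
            u = stack.pop()
            for w in adj[u]:                 # inner loop
                if w not in seen:
                    seen.add(w)
                    if w in S:
                        stack.append(w)
                    else:
                        deg += 1
        out[v] = deg
    return out

def degrees_unnested(adj, S, vertices):
    """(1,1)-unnested version: one loop, three states."""
    out, it = {}, iter(vertices)
    v = None            # None: ready to start a new vertex (state 3)
    u = None            # None: ready to pop the stack (state 2)
    while True:
        if v is None:                        # state 3: new outer vertex
            v = next(it, None)
            if v is None:
                return out
            seen, stack, deg = {v}, [v], 0
            u, pos = None, 0
        elif u is None:                      # state 2: pop next vertex
            if stack:
                u, pos = stack.pop(), 0
            else:
                out[v] = deg
                v = None
        else:                                # state 1: one adjacency step
            if pos == len(adj[u]):
                u = None
                continue
            w = adj[u][pos]
            pos += 1
            if w not in seen:
                seen.add(w)
                if w in S:
                    stack.append(w)
                else:
                    deg += 1
```

Both functions compute identical results; the difference only matters on hardware where all threads of a warp must execute the same instruction.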
Picking the right values for $a$ and $b$ means finding the right trade-off between checking frequently enough whether a thread is ready to start working on another vertex, and the cost of performing those checks. What we observed was surprising: while intermediate unnestings gave reasonable results, the best results were obtained with the $(\infty, \infty)$-unnesting (i.e. the original, unmodified algorithm) and the performance of the $(1, 1)$-unnesting was abysmal.
We believe that a possible explanation may be that loop unnesting does work to some extent, but that not unnesting the loops has the advantage that all searches running simultaneously start from the same initial vertex, and (up to differences caused by different sets being used) will access largely the same values from the adjacency lists at the same time, which may increase the efficiency of read operations. On the other hand, unnesting cannot take advantage of either phenomenon: different initial vertices may be processed at any given time (so there is little consistency in memory accesses) and the inner loop is not unnested at all, so there is no potential to gain speedup there either. Perhaps for larger graphs, where the difference in length of adjacency lists may be more pronounced, or where the amount of time a search takes varies more strongly with the initial vertex and the set being eliminated, loop unnesting does provide a speedup, but for the graphs considered here it does not appear to be a beneficial choice.
5 Conclusions
We have presented an algorithm that computes treewidth on the GPU, achieving a very large speedup over running the same algorithm on the CPU. Our algorithm is based on the classical $\mathcal{O}^*(2^n)$ time dynamic programming algorithm [4], and our results represent (promising) first steps in speeding up dynamic programming for treewidth on the GPU. The current best known practical algorithm for computing treewidth is the algorithm due to Tamaki [22]. This algorithm is much more complicated, and porting it to the GPU would be a formidable challenge, but it could possibly yield an extremely efficient implementation for computing treewidth.
Given the large speedup achieved, we are no longer mainly limited by computation time. Instead, our ability to solve larger instances is hampered by the memory required to store the very large lists of partial solutions. Using minor-min-width did not prove effective in considerably reducing the number of states, so it would be interesting to see how other heuristics and pruning rules (such as simplicial vertex detection) could be implemented on the GPU.
GPUs are traditionally used to solve easy (e.g. linear time) problems on very large inputs (such as the millions of pixels rendered on a screen, or exploring a graph with millions of nodes), but clearly, the speedup offered by inexpensive GPUs would also be very welcome in solving hard (NP-complete) problems on small instances. Exploring how techniques from FPT and exact algorithms can be used on the GPU raises many interesting problems, not only practical ones but also theoretical: how should we model complex devices such as GPUs, with their many types of memory and branch divergence issues?
Acknowledgements. We thank Jacco Bikker for discussions on the architecture of GPUs, and Gerard Tel for discussions on hash functions.
Source Code and Instances. We have made our source code, as well as the graphs used for the experiments, available on GitHub [9].
References
 [1] Austin Appleby. SMHasher. Accessed 2017-04-12. URL: https://github.com/aappleby/smhasher.
 [2] Burton H. Bloom. Space/time trade-offs in hash coding with allowable errors. Commun. ACM, 13(7):422–426, 1970.
 [3] Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput., 25:1305–1317, 1996.
 [4] Hans L. Bodlaender, Fedor V. Fomin, Arie M. C. A. Koster, Dieter Kratsch, and Dimitrios M. Thilikos. On exact algorithms for treewidth. ACM Trans. Algorithms, 9(1):12:1–12:23, December 2012.
 [5] Hans L. Bodlaender and Arie M.C.A. Koster. Safe separators for treewidth. Discrete Mathematics, 306(3):337–350, 2006.
 [6] Hans L. Bodlaender and Arie M.C.A. Koster. Combinatorial optimization on graphs of bounded treewidth. The Computer Journal, 51(3):255–269, 2008.
 [7] Hans L. Bodlaender and Arie M.C.A. Koster. Treewidth computations II. Lower bounds. Information and Computation, 209(7):1103–1119, 2011.
 [8] Hans L. Bodlaender and T. C. van der Zanden. BZ-Treewidth. Accessed 2017-04-11. URL: https://github.com/TomvdZanden/BZ-Treewidth.
 [9] Hans L. Bodlaender and T. C. van der Zanden. GPGPU treewidth. Accessed 2017-04-21. URL: https://github.com/TomvdZanden/GPGPU-Treewidth.
 [10] François Clautiaux, Jacques Carlier, Aziz Moukrim, and Stéphane Nègre. New lower and upper bounds for graph treewidth. In Klaus Jansen, Marian Margraf, Monaldo Mastrolilli, and José D. P. Rolim, editors, Experimental and Efficient Algorithms: Second International Workshop, WEA 2003, Ascona, Switzerland, May 26–28, 2003 Proceedings, pages 70–80, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg.
 [11] Marek Cygan, Fedor V. Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized algorithms. Springer, 1st edition, 2015.
 [12] Holger Dell, Thore Husfeldt, Bart M. P. Jansen, Petteri Kaski, Christian Komusiewicz, and Frances A. Rosamond. The parameterized algorithms and computational experiments challenge (PACE). Accessed 2017-04-05. URL: https://pacechallenge.wordpress.com/pace-2016/track-a-treewidth/.
 [13] Holger Dell, Thore Husfeldt, Bart M.P. Jansen, Petteri Kaski, Christian Komusiewicz, and Frances A. Rosamond. The first parameterized algorithms and computational experiments challenge. In LIPIcs-Leibniz International Proceedings in Informatics, volume 63. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017.
 [14] P. Alex Dow. Search Algorithms for Exact Treewidth. PhD thesis, University of California, Los Angeles, CA, USA, 2010. AAI3405666.
 [15] P. Alex Dow and Richard E. Korf. Best-first search for treewidth. In Proceedings of the 22nd National Conference on Artificial Intelligence - Volume 2, AAAI'07, pages 1146–1151. AAAI Press, 2007.
 [16] Vibhav Gogate and Rina Dechter. A complete anytime algorithm for treewidth. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, UAI ’04, pages 201–208, Arlington, Virginia, United States, 2004. AUAI Press.
 [17] Alexander Hein and Arie M. C. A. Koster. An experimental evaluation of treewidth at most four reductions. In Panos M. Pardalos and Steffen Rebennack, editors, Proceedings of the 10th International Symposium on Experimental and Efficient Algorithms, SEA 2011, volume 6630 of Lecture Notes in Computer Science, pages 218–229. Springer Verlag, 2011.
 [18] M. Held and R. Karp. A dynamic programming approach to sequencing problems. Journal of the Society for Industrial and Applied Mathematics, 10:196–210, 1962.
 [19] Adam Kirsch and Michael Mitzenmacher. Less hashing, same performance: Building a better bloom filter. In Yossi Azar and Thomas Erlebach, editors, Algorithms – ESA 2006: 14th Annual European Symposium, pages 456–467, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg.
 [20] NVIDIA. NVIDIA GeForce GTX 1080 Whitepaper. Accessed 2017-04-10. URL: http://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf.
 [21] NVIDIA. NVIDIA's Next Generation CUDA Compute Architecture: Fermi. Accessed 2017-04-12. URL: http://www.nvidia.com/content/pdf/fermi_white_papers/nvidia_fermi_compute_architecture_whitepaper.pdf.
 [22] Hisao Tamaki. Positive-instance driven dynamic programming for treewidth. In Kirk Pruhs and Christian Sohler, editors, 25th Annual European Symposium on Algorithms (ESA 2017), volume 87 of Leibniz International Proceedings in Informatics (LIPIcs), pages 68:1–68:13, Dagstuhl, Germany, 2017. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. URL: http://drops.dagstuhl.de/opus/volltexte/2017/7880, doi:10.4230/LIPIcs.ESA.2017.68.
 [23] Thomas C. van Dijk, Jan-Pieter van den Heuvel, and Wouter Slob. Computing treewidth with LibTW, 2006. Accessed 2017-06-16. URL: http://www.treewidth.com/treewidth.
 [24] Y. Yuan. A fast parallel branch and bound algorithm for treewidth. In 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence, pages 472–479, Nov 2011.
 [25] Rong Zhou and Eric A. Hansen. Combining breadth-first and depth-first strategies in searching for treewidth. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI'09, pages 640–645, San Francisco, CA, USA, 2009. Morgan Kaufmann Publishers Inc.