Clifford algebras meet tree decompositions¹

¹ This work is partially supported by the Foundation for Polish Science grant HOMING PLUS/2012-6/2 and by the project TOTAL that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 677651).
Abstract
We introduce the Noncommutative Subset Convolution – a convolution of functions useful when working with determinant-based algorithms. In order to compute it efficiently, we take advantage of Clifford algebras, a generalization of quaternions used mainly in quantum field theory.
We apply this tool to speed up algorithms counting subgraphs parameterized by the treewidth of a graph. We present an $\mathcal{O}^*((2^\omega + 1)^{tw})$-time algorithm for counting Steiner trees and an $\mathcal{O}^*((2^\omega + 2)^{tw})$-time algorithm for counting Hamiltonian cycles, both of which improve the previously known upper bounds. The result for Steiner Tree also translates into a deterministic algorithm for Feedback Vertex Set. All of these constitute the best known running times of deterministic algorithms for the decision versions of these problems, and they match the best running times obtained for the pathwidth parameterization under the assumption $\omega = 2$.
Michał Włodarczyk

\subjclass F.2.2 [Nonnumerical Algorithms and Problems]: Computations on discrete structures; G.2.2 [Graph Theory]: Graph algorithms
1 Introduction
The concept of treewidth was introduced by Robertson and Seymour in their work on graph minors [13]. The treewidth of a graph is the smallest possible width of its tree decomposition, i.e., a tree-like representation of the graph. Its importance follows from the fact that many NP-hard graph problems become solvable on trees with simple dynamic programming. The similar notion of pathwidth captures the width of a graph in case we would like to have a path decomposition. Formal definitions can be found in Section 2.2.
A bound on the graph's treewidth allows one to design efficient algorithms using fixed-parameter tractability. An algorithm is called fixed-parameter tractable (FPT) if it works in time $f(k) \cdot n^{\mathcal{O}(1)}$, where $k$ is a parameter describing the hardness of the instance, $n$ is the input size, and $f$ is a computable function. We also use the $\mathcal{O}^*$ notation that suppresses factors polynomial with respect to the input size. Problems studied in this work are parameterized by the graph's pathwidth or treewidth. To distinguish these cases we denote the parameter respectively $pw$ or $tw$.
It is natural to look for a function $f$ that grows relatively slowly. For problems with a local structure, like Vertex Cover or Dominating Set, there are simple FPT algorithms with single exponential running time. They usually store states for each node of the decomposition and take advantage of the Fast Subset Convolution [2] to perform the join operation efficiently. As a result, time complexities for pathwidth and treewidth parameterizations remain the same. The Fast Subset Convolution turned out to be helpful in many other problems, e.g., Chromatic Number, and enriched the basic toolbox used for exponential and parameterized algorithms.
Problems with connectivity conditions, like Steiner Tree or Hamiltonian Cycle, were conjectured to require time $2^{\Omega(tw \log tw)}$ until the breakthrough work of Cygan et al. [8]. They introduced the randomized Cut & Count technique working in single exponential time. The obtained running times were respectively $\mathcal{O}^*(3^{tw})$ and $\mathcal{O}^*(4^{tw})$. Afterwards, a faster randomized algorithm for Hamiltonian Cycle parameterized by the pathwidth was presented, with running time $\mathcal{O}^*((2+\sqrt{2})^{pw})$ [7]. This upper bound, as well as $\mathcal{O}^*(3^{pw})$ for Steiner Tree, is tight modulo subexponential factors under the assumption of the Strong Exponential Time Hypothesis [7, 8].
The question about the existence of single exponential deterministic methods was answered positively by Bodlaender et al. [4]. What is more, the presented algorithms count the number of Steiner trees or Hamiltonian cycles in a graph. However, in contrast to the Cut & Count technique, a large gap emerged between the running times for pathwidth and treewidth parameterizations – the running times were respectively $\mathcal{O}^*(5^{pw})$ and $\mathcal{O}^*(10^{tw})$ for Steiner Tree, and $\mathcal{O}^*(6^{pw})$ and $\mathcal{O}^*(15^{tw})$ for Hamiltonian Cycle. This could be explained by a lack of efficient algorithms to perform the join operation, necessary only for tree decompositions. Some efforts have been made to reduce this gap and the deterministic running time for Steiner Tree parameterized by treewidth has been improved [9].
1.1 Our contribution
The main contribution of this work is creating a link between Clifford algebras, objects not previously used in algorithmics to the best of our knowledge, and fixed-parameter tractability. While the natural dynamic programming approach on tree decompositions uses the Fast Subset Convolution (FSC) to perform the join operation efficiently, there was no such tool for algorithms based on the determinant approach.
Our first observation is that the FSC technique can be regarded as an isomorphism theorem for some associative algebras. To put it briefly, a Fourier-like transform is performed in the FSC to bring computations to a simpler algebra. Interestingly, this kind of transform is just a special case of the Artin-Wedderburn theorem [1], which seemingly is not widely reported in computer science articles. The theorem provides a classification of a large class of associative algebras, not necessarily commutative (more in Appendix A). We use this theory to introduce the Noncommutative Subset Convolution (NSC) and speed up multiplication in an algebra induced by the join operation in determinant-based dynamic programming on tree decompositions. An important building block is a fast Fourier-like transform for a closely related algebra [11]. We hope our work will encourage researchers to investigate further algorithmic applications of the Artin-Wedderburn theorem.
1.2 Our results
We apply our algebraic technique to the determinant approach introduced by Bodlaender et al. [4]. For path decompositions, they gave an $\mathcal{O}^*(5^{pw})$-time algorithm for counting Steiner trees and an $\mathcal{O}^*(6^{pw})$-time algorithm for counting Hamiltonian cycles. The running times for tree decompositions were respectively $\mathcal{O}^*(10^{tw})$ and $\mathcal{O}^*(15^{tw})$. These gaps can be explained by the appearance of the join operation in tree decompositions, which could not be handled efficiently so far.
By performing NSC in time complexity $\mathcal{O}^*(2^{\frac{\omega}{2}n})$ we partially solve an open problem about a different convolution from [6]. Our further results may be considered similar to those closing the gap between time complexities for pathwidth and treewidth parameterizations for Dominating Set by switching between representations of states in dynamic programming [14]. We improve the running times to $\mathcal{O}^*((2^\omega + 1)^{tw})$ for counting Steiner trees and $\mathcal{O}^*((2^\omega + 2)^{tw})$ for counting Hamiltonian cycles, where $\omega$ denotes the matrix multiplication exponent (currently it is established that $\omega < 2.3727$ [15]). These are not only the fastest known algorithms for counting these objects, but also the fastest known deterministic algorithms for the decision versions of these problems. The deterministic algorithm for Steiner Tree can be translated into a deterministic algorithm for Feedback Vertex Set [4], so our technique provides an improvement also in this case.
Observe that the running times for pathwidth and treewidth parameterizations match under the assumption $\omega = 2$. Though we do not hope to settle the actual value of $\omega$, this indicates there is no further space for significant improvement unless purely combinatorial algorithms (i.e., not based on matrix multiplication) are invented or the running time for the pathwidth parameterization is improved.
1.3 Organization of the paper
Section 3 provides a brief introduction to Clifford algebras. The bigger picture of the employed algebraic theory can be found in Appendix A. In Section 4 we define the NSC and design efficient algorithms for variants of the NSC employing the algebraic tools. Sections 5 and 6 present how to apply the NSC in counting algorithms for Steiner Tree and Hamiltonian Cycle. They contain the main ideas improving the running times; however, in order to understand the algorithms completely one should start with Section 4 (Determinant approach) in [4]. The algorithm for Hamiltonian Cycle is significantly more complicated and its details, formulated as two isomorphism theorems, are placed in Appendix C.
2 Preliminaries
We will start with notation conventions.

$[n]$ stands for $\{1, 2, \ldots, n\}$.

$X \uplus Y$ stands for the union of sets $X, Y$ that are disjoint.

$[\varphi]$ equals 1 if condition $\varphi$ holds and 0 otherwise.

For a permutation $\sigma$ of a linearly ordered set, $\pi(\sigma)$ denotes the number of inversions in $\sigma$.

For $X, Y$ being subsets of a linearly ordered set

(1) $\pi(X, Y) = |\{(x, y) \in X \times Y : x > y\}|.$
Let us note two simple properties of $\pi$.

Claim. For disjoint $X, Y$ we have $\pi(X, Y) + \pi(Y, X) = |X| \cdot |Y|$.

Claim. For $X = X_1 \uplus X_2$ and any $Y$ we have $\pi(X, Y) = \pi(X_1, Y) + \pi(X_2, Y)$ and $\pi(Y, X) = \pi(Y, X_1) + \pi(Y, X_2)$.
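Both claims can be verified exhaustively on small universes. Below is a short Python sanity check; the function `pi` implements definition (1) as stated above, and the script is an illustration rather than a proof.

```python
from itertools import combinations

def pi(X, Y):
    """Number of pairs (x, y) in X x Y with x > y, as in definition (1)."""
    return sum(1 for x in X for y in Y if x > y)

U = range(6)
subsets = [set(c) for r in range(4) for c in combinations(U, r)]

for X in subsets:
    for Y in subsets:
        if X & Y:
            continue
        # Claim 1: for disjoint X, Y the two orders of arguments sum to |X||Y|
        assert pi(X, Y) + pi(Y, X) == len(X) * len(Y)

for X1 in subsets:
    for X2 in subsets:
        if X1 & X2:
            continue
        for Y in subsets:
            # Claim 2: pi is additive over disjoint unions in the first argument
            assert pi(X1 | X2, Y) == pi(X1, Y) + pi(X2, Y)
```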
2.1 Fast Subset Convolution
Let us consider a universe $U$ of size $n$ and functions $f, g : 2^U \to \mathbb{Z}$.

The Möbius transform of $f$ is the function $\hat{f}$ defined as $\hat{f}(X) = \sum_{Y \subseteq X} f(Y)$.

Let $f * g$ denote the subset convolution of $f, g$ defined as $(f * g)(X) = \sum_{Y \subseteq X} f(Y) \cdot g(X \setminus Y)$.

[Björklund et al. [2]] The Möbius transform, its inverse, and the subset convolution can be computed in time $\mathcal{O}^*(2^n)$.
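A standard way to realize this theorem encodes subsets as bitmasks, splits the functions by set size ("ranked" functions), transforms each layer, multiplies pointwise, and inverts. The sketch below follows the scheme of Björklund et al.; the function names are ours.

```python
def zeta(f, n):
    """Transform F[X] = sum of f[Y] over Y subset of X, for f given on bitmasks."""
    F = list(f)
    for i in range(n):
        for X in range(1 << n):
            if X >> i & 1:
                F[X] += F[X ^ (1 << i)]
    return F

def mobius(F, n):
    """Inverse of zeta."""
    f = list(F)
    for i in range(n):
        for X in range(1 << n):
            if X >> i & 1:
                f[X] -= f[X ^ (1 << i)]
    return f

def subset_convolution(f, g, n):
    """(f*g)(X) = sum over Y subset of X of f(Y) g(X\\Y), in O(2^n n^2) ring operations."""
    N = 1 << n
    # split by popcount and transform each layer
    fz = [zeta([f[X] if bin(X).count('1') == k else 0 for X in range(N)], n)
          for k in range(n + 1)]
    gz = [zeta([g[X] if bin(X).count('1') == k else 0 for X in range(N)], n)
          for k in range(n + 1)]
    h = [0] * N
    for k in range(n + 1):
        layer = [sum(fz[i][X] * gz[k - i][X] for i in range(k + 1)) for X in range(N)]
        layer = mobius(layer, n)
        for X in range(N):
            if bin(X).count('1') == k:
                h[X] += layer[X]
    return h
```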
2.2 Pathwidth and treewidth
Definition. A tree (path) decomposition of a graph $G = (V, E)$ is a tree (path) in which each node $x$ is assigned a bag $B_x \subseteq V$ such that

for every edge $uv \in E$ there is a bag containing $u$ and $v$,

for every vertex $v \in V$ the set of nodes whose bags contain $v$ forms a nonempty subtree (subpath) in the decomposition.

The width of the decomposition is defined as $\max_x |B_x| - 1$ and the treewidth (pathwidth) of $G$ is the minimum width over all possible tree (path) decompositions.
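The two bag conditions above can be checked mechanically. Below is a small validity checker with an interface of our own choosing (bags as sets, the decomposition tree given by edges on bag indices), intended purely as an illustration of the definition.

```python
def is_tree_decomposition(n_vertices, edges, bags, dec_edges):
    """Check the two bag conditions of a tree decomposition.

    bags: list of sets of vertices; dec_edges: edges of the decomposition tree,
    given as pairs of bag indices.
    """
    # every graph edge must be contained in some bag
    if not all(any({u, v} <= b for b in bags) for u, v in edges):
        return False
    # the bags containing each vertex must induce a nonempty connected subtree
    for v in range(n_vertices):
        nodes = {i for i, b in enumerate(bags) if v in b}
        if not nodes:
            return False
        seen, stack = {min(nodes)}, [min(nodes)]
        while stack:  # BFS restricted to bags containing v
            x = stack.pop()
            for a, b in dec_edges:
                for y in ((b,) if a == x else (a,) if b == x else ()):
                    if y in nodes and y not in seen:
                        seen.add(y)
                        stack.append(y)
        if seen != nodes:
            return False
    return True

def width(bags):
    return max(len(b) for b in bags) - 1
```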
If a graph admits a tree decomposition of width $tw$, then such a decomposition can be found in time $2^{\mathcal{O}(tw^3)} \cdot n$ [3], and a decomposition of width $\mathcal{O}(tw)$ can be constructed in time $2^{\mathcal{O}(tw)} \cdot n^{\mathcal{O}(1)}$ [10]. We will assume that a decomposition of the appropriate type and width is given as a part of the input.
A nice tree (path) decomposition is a decomposition with one special node $r$ called the root and in which each bag is one of the following types:

Leaf bag: a leaf $x$ with $B_x = \emptyset$,

Introduce vertex bag: a node $x$ having one child $y$ for which $B_x = B_y \cup \{v\}$ for some $v \notin B_y$,

Forget vertex bag: a node $x$ having one child $y$ for which $B_x = B_y \setminus \{v\}$ for some $v \in B_y$,

Introduce edge bag: a node $x$ labeled with an edge $uv \in E$ such that $u, v \in B_x$, having one child $y$ for which $B_x = B_y$,

Join bag: (only in tree decompositions) a node $x$ having two children $y, z$ with the condition $B_x = B_y = B_z$.

We require that every edge from $E$ is introduced exactly once and that $B_r$ is an empty bag. For each node $x$ we define $V_x$ and $E_x$ to be the sets of respectively vertices and edges introduced in the subtree of the decomposition rooted at $x$.
2.3 Problems definitions
Steiner Tree. Input: graph $G = (V, E)$, set of terminals $T \subseteq V$, integer $k$. Decide: whether there is a subtree of $G$ with at most $k$ edges connecting all vertices from $T$.

Hamiltonian Cycle. Input: graph $G = (V, E)$. Decide: whether there is a cycle going through every vertex of $G$ exactly once.

Feedback Vertex Set. Input: graph $G = (V, E)$, integer $k$. Decide: whether there is a set $S \subseteq V$ of size at most $k$ such that every cycle in $G$ contains a vertex from $S$.
In the counting variants of these problems we ask for the number of structures satisfying the given conditions. This setting is at least as hard as the decision variant.
3 Clifford algebras
Some terms used in this section originate from advanced algebra. For better understanding we suggest reading Appendix A.
The Clifford algebra $Cl_{p,q}(R)$ is a $2^{p+q}$-dimensional associative algebra over a ring $R$. It is generated by $e_1, e_2, \ldots, e_{p+q}$.

These are the rules of multiplication of generators:

1. $e_0$ is a neutral element of multiplication,

2. $e_i^2 = 1$ for $i \le p$,

3. $e_i^2 = -1$ for $i > p$,

4. $e_i e_j = -e_j e_i$ if $i \ne j$.

All products of ordered sets of generators form a basis of $Cl_{p,q}(R)$ ($e_0$ is treated as a product of the empty set). We provide a standard addition and we extend multiplication to all elements in an associative way.
We will be mainly interested only in $Cl_{n,0}(\mathbb{Z})$² and its natural embedding into $Cl_{n,0}(\mathbb{R})$. As $q = 0$, we can neglect condition 3 when analyzing these algebras.

² Closely related Clifford algebras with degenerate quadratic forms, i.e., with $e_i^2 = 0$, appear in geometric literature as exterior algebras.
For $X = \{x_1, x_2, \ldots, x_k\} \subseteq [n]$ with $x_1 < x_2 < \cdots < x_k$, let $e_X = e_{x_1} e_{x_2} \cdots e_{x_k}$. Each element of $Cl_{n,0}(\mathbb{R})$ can be represented as $\sum_{X \subseteq [n]} a_X e_X$, where $a_X$ are real coefficients. Using condition 4 we can deduce a general formula for multiplication in $Cl_{n,0}(\mathbb{R})$:

(2) $\Big(\sum_{A \subseteq [n]} a_A e_A\Big) \cdot \Big(\sum_{B \subseteq [n]} b_B e_B\Big) = \sum_{X \subseteq [n]} \Big(\sum_{A \triangle B = X} (-1)^{\pi(A,B)} a_A b_B\Big) e_X,$

where the meaning of $\pi$ is explained in (1).
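Formula (2) lends itself to a direct dictionary-based implementation, with basis elements $e_X$ indexed by subsets. The sketch below assumes the rule $e_A e_B = (-1)^{\pi(A,B)} e_{A \triangle B}$ valid in $Cl_{n,0}$ (every generator squares to 1); it is an illustration of the formula, not code from the paper.

```python
def pi(A, B):
    """Sign exponent from definition (1): pairs (x, y) in A x B with x > y."""
    return sum(1 for x in A for y in B if x > y)

def cl_mul(f, g):
    """Multiply elements of Cl_{n,0}, stored as dicts {frozenset basis index: coefficient}."""
    h = {}
    for A, a in f.items():
        for B, b in g.items():
            X = A ^ B  # symmetric difference: e_A e_B = +/- e_{A xor B}
            s = -1 if pi(A, B) % 2 else 1
            h[X] = h.get(X, 0) + s * a * b
    return {X: c for X, c in h.items() if c != 0}

e1 = {frozenset({1}): 1}
e2 = {frozenset({2}): 1}
assert cl_mul(e1, e1) == {frozenset(): 1}         # e_i^2 = 1
assert cl_mul(e2, e1) == {frozenset({1, 2}): -1}  # generators anticommute
```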
As a Clifford algebra over $\mathbb{R}$ is semisimple, it is isomorphic to a product of matrix algebras by the Artin-Wedderburn theorem (see Theorem A). However, it is more convenient to first embed $Cl_{n,0}(\mathbb{R})$ in a different Clifford algebra that is isomorphic to a single matrix algebra. As a result, we obtain a monomorphism (see Definition A) $\phi$ into a matrix algebra of dimension $2^n$ and the following diagram commutes.

(3) [commutative diagram: multiplication in $Cl_{n,0}(\mathbb{R})$ corresponds, through $\phi$, to matrix multiplication in the image]

Thus, we can perform multiplication in the structure that is more convenient for us. Given two elements of $Cl_{n,0}(\mathbb{R})$, we can find their matrices under $\phi$, multiply them efficiently, and then revert the transform. The result always exists and belongs to the image of $\phi$ because the image is closed under multiplication. The monomorphism can be performed and reverted (within the image) in time $\mathcal{O}^*(2^n)$ [11]. However, the construction in [11] is analyzed in the infinite precision model. For the sake of completeness, we revisit this construction and prove the following theorem in Appendix B.
The multiplication in $Cl_{n,0}(\mathbb{Z})$, with coefficients having $\mathcal{O}(b)$ bits, can be performed in time $2^{\frac{\omega}{2}n} \cdot (n + b)^{\mathcal{O}(1)}$.
In order to unify the notation we will represent each element of $Cl_{n,0}(\mathbb{Z})$, that is $\sum_{X \subseteq [n]} a_X e_X$, as a function $f : 2^{[n]} \to \mathbb{Z}$ with $f(X) = a_X$. We introduce a convolution $f *_{Cl} g$ as an equivalent of multiplication in $Cl_{n,0}(\mathbb{Z})$. The equation (2) can be now rewritten in a more compact form

(4) $(f *_{Cl} g)(X) = \sum_{A \triangle B = X} (-1)^{\pi(A,B)} f(A) g(B).$
4 Noncommutative Subset Convolution
We consider a linearly ordered universe $U$ of size $n$ and functions $f, g : 2^U \to \mathbb{Z}$.

Let $f \circledast g$ denote the Noncommutative Subset Convolution (NSC) of functions $f, g$ defined as

$(f \circledast g)(X) = \sum_{A \uplus B = X} (-1)^{\pi(A,B)} f(A) g(B).$

NSC on an $n$-element universe can be performed in time $\mathcal{O}^*(2^{\frac{\omega}{2}n})$.

Proof.

Observe that the condition $A \uplus B = X$ is equivalent to $(A \triangle B = X) \wedge (|A| + |B| = |X|)$, so

$(f \circledast g)(X) = \sum_{\substack{A \triangle B = X,\ |A| + |B| = |X|}} (-1)^{\pi(A,B)} f(A) g(B).$

Alternatively, writing $f_i$ for $f$ restricted to sets of size $i$ (and likewise $g_j$), we can compute the $\mathcal{O}(n^2)$ Clifford convolutions $f_i *_{Cl} g_j$ using Theorem 3 and read off $(f \circledast g)(X)$ as $\sum_{i+j=|X|} (f_i *_{Cl} g_j)(X)$. ∎
Observation.

The technique of paying a polynomial factor for grouping the sizes of sets will turn out useful in further proofs. We will call it size-grouping.
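Size-grouping can be illustrated directly: the NSC computed from its definition agrees with the size-grouped computation through the Clifford convolution (4). The helper names below are ours, and the script assumes the definitions as stated above.

```python
from itertools import combinations
import random

def pi(A, B):
    return sum(1 for x in A for y in B if x > y)

def subsets(S):
    S = sorted(S)
    for r in range(len(S) + 1):
        for c in combinations(S, r):
            yield frozenset(c)

def cl_conv(f, g):
    """Clifford convolution (4): sum over A xor B = X of (-1)^pi(A,B) f(A) g(B)."""
    h = {}
    for A, a in f.items():
        for B, b in g.items():
            X = A ^ B
            h[X] = h.get(X, 0) + (-1 if pi(A, B) % 2 else 1) * a * b
    return h

def nsc_naive(f, g, U):
    """Definition of NSC: sum over disjoint partitions A ⊎ B = X."""
    return {X: sum((-1 if pi(A, X - A) % 2 else 1) * f.get(A, 0) * g.get(X - A, 0)
                   for A in subsets(X))
            for X in subsets(U)}

def nsc_by_size_grouping(f, g, U):
    """A ⊎ B = X iff A xor B = X and |A| + |B| = |X|: group both functions by set size."""
    h = {X: 0 for X in subsets(U)}
    for i in range(len(U) + 1):
        fi = {A: v for A, v in f.items() if len(A) == i}
        for j in range(len(U) + 1):
            gj = {B: v for B, v in g.items() if len(B) == j}
            for X, v in cl_conv(fi, gj).items():
                if len(X) == i + j:  # keep only the disjoint-union terms
                    h[X] += v
    return h

U = frozenset(range(4))
f = {X: random.randint(-3, 3) for X in subsets(U)}
g = {X: random.randint(-3, 3) for X in subsets(U)}
assert nsc_naive(f, g, U) == nsc_by_size_grouping(f, g, U)
```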
In our applications we will need to compute a slightly more complex convolution.
When $f, g$ are of type $2^U \times 2^U \to \mathbb{Z}$, we can define their convolution (NSC2) as follows:

$(f \circledast g)(A, B) = \sum_{A_1 \uplus A_2 = A} \sum_{B_1 \uplus B_2 = B} (-1)^{\pi(A_1, A_2) + \pi(B_1, B_2)} f(A_1, B_1) g(A_2, B_2).$
NSC2 on an $n$-element universe can be performed in time $\mathcal{O}^*(2^{\omega n})$.
Proof.
Let us introduce a new universe $U' = U^{(1)} \uplus U^{(2)}$ of size $2n$, consisting of two copies of $U$ with an order so that each element of $U^{(2)}$ is greater than any element of $U^{(1)}$. To underline that, we will use the notation $A_i$ when summing over subsets of $U^{(1)}$ and $B_i$ when summing over subsets of $U^{(2)}$. In order to reduce NSC2 to NSC on the universe $U'$ we need to replace the factor $(-1)^{\pi(A_1,A_2) + \pi(B_1,B_2)}$ with $(-1)^{\pi(A_1 \cup B_1,\, A_2 \cup B_2)}$. The latter exponent can be expressed as $\pi(A_1,A_2) + \pi(A_1,B_2) + \pi(B_1,A_2) + \pi(B_1,B_2)$ due to Claim 2. As all elements from $U^{(1)}$ compare less than elements from $U^{(2)}$, we get $\pi(A_1,B_2) = 0$ and $\pi(B_1,A_2) = |B_1| \cdot |A_2|$, which depends only on the sizes of $B_1$ and $A_2$. To summarize,

$\pi(A_1 \cup B_1,\, A_2 \cup B_2) = \pi(A_1,A_2) + \pi(B_1,B_2) + |B_1| \cdot |A_2|.$
To deal with the factor $(-1)^{|B_1| \cdot |A_2|}$ we have to split the convolution into 4 parts for different parities of $|B_1|$ and $|A_2|$. We define functions $f^0, f^1$ (and analogously $g^0, g^1$) as the restrictions of $f$ and $g$ to arguments where the set responsible for the sign has, respectively, even or odd size.
Now we can reduce NSC2 to 4 simpler convolutions.
We have shown that computing NSC2 is as easy as NSC on a universe two times larger. Using Theorem 4 directly gives us the desired complexity. ∎
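The sign bookkeeping behind this reduction can be replayed by brute force: for sets drawn from two stacked copies of the universe, $\pi(A_1 \cup B_1, A_2 \cup B_2) = \pi(A_1,A_2) + \pi(B_1,B_2) + |B_1| \cdot |A_2|$, since the cross terms collapse. A short verification under these assumptions:

```python
from itertools import combinations

def pi(A, B):
    return sum(1 for x in A for y in B if x > y)

n = 3
U1 = list(range(n))             # first copy of the universe
U2 = [x + n for x in range(n)]  # second copy; every element exceeds all of U1

def subsets(S):
    for r in range(len(S) + 1):
        for c in combinations(S, r):
            yield set(c)

for A1 in subsets(U1):
    for A2 in subsets(U1):
        for B1 in subsets(U2):
            for B2 in subsets(U2):
                lhs = pi(A1 | B1, A2 | B2)
                # additivity (Claim 2) plus pi(A1, B2) = 0 and pi(B1, A2) = |B1||A2|
                rhs = pi(A1, A2) + pi(B1, B2) + len(B1) * len(A2)
                assert lhs == rhs
```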
5 Counting Steiner trees
We will revisit the theorem stated in the aforementioned work.
[Bodlaender et al. [4]] There exist algorithms that, given a graph $G$, count the number of Steiner trees of size $k$ for each $k$ in time $\mathcal{O}^*(5^{pw})$ if a path decomposition of width $pw$ is given, and in time $\mathcal{O}^*(10^{tw})$ if a tree decomposition of width $tw$ is given.
Both algorithms use dynamic programming over tree or path decompositions. We introduce some decomposition-based order on $V$ and fix a vertex $v_1$. An incidence matrix is defined following [4]. For each node $x$ of the decomposition we define a function $f_x$ whose arguments describe partial solutions. The idea is to express the number of Steiner trees with exactly $k$ edges through the values of this function at the root.
(5) 
As observed in [4], only a constant number of state triples is allowed for each vertex (see below), so there are at most $\mathcal{O}^*(5^{|B_x|})$ triples for which $f_x$ returns a nonzero value.
If a node $x$ has a child $y$ and is of type introduce vertex, introduce edge, or forget vertex, then the function $f_x$ can be computed from $f_y$ in linear time with respect to the number of nontrivial states. Saying this is just a reformulation of Theorem 5 for path decompositions. The only thing that is more difficult for tree decompositions is that they also include join nodes, each having two children. Below is the recursive formula for $f_x$ for a join node $x$ having children $y, z$. (As confirmed by the authors [5], the formula in [4] for the join node is missing the first argument to the function tracking the number of vertices of a Steiner tree, hence we present a corrected version of this formula.)
(6) 
The next lemma, however not stated explicitly in the discussed work, follows from the proof of Theorem 5 (Theorem 4.4 in [4]).
Assume there is an algorithm computing all nonzero values of $f_x$ given by (6) with running time $T$. Then the number of Steiner trees of size $k$ in a graph $G$ can be counted in time $\mathcal{O}^*(T)$ if a tree decomposition of width $tw$ is given.
We will change notation for our convenience. Each function will be matched with a set. Let us replace the functions $f_x$ with functions having the first argument fixed and operating on triples of sets. In this setting, the convolution can be written as
(7) 
Observe that size-grouping allows us to sacrifice a polynomial factor and neglect the restrictions on the set sizes. Hence, we can work with a simpler formula
(8) 
The only triples allowed for each vertex are $(0,0,0)$, $(1,0,0)$, $(1,1,0)$, $(1,0,1)$, $(1,1,1)$. In terms of set notation we can say that for a triple of sets $(B, C, D)$ with a nonzero value we have $C, D \subseteq B$. Let $g_B$ be $g$ with the first set fixed, i.e. $g_B(C, D) = g(B, C, D)$.
For a fixed first set $B$, all values of the convolution restricted to $B$ can be computed in time $\mathcal{O}^*(2^{\omega |B|})$.
The convolution (7) can be performed in time $\mathcal{O}^*((2^\omega + 1)^{tw})$.
Proof.
We use size-grouping to reduce the problem to computing (8). Then we iterate through all possible sets $B$ and take advantage of Lemma 5. Writing $t$ for the bag size, the total number of operations (modulo a polynomial factor) is bounded by

$\sum_{k=0}^{t} \binom{t}{k} 2^{\omega k} = (1 + 2^\omega)^t.$
∎
Keeping in mind that (6) and (7) are equivalent and combining the two lemmas above, we obtain the following result.
The number of Steiner trees of size $k$ in a graph $G$ can be computed in time $\mathcal{O}^*((2^\omega + 1)^{tw})$ if a tree decomposition of width $tw$ is given.
The space complexity of the algorithm is .
Solving the decision version of Feedback Vertex Set can be reduced to the Maximum Induced Forest problem [4]. As observed in [4], the join operation for Maximum Induced Forest is analogous to (6).
The existence of a feedback vertex set of size at most $k$ in a graph $G$ can be determined in time $\mathcal{O}^*((2^\omega + 1)^{tw})$ if a tree decomposition of width $tw$ is given.
6 Counting Hamiltonian cycles
As in the previous section, we start with a previously known theorem.
[Bodlaender et al. [4]] There exist algorithms that, given a graph $G$, count the number of Hamiltonian cycles in time $\mathcal{O}^*(6^{pw})$ if a path decomposition of width $pw$ is given, and in time $\mathcal{O}^*(15^{tw})$ if a tree decomposition of width $tw$ is given.
For each node $x$ of the decomposition a function $f_x$ is defined, with arguments analogous to (5). The number of Hamiltonian cycles can be expressed through the values of this function at the root.
(9) 
As observed in [4], we can restrict ourselves only to some subspace of states, as only certain combinations of coordinates occur in the nonzero summands of (9).
This time there are at most $\mathcal{O}^*(6^{|B_x|})$ triples for which $f_x$ returns a nonzero value. We again argue that introduce vertex, introduce edge, and forget vertex nodes can be handled the same way as for path decompositions, and the only bottleneck is formed by join nodes. We present a formula for $f_x$ if $x$ is a join node with children $y, z$.
(10) 
Analogously to the algorithm for Steiner Tree, we formulate our claim as a lemma following from the proof of Theorem 6 (Theorem 4.3 in [4]).
Assume there is an algorithm computing all nonzero values of $f_x$ given by (10) with running time $T$. Then the number of Hamiltonian cycles in a graph $G$ can be counted in time $\mathcal{O}^*(T)$ if a tree decomposition of width $tw$ is given.
The only allowed triples of coordinates for each vertex are $000, 100, 101, 110, 111, 211$.
Assume equation (10) holds. Then it remains true after the following translation of the set of allowed triples: $000 \to 000$, $100 \to 100$, $101 \to 101$, $110 \to 010$, $111 \to 011$, $211 \to 111$.
Proof.
The factors do not change, as we do not modify the coordinates given by the functions. Triples that match in (10) translate into matching triples, as the transformation keeps their additive structure. This fact can be seen in the tables below.
  +  | 000 | 100 | 101 | 110 | 111 | 211
 000 | 000 | 100 | 101 | 110 | 111 | 211
 100 | 100 |  X  |  X  |  X  | 211 |  X
 101 | 101 |  X  |  X  | 211 |  X  |  X
 110 | 110 |  X  | 211 |  X  |  X  |  X
 111 | 111 | 211 |  X  |  X  |  X  |  X
 211 | 211 |  X  |  X  |  X  |  X  |  X

  +  | 000 | 100 | 101 | 010 | 011 | 111
 000 | 000 | 100 | 101 | 010 | 011 | 111
 100 | 100 |  X  |  X  |  X  | 111 |  X
 101 | 101 |  X  |  X  | 111 |  X  |  X
 010 | 010 |  X  | 111 |  X  |  X  |  X
 011 | 011 | 111 |  X  |  X  |  X  |  X
 111 | 111 |  X  |  X  |  X  |  X  |  X

(X marks pairs of states that never combine.)
∎
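The two tables encode a partial operation on states (coordinate-wise addition, defined only when the result is again an allowed state), and the proof amounts to the translation respecting this partial structure. This can be replayed mechanically:

```python
old_states = ["000", "100", "101", "110", "111", "211"]
new_states = ["000", "100", "101", "010", "011", "111"]
tr = dict(zip(old_states, new_states))  # the translation from the lemma

def combine(s, t, allowed):
    """Coordinate-wise sum of two state triples; None if the result is not allowed."""
    r = "".join(str(int(a) + int(b)) for a, b in zip(s, t))
    return r if r in allowed else None

# the translation preserves which pairs combine, and what they combine to
for s in old_states:
    for t in old_states:
        r = combine(s, t, old_states)
        r2 = combine(tr[s], tr[t], new_states)
        assert (r is None) == (r2 is None)
        if r is not None:
            assert tr[r] == r2
```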
Therefore we can treat the functions as binary ones. We start by unifying the notation, binding functions with sets. Let us replace the functions with their equivalents operating on triples of sets. In this setting, the convolution looks as follows.
(11) 
Performing the convolution (11) within the space of allowed triples is noticeably more complicated than the computations in Section 5. Therefore the proof of the following lemma is placed in Appendix C.
The convolution (11) can be computed in time $\mathcal{O}^*((2^\omega + 2)^{tw})$.
The number of Hamiltonian cycles in a graph $G$ can be computed in time $\mathcal{O}^*((2^\omega + 2)^{tw})$ if a tree decomposition of width $tw$ is given.
The space complexity of the algorithm is .
7 Conclusions
We have presented the Noncommutative Subset Convolution, a new algebraic tool in algorithmics based on the theory of Clifford algebras. This allowed us to construct faster deterministic algorithms for Steiner Tree, Feedback Vertex Set, and Hamiltonian Cycle, parameterized by the treewidth. As the determinant-based approach applies to all problems solvable by the Cut & Count technique [4, 8], the NSC can improve running times for a larger class of problems.
The first open question is whether the gap between time complexities for the decision and counting versions of these problems could be closed. Or maybe one can prove this gap inevitable under a wellestablished assumption, e.g. SETH?
The second question is whether it is possible to prove a generic theorem from which lemmas like those in Sections 5 and 6 would follow easily. It might be possible to characterize convolution algebras that are semisimple and to algorithmically construct isomorphisms with their canonical forms described by the Artin-Wedderburn theorem.
The last question is what other applications of Clifford algebras and the Artin-Wedderburn theorem can be found in algorithmics.
Acknowledgements. I would like to thank Marek Cygan for pointing out the bottleneck of the previously known algorithms and for the support during writing this paper. I would also like to thank Paul Leopardi for helping me understand the fast Fourierlike transform for Clifford algebras.
References
 [1] John A Beachy. Introductory lectures on rings and modules, volume 47. Cambridge University Press, 1999.
 [2] Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Fourier meets Möbius: Fast subset convolution. In Proceedings of the Thirty-ninth Annual ACM Symposium on Theory of Computing, STOC '07, pages 67–74, New York, NY, USA, 2007. ACM. doi:10.1145/1250790.1250801.
 [3] Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM Journal on Computing, 25(6):1305–1317, 1996.
 [4] Hans L. Bodlaender, Marek Cygan, Stefan Kratsch, and Jesper Nederlof. Deterministic single exponential time algorithms for connectivity problems parameterized by treewidth. Inf. Comput., 243(C):86–111, August 2015. doi:10.1016/j.ic.2014.12.008.
 [5] Marek Cygan. Private communication, 2016.
 [6] Marek Cygan, Fedor Fomin, Bart M. P. Jansen, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Open problems for FPT School 2014. Available from: http://fptschool.mimuw.edu.pl/opl.pdf.
 [7] Marek Cygan, Stefan Kratsch, and Jesper Nederlof. Fast Hamiltonicity checking via bases of perfect matchings. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, pages 301–310. ACM, 2013.
 [8] Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michał Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk. Solving connectivity problems parameterized by treewidth in single exponential time. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, pages 150–159. IEEE, 2011.
 [9] Fedor V. Fomin, Daniel Lokshtanov, Fahad Panolan, and Saket Saurabh. Representative sets of product families. In Algorithms - ESA 2014, pages 443–454. Springer, 2014.
 [10] Ton Kloks. Treewidth: computations and approximations, volume 842. Springer Science & Business Media, 1994.
 [11] Paul Leopardi. A generalized FFT for Clifford algebras. Bulletin of the Belgian Mathematical Society, 11(5):663–688, 03 2005.
 [12] David K. Maslen and Daniel N. Rockmore. Generalized FFTs - a survey of some recent results. In Groups and Computation II, volume 28, pages 183–287. American Mathematical Soc., 1997.
 [13] Neil Robertson and Paul D. Seymour. Graph minors. III. Planar tree-width. Journal of Combinatorial Theory, Series B, 36(1):49–64, 1984.
 [14] Johan M. M. van Rooij, Hans L. Bodlaender, and Peter Rossmanith. Dynamic Programming on Tree Decompositions Using Generalised Fast Subset Convolution, pages 566–577. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. doi:10.1007/978-3-642-04128-0_51.
 [15] Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing, pages 887–898. ACM, 2012.
Appendix A Associative algebras
This section is not crucial to understanding the paper, but it provides a bigger picture of the applied theory. We assume that readers are familiar with basic algebraic structures like rings and fields. A more detailed introduction can be found, e.g., in [1].
A linear space $A$ over a field $F$ (or, more generally, a module over a ring $R$) is called an associative algebra if it admits a multiplication operator satisfying the following conditions:

$(a \cdot b) \cdot c = a \cdot (b \cdot c)$,

$a \cdot (b + c) = a \cdot b + a \cdot c$ and $(b + c) \cdot a = b \cdot a + c \cdot a$,

$(\lambda a) \cdot b = a \cdot (\lambda b) = \lambda (a \cdot b)$ for every scalar $\lambda$.
A set $S \subseteq A$ is called a generating set if every element of $A$ can be obtained from $S$ by addition and multiplication. The elements of $S$ are called generators. It is easy to see that a multiplication defined on a generating set extends in an unambiguous way to the whole algebra. We will often omit the term associative, as we study only such algebras.
The product of algebras is an algebra with multiplication performed independently on each coordinate.
For algebras $A, B$ over a ring $R$, a function $h : A \to B$ is called a homomorphism of algebras if it satisfies the following conditions:

$h(a + b) = h(a) + h(b)$,

$h(a \cdot b) = h(a) \cdot h(b)$,

$h(\lambda a) = \lambda h(a)$ for every scalar $\lambda$.
If $h$ is reversible within its image then we call it a monomorphism, and if additionally $h(A) = B$ then we call $h$ an isomorphism.
Monomorphisms of algebras turn out to be extremely useful when multiplication in the algebra $B$ is simpler than multiplication in $A$, because we can compute $a_1 \cdot a_2$ as $h^{-1}(h(a_1) \cdot h(a_2))$. This observation is used in Theorem 3 and the lemmas of Section 6 and Appendix C. For better intuition, we depict the various ways of performing multiplication in diagrams (3) and (14).
A subset $M$ of an algebra $A$ is called a simple left module if

$M$ is closed under addition,

$aM \subseteq M$ for every $a \in A$,

and the only proper subset of $M$ with these properties is $\{0\}$.
The next definition is necessary to exclude some cases of obscure algebras.
An algebra is called semisimple if there is no nonzero element $a$ such that for every simple left module $M$ the set $aM$ is $\{0\}$.
The theorem below was proven in full generality for algebras over arbitrary rings but we will formulate its simpler version for fields.
[Artin-Wedderburn [1]] Every finite-dimensional associative semisimple algebra $A$ over a field $F$ is isomorphic to a product of matrix algebras

$A \cong M_{n_1}(F_1) \times M_{n_2}(F_2) \times \cdots \times M_{n_k}(F_k),$

where $F_i$ are fields containing $F$.
The related isomorphism is called a generalized Fourier transform (GFT) for $A$. If we are able to perform the GFT efficiently, then we can reduce computations in $A$ to matrix multiplication. For some classes of algebras, e.g. abelian group algebras [12], there are known algorithms for the GFT with running time $\mathcal{O}(d \operatorname{polylog} d)$, where $d = \dim A$.
If the field $F$ is algebraically closed (e.g. $\mathbb{C}$) then all $F_i = F$ and $\sum_i n_i^2$ equals the dimension of $A$. If the algebra is commutative then all $n_i = 1$ and $A$ is isomorphic to a product of fields. This is actually the case in the Fast Subset Convolution [2], where the isomorphism is given by the Möbius transform.
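For intuition, the commutative case can be made concrete with the union-convolution (covering product) algebra on $2^{[n]}$: the zeta/Möbius transform maps it isomorphically onto the pointwise-product algebra, i.e., a product of one-dimensional factors. A sketch with bitmask encoding and our own naming:

```python
import random

def zeta(f, n):
    """F[X] = sum of f[Y] over Y subset of X (the transform used in the FSC)."""
    F = list(f)
    for i in range(n):
        for X in range(1 << n):
            if X >> i & 1:
                F[X] += F[X ^ (1 << i)]
    return F

def cover_product(f, g, n):
    """Union-convolution: (f ∪ g)(X) = sum over A ∪ B = X of f(A) g(B)."""
    h = [0] * (1 << n)
    for A in range(1 << n):
        for B in range(1 << n):
            h[A | B] += f[A] * g[B]
    return h

n = 4
f = [random.randint(-5, 5) for _ in range(1 << n)]
g = [random.randint(-5, 5) for _ in range(1 << n)]
# zeta is an algebra homomorphism: union-convolution becomes pointwise product
lhs = zeta(cover_product(f, g, n), n)
rhs = [a * b for a, b in zip(zeta(f, n), zeta(g, n))]
assert lhs == rhs
```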
Appendix B Proof of Theorem 3
Proof.
The transformation can be computed and reverted (within the image) in time $\mathcal{O}^*(2^n)$, assuming infinite precision and unit cost of any arithmetic operation [11]. In order to bound the cost of computing the transform accurately, we need to look inside the paper [11].
The transformation can be represented as a composition of a monomorphic embedding into another Clifford algebra and an isomorphism with the matrix algebra. We modify the isomorphism diagram (3) to show these mappings in more detail.
We begin with the embedding of $Cl_{n,0}(\mathbb{R})$ into a Clifford algebra of the form $Cl_{m,m}(\mathbb{R})$ (see Definition 4.4 in [11]). This transformation is just a translation of basis, so no arithmetic operations are required.
For the sake of disambiguation, we indicate the domain of the function with a lower index: . In the th step, we construct a matrix representation of . Let denote the projections of onto subspaces spanned by products of respectively even and odd number of generators. Of course, and . Such an element can be represented as for being the first and the last generator () and . Now we can apply the recursive formula from Theorem 5.2 in [11]:
where stands for a block matrix with applied to each element of .
We see that computing the representation can be reduced to computing 4 analogous pairs for the smaller algebra and combining them using addition and subtraction. Hence, the coefficients of the obtained matrix will also be integers, with the number of bits growing by at most a constant per step, and the total number of arithmetic operations is $\mathcal{O}^*(2^n)$.
The inverse transform is also computed in the same number of steps and we continue using a lower index to indicate the domain, as for the forward transform. Let
Then from Theorem 7.1 in [11] we know that