Duality of Graphical Models and Tensor Networks
In this article we show the duality between tensor networks and undirected graphical models with discrete variables. We study tensor networks on hypergraphs, which we call tensor hypernetworks. We show that the tensor hypernetwork on a hypergraph exactly corresponds to the graphical model given by the dual hypergraph. We translate various notions under duality. For example, marginalization in a graphical model is dual to contraction in the tensor network. Algorithms also translate under duality. We show that belief propagation corresponds to a known algorithm for tensor network contraction. This article is a reminder that the research areas of graphical models and tensor networks can benefit from interaction.
Graphical models and tensor networks are very popular but mostly separate fields of study. Graphical models are used in artificial intelligence, machine learning, and statistical mechanics. Tensor networks show up in areas such as quantum information theory and partial differential equations [8, 11].
Tensor network states are tensors which factor according to the adjacency structure of the vertices of a graph. On the other hand, graphical models are probability distributions which factor according to the clique structure of a graph. The joint probability distribution of several discrete random variables is naturally organized into a tensor. Hence both graphical models and tensor networks are ways to represent families of tensors that factorize according to a graph structure.
The relationship between particular graphical models and particular tensor networks has been studied in the past. For example, in  the authors reparametrize a hidden Markov model to make a matrix product state tensor network. In , a map is constructed that sends a restricted Boltzmann machine graphical model to a matrix product state. In , an example of a directed graphical model is given with a related tensor network on the same graph, to highlight computational advantages of the graphical model in that setting.
From the outset, there are differences in the graphical description. On the graphical models side, the factors in the decomposition correspond to cliques in the graph. On the tensor networks side, the factors are associated to the vertices of the graph.
In this article, we show a duality correspondence between graphical models and tensor networks. This correspondence applies to all graphical models and all tensor networks and does not require reparametrization of either. Our mathematical relationship stems from hypergraph duality. We begin by recalling the definition of a hypergraph.
A hypergraph $H = (V, E)$ consists of a set of vertices $V$ and a set of hyperedges $E$. A hyperedge is any subset of the vertices.
There are two ways to construct a hypergraph from a matrix $M$ of size $n \times m$ with entries in $\{0, 1\}$. First, we let the rows index the vertices and the columns index the hyperedges. The non-vanishing entries in each column give the vertices that appear in that hyperedge.
In this case $M$ is the incidence matrix of the hypergraph. We allow nested or repeated hyperedges, as well as hyperedges containing one or no vertices, so there are no restrictions on $M$. Alternatively, we can construct a hypergraph with incidence matrix $M^T$. This is the dual hypergraph to the one with incidence matrix $M$; see [4, Section 1.1].
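As a small illustration (with a hypothetical $3 \times 3$ incidence matrix), the passage to the dual hypergraph is simply transposition of the incidence matrix:

```python
import numpy as np

# Hypothetical 0/1 incidence matrix M: rows index vertices, columns index
# hyperedges. Column j lists (via its nonzero entries) the vertices of
# hyperedge j.
M = np.array([
    [1, 1, 0],   # vertex 1 lies in hyperedges 1 and 2
    [1, 0, 1],   # vertex 2 lies in hyperedges 1 and 3
    [0, 1, 1],   # vertex 3 lies in hyperedges 2 and 3
])

# The dual hypergraph has incidence matrix M^T: the roles of vertices
# and hyperedges are exchanged.
M_dual = M.T

# The dual of the dual recovers the original hypergraph.
assert np.array_equal(M_dual.T, M)
```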
We now add extra data to the matrix $M$. We attach a positive integer $d_i$ to each row. To each column of $M$ we assign a tensor whose size is the product of the $d_i$ as $i$ ranges over the non-vanishing entries in the column. For example, if the non-vanishing entries of a column lie in rows $i_1, \dots, i_k$, the tensor associated to that column has size $d_{i_1} \times \cdots \times d_{i_k}$. We explain how this defines the data of both a graphical model and of a tensor network. Filling in the entries of the tensors gives a distribution in a graphical model (if we choose entries in $\mathbb{R}_{\geq 0}$), or a tensor network state in a tensor network. We see how a graphical model is visualized by the hypergraph with incidence matrix $M$, while the tensor network is visualized by the hypergraph of $M^T$.
Before stating our duality correspondence, we define graphical models in terms of hypergraphs, and introduce tensor hypernetworks. We keep in mind how the definitions translate to the incidence matrix set-up from above.
Consider a hypergraph $H = (V, E)$ with $V = \{1, \dots, n\}$. An undirected graphical model with respect to $H$ is the set of probability distributions on the random variables $X_1, \dots, X_n$ which factor according to the hyperedges in $E$:
$$p(x_1, \dots, x_n) = \frac{1}{Z} \prod_{C \in E} \psi_C(x_C).$$
Here, the random variable $X_i$ takes values $x_i \in [d_i] = \{1, \dots, d_i\}$, the subset $x_C$ equals $(x_i)_{i \in C}$, and the function $\psi_C$ is a clique potential with domain $\prod_{i \in C} [d_i]$. The normalizing constant $Z$ ensures the probabilities sum to one.
When all random variables are discrete, the joint probabilities form a tensor of size $d_1 \times \cdots \times d_n$ and the clique potentials are tensors of size $\prod_{i \in C} d_i$, all with entries in $\mathbb{R}_{\geq 0}$. The graphical model is depicted as the hypergraph whose incidence matrix has rows indexed by the random variables and columns indexed by the hyperedges.
If we fix the values in the clique potentials, we obtain a particular distribution in the graphical model. We recover the usual depiction of the graphical model by a graph instead of a hypergraph by connecting pairs of vertices by an edge if they lie in the same hyperedge.
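For instance, a sketch with hypothetical potentials on the hyperedges $\{1, 2\}$ and $\{2, 3\}$ shows how filling in potential entries yields a joint probability tensor:

```python
import numpy as np

# A distribution in the graphical model with hyperedges {1,2} and {2,3},
# with hypothetical clique potentials; the joint probabilities then form
# a tensor of size 2 x 3 x 2.
psi_a = np.array([[1.0, 2.0, 1.0],
                  [0.5, 1.0, 2.0]])   # potential on (x1, x2)
psi_b = np.array([[2.0, 1.0],
                  [1.0, 1.0],
                  [1.0, 3.0]])        # potential on (x2, x3)

# p_{ijk} proportional to psi_a_{ij} * psi_b_{jk}.
p = np.einsum('ij,jk->ijk', psi_a, psi_b)
p /= p.sum()                          # divide by the normalizing constant Z
assert np.isclose(p.sum(), 1.0)
```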
Graphical models are sometimes required to factorize according to the maximal cliques of a graph. We see later how our set-up specializes to this case. Models with cliques that are not necessarily maximal can be called hierarchical models.
Consider a hypergraph $H = (V, E)$. To each hyperedge $e \in E$ we associate a positive integer $d_e$, called the size of the hyperedge. To each vertex $v \in V$ we assign a tensor $T_v \in \bigotimes_{e \ni v} \mathbb{F}^{d_e}$, where $\mathbb{F}$ is usually $\mathbb{R}$ or $\mathbb{C}$. The tensor hypernetwork state is obtained from the tensors $T_v$ by contracting indices along all hyperedges in the graph that contain two or more vertices. We call hyperedges containing only one vertex dangling edges.
Note that as opposed to graphical models, in tensor hypernetworks we assign tensors to the vertices of the graph rather than the hyperedges.
Restricting the definition of a tensor hypernetwork to hyperedges with at most two vertices gives the usual definition of a tensor network. The following example illustrates a widely used tensor network.
Example 1.5 (Tucker decomposition).
Consider the graph
We have a core tensor $C \in \mathbb{F}^{m_1 \times m_2 \times m_3}$ and matrices $U_i \in \mathbb{F}^{d_i \times m_i}$. The entries of the tensor $T$ are
$$T_{i_1 i_2 i_3} = \sum_{j_1, j_2, j_3} C_{j_1 j_2 j_3} (U_1)_{i_1 j_1} (U_2)_{i_2 j_2} (U_3)_{i_3 j_3}.$$
For suitable weights $m_i$ and orthogonal matrices $U_i$, this is the Tucker decomposition of $T$.
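The factorization can be sketched numerically; the core size $2 \times 3 \times 4$ and the matrix sizes below are hypothetical choices:

```python
import numpy as np

# A minimal sketch of a Tucker-format tensor, assuming a hypothetical core
# tensor C of size 2x3x4 and factor matrices U1, U2, U3.
rng = np.random.default_rng(0)
C = rng.standard_normal((2, 3, 4))    # core tensor
U1 = rng.standard_normal((5, 2))      # factor matrices
U2 = rng.standard_normal((6, 3))
U3 = rng.standard_normal((7, 4))

# T_{ijk} = sum_{abc} C_{abc} (U1)_{ia} (U2)_{jb} (U3)_{kc}
T = np.einsum('abc,ia,jb,kc->ijk', C, U1, U2, U3)
assert T.shape == (5, 6, 7)
```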
An important reason to extend the definition of tensor networks to tensor hypernetworks, other than the duality with graphical models explained in the next section, is that significant classes of tensors naturally arise from tensor hypernetworks.
Example 1.6 (Tensor rank (CP rank)).
Consider this hypergraph on vertex set $\{1, 2, 3\}$.
There is one dangling edge for each vertex, with sizes $d_1, d_2, d_3$. There is one more hyperedge of size $r$, represented by a shaded triangle, that connects all three vertices. The tensors attached to the three vertices are matrices $A$, $B$, $C$ of sizes $d_1 \times r$, $d_2 \times r$, and $d_3 \times r$. The tensor hypernetwork state $T$ has size $d_1 \times d_2 \times d_3$ with entries
$$T_{ijk} = \sum_{l=1}^{r} A_{il} B_{jl} C_{kl}.$$
The set of tensors given by this tensor hypernetwork equals the set of tensors of rank at most $r$. The same structure on $n$ vertices, with dangling-edge sizes $d_1, \dots, d_n$, gives tensors of size $d_1 \times \cdots \times d_n$ and rank at most $r$. Tensor rank is the most direct generalization of matrix rank to the setting of tensors. The set of tensors of rank at most $r$ is naturally parametrized by this tensor hypernetwork without requiring special structure on the tensors at the vertices.
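A sketch of this parametrization, with hypothetical sizes $d = (3, 4, 5)$ and $r = 2$:

```python
import numpy as np

# A sketch of the CP (tensor rank) hypernetwork: three factor matrices of
# hypothetical sizes 3 x r, 4 x r, 5 x r share one rank-r hyperedge index.
rng = np.random.default_rng(1)
r = 2
A = rng.standard_normal((3, r))
B = rng.standard_normal((4, r))
Cm = rng.standard_normal((5, r))

# T_{ijk} = sum_l A_{il} B_{jl} Cm_{kl}: a tensor of rank at most r.
T = np.einsum('il,jl,kl->ijk', A, B, Cm)
assert T.shape == (3, 4, 5)
```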
The rest of the paper is organized as follows. We describe the duality correspondence between graphical models and tensor networks in Section 2. In Section 3 we explain how certain structures (graphs, trees, and homotopy types) and operators (marginalization, conditioning, and entropy) translate under the duality map. In Section 4 we give an algorithmic application of our duality correspondence.
In this section we give the duality between graphical models and tensor networks.
A discrete graphical model associated to a hypergraph $H$ with clique potentials $\psi_C$ is the same as the data of a tensor hypernetwork associated to its dual hypergraph $H^*$ with tensor $\psi_C$ at each vertex $C$ of $H^*$.
Consider a joint distribution (or tensor) $p$ in the graphical model defined by the hypergraph $H$. As described above, the incidence matrix $M$ of $H$ has rows corresponding to the variables $X_1, \dots, X_n$ and columns corresponding to the cliques $C \in E$. The data of the distribution also contains a potential function $\psi_C$ for each clique $C$, which is equivalently a tensor of size $\prod_{i \in C} d_i$.
The dual hypergraph $H^*$ has incidence matrix $M^T$. It is a hypergraph whose vertices correspond to the hyperedges of $H$ and whose hyperedges correspond to the vertices of $H$. By definition of the dual hypergraph, vertex $i$ lies in hyperedge $C$ of $H$ if and only if vertex $C$ lies in hyperedge $i$ of $H^*$. Associating the tensor $\psi_C$ to each vertex $C$ of $H^*$ gives a tensor hypernetwork for $H^*$. Moreover, up to scaling by the normalization constant $Z$, the joint probability tensor $p$ is given by
$$Z \cdot p(x_1, \dots, x_n) = \prod_{C \in E} \psi_C(x_C).$$
The last expression is the tensor hypernetwork state before contracting the hyperedges. ∎
Note that since $(M^T)^T = M$, the dual of the dual hypergraph is equal to the original one. This implies a one-to-one correspondence between the set of distributions in the graphical model defined by a hypergraph $H$ and the set of non-contracted tensor hypernetwork states on its dual $H^*$ with the corresponding sizes.
Up to a global scaling constant, there is a one-to-one correspondence between the distributions in the graphical model and the tensor hypernetwork states on the dual hypergraph.
Note that while clique potentials are required to take values in $\mathbb{R}_{\geq 0}$ for probabilistic reasons, the definition and factorization structure of graphical models carry over to the case where the entries of these tensors belong to a general field $\mathbb{F}$. In the rest of this section we illustrate our results by showing the dual structures of some familiar examples of tensor network states and graphical models.
Example 2.3 (Matrix Product States (MPS)/Tensor Trains).
These are a popular family of tensor networks in quantum physics and numerical applications (where the two names come from). We return to them in detail in Section 4. The MPS network on the left is dual to the graphical model on the right.
The top row of edges in the tensor network is contracted. We see later that this corresponds to the top row of variables in the graphical model being hidden.
Example 2.4 (No three-way interaction model).
This graphical model consists of all probability distributions on three random variables that factor as
$$p(x_1, x_2, x_3) = \frac{1}{Z} \, \psi_{12}(x_1, x_2) \, \psi_{13}(x_1, x_3) \, \psi_{23}(x_2, x_3),$$
for clique potential matrices $\psi_{12}, \psi_{13}, \psi_{23}$. It is represented by a hypergraph in which all hyperedges have two vertices. The incidence matrix of the hypergraph is
$$\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.$$
This matrix is symmetric. Hence the tensor network corresponding to this graphical model is given by the same triangle graph. We note that, up to dangling edges, this is also the shape of the tensor network of the tensor that represents the matrix multiplication operator.
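A numerical sketch of this model, with hypothetical $2 \times 2$ potential matrices; the einsum expression below is exactly the triangle tensor network:

```python
import numpy as np

# The no-three-way interaction model, with hypothetical 2x2 potentials:
# every distribution in the model factors over the three pairs of variables.
rng = np.random.default_rng(5)
psi12 = rng.random((2, 2))
psi13 = rng.random((2, 2))
psi23 = rng.random((2, 2))

# Unnormalized joint tensor p_{ijk} = psi12_{ij} psi13_{ik} psi23_{jk};
# read as a tensor network, the same einsum is the triangle network.
p = np.einsum('ij,ik,jk->ijk', psi12, psi13, psi23)
Z = p.sum()
assert np.isclose((p / Z).sum(), 1.0)
```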
Example 2.5 (The Ising Model).
This graphical model is defined by the cliques of a two-dimensional lattice such as the grid on the right. Its dual is the hypergraph on the left.
Example 2.6 (Projected Entangled Pair States (PEPS)).
This tensor network is a two-dimensional analogue of MPS. It depicts two-dimensional quantum spin systems. Its hypergraph is depicted on the left, with its dual graphical model on the right. Note the structural similarity with Example 2.5.
Example 2.7 (The Multi-scale Entanglement Renormalization Ansatz (MERA)).
This tensor network is popular in the quantum community, due to its ability to represent relevant tensors and to support efficient computation. It is shown on the left, with its dual graphical model on the right.
Finally, we point out the following fun fact.
Remark 2.8 (Duality of Tucker and CP decomposition).
Tensor networks and graphical models are often given special structure. For example, one can restrict to tensor networks that use a graph rather than a hypergraph. In this section we show how properties and operations for graphical models and tensor hypernetworks behave under the duality map.
3.1 Restricting to graphs
Graphs are special hypergraphs in which every hyperedge contains two vertices. They are also known as $2$-uniform hypergraphs. Each column of the incidence matrix of such a hypergraph sums to two. Taking the dual of a graph gives a hypergraph in which every vertex has degree two, also known as a $2$-regular hypergraph. We call a hypergraph at-most-$2$-regular if every vertex has degree at most $2$.
Tensor networks are dual to at-most-$2$-regular graphical models. Graph models (graphical models whose cliques are the edges of a graph) are dual to $2$-regular tensor hypernetworks.
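The column-sum/row-sum bookkeeping behind this observation can be checked directly on an incidence matrix (here, a hypothetical triangle graph):

```python
import numpy as np

# A graph is a hypergraph whose incidence matrix has all column sums equal
# to 2; its dual then has all row sums equal to 2, i.e. it is 2-regular.
M = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
])  # incidence matrix of the triangle graph (hypothetical example)

assert (M.sum(axis=0) == 2).all()     # every hyperedge has two vertices
assert (M.T.sum(axis=1) == 2).all()   # every vertex of the dual has degree two
```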
Graphical models defined by the maximal cliques of a graph correspond to hypergraphs in which we introduce a hyperedge for each maximal clique. Their dual tensor hypernetworks have the following property.
Graphical models defined by the maximal cliques of a graph correspond to tensor hypernetworks whose hypergraphs have the property that whenever a set of hyperedges meet pairwise, the intersection of all of them is non-empty.
Let $F$ be a set of hyperedges that meet pairwise. Then, for every pair of hyperedges in $F$, the corresponding vertices in the dual hypergraph (i.e. in the graphical model) are connected by an edge. Thus, the vertices corresponding to $F$ form a clique in the graphical model, so there exists a maximal clique in which this clique is contained. Thus, all hyperedges in $F$ contain the vertex of the tensor hypernetwork corresponding to that maximal clique. ∎
3.2 Trees on each side
The homotopy type of a hypergraph is the homotopy type of the simplicial complex whose maximal simplices are the maximal hyperedges. For topological purposes, we associate hypergraphs with their simplicial complexes. We show that the homotopy type of a hypergraph and its dual agree.
Definition 3.3 (see ).
Consider an open cover of a topological space $X$. The nerve of the cover is a simplicial complex with one vertex for each open set. A subset of the vertices spans a simplex in the nerve whenever the corresponding open sets have non-empty common intersection.
Theorem 3.4 (The Nerve Lemma).
The homotopy type of a space equals the homotopy type of the nerve of an open cover of , provided that all intersections of sets in the open cover are contractible.
We consider the open cover of our simplicial complex in which the open sets are $\epsilon$-neighborhoods of the maximal simplices. For $\epsilon$ sufficiently small, such an open cover has contractible intersections, since the intersections are homotopy equivalent to intersections of simplices. Hence the homotopy type of the hypergraph is equal to that of its nerve. The following proposition relates the nerve to the dual hypergraph.
The nerve of a hypergraph is the simplicial complex of its dual hypergraph.
Consider a hypergraph with vertex set $V$ and hyperedge set $E$. We now construct the dual hypergraph. Its vertices correspond to the hyperedges in $E$, and its hyperedges are represented by the rows of the original incidence matrix. A subset of the dual vertices is connected by a hyperedge if there exists a vertex of the original hypergraph that lies in all of the corresponding hyperedges, or equivalently, if their intersection is non-empty. This is exactly the definition of the nerve. ∎
From this, the Nerve Lemma implies the following.
A tensor hypernetwork and its dual graphical model have the same homotopy type.
A hypergraph cycle (see [4, Chapter 5]) is a sequence $(v_1, e_1, v_2, e_2, \dots, v_k, e_k, v_1)$ with $k \geq 2$, where the $e_i$ are distinct hyperedges and the $v_i$ are distinct vertices, such that $v_i, v_{i+1} \in e_i$ for all $1 \leq i \leq k - 1$, and $v_k, v_1 \in e_k$. A tree is a hypergraph with no cycles. The simplicial complexes corresponding to trees are contractible. Theorem 3.6 implies that trees are preserved under the duality correspondence.
3.3 Marginalization and contraction
Let $H$ be a hypergraph and $H^*$ its dual. Let $p$ be a distribution in the graphical model on $H$ with clique potentials $\psi_C$. The dual tensor hypernetwork has the tensors $\psi_C$ at the vertices of $H^*$.
Proposition 3.7 (Marginalization Equals Contraction).
Let $B$ be a subset of the vertices of the hypergraph $H$. Then, the marginal distribution of the remaining variables $(X_i)_{i \notin B}$ equals
$$p(x_{V \setminus B}) = \frac{1}{Z} \sum_{x_B} \prod_{C \in E} \psi_C(x_C),$$
which is (up to the factor $1/Z$) the contracted tensor hypernetwork along the hyperedges of $H^*$ corresponding to $B$.
The proof follows by writing out both sides: summing over the values of all variables in the marginalized subset $B$ of vertices is, term by term, the same as contracting the tensor hypernetwork along all hyperedges of the dual hypergraph corresponding to $B$. ∎
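The proposition can be checked numerically on a small example with hypothetical potentials; marginalizing the middle variable of the joint tensor agrees with contracting the shared index of the network:

```python
import numpy as np

# Illustration (with hypothetical potentials) that marginalizing a variable
# in the unnormalized joint tensor is the same as contracting the dual
# tensor network along the corresponding hyperedge.
rng = np.random.default_rng(2)
psi1 = rng.random((2, 3))   # potential on variables (x1, x2)
psi2 = rng.random((3, 4))   # potential on variables (x2, x3)

# Unnormalized joint tensor p_{ijk} = psi1_{ij} psi2_{jk}.
p = np.einsum('ij,jk->ijk', psi1, psi2)

# Marginalizing x2 (summing over axis 1) ...
marginal = p.sum(axis=1)

# ... equals contracting the network along the shared index j.
contracted = np.einsum('ij,jk->ik', psi1, psi2)
assert np.allclose(marginal, contracted)
```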
The interpretations of marginalization and contraction are also very similar in nature. The variables of a graphical model that are marginalized are often considered to be hidden, and the contracted edges of a tensor network represent entanglement (‘unseen interaction’).
The correspondence described in Proposition 3.7 allows us to translate algorithms for marginalization in graphical models to algorithms for contraction in tensor networks, see Section 4. Without care to order indices, marginalization and contraction involve summing exponentially many terms. In many cases more efficient methods are possible.
3.4 Conditional distributions
Consider a probability distribution given by a fully-observed graphical model. Conditioning a variable $X_i$ to take values only in a given subset $S \subseteq [d_i]$ means restricting the probability tensor to the slice which contains only the values $x_i \in S$ for the variable $X_i$. This in turn corresponds to restricting each of the potentials $\psi_C$ for hyperedges $C$ containing $i$ to the given subset of values. On the tensor networks side, we restrict the tensor corresponding to the given clique potential to the same slice.
We wish to remark that the equivalence of conditioning and restriction to a slice of the probability tensor is due to the fact that the basis in which we view the probability tensor is fixed. The basis is given by the states of the random variables: graphical models are not basis invariant. On the other hand, basis invariance is a key property of tensor networks that crops up in many applications; e.g., often a gauge (basis) is selected to make the computations efficient.
3.5 Entanglement entropy and Shannon entropy
Given a tensor network state represented by a tensor $v$, one can compute its entanglement entropy from the entries of $v$. On the other hand, if $p$ represents the corresponding marginal distribution of the graphical model, the Shannon entropy of $p$ is defined as
$$H(p) = -\sum_{x} p_x \log p_x,$$
where $x$ indexes all entries of $p$. Expanding out the formula shows that these two notions of entropy are the same.
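As a quick numerical check of the Shannon entropy formula, with a hypothetical $2 \times 2$ probability tensor:

```python
import numpy as np

# Shannon entropy of a distribution stored as a tensor: the index x runs
# over all entries, so we can sum over the flattened tensor directly.
p = np.array([[0.125, 0.25],
              [0.5, 0.125]])          # hypothetical 2x2 distribution
H = -(p * np.log2(p)).sum()           # entropy in bits
assert np.isclose(H, 1.75)
```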
4 Algorithms for marginalization and contraction
The belief propagation (or sum-product) algorithm is a dynamic programming method for computing marginals of a distribution. The junction tree algorithm extends it to graphs with cycles. The equivalence between marginalization in graphical models and contraction in tensor hypernetworks was given in Proposition 3.7. It means that we can use methods for marginalization to contract tensor hypernetworks and vice versa. For example, we can compute expectation values of tensor hypernetwork states as well as contracted tensor hypernetwork states. In this section, we apply the junction tree algorithm to these tasks for the matrix product state (MPS) tensor networks from Example 2.3. We first recall the algorithm.
4.1 The junction tree algorithm
The input and output data of the junction tree algorithm are as follows.
Input: A graphical model defined by a hypergraph $H = (V, E)$ with clique potentials $\psi_C$ for $C \in E$.
Output: The marginals $p(x_C)$ at each hyperedge $C \in E$.
We now recall how this algorithm works. First, we construct the graph $G$ associated to the hypergraph by adding an edge $\{i, j\}$ whenever vertices $i$ and $j$ belong to the same hyperedge. If $G$ is not chordal (or triangulated), we add edges until all cycles of length four or more have a chord, i.e. until $G$ becomes chordal. Then we can form the junction tree. This is a tree whose nodes are the maximal cliques of the graph $G$. It has the running intersection property: the subset of cliques of $G$ containing a given vertex forms a connected subtree. Note that there are often multiple ways to construct a junction tree of a given graph.
To each maximal clique $C$ in $G$ we associate a clique potential $\psi_C$ which equals the product of the potentials of the hyperedges contained in $C$. If a hyperedge is contained in more than one maximal clique, its clique potential is assigned to one of them. Each edge of the junction tree connects two cliques $C_1$ and $C_2$ in $G$. We associate to such an edge the separator set $S = C_1 \cap C_2$. We also assign a separator potential $\psi_S$ to each $S$. It is initialized to the constant value 1. A basic message passing operation from $C_1$ to a neighboring $C_2$ updates the potential functions at clique $C_2$ and separator $S$:
$$\psi_S'(x_S) = \sum_{x_{C_1 \setminus S}} \psi_{C_1}(x_{C_1}), \qquad \psi_{C_2}'(x_{C_2}) = \frac{\psi_S'(x_S)}{\psi_S(x_S)} \, \psi_{C_2}(x_{C_2}).$$
The algorithm chooses a root of the junction tree, and orients all edges to point from the root outwards. It then applies basic message passing operations step-by-step from the root to the leaves until every node has received a message. Then the process is done in reverse, updating the clique and separator potentials from the leaves back to the root. After all messages have been passed, the final clique potentials equal the marginals, and likewise for the final separator potentials.
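A minimal sketch of one such message passing operation, for a hypothetical two-clique junction tree:

```python
import numpy as np

# One basic message passing step (with hypothetical potentials) on the
# junction tree with cliques C1 = {x1, x2}, C2 = {x2, x3}, separator S = {x2}.
rng = np.random.default_rng(3)
psi_C1 = rng.random((2, 3))      # psi_{C1}(x1, x2)
psi_C2 = rng.random((3, 4))      # psi_{C2}(x2, x3)
psi_S = np.ones(3)               # separator potential, initialized to 1

# Message from C1 to C2: marginalize psi_{C1} onto the separator, then
# multiply psi_{C2} by the ratio of new to old separator potential.
psi_S_new = psi_C1.sum(axis=0)                       # sum out x1
psi_C2_updated = psi_C2 * (psi_S_new / psi_S)[:, None]

# Sanity check: summing x3 out of the updated psi_{C2} now gives the
# unnormalized marginal of x2 under the joint psi_{C1} * psi_{C2}.
joint = np.einsum('ij,jk->ijk', psi_C1, psi_C2)
assert np.allclose(psi_C2_updated.sum(axis=1), joint.sum(axis=(0, 2)))
```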
When the junction tree algorithm is used for probability distributions the clique potential functions are positive, but it works just as well for complex valued functions.
The complexity of the junction tree algorithm is exponential in the treewidth of the graph, which is one less than the minimum, over all possible triangulations, of the size of the largest clique [17, Chapter 2].
4.2 Contracting a tensor network via duality
To compute a tensor network state, we contract all edges in its tensor network that are not dangling. Our framework allows us to do this via duality, and to transfer hardness results for this computation, since computing marginals on the graphical models side is widely studied. The recipe is as follows. We consider the dual graphical model to the tensor hypernetwork. We make a new clique in the graphical model consisting of all vertices corresponding to the dangling edges of the tensor hypernetwork. The tensor hypernetwork state is then the marginal distribution of that clique. We can use, e.g., the junction tree algorithm to compute it.
4.3 Computing expectation values for matrix product states
We now use the example of MPS tensor networks to illustrate how the junction tree algorithm translates to tensor hypernetworks. Using Theorem 2.1, we compute the family of graphical models that is dual to matrix product states. We show that the junction tree algorithm used to compute marginalizations of the dual graphical model corresponds to the bubbling algorithms that are used to compute expectation values of an MPS. In the figures, we draw the MPS with four observable indices, but repeating the pattern gives the results in the general case.
In quantum applications a tensor network state is denoted $|\psi\rangle$. Its expectation value is the inner product $\langle \psi | O | \psi \rangle$ for some operator $O$, which acts as a linear transformation on each vector space of observable indices (i.e. it is block diagonal).
Computing the expectation value of an MPS means contracting the tensor network on the left in Figure 1, where the middle row of vertices corresponds to the blocks of the operator $O$. Equivalently, it means marginalizing all variables of the graphical model on the right (or, computing the normalization constant of this graphical model). We contract the tensor network by applying the junction tree algorithm to the graphical model.
The first step of the algorithm is to triangulate the graph of the graphical model, by adding edges until it is chordal (or triangulated), see Figure 2. Next, we form a junction tree for the triangulated graph, see Figure 3.
We choose the root of the tree to be the left-most vertex in Figure 3. We do basic message passing operations from left to right until every vertex has received a message from its parent, arriving at the right-most clique. If we complete the algorithm by repeating this process from right to left, the final clique potentials at each vertex will equal the marginals. However, we can simplify the computation, since our goal is just to compute the total sum. We terminate the message passing operations once we reach the right-most clique. At that point we have the marginal at that clique, so we sum over the three vertices 12, 13, and 14 to get the total sum.
We now translate the junction tree algorithm to the language of tensor networks. The junction tree determines the order in which to contract the indices of the tensor network, see Figure 4. We contract edges in the tensor network until it is completely contracted.
At each step we sum over just one vertex of the dual graphical model (due to the structure of the junction tree in this case). This means we contract one edge at a time from the tensor network. In the first message passing operation, we sum over the values of vertex 1 of the graphical model, since it is the only variable in the root clique that does not lie in the separator. This corresponds to contracting the tensor network along the edge corresponding to vertex 1 of the graphical model (see step one of Figure 4 for the corresponding tensor network operation). In the second message passing operation we sum over the values of vertex 2 of the graphical model. This corresponds to contracting the tensor network along the left edge (see the second step of Figure 4). The subsequent steps of the junction tree algorithm correspond to the steps shown in Figure 4.
It turns out that contracting the tensor in this way is what is usually done by the tensor networks community as well, a method sometimes called bubbling. The triangulated graph of the dual graph of MPS has treewidth four, since we can continue the triangulation given in Figure 2. The complexity of the junction tree algorithm, and of the bubbling algorithm, is linear in the number of vertices of the MPS and polynomial in the size of the dangling edges and the size of the entanglement edges.
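The bubbling contraction can be sketched as follows; the MPS below has four sites with hypothetical sizes $d = 2$ and $m = 3$, and the operator is taken to be the identity, so the expectation value is the squared norm of the state:

```python
import numpy as np

# Left-to-right ("bubbling") contraction of a real MPS with 4 sites,
# dangling-edge size d = 2 and entanglement-edge size m = 3. Tensor
# shapes and entries are hypothetical.
rng = np.random.default_rng(4)
d, m = 2, 3
A = [rng.standard_normal((d, m)),      # first site: (physical, right bond)
     rng.standard_normal((m, d, m)),   # middle sites: (left, physical, right)
     rng.standard_normal((m, d, m)),
     rng.standard_normal((m, d))]      # last site: (left bond, physical)

# Contract one site at a time, never forming the full d^4 tensor.
env = np.einsum('pa,pb->ab', A[0], A[0])          # first site with its copy
for T in A[1:-1]:
    env = np.einsum('ab,apc,bpd->cd', env, T, T)  # absorb one site at a time
norm2 = np.einsum('ab,ap,bp->', env, A[-1], A[-1])

# Compare against the naive full contraction of the state tensor.
psi = np.einsum('pa,aqb,brc,cs->pqrs', A[0], A[1], A[2], A[3])
assert np.isclose(norm2, (psi ** 2).sum())
```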
4.4 Extending to larger dimensions
The higher-dimensional analogue of matrix product states/tensor trains is called projected entangled pair states (PEPS), see Example 2.6. They are based on a two-dimensional lattice of entanglement interactions. Computing expectation values for the PEPS network takes time exponential in the size of the network. On the graphical models side, it is possible in principle to find expectation values of a PEPS state using the junction tree algorithm. Since the triangulated graph of the dual hypergraph of PEPS has a treewidth that grows with the size of the network, the junction tree algorithm takes exponential time.
In , the authors show that algorithms for computing expectation values are exponential in the treewidth of the tensor network. On the other hand, we have seen that the junction tree algorithm takes time exponential in the treewidth of the dual graphical model. This indicates a similarity between the treewidth of a hypergraph and that of its dual. A comparison of the treewidths of planar graphs and of their graph duals can be found in .
To avoid exponential running times, numerical approximations are used. For graphical models, these are termed loopy belief propagation (see [17, Chapter 4] and references therein). A natural question is whether the algorithms for loopy belief propagation translate to known algorithms in the tensor networks community, e.g. for computing expectation values of PEPS, or whether they provide a new family of algorithms. In our opinion both answers to this question would be interesting.
Acknowledgements. We would like to thank Jason Morton and Bernd Sturmfels for helpful discussions. Elina Robeva was funded by an NSF mathematical sciences postdoctoral research fellowship (DMS 1703821).
Elina Robeva, Massachusetts Institute of Technology, USA,
Anna Seigal, University of California, Berkeley, USA, firstname.lastname@example.org.
- R. Bailly, F. Denis, G. Rabusseau: Recognizable Series on Hypergraphs, Language and Automata Theory and Applications, 639-651, Lecture Notes in Comput. Sci., 8977, Springer, Cham, 2015.
- A. Banerjee, A. Char, B. Mondal: Spectra of general hypergraphs. Preprint arXiv:1601.02136.
- C. Berge: Hypergraphs, Combinatorics of finite sets, North-Holland Mathematical Library, 45. North-Holland Publishing Co., Amsterdam (1989).
- K. Borsuk: On the imbedding of systems of compacta in simplicial complexes, Fund. Math. 35 (1948) 217-234.
- J. Chen, S. Cheng, H. Xie, L. Wang, T. Xiang: On the Equivalence of Restricted Boltzmann Machines and Tensor Network States, preprint, arXiv:1701.04831 (2017).
- A. Critch, J. Morton: Algebraic Geometry of Matrix Product States, SIGMA Symmetry Integrability Geom. Methods Appl. 10 (2014).
- W. Hackbusch: Tensor spaces and numerical tensor calculus, Springer Series in Computational Mathematics, 42. Springer, Heidelberg (2012).
- A. Hatcher: Algebraic topology, Cambridge University Press, Cambridge (2002).
- S.L. Lauritzen: Graphical Models, Oxford Statistical Science Series, 17. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York (1996).
- J. M. Landsberg: Tensors and their uses in Approximation Theory, Quantum Information Theory and Geometry, draft notes (2017).
- I.L. Markov, Y. Shi: Simulating quantum computation by contracting tensor networks, SIAM J. Comput. 38 (2008), no. 3, 963-981.
- R. Orús: A practical introduction to tensor networks: matrix product states and projected entangled pair states, Ann. Physics 349 (2014), 117–158.
- M. Pejic: Quantum Bayesian networks with application to games displaying Parrondo’s paradox, Thesis (Ph.D.), University of California, Berkeley (2014).
- N. Robertson, P.D. Seymour: Graph minors. III. Planar tree-width, J. Combin. Theory Ser. B 36 (1984), no. 1, 49-64.
- S. Sullivant: Algebraic Statistics, draft copy of book to appear (2017).
- M. Wainwright and M. I. Jordan: Graphical Models, Exponential Families, and Variational Inference, Foundations and Trends in Machine Learning, vol 1, nos 1-2 (2008).