Towards a phase diagram for spin foams
Abstract
One of the most pressing issues for loop quantum gravity and spin foams is the construction of the continuum limit. In this paper, we propose a systematic coarse–graining scheme for three–dimensional lattice gauge models including spin foams. This scheme is based on the concept of decorated tensor networks, which have been introduced recently. Here we develop an algorithm applicable to gauge theories with non–Abelian groups, which for the first time allows for the application of tensor network coarse–graining techniques to proper spin foams. The procedure deals efficiently with the large redundancy of degrees of freedom resulting from gauge symmetry. The algorithm is applied to 3D spin foams defined on a cubical lattice which, in contrast to a proper triangulation, allows for non–trivial simplicity constraints. This mimics the construction of spin foams for 4D gravity. For lattice gauge models based on a finite group we use the algorithm to obtain phase diagrams, encoding the continuum limit of a wide range of these models. We find phase transitions for various families of models carrying non–trivial simplicity constraints.
Contents:
 I Introduction
 II Lattice gauge models
 III (Decorated) Tensor Network Renormalization
 IV The 3D algorithm for gauge models
 V Application to finite group spin foam models
 VI Discussion
 A Singular Value Decomposition
 B Group Fourier transform
 C Parameterization of spin foam models
 D Recoupling theory of the structure group
I Introduction
Spin foam models propose a covariant, background independent and nonperturbative quantization of general relativity foams3; reisenbergerSF; BarrettCrane; eprl; fk; bftheory, based on insights from loop quantum gravity foams1; thomasbook. Being non–perturbative, the models rely on the definition of a path integral regularized via an auxiliary discretization. Since this discretization breaks the diffeomorphism invariance dittrich08; dittrichbroken underlying general relativity, one cannot take this discrete formulation as fundamental. One rather has to construct a continuum limit, in which one hopes to restore diffeomorphism symmetry improved1; review14. Diffeomorphism invariance is a very powerful symmetry, whose restoration should also ensure the independence of the chosen discretization improved2; dittrich12a. Requiring this symmetry also resolves ambiguities in the construction of the discrete models measure1; BahrSteinhausPRL.
Most importantly, only if diffeomorphism symmetry is correctly implemented into the path integral, can one hope that the path integral does act as a projector on the so–called physical states HartleHalliwell, that is states that satisfy the Wheeler–DeWitt (constraint) equation which encodes the dynamics of (quantum) general relativity. The breaking of diffeomorphism symmetry by the discretization leads also to a violation of the constraints determining the Wheeler–DeWitt equation dittrichbroken; hoehn1. The construction of the continuum limit in diffeomorphism invariant systems is therefore tantamount to solving the dynamics dittrich12a; timeevol; review14 as is illustrated in improved2.
Unfortunately not much is known about the behaviour of spin foam models in the ‘many body’ or ‘thermodynamic’ regime, which is the regime of interest for the construction of the continuum limit. This is mainly due to the highly complex amplitudes encoding the non–linear dynamics of general relativity. Additionally, a conceptual understanding of renormalization and coarse–graining – which is used to construct the continuum limit – needed to be developed for background independent systems. But here much progress has been achieved dittcyl; timeevol; BahrSFR; review14, involving as a tool the inductive limit construction for Hilbert spaces. The latter is used in loop quantum gravity to define a continuum Hilbert space as an inductive limit of a family of discrete Hilbert spaces thomasbook; ashtekarisham. A crucial role is then played by so–called embedding maps which relate coarse configurations to physically equivalent configurations on a finer discretization. This is done by putting the additional degrees of freedom on the finer discretization into a vacuum state. In the constructions of the (so–called kinematical) continuum Hilbert spaces for LQG ashtekarisham; ashtekarlewan2; bfvacuum; bfform3 this vacuum state is pre–chosen, without input from the dynamics of the system, and is therefore not a physical state.
This is an ambitious aim and can only be hoped to be achieved in an approximation scheme. Luckily the coarse–graining schemes developed in dittcyl; timeevol; review14, which aim at an understanding of the continuum limit, provide such an approximation scheme. For instance, to lowest order, one aims at the understanding of the sector of the theory describing homogeneous states. Furthermore, these coarse–graining schemes can be realized by concrete algorithms, known as Tensor Network coarse–graining algorithms levin; guwen; vidalevenbly; evenbly2. Such techniques produce a recursive coarse–graining of the amplitudes of a given model, resulting in effective amplitudes for large building blocks or rather for building blocks carrying a very refined boundary. But one expresses these amplitudes only as functions of coarse boundary data, which is determined by the order of the truncation. The relevant observables which emerge as coarse boundary data are determined by the dynamics of the system. In fact this information is encoded in (now dynamically determined) embedding maps, which can be extracted from the coarse–graining algorithm.
However, tensor network algorithms have so far mostly been developed for 2D systems, whereas we are ultimately interested in 4D spin foams.
Given this state of affairs, a two–pronged program has been developed: one aim is to develop (tensor network) coarse–graining algorithms applicable to spin foams, another one is to understand the behaviour of the simplicity constraints under coarse–graining. To this end a family of 2D analogue models, called spin net models, has been constructed, which do carry a notion of simplicity constraints eckert1; eckert2; merce; qgroup; BCspinnets. Tensor network techniques have been developed that can deal with the symmetry structure of these models and the phase diagram for various models has been constructed in eckert1; merce; qgroup; SteinhausMatter; BCspinnets. For instance qgroup revealed a rich phase diagram for (quantum group based) so–called spin net models. BCspinnets considers spin nets which have algebraically the same simplicity constraints as the full gravitational spin foam models (again based on a quantum group), dealing successfully – via a redesign of the algorithm – with the challenging task of increasing computational requirements.
The next step is to consider proper spin foam models, which require tensor network algorithms applicable to lattice gauge models. The first such tensor network coarse–graining algorithm, which deals with the redundancy of variables arising from gauge symmetry, was proposed in decorated.
In this work we propose a first (Decorated) Tensor Network coarse–graining algorithm which can be applied to (3D) lattice gauge models based on non–Abelian groups. Here we will deal with lattice gauge models based on finite groups finitesf, as the computational resources needed scale with the size of the group. Lie groups can be dealt with in principle, either by implementing a further truncation, or by also using semi–analytical tools, to which the Decorated Tensor Network schemes are amenable decorated. An alternative is to involve quantum groups at root of unity, which are finite and describe, or are conjectured to describe, Euclidean gravity with a positive cosmological constant qgroupmodels1; qgroupmodels2; qgroupmodels3; qgroupmodels4; qgroupmodels5; qgroupmodels6; qgroupmodels7. This strategy has also been used in merce; qgroup; SteinhausMatter; BCspinnets for spin nets.
We will test this Decorated Tensor Network algorithm on lattice gauge models based on a finite non–Abelian group, namely a permutation group. Since this group allows for (non–trivial) simplicity constraints, we present here the first coarse–graining results for such models. Furthermore, as mentioned above, spin net models have been constructed such that they capture key ingredients of spin foams. The hope was that the behaviour under coarse–graining for spin nets and spin foams is similar. We will compare the phase diagrams which we obtain for the spin foam models to the phase diagrams obtained for spin nets in merce.
Tensor network algorithms now come in a wide variety, e.g. levin; vidalevenbly; guwen; beijing; looptnr. Given that this is a first tensor network algorithm for 3D non–Abelian gauge theories, we will use a version which requires the least computational resources and is sufficiently reliable to identify the phase structure and thus possible phase transitions. Algorithms implementing a so–called (short range) entanglement filtering procedure vidalevenbly; evenbly2; looptnr are better suited to characterize the conformal theories arising at (second order) phase transitions. But these more involved algorithms require larger computational resources and have been developed, so far, only for 2D systems.
The main question we are interested in is to identify possibly new phases, which might arise only for spin foam models, but not within the standard form of lattice gauge theories. These phases often correspond to topological field theories, e.g. BF theory. New phases could correspond to new topological field theories, from which one can design new Hilbert space representations based on vacua defined by the topological field theories timeevol. With a phase diagram we can also identify potential phase transitions, which are needed to describe a continuum limit with propagating degrees of freedom. Another question is whether the similarity of the phase diagrams for spin nets and spin foams is confirmed.
As we will discuss in detail, the main challenge in the design of the algorithm is due to the non–Abelian structure of the gauge group. In a certain sense gauge invariance is not preserved under coarse–graining for non–Abelian gauge groups. The reason is that under coarse–graining effective degrees of freedom appear which describe a violation of the Gauß constraints (or torsion), these constraints implementing gauge invariance for wave functions eteraCG3; bfform2. This happens because the definition of ‘effective’ Gauß constraints for larger regions requires, for non–Abelian groups, a parallel transport involving a connection. In the presence of curvature this leads to a deformation of these effective constraints.
This paper is organized as follows. In section II we briefly summarize the structure of lattice gauge models. We will in particular detail the structure of spin foam models as a constrained BF theory and how they can be reformulated as (decorated) tensor networks. We then review the Decorated Tensor Network coarse–graining algorithm for 2D systems in section III. The next section IV details the coarse–graining algorithm for 3D lattice gauge models with a non–Abelian structure group. In section V we describe the space of models to which we apply the coarse–graining algorithm and the details of their parametrization. We also present the results of the coarse–graining algorithms. We close with a discussion and outlook in section VI. The appendices collect technical material needed for the development of the coarse–graining algorithm and its application to models based on the chosen structure group.
II Lattice gauge models
II.1 Definitions
Here we will briefly explain the class of models we will be considering and their connection to gravity. A more detailed introduction, highlighting the connections to statistical models, can be found in finitesf. We will in particular consider first order formulations of gravity whose definitions involve connection variables. In order to arrive at a well–defined path integral, one discretizes the underlying action for these systems. This yields spin foam models, which in their basic kinematical inputs have a lot in common with lattice gauge theories. We will refer to both spin foam models and lattice gauge theories as lattice gauge models.
In the following we will briefly review the construction of such lattice gauge models. We begin with the action for the corresponding continuum theory.
Let G be a compact Lie group and \mathfrak{g} the corresponding Lie algebra. Given a four–dimensional manifold M, we consider the Plebanski–action plebanski:
(1) S = \int_M \left[ B^{IJ} \wedge F_{IJ}(A) + \phi_{IJKL} \, B^{IJ} \wedge B^{KL} \right]
Here B denotes a \mathfrak{g}–valued 2–form and F = F(A) the curvature of the connection A. The Lagrange multipliers \phi (carrying two Lie–algebra indices) impose the so–called simplicity constraints.
The simplicity constraints ensure that, in the four–dimensional case, the two–form B is actually ‘simple’, that is of the form
(2) 
The action without the simplicity constraint term, that is with the Lagrange multipliers set to zero, describes a topological theory, known as BF theory horowitz. The discretization and related path integral of this topological theory can be constructed without the many ambiguities coming with the discretization of interacting theories. This is therefore the starting point for spin foam models. In a second step one has however to implement the simplicity constraints, which so far is a process with many ambiguities dittrichryan08; ryanSimp; constraints2; foams3; BarOr; Maite. The simplicity constraints are however key for implementing the correct dynamics, as their role is to turn a topological theory into one with propagating degrees of freedom. It is therefore important to understand how the different implementations and forms of the simplicity constraints differ in their influence on the dynamics arising in the continuum limit. This is a long term goal of a line of research merce; qgroup; BCspinnets, in which this paper constitutes an important step forward.
The four–dimensional spin foam models are of high complexity. Extracting their continuum limit requires the development of appropriate tools and also a better understanding of the impact of simplicity constraints in general. In this work we will therefore consider three–dimensional models, with the aim of investigating these numerically. These have to be understood as ‘analogue’ models, that is we will impose simplicity constraints – following the procedure for the four–dimensional models – and investigate the impact of these simplicity constraints for the continuum limit. A similar strategy, but using two–dimensional (non–gauge) models, already led to interesting insights. In fact we will confirm here the close relationship between these two–dimensional so–called spin net models and (so–far) three–dimensional spin foam models.
Note that the actual theory of three–dimensional general relativity is topological, that is without propagating degrees of freedom. This is described by the BF action (2), with the choice G = SU(2) for Euclidean signature space–times.
Other theories related to BF theory include Yang–Mills theory in first order formulation
(3) S_{YM} = \int_M \mathrm{tr}\,(B \wedge F) + \frac{g_{YM}^2}{2} \int_M \mathrm{tr}\,(B \wedge \star B)
with \star being the (metric–dependent) Hodge star operator and g_{YM} the Yang–Mills coupling. Furthermore, 3D general relativity with a cosmological constant \Lambda is described by
(4) S_{\Lambda} = \int_M \mathrm{tr}\,(B \wedge F) + \frac{\Lambda}{6} \int_M \mathrm{tr}\,(B \wedge B \wedge B)
Lattice gauge theories, which provide lattice versions of Yang–Mills theory, will constitute a subset of the space of models we will be considering.
As for the construction of spin foam models, we now consider a discretization and quantization of BF theory. The BF partition function is given by
(5) Z_{BF} = \int \mathcal{D}A \, \mathcal{D}B \; e^{\, i \int_M \mathrm{tr}\,(B \wedge F)}
This expression is only formal and a discretization is necessary to make it well defined.
We denote by \Delta the discretization and by \Delta^* the dual 2–complex. The discretization is constructed by gluing d–dimensional building blocks along their (d-1)–dimensional boundaries (often referred to as faces). The d–dimensional building blocks also have (d-2)–dimensional ‘corners’ or ‘hinges’ in their boundaries, shared by several d–dimensional building blocks. In the case of a triangulation, all these building blocks are given by simplices of the appropriate dimension. Here we will also consider other discretizations such as a cubical lattice.
The dual complex \Delta^* consists of dual vertices v for every d–dimensional building block of \Delta, connected by dual edges e. We therefore have a dual edge for every face of \Delta. These dual edges bound dual faces f, where we have one dual face for every (d-2)–dimensional hinge of the discretization \Delta.
The dual complex carries the variables for the discretized path integral (5). The connection degrees of freedom will be encoded into holonomies, that is group variables g_e, associated to the dual edges e. The curvature of the connection can then be approximated by the holonomies along the smallest available loops – which are given by the boundaries of the dual faces f. After choosing a source and target vertex for the loop, we define such holonomies as
(6) h_f = g_{e_1} g_{e_2} \cdots g_{e_{n_f}}
where e_1, \ldots, e_{n_f} are the ordered and oriented edges, starting at the chosen source vertex, bounding the dual face f.
We furthermore discretize the \mathfrak{g}–valued 2–form B by associating to each dual face f a \mathfrak{g}–valued variable b_f. This represents the B–field integrated over the dual face. As a \mathfrak{g}–valued entity, it also needs a frame. We choose the one attached to the dual vertex serving as source vertex for the face holonomy h_f. Thanks to these definitions, we can define the discretized path integral for the BF action as
(7) Z = \int \prod_e \mathrm{d}g_e \int \prod_f \mathrm{d}b_f \; e^{\, i \sum_f \mathrm{tr}\,(b_f \, h_f)}
where \mathrm{d}g_e denotes the Haar measure on G and \mathrm{d}b_f a measure invariant under the adjoint action on the Lie algebra \mathfrak{g}.
Note that in (7), the fields b_f appear only linearly in the exponential. Therefore they can be integrated out, at least formally. This leads to delta functions which enforce the face holonomies to be trivial
(8) Z_{BF} = \int \prod_e \mathrm{d}g_e \; \prod_f \delta(h_f)
Thus the partition function for BF theory implements an integral over the space of flat connections on the (discretized) manifold M.
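As a simple illustration of this statement (a toy example of our own, not part of the model construction above), consider the flat partition function on the torus: one vertex, two edges with holonomies a and b, and a single face whose flatness constraint demands that a and b commute. A minimal sketch for a finite group, using the permutation group of three elements as an assumed example group:

```python
# Toy illustration (not from the paper): counting flat connections on the
# torus for a finite group.  The single face constraint a b a^{-1} b^{-1} = e
# reduces the flat partition function to counting commuting pairs in G.
from itertools import permutations

G = list(permutations(range(3)))          # the permutation group S3, |G| = 6

def mul(p, q):                            # composition (apply q, then p)
    return tuple(p[q[i]] for i in range(3))

def inv(p):                               # inverse permutation
    q = [0, 0, 0]
    for i, j in enumerate(p):
        q[j] = i
    return tuple(q)

e = (0, 1, 2)                             # identity

# number of flat connections = number of commuting pairs (a, b)
flat = sum(1 for a in G for b in G
           if mul(mul(a, b), mul(inv(a), inv(b))) == e)
print(flat)   # 18 for S3: equals |G| times the number of conjugacy classes
```

The count 18 = 6 · 3 reflects the standard fact that the number of commuting pairs in a finite group equals the group order times the number of conjugacy classes.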
The starting point for spin foam models is obtained by Fourier transforming the BF partition function (8). That is, we rewrite this partition function as a sum over group representations \rho, replacing the integral over the group variables g_e. This is achieved by using the following expression for the group delta function
(9) \delta(g) = \sum_{\rho} d_\rho \, \chi_\rho(g)
where the sum is over a complete set of (representatives of) irreducible unitary representations \rho of the group and \chi_\rho, d_\rho are the corresponding characters and dimensions, respectively.
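This character expansion can be checked directly for a small finite group. The following sketch (our own check, not from the paper) verifies (9) for the permutation group S3, whose irreducible representations are the trivial (d = 1), the sign (d = 1) and the standard (d = 2) representation, with \chi_{std}(g) equal to the number of fixed points of g minus one:

```python
# Hedged numerical check of the character expansion (9) for the group S3.
from itertools import permutations

G = list(permutations(range(3)))

def sign(p):                    # parity of a permutation of 3 elements
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def fixed_points(p):
    return sum(1 for i in range(3) if p[i] == i)

def delta(g):                   # sum over irreps of  d_rho * chi_rho(g)
    return 1 * 1 + 1 * sign(g) + 2 * (fixed_points(g) - 1)

for g in G:
    expected = len(G) if g == (0, 1, 2) else 0
    assert delta(g) == expected
print("delta(g) = |G| on the identity and 0 elsewhere: verified")
```

With the normalized counting measure (1/|G|) \sum_g, this is exactly the delta function appearing in (8).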
The partition function becomes
(10) Z = \sum_{\{\rho_f\}} \left( \prod_f d_{\rho_f} \right) \int \prod_e \mathrm{d}g_e \; \prod_f \chi_{\rho_f}(h_f)
After performing the group integrals we obtain
(11) Z = \sum_{\{\rho_f\}} \left( \prod_f d_{\rho_f} \right) \prod_e P_e
where P_e defines the Haar projector (with the index contractions among the projectors determined by the lattice structure). This is a map
(12) P : V_{\rho_1} \otimes \cdots \otimes V_{\rho_n} \rightarrow V_{\rho_1} \otimes \cdots \otimes V_{\rho_n}
defined by
(13) P^{a_1 \cdots a_n}{}_{b_1 \cdots b_n} = \int \mathrm{d}g \; \rho_1(g)^{a_1}{}_{b_1} \cdots \rho_n(g)^{a_n}{}_{b_n}
with n the number of dual faces meeting at the dual edge e. We denote by \rho(g)^{a}{}_{b} the representation matrices. Note that the group integrations in (10) have been absorbed into the Haar projectors (12). The Haar projector is invariant under both left and right group action (that is action on the a or the b indices) on the tensor product of representation spaces. It can thus be written as
(14) P^{a_1 \cdots a_n}{}_{b_1 \cdots b_n} = \sum_{\iota} \iota^{a_1 \cdots a_n} \, \bar{\iota}_{b_1 \cdots b_n}
where \{\iota\} is an orthonormal basis of intertwiners (invariant tensors) on the tensor product of representation spaces.
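For finite groups the Haar projector is a plain group average and can be checked numerically. A minimal sketch (our own example, using the three–dimensional defining permutation representation of S3 as an assumption, not a model from this paper):

```python
# Numeric sketch: the Haar projector on V x V for the 3-dimensional
# defining (permutation) representation of S3,
#   P = (1/|G|) sum_g  R(g) (x) R(g).
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))

def R(p):                        # permutation matrix, e_i -> e_{p[i]}
    m = np.zeros((3, 3))
    for i in range(3):
        m[p[i], i] = 1.0
    return m

P = sum(np.kron(R(g), R(g)) for g in G) / len(G)

assert np.allclose(P @ P, P)     # group averaging yields a projector
# its rank = dim of the invariant subspace = (1/|G|) sum_g chi(g)^2 = 2
assert np.isclose(np.trace(P), 2.0)
print("Haar projector: idempotent, projects onto a 2-dimensional subspace")
```

The trace of the projector reproduces the dimension of the invariant subspace, here two, as predicted by the character formula (1/|G|) \sum_g \chi(g)^2.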
The form (11) for the BF partition function is given as a sum over representations (which would be ‘spins’ for SU(2) or SO(3)) and in this sense describes a spin foam model. It is however just one particular example of this class of models, which – due to the topological nature of BF theory – also happens to be triangulation invariant.

One generalization is to change the (dual) face weights, which in the case of (11) are given by the dimensions d_{\rho_f}. In fact, a discretization of Yang–Mills theory is given by replacing d_{\rho_f} with appropriate functions \omega_f(\rho_f). Partition functions which can be written in this form, with general face weights \omega_f associated to the dual faces and the Haar projectors to the dual edges, will be called of lattice gauge theory form. One also encounters different face weights in spin foam models. In fact the face weight appears to be a relevant parameter for the continuum limit Christensen; BahrSteinhausPRL; BCspinnets and also heavily influences the divergence structure for spin foams based on Lie groups perini; aldo; bonzomdittrich; Linqin.

The imposition of the simplicity constraints leads in particular to a replacement of the Haar projector with a map projecting onto a smaller invariant subspace in V_{\rho_1} \otimes \cdots \otimes V_{\rho_n}. We will call models with such a non–trivial restriction of these invariant subspaces models with non–trivial simplicity constraints, or ‘proper spin foam models’. In general the choice of such invariant maps imposing non–trivial projections is quite large, compared to just changing the face weights. There are however set–ups in which non–trivial simplicity constraints are not possible (or rather artificial), e.g. when dealing with a multiplicity–free group together with a three–dimensional triangulation. In this case we would have to consider invariant tensors on a triple tensor product of representations. (The reason is that in the dual complex each dual edge is shared by three dual faces, reflecting the fact that each triangle has three edges.) These are unique, that is the Haar projectors map onto one– or zero–dimensional spaces. A further restriction is only possible by forbidding some a priori allowed combinations of representations. Forbidding a particular representation to appear altogether can also be imposed via the face weights, and does therefore not count as a proper spin foam model. We will avoid this feature by choosing a cubical lattice. This leads to a quadruple tensor product of representations (as squares are bounded by four edges), which also agrees with the case resulting from four–dimensional triangulations (reflecting the fact that tetrahedra are bounded by four triangles). We can thus test the effect of simplicity constraints also in three–dimensional models by working with a cubical discretization.
In the case one considers spin foams with a geometric interpretation, one can also choose simplicity constraints which carry a geometric meaning. For instance one can impose that the squares of the cubical lattice are flat, that is that the four edges making up a square span only a plane.
This defines the space of models we will be considering. We will later specify in more detail how we restrict and parametrize the choice of simplicity constraints. In order to make the models accessible for numerical treatment we will consider a finite group. The integral with respect to the invariant measure then becomes the normalized sum \frac{1}{|G|} \sum_{g \in G}, where |G| is the order of the group.
II.2 Reformulations of lattice gauge models
We will now consider specifically a three–dimensional cubical lattice as discretization. Thus the dual complex is also a cubical lattice. We arrived at the following form of the partition function for lattice gauge models
(15) Z = \sum_{\{\rho_f\}} \left( \prod_f \omega_f(\rho_f) \right) \prod_e P_e
where the face weights \omega_f are associated to the dual faces (or plaquettes) and the invariant tensors P_e to the dual edges. These tensors are contracted among each other according to the pattern depicted in figure 1.
In the following we will rewrite the partition function into a more local form, namely such that we can associate an amplitude to the cubes of the direct lattice (or alternatively the dual vertices). To this end, the tensors P_e are required to split as follows
(16) P^{a_1 \cdots a_n}{}_{b_1 \cdots b_n} = \sum_{\iota} \lambda_\iota \; \iota^{a_1 \cdots a_n} \, \bar{\iota}_{b_1 \cdots b_n}
where \{\iota\} is a basis for the space of invariant tensors (intertwiners) on the tensor product of representation spaces.
The fact that such a form for P exists follows from the fact that P is invariant under both the left and right action of the group. Thus, with respect to any basis of intertwiners \{\iota\}, it is of the form
(17) P^{a_1 \cdots a_n}{}_{b_1 \cdots b_n} = \sum_{\iota, \iota'} c_{\iota \iota'} \; \iota^{a_1 \cdots a_n} \, \bar{\iota}'_{b_1 \cdots b_n}
Assuming that the coefficient matrix c_{\iota \iota'} is diagonalizable (which indeed is the case if P is a projector) leads to the form (16) of P. Note however that the basis \{\iota\} is not necessarily free to choose. In the case that P is indeed a projector, i.e. P \circ P = P, we can reach a form (16) with the coefficients \lambda_\iota equal to one or zero (if \{\iota\} is an orthonormal basis).
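The diagonalization argument can be illustrated numerically: for an orthogonal projector the eigenvalues are zero or one, and the eigenvectors with eigenvalue one provide an orthonormal basis realizing the splitting (16). A small sketch with a hypothetical 6×6 projector (our own example, not a spin foam amplitude):

```python
# Sketch of the splitting (16) for a projector: diagonalize P, keep the
# eigenvalue-1 eigenvectors, and write P as a sum over an orthonormal
# basis of its image.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
V = Q[:, :2]                    # orthonormal basis of a 2-dim subspace
P = V @ V.T                     # orthogonal projector onto that subspace

lam, U = np.linalg.eigh(P)      # eigenvalues are (numerically) 0 or 1
keep = lam > 0.5                # coefficients "equal to one or zero"
M = U[:, keep]                  # the intertwiner directions of the split

assert np.allclose(P, M @ M.T)  # P = sum_iota  iota  iota^dagger
print("projector split into", M.shape[1], "intertwiner directions")
```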
For a regular lattice one can absorb the face weights into the maps defined by the splitting (16), which however affects the projector property. In our models we will entirely shift the parametrization of the models towards the choice of these maps.
Performing the splitting (16) for each of the tensors P_e, we can associate an intertwiner variable \iota to each dual half–edge (see figure 2). The intertwiner labels at the two halves of a dual edge have to agree. Note also that the intertwiner is between a set of representations \rho_f, associated to the dual faces f hinging at the dual edge e.
The magnetic indices coming with the intertwiners associated to the dual half–edges ending at one dual vertex all contract among themselves (see figure 2). That is, for each dual vertex we can define an amplitude
(18) 
where the contraction of the intertwiners is implicit. The dependence on the representation labels is via the dependence of the intertwiners on these representations. (Here we assumed that the face weights have been absorbed into the maps defined by the splitting (16).)
Furthermore, if we change the viewpoint from the dual to the direct lattice, the amplitude can now be associated to a cube. The contraction pattern for the magnetic indices of the intertwiners is the same as for the evaluation of a four–valent spin network on the boundary of the cube. The underlying graph is dual to the surface of the cube. In the direct lattice the representation labels \rho are now associated to the edges of the direct lattice and the intertwiners \iota to the faces.
Putting everything together, we obtain the form of the partition function we will be working with, namely as a gluing of amplitudes associated to cubes:
(19) Z = \sum_{\{\rho\}, \{\iota\}} \prod_c \mathcal{A}_c(\{\rho\}, \{\iota\})
The gluing proceeds by summing over the representation and intertwiner labels associated to the shared edges and faces.
We just mentioned that the representation labels and intertwiners can also be thought of as being associated to a spin network, that is a contraction of intertwiner tensors along a pattern given by the network, on the boundary of the cubes. The gluing of cubes also translates into a gluing of boundaries with embedded spin networks – by summing over representation labels and intertwiners. As explained in appendix B, using again the (inverse) group Fourier transform we can implement a variable transformation and replace the sum over representations and intertwiners by a sum (in the case of finite groups) over group elements. These group elements are holonomies associated to the boundary graph underlying the spin network (see figure 3). The amplitude for a cube is then expressed as a gauge invariant functional \mathcal{A}_c(\{g_l\}) of these holonomies, where l denotes the links of the boundary graph. We therefore rewrite the partition function as
(20) Z = \sum_{\{g_l\}} \prod_c \mathcal{A}_c(\{g_l\})
The cube amplitudes are invariant under the action of the gauge group at the nodes of the boundary graph. That is, for a set of gauge group elements \{u_n\} associated to the nodes n of the boundary graph, we have
(21) \mathcal{A}_c(\{g_l\}) = \mathcal{A}_c(\{u_{t(l)} \, g_l \, u_{s(l)}^{-1}\})
where s(l) denotes the source node of the link l and t(l) the target node. This gauge invariance implies that the set of variables \{g_l\} provides an over–parametrization of the configurations. In order to reach an effective coarse–graining algorithm it is important to avoid this over–parametrization. To this end we will employ a gauge fixing procedure which will be detailed in section IV.1. (The representation labels and the intertwiners constitute a gauge invariant labelling. However this set of data is not preserved under coarse–graining, as discussed in section IV.1.)
The gauge invariance of the amplitudes allows us also to perform certain changes of the boundary graph. We can for instance expand the four–valent vertices into pairs of three–valent ones, and arrive at a boundary graph for the cubes as depicted in figure 3. This change neither introduces nor removes any gauge invariant data. These gauge invariant data can be constructed as follows: one chooses a set of independent cycles of the graph, all with the same source and target node. The holonomies associated to this set of closed cycles represent almost gauge invariant data. The only gauge action that is left is a global adjoint action of the gauge group on this set of cycle holonomies. As we will later explain in more detail, a set of independent cycles can be found by choosing a rooted, connected and spanning tree in the graph.
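The gauge fixing along such a spanning tree can be sketched concretely. In the following toy example (our own construction, with the permutation group S3 as an assumed structure group and a small test graph), the tree holonomies are set to the identity by a suitable choice of gauge transformations, leaving the independent ‘leaf’ holonomies as carriers of the gauge invariant data:

```python
# Minimal gauge-fixing sketch (our construction, not the paper's code):
# gauge transformations  g_l -> u_{t(l)} g_l u_{s(l)}^{-1}  set every
# holonomy on a rooted spanning tree to the identity.
from itertools import permutations

G = list(permutations(range(3)))         # assumed structure group S3

def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):
    q = [0, 0, 0]
    for i, j in enumerate(p): q[j] = i
    return tuple(q)

e = (0, 1, 2)
links = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]      # N=4 nodes, L=5 links
tree  = [(0, 1), (1, 2), (2, 3)]                      # rooted at node 0
g = {l: G[(3 * l[0] + l[1]) % 6] for l in links}      # some configuration

u = {0: e}                                # the root is not transformed
for (s, t) in tree:                       # tree edges ordered away from root
    u[t] = mul(u[s], inv(g[(s, t)]))      # forces u_t g u_s^{-1} = identity

gf = {(s, t): mul(u[t], mul(g[(s, t)], inv(u[s]))) for (s, t) in links}

assert all(gf[l] == e for l in tree)                  # tree is gauge fixed
leaves = [l for l in links if l not in tree]
assert len(leaves) == len(links) - 4 + 1              # L - N + 1 = 2
print("tree holonomies trivial; independent leaf holonomies:", len(leaves))
```

After the gauge fixing, the leaf holonomies are exactly the cycle holonomies of the fundamental cycles closed through the tree, up to the residual adjoint action at the root.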
An expansion of a four–valent graph into three–valent graphs does not change the number of leaves \ell, as it is determined by the difference of the number of links L and the number of nodes N (plus one):
(22) \ell = L - N + 1
We can thus expand higher–valent nodes into three–valent nodes without changing the amount of gauge invariant information in a given amplitude. In order to find the amplitude for the extended graph, we only need to ensure its gauge invariance at all new nodes. For instance, given a four–valent node with incoming links l_1, \ldots, l_4 and associated holonomies g_1, \ldots, g_4, we introduce a new link connecting the node at which l_1, l_2 now end with the node at which l_3, l_4 end. The amplitude with respect to the expanded graph is then given by
(23) \tilde{\mathcal{A}}(g_1, g_2, g_3, g_4, g') = \mathcal{A}(g' g_1, \, g' g_2, \, g_3, \, g_4)
Gauge invariance of the extended amplitude at the new nodes follows from the gauge invariance of the original amplitude and from the construction itself.
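This construction can be tested numerically. In the sketch below (our own example: the four–valent amplitude is a made–up delta–type invariant function, and the form of the extended amplitude is our assumption consistent with the construction above), gauge invariance at both the new node and the old one is verified:

```python
# Hedged check of the vertex expansion: a four-valent gauge-invariant
# amplitude is extended to two three-valent nodes joined by a new link g',
# and gauge invariance at both nodes is verified.
from itertools import permutations
import random

G = list(permutations(range(3)))
def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):
    q = [0, 0, 0]
    for i, j in enumerate(p): q[j] = i
    return tuple(q)
e = (0, 1, 2)

def A(g1, g2, g3, g4):          # invariant under g_i -> u g_i (all incoming)
    return 1.0 if mul(mul(g1, inv(g2)), mul(g3, inv(g4))) == e else 0.0

def A_ext(g1, g2, g3, g4, gp):  # assumed form of the expanded amplitude
    return A(mul(gp, g1), mul(gp, g2), g3, g4)

random.seed(1)
for _ in range(100):
    g1, g2, g3, g4, gp, u, v = (random.choice(G) for _ in range(7))
    lhs = A_ext(g1, g2, g3, g4, gp)
    # gauge transformation u at the new node, v at the original one;
    # the new link runs from the new node to the original one
    rhs = A_ext(mul(u, g1), mul(u, g2), mul(v, g3), mul(v, g4),
                mul(v, mul(gp, inv(u))))
    assert lhs == rhs
print("extended amplitude is gauge invariant at both nodes")
```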
To coarse–grain our lattice gauge models we will work with the cube amplitudes mostly in the holonomy representation. The basic philosophy dittcyl is to glue several cubes together, by integrating over the shared holonomy variables, associated to the matching links on the shared faces. (This might require the subdivision of links into half–links. The corresponding extension of the amplitude can be constructed in the same way as above.)
The resulting building block will carry on its boundary a more complicated boundary graph. It will also carry more gauge invariant information than the graph on the original building blocks, thus the amplitude of the resulting building block will depend on more (gauge invariant) variables. To avoid an (exponential) growth of this number of variables we have to truncate the number of variables back to its original value. This can be done by constructing a so–called embedding map
(24) E : \mathcal{C}_{\text{coarse}} \rightarrow \mathcal{C}_{\text{fine}}
from the space of (coarse) configurations on the original building blocks to the space of (finer) configurations on the larger building block. This allows us to pull back the amplitude of the larger building block to the coarser configurations
(25) \mathcal{A}' = \mathcal{A} \circ E
and thus define the new amplitude for the same amount of data as for the original building block.
The construction of this truncation, provided by the embedding map, is the key step of such a coarse–graining procedure. In so–called Tensor Renormalization Group (TRG) algorithms, such a truncation is determined from the dynamics of the system. This is done with the aim to minimize the truncation error in the partition function. In the following we will describe a variant of such tensor network algorithms, the Decorated Tensor Network algorithm decorated. It offers more flexibility, in particular regarding the treatment of gauge models.
Note that the gluing and truncation in these algorithms are organized differently from the description above. The truncation is rather implemented first, via a procedure that splits building blocks to smaller pieces. These pieces are glued to a bigger building block in the second step. In the following we will explain the Decorated Tensor Network algorithm. First, we will review it for a 2D (non–gauge) system, then we will develop the algorithm for 3D lattice gauge models.
III (Decorated) Tensor Network Renormalization
Levin and Nave levin suggested the first coarse–graining algorithm, named Tensor Renormalization Group (TRG) algorithm, for 2D statistical models involving tensor networks. Gu and Wen guwen proposed another variant, applicable to statistical models defined on a square lattice.
There are two main points for TRG methods: firstly one reformulates the partition function of the (local) statistical model as a tensor network contraction. Consider for instance a vertex model, that is the partition function is given as a sum over variables associated to the edges of the lattice
(26) Z = \sum_{\{x_e\}} \prod_v \omega_v\big(\{x_e\}_{e \supset v}\big)
where the weights \omega_v are associated to the vertices of the lattice and have as arguments the variables associated to the adjacent edges. Each variable appears in two vertex weights (for a 2D lattice without boundaries) and therefore the partition function can naturally be interpreted as a contraction of a tensor network
(27) 
Here we assumed a square lattice and (with some ordering prescription for the edges adjacent to ).
A coarse–graining move proceeds by blocking several vertices into one coarser vertex. Thus we have to sum over all variables shared by vertices belonging to the same coarse vertex. This is described by a certain contraction of tensors into a new tensor. The coarser vertex will then have more adjacent edges than the original vertices. Likewise the new tensors are of higher rank than the original ones. One can summarize several indices of a given tensor into one index, so that the new tensor has the same rank as the original one. This does however increase the index range.
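The blocking step can be sketched numerically. The following is an illustrative example (not taken from the paper, all names and dimensions hypothetical): four rank-4 vertex tensors of a square lattice are contracted into one coarser tensor, after which pairs of external indices are grouped into super–indices, enlarging the index range.

```python
import numpy as np

# Illustrative blocking of a 2x2 cell of rank-4 vertex tensors
# T[l, r, u, d] (left, right, up, down indices, each of range chi)
# into one coarser tensor.  Cell layout: A(top-left) B(top-right) /
# C(bottom-left) D(bottom-right); the four internal edges A-B, C-D,
# A-C, B-D are summed over.
chi = 2
rng = np.random.default_rng(0)
T = rng.random((chi, chi, chi, chi))

coarse = np.einsum('xiuj,iyvk,wmjc,mzkd->xwyzuvcd', T, T, T, T)

# Group the index pairs on each side into super-indices: the coarser
# tensor has the same rank but index range chi**2 instead of chi.
T_new = coarse.reshape(chi**2, chi**2, chi**2, chi**2)
```

The reshape step is exactly the "summarizing several indices into one index" described above; without a subsequent truncation, the index range would grow exponentially with the iteration number.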
This is where the second main point of the TRG algorithm comes in, the truncation. The idea is to keep the number of adjacent edges constant, or equivalently the range of the corresponding indices fixed. This fixed index range is usually referred to as the bond dimension . The guideline of how to do this is as follows: the edges represent summation over variables shared between the coarser vertices. One wishes to reduce the summation range by neglecting non–relevant variables, that is modes which do not contribute substantially to the sum. To identify such modes one uses a singular value decomposition (SVD), and neglects the modes associated to the smallest singular values (see appendix A).
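A minimal sketch of this truncation, with hypothetical dimensions: a tensor reshaped into a matrix is approximated by keeping only the modes with the largest singular values, and the squared truncation error in the Frobenius norm equals the sum of the squared discarded singular values.

```python
import numpy as np

# Sketch: truncating a super-index of range chi**2 back to bond
# dimension chi, by keeping only the chi largest singular values.
chi = 4
rng = np.random.default_rng(1)
M = rng.random((chi**2, chi**2))       # tensor reshaped into a matrix

U, s, Vh = np.linalg.svd(M, full_matrices=False)
M_trunc = (U[:, :chi] * s[:chi]) @ Vh[:chi, :]   # rank-chi approximation

# The truncation error is controlled by the discarded singular values.
err = np.linalg.norm(M - M_trunc)
```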
All (local) statistical 2D models can be reformulated into a tensor network. However, it turns out that for (higher dimensional) lattice gauge theories it is rather difficult to find an efficient encoding into a tensor network. (Note that it is always possible to find a tensor network description, see eckert1; decorated, but these have a large initial bond dimension arising from the need to double variables.) This is the motivation for the introduction of so–called Decorated Tensor Networks, which allow more flexibility in the design of the algorithm. They also offer additional advantages, for instance a more straightforward access to expectation values of observables decorated.
In the following we will explain this algorithm for a 2D ‘edge model’ or a 2D scalar field. These models can be rewritten into a tensor model, but we use their original form in order to illustrate the Decorated Tensor Network algorithm. The same algorithm will be used for the 3D gauge models.
We again assume a square lattice, but this time the variables are associated to the vertices of the lattice. We then associate to each plaquette an amplitude which in the case of the ‘edge models’ can be written
(28) 
where the weights are associated to the edges. The partition function is finally defined as
(29) 
We can glue neighbouring squares to larger effective squares, by integrating over shared variables in the bulk of the effective squares. Again these effective squares will in general have more boundary variables than the original squares along the edges. We can take this into account by allowing more variables associated to the four edges of the effective squares (see figure 4). In fact these variables can be understood as indices belonging to a tensor which sits at the centre of the square and whose edges are perpendicular to the edges of the square. (Alternatively these indices can be interpreted as values of the original scalar field, arising as described above.) Thus we have a tensor network ‘decorated’ with additional variables .
We therefore assume an effective square amplitude of the form
(30) 
The coarse–graining algorithm takes such a square amplitude as starting point and in each iteration constructs a coarse–grained effective amplitude .
We will now describe this coarse–graining algorithm, which is a ‘decorated’ version of the algorithm in guwen. The algorithm consists of two steps, splitting squares into two triangles, and gluing four triangles back to a square. The splitting step implements the truncation via a singular value decomposition. The gluing step implements the blocking or coarse–graining step.
To describe the splitting and associated SVD, we refer to figure 4. Neighbouring squares are split along the two different diagonals. This allows us to glue these triangles back into bigger squares, which are however rotated by , see figure 4. To split the amplitude associated to a square into two amplitudes associated to the two triangles, we first form super–indices and as well as . This allows us to define a family of matrices
(31) 
labelled by the index . Using a singular value decomposition (see appendix A), we can approximate each matrix in this family by a product over two matrices
(32) 
The matrices now define the amplitude for the triangles (see figure 4), for instance
(33) 
The splitting of the squares along the two different diagonals gives four types of triangles, which are then glued back to larger squares according to figure 4:
(34)  
(35)  
(36) 
This finishes one coarse–graining step and (after a possible rescaling of the amplitude) one can now iterate the procedure.
Let us add two remarks: The Decorated Tensor Network algorithm comes with one essential difference to the TRG algorithm guwen, which is that the SVD splitting procedure is performed for an entire family of matrices, parametrized by an additional index . This index summarizes variables which are carried by both triangles arising from this splitting. We will also have such variables in the 3D algorithm for gauge models. Furthermore, note that the lowest possible approximation is given by choosing . This trivializes the indices of the actual tensor networks so that we are only dealing with the original variables . In this case the coarse–graining flow is described by a family of square amplitudes , where indicates the iteration number.
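The 'decorated' splitting can be sketched as follows (all names and dimensions illustrative): a family of matrices, labelled by the spectator index carried by both triangles, is split by one SVD per family member, yielding the two triangle amplitudes.

```python
import numpy as np

# Sketch: splitting a family of square-amplitude matrices M[j],
# parametrized by the spectator index j, into triangle amplitudes
# S1[j], S2[j] via one SVD per family member.
chi, nj = 4, 3
rng = np.random.default_rng(2)
M = rng.random((nj, chi, chi))      # M[j]: (super-index A) x (super-index B)

S1 = np.empty((nj, chi, chi))
S2 = np.empty((nj, chi, chi))
for j in range(nj):
    U, s, Vh = np.linalg.svd(M[j], full_matrices=False)
    keep = chi                      # in a real run, truncate to keep < chi here
    S1[j] = U[:, :keep] * np.sqrt(s[:keep])
    S2[j] = np.sqrt(s[:keep])[:, None] * Vh[:keep, :]
```

Gluing the two triangles back (a matrix product per value of j) reproduces the square amplitude, up to the truncation error when keep is smaller than the full rank.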
IV The 3D algorithm for gauge models
IV.1 Overview
As explained in section II.2 we can reformulate the partition function for –dimensional lattice gauge models and spin foams as a gluing over –dimensional building blocks. That is the amplitude is associated to these building blocks, which are characterized by boundary data. For the 3D algorithm we will work with cubes (and prisms) as building blocks. The basic steps of the coarse–graining algorithm, namely splitting and gluing building blocks, then proceed in a way similar to the 2D case. Indeed we apply the same coarse–graining geometry as in 2D in alternating planes of the 3D lattice (see figure 5).
For gauge models the boundary data is encoded in variables associated to a graph embedded into the boundary of the building block. We can, for example, choose holonomies (that is group elements) associated to the links of the graph. Gauge symmetry then forces the amplitude to be invariant under gauge transformations acting on the nodes of the graph. Alternatively, we can use a (gauge invariant) spin network basis to express the amplitude. This will however not be convenient, for several reasons explained below.
An important consequence of the gauge symmetry is that physical, that is gauge invariant, degrees of freedom are de–localized. Consider for instance the boundary data given in terms of holonomies associated to the links of the boundary graph. The set of these holonomies is redundant due to the gauge symmetry at the vertices. If no special attention were paid to this redundant information, we would obtain a very inefficient algorithm, since computational resources would be committed to this redundant data. It turns out that the physical, i.e. gauge invariant, boundary data is encoded in the traces of closed holonomies obtained from the link holonomies. It is however highly non–trivial to find an independent and complete set of fully gauge invariant variables.
We can however obtain an almost gauge invariant set of observables by choosing a root node and considering the loop–holonomies associated to a set of independent cycles, starting and ending at the root node. (These variables are still not completely invariant, as they transform under the adjoint action resulting from gauge transformations at the root node.) The choice of such a set of independent cycles is equivalent to the choice of a connected spanning tree in the graph. Links of the graph which are not part of the tree are called leaves. The set of leaves is in one–to–one correspondence with a set of independent loops. Given a leaf there is a unique loop that visits the root vertex once and traverses only this leaf and tree–edges. The set of loops determined from the leaves is independent, as each loop in this set traverses a different leaf and the corresponding holonomies define the set of loop–holonomies.
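The counting behind this construction can be sketched in code (illustrative, not from the paper): for a connected graph with V nodes and E links, a spanning tree has V-1 links, so the number of leaves, and hence of independent loops, is E-(V-1).

```python
from collections import deque

# Sketch: count the leaves (= independent cycles) of a connected graph
# by building a spanning tree with breadth-first search.  The graph is
# given as a node set and a list of links (pairs of nodes).
def count_leaves(nodes, links):
    adj = {n: [] for n in nodes}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    root = next(iter(nodes))
    seen, queue, tree_links = {root}, deque([root]), 0
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
                tree_links += 1            # one tree-link per new node
    assert seen == set(nodes), "graph must be connected"
    return len(links) - tree_links         # leaves = E - (V - 1)

# A square graph (4 nodes, 4 links) has one independent loop:
print(count_leaves({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 1
```

Applied to the 1-skeleton of a cube (8 nodes, 12 links) this gives 5 independent loops; the boundary graphs used below have different node and link counts and hence different leaf numbers.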
The choice of tree can be understood as choosing a set of (almost gauge invariant) observables as well as localizing them (see figure 6). Furthermore we can gauge fix the amplitude so as to obtain a functional of leaf–holonomies only. Thus, to obtain the gauge fixed amplitude we have to set the holonomies associated to links of the tree to be trivial:
(37) 
where labels the leaves with respect to the tree . The gauge fixed amplitude has one remaining invariance, namely under adjoint action: .
This gauge fixing will play an important role in our algorithm, as we need to localize the degrees of freedom in a certain way for the gluing and splitting procedures. For instance, after gluing building blocks we will have a natural choice of tree resulting from the trees associated to the original building blocks. This choice will in general not be appropriate for the next splitting step, and therefore a tree transformation will have to be performed.
An alternative to working with the holonomy basis and a gauge fixing of it would be to work with a gauge invariant spin network basis. There are however two disadvantages in doing this. Firstly, the gauge invariant spin network basis is not preserved under coarse–graining eteraCG3; bfform2. An example can be seen in figure 7, showing an ‘effective node’, representing a coupling between the representations associated to the adjacent links, for which no (‘bare’) intertwiner exists. One would therefore have to find a way to project such configurations out, or to enlarge the configuration space to gauge covariant spin networks. The second issue is an algorithmic one: if we worked only with gauge invariant spin networks, one would also want to make use of the associated reduction of required memory space. This makes a considerable difference: not taking the coupling conditions into account means that the amplitude requires a memory scaling with where denotes the number of representations. Taking the coupling conditions into account requires a memory scaling with a number smaller than . This can be done by introducing super–indices, see e.g. qgroup. These would be non–local, as the super–indices would take the coupling conditions for the entire boundary spin network into account. This would complicate the entire algorithm considerably, and we therefore rather work with a gauge fixing, which as mentioned above allows us to localize degrees of freedom in a certain way. Using this gauge fixing we will employ both the holonomy representation and a (gauge fixed) spin network representation.
IV.2 Technical preliminaries
There are three main steps to our algorithm, namely splitting, gluing and tree transformations. Splitting and gluing are best performed in the Fourier transformed picture (that is with gauge fixed spin network function) since in this picture these operations appear as matrix operations. In contrast, the tree transformation is best performed in the holonomy representation, as it only requires a relabelling of variables in this case. (In the spin network representation, it would require a large matrix multiplication.) Therefore we also need to include a group Fourier transform and its inverse in–between the different steps.
Here we will describe the details of each one of these necessary steps. We start with the description of the boundary graphs and spanning trees.
Boundary graph:
The partition function for the gauge models discussed here can be rewritten as a gluing of cubical building blocks. The building blocks are equipped with an oriented boundary graph that carries group variables. The amplitude associated to a building block is a gauge invariant functional of these group variables. The basic features of this boundary graph are as follows:

The boundary graph arising from the rewriting of the partition function has a priori the following structure. Each face of the cube carries a four–valent node. The nodes of two neighbouring faces on a given cube are connected by a link. These links are thus crossing the edges of the cube. One can introduce two–valent nodes which partition these links into half–links belonging to a definite face. We will furthermore expand each four–valent node in the middle of a face into two three–valent nodes connected by a new link, as described in equation (23). This expansion of four–valent nodes to three–valent ones leads to a dual triangulation of the boundary of the cube. We will later split the cube into a prism along the edges of this triangulation.
After one step of the coarse–graining procedure the boundary graph will be refined in the following way: on two opposite faces we will have an additional cycle of graph links, see figure 9. Further iterations will keep this boundary graph stable.
The boundary graphs for the prisms are similar: the triangular faces carry a three–valent node whereas the quadrilateral faces carry two three–valent nodes arising from expanding four–valent ones. But one quadrilateral face will carry an additional cycle of the graph, inherited from the cubic building block. This face will have four three–valent nodes, see figure 9.

We want to have the same boundary graph for every cube in the lattice. Since gluing neighbouring cubes requires that pairs of identified links have the same orientation, the orientations of the (half–) links on opposite faces of the cube have to match.
A boundary graph for the cube is depicted in figure 9, using a planar representation of the boundary of the cube. For the definition of the Fourier transform (see appendix B) we have to introduce a further convention for which we need to colour the faces of the building blocks with two colours, namely white and grey. In the gluing process a white face needs to be matched to a grey face, thus opposite faces on a cube need to be coloured differently.
Spanning tree:
Each part of the algorithm requires a specific choice of rooted and connected spanning tree. In particular, this implies that between the gluing and splitting steps a change of spanning tree will be necessary, as these steps require different choices. Such a tree transformation is described in the next subsection. For instance, the gluing of two building blocks requires that the tree for the glued building block, arising from the trees of the initial building blocks, is again connected and spanning. On the other hand, for splitting a cubic building block into two prisms we demand that the same number of physical degrees of freedom is distributed between both prisms. Counting the number of independent cycles for the cube and the prism (see figure 9), we see that a spanning tree for the cube leads to 9 leaves whereas a spanning tree for the prism leads to 6 leaves. To split 9 degrees of freedom into two sets of 6 degrees of freedom, we need to copy 3 of the initial 9 to both building blocks.
The previous requirement implies that the distribution of the leaves over the cube must be such that three leaves will be associated to one prism and three other leaves to the other prism. Furthermore, we need three ‘shared leaves’, that is three links of the graph across the boundary along which the cube is cut into prisms. These ‘shared leaves’ are the ones which are copied to both prisms. But cubes are cut in two different ways into prisms. To satisfy the requirement of having three leaves of ‘sharing type’ we need two different trees for the two different splittings. We will refer to the cubes cut in the two different ways as red and blue cubes.
A similar counting argument applies for the gluing procedure which determines the required number of leaves of ‘sharing type’.
Most of the links of the boundary graph are crossing an edge of the corresponding building block, i.e. they are included in two faces. These links can be cut into two half links. If the initial link happens to be a leaf we can extend the tree by choosing one of the half links as tree–link. Note that this does not change the assignment of a full link as ‘shared leaf’. This will be made more obvious in the discussion on the splitting procedure.
Group Fourier transform:
We will perform the gluing and splitting in the Fourier transformed picture since these operations boil down to matrix multiplications in this case. As explained in appendix B, this comes from the fact that the gluing of two amplitudes along two matching graph–links, obtained by integrating over a group variable, translates into a summation over the representation labels in the Fourier transformed picture. However, for such a translation to hold, it is necessary to introduce two definitions of the Fourier transform. These two definitions differ by a complex conjugation. We colour faces in white or grey to encode which convention applies and impose that only faces of different colours can be glued together. For a functional which depends on a single group variable, the Fourier transform is defined as
(38) 
where the primed sum includes a normalization , and are the representation matrix elements for an irreducible unitary representation . The inverse Fourier transform is given as
(39) 
Note that and are associated to the source and target node of the leaf respectively.
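For the simplest case, the cyclic group Z_N, all irreducible representations are one-dimensional, rho_k(g) = exp(2 pi i k g / N), and the group Fourier transform reduces to a discrete Fourier transform. The following sketch uses one common normalisation convention (the factor 1/N in the forward transform); the paper's convention, with its white/grey conjugation and normalisation details, may differ.

```python
import numpy as np

# Illustration: group Fourier transform on the cyclic group Z_N,
# where all irreps are one-dimensional, rho_k(g) = exp(2j*pi*k*g/N).
N = 6

def fourier(f):
    """f: array of length N with f[g]; returns fhat[k]."""
    g = np.arange(N)
    return np.array([np.sum(f * np.exp(-2j * np.pi * k * g / N)) / N
                     for k in range(N)])

def inv_fourier(fhat):
    """Inverse transform, summing over the representation labels k."""
    k = np.arange(N)
    return np.array([np.sum(fhat * np.exp(2j * np.pi * k * g / N))
                     for g in range(N)])

f = np.random.default_rng(3).random(N)
f_back = inv_fourier(fourier(f))       # round trip recovers f
```

For a non-abelian group the exponentials are replaced by matrix elements of the unitary irreps, and fhat acquires the two magnetic indices referred to above.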
IV.3 Basic steps of the algorithm
Here we discuss some basic procedures needed for the coarse–graining algorithm.
Change of spanning tree:
As discussed in the previous section, it is sometimes necessary to change the spanning tree between two steps of the algorithm. This change of spanning tree is more easily performed in the group representation. Let denote the loop–holonomies with respect to the old spanning tree and the loop–holonomies with respect to the new spanning tree. Both trees will have the same root vertex. Remember that the loop–holonomies are the gauge fixed representatives for the holonomies associated to a basis set of (rooted) cycles, determined by the spanning tree. Since both sets of cycles form a basis, we can express the loop–holonomies of one set as a combination
(40) 
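In the holonomy picture such a tree change therefore costs essentially nothing: the new loop–holonomies are just products (and inverses) of the old ones, with no integration or large matrix multiplication required. A minimal sketch, with group elements taken as 2D rotation matrices and the particular combination h1' = h1 h2^{-1} chosen purely for illustration:

```python
import numpy as np

# Sketch: a change of spanning tree re-expresses new loop-holonomies
# as products of old ones.  Group elements here are SO(2) rotations.
def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

h1, h2 = rot(0.3), rot(1.1)            # old loop-holonomies
h1_new = h1 @ np.linalg.inv(h2)        # a relabelling, not an integration
```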
Gluing building blocks:
Gluing two building blocks means identifying the two faces along which the blocks are glued. For the boundary graph, this requires a matching of the pieces of oriented graphs associated to these faces such that leaves are matched to leaves. There are then two situations depending if the support of the leaf is a link which is fully embedded in the face or a link which is only half embedded in the face.
In the first case, we integrate the product of the amplitudes of the two blocks and over the identified loop–holonomy :
(41) 
When only half the support of the leaf is embedded in the glued face, we can subdivide the corresponding link into pairs of half–links. We then only integrate over the holonomy associated with the half–link which is embedded in the glued face. Consider for instance figure 8, in which this division into half–links has already taken place. The ‘left’ and ‘right’ holonomies can thus be written
(42) 
The orientations of the half–links to be glued coincide, and we can thus simply sum over the shared group element :
(43) 
The glued amplitude is a function of two holonomies associated to two half–links with opposite orientation. This realizes a gluing of the leaves as the support of these holonomies is now contiguous. Furthermore, note that the glued amplitude inherits a gauge invariance at the common node such that
(44) 
We can therefore perform a gauge fixing, e.g. so that we deal with the glued amplitude which depends on a single group variable. Note that we could have applied this gauge fixing also before gluing the links. The choice of half–link to gauge fix then determines the orientation of the resulting link. Of course, the final orientation of the edge can always be changed and the amplitude accordingly transformed.
The gluing procedure in terms of holonomy variables (43) is cumbersome to implement numerically since it represents a type of convolution. Such a convolution involves a group multiplication as compared to a more direct summation over variables. Indeed it turns out that the Fourier transformed picture offers a more efficient way of computing such a gluing of leaves.
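This efficiency gain can be seen already in the simplest case. The following illustration (not the paper's model) takes the cyclic group Z_N, written additively: gluing two single-variable amplitudes over a shared group element is a group convolution, which in the Fourier picture becomes a pointwise product of the transformed amplitudes (a matrix product per irrep for non-abelian groups).

```python
import numpy as np

# Illustration for Z_N: gluing over a shared group element g is the
# group convolution  A(h) = sum_g A1[h - g] * A2[g],
# costing O(N) per output value in the holonomy picture.
N = 5
rng = np.random.default_rng(4)
A1, A2 = rng.random(N), rng.random(N)

A_glued = np.array([sum(A1[(h - g) % N] * A2[g] for g in range(N))
                    for h in range(N)])

# In the Fourier picture the same gluing is a pointwise product.
A_glued_fourier = np.fft.ifft(np.fft.fft(A1) * np.fft.fft(A2)).real
```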
As mentioned before, the Fourier transformation must follow the conventions encoded in the grey and white colouring of the faces. For the example presented in figure 8, the colouring imposes
(45) 
where the gauge fixing has already been used to set . Summing over gives
(46)  
Thus we can read off the Fourier transformed glued amplitude
(47) 
Notice that the Fourier transform convention we use for the glued amplitude is the one for white faces. This is consistent with the fact that we have chosen the gauge fixing . Indeed the remaining leaf is associated to the half–link embedded in the white face. Therefore, in the Fourier transformed picture we obtain the glued amplitude (modulo a rescaling by a dimensional factor) by identifying the representations such that and summing over the magnetic index associated to the node sitting in the glued face. The other magnetic indices are copied over for the new amplitude, that is is associated to the source node of the new leaf and is associated to the target node (see figure 8).
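For a non-abelian group this contraction over the magnetic index at the glued node is, per representation label, a matrix product. A hypothetical sketch (irrep labels, dimensions, and the overall normalisation are illustrative; the actual conventions follow the grey/white colouring described above):

```python
import numpy as np

# Sketch: in the Fourier picture the glued amplitude arises, for each
# fixed representation label rho, from a contraction over the magnetic
# index b at the node in the glued face:
#   A_glued[rho][a, c] ~ sum_b A1[rho][a, b] * A2[rho][b, c].
dims = {0: 1, 1: 2, 2: 3}                 # dim of each irrep (illustrative)
rng = np.random.default_rng(5)
A1 = {r: rng.random((d, d)) for r, d in dims.items()}
A2 = {r: rng.random((d, d)) for r, d in dims.items()}

A_glued = {r: A1[r] @ A2[r] for r in dims}   # sum over shared magnetic index
```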
This procedure can be generalized to all other cases (of differing orientations or differing colouring of faces). The precise contraction rule in the Fourier transformed picture can be derived with a calculation as in (46). Alternatively one can use a graphical derivation as in figure 8. The main point is that the glued Fourier transformed amplitude arises from summing the initial amplitudes over the magnetic index associated to the glued face. The gluing of building blocks is obtained by repeating the same operation for every leaf whose support is fully or partly embedded in the glued face. The latter ones are the so–called ‘shared leaves’.
Splitting:
At the beginning of each coarse–graining step, neighbouring cubes are split along two different planes. Depending on the choice of cutting plane, the cubes will be referred to as ‘blue’ and ‘red’ cubes. For a given cube, the plane goes through the diagonals of two opposite faces. The boundary graph is chosen such that its dual triangulation is consistent with the splitting: the cutting plane proceeds only along edges of the dual triangulation, without intersecting any.
As discussed in IV.2, the choice of spanning tree localizes the degrees of freedom. This localization is such that the same number of physical degrees of freedom is distributed between both prisms. We explained earlier that this requires having three shared leaves, which will be copied to both prisms. These shared leaves are precisely the ones intersected by the cutting plane. For instance, for the blue cube an appropriate spanning tree is depicted in figure 9. We have chosen a planar representation for the boundary of the cube and the blue line indicates the plane along which this boundary is cut.
The holonomy variables associated to the shared leaves are denoted by , and . As for the gluing, the splitting is better performed in the Fourier transformed picture. However it is only necessary to perform such a transformation for the shared leaves. Following the convention encoded in the grey/white colour of the faces, the transformation reads
(48) 
We can now perform the splitting which can be seen as an inverse procedure to the gluing. In particular, if we glue the amplitude obtained from the splitting we wish to reobtain (approximately) the original amplitude.
In the discussion of the gluing procedure, we saw that the representation labels associated with the leaves which are glued need to agree on the two building blocks. For the splitting procedure, this means that the representation labels and will act as parameters in the SVD–based splitting procedure. These are the analogue of the label in (31), where we discussed the 2D decorated tensor network coarse–graining procedure. In that case encoded the value of the scalar field on the corners of the to–be–split square, which were copied to both triangles.
From the gluing procedure we can also see how to split the magnetic indices for . These are distributed to the left and right prisms depending on the direction of the links carrying these leaves. If the target node of the (full) link belongs to the ‘left’ half of the cube we associate the index to the left prism, otherwise to the right. Similarly for the source node and the index .
Finally we have to distribute the holonomy variables to the two resulting prisms. This is determined by the position of the associated leaf: according to figure 9 we attribute to the left prism and to the right prism.
With these preliminaries settled we can define a family of matrices
(49) 
parametrized by an index , and with and . Using a singular value decomposition (see appendix A), we write this family of matrices as a family of products of matrices
(50) 
The and matrices encode the amplitudes for the left and right prisms respectively. We will comment on how to fix the value of in a moment.
We again wish to understand these amplitudes as functionals of holonomies defined on the boundary graphs of the prisms. We therefore need to choose a boundary graph for each of the two prisms, together with a white/grey colouring convention for the new face on each building block. Of course the choices for the two building blocks have to match, that is, we must be able to glue the two building blocks back along the new face. This means that one of the new faces must be white and the other one grey. Nevertheless, some freedom remains, for instance in which face to colour white. The choice we make is such that no change of colours is required in any of the subsequent steps.
For the boundary graph on the new face we choose the coarsest one possible which connects the four links entering or leaving the new face and which is three–valent. Therefore we only need to introduce an additional link (see figure 9). This completes the boundary graph for the prisms and determines the number of leaves to be six. These six leaves are already determined by the three shared leaves crossing into the new face and the other three leaves distributed over the remaining part of the prism. Thus the additional link added to the new face of a given prism is a tree–link. This fixes the spanning tree for the prisms.
We choose the minimal