We discuss a renormalization procedure for random tensor networks, and show that the corresponding renormalization-group flow is given by the Hamiltonian vector flow of the canonical tensor model, which is a discretized model of quantum gravity. This result generalizes our previous one concerning the relation between the Ising model on random networks and the canonical tensor model with N=2. We also prove a general theorem which relates the discontinuity of the renormalization-group flow to the phase transitions of random tensor networks.
Wilson’s renormalization group [1, 2] is an essential and pedagogical tool in modern theoretical physics. Once a renormalization-group flow in a parameter space is given, one can read off the relevant degrees of freedom at each step of coarse graining through the change of the parameters, and in principle understand the phase structure. Therefore, a renormalization-group flow gives us a quantitative and qualitative picture of the system concerned. The aim of this paper is to define a renormalization procedure and derive the corresponding flow equation for random tensor networks, in particular for those proposed as Feynman-graph expressions [3, 4], through the use of the canonical tensor model (CTM, for short).
First of all, CTM was introduced by one of the authors as a model of quantum gravity by considering space-time as a dynamical fuzzy space [5, 6, 7]. CTM is a tensor model in the canonical formalism, which has a canonically conjugate pair of rank-three tensors, M_{abc} and P_{abc} (a,b,c = 1,2,...,N), as dynamical variables. This interpretation of the tensorial variables in terms of a fuzzy space is different from the one made by the original tensor models. Historically, tensor models were introduced as models of simplicial quantum gravity in dimensions higher than two [8, 9, 10]; although the original tensor models have some drawbacks, tensor models as simplicial quantum gravity are currently in progress as colored tensor models [11, 12] (see the literature for recent developments). In CTM, N, the range of the indices of the rank-three tensors, may be interpreted as the number of “points” forming a space, while physical properties of space-time such as dimensions, locality, etc. must emerge from the collective dynamics of these “points.” So far, the physics of the small-N CTM is relatively well understood: the classical dynamics of CTM agrees with the minisuperspace approximation of general relativity in arbitrary dimensions; the exact physical states have been obtained in the full theory for small N [26, 27], as well as in a symmetric subsector; intriguingly, the physical-state wavefunctions, at least for small N, have singularities where symmetries are enhanced. However, a similar brute-force analysis for large N seems technically difficult because of the huge number of degrees of freedom of the tensorial variables, although the large-N dynamics is supposed to be important in order to capture, for instance, the emergence of space-time from CTM.
Thus, for the purpose of handling the large-N behaviors of CTM, the present authors have proposed the conjecture that statistical systems on random networks [3, 4, 28], or random tensor networks, are intimately related to CTM: the phase structure of random tensor networks is equivalent to what is implied by considering the Hamiltonian vector flow of CTM as the renormalization-group flow of random tensor networks. This conjecture has been checked qualitatively for N=2. In fact, as more or less desired, random tensor networks turn out to be useful for finding physical states of CTM with arbitrary N: some series of exact physical states for arbitrary N have been found as integral expressions based on random tensor networks.
In this paper, we prove the fundamental aspect of the above conjecture: we show that the Hamiltonian vector flow of CTM can be regarded as a renormalization-group flow of random tensor networks for general N. Here the key ingredient is that the Lagrange multipliers of the Hamiltonian vector flow are determined by the dynamics of random tensor networks in the manner given in this paper. This is in contrast with the previous treatment for N=2, in which the Lagrange multipliers were given by a “reasonable” choice. In fact, the previous treatment turns out to have some problems for general N, as will be discussed in this paper.
This paper is organized as follows. In Section 2, we review CTM and random tensor networks. We also review our previous proposal on the relation between CTM and random tensor networks, and discuss its potential problems. In Section 3, we propose a renormalization procedure for random tensor networks based on CTM, and derive the corresponding renormalization-group flow. In Section 4, we compare our new and previous proposals with the actual phase structures of random tensor networks for N=2 and 3. We find that the new proposal is consistent with the phase structures, while the previous one is not. In Section 5, we discuss the asymptotic behavior of the renormalization-group flow, and clarify the physical meaning of the renormalization parameter. In Section 6, we provide a general theorem which relates the discontinuity of the renormalization-group flow to the phase transitions of random tensor networks. Section 7 is devoted to a summary and discussions.
2 Previous proposal and its problems
where we have used the following short-hand notations,
By using Wick's theorem, the Gaussian integration in (1) can be evaluated by the summation over the pairwise contractions of all the integration variables in the integrand.
Then the partition function (1) can graphically be represented by the summation over all the possible closed networks of trivalent vertices. In each such network, every vertex is weighted by M, and the indices are contracted according to the connections of the network, as in Fig.1. Therefore, since the pairwise contractions are taken over all the possible ways, the statistical system represented by (1) can be regarded as random tensor networks of trivalent vertices. In general, such a network may contain disconnected sub-networks, but this probability vanishes in the thermodynamic limit.
where s represents the spin degrees of freedom taking s = ±1, H is a magnetic field, and A is a two-by-two matrix satisfying the condition (4), with J giving the nearest-neighbor coupling of the Ising model. For a ferromagnetic case J > 0, there exists a real matrix A satisfying (4).
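As a concrete check of the last statement, here is a minimal numerical sketch; the names A, B and the coupling J are our own notation for the bond-weight matrix, its real square root, and the nearest-neighbor coupling:

```python
import numpy as np

def bond_weight_sqrt(J):
    """Real symmetric square root A of the Ising bond weight
    B[s, s'] = exp(J * s * s'), with s, s' in {+1, -1}.
    For ferromagnetic J > 0 both eigenvalues of B are positive,
    so a real A with A @ A == B exists."""
    s = np.array([1.0, -1.0])
    B = np.exp(J * np.outer(s, s))
    w, V = np.linalg.eigh(B)            # eigenvalues: e^J + e^-J and e^J - e^-J
    A = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root, A @ A == B
    return A, B

A, B = bond_weight_sqrt(0.5)
print(np.allclose(A @ A, B), np.allclose(A, A.T))  # → True True
```

For J < 0 the smaller eigenvalue becomes negative and no real square root exists, which is why the text restricts to the ferromagnetic case.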
The partition function (1) is obviously invariant under the orthogonal transformation (5), which acts on M by rotating its indices, since the transformation can be absorbed by a redefinition of the integration variables. In addition, the overall scale transformation (6) of M, with an arbitrary real parameter, does not change the properties of the statistical system, since it merely changes the overall factor of (1). For example, for N=2, these invariances allow one to consider the gauge (7), with real parameters.
The free energy per vertex in the thermodynamic limit can be defined by (8), where we consider only even numbers of vertices, since an odd number of trivalent vertices cannot form a closed network. The phase structure of the statistical system can be investigated by studying the behavior of the free energy (8). For the case of N=2, the phase transition lines of the free energy (8) in the gauge (7) are shown by the solid lines in Fig.2 [3, 4].
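Schematically, writing Z_n for the weighted sum over closed networks with n vertices (our notation), the definition (8) takes the standard form of a free energy per vertex:

```latex
\[
f(M) \;=\; \lim_{\substack{n \to \infty \\ n\ \text{even}}} \frac{1}{n}\, \log Z_n(M).
\]
```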
The transitions are of first order, except at the Curie point, where the first derivatives of the free energy are continuous but the second ones are not [3, 4, 28].
In fact, for arbitrary N, the free energy can be obtained exactly by applying the Laplace method to evaluate (8) [3, 4]. The result is (9), where the value φ̄ of the integration variables is defined so as to minimize (10) as a function of those variables for given M; namely, φ̄ is one of the solutions to the stationary condition (11).
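As an illustration of how a stationary condition of this type can be solved numerically, the sketch below assumes a schematic action of the form φ·φ − M_{abc}φ_aφ_bφ_c (our toy stand-in, not necessarily the paper's exact expression); its stationary condition reads 2φ_a = 3M_{abc}φ_bφ_c:

```python
import numpy as np
from scipy.optimize import fsolve

def stationary_phi(M, phi0):
    """Solve 2*phi_a = 3*M_abc*phi_b*phi_c, the stationary condition of the
    toy action phi.phi - M_abc phi_a phi_b phi_c (a stand-in for Eq. (11))."""
    grad = lambda phi: 2.0 * phi - 3.0 * np.einsum('abc,b,c->a', M, phi, phi)
    return fsolve(grad, phi0)

# one-component check: for M = m the nontrivial solution is phi = 2/(3*m)
m = 0.4
phi_bar = stationary_phi(np.full((1, 1, 1), m), np.array([1.5]))
print(np.isclose(phi_bar[0], 2.0 / (3.0 * m)))  # → True
```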
The most important implication of the previous paper was that the phase structure of the Ising model on random networks (more precisely, random tensor networks with N=2) in Fig.2 can be derived from the Hamiltonian vector flow of CTM for N=2, if one regards the Hamiltonian vector flow as a renormalization-group flow, as shown in Fig.2. This is surprising, since CTM was proposed aiming for quantum gravity, and there exist no apparent reasons for CTM to be related to statistical systems on random networks. CTM is a totally constrained system with a number of first-class constraints forming an algebra which resembles the Dirac algebra of the ADM formalism of general relativity. In the classical case, the constraints are given by
where J_{[ab]} and D are the kinematical symmetry generators corresponding to the orthogonal (5) and the scale (6) transformations, respectively, and H_a and J_{[ab]} may be called the Hamiltonian and momentum constraints, respectively, in analogy with general relativity. Here the bracket for the indices of J_{[ab]} symbolically represents the antisymmetry, J_{[ab]} = −J_{[ba]}, and P_{abc} is the canonical conjugate variable to M_{abc} defined by
where the bracket denotes the Poisson bracket, and the summation runs over all the permutations of the indices to incorporate the symmetric property of the three-index tensors. The constraints form a first-class constraint algebra,
where , and .
The Hamiltonian of CTM is given by an arbitrary linear combination of the constraints as (20), where the n’s are the multipliers, which may depend on M in the context of this paper, since we consider a flow in the configuration space of M.
Then, the Hamiltonian vector flow is given by
where τ is a fictitious parameter along the flow. In the previous paper, which compares CTM with the random tensor networks for N=2, the multiplier is chosen on the grounds that it is the simplest covariant choice. The other multipliers, related to the symmetry generators, are chosen so that the Hamiltonian vector flow (21) keeps the gauge condition (7). Indeed, the flow in Fig.2 has been drawn with these choices. One can also check that other covariant choices do not change the qualitative nature of the flow, and therefore the coincidence between the phase structure of the random tensor networks and the one implied by CTM for N=2 remains.
Though the coincidence is remarkable for N=2, from further study generalizing the gauge conditions and the values of N, we have noticed that there exist some problems in insisting on the coincidence, as follows:
First of all, no physical reason has been given for the coincidence. A primary expectation is that there exists a renormalization-group procedure for statistical systems on random networks, and that the procedure is described by the Hamiltonian of CTM in some manner. However, it is unclear how one can define a renormalization-group procedure for statistical systems on random networks, which do not have regular lattice-like structures.
As will explicitly be shown later, in the case of N=3, the phase transition lines deviate from the expectation from the Hamiltonian vector flow of CTM. What is worse, different choices of the multiplier give qualitatively different Hamiltonian vector flows, which ruins the predictability of the transition lines from the flow.
In Fig.2 for N=2, the flow goes along the phase transition lines, and there exist a few fixed points of the flow on the transition lines. One of them is a codimension-two fixed point, and the associated phase transition is expected to be of second order rather than first order, if the flow is rigidly interpreted as a renormalization-group flow and we follow the standard criterion. This is in contradiction with the actual order of the phase transition. The contradiction is more apparent in the diagram in another gauge in Section 4.
The purpose of the present paper is to solve all the above problems, and to show that CTM actually gives an exact correspondence to random tensor networks. It turns out that the multipliers should not be given by any “reasonable” choices as above, but should rather be determined dynamically, as will be discussed in the following sections. Then, we can show that the Hamiltonian of CTM actually describes a coarse-graining procedure of random tensor networks, and that the Hamiltonian vector flow is in perfect agreement with the phase structure irrespective of the value of N.
3 Renormalization procedure and renormalization-group flow
In this section, we discuss a renormalization-group procedure for the random tensor networks, and obtain the corresponding renormalization-group flow.
Let us consider an operator whose action is given by (24), where the expectation value is defined by (25), and where the numerical factor in the first term of (24) is due to the rescaling of the integration variables in the exponential.
Here (24) and (25) must be used with caution. If taken literally, since the expectation value vanishes, (24) and (25) do not seem useful by themselves. The reason is that the contributions cancel each other in the integration of (1). To avoid this cancellation and make (24) and (25) useful, let us consider a small finite region in the space of the integration variables around one of the solutions φ̄ which minimize (10). For later convenience, we take the sign of φ̄ so as to satisfy (26). This choice is possible because, with the opposite sign, (10) diverges and cannot be the minimum. In particular, the region should not contain the other minima. Then, let us consider the replacement
In the thermodynamic limit, the integral (27) is dominated by a narrow region around φ̄. Taking into account the discussion above, in (24) the first term dominates over the second, and one can safely regard the operator as one which increases the size of the networks. To regard this operation as a flow in the space of M rather than as a discrete step of increasing the network size, let us introduce the following operator with a continuous parameter τ,
In the thermodynamic limit, we can well approximate the operation with the first term of (24), as explained above, and one obtains (29). By increasing τ, the right-hand side becomes dominated by larger networks, and diverges at τ = τ_c, which is given by (30).
On the other hand, in the thermodynamic limit, the left-hand side of (29) can be computed in a different manner: the integration variables can be replaced with the mean value φ̄, and the operator can be identified with a first-order partial differential operator, where the derivative is taken with respect to M with an appropriate normalization.
The solution to (33) can be obtained by the method of characteristics and is given by (34), where M(τ) is a solution to the flow equation (35), in which φ̄ must be regarded as a function of M, given by a solution to the stationary condition (11).
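The statement above is the standard method-of-characteristics identity: the exponential of a first-order differential operator translates its argument along the integral curves of the associated vector field. Schematically, writing A for the coefficient functions (our generic notation),

```latex
\[
e^{\tau A_{abc}(M)\,\partial/\partial M_{abc}}\,F(M) \;=\; F\big(M(\tau)\big),
\qquad
\frac{dM_{abc}(\tau)}{d\tau} \;=\; A_{abc}\big(M(\tau)\big),
\qquad M(0) = M .
\]
```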
Here we summarize what we have obtained from the above discussions. The same quantity can be evaluated in two different ways. One is (29), a summation over random tensor networks, the dominant size of which increases as τ increases, while M is unchanged. The other is (34), where M changes according to the flow equation (35), while the size of the random networks is unchanged. This means that the change of the size of the networks can be translated into a change of M. Therefore, the flow of M in (35) can be regarded as a renormalization-group flow of the random tensor networks, where increasing τ corresponds to the infrared direction.
The above derivation of the renormalization-group flow uses the particular form of the Hamiltonian constraint in (12). Since, in general, there exist various schemes for renormalization procedures for statistical systems, one might suspect that there would be other possible forms describing renormalization procedures for random tensor networks. However, this is unlikely, and the form (12) appears to be the unique and simplest one. The reason for the uniqueness is that, as outlined in Section 2, the algebraic consistency of the constraints with the symmetry of random tensor networks in the form (1) requires the unique form (12) under some physically reasonable assumptions. On the other hand, the reason it is the simplest can be found by considering the diagrammatic meaning of the operation in (23): it acts on a vertex as shown in Fig.3, and hence can be regarded as an operator which inserts a vertex on an arbitrary connection in a network. This is obviously the most fundamental operation which increases the number of vertices of a network.
Here we comment on our new proposal in comparison with the previous one. Our main claim is that the multiplier should be determined dynamically, rather than by the “reasonable” choices taken in the previous proposal. With the dynamical determination, the Hamiltonian vector flow is unique, while the “reasonable” choices are ambiguous. Even so, there are no problems in the N=2 case, since there are no qualitative changes of the flow among the “reasonable” choices, and the phase structure can uniquely be determined from the flow. However, as will be shown in Section 4, this is not true in general for larger N. In fact, N=2 is special for the following reasons. The dynamical multiplier can be well approximated by the “reasonable” choices near the absorbing fixed points in Fig.2, because all of them can be shown to be gauge-equivalent to one another. This means that at least an approximate phase structure can be obtained even with the “reasonable” choices. In addition, what makes the N=2 case very special is that the phase transition lines are the fixed points of the symmetry corresponding to reversing the sign of the magnetic field of the Ising model on random networks. Therefore, the phase transition lines are protected by the symmetry, which stabilizes the qualitative properties of the flow under any changes of the multiplier respecting the symmetry.
In this section, we check the proposal of this paper in the cases of N=2 and 3 by comparing with the phase structures of the random tensor networks.
Let us first consider the N=2 case with the gauge (40) as a typical example. The difference from (7) is the gauge-fixing value. One can obtain the phase structure in the parameter space by studying the free energy (9). Alternatively, one can apply the orthogonal and scale transformations, (5) and (6), on M so that the phase structure in the gauge (7) given in Fig.2 is transformed to that in the gauge (40). In either way, one can determine the phase structure in the new gauge, and the result is given in Fig.4. Here we draw the Hamiltonian vector flows based on the former proposal and on our new proposal in the left and right figures, respectively.
The rough features of the two flows based on the different proposals seem consistent with the phase structure: the flows depart from the transition lines and go into the same absorbing fixed points. This was the main argument in our previous paper. However, there are some physically important differences in the details between the left and the right figures. In the left figure, the flow goes along the phase transition lines, and there exist a few fixed points of the flow on the transition lines. If the flow is strictly interpreted as a renormalization-group flow, the phase transition line on the right-hand side of one of these fixed points is expected to be of second order, rather than first order, since the points on both sides of the transition line in its vicinity flow to the same fixed point without any discontinuities. On the other hand, in the right figure, the flow has discontinuities on the transition lines, except at the Curie point at the endpoint of the transition line. Thus, the flow based on our former proposal clearly contradicts the actual order of the phase transitions, while the one based on our new proposal agrees with it, i.e., first order except at the Curie point.
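Flow diagrams like those in Fig.4 are obtained by integrating the flow equation from many initial points. Below is a minimal sketch of this procedure, using an illustrative two-dimensional vector field with one absorbing fixed point, not the actual CTM flow:

```python
import numpy as np
from scipy.integrate import solve_ivp

def toy_flow(t, x):
    """Illustrative vector field with an absorbing fixed point at (1, 0);
    the actual computation would use the right-hand side of (35)."""
    return [x[0] * (1.0 - x[0]), -x[1]]

# trace one renormalization trajectory toward the fixed point
sol = solve_ivp(toy_flow, (0.0, 20.0), [0.2, 0.5], rtol=1e-8)
print(np.round(sol.y[:, -1], 3))  # ends close to the fixed point [1, 0]
```

Repeating this from a grid of initial points and plotting the trajectories yields a flow diagram of the kind shown in the figures.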
An interesting property of the flow is that it does not vanish even at the Curie point, as can be seen in the right figure of Fig.4 and can also be checked numerically. This seems curious, because the second derivatives of the free energy diverge at this point. In a statistical system on a regular lattice, such divergences originate from an infinite correlation length, and such a point typically becomes a fixed point of a renormalization-group flow. On the other hand, the correlation length of the Ising model on random networks is known to be finite even at the Curie point. This means that, even starting from the Curie point, a renormalization process will bring the system to one with a vanishing correlation length. This implies that the Curie point cannot be a fixed point of a renormalization-group flow, and this is correctly reflected in the fact that our flow does not vanish at the Curie point.
Let us next consider the N=3 case. There seem to exist too many parameters to treat this case in full generality, so let us specifically consider a subspace parametrized as in (41), which is invariant under the transformations permuting the index labels 1, 2, 3. Through numerical study of the free energy (9) (and some analytic considerations), one can obtain the phase structure shown in Fig.5. In the indicated parameter region, there exist two regions of a symmetric phase, labeled by S. There also exist two distinct non-symmetric phases, labeled by NS1 and NS2. At any point in the two regions, the minimization of the free energy (10) has three distinct non-symmetric solutions, and hence three distinct phases coexist in these regions. When the symmetric subspace (41) is extended to more general cases, each of NS1 and NS2 becomes the common phase boundary of the three phases.
The flow in the left figure of Fig.5 is drawn based on our previous proposal. There, the flow is not in good agreement with the phase structure, though it seems to capture some rough features. We tried other “reasonable” choices of the multiplier, but the flow depended on the choices, and no good agreement could be found. On the other hand, in the right figure based on our new proposal, the flow in the symmetric region S is consistent with the phase structure: the flow departs from the transition lines, and, since it does not vanish on the lines, the transitions are expected to be of first order. This is in agreement with the properties of the free energy except at one point, where the free energy is continuous in the first derivatives but singular in the second. However, since the flow does not vanish at this point, the correlation length is expected to be finite. This is similar to the Curie point of the N=2 case.
In the non-symmetric phases NS1 and NS2, the expectation values are not symmetric. Therefore, the flow generally has directions away from the symmetric subspace, and cannot be drawn on the figure. To also check the consistency of the flow in these regions, it would be necessary to extend the parameter region, which would require a different systematic strategy to avoid too many parameters. In Section 6, we take another route for the consistency check by proving a theorem relating the renormalization-group flow to the phase transitions of the random tensor networks.
5 Asymptotic behavior of the flow
In Section 3, we argued that the Hamiltonian of CTM provides a renormalization procedure for the random tensor networks. As can be seen from (29) and (34), the flow diverges in the limit τ → τ_c defined in (30). On the other hand, in the numerical analysis of Section 4, M is kept normalized as in (40) and (41) by appropriately tuning the multiplier of the scale transformation. As in Figs.4 and 5, one can find fixed points of the normalized flows in the limit where their fictitious parameter is taken to infinity. In this section, we show that these two limits are physically equivalent.
In (42), the right-hand side is regarded as a function of τ along the flow. The solution (43) to (42) indeed diverges at τ = τ_c. Since M is kept normalized, the divergence of (43) can be translated into a divergence of the multiplier with the behavior (44), or of higher order in the case that some components of M vanish in the limit.
Now let us compare the two flows, the unnormalized and the normalized one. For notational simplicity, let us collectively denote the three indices of M by one index. The flow equations of the two cases can respectively be expressed as (45) and (46), where the multiplier generally depends on M, and the quadratic polynomial functions can be read off from (35). The last term of the second equation comes from the scale generator D in (20), and its coefficient is assumed to be tuned so as to satisfy a gauge condition normalizing M. Here we ignore the generators J_{[ab]} for simplicity, but it is not difficult to extend the following proof to include them.
The physical properties of the random tensor network do not depend on the overall scale of M. So let us define the relative values of the flowing variables in the two cases as in (47) and, in the same manner, (48).
Note that the last term of (46) does not contribute to the flow equation of the relative values. Since the initial conditions coincide by (47) and (48), and the right-hand sides of (49) and (50) are identical, the flow equations (49) and (50) describe an identical flow, up to the transformation (51) between the fictitious parameters. Therefore, the two limits are physically equivalent. As can easily be checked, this conclusion does not change even if the divergence is of an order higher than (44).
To investigate the physical meaning of the fictitious parameter τ, let us estimate (29) near τ_c. We obtain (52), and the average size of the networks can then be estimated as (53). This means that τ corresponds to the standard renormalization-group scale parameter of field theory.
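The content of such an estimate can be sketched under a mild assumption on the large-n behavior of the n-vertex amplitudes, Z_n ∼ C n^{−γ} e^{−nτ_c} with γ < 1 (our notation, not taken from the text):

```latex
\[
G(\tau) \;=\; \sum_{n} Z_n\, e^{n\tau}
\quad\Longrightarrow\quad
\langle n \rangle \;=\; \frac{\partial}{\partial\tau} \log G(\tau)
\;\sim\; \frac{1-\gamma}{\tau_c-\tau} \;\longrightarrow\; \infty
\qquad (\tau \to \tau_c),
\]
```

so the average network size diverges as τ approaches τ_c, consistent with reading τ as an infrared scale parameter.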
6 Discontinuity of the renormalization-group flow and phase transitions
In Section 4, we saw that the renormalization-group flow has discontinuities on the first-order phase transition lines in the examples of the random tensor networks. In this section, we prove a general theorem on this aspect.
By using the free energy in the thermodynamic limit (9), the stationary condition (11), and the flow equation (35), we can prove the following theorem.
Theorem: The following three statements are equivalent.
(i) The first derivatives of the free energy are continuous at M.
(ii) φ̄ is continuous at M.
(iii) The renormalization-group flow is continuous at M.
Proof: Let us first prove (ii) ⇒ (i). From (9), the first derivatives of the free energy are given by (56), where we have used a relation derived from (11).
Here note that the combination appearing in (9) is continuous at any M, because the free energy itself is continuous at any M.
The reverse, (i) ⇒ (ii), is obviously true from (56). Therefore, the statements (i) and (ii) are equivalent.
Next, as for (ii) ⇒ (iii), it is obvious that, if φ̄ is continuous, the renormalization-group flow (35) is also continuous.
Finally, let us prove (iii) ⇒ (ii), which will complete the proof of the theorem. To prove this, we show that there is a contradiction if we assume both (iii) and that φ̄ has a discontinuity.
Let us suppose that φ̄ is discontinuous at a point M. Then, from the definition of φ̄, there exist multiple distinct solutions to (11) which give the same minimum of (10) at M. Let us take any two of them. As shown above, the minimum value is continuous at any point, which means
On the other hand, the assumption (iii) and (35) imply (60). Then, by contracting (60) with the difference of the two solutions and using (62), we obtain that the two solutions coincide, which contradicts the initial assumption of the existence of the discontinuity.
Consequently, we have proven the equivalence of (i), (ii) and (iii).
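The equivalence (i) ⇔ (ii) can be illustrated with a one-dimensional toy minimization (our own example, unrelated to the specific free energy (10)): the global minimizer of f(x) = x⁴ − x² − hx jumps as h crosses 0, and correspondingly the first derivative of the minimized value with respect to h is discontinuous there, i.e. a first-order transition.

```python
import numpy as np

def minimizer(h):
    """Global minimizer x_bar(h) of the toy potential f(x) = x^4 - x^2 - h*x,
    found on a fine grid; x_bar plays the role of phi_bar in the theorem."""
    xs = np.linspace(-2.0, 2.0, 400001)
    f = xs**4 - xs**2 - h * xs
    return xs[np.argmin(f)]

# x_bar jumps between the two degenerate minima +-1/sqrt(2) as h crosses 0:
print(minimizer(+1e-3), minimizer(-1e-3))  # ≈ 0.707 and -0.707
```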
By taking contrapositives, a corollary of the theorem is given by
Corollary 1: The following three statements are equivalent.
M is a first-order phase transition point, i.e., not all of the first derivatives of the free energy are continuous at M.
φ̄ is not continuous at M.
The renormalization-group flow is not continuous at M.
Another corollary of physical interest is
Corollary 2: If M is a phase transition point of order higher than first, the renormalization-group flow is continuous at the critical point.
The qualitative behavior of the renormalization-group flow shown in the right figure of Fig.4 respects the theorem and the corollaries, as it should: Corollary 1 is realized on the phase transition lines, and Corollary 2 at the Curie point.
7 Summary and discussions
In the previous paper, it was found that the phase structure of the Ising model on random networks (or random tensor networks with N=2) can be derived from the canonical tensor model (CTM), if the Hamiltonian vector flow of CTM is regarded as a renormalization-group flow of the Ising model on random networks. This was a surprise, since CTM had been developed aiming for a model of quantum gravity in the Hamiltonian formalism [7, 6, 5]. Considering the serious lack of real experiments on quantum gravity, the possibility that CTM may link quantum gravity to concrete statistical systems is encouraging.
The main achievement of the present paper is to have shown that the Hamiltonian vector flow of CTM with arbitrary N gives a renormalization-group flow of random tensor networks, where the N=2 case, in particular, corresponds to the Ising model on random networks. In the previous paper, we considered the Hamiltonian of CTM with “reasonable” choices of the multipliers. Though this was successful in the N=2 case, we have shown in this paper that the previous procedure does not work for general N, and have argued that the correct multipliers are determined dynamically by the random tensor networks. An advantage of the present procedure is that the multiplier is uniquely determined by the dynamics of random tensor networks, rather than by the ambiguous “reasonable” choices of the previous procedure. In fact, applied to random tensor networks, the Hamiltonian is an operator which randomly inserts vertices on connecting lines, and therefore increases the sizes of tensor networks. This provides an intuitive understanding of the role of the Hamiltonian of CTM as a coarse-graining procedure.