Renormalization group flows of Hamiltonians using tensor networks
Abstract
A renormalization group flow of Hamiltonians for two-dimensional classical partition functions is constructed using tensor networks. Similar to tensor network renormalization ([G. Evenbly and G. Vidal, Phys. Rev. Lett. 115, 180405 (2015)], [S. Yang, Z.-C. Gu, and X.-G. Wen, Phys. Rev. Lett. 118, 110504 (2017)]) we obtain approximate fixed point tensor networks at criticality. Our formalism however preserves positivity of the tensors at every step and hence yields an interpretation in terms of Hamiltonian flows. We emphasize that the key difference between tensor network approaches and Kadanoff’s spin blocking method can be understood in terms of a change of local basis at every decimation step, a property which is crucial to overcome the area law of mutual information. We derive algebraic relations for fixed point tensors, calculate critical exponents, and benchmark our method on the Ising model and the six-vertex model.
pacs: 02.70.-c, 05.10.Cc, 05.20.-y, 05.50.+q, 75.10.Hk

Introduction — The study of phase transitions and critical properties of lattice models has long been at the center of statistical physics. Universal properties of critical systems can be captured by conformal field theories (CFTs), which act as low-energy effective descriptions of critical models, and whose scaling dimensions can be related to the critical exponents of asymptotic correlation functions. One way to gain insight into these phenomena is through real-space renormalization group (RG) methods, which predate the development of the modern renormalization group and can be traced back to Kadanoff’s block spin procedure Kadanoff (1966). In his treatment of block spin methods on the lattice, Wilson emphasized that one should be able to do precise numerical calculations using pure RG methods combined with approximations based only on locality Wilson (1975). For real-space RG to work, the effective Hamiltonian at every step should be dominated by short-range interactions as interactions of arbitrary complexity are generated in subsequent iterations. Additionally, the calculation of any particular term in the coarse-grained Hamiltonian should involve but a small number of fine-grained spins.
Tensor networks are efficient, local, real-space variational ansätze for many-body wavefunctions, which are constructed by mimicking the spatial distribution of entanglement and correlations. Renormalization group methods based on tensor networks satisfy Wilson’s requirements insofar as their inherent real-space locality and finite bond dimension restrict the range of newly generated effective interactions and provide a controlled approximation that can be systematically improved.
For two-dimensional lattice systems, the tensor renormalization group (TRG) algorithm Levin and Nave (2007); Gu et al. (2008) put the idea of tensor network renormalization (TNR) into practice in a most explicit way. Wholly based on truncation using singular value decomposition (SVD), this algorithm works extremely well for gapped systems because of the same entanglement reasons that explain the success of the density matrix renormalization group (DMRG) for quantum spin chains Levin and Nave (2007). Despite remarkable accuracy in determining critical exponents for finite systems, none of the methods based on TRG Xie et al. (2009); Zhao et al. (2010); Xie et al. (2012) is sustainable in the sense that it is capable of yielding true (approximate) fixed point tensors at criticality Evenbly and Vidal (2015a). Recently, novel TNR algorithms respectively based on the multi-scale entanglement renormalization ansatz (MERA-TNR) Evenbly and Vidal (2015a, b); Evenbly (2017); Evenbly and Vidal (2016) and matrix product states (Loop-TNR) Yang et al. (2017) have been developed which do manage to flow to approximate fixed point tensors, even at criticality. Our work has been inspired by the latter proposal which formulates TNR in terms of periodic matrix product states (MPS). For the 2D classical Ising model, impressive numerical results have been obtained that seem to defy the breakdown of TRG at criticality.
In this paper, we demonstrate how tensor networks can be used to achieve explicit real-space RG flows in the space of classical Hamiltonians. To this end, we have developed a sustainable and manifestly nonnegative TNR method to coarse-grain classical partition functions. By virtue of the element-wise nonnegativity of all tensors involved, we can explicitly associate a Hamiltonian to the fixed point tensors of the RG flow generated by our algorithm. We thus believe our work opens up the possibility to begin to address one of the main concerns raised by the traditional real-space RG community about all TNR schemes: the lack of an insightful RG interpretation of what are essentially supposed to be real-space RG methods.
Tensor network renormalization — The salient features shared by all TNR algorithms developed up to now are twofold. First, the breaking apart of the tensor product structure, which was introduced in TRG by splitting tensors using SVD, is crucial to the construction of new effective degrees of freedom and the removal of correlations. The reason why Kadanoff’s spin blocking fails can be traced back to the bounds on correlations imposed by the mutual information between a block and its environment. In order to overcome this barrier, it is essential to reorganize degrees of freedom by doing a local basis transformation. Secondly, both MERA-TNR and Loop-TNR address the additional need to extend the domain of the coarse-graining step to act on a block of sites in order to remove intra-block correlations. The disentangling power of both MERA-TNR and Loop-TNR stems from surrounding a block of sites with a coarse-graining operator. This explains, for instance, why there is no way for TRG, which acts locally on each site, to detect the short-range correlations that it sets out to remove at criticality.
Coarse-graining nonnegative tensor networks — Consider a two-dimensional bipartite square lattice of classical spins described by an energy functional $H(\{s\})$. The classical statistical partition function is then given by
$$Z = \sum_{\{s\}} e^{-\beta H(\{s\})} = e^{-\beta F}, \qquad (1)$$
where $F$ denotes the free energy. If we imagine the spins living on the vertices of the lattice, the Boltzmann weight of a site depends on the configuration of the bonds connected to the site. We can write these probabilities as a rank-four tensor $A$, so that the sum over all configurations in the partition function boils down to contracting a nonnegative tensor network,
$$Z = \mathrm{tTr} \bigotimes_i A_i, \qquad (2)$$
where the tensor trace $\mathrm{tTr}$ denotes the contraction of all connected bond indices. By coarse-graining tensor networks, we then refer to a real-space RG procedure constructing a sequence of partition functions $Z^{(0)}, Z^{(1)}, \ldots$, where each effective partition function is defined on a coarser lattice than the one before, until we are left with a single effective site. If we now want to additionally retain element-wise nonnegativity of all involved tensors at every step, we cannot resort to using SVD, which is the backbone of all other TNR approaches. Instead, we are led to nonnegative matrix factorization (NMF) algorithms sup () to approximate the following matrix factorization problem: given an element-wise nonnegative matrix $M$ and a rank $r$, find the nonnegative matrices $X$ and $Y$ minimizing $\|M - XY\|_F$.
Now let us focus on a block of four adjacent sites (Fig. 1(a)), which we, following Yang et al. (2017), interpret as a periodic four-site matrix product state (MPS) with appropriate physical and virtual dimensions. The local coarse-graining procedure then proceeds according to the canonical real space RG steps by (i) introducing new effective degrees of freedom, which here involves approximating the local block with an ansatz that has a different tensor product structure in order to remove short-range correlations (Fig. 1(b)), and (ii) summing over old degrees of freedom by recombining the optimized tensors into new coarse-grained tensors (Fig. 1(d)). The virtual dimension in step (i) can be increased at will, which in turn determines the local dimension of the degrees of freedom living on the new lattice. While step (ii) explicitly sums over the old outer (physical) degrees of freedom to construct the coarse-grained tensors, step (i) also contains an implicit summation over the old inner (virtual) degrees of freedom. After a single RG step, the roles of the physical and virtual MPS dimensions have interchanged and the linear dimension of the lattice is reduced by a factor of $\sqrt{2}$. The tensors in Fig. 1(e) then serve as input to the next step, where we take into account that we have to break up the tensor product structure again. Notice that in Fig. 1(c) we identify the coarse-grained lattice with the “vertex” configuration inside the dashed bounding box and not the “plaquette” configuration inside the dotted one. Even though a priori they look similar, the latter configuration leads to worse numerics, which can be understood by its not being able to remove short-range correlations of the corner double-line (CDL) form sup ().
Renormalization group flow — In Fig. 1(f) we have depicted the tensor network generated by the action of TNR on an open boundary of the lattice. In much the same way as TRG produces a tree tensor network and MERA-TNR a multi-scale entanglement renormalization ansatz Evenbly and Vidal (2015b), our TNR algorithm builds up a nonnegative tensor network approximation to the leading eigenvector of the transfer matrix. Given the nonnegativity and the alternating pattern of one iteration “disentangling” (blue tensors) and the next one reducing the degrees of freedom (green tensors), TNR can be said to generate a stochastic MERA,
(3)
In the scale invariant regime of the RG flow, this MPO is identified with the radial transfer matrix Evenbly and Vidal (2015b), which can be diagonalized. Here, the scaling dimensions and approximate lattice representations of the primary fields and descendants of the underlying CFT description are found only if the relative gauge freedom of the coarse-grained partition functions has been fixed, i.e. if the degrees of freedom we deem equivalent after two iterations do indeed match sup (). For critical systems, we thus end up with a window of an approximately invariant alternating sequence of partition functions after the initial part of the flow has sufficiently suppressed irrelevant lattice details and up until the accumulated truncation errors eventually prevail.
We can furthermore consider the fixed point equations of TNR as an algebraic set of equations in their own right by finding tensors which (approximately) satisfy
(4)
Exact solutions of these equations include trivial product states and GHZ states corresponding to gapped infrared fixed points, potentially with symmetry breaking. Including additional symmetry constraints, there might exist non-trivial solutions which approximately yet accurately satisfy the RG fixed point equations. The sets of these solutions and their stability under perturbations could then point towards the conditions required for a classification of all possible (approximate) RG fixed points of TNR schemes.
Application to classical partition functions — We have benchmarked our algorithm on the classical Ising model and the six-vertex model. The partition function of the ferromagnetic Ising model can be encoded by associating a tensor to each vertex whose entries collect the Boltzmann weights $e^{\beta s_i s_j}$ of the interactions between neighboring spins $s_i$ and $s_j$. The Ising model exhibits a phase transition at the critical temperature $T_c = 2/\ln(1+\sqrt{2})$ described by a free fermion CFT, separating the symmetry breaking phase for $T < T_c$ from a trivial disordered phase for $T > T_c$. The partition function of the zero-field six-vertex model can be written in terms of the non-vanishing tensor elements $a$, $b$, and $c$, which denote the Boltzmann weights of the allowed bond configurations. In terms of the parameter $\Delta = (a^2 + b^2 - c^2)/(2ab)$, the six-vertex model has a phase boundary determined by $|\Delta| = 1$ which separates four phases: two ferroelectric phases for $\Delta > 1$, an antiferroelectric phase for $\Delta < -1$, and a gapless disordered phase for $-1 < \Delta < 1$. The six-vertex model belongs to special classes of Hamiltonians which violate the universality hypothesis in that its phase diagram contains a continuum of critical points with continuously varying critical exponents captured by a free boson CFT. In what follows, we will consider the example of spin ice, i.e. $a = b = c$ and $\Delta = 1/2$.
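To make the vertex-tensor encoding concrete, here is a minimal sketch (our own illustration, not the authors' code) that builds an Ising vertex tensor by splitting the bond Boltzmann matrix, and checks the resulting contraction against a brute-force spin sum on a 2x2 periodic lattice:

```python
import itertools
import numpy as np

beta = 0.4  # inverse temperature (arbitrary, off-critical)

# Bond Boltzmann matrix Q[s, s'] = exp(beta * s * s') for spins s = +/-1,
# split as Q = W W^T so each vertex receives one W factor per adjacent bond.
Q = np.exp(beta * np.array([[1.0, -1.0], [-1.0, 1.0]]))
w, v = np.linalg.eigh(Q)        # eigenvalues 2*sinh(beta), 2*cosh(beta) > 0
W = v @ np.diag(np.sqrt(w))     # Q = W @ W.T

# Rank-four vertex tensor A[u, r, d, l]: sum over the spin at the vertex.
A = np.einsum('su,sr,sd,sl->urdl', W, W, W, W)

# Contract a 2x2 lattice with periodic boundaries (8 bonds in total).
Z_tn = np.einsum('abce,fegb,chai,gifh->', A, A, A, A)

# Brute-force partition function over the 4 spins (each bond appears twice
# on a 2x2 periodic lattice).
Z_exact = sum(
    np.exp(beta * 2 * (s00*s01 + s10*s11 + s00*s10 + s01*s11))
    for s00, s01, s10, s11 in itertools.product([1, -1], repeat=4)
)
```

Both numbers agree to machine precision, confirming that contracting the element-wise nonnegative vertex tensors reproduces the spin sum.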
In Fig. 2(a), the relative error of the free energy per site is plotted at criticality as a function of the bond dimension. We observe very accurate free energies, with the difference in accuracy between the simulations of the two models reflecting the less trivial nature of the six-vertex model. To study the implicit approximate scale invariance of the RG flow, we calculate the smallest scaling dimensions from the linear transfer matrix MPO constructed from effective partition function tensors,
(5)
as a function of system size (or, equivalently, iteration step) in Fig. 2(c,d) sup (). We observe that the numerically obtained implicit fixed point is stable under subsequent coarse-graining and remains so for a prolonged number of steps, in agreement with other TNR approaches Evenbly and Vidal (2015a); Yang et al. (2017).
Table: The smallest scaling dimensions obtained at criticality for the Ising model and for spin ice, compared to the exact CFT values.

exact | Ising TNR | exact | Spin ice TNR
0.125 | 0.125236 | 1/6 | 0.163117
1 | 0.999282 | 1/6 | 0.167204
1.125 | 1.123883 | 2/3 | 0.659684
1.125 | 1.123883 | 2/3 | 0.662008
2 | 1.998575 | 1 | 0.997413
2 | 1.992823 | 1 | 0.997286
2 | 1.996882 | 7/6 | 1.163503
2 | 1.994090 | 7/6 | 1.163503
Effective Hamiltonians — In Fig. 3, we have plotted the nonnegative fixed point tensors.
Conclusion and Outlook — We have proposed a manifestly nonnegative tensor network renormalization algorithm to coarse-grain classical partition functions in real space. Our results provide additional evidence that tensor network renormalization techniques yield a controlled approximation, introducing the freedom required to approximate the relevant physics at larger length scales using effective interactions among effective degrees of freedom that are determined variationally. By restricting to nonnegative tensors, our work provides a bridge between heuristic block-spin prescriptions and modern tensor network approaches to coarse-graining.
Further improvement of the numerical results should be possible by taking lattice and internal symmetries into account and by improving the control on the gauge freedom. Due to the algorithm’s formulation in terms of periodic MPS, we expect that the interplay with well-established theoretical and numerical MPS and MPO results will be of great importance in this regard. A generalization of our scheme to the quantum case is possible by constructing sequences of completely positive maps acting on projected-entangled pair states (PEPS) wave functions Verstraete and Cirac (2004). Another application would be to incorporate the formalism of MPO algebras Bultinck et al. (2017) in order to put topological restrictions on the CFT data extracted from tensor network renormalization Aasen et al. (2016); Hauru et al. (2016).
Acknowledgements — M.B. would like to thank L. Vanderstraeten, D. Williamson and S. Yang for discussions. This work is supported by an Odysseus grant from the FWO, a PostDoc grant from the FWO (J.H.), ERC grants QUTE and ERQUAF, and the EU grant SIQS.

Appendix A Nonnegative matrix factorization
A.1 Statement of the problem
A nonnegative matrix is a matrix in which all elements are equal to or greater than zero. Given a nonnegative matrix $M$ and a factorization rank $r$, the problem of nonnegative matrix factorization (NMF) is then to find a matrix decomposition $M \approx XY$, where $X$ and $Y$ are both nonnegative matrices as well. We can reformulate this problem as the following optimization problem:
$$\min_{X \geq 0,\, Y \geq 0} \|M - XY\|_F^2, \qquad (6)$$
where $\|\cdot\|_F$ denotes the Frobenius norm. Note that, without the nonnegativity constraints, the optimal solution to Eq. (6) is obtained via the singular value decomposition (SVD) of $M$.
It is clear that NMF is not unique in general because we can always insert an invertible matrix $Q$ and its inverse such that the matrix product remains invariant,
$$M \approx XY = XQQ^{-1}Y = X'Y'. \qquad (7)$$
If the two matrices $X' = XQ$ and $Y' = Q^{-1}Y$ are again nonnegative, they represent an equivalent parametrization of the same factorization $(X,Y)$. The requirements $X' \geq 0$ and $Y' \geq 0$ are surely satisfied if $Q$ is a nonnegative monomial matrix $Q = PD$, where $P$ is a permutation matrix and $D$ an invertible diagonal matrix containing only positive diagonal elements. More generally, there might also exist equivalent parametrizations with $X' \geq 0$ and $Y' \geq 0$ where $Q$ is not a monomial matrix, which can potentially spoil the uniqueness in a more severe way. Note that the smallest possible rank $r$ for which an exact factorization $M = XY$ exists is the nonnegative rank of $M$, denoted with $\mathrm{rank}_+(M)$. It satisfies $\mathrm{rank}(M) \leq \mathrm{rank}_+(M) \leq \min(m,n)$, and is defined as the smallest number of nonnegative vectors such that every column of $M$ can be written as a nonnegative linear combination of those vectors. When assuming $r = \mathrm{rank}_+(M)$, a given exact factorization of $M$ can be said to be unique if $M = X'Y'$ implies $X' = XPD$ and $Y' = (PD)^{-1}Y$, i.e. if the only ambiguity of the factorization can be completely captured in terms of permutation and scaling matrices as defined above Gillis (2012).
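For concreteness, the monomial gauge freedom of Eq. (7) is easy to check numerically; the snippet below (our illustration) verifies that a permutation-times-positive-diagonal $Q$ preserves both the product and the nonnegativity of the factors:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((5, 3))
Y = rng.random((3, 4))

# Nonnegative monomial matrix Q = P D: permutation times positive diagonal.
P = np.eye(3)[[2, 0, 1]]
D = np.diag([0.5, 2.0, 3.0])
Q = P @ D
Q_inv = np.linalg.inv(Q)   # also nonnegative: Q^{-1} = D^{-1} P^T

Xp = X @ Q                 # equivalent factor pair (X', Y')
Yp = Q_inv @ Y
```

The product $X'Y'$ equals $XY$ exactly, while both new factors remain element-wise nonnegative, illustrating why such gauges have to be fixed separately.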
A.2 Intuition
To understand why NMF is popular in the machine learning community, assume for instance that each $m$-dimensional column vector of $M$ contains an element of a set of data. Finding $X$ and $Y$ so that $XY$ approximates $M$ as accurately as possible then corresponds to extracting $r$ features that capture latent properties of the dataset. Indeed, given the nonnegativity constraints, each element of the dataset is approximately reconstructed by summing over the basis elements in the columns of $X$ with coefficients given by the columns of $Y$, yielding a representation of the data which is a sum of distinctive parts. Applications include, but are not limited to, image processing, facial feature extraction, text mining and document classification, bioinformatics, recommender systems, clustering problems and spectral data analysis. Note however that, in general, the lack of uniqueness alluded to in Eq. (7) can be troublesome when the goal is to actually attribute significance to these emerging basis elements. For this reason, the uniqueness of NMF is closely linked to whether the numerically found features are really the only sensible interpretation of the data Donoho and Stodden (2004); Huang et al. (2014).
A.3 Algorithms
In practice, the optimization problem Eq. (6) has been shown to be NP-hard Vavasis (2010), and all available algorithms are only guaranteed to converge to a local optimum. The algorithm that kickstarted NMF developments was Lee and Seung’s multiplicative update rule Seung and Lee (1999),
$$Y \leftarrow Y \circ (X^T M) \oslash (X^T X Y), \qquad (8)$$
$$X \leftarrow X \circ (M Y^T) \oslash (X Y Y^T), \qquad (9)$$
where $\circ$ and $\oslash$ denote Hadamard product and division respectively. It is an extremely simple alternating algorithm that updates the matrices element-wise, but has a rather slow convergence rate. Numerous variations and extensions have since been developed, and we refer the interested reader to Refs. Wang and Zhang (2012); Gillis (2014). In practice, we supplemented these algorithms by implementing a projected conjugate gradient approach (see Algorithm 1) to improve solutions or convergence if required. Note that there is no agreed-upon convergence criterion for NMF optimization, so in practice one is free to implement a strategy that takes into account cost function values, gradient norms, projected gradient norms, local tolerances, global tolerances, and the maximum number of iterations.
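The multiplicative updates of Eqs. (8)-(9) fit in a few lines of code. The sketch below is a generic version (the small regularizer in the denominators is our own implementation detail, not taken from the text):

```python
import numpy as np

def nmf_multiplicative(M, r, n_iter=2000, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates for min ||M - X Y||_F with X, Y >= 0."""
    rng = np.random.default_rng(seed)
    X = rng.random((M.shape[0], r)) + eps
    Y = rng.random((r, M.shape[1])) + eps
    for _ in range(n_iter):
        Y *= (X.T @ M) / (X.T @ X @ Y + eps)   # Hadamard product and division
        X *= (M @ Y.T) / (X @ Y @ Y.T + eps)
    return X, Y

# Demo: factorize an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(42)
M = rng.random((6, 2)) @ rng.random((2, 5))
X, Y = nmf_multiplicative(M, 2)
rel_err = np.linalg.norm(M - X @ Y) / np.linalg.norm(M)
```

The updates manifestly preserve nonnegativity because every factor entering them is nonnegative; convergence is monotone in the cost but slow, which is why the text supplements them with a projected conjugate gradient method.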
Another important aspect of NMF optimization is the choice of initialization. Starting from random nonnegative matrices surely is an option, but as NMF algorithms are local minimization algorithms, the choice of initial conditions can be crucial to the quality of the resulting local minimum and the speed of convergence. For our purposes, we favoured a semi-deterministic NNDSVD initialization based on the best rank-$r$ approximation of $M$ given by the SVD Boutsidis and Gallopoulos (2008). The initialization works by first calculating the $r$ largest singular values and vectors of $M$, i.e. $M_r = \sum_{j=1}^{r} u_j v_j^T$, where the singular values appear in descending order and have been absorbed in the vectors $u_j$ and $v_j$. Each rank-one term generally contains positive and negative values (apart from the dominant term due to Perron-Frobenius if the largest singular value is non-degenerate). A sensible nonnegative initialization is then obtained by replacing each term $u_j v_j^T$, for $j > 1$, with the nonnegative outer product built from either the positive or the negative parts of $u_j$ and $v_j$, depending on whichever has larger norm. The zero elements can be filled with small random values if need be.
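A sketch of the NNDSVD construction described above (following Boutsidis and Gallopoulos; details such as the handling of vanishing parts are our own choices):

```python
import numpy as np

def nndsvd_init(M, r):
    """NNDSVD initialization: nonnegative (X, Y) built from the r leading
    singular triplets of a nonnegative matrix M."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    m, n = M.shape
    X = np.zeros((m, r))
    Y = np.zeros((r, n))
    # Leading singular vectors of a nonnegative M can be chosen nonnegative
    # (Perron-Frobenius), up to an overall sign.
    X[:, 0] = np.sqrt(S[0]) * np.abs(U[:, 0])
    Y[0, :] = np.sqrt(S[0]) * np.abs(Vt[0, :])
    for j in range(1, r):
        u, v = U[:, j], Vt[j, :]
        up, un = np.clip(u, 0, None), np.clip(-u, 0, None)
        vp, vn = np.clip(v, 0, None), np.clip(-v, 0, None)
        mp = np.linalg.norm(up) * np.linalg.norm(vp)
        mn = np.linalg.norm(un) * np.linalg.norm(vn)
        # Keep the signed part of u_j v_j^T with the larger norm.
        if mp >= mn and mp > 0:
            X[:, j] = np.sqrt(S[j] * mp) * up / np.linalg.norm(up)
            Y[j, :] = np.sqrt(S[j] * mp) * vp / np.linalg.norm(vp)
        elif mn > 0:
            X[:, j] = np.sqrt(S[j] * mn) * un / np.linalg.norm(un)
            Y[j, :] = np.sqrt(S[j] * mn) * vn / np.linalg.norm(vn)
    return X, Y

# Demo on a random nonnegative matrix.
rng = np.random.default_rng(7)
M = rng.random((8, 6))
X0, Y0 = nndsvd_init(M, 3)
```

The result is deterministic (up to SVD sign conventions) and already captures the dominant structure of $M$, which typically speeds up the subsequent local optimization considerably compared to random starts.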
Additionally, it can be convenient to fix the monomial gauge freedom of Eq. (7). This can be done by calculating the row vector $x$ containing the sum of each column of $X$ and the column vector $y$ containing the sum of each row of $Y$. We then insert the identity twice so that $XY = (X D_x^{-1})(D_x D_y)(D_y^{-1} Y)$, with $D_x = \mathrm{diag}(x)$ and $D_y = \mathrm{diag}(y)$. After sorting the values on the diagonal of $D_x D_y$ in descending order, we permute the columns of $X$ and the rows of $Y$ accordingly.
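One concrete way to implement this normalization (our reading of the prescription, so treat the details as an assumption) is to pull the column sums of $X$ and row sums of $Y$ into a central diagonal matrix, sort it, and absorb its square root into both factors:

```python
import numpy as np

def fix_monomial_gauge(X, Y):
    """Fix the scaling-and-permutation gauge of a nonnegative pair (X, Y):
    afterwards the column sums of X equal the row sums of Y and appear in
    descending order, while the product X @ Y is unchanged."""
    x = X.sum(axis=0)                # column sums of X
    y = Y.sum(axis=1)                # row sums of Y
    lam = x * y                      # central diagonal D_x D_y
    perm = np.argsort(lam)[::-1]     # sort in descending order
    sq = np.sqrt(lam[perm])
    Xg = (X / x)[:, perm] * sq       # absorb sqrt of the central weights
    Yg = sq[:, None] * (Y / y[:, None])[perm, :]
    return Xg, Yg

# Demo.
rng = np.random.default_rng(3)
X, Y = rng.random((5, 3)), rng.random((3, 4))
Xg, Yg = fix_monomial_gauge(X, Y)
```

After the transformation the remaining gauge freedom is reduced to permutations among degenerate central weights, which makes factorizations obtained at different RG steps directly comparable.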
Appendix B Details on the TNR implementation
B.1 Nonnegative tensor factorization
To coarse-grain a 2D bipartite lattice made up of two rank-four site tensors in a manifestly nonnegative way, we will need to extend the nonnegative matrix factorization described above to nonnegative tensor factorization (NTF). Indeed, as pointed out in the main text and previously in Ref. Yang et al. (2017), we can interpret a block of four adjacent sites as a four-site periodic matrix product state (MPS) by reinterpreting the site tensors as rank-three tensors after grouping the “physical indices” as follows
(10)
where the remaining indices have become “virtual indices” and are summed over in the matrix products. In the first step of the coarse-graining process, we construct an ansatz to approximate this block with a “rotated block”, represented again by a ring of four sites with different rank-four tensors of appropriate physical and virtual MPS dimensions. After grouping the physical indices, we again obtain a periodic MPS representation of the block of sites. The cost function for the local approximation is then given by the constrained MPS overlap,
(11)
or, after matching indices explicitly,
(12)
(13)
Note that the original block and the “rotated” block have different tensor product structures. It is this breaking apart of the tensor product structure at each step which we believe to be a crucial feature of the success of all TRG-inspired methods.
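The overlaps entering cost functions like Eq. (11) reduce to traces of products of site transfer matrices. A generic sketch for real periodic MPS (our illustration; the tensor index order (physical, left, right) is an assumption):

```python
import itertools
import numpy as np

def periodic_mps_overlap(As, Bs):
    """<psi_B|psi_A> for two real periodic MPS given as lists of tensors
    with index order (physical, left, right)."""
    E = None
    for A, B in zip(As, Bs):
        # Site transfer matrix E_i = sum_s A_i^s (x) B_i^s.
        Ei = np.einsum('sij,skl->ikjl', A, B)
        Ei = Ei.reshape(A.shape[1] * B.shape[1], A.shape[2] * B.shape[2])
        E = Ei if E is None else E @ Ei
    return np.trace(E)

# Demo: norm squared of a random 4-site periodic MPS, checked by brute force.
rng = np.random.default_rng(0)
As = [rng.random((2, 3, 3)) for _ in range(4)]
norm2 = periodic_mps_overlap(As, As)
brute = sum(
    np.trace(As[0][s0] @ As[1][s1] @ As[2][s2] @ As[3][s3]) ** 2
    for s0, s1, s2, s3 in itertools.product(range(2), repeat=4)
)
```

Because the physical sum is performed site by site inside the transfer matrices, the cost of evaluating the overlap scales with the bond dimension rather than exponentially in the number of sites.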
B.2 Sweeping projected conjugate gradient
The cost function Eq. (11) can be optimized using a generalization of the alternating inner loops of the PCGNMF procedure explained in Appendix A by reformulating the problem in terms of matrices. This is possible, as the cost function is equal to
(14)
Now assume we want to optimize one of the four tensors, keeping the other three fixed. Reshaping that tensor to a matrix and writing the gradient with respect to it as the following matrix,
(15)
we have all ingredients to implement an alternating projected conjugate gradient method to sweep over all tensors.
One way to initialize the tensors is by constructing a tensor renormalization group (TRG) Levin and Nave (2007) solution. If we use PCGNMF to find the rank-$r$ nonnegative decompositions of the two reshaped site tensors,
(16)
our initialization looks like
(17)
If we regard this particular initialization as a nonnegative TRG solution, we observe numerically that the local approximation error can be made significantly smaller than that of the initial solution. Just as MERA-TNR Evenbly and Vidal (2015a) and Loop-TNR Yang et al. (2017), our TNR algorithm is capable of systematically improving upon TRG. We also observed that the error does not keep on increasing at criticality, but remains approximately constant for a prolonged number of iterations. Off criticality, the error decreases quickly because the tensors flow to a simple fixed point encoding a trivial Hamiltonian.
B.3 Coarse-graining
From the optimized tensors, we can immediately construct the tensors of the coarse-grained lattice. For the general case under consideration where we impose neither lattice nor reflection symmetries, there is an ambiguity in constructing the new tensors of the next layer. We refer to Appendix C for an explanation as to why the “plaquette” grouping of the tensors mentioned in the main text is flawed. The “vertex” grouping, depicted below,
(18)
can either be done by identifying the new tensors (and in this way choosing the orientation of the tilted lattice) counter-clockwise (a) or clockwise (b), which is relevant for the construction of the radial transfer matrix MPO in Appendix F.2.
B.4 A natural gauge choice for periodic nonnegative matrix product states
When implementing MPS optimization algorithms, choosing a canonical gauge is important both for manifestly revealing the entanglement content of a state as well as stabilizing the optimization through better conditioning of the matrices involved. Our TNR algorithm can also benefit from fixing the nonnegative monomial gauge freedom of Eq. (7) by providing a sensible basis for truncation purposes (see Appendix C), and by aiding in the recovery of explicit scale invariance (see Appendix E). We will now describe a constructive way to fix the gauge freedom for a ring of nonnegative MPS tensors.
Without loss of generality, consider a ring of four sites with rank-four tensors of given physical and virtual MPS dimensions, so that the periodic nonnegative MPS is given by
(19)
Let us now cut the bond connecting the last and first tensors of the ring, and sum over all physical indices of the resulting tensor, which yields a nonnegative matrix $M$ on the virtual level. We then find the diagonal matrices $D_1$ and $D_2$ such that the matrix $D_1 M D_2$ becomes doubly stochastic, i.e. $D_1 M D_2 \mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T D_1 M D_2 = \mathbf{1}^T$, where $\mathbf{1}$ denotes a vector of ones. Denoting the diagonals of $D_1$ and $D_2$ respectively as vectors $x$ and $y$, we can find these fixed point solutions by iterating
$$x \leftarrow \mathbf{1} \oslash (M y), \qquad y \leftarrow \mathbf{1} \oslash (M^T x),$$
which converges quickly as long as $M$ contains sufficiently many nonzero elements Knight (2008). We then substitute the identity twice on the bond that was cut, absorb $D_1$ and $D_2$ into the two adjacent tensors, and obtain a central diagonal matrix $\Lambda$. After sorting the diagonal elements of $\Lambda$ in descending order (which yields a permutation $P$), we can check for small values relative to the largest value and truncate up to some tolerance (by means of an isometry $V$). In the end, we arrive at the following matrices to be absorbed into the two adjacent tensors respectively,
(20)
(21)
where the matrix $V$ is just the identity if there is no truncation or implicit truncation by setting the small singular values to zero, and an isometry onto the subspace that is retained if there is explicit truncation. Notice how $\Lambda$ gives us a nonnegative analogue of Schmidt values in the MPS case. The above gauge fixing can be repeated independently for all other bonds by permuting the tensors accordingly.
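The doubly stochastic balancing can be sketched with the Sinkhorn-Knopp iteration cited above (Knight 2008); the fixed-point loop below is our own minimal version for a square, element-wise positive matrix:

```python
import numpy as np

def sinkhorn_balance(M, n_iter=500):
    """Find positive vectors x, y such that diag(x) @ M @ diag(y) is
    doubly stochastic (square, element-wise positive M assumed)."""
    x = np.ones(M.shape[0])
    y = np.ones(M.shape[1])
    for _ in range(n_iter):
        x = 1.0 / (M @ y)      # enforce unit row sums
        y = 1.0 / (M.T @ x)    # enforce unit column sums
    return x, y

# Demo.
rng = np.random.default_rng(5)
M = rng.random((6, 6)) + 0.1
x, y = sinkhorn_balance(M)
S = np.diag(x) @ M @ np.diag(y)
```

For a strictly positive matrix the iteration converges linearly, so a modest number of sweeps suffices to reach machine precision in practice.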
Appendix C Corner double line tensors and entanglement filtering
Corner double line tensors are a pathological case of non-critical fixed points of the TRG flow in the space of tensors. As argued in the main text, all TNR approaches (including the TEFR thanks to its entanglement filtering pre-processing step Gu and Wen (2009)) are capable of removing CDL tensors because they surround a block of sites with a coarse-graining operation that can in principle detect correlations inside the block. Indeed, one can judiciously construct coarse-graining tensors that eliminate short-range correlations with a particular CDL structure Evenbly and Vidal (2015a), which for TNR proceeds as follows,
(22)
The vertex grouping groups the plaquettes which do not contain a loop, which results in a product state that can be approximated in the next iteration with an MPS. In contrast, grouping the plaquettes that contain loops reinstates corner double line tensors.
The above considerations however do not imply that numerical algorithms built on this premise will act accordingly, since CDL configurations are still local minima of the cost function and fixed points of the RG flow. It is important to mention that, in the ideal case above, the presence of CDL correlations is reflected in the degeneracies of the Schmidt values of the MPS. In practice however, there is no obvious way to detect these (approximate) tensor product structures inside the virtual bond and, numerically, there is often no structure to be inferred at all if local corner tensors contain non-degenerate eigenvalues. One way to deal with this is by monitoring the Schmidt values on the bonds of the ring Eq. (19) using the gauge fixing described in Appendix B.4 to filter local correlations, in a similar way to the tensor entanglement filtering step for TEFR and Loop-TNR discussed in Refs. Gu and Wen (2009); Yang et al. (2017). Reformulated in conventional MPS language: entanglement filtering truncates the virtual dimension of a periodic MPS (which here describes a block of sites of a classical 2D lattice model), thereby removing the short-range correlations “inside” the block of sites. Note that a similar kind of truncation in MERA-TNR corresponds to alternating bond dimensions every step and inserting multiple optimized isometries at different stages in the actual implementation of the algorithm to reduce the intermediate bond dimensions and steer the optimization towards a preferred local minimum Evenbly (2017). Another possible strategy, which requires no gauging, would be to maximize the overlap of a four-site periodic MPS containing irrelevant local details with a different MPS with a lower bond dimension, and accept the lower-dimensional one if the fidelity is high enough. This can be done for nonnegative MPS with the algorithm described in Appendix B.1.
Note that entanglement filtering has no effect for critical systems as there is in general no truncation possible due to the slowly decaying distribution of the Schmidt values (recall that the exact critical fixed point would require an infinite bond dimension), but by sorting diagonal values the permutations on the virtual indices can still be fixed.
Appendix D Symmetries and tensor network renormalization
A straightforward application of the insight of Yang et al. (2017) that it is worthwhile to model blocks of sites with periodic matrix product states (MPS) is given by the fundamental theorem of MPS Pérez-García et al. (2008); Cirac et al. (2017) and its use in relating symmetries on the physical level to those on the virtual level. Consider an on-site symmetry operator acting on all sites, e.g. the flip operator
$$F = \sigma^x \otimes \sigma^x \otimes \cdots \otimes \sigma^x \qquad (23)$$
for the $\mathbb{Z}_2$ symmetry of the Ising model. Invariance under the action of the symmetry implies that the transformed MPS should have an overlap with the original state that has modulus one. As such, the mixed transfer matrix of the original and the transformed state must have a dominant eigenvector with eigenvalue of modulus one. It can then be shown that the effect of this statement is that the action of the symmetry on the physical level can be pushed through to the virtual level, up to a phase, which amounts to having a projective representation of the symmetry on the virtual level in the Schmidt basis. Now recall from the main text that the physical and virtual dimensions switch roles every RG iteration. This implies that the symmetry action on the virtual level becomes the action on the physical level of the next iteration, which in turn can be pushed through. In this way, the representation of the symmetry operator can be tracked throughout the entire coarse-graining network.
Appendix E Approximate scale invariance
As demonstrated in the main text, the TNR algorithm yields tensors which correspond to approximate fixed points of the RG equations, and are approximately scale invariant at criticality. By observing gauge-invariant quantities, such as the eigenvalues of the linear transfer matrix (see Appendix F.1), it is clear that the fixed point tensors are implicitly approximately scale invariant and remain so for a large number of iterations. To recover explicit approximate scale invariance at the level of the individual tensor elements however, we need to fix the gauge freedom of the partition function across different scales. Note that the partition function written in terms of and remains invariant under the following transformations,
(24) [tensor network diagram: gauge matrices and their inverses inserted on the links between neighboring tensors]
One simple way to achieve approximate scale invariance during the optimization itself is to add additional constraints to the cost function Eq. (13). We can introduce a small penalty term for each individual tensor by adding
\(\varepsilon\,\big\lVert T^{(s)} - T^{(s-2)} \big\rVert^2\) (25)
to the cost function. This modified cost function favors solutions which stay close to the previous equivalent solutions, i.e. those of the even or odd iterations connecting lattices of the same orientation, which in turn renders the respective tensors of the even and odd iterations approximately explicitly scale invariant as well.
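A toy illustration of such a penalty term (hypothetical matrices \(M\), \(S\), \(U_\text{prev}\) standing in for the actual tensors; the real cost function Eq. (13) is different): adding \(\varepsilon\lVert U - U_\text{prev}\rVert^2\) to a quadratic fit turns the update into a ridge-type least-squares problem that biases the new factor towards the solution found at the previous equivalent iteration.

```python
import numpy as np

# Hypothetical update: fit M ≈ U @ S while penalizing the distance of U
# to the factor U_prev obtained two iterations earlier.
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))
S = rng.standard_normal((3, 4))
U_prev = rng.standard_normal((6, 3))
eps = 0.1   # small penalty strength

# min_U ||M - U S||^2 + eps ||U - U_prev||^2 is solved by stacking the
# penalty as extra rows of the least-squares system.
lhs = np.vstack([S.T, np.sqrt(eps) * np.eye(3)])
rhs = np.vstack([M.T, np.sqrt(eps) * U_prev.T])
U = np.linalg.lstsq(lhs, rhs, rcond=None)[0].T
```

For \(\varepsilon \to 0\) this reduces to the unconstrained fit, while a larger \(\varepsilon\) pins the solution to its previous value, which is the trade-off exploited above.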
Appendix F Conformal data from tensor networks
F.1 Linear transfer matrix
Scaling dimensions of the conformal field theory (CFT) underlying a critical partition function can be extracted directly from its tensor network representation by constructing the linear transfer matrix Gu and Wen (2009). At every iteration step, we can construct effective row-to-row transfer matrices,
(26) [tensor network diagram: effective row-to-row transfer matrices built from the coarse-grained tensors]
whose gauge-invariant eigenvalues can be directly related to the scaling dimensions of the primary operators and descendants of the CFT. The leading contribution to the partition function (ignoring non-universal finite-size corrections) on a torus of size \(L_x \times L_y\) is given by
\(Z \simeq e^{-f L_x L_y} \sum_\alpha e^{-\frac{2\pi L_y}{L_x}\left(\Delta_\alpha - \frac{c}{12}\right)}\), (27)
where the non-universal contribution, governed by the free energy density \(f\), can be taken care of in the tensor network representation by properly normalizing the tensors, and \(\Delta_\alpha\) and \(c\) are respectively the scaling dimensions and the central charge. We can then write the partition function in terms of the row-to-row transfer matrix \(T\), \(Z = \operatorname{Tr} T^{L_y}\), whose eigenvalues can be shown to be given by
\(\lambda_\alpha = e^{-\frac{2\pi}{L_x}\left(\Delta_\alpha - \frac{c}{12}\right)}\), (28)
which immediately yields numerical estimates for the scaling dimensions and the central charge, given that \(\Delta_0 = 0\) for the identity operator. Note that we have assumed the transfer matrix to be Hermitian; small deviations are to be expected numerically if no symmetries are enforced, resulting in distinct left and right eigenvectors.
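To make this extraction concrete, here is a small self-contained sketch independent of the tensor network algorithm: exact diagonalization of the critical Ising row-to-row transfer matrix on a periodic ring of \(N = 8\) sites already recovers the scaling dimensions of the identity, spin, and energy operators from ratios of eigenvalues (only the differences \(\Delta_\alpha - \Delta_0\) are accessible this way, without normalizing away the free energy density).

```python
import numpy as np
from itertools import product

beta = np.log(1 + np.sqrt(2)) / 2   # critical coupling of the 2D Ising model
N = 8                               # number of spins on the periodic ring

# All 2^N spin configurations of one row.
spins = np.array(list(product([1, -1], repeat=N)))
# Vertical bonds between two neighboring rows: sum_i s_i s'_i.
vert = spins @ spins.T
# Horizontal bonds within one row (periodic boundary conditions).
horiz = np.sum(spins * np.roll(spins, -1, axis=1), axis=1)
# Symmetric row-to-row transfer matrix.
T = np.exp(beta * vert + 0.5 * beta * (horiz[:, None] + horiz[None, :]))

lam = np.linalg.eigvalsh(T)[::-1]                    # eigenvalues, descending
deltas = N / (2 * np.pi) * np.log(lam[0] / lam[:6])  # scaling dimension gaps

print(deltas[:3])   # ~ [0, 1/8, 1]: identity, spin, and energy operators
```

The residual deviations from the exact CFT values are finite-size corrections, which shrink as \(N\) grows; in the tensor network setting the coarse-grained tensors play the role of an effectively much larger \(N\).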
As an aside, we can interpret the row-to-row transfer matrix of Eq. (26) as an infinite MPO
(29) [tensor network diagram: the row-to-row transfer matrix rewritten as an infinite MPO]
by blocking the tensors inside the dashed squares. Using the numerical MPO techniques recently developed in Ref. Haegeman and Verstraete (2017), we can then calculate the low-lying excitation spectrum of this operator directly in the thermodynamic limit in terms of MPS excitation ansätze. As is to be expected, Fig. 4 reveals a linear dispersion relation, reflecting the continuum limit of the CFT finite-size scaling results.
F.2 Radial transfer matrix
Alternatively, we can extract conformal data from the radial transfer matrix, which can be obtained by tracking the RG flow around an open impurity Evenbly and Vidal (2016). For the RG flow of TNR, we obtain after two iterations,
(30) [tensor network diagram: the coarse-graining network around an open impurity after two iterations]
where, for scale invariant systems, performing the next iteration everywhere on the block of tensors inside the rightmost bounding box gives rise to the same tensors as those obtained from the first iteration. Note that (see Appendix B.3) we used different combinations of X-tensors in constructing the new C-tensors for even and odd iterations to arrive again at the same orientation after two iterations (“rotate back-and-forth”). If we used the same contraction at each iteration, it would take eight iterations to return to the original orientation (“rotate clockwise or counter-clockwise”), which would lead to a rather impractical superoperator. For scale invariant systems, we thus end up with repeated applications of the following MPO,
(31) [tensor network diagram: the MPO acting on the impurity degrees of freedom]
which, after proper normalization, can be diagonalized to give
\(\lambda_\alpha = 2^{-\Delta_\alpha}\). (32)
Note that here again we have assumed the MPO to be Hermitian; small deviations are to be expected numerically if no symmetries are enforced, resulting in distinct left and right eigenvectors. If the gauge freedom has been fixed across RG steps, the eigenvalues and eigenvectors are found to yield respectively the scaling dimensions and approximate lattice representations of the primary fields and descendants. Indeed, for this construction to work, it is crucial to fix the gauge (see Appendix E), or else the degrees of freedom we deem equivalent do not match due to the different local gauges of Eq. (24).
Footnotes
- “[…] the more recent tensor-style work often employs indices which are summed over hundreds of values, each representing a sum of configurations of multiple spinlike variables. All these indices are generated and picked by the computer. The analyst does not and cannot keep track of the meaning of all these variables. Therefore, even if a fixed point were generated, it would not be very meaningful to the analyst. In fact, the literature does not seem to contain much information about the values and consequences of fixed points for the new style of renormalization” Efrati et al. (2014)
- Given the nature of the problem, one might expect an -norm. Tackling the -norm optimization problem is, however, not economical in practice due to the large number of constraints, hence the relaxation to a smooth optimization.
- Note that all these boundary tensor networks are but different low-rank tensor network approximations of the leading eigenvector of the transfer matrix written as a matrix product operator (MPO) Haegeman and Verstraete (2017).
- Note that by working with symmetric tensor networks, we can also extract CFT data of non-local fields if we modify Eq. (3) to include a matrix product operator (MPO) threading through the transfer matrix (which encodes the anti-periodic boundary conditions and reduces to just a string of matrices for the tensor product symmetry considered in Ref.Evenbly and Vidal (2016)). Similarly, the algebraic fixed point equations Eq. (4) can be modified to include an additional MPO symmetry string.
- Eventually though, the accumulated truncation errors act as a relevant perturbation steering the flow away from criticality.
- We have only plotted one of the tensors, but the behavior of the other tensors is very similar. See also Appendix C.
- A comprehensive analysis of the nature of these effective Hamiltonians will be reported elsewhere M. Bal et al. (unpublished).
- Although one might be tempted to extend the domain of the coarse-graining operation to even bigger blocks, there is of course a numerical trade-off between the locality of a coarse-graining scheme and the bond dimension that can be attained in practice.
References
- L. P. Kadanoff, Physics 2, 263 (1966).
- L. P. Kadanoff, Physical Review Letters 34, 1005 (1975).
- L. P. Kadanoff, A. Houghton, and M. C. Yalabik, Journal of Statistical Physics 14, 171 (1976).
- K. G. Wilson, Reviews of Modern Physics 47, 773 (1975).
- M. Levin and C. P. Nave, Physical Review Letters 99, 120601 (2007).
- Z.-C. Gu, M. Levin, and X.-G. Wen, Physical Review B 78, 205116 (2008).
- Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng, and T. Xiang, Phys. Rev. Lett. 103, 160601 (2009).
- H. H. Zhao, Z. Y. Xie, Q. N. Chen, Z. C. Wei, J. W. Cai, and T. Xiang, Phys. Rev. B 81, 174411 (2010).
- Z. Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, Phys. Rev. B 86, 045139 (2012).
- G. Evenbly and G. Vidal, Physical Review Letters 115, 180405 (2015a).
- G. Evenbly and G. Vidal, Phys. Rev. Lett. 115, 200401 (2015b), arXiv:1502.05385.
- G. Evenbly, Phys. Rev. B 95, 045117 (2017).
- G. Evenbly and G. Vidal, Phys. Rev. Lett. 116, 040401 (2016), arXiv:1510.00689.
- S. Yang, Z.-C. Gu, and X.-G. Wen, Phys. Rev. Lett. 118, 110504 (2017).
- See Appendices for details on nonnegative matrix factorization, numerical implementations, entanglement filtering, symmetries, approximate scale invariance, and extracting conformal data from tensor networks, which includes Refs. Gillis (2012); Donoho and Stodden (2004); Huang et al. (2014); Vavasis (2010); Seung and Lee (1999); Wang and Zhang (2012); Gillis (2014); Boutsidis and Gallopoulos (2008); Knight (2008); Pérez-García et al. (2008); Cirac et al. (2017).
- M. Bal et al., (unpublished).
- Z.-C. Gu and X.-G. Wen, Physical Review B 80, 155131 (2009), arXiv:0903.1069.
- F. Verstraete and J. I. Cirac, (2004), arXiv:cond-mat/0407066 .
- N. Bultinck, M. Mariën, D. Williamson, M. Şahinoğlu, J. Haegeman, and F. Verstraete, Annals of Physics 378, 183 (2017).
- D. Aasen, R. S. K. Mong, and P. Fendley, Journal of Physics A: Mathematical and Theoretical 49, 354001 (2016), arXiv:1601.07185 .
- M. Hauru, G. Evenbly, W. W. Ho, D. Gaiotto, and G. Vidal, Phys. Rev. B 94, 115125 (2016).
- E. Efrati, Z. Wang, A. Kolan, and L. P. Kadanoff, Reviews of Modern Physics 86, 647 (2014).
- N. Gillis, The Journal of Machine Learning Research 13, 3349 (2012).
- D. Donoho and V. Stodden, Proc. Advances in Neural Information Processing Systems 16 , 1141 (2004), arXiv:1512.00567 .
- K. Huang, N. D. Sidiropoulos, and A. Swami, IEEE Transactions on Signal Processing 62, 211 (2014).
- S. A. Vavasis, SIAM Journal on Optimization 20, 1364 (2010).
- H. S. Seung and D. D. Lee, Nature 401, 788 (1999).
- Y.-X. Wang and Y.-J. Zhang, IEEE Transactions on Knowledge and Data Engineering 25, 1 (2012).
- N. Gillis, in Regularization, Optimization, Kernels, and Support Vector Machines, edited by M. S. J.A.K. Suykens and A. Argyriou (Chapman & Hall/CRC, 2014).
- C. Boutsidis and E. Gallopoulos, Pattern Recognition 41, 1350 (2008).
- P. A. Knight, SIAM J. Matrix Anal Appl. 30, 261 (2008).
- D. Pérez-García, M. M. Wolf, M. Sanz, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 100, 167202 (2008).
- J. I. Cirac, D. Pérez-García, N. Schuch, and F. Verstraete, Annals of Physics 378, 100 (2017).
- J. Haegeman and F. Verstraete, Annual Review of Condensed Matter Physics 8, 355 (2017).