Gauge fixing, canonical forms and optimal truncations in tensor networks with closed loops


Glen Evenbly Département de Physique and Institut Quantique, Université de Sherbrooke, Québec, Canada glen.evenbly@usherbrooke.ca
July 12, 2019
Abstract

We describe an approach to fix the gauge degrees of freedom in tensor networks, including those with closed loops, which allows a canonical form for arbitrary tensor networks to be realised. Additionally, a measure for the internal correlations present in a tensor network is proposed, which quantifies the extent of resonances around closed loops in the network. Finally we describe an algorithm for the optimal truncation of an internal index from a tensor network, based upon proper removal of the redundant internal correlations. These results, which offer a unified theoretical framework for the manipulation of tensor networks with closed loops, can be applied to improve existing tensor network methods for the study of many-body systems and may also constitute key algorithmic components of sophisticated new tensor methods.

pacs:
05.30.-d, 02.70.-c, 03.67.Mn, 75.10.Jm

I Introduction

Tensor network methods [TN1, TN2] have proven to be exceptionally useful in the study of quantum many-body systems and, more recently, have also found a diverse range of applications in areas such as quantum chemistry [Chem1, Chem2], machine learning [Mach1, Mach2, Mach3], and holography [Holo1, Holo2, Holo3, Holo4]. In the context of many-body systems, tensor networks circumvent the exponential growth of Hilbert space with system size by allowing a quantum many-body wavefunction to be expressed as a product of many small tensors. As exemplified by White's density matrix renormalization group (DMRG) algorithm [DMRG1, DMRG2, DMRG3], which is based on matrix product states [MPS1, MPS2] (MPS), and by more recently developed algorithms such as those based on projected entangled pair states [PEPS1, PEPS2, PEPS3] (PEPS) and the multi-scale entanglement renormalization ansatz [MERA1, MERA2, MERA3, MERA4] (MERA), tensor networks can potentially allow many-body systems to be accurately addressed directly in the thermodynamic limit.

Key to the formulation of DMRG, by far the most widely established tensor network method, is the singular value decomposition (SVD), also called the Schmidt decomposition in the context of quantum information theory [Neilson]. The Schmidt decomposition is an integral part of the DMRG algorithm as it allows (i) the gauge degrees of freedom in an MPS to be fixed, in turn leading to the notion of a canonical form for MPS, and (ii) the internal indices of an MPS to be truncated in an optimal way. Use of the Schmidt decomposition can also be extended to arbitrary loop-free (or acyclic) tensor networks, generically referred to as tree tensor networks [TTN1, TTN2] (TTN), of which MPS is a particular instance. However, for tensor networks that contain closed loops, such as PEPS or MERA, the Schmidt decomposition is no longer applicable. Thus there does not exist a well-defined canonical form for networks with closed loops, nor is it easy to truncate internal indices in an optimal manner. These difficulties have been a major stumbling block in the development of tensor network algorithms for quantum systems in two or higher dimensions, such as those based on PEPS.

In this manuscript we present, for arbitrary tensor networks including those with closed loops (also called cyclic networks), (i) a method of fixing the gauge degrees of freedom, related to a generalization of the Schmidt decomposition, (ii) a means of quantifying the extent of internal correlations through closed loops and (iii) an algorithm for optimally truncating internal indices, which can potentially remove the redundant internal correlations from the network description. We demonstrate that the proposed method of fixing the gauge degrees of freedom has the same uniqueness properties as the Schmidt decomposition, such that it may be used to define a canonical form for arbitrary tensor networks. As many commonly used tensor network optimization schemes rely on the accurate truncation of network indices, result (iii) could have substantial application across a variety of state-of-the-art tensor network algorithms. For instance, in methods for the renormalization of tensor networks [TRG1-TRG7], the proper removal of internal correlations from within closed loops was demonstrated to be of key importance with the advent of tensor network renormalization [TNR1-TNR4] (TNR) and related approaches [TNRc1-TNRc5]. It follows that the proposal for removing internal correlations presented here may also be applied as a core part of a TNR-like numerical method for the simulation of many-body systems.

This manuscript is organised as follows. First we refresh basic notions of gauge freedom in tensor networks, before introducing the concept of a bond environment. We then present an algorithm for fixing the gauge of an index, and numerically demonstrate that it converges to a unique gauge in a general network. The concept of internal correlations in cyclic networks is then discussed, including the proposal of a measure to quantify them. An algorithm for truncation of internal indices is then proposed, and subsequently demonstrated to be effective in the removal of internal correlations from cyclic networks. Finally, we discuss some of the applications of the methods presented.

II Tensor networks

We consider a network composed of a set of tensors, as depicted in Fig. 1, and distinguish between internal indices, which each connect a pair of tensors within the network, and external indices, which each attach to only a single tensor. Additionally, for future convenience, we allow the possibility of a bond matrix σ situated on each internal index, i.e. with a bond matrix sitting between each pair of connected tensors. The bond matrices could initially be set as (trivial) identity matrices, σ = 𝟙, if desired. To each external index one associates a Hilbert space of equal dimension, such that the network can be interpreted as describing a quantum state |ψ⟩ on the tensor product of these spaces, where the product is over all external indices. An internal index is called a bridge if its removal would split the tensor network into two disconnected components (or, equivalently, an index is a bridge if and only if it is not contained in any cycle). It follows that, in a loop-free tensor network (also called an acyclic network), all internal indices are bridges. Recall that there is a gauge freedom on the internal indices of a network: introducing an arbitrary invertible matrix and its inverse on an internal index leaves the state |ψ⟩ invariant, but the network representation is changed when the pair is absorbed into the adjoining tensors, as depicted in Fig. 1(d-e).
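The gauge freedom is simplest to see in the smallest possible example, a pair of tensors joined by a single bond (a sketch with real tensors and illustrative dimensions):

```python
import numpy as np

# Two tensors joined by one internal bond: psi_{ab} = sum_i A_{ai} B_{ib}.
rng = np.random.default_rng(5)
A = rng.normal(size=(4, 3))   # external index a, bond index i
B = rng.normal(size=(3, 4))   # bond index i, external index b
g = rng.normal(size=(3, 3))   # any invertible matrix on the bond

# absorbing g and its inverse into the adjoining tensors changes the
# individual tensors but leaves the represented state untouched
A2, B2 = A @ g, np.linalg.inv(g) @ B
assert np.allclose(A @ B, A2 @ B2)
```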

Before we discuss a means by which to fix the gauge degrees of freedom, it is useful to introduce the concept of a bond environment. For any internal index, the bond environment E is the four-index tensor defined through contraction of the network for ⟨ψ|ψ⟩ while leaving the indices connected to the associated bond matrix σ (and its conjugate) open, see Fig. 1(b-c). It follows that the scalar product ⟨ψ|ψ⟩ is obtained by contracting a bond environment with the two associated bond matrices,

\langle\psi|\psi\rangle \;=\; \sum_{i,i',j,j'} E_{i i' j j'} \, \sigma_{i j} \, \sigma^{*}_{i' j'} . \qquad (1)
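As a concrete check of Eq. (1), the following sketch (real tensors, with the index convention E_{i i' j j'} used throughout) builds the bond environment of a minimal two-tensor network and closes it with two copies of the bond matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
chi, d = 3, 4                     # bond and external dimensions (illustrative)
A = rng.normal(size=(d, chi))     # left tensor:  psi = A @ sigma @ B
B = rng.normal(size=(chi, d))     # right tensor
sigma = np.diag(rng.uniform(0.5, 1.0, size=chi))   # bond matrix

# bond environment: the network for <psi|psi> with the legs attaching to
# sigma (and to its conjugate) left open; index order E_{i i' j j'}
E = np.einsum('ai,aI,jb,Jb->iIjJ', A, A, B, B)

# Eq. (1): <psi|psi> is recovered by closing E with two copies of sigma
norm_from_E = np.einsum('iIjJ,ij,IJ->', E, sigma, sigma)
assert np.isclose(norm_from_E, np.sum((A @ sigma @ B) ** 2))
```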

Bond environments have many useful properties, including:

  1. The bond environment of an index is invariant with respect to the choice of gauge on all other internal indices of the network.

  2. All bond environments are invariant with respect to unitary transformations acting on the external indices of the network.

  3. A bond environment factorizes into a product of two tensors, E_{i i' j j'} = (E_L)_{i i'} (E_R)_{j j'}, if the associated index is a bridge.

It should be clarified that property 2 refers to invariance with respect to a unitary transformation that may act jointly over all the external indices of the network, not only unitary transformations acting singly on each external index. Notice that property 2 further implies that the factorization of property 3 also occurs if there exists a unitary transformation on the external indices that would allow the index to become a bridge in the transformed network (even if this specific transformation is unknown), see Sect. A of the appendix for further discussion. The concept of a bond environment is key to the results presented in this manuscript: the algorithms that we present for (i) fixing the gauge on an index, (ii) quantifying the internal correlations through an index and (iii) truncating the dimension of an index each require only the corresponding bond environment and bond matrix as inputs.

Figure 1: (a) Quantum state |ψ⟩ defined from a network of tensors with bond matrices sitting between pairs of tensors. (b) Tensor network for ⟨ψ|ψ⟩. (c) The bond environment E is defined by contracting the network for ⟨ψ|ψ⟩ while leaving the indices on either side of the chosen bond matrix (and its conjugate) open. (d-e) A change of gauge, which leaves the state invariant, is enacted on an internal index via a pair of matrices together with their inverses. (f) Depiction of the new bond environment and associated bond matrix resulting from the gauge change in (e).

III Gauge fixing

If an internal index of a tensor network is a bridge, then the gauge freedom can be fixed by imposing a Schmidt form with respect to that index. This involves choosing the gauge such that (i) the sub-networks on each side of the bridge each represent an orthonormal basis (in the respective Hilbert spaces corresponding to their external indices) and (ii) the bond matrix is diagonal and positive, with its entries given by the Schmidt coefficients arranged in descending order. While fixing an index in Schmidt form removes most of the gauge freedom, some freedom can still remain. Specifically, if two or more of the Schmidt coefficients are exactly degenerate, then there remains a unitary gauge freedom within the degenerate subspace. There is also a phase ambiguity: the Schmidt form is retained under a gauge change (see Fig. 1(d-e)) in which the gauge matrices are diagonal with entries of unit magnitude, i.e. entries of the form e^{iθ} for real angles θ. Notice that, in the case of real tensors, the phase ambiguity reduces to a (positive/negative) sign ambiguity in the Schmidt basis vectors. A canonical form for any acyclic network is defined by requiring that every internal index is in Schmidt form.
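In practice the Schmidt form across a bridge is obtained from an SVD; a minimal sketch (random real state, illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(6)
# a two-party state psi_{ab}; the bridge is the cut between indices a and b
psi = rng.normal(size=(4, 4))
u, lam, vt = np.linalg.svd(psi)
# columns of u and rows of vt give orthonormal bases for the two sides,
# and lam holds the Schmidt coefficients, already in descending order
assert np.all(np.diff(lam) <= 0)
assert np.allclose(u @ np.diag(lam) @ vt, psi)
```

For real tensors, flipping the signs of a matched column of u and row of vt leaves the decomposition unchanged, which is exactly the sign ambiguity mentioned above.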

We now propose a means for fixing gauge degrees of freedom that is applicable to arbitrary internal indices, not only to bridges. In order to fix the gauge on an internal index of a tensor network, we first compute the corresponding bond environment E and bond matrix σ, as defined earlier. From these we define left and right boundary matrices, ρ_L and ρ_R,

(\rho_{L})_{i i'} \;\equiv\; \sum_{j,j'} E_{i i' j j'} \, (\sigma \sigma^{\dagger})_{j j'} , \qquad (\rho_{R})_{j j'} \;\equiv\; \sum_{i,i'} E_{i i' j j'} \, (\sigma^{\dagger} \sigma)_{i i'} , \qquad (2)

see also Fig. 2(a-b), which are Hermitian and positive (semi-definite) by construction (though not necessarily normalised to unit trace). Notice that, under a change of gauge on the index under consideration, both the corresponding bond environment and bond matrix are themselves altered, as depicted in Fig. 1(f), which in turn changes ρ_L and ρ_R. We now propose a particular choice of gauge:

Weighted trace gauge: An index of a tensor network is in the weighted trace gauge (WTG) if the associated left and right boundary matrices ρ_L and ρ_R are equal to the identity, ρ_L = ρ_R = 𝟙, and the bond matrix σ is diagonal with positive elements arranged in descending order, σ₁ ≥ σ₂ ≥ σ₃ ≥ …. The elements σ_k are henceforth referred to as the WTG coefficients of the index. We say that a network is in canonical form if all of its internal indices are in the WTG.

Before addressing a method to identify the gauge change matrices x and y, see Fig. 1(d-e), that can bring an index into the WTG, we discuss some of its properties. Firstly, we note that if an internal index is a bridge of the network (or could be realised as a bridge through a suitable unitary reorganisation of the external indices, see Sect. A of the appendix), then property 3 of the environment implies that the boundary matrix constraints of Eq. 2 are equivalent to the left/right orthogonality conditions of the Schmidt form. Thus for bridge indices the WTG is precisely equivalent to the Schmidt gauge, such that the WTG coefficients are equal to the Schmidt coefficients. Notice also that, since the gauge condition on an index is only a property of the associated bond environment and bond matrix, properties 1 and 2 imply that the WTG on an index is invariant with respect to (i) the choice of gauge on other internal indices in the network and (ii) unitary transformations of the external indices. Implication (i) is particularly useful from an algorithmic standpoint, as it allows a network to be brought into canonical form by fixing the gauge on each internal index one at a time.

We now turn to the problem of identifying the gauge change matrices that can bring an internal index into the WTG. Unfortunately, we do not know of a deterministic method to identify this choice of gauge, so instead we rely on an iterative approach. This approach alternates between making a gauge change to satisfy the left boundary constraint, ρ_L = 𝟙, followed by making a gauge change to satisfy the right boundary constraint, ρ_R = 𝟙, see Sect. C of the appendix for additional details. This procedure is iterated until the index is sufficiently close to the WTG, i.e. until both boundary constraints are simultaneously satisfied to within some desired accuracy, which usually requires of order 20 iterations.
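For the special case of a bridge in a two-tensor network the fixed point of this iteration can be written in closed form, which makes for a compact numerical illustration; the following sketch (real tensors, index conventions as in Eqs. (1)-(2)) gauge-fixes the bond and verifies that both boundary matrices become proportional to the identity (equal to it after normalizing the state). In a general cyclic network the same two gauge updates must instead be applied alternately, since ρ_L and ρ_R are then not simple Gram matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
chi, d = 3, 4
A = rng.normal(size=(d, chi))        # left tensor  (external a, bond i)
B = rng.normal(size=(chi, d))        # right tensor (bond j, external b)
sigma = rng.normal(size=(chi, chi))  # bond matrix

def sqrtm_sym(M):
    """Symmetric square root of a positive-definite symmetric matrix."""
    w, u = np.linalg.eigh(M)
    return u @ np.diag(np.sqrt(w)) @ u.T

# closed-form gauge fix for a bridge: sigma -> x sigma y becomes diagonal
# with descending entries, and the adjoining tensors become orthonormal
Lh = sqrtm_sym(A.T @ A)
Rh = sqrtm_sym(B @ B.T)
u, s, vt = np.linalg.svd(Lh @ sigma @ Rh)
x, y = u.T @ Lh, Rh @ vt.T
A2, s2, B2 = A @ np.linalg.inv(x), np.diag(s), np.linalg.inv(y) @ B

assert np.allclose(A2 @ s2 @ B2, A @ sigma @ B)   # state is unchanged

# boundary matrices of Eq. (2), computed from the new environment, are
# now both proportional to the identity (the WTG condition)
E2 = np.einsum('ai,aI,jb,Jb->iIjJ', A2, A2, B2, B2)
rhoL = np.einsum('iIjJ,jJ->iI', E2, s2 @ s2.T)
rhoR = np.einsum('iIjJ,iI->jJ', E2, s2.T @ s2)
assert np.allclose(rhoL, rhoL[0, 0] * np.eye(chi))
assert np.allclose(rhoR, rhoR[0, 0] * np.eye(chi))
```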

Figure 2: (a) The left boundary matrix ρ_L is formed by contracting a bond environment with (two copies of) the associated bond matrix σ. (b) The right boundary matrix ρ_R. (c) The weighted trace gauge (WTG) is the choice of gauge that yields trivial boundary matrices, ρ_L = ρ_R = 𝟙.

Notice that, as with the Schmidt form, the WTG is not necessarily unique for a given index. In particular, if there is degeneracy between two or more of the WTG coefficients, then one has a unitary gauge freedom within the degenerate subspace (identical to the freedom present when there is degeneracy in the Schmidt coefficients). Similarly, the WTG has the same phase ambiguity as discussed earlier in the context of the Schmidt decomposition. However, apart from these cases, there is no obvious class of non-trivial gauge transformations compatible with preserving the WTG. Thus it is plausible to conjecture that the WTG is unique (modulo the potential freedoms discussed above), although we are not able to prove this statement; instead we use numerics to investigate the uniqueness of the WTG. The methodology employed is as follows. (i) We start with a tensor network that has been brought into canonical form (using the gauge fixing algorithm discussed above) and then make a random change of gauge on each index of the network, implemented by matrices x and y and their inverses as depicted in Fig. 1(d-e). (ii) We then reapply the gauge fixing algorithm in order to compute the gauge change matrices x′ and y′ necessary to bring the network back into canonical form. (iii) Finally, we compare x′ with x⁻¹ and y′ with y⁻¹. If these matrices are always found to be identical (to within numeric finite-precision errors) over many random changes of gauge, then this is evidence towards the uniqueness of the canonical form.

In order to test this protocol we use two example networks. The first, representative of a generic tensor network, is of the form depicted in Fig. 1(a) and is initialised with random elements in each tensor. The second, representative of a strongly correlated state, is obtained by taking a section from the partition function of the (classical) square-lattice Ising model at critical temperature, and then blocking to obtain a network of four-index tensors as depicted in Fig. 5(a); each blocked tensor encodes the Boltzmann weights of a block of Ising spins. The initial gauge change matrices x and y are chosen randomly on each index, except that they are normalized and their singular values are bounded away from zero, in order to improve numeric stability by avoiding near-singular changes of gauge. We then test the protocol over 1000 random changes of gauge on each internal index of each of the test networks. In all cases we find that the starting canonical form is recovered to within the accuracy expected from double precision numerics (up to the aforementioned phase ambiguity), achieved by applying fewer than 20 iterations of the gauge fixing algorithm. Specifically, we find |x′| = |x⁻¹| to double precision for all instances (where |·| denotes the element-wise absolute value of a matrix, used in order to avoid the phase ambiguity), and similarly for the y matrices. These numeric results provide strong evidence that the canonical form resulting from the WTG is unique, both for generic random tensor networks and for networks that describe strongly correlated quantum states.

Figure 3: (a) Tensor A and its internal structure, see also Eq. 3. (b) State |ψ_A⟩ given by a periodic MPS composed of four copies of A. (c) Tensor B and its internal structure, see also Eq. 4. (d) State |ψ_B⟩ given by a periodic MPS composed of four copies of B. (e) Bond environment from an internal index in |ψ_A⟩. (f) Bond environment from an internal index in |ψ_B⟩. (g) In order to compute the cycle entropy, a bond environment is contracted with its corresponding bond matrices and the eigen-decomposition of the result is taken.

IV Internal correlations

Given that the WTG simplifies to the Schmidt form in acyclic networks, one may be tempted to believe that the WTG coefficients of an index directly relate to some physical property of the quantum state described by a tensor network, just as the Schmidt coefficients relate to the bipartite entanglement entropy of a quantum state [Neilson]. However it turns out that this is not the case, due to the possibility of internal correlations within tensor networks that contain closed loops, as we now demonstrate with a simple example.

Let A be a three-index tensor where each index is of dimension 4. It follows that each index can be decomposed as a product of two finer indices of dimension 2, e.g. i = (i₁, i₂). Tensor A is then defined as having δ-function correlations on the finer indices,

A_{i j k} \;=\; \delta_{i_1 j_1} \, s_{i_2 k_1} \, \delta_{k_2 j_2} , \qquad (3)

where each index is split into a pair of finer indices, i = (i_1, i_2) etc., and s denotes the 2×2 singlet matrix,

as also depicted in Fig. 3(a). Let us now construct a periodic MPS, denoted |ψ_A⟩, of bond dimension 4 formed from four copies of A, as depicted in Fig. 3(b). Notice that the network represents a quantum state that consists of a product of nearest neighbour singlets. Similarly, we define a new tensor B,

B_{i j k} \;=\; s_{i k_1} \, \delta_{k_2 j} , \qquad (4)

where the bond indices i and j are now of dimension 2, while the external index k = (k₁, k₂) remains of dimension 4. We then form a periodic MPS, denoted |ψ_B⟩, from four copies of the tensor B of Fig. 3(c), as depicted in Fig. 3(d). It is easily understood that the quantum state described by this network is again a product of nearest neighbour singlets, identical to the previous state up to normalization. However, despite describing the same quantum state, the two tensor network representations are fundamentally different; the network for |ψ_A⟩ contains a string of internal correlations around the closed loop, although these do not contribute to any property of the corresponding quantum state. It is also easily checked that the WTG coefficients differ between the two representations; |ψ_A⟩ has four equal WTG coefficients on each index versus two equal coefficients in |ψ_B⟩. This is a clear demonstration that the WTG coefficients of a cyclic tensor network do not necessarily correspond to a physical property of the represented quantum state. One can understand this as a consequence of the inability of the WTG coefficients to distinguish between (physically meaningful) correlations between external indices and (physically irrelevant) internal correlations around closed loops.
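The two representations can be compared concretely; the following sketch builds one explicit realisation of tensors consistent with the construction of Eqs. (3)-(4) (with s an unnormalized singlet matrix, an assumption of this sketch) and verifies that the two periodic MPS describe the same state up to normalization:

```python
import numpy as np

s = np.array([[0., 1.], [-1., 0.]])      # unnormalized singlet matrix

# B[i, j, k1, k2] = s[i, k1] * delta[k2, j]: bond dimension 2
B = np.einsum('ik,lj->ijkl', s, np.eye(2))
# A carries an additional delta-function string through its bond indices:
# A[(I,i), (J,j), k1, k2] = delta[I, J] * B[i, j, k1, k2]: bond dimension 4
A = np.einsum('IJ,ijkl->IiJjkl', np.eye(2), B).reshape(4, 4, 2, 2)

# periodic MPS of four copies (first two legs of each tensor are the bonds)
psiB = np.einsum('abij,bckl,cdmn,dapq->ijklmnpq', B, B, B, B)
psiA = np.einsum('abij,bckl,cdmn,dapq->ijklmnpq', A, A, A, A)

# same state up to normalization: the delta string around the loop does not
# touch the external indices and simply traces to a factor of 2
assert np.allclose(psiA, 2 * psiB)
```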

The possibility of such internal correlations marks an important distinction between acyclic and cyclic tensor networks. We now propose a way to quantify the presence of internal correlations through an internal index of a tensor network. There are some natural criteria that such a measure should satisfy: (i) it should be zero if the index under consideration is a bridge and (ii) it should be invariant under the choice of gauge on the index. In order to arrive at such a measure, we first contract the bond environment with the bond matrices associated to the index to form a tensor T,

T_{i i' j j'} \;=\; \sum_{k,k'} E_{i i' k k'} \, \sigma_{k j} \, \sigma^{*}_{k' j'} , \qquad (5)

and then compute the eigenvalues μ_α of T viewed as a linear map from the index pair (j, j') to the pair (i, i'), as depicted in Fig. 3(g). Notice that the eigenvalues are clearly independent of the choice of gauge (as changing the gauge is equivalent to a change of basis on T). We now take the absolute values of the eigenvalues (which, in general, may be complex) and normalise them, and define the cycle entropy S_cycle as the von-Neumann entropy of this normalized spectrum,

S_{\rm cycle} \;=\; -\sum_{\alpha} \tilde{\mu}_{\alpha} \log \tilde{\mu}_{\alpha} , \qquad \tilde{\mu}_{\alpha} \equiv \frac{|\mu_{\alpha}|}{\sum_{\beta} |\mu_{\beta}|} . \qquad (6)

Notice that, by property 3 of the bond environment, it is clear that S_cycle = 0 if the index under consideration is a bridge, as desired. The reverse statement is also true: if S_cycle = 0 then it follows that the index under consideration can be realised as a bridge, perhaps after some appropriate unitary cycle reduction as discussed in Sect. A of the appendix. Thus, if the cycle entropy is zero, then the WTG coefficients are precisely equal to the Schmidt coefficients of this bridge realization. It follows that, if the cycle entropy through an index is zero (or sufficiently small), one can achieve an optimal (or near optimal) truncation of this index by transforming to the WTG and then simply discarding the smallest WTG coefficients. This demonstrates the usefulness of the cycle entropy in quantifying the extent of cycle correlations through an internal index of a tensor network.
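A compact numerical sketch of this construction (real tensors, same index conventions as above; the tolerance used to discard numerically zero eigenvalue weights is an implementation choice of this sketch):

```python
import numpy as np

def cycle_entropy(E, sigma, tol=1e-12):
    """Cycle entropy of Eq. (6): contract the bond environment with the two
    bond matrices (Eq. (5)), diagonalize the result as a map between its
    index pairs, then take the entropy of the normalized absolute spectrum."""
    chi = sigma.shape[0]
    T = np.einsum('iIkK,kj,KJ->iIjJ', E, sigma, sigma)
    mu = np.linalg.eigvals(T.reshape(chi**2, chi**2))
    p = np.abs(mu) / np.abs(mu).sum()
    p = p[p > tol]                      # drop numerically zero weights
    p = p / p.sum()
    return -np.sum(p * np.log(p))

# sanity check: for a bridge index the environment factorizes,
# E = E_L x E_R, and the cycle entropy vanishes
rng = np.random.default_rng(3)
chi = 3
EL = rng.normal(size=(chi, chi)); EL = EL @ EL.T
ER = rng.normal(size=(chi, chi)); ER = ER @ ER.T
E = np.einsum('iI,jJ->iIjJ', EL, ER)
sigma = np.diag(rng.uniform(0.5, 1.0, chi))
assert cycle_entropy(E, sigma) < 1e-8
```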

V Optimal truncations

A task that is ubiquitous in tensor network algorithms is that of truncating an internal index from some initial dimension χ to some smaller dimension χ′ in an optimal manner. In the case of bridge indices this is easily accomplished by discarding the smallest of the corresponding Schmidt coefficients. A common approach to the truncation of non-bridge indices is to reduce the index under consideration to a bridge by “cutting” open other indices of the network, a process we refer to as a cycle reduction via cutting in Sect. B of the appendix, and then applying a Schmidt decomposition. However, if the cycle entropy through an internal index is non-zero, then such a cycle reduction will not produce an optimal truncation. In this case a more sophisticated approach is required, one which can distinguish and remove the redundant internal correlations from the network. We now propose an algorithm, which we call full environment truncation (FET), that can potentially achieve an optimal truncation even for internal indices with non-zero cycle entropy, S_cycle > 0.

Figure 4: (a-c) The index connecting two tensors is truncated to a smaller dimension χ′ by replacing the bond matrix σ with the product u σ′ v†, where u and v are isometries and σ′ is a χ′ × χ′ matrix. (d-f) The overlaps ⟨ψ|ψ⟩, ⟨ψ̃|ψ⟩ and ⟨ψ̃|ψ̃⟩ of the states from (a-b), expressed in terms of the bond environment E.

Let us assume that we have a tensor network describing a quantum state |ψ⟩, and that we wish to optimally truncate the dimension of a chosen internal index from initial dimension χ to final dimension χ′ < χ, i.e. so as to leave the resulting state as close to the original as possible. Here we quantify the difference between the initial state |ψ⟩ and the final state |ψ̃⟩ of the truncated network using the fidelity F,

F \;=\; \frac{ |\langle \tilde{\psi} | \psi \rangle|^{2} }{ \langle \psi | \psi \rangle \, \langle \tilde{\psi} | \tilde{\psi} \rangle } , \qquad (7)

which we seek to maximise. The truncation can be implemented by replacing the bond matrix σ of the index under consideration with some rank-χ′ matrix σ̃ which, making use of the SVD, can generically be expressed as the product σ̃ = u σ′ v†, see Fig. 4(b). Here u and v are χ × χ′ isometries, such that u†u = v†v = 𝟙, with 𝟙 the χ′ × χ′ identity matrix, and σ′ is a diagonal matrix of positive real values. Notice that the isometries u and v can be absorbed into their adjoining tensors, see Fig. 4(c), such that the resulting tensor network, which defines the new quantum state |ψ̃⟩, is of the same geometry as the original but with a reduced index dimension of χ′. The task of optimally truncating an internal index can thus be recast as optimizing the isometries u, v and the matrix σ′ so as to maximise the fidelity of Eq. 7.

The isometries u, v and the matrix σ′ can be iteratively optimised so as to maximise this fidelity, as we now describe. Here we present only an outline of this optimization algorithm; the full details can be found in Sect. D of the appendix. For the first step, σ′ is grouped with one of the isometries into a single tensor, and the optimal such grouping is solved for while the other isometry is held fixed. This can be achieved through standard techniques, as it is equivalent to solving a generalized eigenvalue problem. Once the optimum is obtained, its SVD is taken to produce updated tensors. At the next step, the complementary grouping is similarly updated with the other isometry held fixed. These two steps are iterated until all tensors converge. Notice that the terms in the fidelity of Eq. 7 can be expressed solely using the corresponding bond environment E and bond matrix σ, as depicted in Fig. 4(d-f). Thus the FET optimization algorithm requires only these two tensors as input, and can be applied regardless of the wider structure of the network under consideration (so long as the environment can be computed).
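A minimal sketch of such an alternating optimization, using only E and σ as inputs (real tensors; E is assumed to carry the ket-bra symmetry E_{i i' j j'} = E_{i' i j' j} it acquires by construction; the rank-χ′ replacement is parametrized here as σ̃ = l·r, absorbing σ′ into the factors, and each half-step solves its linear problem exactly):

```python
import numpy as np

def fet(E, sigma, chi_new, iters=1000):
    """Full environment truncation (sketch): find a rank-chi_new replacement
    sigma_new = l @ r for the bond matrix sigma, maximizing the fidelity of
    Eq. (7), using only the bond environment E (index order E_{i i' j j'})."""
    chi = sigma.shape[0]
    u, s, vt = np.linalg.svd(sigma)          # initialize from a truncated SVD
    l = u[:, :chi_new] * s[:chi_new]
    r = vt[:chi_new, :]
    for _ in range(iters):
        # optimal l at fixed r: F is a Rayleigh quotient in l, maximized
        # (up to irrelevant scale) by the solution of the system M l = b
        b = np.einsum('iIjJ,aj,IJ->ia', E, r, sigma)
        M = np.einsum('iIjJ,aj,bJ->iaIb', E, r, r).reshape(chi * chi_new, -1)
        l = np.linalg.solve(M, b.ravel()).reshape(chi, chi_new)
        # optimal r at fixed l, by the same reasoning
        b2 = np.einsum('iIjJ,ia,IJ->aj', E, l, sigma)
        M2 = np.einsum('iIjJ,ia,Ib->ajbJ', E, l, l).reshape(chi_new * chi, -1)
        r = np.linalg.solve(M2, b2.ravel()).reshape(chi_new, chi)
    return l @ r

def fidelity(E, sigma_new, sigma):
    """Fidelity of Eq. (7), with every overlap expressed through E."""
    overlap = np.einsum('iIjJ,ij,IJ->', E, sigma_new, sigma)
    return overlap ** 2 / (np.einsum('iIjJ,ij,IJ->', E, sigma, sigma)
                           * np.einsum('iIjJ,ij,IJ->', E, sigma_new, sigma_new))

# check on a bridge index, where the optimum is known: FET should reproduce
# the fidelity of the optimal Schmidt (SVD) truncation of the full state
rng = np.random.default_rng(7)
chi, d, chi_new = 3, 4, 2
A = rng.normal(size=(d, chi)); B = rng.normal(size=(chi, d))
sigma = rng.normal(size=(chi, chi))
E = np.einsum('ai,aI,jb,Jb->iIjJ', A, A, B, B)     # bond environment
F = fidelity(E, fet(E, sigma, chi_new), sigma)
lam = np.linalg.svd(A @ sigma @ B, compute_uv=False)
Fstar = np.sum(lam[:chi_new] ** 2) / np.sum(lam ** 2)
```

Because each half-step is an exact maximization over one factor, the fidelity is non-decreasing over the iteration, mirroring the monotone convergence observed for the full algorithm.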

Figure 5: An internal index is truncated within three networks of the coarse-grained tensors, of increasing size, shown in panels (a), (b) and (c).
Table 1: Fidelity errors 1 − F from truncation of an index from initial dimension χ to final dimension χ′ in the networks of Fig. 5, comparing the error from a Schmidt decomposition of a cycle reduction (see Fig. 9) to the error from a full environment truncation (FET). The cycle entropy S_cycle of the index before and after the FET is also given.

In order to test the FET algorithm, we apply it to the partition function of the (classical) square-lattice Ising model at critical temperature. We begin from the standard tensor network representation of the partition function, where each four-index tensor represents the Boltzmann weights of a plaquette of Ising spins [TNR4]. Then we form coarse-grained tensors through application of four iterations of the higher order tensor renormalization group (HOTRG) algorithm [TRG6] (with a fixed maximal bond dimension), such that each coarse-grained tensor now represents the Boltzmann weights of a block of Ising spins. Finally, we apply the FET algorithm to truncate an internal index of the blocks of coarse-grained tensors depicted in Fig. 5, from initial dimension χ to final dimension χ′. The resulting fidelity errors, 1 − F, are displayed in Tab. 1, where we compare the error from a Schmidt decomposition applied to a cycle reduction (see Fig. 9 of the appendix) to the error from the FET algorithm. In all instances the FET is seen to be more accurate, and the magnitude of the accuracy improvement grows as the cycle entropy increases. This is as expected, since the FET algorithm achieves a more accurate truncation through removal of internal correlations, seen in the reduction of the cycle entropy in Tab. 1, whereas the cycle reduction approach preserves the internal correlations. In all cases, fewer than 20 iterations were required to optimise the tensors of the FET algorithm. It was also found that the optimizations seemed to converge to the same final tensors regardless of how they were initialized, which suggests that the FET algorithm converges to the global minimum of the fidelity error, as opposed to getting stuck in a local minimum.

VI Discussion

While tensor networks that contain closed loops are undoubtedly more complicated than their loop-free counterparts, this manuscript has introduced several ideas to facilitate working with such networks. These include: (i) a method of fixing the gauge degrees of freedom (which yields a well-defined canonical form for arbitrary networks), (ii) a means of quantifying the extent of internal correlations through an index (via the cycle entropy S_cycle), and (iii) an algorithm that potentially allows for the optimal truncation of indices through removal of internal correlations. We envision that these results will have useful applications across a broad range of tensor network algorithms, some of which we discuss below.

Fixing the gauge is particularly useful in optimisation algorithms for tensor networks, as it can allow certain intermediate tensors to be reused between optimisation iterations. For instance, a large amount of computation time in the iPEPS algorithm [PEPS1, PEPS2, PEPS3] is spent computing the boundary MPS, necessary for evaluation of the local environment, which must be recomputed every time the PEPS tensors change. Fixing the gauge allows the boundary MPS from a previous iteration to be used as the starting point for the calculation of the updated boundary MPS (whereas, if the gauge were not properly fixed, the previous boundary MPS may be in a different gauge and thus not suitable as a starting point). For the specific case of translation invariant iPEPS, an alternative means of fixing the gauge was already proposed in Ref. [PEPS4], which allowed for significant efficiency improvements through proper recycling of the environment. The results of this manuscript provide a more general way to accomplish this task of recycling intermediate tensors, which could be applied to arbitrary networks.

The measure S_cycle for quantifying the extent of internal correlations, and the FET scheme for removing them, are directly applicable to tensor renormalization group (TRG) schemes for coarse-graining path integrals and partition functions. A significant problem with the original TRG scheme of Levin and Nave [TRG1], and its later generalizations [TRG2-TRG7], is that they fail to remove internal correlations. These internal correlations can thus accumulate over successive RG steps and cause a computational breakdown of the approach. This problem was resolved with tensor network renormalization [TNR1-TNR4] (TNR), which introduced unitary disentanglers to remove internal correlations and prevent their accumulation, allowing a sustainable RG flow. Many similar methods have followed [TNRc1-TNRc5], using a variety of alternative techniques to remove internal correlations as part of the coarse-graining step. Likewise the FET algorithm, which was demonstrated to be effective in the removal of internal correlations, can be directly incorporated as part of a TNR-like renormalization scheme for tensor networks. The details of this implementation and some benchmark results are described in Sect. E of the appendix. For the classical Ising model at critical temperature, this approach was able to resolve the free energy per spin to high accuracy on a very large lattice. This calculation required approximately 20 minutes of computation time on a desktop PC, which compares favourably with previous approaches. A key feature of the FET is that it can be applied to remove internal correlations from arbitrary networks regardless of their geometry, similar to the recently proposed approach of Ref. [TNRc4], in contrast to most other previous approaches, which are specialised to a single geometry. For instance, the proposed approach could straightforwardly be generalized to coarse-grain higher-dimensional networks, allowing quantum systems in higher dimensions to be studied, which will be considered in future work.

The author thanks Markus Hauru for useful discussions. This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund.

References

  • (1) J. I. Cirac and F. Verstraete, Renormalization and tensor product states in spin chains and lattices, J. Phys. A: Math. Theor. 42, 504004 (2009).
  • (2) R. Orus, A practical introduction to tensor networks: Matrix product states and projected entangled pair states, Ann. Phys. 349, 117 (2014).
  • (3) G. K.-L. Chan and S. Sharma, The density matrix renormalization group in quantum chemistry, Annu. Rev. Phys. Chem. 62, 465 (2011).
  • (4) N. Nakatani and G. K.-L. Chan, Efficient tree tensor network states (TTNS) for quantum chemistry: generalizations of the density matrix renormalization group algorithm, J. Chem. Phys. 138, 134113 (2013).
  • (5) A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade and M. Telgarsky, Tensor decompositions for learning latent variable models, Journal of Machine Learning Research 15, 2773–2832 (2014).
  • (6) A. Novikov, D. Podoprikhin, A. Osokin and D. Vetrov, Tensorizing neural networks, arXiv:1509.06569 (2015).
  • (7) E. M. Stoudenmire and D. J. Schwab, Supervised learning with tensor networks, Advances In Neural Information Processing Systems 29, pp. 4799–4807 (2016).
  • (8) B. Swingle, Entanglement renormalization and holography, Phys. Rev. D 86, 065007 (2012).
  • (9) B. Swingle, Constructing holographic spacetimes using entanglement renormalization, arXiv:1209.3304 (2012).
  • (10) P. Hayden, S. Nezami, X. L. Qi, N. Thomas, M. Walter and Z. Yang, Holographic duality from random tensor networks, arXiv:1601.01694 (2016).
  • (11) G. Evenbly, Hyperinvariant tensor networks and holography, Phys. Rev. Lett. 119, 141602 (2017).
  • (12) S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992).
  • (13) S. R. White, Density-matrix algorithms for quantum renormalization groups, Phys. Rev. B 48, 10345 (1993).
  • (14) U. Schollwoeck, The density-matrix renormalization group, Rev. Mod. Phys. 77, 259 (2005).
  • (15) M. Fannes, B. Nachtergaele, and R. F. Werner, Finitely correlated states on quantum spin chains, Commun. Math. Phys. 144, 443 (1992).
  • (16) S. Ostlund and S. Rommer, Thermodynamic limit of density matrix renormalization, Phys. Rev. Lett. 75, 3537 (1995).
  • (17) F. Verstraete and J. I. Cirac, Renormalization algorithms for quantum-many body systems in two and higher dimensions, arXiv:cond-mat/0407066 (2004).
  • (18) F. Verstraete, J.I. Cirac, and V. Murg, Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems, Adv. Phys. 57, 143 (2008).
  • (19) J. Jordan, R. Orus, G. Vidal, F. Verstraete, and J. I. Cirac, Classical simulation of infinite-size quantum lattice systems in two spatial dimensions, Phys. Rev. Lett. 101, 250602 (2008).
  • (20) G. Vidal, A class of quantum many-body states that can be efficiently simulated, Phys. Rev. Lett. 101, 110501 (2008).
  • (21) L. Cincio, J. Dziarmaga, and M. M. Rams, Multiscale entanglement renormalization ansatz in two dimensions: quantum Ising model, Phys. Rev. Lett. 100, 240603 (2008).
  • (22) G. Evenbly and G. Vidal, Entanglement renormalization in two spatial dimensions, Phys. Rev. Lett. 102, 180406 (2009).
  • (23) G. Evenbly and G. Vidal, Quantum criticality with the multi-scale entanglement renormalization ansatz, Chapter 4 in Strongly Correlated Systems: Numerical Methods, edited by A. Avella and F. Mancini (Springer Series in Solid-State Sciences, Vol. 176 2013).
  • (24) M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press 2000).
  • (25) Y. Shi, L. Duan and G. Vidal, Classical simulation of quantum many-body systems with a tree tensor network, Phys. Rev. A 74, 022320 (2006).
  • (26) L. Tagliacozzo, G. Evenbly and G. Vidal, Simulation of two-dimensional quantum systems using a tree tensor network that exploits the entropic area law, Phys. Rev. B 80, 235127 (2009).
  • (27) M. Levin and C. P. Nave, Tensor renormalization group approach to two-dimensional classical lattice models, Phys. Rev. Lett. 99, 120601 (2007).
  • (28) H. C. Jiang, Z. Y. Weng, and T. Xiang, Accurate determination of tensor network state of quantum lattice models in two dimensions, Phys. Rev. Lett. 101, 090603 (2008).
  • (29) Z.-Y. Xie, H.-C. Jiang, Q.-N. Chen, Z.-Y. Weng, and T. Xiang, Second renormalization of tensor-network states, Phys. Rev. Lett. 103, 160601 (2009).
  • (30) Z.-C. Gu and X.-G. Wen, Tensor-entanglement-filtering renormalization approach and symmetry protected topological order, Phys. Rev. B 80, 155131 (2009).
  • (31) H.-H. Zhao, Z.-Y. Xie, Q.-N. Chen, Z.-C. Wei, J. W. Cai, and T. Xiang, Renormalization of tensor-network states, Phys. Rev. B 81, 174411 (2010).
  • (32) Z.-Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and T. Xiang, Coarse-graining renormalization by higher-order singular value decomposition, Phys. Rev. B 86, 045139 (2012).
  • (33) A. Garcia-Saez and J. I. Latorre, Renormalization group contraction of tensor networks in three dimensions, Phys. Rev. B 87, 085130 (2013).
  • (34) G. Evenbly and G. Vidal, Tensor network renormalization, Phys. Rev. Lett. 115, 180405 (2015).
  • (35) G. Evenbly and G. Vidal, Tensor network renormalization yields the multi-scale entanglement renormalization ansatz, Phys. Rev. Lett. 115, 200401 (2015).
  • (36) G. Evenbly and G. Vidal, Local scale transformations on the lattice with tensor network renormalization, Phys. Rev. Lett. 116, 040401 (2016).
  • (37) G. Evenbly, Algorithms for tensor network renormalization, Phys. Rev. B 95, 045117 (2017).
  • (38) S. Yang, Z.-C. Gu, and X.-G. Wen, Loop optimization for tensor network renormalization, Phys. Rev. Lett. 118, 110504 (2017).
  • (39) M. Bal, M. Mariën, J. Haegeman, and F. Verstraete, Renormalization group flows of Hamiltonians using tensor networks, Phys. Rev. Lett. 118, 250602 (2017).
  • (40) L. Ying, Tensor Network Skeletonization, arXiv:1607.00050 (2016).
  • (41) M. Hauru, C. Delcamp and S. Mizera, Renormalization of tensor networks using graph independent local truncations, arXiv:1709.07460 (2017).
  • (42) G. Evenbly, Implicitly disentangled renormalization, arXiv:1707.05770 (2017).
  • (43) Note that we assume the tensors to be real-valued in order to avoid having to distinguish complex conjugates in the notation, although all of the results presented can easily be extended to complex-valued tensors.
  • (44) H. N. Phien, J. A. Bengua, H. D. Tuan, P. Corboz and Roman Orus, The iPEPS algorithm, improved: fast full update and gauge fixing, Phys. Rev. B 92, 035142 (2015).

Appendix A Cycle reductions via external unitaries

In this Appendix we discuss examples of tensor networks where a non-bridge internal index (i.e. an index that is contained in a cycle) can be reduced to a bridge via a suitable external unitary transformation, which we call a unitary cycle reduction.

Consider the example presented in Fig. 6(a-b); here a network, composed of corner double line (CDL) tensors, describes a quantum state on a four site lattice. It is easily seen that, for the labelled index , there is no possible partitioning of the external indices such that is a bipartition of the state . However, in this example, there exists a unitary such that in the transformed state, , the index has become a bridge, as depicted in Fig. 6(c-d). A second example, consisting of a periodic MPS, is presented in Fig. 7. We assume that the MPS is injective and has a finite correlation length , as would be the case if the MPS described the ground state of a gapped periodic system. Then one may argue that there exists some unitary , acting on sites, that would reduce the periodic MPS to an acyclic network as depicted in Fig. 7(b-c).
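The obstruction in the CDL example can be checked numerically. The sketch below (an illustration with an assumed fine-index dimension d = 2, not taken from the paper) builds the four-site ring of delta-function tensors of Fig. 6(b) and verifies that the Schmidt rank across a bipartition of the sites is d², which exceeds the dimension d of any single internal index, so no internal index can serve as a bipartition of the state without first applying an external unitary.

```python
import numpy as np

d = 2  # dimension of each "fine" index; external legs have dimension d * d

# Delta-function site tensor of Fig. 6(b): the external leg (a1, a2) is locked
# to the two internal loop legs, A[a1, a2, left, right] = delta(a1, left) * delta(a2, right).
A = np.zeros((d, d, d, d))
for a1 in range(d):
    for a2 in range(d):
        A[a1, a2, a1, a2] = 1.0

# Contract four site tensors around the ring to obtain the four-site state.
psi = np.einsum('wxli,yzij,uvjk,stkl->wxyzuvst', A, A, A, A)

# Schmidt rank across the cut {sites 1,2 | sites 3,4}: two delta-lines cross
# the cut, giving rank d**2, larger than the dimension d of the cut indices.
rank = np.linalg.matrix_rank(psi.reshape(d**4, d**4))
```

Since rank = d² > d, truncating (or gauge-fixing) an internal index of this network via a naive Schmidt decomposition is impossible until the external unitary of Fig. 6(c) is applied.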

Once an internal index has been reduced to a bridge, its gauge may be fixed or its dimension truncated using the Schmidt decomposition. The results from the main text are particularly useful in characterising when it is possible for an internal index to become a bridge; there exists an external unitary that allows an internal index to become a bridge if and only if the corresponding cycle entropy is zero, . However, in instances that , the Schmidt gauge that would be reached after the external unitary is precisely equivalent to the WTG, but the latter can be determined without first needing to determine .

Figure 6: (a) A tensor network describes a quantum state on a lattice of four sites. (b) A specific instance of the network from (a), where each tensor index can be decomposed as a product of two finer indices, and that the tensors have -function like correlations in the finer indices as depicted. (c-d) Application of a suitably chosen unitary on the external indices allows index to be realised as a bridge of the network.
Figure 7: (a) A periodic MPS, which describes a quantum state . We assume that the MPS is injective and has a small correlation length, . (b-c) An appropriately chosen unitary may disentangle the MPS to an acyclic network.

Appendix B Cycle reductions via index cutting

This appendix discusses a standard way of dealing with gauge fixing and truncation of non-bridge indices in cyclic tensor networks, whereby the index under consideration is made into a bridge by “cutting” indices of the original network. An example of this is given in Fig. 8, where it is assumed that one wants to fix the gauge and/or truncate index from a network describing quantum state . This can be achieved by cutting internal index , thus promoting it to a pair of external indices and , see Fig. 8(b). Notice that index is now a bridge of the new tensor network, which describes a quantum state in an enlarged Hilbert space. We call this manipulation a cycle reduction (via cutting) of the network with respect to . One could then fix the gauge on index and truncate its dimension using a Schmidt decomposition on the reduced network.
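A minimal numerical illustration of cutting (with assumed dimensions and random tensors, not taken from the paper): a two-tensor ring is opened by promoting one internal index to a pair of external indices, after which the remaining internal index is a bridge and admits an ordinary Schmidt decomposition via the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
du, da, chi = 3, 4, 2  # chi: dimension of the index to be gauge-fixed/truncated

# A ring of two tensors: psi[a, b] = sum over t, u of X[u, a, t] * Y[t, b, u].
X = rng.standard_normal((du, da, chi))
Y = rng.standard_normal((chi, da, du))
psi = np.einsum('uat,tbu->ab', X, Y)

# Cut index u: it becomes a pair of external indices (u1, u2), and index t is
# now a bridge of the opened network.
psi_cut = np.einsum('uat,tbv->uabv', X, Y)

# Schmidt decomposition across the bridge t via the SVD.
s = np.linalg.svd(psi_cut.reshape(du * da, da * du), compute_uv=False)

# Re-joining the cut index recovers the original state.
psi_rejoined = np.einsum('uabu->ab', psi_cut)
```

The Schmidt rank of the opened state is bounded by the bridge dimension chi, while the original contraction is exactly recovered upon re-joining the cut index.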

However, there are several significant problems with cycle reductions based on cutting. The first problem is that they are not unique. Consider, for instance, the example given in Fig. 8(c), where a change of gauge on the internal index changes the quantum state produced by the reduction, see Fig. 8(d), and thus also changes the Schmidt basis on index . More generally, one also has freedom in the choice of which internal indices are cut to produce the reduction. The second, perhaps more severe, problem is that this type of cycle reduction does not (in general) allow for an optimal truncation of an internal index. This follows as the cycle reduction promotes internal correlations into physical correlations (in the enlarged Hilbert space), such that they will be preserved in the subsequent Schmidt decomposition. In contrast, a method that takes the internal correlations into account, such as the FET algorithm presented in the main text, can potentially achieve a more accurate truncation through identification and removal of internal correlations. This can be seen in Tab. 1, which compares truncation of the networks depicted in Fig. 5 based on FET against truncation based on the cycle reductions of Fig. 9.

Figure 8: (a) A tensor network describes a quantum state . (b) Internal index is cut so as to become a pair of external indices and , such that internal index becomes a bipartition of the resulting network. (c) A gauge transformation is made on index of the network from (a). (d) The state produced from cutting index after the gauge transformation differs from the state of (b).
Figure 9: (a-c) Cycle reductions used to truncate internal index in the networks in Fig. 5.

Appendix C Gauge fixing algorithm

In this appendix we detail a numerical algorithm that can be used to fix an internal index of a tensor network in the weighted trace gauge (WTG). This algorithm enacts a sequence of gauge changes, implemented through gauge change matrices , , and as depicted in Fig. 1(d-e), in order to eventually converge to the WTG. There are two possible versions of this algorithm. In the first, one alternates between enacting the that satisfies the left gauge constraint and enacting the that satisfies the right gauge constraint. In the second version, one finds the that satisfies the left gauge constraint and (independently) finds the that satisfies the right gauge constraint before enacting both simultaneously. In this appendix we focus on the latter version, which tends to be slightly faster in practice.

The first step of the algorithm is to compute the bond environment of the internal index under consideration, which only needs to be done once. The following sequence of steps is then iterated until the WTG is reached. In the first step, (i) we use the environment and bond matrix to compute the left and right boundary matrices, and respectively,

(8)

and then take the eigen-decomposition of these matrices,

(9)

see also Fig. 10(a-b). In the second step (ii) we compute the product,

(10)

before taking its singular value decomposition,

(11)

see also Fig. 10(c). In the final step (iii) one implements gauge change matrices , , and defined as,

(12)
(13)

see also Fig. 10(d-e). This change of gauge generates a new environment tensor and new bond matrix, as depicted in Fig. 1(f), which serve as the starting point for the next iteration. One repeats steps (i-iii) until the WTG constraints are satisfied to within the desired tolerance. The number of iterations required depends on the extent of internal correlations through the index under consideration: if the index has trivial cycle entropy, i.e. , then the algorithm converges to the WTG after only a single iteration, while more iterations are generally needed when is larger. However, in the examples considered in the main text, which included cases where the cycle entropy was significant, fewer than 20 iterations were needed to converge to the WTG with high precision.
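The single-iteration case can be sketched concretely. The code below is an illustration, not the paper's implementation: it assumes a factorized bond environment E = A ⊗ B (the trivial cycle-entropy case), for which the left and right boundary matrices are proportional to A and B; the specific contraction weights are assumptions chosen for this toy setting. One pass through steps (i)-(iii) then renders both boundary matrices proportional to the identity while leaving the network invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
chi = 6

# Assumed factorized bond environment E = A (x) B, with A, B symmetric
# positive definite, and a random bond matrix sigma on the index.
A = rng.standard_normal((chi, chi)); A = A @ A.T + np.eye(chi)
B = rng.standard_normal((chi, chi)); B = B @ B.T + np.eye(chi)
sigma = rng.standard_normal((chi, chi))

# (i) boundary matrices and their eigen-decompositions; for a factorized
# environment these are proportional to A and B respectively.
rhoL = A * np.trace(sigma @ sigma.T @ B)
rhoR = B * np.trace(sigma.T @ sigma @ A)
dL, UL = np.linalg.eigh(rhoL)
dR, UR = np.linalg.eigh(rhoR)

# (ii) the weighted product and its singular value decomposition.
P = np.diag(np.sqrt(dL)) @ UL.T @ sigma @ UR @ np.diag(np.sqrt(dR))
u, s, vT = np.linalg.svd(P)

# (iii) gauge change matrices and the new (diagonal) bond matrix.
x = UL @ np.diag(dL ** -0.5) @ u
y = UR @ np.diag(dR ** -0.5) @ vT.T
sigma_new = np.diag(s)
```

One can verify that the gauge change leaves the bond unchanged (x sigma_new yᵀ = sigma) and that the transformed boundary matrices xᵀAx and yᵀBy are proportional to the identity, so the iteration terminates after a single step, consistent with the trivial cycle-entropy case discussed above.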

Figure 10: Depiction of the steps required in the gauge fixing algorithm. (a) Evaluation of the left boundary matrix and its eigen-decomposition. (b) Evaluation of the right boundary matrix and its eigen-decomposition. (c-e) Definitions of the gauge change matrices and produced by an iteration of the gauge fixing algorithm.
Figure 11: (a-c) Tensor is first decomposed into a product of tensors via the SVD, and then the connecting index is truncated down to a smaller dimension . (d-e) A pair of indices in the network is truncated to a single effective index.

Appendix D Algorithm for optimal internal truncations

In this appendix we describe the algorithm for a full environment truncation (FET), which allows for a potentially optimal truncation of an internal index in a network. Before introducing this algorithm we note that, although formulated as a method for truncating a single internal index, the FET method can be easily applied to several alternate scenarios. For instance, the FET could also be applied if one wanted to split a single tensor from a network into a pair of tensors in an optimal manner. Here one can first decompose the tensor using a (truncation-free) singular value decomposition, and then apply the FET algorithm to truncate the connecting index, as depicted in Fig. 11(a-c). One can also apply the FET algorithm to truncate multiple internal indices down to a single effective index, as depicted in Fig. 11(d-e).

Figure 12: Diagrams relating to the full environment truncation (FET) algorithm. (a) The fidelity between an initial and a truncated state expressed in terms of a bond environment , see Fig. 4. (b-d) Definitions of tensors , and . (e) The fidelity can be expressed as a generalized eigenvalue problem in as . (f) The fidelity is maximized with the choice . (g) Updated tensor , and are obtained from the SVD of the product .

As discussed in the main text, the problem of truncating an internal index of a tensor network from some initial dimension to a smaller dimension can be reformulated as one of replacing the bond matrix on with a product of tensors as depicted in Fig. 4(b). Here and are isometries, such that , with the identity matrix, and is a diagonal matrix of positive real values. We now propose an iterative algorithm to find these tensors so as to maximise the fidelity, see Eq. 7, of the truncated state with the original state. Before starting the iterations, we compute the bond environment of the index under consideration, which allows the fidelity to be expressed as a simple quotient of tensor networks containing , see Fig. 12(a). One should then initialise the tensors , which can be done in a number of ways. Perhaps the simplest initialization is achieved by taking a truncated SVD of the bond matrix , retaining the largest singular values. In the first step we define , and then seek to solve for the optimal while the tensor is held fixed. Let us define tensors and from the environment as depicted in Fig. 12(c-d). This allows us to express the fidelity as a generalized eigenvalue problem in ,

(14)

with , see also Fig. 12(e). Given that is simply the outer product of vectors , the solution for that maximises the fidelity of Eq. 14 is known analytically as . One can then take the SVD of to obtain updated and tensors, see Fig. 12(g). At the next step, the product is similarly updated with held fixed. These two steps are iterated until the tensors are sufficiently converged. In the examples considered in the main text, convergence required fewer than 20 iterations.
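The structure of this analytic update can be illustrated generically. The sketch below uses hypothetical names (B for the denominator matrix of the quotient and b for the vector whose outer product forms the rank-1 numerator): a fidelity of the form F(r) = (b·r)² / (rᵀBr) is maximised in closed form by r ∝ B⁻¹b, with maximum value bᵀB⁻¹b.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Hypothetical instance of the update: maximize F(r) = (b.r)^2 / (r.B.r)
# with B symmetric positive definite and a rank-1 numerator b b^T.
B = rng.standard_normal((n, n)); B = B @ B.T + np.eye(n)
b = rng.standard_normal(n)

def fidelity(r):
    return (b @ r) ** 2 / (r @ B @ r)

# Closed-form maximiser: r proportional to B^{-1} b (Cauchy-Schwarz bound).
r_opt = np.linalg.solve(B, b)
F_max = b @ r_opt  # equals b^T B^{-1} b
```

In the FET setting this closed-form step replaces an expensive generalized eigenvalue solve, which is what makes the alternating updates cheap.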

Figure 13: An iteration of a coarse-graining algorithm for a square lattice network, which uses the FET approach to reduce internal correlations. (a) Tensors and are decomposed into products of 3-index tensors using the singular value decomposition, where singular values have been retained. (b) The closed-loop truncation scheme is applied to a section of the network containing a loop of 8 tensors, in order to truncate indices of dimension to smaller dimension , as illustrated in (d). (c) A coarser square-lattice network is formed through the contractions depicted in (e).

Appendix E Application to tensor network renormalization

In this appendix we discuss the application of the proposed full environment truncation (FET) method to tensor renormalization algorithms for the coarse-graining of path integrals and partition functions. Here the goal is to improve over the standard tensor renormalization groupTRG1 () (TRG) approach by removing internal correlations from within closed loops of the network, similar to what was achieved with tensor network renormalizationTNR1 (); TNR2 (); TNR3 (); TNR4 () (TNR) and related algorithmsTNRc1 (); TNRc2 (); TNRc3 (); TNRc4 (); TNRc5 (), which likewise remove internal correlations from closed loops.

Figure 14: Relative error in the free energy per site of the classical Ising model on a lattice of spins at critical temperature, comparing (i) tensor renormalization groupTRG1 () (TRG), (ii) tensor renormalization group with enlarged environmentTRG5 () (TRG + env) and (iii) tensor renormalization group that includes full environment truncations (TRG + FET).

We consider a square lattice tensor network with a 2-site unit cell, composed of 4-index tensors and , as depicted in Fig. 13(a). This network could be representative of the path integral of a quantum system or the partition function of a classical system, see for instance Ref. TNR4. An overview of an iteration of the proposed coarse-graining scheme is presented in Fig. 13. The iteration begins by using the SVD to decompose the 4-index tensors into products of 3-index tensors, identical to the standard TRG approach, where we retain at most singular values for each index. Then the FET scheme is applied to remove internal correlations within loops of 8 tensors, by sequentially truncating each of the four indices of dimension within the loop to a smaller dimension , see also Fig. 13(d). Finally, groups of tensors are contracted to form new 4-index tensors and as depicted in Fig. 13(e), such that a coarser square lattice tensor network is obtained. These steps can be iterated many times to generate a sequence of increasingly coarse-grained lattices.
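The first step of the iteration, splitting a 4-index tensor via a truncated SVD, can be sketched as follows (the dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4     # bond dimension of the square-lattice tensors
chi = 8   # maximum number of retained singular values

# Split a 4-index tensor A[u, l, v, r] into two 3-index tensors across its
# diagonal: group (u, l) against (v, r), take the SVD, keep chi values.
A = rng.standard_normal((d, d, d, d))
M = A.reshape(d * d, d * d)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = min(chi, len(s))
A1 = (U[:, :k] * np.sqrt(s[:k])).reshape(d, d, k)         # first 3-index tensor
A2 = (np.sqrt(s[:k])[:, None] * Vt[:k]).reshape(k, d, d)  # second 3-index tensor

# The truncation error equals the Frobenius weight of the discarded values.
err = np.linalg.norm(M - A1.reshape(d * d, k) @ A2.reshape(k, d * d))
```

The subsequent FET step then further reduces the dimension of the connecting index by exploiting the loop environment, which a plain truncated SVD cannot do.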

This renormalization scheme is benchmarked by applying it to coarse-grain the classical Ising model at critical temperature. We compare the scheme against standard TRG and against an improved form of TRGTRG5 () that takes a larger region of the environment into account in order to achieve greater accuracy. For each method, 32 coarse-graining steps are applied in order to reach a lattice size of classical Ising spins. The resulting (per-site) errors in the free energy as a function of bond dimension are compared in Fig. 14. It is seen that TRG including the FET step significantly improves on both standard TRG and TRG with an enlarged environment. With bond dimension , which required approximately 20 minutes of computation time on a desktop PC, the TRG+FET scheme achieved a relative error in the free energy of . The accuracy achieved per unit of computation time appears to improve on the standard TNRTNR1 () approach as well as the so-called loop-TNR approachTNRc1 (), and to be comparable to the recently proposed GILT methodTNRc4 (). A key feature of the FET is that it is easily incorporated into any network geometry, similar to the GILT method, such that it could also be directly implemented, for instance, in higher-dimensional networks.
