An Efficient Algorithm for approximating 1D Ground States

Dorit Aharonov School of Computer Science and Engineering,
Hebrew University, Jerusalem, Israel
   Itai Arad School of Computer Science,
Tel-Aviv University, Tel-Aviv, Israel
   Sandy Irani Computer Science Department,
University of California, Irvine, CA, USA
July 3, 2019

The density-matrix renormalization-group (DMRG) method is very effective at finding ground states of one-dimensional (1D) quantum systems in practice, but it is a heuristic method, and there is no known proof of when it works. In this article we describe an efficient classical algorithm which provably finds a good approximation of the ground state of 1D systems under well-defined conditions. More precisely, given a bond dimension $D$, our algorithm finds a matrix product state of bond dimension $D$ whose energy approximates the minimal energy that such states can achieve. The running time is exponential in $D$, and so the algorithm can be considered tractable even for $D$ which is logarithmic in the size of the chain. The result also trivially implies that the ground state of any local commuting Hamiltonian in 1D can be approximated efficiently; we improve this to an exact algorithm.

I Introduction

Finding ground states of local one-dimensional (1D) Hamiltonian systems is a major problem in physics. The most commonly used method is the density-matrix renormalization-group (DMRG) White92 (); White93 (); ostlund1 (); ostlund2 (); peschel (); DMRG-overview (), discovered in 1992. DMRG can be cast in the form of matrix product states (MPSs), which are succinct representations of 1D quantum states in terms of matrices, where the coefficients of the state can be written as products of these matrices. The number of matrices is $dn$, where $d$ is the dimension of each individual particle and $n$ is the number of particles in the system; the matrices themselves are of dimension $D \times D$, and the parameter $D$ is called the bond dimension. DMRG works essentially as follows: The algorithm starts with some initial MPS and sweeps from one end of the chain to the other, optimizing the entries of the matrices at one site while the other parameters are kept fixed. Some versions allow optimizing over two neighboring sites at once, which enables the algorithm to increase the bond dimension in the course of the algorithm for improved accuracy. In all cases, the approach is to apply local optimizations iteratively. It is thus easy to construct examples in which the DMRG algorithm gets trapped in a local minimum. To illustrate this, think of a 1D spin chain whose Hamiltonian consists of two types of interactions: One type consists of interactions which force the spins to be aligned; every two neighboring sites gain an energy penalty of, say, $2$ if they are not aligned. The other type of term gives every spin an energy penalty of $1$ if it points upward. Starting from the all-up string, a local move only increases the energy; thus, local update rules cannot take the system to its ground state, the all-down string. This example can of course be handled by randomizing the initial string, for example, or by increasing the window size; however, it demonstrates that DMRG has a fundamental difficulty in addressing nonlocal characteristics of the system.
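The local-minimum example above is easy to check numerically. In the following sketch (the penalty values 2 for a misaligned pair and 1 for an up spin are our illustrative choices, matching the text), every single-spin flip away from the all-up string strictly increases the energy, even though the all-down string has energy 0:

```python
# Toy 1D chain: neighbors pay 2 if misaligned, each up spin (1) pays 1.
def energy(spins):
    misalign = sum(2 for a, b in zip(spins, spins[1:]) if a != b)
    field = sum(1 for s in spins if s == 1)
    return misalign + field

n = 10
all_up, all_down = [1] * n, [0] * n

# Energies of all single-spin flips away from the all-up string.
flips = []
for i in range(n):
    trial = list(all_up)
    trial[i] = 0
    flips.append(energy(trial))

# all-up is a strict local minimum for single-site updates,
# yet the true ground state is all-down with energy 0.
print(energy(all_up), energy(all_down), min(flips))  # 10 0 11
```

Flipping an end spin saves 1 in field energy but costs 2 in alignment energy; an interior flip costs 4, so no single-site move improves on the all-up string.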
It is natural to ask if there is a general algorithm that does not get stuck in local minima as DMRG does, and provably always finds a good approximation of the ground state of a given 1D system in a reasonable amount of time.

To answer this question, we first ask what is known regarding the analogous question in the easier, classical case. It was Kitaev kitaev:book () who drew the important connection between the problem of finding ground energies and ground states of local Hamiltonians, and the well-known classical constraint satisfaction problem (CSP). The input to a CSP consists of constraints $C_1,\dots,C_m$ on $n$ $q$-state classical particles. Each constraint acts on $k$ particles (for some constant $k$) and is given as a Boolean function on the $q^k$ possible assignments to those particles; $C_i = 1$ when the configuration is forbidden and $C_i = 0$ when it is allowed. The problem is to determine the maximum number of constraints that can be satisfied or, alternatively, to minimize $\sum_i C_i$. The decision version of this problem is to determine whether it is possible to satisfy more than some given number of constraints. This is one of the most well-known NP-complete problems. CSP can clearly be seen as a special case of the problem of finding ground states and ground energies of local Hamiltonians, in which the terms in the Hamiltonian are projections on local forbidden configurations. This analogy has led over the past few years to many interesting insights regarding the local Hamiltonian problem (see, e.g., Refs. kitaev:book (); bravyi (); adiabatic (); focsversion (); detectability (); randomsat ()).

Let us therefore see what the known classical results regarding CSPs in 1D can teach us about 1D local Hamiltonians and their ground states. We recall that in the classical case, 1D CSPs (in which the particles are arranged on a line and constraints act between adjacent neighbors) are dramatically easier than their higher-dimensional counterparts. While even the 2D case is NP-complete, the 1D problem can be solved in polynomial time. The reason for the tractability of the problem in 1D is essentially that the problem can be divided into subproblems, namely, the left- and right-hand sides of the chain, which interact only via the particles on the border. The fact that these particles can only be assigned a small number of possible values makes it possible to handle the problem by solving each subproblem separately for each fixed possible assignment to the border particles and then gluing the partial solutions together by picking the best choice for the middle particles. We explain the algorithm in detail later; the outcome is an algorithm which is linear in the number of particles in the chain and quadratic in the number of states per particle.

Unfortunately, there is no hope of getting such a general result for the 1D quantum problem. Aharonov et al. focsversion () have shown that approximating the ground energy for general 1D quantum systems is as hard as quantum-NP. Even when restricted to ground states that are well approximated by MPSs of polynomial bond dimension, the problem is NP-hard, as was shown by Schuch et al. schuch08 (). A related earlier result due to Eisert eisert06 () showed that optimizing a constant number of matrices in the MPS representation, subject to fixed values in the other matrices, is NP-hard. These results indicate that the dichotomy between the computational difficulty of 1D and 2D classical systems does not carry over to the quantum setting, and it is highly unlikely that the quantum 1D problem is tractable. Nevertheless, we show here that using the classical 1D algorithm as a template for an algorithm for the quantum problem leads to a solution for a wide and interesting class of local Hamiltonian problems, namely, for those cases in which we can assume that the bond dimension is small.

i.1 Main Result

We derive an efficient algorithm for approximating the minimal energy of a 1D system among all states of a bounded bond dimension $D$. The algorithm is exponential in $D$ and thus can be considered reasonable, though maybe not practical, even for $D$ which is logarithmic in the size of the chain. The algorithm also provides a description of an MPS with the approximate minimal energy.

Theorem 1

Let $H = \sum_{i=1}^{n-1} H_i$ be a nearest-neighbor Hamiltonian on a 1D system of $n$ $d$-dimensional particles. Let $J$ be a bound on the operator norm of each local term, $\|H_i\| \le J$. There is an algorithm that takes as input $H$, $D$, and $\epsilon > 0$ and produces an MPS $|\Gamma\rangle$ of bond dimension $D$, such that for any MPS $|\Psi\rangle$ of bond dimension $D$ with $\langle\Psi|\Psi\rangle = 1$,

$$\langle\Gamma|H|\Gamma\rangle \le \langle\Psi|H|\Psi\rangle + nJ\epsilon.$$

The algorithm runs in time $n \cdot \mathrm{poly}(N)$, where $N$ is the number of points in the $\epsilon$-nets constructed in Sec. III; $N$ scales like $(1/\epsilon)^{O(dD^2)}$.
Several remarks are in order here. First, note that the restriction to nearest-neighbor interactions involves no loss of generality, since any 1D system can be reduced to a 2-local 1D system with nearest-neighbor interactions by grouping neighboring particles together.

Note also that the running time in the above theorem is phrased as a linear function of $n$, the size of the system, times some fixed amount of time spent per particle. The error, however, also scales with $n$. One may want to apply the theorem to derive an approximation with a fixed additive error $\delta$; in that case, simply set $\epsilon = \delta/(nJ)$ in the above theorem to get the running time as a function of $\delta$.

This result shows that the problem of finding bounded bond dimension MPSs can be solved in polynomial time. Unfortunately, the running time, though efficient in theory, is quite impractical: even for small constant $d$ and $D$ and a constant error, the exponents involved make the running time enormous. It is hard to imagine that these running times are practical. Nevertheless, it is very likely that the running time can be improved; in particular, when solving specific problems with certain symmetries, dramatic improvements may be possible. Moreover, it is possible that this algorithm can be used to boost DMRG in certain cases where it gets stuck, or to create the initial state for DMRG. All these improvements are left for further research.

We now provide an overview of the algorithm. To understand the general idea, we first recall how the classical 1D algorithm works in detail. Consider the case of the classical CSP on a line with $k = 2$, namely, the problem of minimizing the energy function $E(x_1,\dots,x_n) = \sum_{i=1}^{n-1} E_i(x_i, x_{i+1})$, where each particle $x_i$ takes one of $q$ values. An optimal assignment can be found efficiently by a standard algorithmic technique called dynamic programming. Define the partial energy up to the $i$th particle, $E^{(i)} = \sum_{j=1}^{i-1} E_j(x_j, x_{j+1})$. The algorithm starts with the partial problem defined for $i = 2$ and creates a list $L_2$ of possible assignments to the first two particles as follows: For each of the $q$ possible assignments to particle $2$, the algorithm finds an assignment to particle $1$ which minimizes $E^{(2)}$. That optimal assignment is called the tail of the assignment to particle $2$. For each such assignment the algorithm keeps its tail and also the energy of this partial assignment, $E^{(2)}$. $L_2$ thus contains the best possible partial assignment with each possible ending. After $i-1$ iterations, we assume the algorithm has a list $L_i$ consisting of an optimal tail for each of the $q$ possible assignments to the $i$th particle, where optimality is measured with respect to $E^{(i)}$. In other words, the algorithm has a solution to the subproblem confined to the first $i$ particles, with any possible ending. To include the next particle and create the next list $L_{i+1}$, the algorithm finds the optimal tail of each assignment $x_{i+1}$. This is done by considering all items in the list $L_i$ as possible tails for $x_{i+1}$ and taking the tail which minimizes $E^{(i+1)} = E^{(i)} + E_i(x_i, x_{i+1})$. In each of the $n-1$ iterations, the algorithm checks, for each of the $q$ possible assignments $x_{i+1}$, all $q$ items in the list $L_i$. Thus, in time which is linear in $n$ and quadratic in $q$, we can derive the final list $L_n$. The final solution is the assignment of minimal energy in that list.
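The sweep just described can be written down directly. The following sketch (names and data layout are ours) takes the pairwise energies as a list of $q \times q$ tables and returns a minimizing assignment; as a usage example it solves the ferromagnet-with-field chain from the Introduction:

```python
def solve_1d_csp(E, q):
    """Minimize sum_i E[i][x_i][x_{i+1}] over assignments x in {0..q-1}^n.

    E is a list of n-1 tables: E[i][a][b] is the energy of particle i
    taking value a and particle i+1 taking value b.
    Returns (minimal energy, one optimal assignment)."""
    n = len(E) + 1
    # best[b]: minimal partial energy with the current particle set to b;
    # tails[i][b]: optimal value of particle i+1 given particle i+2 is b.
    best = [0] * q
    tails = []
    for i in range(n - 1):
        new_best, tail = [], []
        for b in range(q):
            a = min(range(q), key=lambda a: best[a] + E[i][a][b])
            new_best.append(best[a] + E[i][a][b])
            tail.append(a)
        best = new_best
        tails.append(tail)
    # pick the best ending and follow the tails backward
    b = min(range(q), key=lambda b: best[b])
    assignment = [b]
    for tail in reversed(tails):
        assignment.append(tail[assignment[-1]])
    assignment.reverse()
    return best[assignment[-1]], assignment

# Ferromagnet with a field: 2 per misaligned pair, 1 per up spin
# (the field on the last particle is folded into the last pairwise table).
n = 8
E = [[[2 * (a != b) + (a == 1) + ((b == 1) if i == n - 2 else 0)
       for b in (0, 1)] for a in (0, 1)] for i in range(n - 1)]
print(solve_1d_csp(E, 2))  # (0, [0, 0, 0, 0, 0, 0, 0, 0])
```

Each of the $n-1$ sweeps does $q^2$ work, giving the linear-in-$n$, quadratic-in-$q$ running time stated above.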

The main idea in this article is to generalize the above algorithm to MPSs by replacing assignments to particles with possible values of the MPS matrices. Since matrices are continuous objects, we use an $\epsilon$-net over all possible matrices of bond dimension $D$. The number of possible assignments to one variable, $q$, will now be replaced by the number of points in the $\epsilon$-net, denoted $N$. We will move from one site to the next, keeping track of the minimum-energy MPS which ends in each possible MPS matrix for the rightmost particle that the algorithm has reached.

In order to carry out this idea, the choice of the MPS matrices in later iterations must not be able to change the optimality of the partial MPS found in earlier iterations. To ensure this, we work with a restricted form of MPSs called canonical MPSs, in which the energy of each term in the Hamiltonian depends only on the MPS matrices associated with nearby particles. There are, however, various technical issues we need to handle. In particular, we cannot use perfectly canonical MPSs but only an approximate version of those, which imposes further technicalities; in particular, the neighboring MPS matrices do not match perfectly (we call this imperfect stitching). These technicalities make the error analysis a bit subtle. Before we formally define canonical MPSs and provide the details of the algorithm, we mention an implication for a related problem.

i.2 Commuting Hamiltonians in 1D

A problem related to finding minimum-energy MPSs is computing the ground energy of commuting Hamiltonians, in which all the local terms commute. Bravyi and Vyalyi proved that for 2-local Hamiltonians the problem lies inside NP bravyi (). For $k$-local commuting Hamiltonians with $k \ge 3$, the complexity of the problem is still open. The complexity of the 1D case was not studied before as far as we know; an immediate corollary of Theorem 1 is that there is an efficient classical algorithm for approximating the ground energy of commuting Hamiltonians in 1D to within any inverse polynomial accuracy. This is because the ground state of a commuting Hamiltonian in 1D is an MPS of constant bond dimension (this is a well-known fact that we explain later for completeness), and therefore Theorem 1 can be applied. In fact, the result can be improved to an exact algorithm (up to exponentially good approximations due to truncations of real numbers) for a certain general class of problems. We prove the following.

Theorem 2

Given a 1D Hamiltonian $H = \sum_i H_i$ whose terms mutually commute, there is an efficient algorithm that can compute the ground energy of $H$ to within any desired accuracy $\epsilon$ in time polynomial in $n$ and in $1/\epsilon$. If we may also assume that the ground space of the total Hamiltonian is well separated from the higher excited states, by a spectral gap which is at least $1/\mathrm{poly}(n)$, then the algorithm can find both the ground energy and a description of an MPS for the ground state exactly (i.e., up to exponentially small errors due to the handling of real numbers).

The basic idea for the exact algorithm can be illustrated when the terms in the Hamiltonian are all projections and the ground state is unique. Since the terms commute, the ground state is an eigenstate of each term separately, with eigenvalue either $0$ or $1$. We start by applying the dynamic programming algorithm to create a good approximation of the ground state. From this approximation we can deduce the correct eigenvalue ($0$ or $1$) of each of the terms. The projections on the relevant eigenspaces can then be applied to the MPS of the approximate state to make it exact. One gets a tensor network of small depth, which can be converted into an MPS again. It can be shown that applying the projections does not increase the bond dimension of the MPS too much with respect to the approximating state. The details are fleshed out in the proof (Sec. V).

Handling the degenerate case is easy; essentially, we force the dynamic programming algorithm to choose one state out of the various possible states. The assumption on the spectral gap ensures that the errors created by the $\epsilon$-net approximations do not cause confusion between the ground space and excited states.

We provide an alternative proof of Theorem 2, which also uses dynamic programming. In fact, this proof holds for a somewhat stronger version of the theorem, in which the conditions on the spectrum are far less restrictive. In the algorithm given by this approach, the state is not provided as an MPS but rather as a tensor product of two-particle states. The construction is based on the work of Ref. bravyi (), in which it is proved that the ground states of 2-local commuting Hamiltonians have this special structure. Bravyi and Vyalyi use this structure to show that the general 2-local commuting Hamiltonian problem is in NP. Since 1D chains with $k$-local interactions can always be made 2-local by treating nearby particles as one particle of a larger dimension, Ref. bravyi () implies that the 1D commuting problem lies in NP. However, by exploiting the special form of these ground states, dynamic programming can be applied to find the solution efficiently, in a manner very similar to the 1D CSP, in which the NP witness is found using the 1D structure. Unfortunately, in this approach too, it seems that one cannot avoid some assumption on the spectrum of the total Hamiltonian, albeit a significantly less restrictive one. Throughout its execution, the dynamic programming algorithm compares various partial energies. If these are too close, and cannot be distinguished even by computations performed with exponentially good precision, then the algorithm might confuse the ground energy with that of a slightly excited state. A sketch of the alternative proof of Theorem 2, providing the stronger version of it, and a discussion of the above precision issue are given in Sec. V.

We mention that this latter proof (and in particular the observation that dynamic programming can be useful for 1D quantum systems and not only for 1D classical systems) was the inspiration for the current article, rather than its corollary.

i.3 Discussion and Open Questions

It is natural to ask how much the results in this article can be improved. By Ref. schuch08 (), we know that no polynomial algorithm exists for finding optimal approximations of polynomial bond dimension (unless P = NP). However, the difficult instances of Ref. schuch08 () have a spectral gap of $1/\mathrm{poly}(n)$. Hastings has shown that the ground state of a 1D quantum system with a constant gap can be approximated by an MPS with polynomial bond dimension hastings (). However, this bond dimension is too large to immediately yield an efficient algorithm from our result. It may still be true, however, that under the additional restriction that the Hamiltonian has a constant gap, a polynomial-time algorithm exists, even when the bond dimension is as large as polynomial.

It is very likely that the efficiency of our algorithm can be significantly improved even in the general case. In particular, the error in Theorem 1 could be reduced if we could use an $\epsilon$-net which is both exactly canonical and enables perfect overlap between matrices at neighboring particles, as we later explain. Unfortunately, even if this can be done, the running time of this general algorithm is still quite large.

As mentioned earlier, we leave for further research the question of how this algorithm can be used in combination with DMRG, and how certain symmetries in the problem can be utilized to enhance its performance time for specific interesting cases.

We note that very similar results to those presented in this article were derived independently by Schuch and Cirac NI ().

i.4 Paper Organization

Section II starts by defining tensor networks, MPSs, and canonical MPSs. In Sec. III we describe the algorithm; this is where the $\epsilon$-nets are defined and an algorithm to generate them is given. Also in Sec. III, we show how they are used in the dynamic programming algorithm. Section IV provides an exact analysis of the error accumulated in the algorithm, and the complexity is analyzed as a function of the desired error. In Sec. V we provide the proofs regarding the approximate and exact solutions for the commuting 1D case. We defer several technical lemmas to the Appendix.

II Tensor Networks and Matrix Product States

ii.1 Tensor Networks

We start with some background on tensor networks, since MPSs are a special case of those. A detailed introduction to the use of tensor networks in the context of quantum computation can be found in Refs. markov2008simulating (); aharonov2006quantum (); arad2008quantum ().

A tensor network is a graph in which we allow some of the edges to be incident to only one node. These edges are called the legs of the network. Each node is assigned a tensor whose rank (number of indices) is equal to the degree of the node. Each index of the tensor corresponds to one edge that is incident to that node. To each edge (or index) we also assign a positive integer which indicates the range of the index. The indices associated with some of the edges in the tensor network may be assigned fixed values. The other edges are called free edges.

We call an assignment of values to the indices of the free edges in the network a configuration. With all the indices fixed, the tensor at each node in the network yields a particular value. We say that the value of the configuration is the product of the values for each of the nodes.

The value of the network is in general a tensor, whose rank is equal to the number of legs in the network. If there are no such legs, the value is simply a number (a scalar). Each assignment of values to the indices associated with the legs of the network gives rise to a value for the network tensor. We compute the tensor value for this assignment by summing over all configurations which are consistent with that assignment the value of each such configuration.

We note that often in the literature, one assigns values not to entire edges but to the two sides of an edge separately (where each side inherits its range of indices from the tensor associated with the node on that side). In the evaluation of the network, we require that the values on the two sides of one edge are equal, or else the entire configuration contributes zero to the sum.
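As a toy illustration of these definitions (the network shape and dimensions are our own choices), consider a network with three nodes A[i,a], B[a,j,b], C[b,k], where a and b are internal edges and i, j, k are legs. Its value, a rank-3 tensor, can be computed either by explicitly summing the configuration products or, equivalently, with a single einsum contraction:

```python
import numpy as np

# Three nodes A[i,a] -- B[a,j,b] -- C[b,k]: a, b are internal edges
# (ranges 3), i, j, k are the legs of the network (ranges 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2, 3))
C = rng.standard_normal((3, 2))

# Value of the network: one einsum sums over all internal configurations.
value = np.einsum('ia,ajb,bk->ijk', A, B, C)

# Same value, computed naively configuration by configuration.
naive = np.zeros((2, 2, 2))
for i in range(2):
    for j in range(2):
        for k in range(2):
            for a in range(3):
                for b in range(3):
                    naive[i, j, k] += A[i, a] * B[a, j, b] * C[b, k]

print(np.allclose(value, naive))  # True
```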

Tensors will be denoted by boldface fonts: $\mathbf{T}$. Their contraction will be denoted by an expression like $\mathbf{T}_1\mathbf{T}_2$, when it is clear from the context along which indices the contraction is performed.

It is possible to restrict a tensor of rank $r$ to a tensor of rank $r-1$ by assigning a fixed value to one of its legs. For example, $\mathbf{T}_\alpha$ is the restriction of the tensor $\mathbf{T}$ to the case in which the edge associated with the index $\alpha$ is given some fixed value (which, by the usual abuse of notation of variables and their values, will also be denoted as $\alpha$).

It is convenient to associate with every tensor (which can be given as a contraction of a tensor network) a quantum state. For example, let $\mathbf{T}_{i_1 \dots i_r}$ be a rank-$r$ tensor. Then we define $|\mathbf{T}\rangle = \sum_{i_1,\dots,i_r} \mathbf{T}_{i_1 \dots i_r} |i_1\rangle \cdots |i_r\rangle$.

ii.2 Matrix Product States

We work in the notation of Vidal vidal1 () for MPSs, with minor changes. An MPS of a chain of $n$ $d$-dimensional particles, with bond dimension $D$, is a tensor network with a 1D structure, as in Fig. 1. Horizontal edges correspond to indices ranging from $1$ to the bond dimension $D$ and are denoted $\alpha_1, \alpha_2, \dots$, while vertical edges correspond to indices ranging from $1$ to the physical dimension $d$. (In our description, the end particles will actually have a different physical dimension, denoted $d_{end}$. This is required for a technical reason described in Sec. II.3.) The indices of vertical edges are denoted $i_1, i_2, \dots$. The figures show two types of nodes: black and white. The tensors of black nodes are typically of rank $3$ (except for the boundary tensors, which are of rank $2$), and we denote them by $\Gamma$'s. For example, when the tensor that is second from the left is written with its indices, it is denoted $\Gamma^{(2)}_{\alpha_1 i_2 \alpha_2}$, where the index in parentheses corresponds to its location in the graph. The tensors associated with white nodes are always of rank $2$ and are denoted by $\lambda$'s. They are required to be diagonal and hence are given only one index (i.e., $\lambda^{(i)}_{\alpha_i}$). Without loss of generality, we will also demand that the entries of the $\lambda$'s are non-negative, since the phases can be absorbed into the neighboring $\Gamma$ tensors.

The MPS defined by this network is $|\Psi\rangle = \sum_{i_1,\dots,i_n} c_{i_1 \dots i_n} |i_1,\dots,i_n\rangle$ with

$$c_{i_1 \dots i_n} = \sum_{\alpha_1,\dots,\alpha_{n-1}} \Gamma^{(1)}_{i_1 \alpha_1}\, \lambda^{(1)}_{\alpha_1}\, \Gamma^{(2)}_{\alpha_1 i_2 \alpha_2}\, \lambda^{(2)}_{\alpha_2} \cdots \Gamma^{(n)}_{\alpha_{n-1} i_n}.$$

Figure 1: MPS as a tensor network.
Figure 2: A description of a canonical MPS. The tensors are chosen such that cutting the MPS between the $i$th and $(i+1)$th particles corresponds to the Schmidt decomposition between the left and right parts: $|\Psi\rangle = \sum_{\alpha_i} \lambda^{(i)}_{\alpha_i} |L_{\alpha_i}\rangle \otimes |R_{\alpha_i}\rangle$.

In the language of tensor states, $|\Psi\rangle$ is exactly the tensor state of the contraction $\Gamma^{(1)}\lambda^{(1)}\Gamma^{(2)}\lambda^{(2)}\cdots\Gamma^{(n)}$.
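For small chains this contraction can be carried out directly. The sketch below (with arbitrary, not canonical, tensors; the function name is ours) assembles the full coefficient tensor and is useful only for checking small examples, since it is exponential in $n$:

```python
import numpy as np

def mps_coefficients(gammas, lambdas):
    """Contract Gamma(1) lambda(1) Gamma(2) ... Gamma(n) into the full
    coefficient tensor c_{i1...in}.  gammas[0] has shape (d, D),
    interior gammas have shape (D, d, D), gammas[-1] has shape (D, d);
    lambdas[i] is the diagonal of the rank-2 tensor lambda(i+1)."""
    c = gammas[0] * lambdas[0]                    # attach lambda(1)
    for G, lam in zip(gammas[1:-1], lambdas[1:]):
        c = np.tensordot(c, G, axes=([-1], [0]))  # contract bond index
        c = c * lam                               # attach next lambda
    return np.tensordot(c, gammas[-1], axes=([-1], [0]))

d, D = 2, 3
rng = np.random.default_rng(1)
gammas = [rng.standard_normal((d, D)),
          rng.standard_normal((D, d, D)),
          rng.standard_normal((D, d))]
lambdas = [rng.random(D), rng.random(D)]

c = mps_coefficients(gammas, lambdas)
direct = np.einsum('ia,a,ajb,b,bk->ijk',
                   gammas[0], lambdas[0], gammas[1], lambdas[1], gammas[2])
print(c.shape, np.allclose(c, direct))  # (2, 2, 2) True
```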

ii.3 Canonical MPSs

An MPS is in canonical form if every cut in the chain induces a Schmidt decomposition (as in Fig. 2). In other words, we can rewrite the MPS by changing the order of summation to sum last over the index $\alpha_i$ of the $i$th $\lambda$ tensor: $|\Psi\rangle = \sum_{\alpha_i} \lambda^{(i)}_{\alpha_i} |L_{\alpha_i}\rangle |R_{\alpha_i}\rangle$, where $L_{\alpha_i}$ ($R_{\alpha_i}$) denotes the contraction of all the tensors to the left (right) of the cut with $\alpha_i$ fixed, and $|L_{\alpha_i}\rangle$ ($|R_{\alpha_i}\rangle$) are their corresponding tensor states. The canonical conditions are then that for all $i$ from $1$ to $n-1$, $\langle L_{\alpha_i}|L_{\beta_i}\rangle = \delta_{\alpha_i \beta_i}$ and $\langle R_{\alpha_i}|R_{\beta_i}\rangle = \delta_{\alpha_i \beta_i}$. In addition, for normalization, we require that the entire MPS is normalized, which is guaranteed by the normalization requirement on the $\lambda$ tensors, $\sum_{\alpha_i} (\lambda^{(i)}_{\alpha_i})^2 = 1$.

There is a small technical issue that needs attention: The canonical conditions cannot be satisfied at the boundaries if $d < D$. Consider, for example, the left boundary; there are not enough dimensions in the Hilbert space of the left particle for an orthonormal set of $D$ vectors to exist. This issue remains a problem even as we move away from the boundary by one particle, as the dimension of the left-side Hilbert space increases to $d^2$, which may still be smaller than $D$. There are many ways of handling this technicality; here we choose to assume that the particles at the ends of the chain have dimension at least $D$. This ensures that at any cut along the chain, the Hilbert spaces of the subsystems on each side have dimension at least $D$. We can achieve this by grouping $\ell$ particles at each end of the chain into a single particle, where $\ell$ is chosen to be the smallest integer such that $d^\ell \ge D$. Denote by $d_{end} = d^\ell$ the dimensionality of each of those end particles, and note that $d_{end} \le dD$. The dimension of the rest of the particles remains $d$. We renumber the particles after the grouping, so that the new first Hamiltonian term is the sum of the old terms acting on the grouped particles at the left end; the term in the Hamiltonian for the last two particles is adjusted in a similar manner. We will assume from now on that the Hamiltonian is given in this form.

Let us now see how the canonical conditions can be stated in a local manner. Graphically, the second condition says that contracting $|R_{\alpha_i}\rangle$ with $\langle R_{\beta_i}|$ yields $\delta_{\alpha_i \beta_i}$, and similarly from the other side; here the upper part of the network corresponds to $\langle R_{\beta_i}|$, and the lower part corresponds to $|R_{\alpha_i}\rangle$. Notice that the canonical conditions imply that we can “collapse” the network both from the left side and from the right side. Moreover, as this condition holds at every bond, it is not difficult to see that a necessary and sufficient condition for an MPS to be canonical consists of the following local conditions on the triplets $(\lambda^{(i-1)}, \Gamma^{(i)}, \lambda^{(i)})$: For every $1 < i < n$,

$$\sum_{\alpha_{i-1}, j} \bigl(\lambda^{(i-1)}_{\alpha_{i-1}}\bigr)^2\, \Gamma^{(i)}_{\alpha_{i-1} j \alpha_i}\, \bar\Gamma^{(i)}_{\alpha_{i-1} j \beta_i} = \delta_{\alpha_i \beta_i}, \qquad (3)$$

$$\sum_{j, \alpha_i} \Gamma^{(i)}_{\alpha_{i-1} j \alpha_i}\, \bigl(\lambda^{(i)}_{\alpha_i}\bigr)^2\, \bar\Gamma^{(i)}_{\beta_{i-1} j \alpha_i} = \delta_{\alpha_{i-1} \beta_{i-1}}. \qquad (4)$$

For $i = 1$ and $i = n$:

$$\sum_{j} \Gamma^{(1)}_{j \alpha_1}\, \bar\Gamma^{(1)}_{j \beta_1} = \delta_{\alpha_1 \beta_1}, \qquad \sum_{j} \Gamma^{(n)}_{\alpha_{n-1} j}\, \bar\Gamma^{(n)}_{\beta_{n-1} j} = \delta_{\alpha_{n-1} \beta_{n-1}}. \qquad (5)$$

We also require that the $\lambda$'s are normalized, namely, that for every $i$ from $1$ to $n-1$,

$$\sum_{\alpha_i} \bigl(\lambda^{(i)}_{\alpha_i}\bigr)^2 = 1. \qquad (6)$$
Graphically, these conditions are summarized in Fig. 3.

Figure 3: (a) The normalization condition for the $\lambda$'s. (b) The left/right canonical conditions for $\Gamma^{(i)}$, $1 < i < n$ [see Eqs. (3) and (4)]. (c) The boundary canonical conditions for $\Gamma^{(1)}$ and $\Gamma^{(n)}$ [see Eq. (5)].

Any triplet $(\lambda^{(i-1)}, \Gamma^{(i)}, \lambda^{(i)})$ that satisfies the normalization and the left and right canonical conditions [Eqs. (3), (4), and (6)] is called a canonical triplet. Such a triplet can be associated with a quantum state on three particles, $|\phi\rangle = \sum_{\alpha, j, \beta} \lambda^{(i-1)}_{\alpha}\, \Gamma^{(i)}_{\alpha j \beta}\, \lambda^{(i)}_{\beta}\, |\alpha\rangle|j\rangle|\beta\rangle$, with the following properties: $\langle\phi|\phi\rangle = 1$; the Schmidt basis of the first particle is the standard basis, with Schmidt coefficients $\lambda^{(i-1)}_\alpha$; and the Schmidt basis of the third particle is the standard basis, with Schmidt coefficients $\lambda^{(i)}_\beta$. A canonical MPS can thus be described as a set of canonical triplets (or, equivalently, 3-particle states) such that the right $\lambda$ tensor of one triplet is equal to the left $\lambda$ tensor of the next canonical triplet.

Instead of describing a canonical MPS in terms of canonical triplets $(\lambda^{(i-1)}, \Gamma^{(i)}, \lambda^{(i)})$, we will often describe it using canonical pairs $(\tilde\Gamma^{(i)}, \lambda^{(i)})$, where

$$\tilde\Gamma^{(i)}_{\alpha j \beta} = \lambda^{(i-1)}_{\alpha}\, \Gamma^{(i)}_{\alpha j \beta}.$$

The advantage is that for canonical MPSs, the entries of $\tilde\Gamma^{(i)}$ are always bounded (since the norm of $\tilde\Gamma^{(i)}$ satisfies $\|\tilde\Gamma^{(i)}\| = \sqrt{D}$; see Sec. II.4), unlike $\Gamma^{(i)}$, whose entries can approach infinity when the corresponding $\lambda^{(i-1)}$ entries approach zero.

An MPS that is described by the contraction $\tilde\Gamma^{(1)}\tilde\Gamma^{(2)}\cdots\tilde\Gamma^{(n)}$ can also be denoted by the pairs $(\tilde\Gamma^{(i)}, \lambda^{(i)})$. No information is lost, since $\lambda^{(i)}$ can always be recovered from the $\tilde\Gamma$'s: $\lambda^{(i)}_{\alpha_i}$ is the norm (see Sec. II.4) of the tensor state $|\tilde\Gamma^{(1)}\cdots\tilde\Gamma^{(i)}_{\alpha_i}\rangle$, the contraction of the first $i$ tensors with the index $\alpha_i$ fixed. (Recall that $\lambda^{(i)}_{\alpha_i}$ corresponds to a Schmidt coefficient in a Schmidt decomposition that coincides with the standard basis.) We define $\lambda^{(i)}$ this way also for non-canonical pairs.

The advantage of working with the canonical form is that the energy of local Hamiltonians involves only the local tensors, as the following figure illustrates:

The above equality is obtained using the canonical conditions described above. Consequently, the energy $\langle\Psi|H_i|\Psi\rangle$ involves only five tensors: $\lambda^{(i-1)}$, $\Gamma^{(i)}$, $\lambda^{(i)}$, $\Gamma^{(i+1)}$, and $\lambda^{(i+1)}$. Similarly, $\langle\Psi|H_1|\Psi\rangle$ depends only on $\Gamma^{(1)}, \lambda^{(1)}, \Gamma^{(2)}, \lambda^{(2)}$, and $\langle\Psi|H_{n-1}|\Psi\rangle$ depends only on $\lambda^{(n-2)}, \Gamma^{(n-1)}, \lambda^{(n-1)}, \Gamma^{(n)}$. It is important that each energy term does not involve tensors farther to the right in the chain, since the algorithm attempts to compute (or approximate) the optimal MPS up to a certain point. We would like to be able to grow the description of the state from left to right without affecting the energies we have already computed. If matrices on the right side of the chain affected energies of terms on the left side, we would need to go back and change the MPS matrices of the particles we have already handled after making new assignments to particles on the right. This would ruin the entire idea of dynamic programming.

Fortunately, any MPS representing a normalized state can be written as a canonical MPS with no increase in bond dimension. This follows from Ref. vidal1 (), in which it is shown that any state with Schmidt rank at most $D$ across every cut can be written as a canonical MPS with bond dimension $D$.
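The constructive route to the canonical form is via Schmidt (singular value) decompositions at every cut. The following sketch (standard linear algebra, not the exact construction of Ref. vidal1 ()) extracts the Schmidt coefficients of a random 4-particle state at each cut and checks the canonical conditions there:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 2
psi = rng.standard_normal(d**n) + 1j * rng.standard_normal(d**n)
psi /= np.linalg.norm(psi)

# At cut i, the lambda(i) are the singular values of the state reshaped
# into a (left, right) matrix; the SVD also yields orthonormal left and
# right Schmidt vectors, which is exactly the canonical condition.
for cut in range(1, n):
    M = psi.reshape(d**cut, d**(n - cut))
    U, lam, Vh = np.linalg.svd(M, full_matrices=False)
    assert np.isclose(np.sum(lam**2), 1.0)                 # normalization
    assert np.allclose(U.conj().T @ U, np.eye(len(lam)))   # <L_a|L_b> = delta
    assert np.allclose(Vh @ Vh.conj().T, np.eye(len(lam))) # <R_a|R_b> = delta
print("canonical conditions hold at every cut")
```

Sweeping these decompositions from left to right and absorbing the isometries into the site tensors yields the $\Gamma$'s and $\lambda$'s of a canonical MPS without increasing the Schmidt rank at any cut.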

ii.4 Tensor Norms and Distances

We use the $\ell_2$ norm on tensors, $\|\mathbf{T}\|^2 = \sum_{i_1,\dots,i_r} |\mathbf{T}_{i_1 \dots i_r}|^2$. This norm of course induces a metric, namely, a way of defining the distance between tensors of the same rank. It is easy to see that the norm of a tensor is equal to the Euclidean norm of its corresponding tensor state $|\mathbf{T}\rangle$. Also, for a rank-2 tensor (which can be viewed as a matrix), it is known that its operator norm is not larger than its tensor norm (which in this case is simply the Frobenius norm).

It is true that for any three tensors $\mathbf{T}_1, \mathbf{T}_2, \mathbf{T}_3$, we have $\|\mathbf{T}_1\mathbf{T}_2\mathbf{T}_3\| \le \|\mathbf{T}_1\| \cdot \|\mathbf{T}_2\| \cdot \|\mathbf{T}_3\|$. In fact, many times in the context of MPSs, a much stronger inequality holds. Assume $\mathbf{T}_1$ connects with $\mathbf{T}_2$ or $\mathbf{T}_3$ along one edge, indexed by $\alpha$. Assume further that $\|(\mathbf{T}_1)_\alpha\| \le 1$ for every $\alpha$ (in the context of canonical MPSs, it will often be the case that we consider the contraction of one side of the chain with a fixed index of the cut edge, and this contraction is indeed of norm $1$ by the canonical conditions). In this case, we have a much stronger inequality, which can be easily verified:

$$\|\mathbf{T}_1\mathbf{T}_2\mathbf{T}_3\| \le \|\mathbf{T}_2\| \cdot \|\mathbf{T}_3\|. \qquad (7)$$

We can apply this to cases of interest, when we compare contractions of tensors which differ in only a single term. For example, consider a vector $\mathbf{v}$ with norm $1$ and two tensors $\mathbf{A}$ and $\mathbf{B}$ such that when the connecting index $\alpha$ is fixed, the resulting tensors $\mathbf{A}_\alpha$ and $\mathbf{B}_\alpha$ have norm $1$. Let $\mathbf{v}'$, $\mathbf{A}'$, and $\mathbf{B}'$ be tensors with the same rank and dimensions as $\mathbf{v}$, $\mathbf{A}$, and $\mathbf{B}$. We have, by Eq. (7),

$$\|\mathbf{v}\mathbf{A}\mathbf{B} - \mathbf{v}'\mathbf{A}\mathbf{B}\| \le \|\mathbf{v} - \mathbf{v}'\|, \qquad (8)$$

and also

$$\|\mathbf{v}\mathbf{A}\mathbf{B} - \mathbf{v}\mathbf{A}'\mathbf{B}\| \le \|\mathbf{A} - \mathbf{A}'\|. \qquad (9)$$

And similarly,

$$\|\mathbf{v}\mathbf{A}\mathbf{B} - \mathbf{v}\mathbf{A}\mathbf{B}'\| \le \|\mathbf{B} - \mathbf{B}'\|. \qquad (10)$$
Iii The Algorithm

As discussed earlier, in order to carry out the outline described in Sec. I.1, we would like to work with canonical MPSs. Additionally, since the tensor pairs for neighboring nodes overlap, we would like an $\epsilon$-net over canonical pairs such that the $\lambda$ of one pair could be equal to the $\lambda$ on the left side of the next pair (we call this perfect stitching). We do not know how to efficiently construct an $\epsilon$-net that satisfies those conditions exactly; we resort to approximately canonical MPSs with approximate stitching.

iii.1 $\epsilon$-nets

We fix $\epsilon > 0$ (to be determined later) and define two $\epsilon$-nets. We start with discretizing the boundary tensors $\Gamma^{(1)}$ and $\Gamma^{(n)}$.

Definition 1 (the boundary net)

The boundary net is a set of canonical boundary tensors [see Eq. (5)] such that, for any canonical boundary tensor $\Gamma$, there is a $\Gamma'$ in the net such that for each $\alpha$, $\|\Gamma_\alpha - \Gamma'_\alpha\| \le \epsilon$.

We now define an $\epsilon$-net over the intermediate tensors or, more precisely, over the pairs $(\tilde\Gamma^{(i)}, \lambda^{(i)})$.

Definition 2 (the pair net)

The pair net is a set of pairs of tensors $(\tilde\Gamma, \lambda)$ such that:

  1. $\lambda$ is positive and normalized: For all $\alpha$, $\lambda_\alpha \ge 0$ and $\sum_\alpha \lambda_\alpha^2 = 1$.

  2. The set is an $\epsilon$-net: For every canonical triplet there is a pair $(\tilde\Gamma, \lambda)$ in the net such that $\|\tilde\Gamma' - \tilde\Gamma\| \le \epsilon$ and $\|\lambda' - \lambda\| \le \epsilon$, where $(\tilde\Gamma', \lambda')$ is the canonical pair corresponding to the triplet.

  3. The pairs are perfectly right canonical: Every $(\tilde\Gamma, \lambda)$ in the net satisfies the right canonical condition exactly (here the free indices of the condition are the left Greek indices of $\tilde\Gamma$).

  4. The pairs are approximately left canonical: Every pair in the net satisfies the left canonical condition up to an error of $O(\epsilon)$.


iii.2 $\epsilon$-net Generators

We now explain how to construct such $\epsilon$-nets efficiently. Both generators will make use of the following general lemma.

Lemma 3

For any positive integers $k \le p$ and any $\epsilon$ in the range $0 < \epsilon \le 1$, we can generate a set $\mathcal{M}$ of $k \times p$ matrices over the complex numbers such that for any $M \in \mathcal{M}$, the rows of $M$ are an orthonormal set of length-$p$ vectors. Furthermore, for any $k \times p$ matrix $A$ whose rows form a set of orthonormal vectors, there is a matrix $M \in \mathcal{M}$ such that each row of $A - M$ has norm at most $\epsilon$. The size of $\mathcal{M}$ is at most $(c/\epsilon)^{2kp}$ for some constant $c$, and the time to generate it is polynomial in its size. If $k = 1$, we can generate a set of vectors with real non-negative entries rather than complex ones; the size of the net is then $(c/\epsilon)^{p}$, and the time to generate it is polynomial in that size.

The proof appears in the Appendix.
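The construction behind Lemma 3 can be conveyed by a toy generator in the real case: discretize every entry on a $\delta$-grid and orthonormalize the rows, so that every member of the net has exactly orthonormal rows while nearby grid points cover every target matrix. This is an illustrative sketch, exponential in $k\ell$ like the lemma's net, not the Appendix construction itself:

```python
import itertools
import numpy as np

def gram_schmidt_rows(M, tol=1e-9):
    """Orthonormalize the rows of M; return None if they are dependent."""
    rows = []
    for row in M.astype(float):
        for q in rows:
            row = row - (q @ row) * q       # project out earlier rows
        norm = np.linalg.norm(row)
        if norm < tol:
            return None
        rows.append(row / norm)
    return np.array(rows)

def generate_net(k, l, delta):
    """Brute-force delta-net over real k x l matrices with orthonormal rows."""
    grid = np.arange(-1.0, 1.0 + 1e-9, delta)   # discretize each entry
    net = []
    for entries in itertools.product(grid, repeat=k * l):
        Q = gram_schmidt_rows(np.asarray(entries).reshape(k, l))
        if Q is not None:
            net.append(Q)
    return net
```

Already for $k=1$, $\ell=2$, $\delta=0.5$ the net covers every unit vector in the plane to within a modest constant, at exponential cost in the number of entries.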

III.2.1 Generating the boundary ε-net:

Invoke Lemma 3 with the dimensions of the boundary tensors and with the $\delta$ of the lemma chosen proportional to ε. For every matrix generated, add the corresponding boundary tensor to the net. Note that the conditions of Lemma 3 are satisfied in the relevant parameter range. The bounds of the lemma give the size of the net, and the time to generate it is a polynomial factor times the size of the set.

III.2.2 Generating the ε-net over pairs:

We generate the net by first generating an ε-net over the positive normalized vectors appearing in the pairs and then an ε-net over the accompanying tensors. To generate the net over the vectors, invoke Lemma 3 with $k = 1$ and with the $\delta$ in the lemma set proportional to ε. Note that we would like vectors with non-negative real entries; according to Lemma 3, this actually requires fewer items in our net, since we omit the phase of each entry. The resulting net can be generated in time proportional to its size.

To generate the ε-net over the accompanying tensors, we invoke Lemma 3 with the appropriate dimensions; note that invoking the lemma requires the row dimension not to exceed the column dimension. For any matrix in the resulting set, we generate the corresponding tensor. In this way we generate a set of pairs which satisfies both the normalization condition [condition (1) of Definition 2] and the condition of being perfectly right canonical [condition (3) of Definition 2].

To see that we in fact have an ε-net [i.e., that condition (2) is satisfied], consider a perfectly canonical pair, and let us find a pair in the net that is ε-close to it. We first replace the vector with one from the first net and then replace the tensor with one from the second net. Using Eq. (8), we have that

Using Eq. (10), we also have

Next, we discard all tensors that are not approximately left canonical, namely, those that violate condition (4). It remains to show that the remaining tensors still satisfy condition (2), that is, the ε-net condition. We do that by showing that a pair that is ε-close to a canonical triplet must necessarily be approximately left canonical; such a pair would therefore not have been eliminated.

To see this, let one tensor be the contraction of the canonical triplet and the other be the contraction of the pair from the net, the two contractions being ε-close. That the triplet is perfectly left canonical is expressed by the fact that its left contraction with its own conjugate is exactly the identity. To prove that the net pair is approximately left canonical, we must bound the deviation of the analogous contraction from the identity. Indeed, closeness of the two contracted tensors implies closeness of each of their components, and hence of the two left contractions. Then

This concludes the proof that the generated set is indeed an ε-net according to Definition 2.
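The continuity argument of the last paragraph is easy to check numerically. A small sketch under an assumed convention (left canonical meaning that the $d$ stacked $D\times D$ matrices form an isometry):

```python
import numpy as np

rng = np.random.default_rng(2)
d, D, delta = 2, 3, 1e-3

# A perfectly left-canonical tensor: d stacked D x D matrices forming an isometry.
W = np.linalg.qr(rng.standard_normal((d * D, D)))[0]   # orthonormal columns
A = W.reshape(d, D, D)
assert np.allclose(sum(a.conj().T @ a for a in A), np.eye(D))

# A delta-close tensor is approximately left canonical: the deviation of the
# canonical condition from the identity is of the same order as delta.
B = A + delta * rng.standard_normal(A.shape)
drift = np.linalg.norm(sum(b.conj().T @ b for b in B) - np.eye(D))
assert drift < 100 * delta
```

The drift is linear in the perturbation (up to a second-order term), which is why ε-close pairs survive the discarding step.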

III.2.3 Complexity of Generating the ε-nets:

By Lemma 3, the size of the ε-net over pairs is


This is the size of the set formed by taking all pairs in which each component comes from its respective net. The time required to generate the original net (before tensors are discarded) is proportional to this size, and the cost of checking whether a pair is approximately left canonical is polynomial in the tensor dimensions, so the total cost of generating the net is the product of these factors.

For the boundary net, both the number of points and the running time, which were determined in Sec. III.2.1, are bounded above by the corresponding bounds for the net over pairs.

III.3 The algorithm

When processing a particle, the algorithm creates a list of partial solutions, one for each pair in the ε-net. For each such partial solution, a tail (i.e., the tensors to the left of the current particle) and an energy value are kept.

First step:

Create the first list: for each pair in the ε-net, find its tail, namely, the boundary tensor that minimizes the energy of the first local term with respect to the contracted tensor. We keep both the tail and the computed minimal energy for each pair.

Going from one site to the next:

We assume we have created the previous list. For each pair in it there is a tail:

and an associated energy value. To create the next list, we find a tail for each pair in the ε-net. We require that the tail for a given pair is an item in the previous list which satisfies the “stitching” condition:


We pick the tail for the pair to be an item in the previous list which satisfies the stitching condition and minimizes the accumulated energy. The minimum such value is the energy recorded for the pair.

Final step:

The final step is exactly like the intermediate steps, except that the algorithm goes over the boundary ε-net rather than over pairs from the pair net, and there is no stitching constraint. More precisely, we pick the tail for each boundary tensor to be the item in the last list which minimizes the accumulated energy, and record the minimal value obtained.

Finally, we choose the boundary tensor which minimizes this value. We output the MPS that is defined by this tensor and its tail:


together with the energy which the algorithm calculated:


Note that since each pair in the net is perfectly right canonical, the state is normalized. This can be seen by contracting the tensor network corresponding to the inner product from right to left.
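The sweep above is ordinary dynamic programming over net elements. The following Python sketch captures its control flow on an abstract instance; the interfaces (`local_energy`, `stitch`, the way tails are stored) are illustrative stand-ins, not the paper's definitions:

```python
def dp_min_energy(net, boundary_net, local_energy, n, stitch):
    """Dynamic-programming sweep over net elements for an n-site chain.

    `net` plays the role of the eps-net over intermediate pairs,
    `boundary_net` the net over boundary tensors, `local_energy(x, y)` the
    energy of one local term on two consecutive elements, and `stitch(q, p)`
    the stitching predicate. Returns the minimal accumulated energy and the
    optimizing sequence of elements.
    """
    # First list: for each net element, the best boundary tail and its energy.
    best = {p: min(((local_energy(b, p), [b]) for b in boundary_net),
                   key=lambda t: t[0]) for p in net}
    # Intermediate sweeps: extend each element by its best stitchable predecessor.
    for _ in range(n - 3):
        new = {}
        for p in net:
            cands = [(e + local_energy(q, p), tail + [q])
                     for q, (e, tail) in best.items() if stitch(q, p)]
            if cands:
                new[p] = min(cands, key=lambda t: t[0])
        best = new
    # Final step: attach a right boundary element; no stitching constraint.
    e, tail = min(((e + local_energy(p, b), tail + [p, b])
                   for p, (e, tail) in best.items() for b in boundary_net),
                  key=lambda t: t[0])
    return e, tail

# Toy instance echoing the Introduction: an alignment penalty plus a field
# penalizing up spins; the global optimum is the all-down chain.
E = lambda x, y: (1.0 if x != y else 0.0) + (0.25 if y == 'u' else 0.0)
energy, chain = dp_min_energy(['u', 'd'], ['u', 'd'], E, 5, lambda q, p: True)
```

Because each list keeps, for every net element, the best partial solution so far, the sweep is immune to the local-minimum trap described in the Introduction; this is the content of Claim 4 below.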

Unlike in the classical case, our algorithm does not search over all states, due to the discretization. Moreover, it does not optimize over the true energy of the states that it does check, but rather over an approximate energy, which differs from the true energy because the states are not exactly canonical. The output energy is thus only an approximation of the real energy of the output MPS. We output it anyway, since our guarantee on its error is somewhat better than that on the error of the true energy, as we will see in Sec. IV.

The following claim easily follows from the same reasoning as for the classical dynamic programming algorithm:

Claim 4

The algorithm finds the state which minimizes the approximate energy among all MPSs built from the nets, such that every intermediate pair belongs to the ε-net, the boundary tensors belong to the boundary net, and the stitching conditions [Eq. (13)] are all satisfied.

IV Error and Complexity Analysis

In order to finish the proof of Theorem 1, we will prove the theorem below. As noted above, this theorem actually gives a better error bound on the computed energy than the bound on the true energy that is given in Theorem 1.

Theorem 5 (Error bound)

Let $E_D$ denote the minimal energy that can be achieved by a state with bond dimension $D$, and let $J$ denote the maximal operator norm over all terms. Then:


It is easy to verify that as long as , Eq. (16) implies Eq. (1) of Theorem 1.


By definition, . We first prove that . Let:

be a state of bond dimension $D$, written as a canonical MPS. For every intermediate triplet, we associate a pair from the ε-net which is ε-close to that triplet. In addition, we find net elements close to the two boundary tensors. We define the state:

Just like the original state, this state is normalized, due to the fact that the tensors taken from the nets are perfectly right canonical.

We first claim that the energy computed by the algorithm is at most the approximate energy of this state. This follows from the fact that the state belongs to the set of states over which the dynamic-programming algorithm searches (see Claim 4), since the chosen net elements satisfy the stitching condition (13), as guaranteed by the following lemma:

Lemma 6

For every ,


Proof: We use the fact (established in Lemma 8 in the Appendix) that for any two bipartite states, each written in Schmidt-like form with normalized components, the vectors of coefficients are at least as close as the states themselves:

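The display stating this fact was lost; under the natural reading just given, a short derivation (a sketch, with assumed notation $\mu_\alpha,\nu_\alpha$ for the two coefficient vectors) runs as follows:

```latex
\begin{align*}
\bigl\| \,|\phi\rangle - |\psi\rangle \,\bigr\|^2
  &= \Bigl\| \sum_\alpha \bigl( \mu_\alpha |\alpha\rangle|\phi_\alpha\rangle
       - \nu_\alpha |\alpha\rangle|\psi_\alpha\rangle \bigr) \Bigr\|^2
   = \sum_\alpha \bigl\| \mu_\alpha |\phi_\alpha\rangle
       - \nu_\alpha |\psi_\alpha\rangle \bigr\|^2
   \;\ge\; \sum_\alpha \bigl( \mu_\alpha - \nu_\alpha \bigr)^2 .
\end{align*}
```

The second equality uses the orthonormality of the $|\alpha\rangle$, and the inequality is the reverse triangle inequality, since $\|\mu_\alpha|\phi_\alpha\rangle\| = \mu_\alpha$ and $\|\nu_\alpha|\psi_\alpha\rangle\| = \nu_\alpha$ with $\mu_\alpha,\nu_\alpha \ge 0$.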
The tensors represent two quantum states on three particles, where in both states the Schmidt basis of the first particle is the standard basis, and the perfect right canonical condition of Definition 2 [or, alternatively, the condition of Eq. (4)] holds. The Schmidt coefficients of the two states are given by the respective vectors of the pairs. According to the above fact (Lemma 8),


Similarly, we obtain the corresponding bound for the other pair of coefficients. Consider now these three-particle states expanded in terms of the basis vectors of the third particle, with normalized component states in each expansion. Then, by definition, the coefficients of these expansions are the relevant Schmidt coefficients, and we can apply Lemma 8 again. Together with Eq. (18), we therefore obtain the claimed bound. ∎

Thus far, we have established the first of the two inequalities. To prove the second, observe that each energy term in the one state depends solely on two overlapping triplets, while the corresponding energy term in the other depends only on the associated pairs from the net. We now bound the distance between these two tensors. We have

Comparing the left- and right-hand sides of the above equation, and using Eqs. (8) and (9), we have that