A Strictly Single-Site DMRG Algorithm with Subspace Expansion
Abstract
We introduce a strictly single-site DMRG algorithm based on the subspace expansion of the Alternating Minimal Energy (AMEn) method. The proposed new MPS basis enrichment method is sufficient to avoid local minima during the optimisation, similarly to the density matrix perturbation method, but computationally cheaper. Each application of $\hat H$ to $|\psi\rangle$ in the central eigensolver is reduced in cost for a speedup of $\approx (d+1)/2$, with $d$ the physical site dimension. Further speedups result from cheaper auxiliary calculations and an often greatly improved convergence behaviour. Runtime to convergence improves by up to a factor of 2.5 on the Fermi-Hubbard model compared to the previous single-site method and by up to a factor of 3.9 compared to two-site DMRG. The method is compatible with real-space parallelisation and non-abelian symmetries.
I Introduction
Since its introduction in 1992,White (1992, 1993) the Density Matrix Renormalisation Group method (DMRG) has seen tremendous use in the study of one-dimensional systems.Schollwöck (2005, 2011) Various improvements such as real-space parallelisation,Stoudenmire and White (2013) the use of abelian and non-abelian symmetriesMcCulloch and Gulácsi (2002) and multigrid methodsDolfi et al. (2012) have been proposed. Most markedly, the introductionWhite (2005) of density matrix perturbation steps allowed the switch from two-site DMRG to single-site DMRG in 2005, which provided a major speedup and improved convergence, in particular for systems with long-range interactions.
Nevertheless, despite some progress,Yan et al. (2011); Depenbrock et al. (2012); Stoudenmire and White (2012) (nearly) two-dimensional systems, such as long cylinders, are still a hard problem for DMRG. The main reason for this is the different scaling of entanglement due to the area law:Vidal et al. (2003); Eisert et al. (2010) In one dimension, entanglement and hence matrix dimensions in DMRG are essentially size-independent for ground states of gapped systems, whereas in two dimensions, entanglement grows linearly and matrix dimensions roughly exponentially with system width.
As a result, the part of the Hilbert space considered by DMRG during its ground state search increases dramatically, resulting mainly in three problems: firstly, the DMRG algorithm becomes numerically more challenging as the sizes of the matrices involved grow (we will assume multiplication of two $n \times n$ matrices to scale as $\mathcal{O}(n^3)$ throughout the paper). Secondly, the increased search-space size makes it more likely to get stuck in local minima. Thirdly, while sequential updates work well in 1D chains with short-range interactions, nearest-neighbour sites in the 2D lattice can be separated much farther in the DMRG chain. Therefore, improvements to the core DMRG algorithm are still highly worthwhile.
In this paper, we will adopt parts of the AMEn methodDolgov and Savostyanov (2014) developed in the tensor train/numerical linear algebra community to construct a strictly single-site DMRG algorithm that works without accessing the (full) reduced density matrix. Compared to the existing center-matrix wavefunction formalism (CWF),McCulloch (2007) we achieve a speedup of $\approx (d+1)/2$ during each application of $\hat H$ to $|\psi\rangle$ in the eigensolver during the central optimisation routine, where $d$ is the dimension of the physical state space on each site.
The layout of this paper is as follows: Section II will establish the notation. Section III will recapitulate the density matrix perturbation method and the CWF. Section IV will introduce the subspace expansion method and the heuristic expansion term with a simple two-spin example. The strictly single-site DMRG algorithm (DMRG3S) will be presented in Section V alongside a comparison with the existing CWF. As both the original perturbation method and the heuristic subspace expansion require a mixing factor $\alpha$,White (2005) Section VI describes how to adaptively choose $\alpha$ for fastest convergence. Numerical comparisons and examples will be given in Section VII.
II DMRG Basics
The notation established here closely follows the review article Ref. Schollwöck, 2011. Consider a state $|\psi\rangle$ of a system of $L$ sites. Each site $i$ has a physical state dimension $d_i$, e.g. $d_i = 2$ for a system of spin-$\frac{1}{2}$ particles:
$$|\psi\rangle = \sum_{\sigma_1, \dots, \sigma_L} c_{\sigma_1 \dots \sigma_L}\, |\sigma_1 \dots \sigma_L\rangle \qquad (1)$$
In practice, the dimension of the physical basis is usually constant, $d_i = d$, but we will keep the subscript to refer to one specific basis on site $i$ where necessary.
It is then possible to decompose the coefficients $c_{\sigma_1 \dots \sigma_L}$ as a series of rank-3 tensors $M^{\sigma_i}_{a_{i-1} a_i}$ of size $m_{i-1} \times m_i$ respectively, with $m_0 = m_L = 1$. The coefficient can then be written as the matrix product of the corresponding matrices in $\{M^{\sigma_i}\}$:
$$c_{\sigma_1 \dots \sigma_L} = M^{\sigma_1} M^{\sigma_2} \cdots M^{\sigma_L} \qquad (2)$$
The maximal dimension $m = \max_i m_i$ is called the MPS bond dimension. In typical one-dimensional calculations a moderate $m$ suffices, but for e.g. cylinders, a much larger $m$ is often necessary. It is in these numerically demanding cases that our improvements are of particular relevance.
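The decomposition (2) can be obtained constructively by a sequence of reshapes and singular value decompositions. As an illustrative sketch in Python/NumPy (not part of the algorithm proposed in this paper), the following snippet decomposes a random four-site state into left-normalised MPS tensors and verifies that the matrix product reproduces the original coefficients:

```python
import numpy as np

d, L = 2, 4
rng = np.random.default_rng(0)
c = rng.standard_normal((d,) * L)        # coefficients c_{sigma_1 ... sigma_L}
c /= np.linalg.norm(c)

# split off one site at a time via SVD -> left-normalised tensors A^{sigma_i}
tensors, rest, m = [], c.reshape(1, -1), 1
for i in range(L - 1):
    U, S, Vh = np.linalg.svd(rest.reshape(m * d, -1), full_matrices=False)
    tensors.append(U.reshape(m, d, -1))  # A[a_{i-1}, sigma_i, a_i]
    m = U.shape[1]
    rest = S[:, None] * Vh               # carry S V^dagger to the right
tensors.append(rest.reshape(m, d, 1))

# recontract the matrix product and compare with the original coefficients
vec = np.ones((1, 1))
for A in tensors:
    vec = np.einsum('xa,asb->xsb', vec, A).reshape(-1, A.shape[2])
assert np.allclose(vec[:, 0], c.reshape(-1))
```

Keeping only the largest singular values at each split would compress the state down to a chosen bond dimension $m$ at the price of a truncation error.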
Similarly, a Hamiltonian operator $\hat H$ can be written as a matrix product operator (MPO), where each tensor is now of rank 4, namely $W^{\sigma_i \sigma_i^\prime}_{b_{i-1} b_i}$:
$$\hat H = \sum_{\{\sigma\}, \{\sigma^\prime\}} W^{\sigma_1 \sigma_1^\prime} W^{\sigma_2 \sigma_2^\prime} \cdots W^{\sigma_L \sigma_L^\prime}\, |\sigma_1 \dots \sigma_L\rangle \langle \sigma_1^\prime \dots \sigma_L^\prime| \qquad (3)$$
$w = \max_i w_i$ is called the MPO bond dimension. We will usually assume that $w_i = w$ for most $i$, and $w \ll m$. In practice, this holds nearly everywhere except at the ends of the chain, where the $w_i$ grow exponentially from $1$ to $w$. The basis $a_{i-1}$ ($b_{i-1}$) of dimension $m_{i-1}$ ($w_{i-1}$) is called the left-hand side (LHS) basis, whereas the basis $a_i$ ($b_i$) of dimension $m_i$ ($w_i$) is the right-hand side (RHS) basis of this tensor. For simplicity, $a_i$, $b_i$, $m_i$ and $w_i$ can also refer to the specific basis (and not only its dimension) when unambiguous.
Instead of $M^{\sigma_i}$, we will also write $A^{\sigma_i}$ ($B^{\sigma_i}$) for a left- (right-) normalised MPS tensor:
$$\sum_{\sigma_i} A^{\sigma_i \dagger} A^{\sigma_i} = \mathbf{1} \qquad (4)$$
$$\sum_{\sigma_i} B^{\sigma_i} B^{\sigma_i \dagger} = \mathbf{1} \qquad (5)$$
If we then define the contractions
$$|a_i\rangle_L = \sum_{\sigma_1 \dots \sigma_i} \left( M^{\sigma_1} \cdots M^{\sigma_i} \right)_{1, a_i} |\sigma_1 \dots \sigma_i\rangle \qquad (6)$$
$$|a_i\rangle_R = \sum_{\sigma_{i+1} \dots \sigma_L} \left( M^{\sigma_{i+1}} \cdots M^{\sigma_L} \right)_{a_i, 1} |\sigma_{i+1} \dots \sigma_L\rangle \qquad (7)$$
we can rewrite $|\psi\rangle$ from (2) as
$$|\psi\rangle = \sum_{a_i} |a_i\rangle_L\, |a_i\rangle_R \qquad (8)$$
That is, when only considering one specific bond $i$, the left and right MPS bases at this bond are built up from the states generated by the MPS tensor chains to the left and right of the bond. Individual elements of an MPS basis are therefore called “states”.
Furthermore, define $L_i$ as the contraction of $\bar M^{\sigma_1} \cdots \bar M^{\sigma_i}$, $W^{\sigma_1 \sigma_1^\prime} \cdots W^{\sigma_i \sigma_i^\prime}$ and $M^{\sigma_1^\prime} \cdots M^{\sigma_i^\prime}$ with summation over all possible indices. Similarly, define $R_i$ as the corresponding contraction of the tensors on sites $i, \dots, L$. With these contractions, it is possible to write
$$\langle \psi | \hat H | \psi \rangle = L_{i-1} \cdot \Big( \sum_{\sigma_i \sigma_i^\prime} \bar M^{\sigma_i} W^{\sigma_i \sigma_i^\prime} M^{\sigma_i^\prime} \Big) \cdot R_{i+1} \qquad (9)$$
for any $i$.
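The environment contractions $L_i$ and the evaluation of the energy as in Eq. (9) can be sketched numerically. The snippet below uses a toy MPO for $\hat H = \sum_i S^z_i$ (an illustrative choice, not a system studied in this paper), builds the left environments site by site, and checks the result against a dense computation:

```python
import numpy as np

rng = np.random.default_rng(4)
d, L = 2, 3
Sz = np.diag([0.5, -0.5])

# MPO for H = sum_i Sz_i, bond dimension w = 2: W = [[I, 0], [Sz, I]]
W = np.zeros((2, 2, d, d))
W[0, 0] = np.eye(d); W[1, 0] = Sz; W[1, 1] = np.eye(d)
mpo = [W[1:, :, :, :]] + [W] * (L - 2) + [W[:, :1, :, :]]   # boundary rows/columns

# random MPS with bond dimension 3
dims = [1, 3, 3, 1]
mps = [rng.standard_normal((dims[i], d, dims[i + 1])) for i in range(L)]

# build the left environment site by site; the full contraction is <psi|H|psi>
env = np.ones((1, 1, 1))                       # indices: [bra bond, MPO bond, ket bond]
for M, Wt in zip(mps, mpo):
    env = np.einsum('pba,ptq,bcts,asr->qcr', env, M.conj(), Wt, M)
energy = env[0, 0, 0]

# dense cross-check
psi = np.ones((1, 1))
for M in mps:
    psi = np.einsum('xa,asb->xsb', psi, M).reshape(-1, M.shape[2])
psi = psi[:, 0]
Hd = sum(np.kron(np.kron(np.eye(d**i), Sz), np.eye(d**(L - 1 - i))) for i in range(L))
assert np.isclose(energy, psi @ Hd @ psi)
```

The same recursion, run from the right with right-normalised tensors, yields the $R_i$.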
DMRG then works by sweeping over the system multiple times. During each sweep, each site tensor is sequentially updated once, with each update consisting of one optimisation step via e.g. a sparse eigensolver and possibly one enrichment step during which the left or right MPS basis of the tensor is changed in some way. Depending on the exact implementation, updates may work on one (single-site DMRG) or two sites (two-site DMRG) at a time. The enrichment step may be missing or implemented via density matrix perturbation or subspace expansion.
III Perturbation Step and Center-Matrix Wavefunction Formalism (CWF)
III.1 Convergence Problems of Single-Site DMRG
During single-site DMRG, only a single MPS tensor $M^{\sigma_i}$ on site $i$ is optimised at once. Compared to two-site DMRG, the search space is reduced by a factor of $d$, leading to a speedup of at least $d$ per iteration.White (2005) However, since the left and right bases of the tensors are fixed and defined by the environment ($L_{i-1}$ and $R_{i+1}$), this approach is likely to get stuck. While also occurring if there are no symmetries implemented on the level of the MPS, this issue is most easily visible if one considers symmetries:Schollwöck (2011) assume that all basis states to the right of the RHS bond of $M^{\sigma_i}$ transform as some quantum number $q$. If we now target a specific sector, e.g. a total quantum number of zero overall, then on the LHS of this bond (i.e. from the left edge up to and including $M^{\sigma_i}$), all states must transform as $-q$. In this configuration, it is impossible for a local change of $M^{\sigma_i}$ to add a new state that transforms as, say, $q^\prime \neq -q$, to its right basis states, as there would be no corresponding state to the right of that bond, rendering the addition of the state moot from the perspective of the local optimiser, as its norm will be zero identically. A concrete example of this issue is given in Section VII.1.
DMRG is a variational approach on the state space available to MPS of a given bond dimension. As such, the algorithm must converge into either the global or a local minimum of the energy in this state space. Hence, we will call all cases where DMRG converges to an energy substantially higher than the minimal energy achievable with the allowed MPS bond dimension cases where DMRG is stuck in a local minimum.
III.2 Density Matrix Perturbation
This convergence problem has been solved by White.White (2005) In the following, we will assume a left-to-right sweep; sweeping in the other direction works similarly, but on the left rather than the right bonds. After the local optimisation of the tensor $M^{\sigma_i}$, the reduced density matrix
$$\hat\rho_i = \mathrm{Tr}_{i+1, \dots, L}\, |\psi\rangle\langle\psi| \qquad (10)$$
is built on the next bond. This is the reduced density matrix resulting from tracing out the part of the system to the right of bond $i$.
$\hat\rho_i$ is then perturbed as
$$\hat\rho_i \to \hat\rho_i + \alpha \sum_{b_i} \mathrm{Tr}_{i+1, \dots, L}\, \hat H_{b_i} |\psi\rangle\langle\psi| \hat H_{b_i}^\dagger \qquad (11)$$
where the $\hat H_{b_i}$ denote the parts of the MPO crossing bond $i$, labelled by the MPO bond index $b_i$.
The new $\hat\rho_i$ is then used to decide on a new set of basis states on the RHS of $M^{\sigma_i}$, with the inverse mapping from the new to the old basis being multiplied into each component of $M^{\sigma_{i+1}}$. The mixing factor $\alpha$ is a small scalar used to control the perturbation. A new scheme to find the optimal choice of $\alpha$ is discussed in Section VI.
III.3 Center-Matrix Wavefunction Formalism (CWF)
In a standard single-site DMRG calculation, the reduced density matrix is never used. More importantly, even building $\hat\rho_i$ on a given bond will not yield a density matrix that can be used in (11), as it only contains the states already existing on that bond, without knowledge of the states that could be reached from the bond one step to the left. In other words, it is not possible to choose the optimal set of basis states based only on the current bond basis; rather, one also requires the LHS bond basis and the local physical basis of site $i$.
The center-matrix wavefunction formalismMcCulloch (2007) was developed to cope with this problem. Given a site tensor $M^{\sigma_i}$ on a left-to-right sweep, it introduces a “center matrix” $C$ and replaces the original site tensor as
$$M^{\sigma_i} = \tilde A^{\sigma_i} C \qquad (12)$$
$\tilde A^{\sigma_i}$ is constructed to be left-orthogonal and is essentially an identity matrix mapping the left basis and the physical basis onto a complete basis of dimension $d\, m_{i-1}$ containing all states on its right. The new basis is “complete” in the sense that all states reachable from the left bond basis and the local physical basis are contained within it.
The contents of $M^{\sigma_i}$ are placed in $C$ accordingly and the original state remains unchanged. The reduced density matrix is then $\hat\rho = C C^\dagger$ and has access to all states, as required above. A perturbation of $\hat\rho$ according to (11) hence allows the introduction of new states.
The DMRG optimisation step can work on $C$ alone, with $\tilde A^{\sigma_i}$ built prior to the optimisation of $C$ from the expanded basis. During each eigensolver step, the effective Hamiltonian on site $i$ has to be applied onto $\tilde A^{\sigma_i} C$. The application is done by contraction of $L_{i-1}$, $W^{\sigma_i \sigma_i^\prime}$, $\tilde A^{\sigma_i}$ and $C$ at cost $(d^2 + d)\, w m^3$ per step. After optimisation, the perturbation is added. Its computational cost is dominated by the calculation of the perturbed $\hat\rho$ at $\mathcal{O}(d^2 w m^3)$. The bond between $\tilde A^{\sigma_i}$ and $C$ can then be truncated down to $m$ using $\hat\rho$, and the remaining parts of $C$ are multiplied into the tensor to the right.
The resulting algorithm converges quickly for one-dimensional problems and performs reasonably well for small cylinders. However, both the cost of the applications of $\hat H$ to $\tilde A^{\sigma_i} C$ as well as the large density matrix cause problems if $d$ and $m$ become large.
IV Subspace Expansion
The idea of using subspace expansion instead of density matrix perturbation originatesDolgov and Savostyanov (2014, 2013) in the tensor train/numerical linear algebra community. There, a stringent proof was given regarding the convergence properties of this method when the local tensor of the residual
$$|r\rangle = \hat H |\psi\rangle - E |\psi\rangle \qquad (13)$$
is used as the expansion term. Here, we will only use the method of subspace expansion and substitute a numerically much more cheaply available expansion term.
The following section is divided into three parts: firstly, we will explain the concept of subspace expansion acting on two neighbouring MPS tensors $M^{\sigma_i}$, $M^{\sigma_{i+1}}$. Secondly, the expansion term employed in DMRG3S is introduced and motivated. Thirdly, a simple example is described.
IV.1 Subspace Expansion with an Arbitrary Expansion Term
In the following, we will describe subspace expansion of the RHS basis of the current working tensor, as it would occur during a left-to-right sweep.
Assume a state $|\psi\rangle$ described by a set of tensors $\{M^{\sigma_i}\}$. At the bond $i$, we can then decompose the state as a sum over left and right basis states as in Eq. (8).
Now we expand the tensor $M^{\sigma_i}$ by some expansion term $P^{\sigma_i}$ for each individual physical index component:
$$M^{\sigma_i} \to \tilde M^{\sigma_i} = \begin{pmatrix} M^{\sigma_i} & P^{\sigma_i} \end{pmatrix} \qquad (14)$$
This effectively expands the RHS MPS basis of $M^{\sigma_i}$ from $m_i$ to $m_i + \tilde m$, where $\tilde m$ is the number of columns of $P^{\sigma_i}$. Similarly, expand the components of $M^{\sigma_{i+1}}$ with zeros:
$$M^{\sigma_{i+1}} \to \tilde M^{\sigma_{i+1}} = \begin{pmatrix} M^{\sigma_{i+1}} \\ 0 \end{pmatrix} \qquad (15)$$
The appropriately-sized block of zeros only multiplies with the expansion term $P^{\sigma_i}$. In terms of a decomposition as in (8), this is equivalent to
$$\tilde M^{\sigma_i} \tilde M^{\sigma_{i+1}} = M^{\sigma_i} M^{\sigma_{i+1}} + P^{\sigma_i} \cdot 0 = M^{\sigma_i} M^{\sigma_{i+1}} \qquad (16)$$
where the first term is the result of multiplying $M^{\sigma_i}$ and $M^{\sigma_{i+1}}$, and the zero in the second term results from the zero block in $\tilde M^{\sigma_{i+1}}$. While the state remains unchanged, the local optimiser on the next site can now choose the initially-zero components differently if so required: the necessary flexibility in the left/right basis states to escape local minima has been achieved without referring to the density matrix.
Note that while orthonormality of $\tilde M^{\sigma_i}$ is lost, we do not need it between the enrichment step on site $i$ and the optimisation step on site $i+1$. The orthonormality can be restored via a singular value decomposition as usual. Furthermore, it is usually necessary to truncate the RHS basis of $\tilde M^{\sigma_i}$ back down from $m_i + \tilde m$ to $m_i$ immediately following the expansion: this preserves the most relevant states of the expansion term while avoiding an exponential explosion of bond dimensions.
When sweeping from right to left, the left rather than the right MPS basis of the current working tensor is expanded, with the left tensor $M^{\sigma_{i-1}}$ being zero-padded as opposed to the right tensor $M^{\sigma_{i+1}}$:
$$M^{\sigma_i} \to \tilde M^{\sigma_i} = \begin{pmatrix} M^{\sigma_i} \\ P^{\sigma_i} \end{pmatrix} \qquad (17)$$
$$M^{\sigma_{i-1}} \to \tilde M^{\sigma_{i-1}} = \begin{pmatrix} M^{\sigma_{i-1}} & 0 \end{pmatrix} \qquad (18)$$
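The invariance of the state under Eqs. (14)–(16) is easy to verify numerically: padding the current tensor with an arbitrary expansion term and the neighbouring tensor with zeros leaves the represented two-site object unchanged. All dimensions below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m0, m1, m2, k = 2, 1, 3, 1, 4            # k: number of added states

M1 = rng.standard_normal((m0, d, m1))       # M^{sigma_i}[a_{i-1}, s, a_i]
M2 = rng.standard_normal((m1, d, m2))       # M^{sigma_{i+1}}
P = rng.standard_normal((m0, d, k))         # arbitrary expansion term P^{sigma_i}

M1e = np.concatenate([M1, P], axis=2)                      # Eq. (14): pad columns
M2e = np.concatenate([M2, np.zeros((k, d, m2))], axis=0)   # Eq. (15): pad zero rows

# the zero rows multiply exactly the P columns, so nothing changes, Eq. (16)
before = np.einsum('asb,btc->astc', M1, M2)
after = np.einsum('asb,btc->astc', M1e, M2e)
assert np.allclose(before, after)
```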
IV.2 Expansion Term
Using the exact residual as the expansion term is computationally expensive: the term $\hat H |\psi\rangle$ can be updated locally and is mostly unproblematic, but the subtraction of $E |\psi\rangle$ and the subsequent re-orthonormalisation is costly and has to be done after each local optimisation, as the current value of $E$ changes. This exact calculation is hence only feasible for very small $m$, which is far too small to tackle difficult two-dimensional problems.
Instead, we propose the very cheaply available terms
$$P^{\sigma_i} = \alpha\, L_{i-1}\, M^{\sigma_i^\prime}\, W^{\sigma_i \sigma_i^\prime} \qquad (19)$$
(with the RHS MPO and MPS bond indices fused into a single index of dimension $w_i m_i$) to be used during left-to-right sweeps, and the mirrored term $P^{\sigma_i} = \alpha\, M^{\sigma_i^\prime}\, W^{\sigma_i \sigma_i^\prime}\, R_{i+1}$ for use during right-to-left sweeps, with some scalar mixing factor $\alpha$. In the regime where the exact residual can be computed, these terms work essentially equally well.
This expression for $P^{\sigma_i}$ can be heuristically motivated as follows: (19) is equivalent to the partial contraction of $\hat H |\psi\rangle$ over all sites to the left of the current bond. Hence, in the ground state and ignoring numerical errors, the RHS basis of this term is identical to that of $M^{\sigma_i}$. Truncation from $m_i + w_i m_i$ back to $m_i$ is then possible without inducing errors.
Numerically, it seems possible to choose $\alpha$ arbitrarily large without hindering convergence or perturbing the state too much in simple (one-dimensional) problems. However, if the chosen maximal bond dimension $m$ is insufficient to faithfully capture the ground state of the given system, $\alpha$ has to be taken to zero eventually to allow convergence. Otherwise, $P^{\sigma_i}$ will continuously add new states and disturb the result of the eigensolver, which is optimal at this specific value of $m$ but not an eigenstate of $\hat H$ yet.
The cost of a single subspace expansion is $\mathcal{O}(d w m^3)$ for the calculation of $P^{\sigma_i}$, potentially $\mathcal{O}(d w m^2)$ for the additions to $M^{\sigma_i}$ and $M^{\sigma_{i+1}}$ respectively, and the cost of the SVD of the $dm \times (1+w)m$ matrix formed from $\tilde M^{\sigma_i}$. If we restrict the SVD to $m$ singular values, the resulting matrices will be of dimension $dm \times m$, $m \times m$ and $m \times (1+w)m$ respectively. The first can be reformed into $A^{\sigma_i}$ at negligible cost and the second and third multiplied into $\tilde M^{\sigma_{i+1}}$ at cost $\mathcal{O}(d w m^3)$. The total cost of this step is dominated by the cost of the SVD at $\mathcal{O}(d^2 w m^3)$, which is still cheaper than the calculation of the perturbation term in (11), not considering the other costs associated with using the density matrix for truncation.
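The SVD step just described can be sketched as follows. For clarity, the sketch keeps the full rank, so the state is reproduced exactly and left-orthonormality is restored; in the actual algorithm only the $m_i$ largest singular values would be kept, trading a small truncation error for the bounded bond dimension. Dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, k = 2, 3, 4                          # k extra states from the expansion

M1e = rng.standard_normal((m, d, m + k))   # expanded current tensor
M2e = rng.standard_normal((m + k, d, 1))   # next tensor (already zero-padded)

# full-rank SVD: restores left-orthonormality without truncation
U, S, Vh = np.linalg.svd(M1e.reshape(m * d, m + k), full_matrices=False)
A = U.reshape(m, d, -1)                    # left-normalised tensor
M2new = np.einsum('ab,btc->atc', S[:, None] * Vh, M2e)   # carry S V^dagger right

# the represented two-site state is unchanged
assert np.allclose(np.einsum('asb,btc->astc', M1e, M2e),
                   np.einsum('asb,btc->astc', A, M2new))
# left-orthonormality restored: sum_s A^dagger A = 1
r = A.shape[2]
assert np.allclose(np.einsum('asp,asq->pq', A, A), np.eye(r))
```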
IV.3 Subspace Expansion at the Example of a Two-Spin System
In the following, we will demonstrate and illustrate the method of subspace expansion at the simple example of a system of two spin-$\frac{1}{2}$ sites, as it would occur during a left-to-right sweep.
Assume the Hamiltonian
(20)  
(21) 
with MPO components
(22)  
(23) 
Let the initial state be an MPS, described by components
(24)  
(25) 
where square brackets denote matrices in the MPS bond indices. Due to the standard normalisation constraints, there are only two free scalar variables here.
Subspace expansion of the first tensor $M^{\sigma_1}$ is then straightforward:
(26)  
(27)  
(28)  
(29)  
(30) 
resulting in the expanded tensors $\tilde M^{\sigma_1}$ and $\tilde M^{\sigma_2}$ directly after the expansion:
(31)  
(32)  
(33) 
Normalising $\tilde M^{\sigma_1}$ via a singular value decomposition and multiplying the remainder into the second tensor gives:
(34)  
(35)  
(36)  
(37)  
(38) 
As expected, the final state is still entirely unchanged, but there is now a one-to-one correspondence between the four entries of the second tensor and the coefficients in the computational basis, making the subsequent optimisation trivial.
V Strictly Single-Site DMRG
We can now combine standard single-site DMRG (e.g. Ref. Schollwöck, 2011, p. 67) with the subspace expansion method as a way to enrich the local state space, leading to a strictly single-site DMRG implementation (DMRG3S) that works without referring to the density matrix at any point.
With the notation from Section II, the steps follow mostly standard single-site DMRG. In an outermost loop, the algorithm sweeps over the system from left to right and right to left until convergence is reached. Criteria for convergence are e.g. diminishing changes in energy or an overlap close to $1$ between the states at the ends of subsequent sweeps.
The inner loop sweeps over the system, iterating over and updating the tensors $M^{\sigma_i}$ on each site sequentially. Each local update during a left-to-right sweep (right-to-left sweeps work analogously) consists of the following steps:

Optimise the tensor $M^{\sigma_i}$: use an eigensolver targeting the smallest eigenvalue to find a solution of the eigenvalue problem
$$L_{i-1}\, W^{\sigma_i \sigma_i^\prime}\, R_{i+1}\, M^{\sigma_i^\prime} = \lambda\, M^{\sigma_i} \qquad (39)$$
$\lambda$ is the new current energy estimate. This first step dominates the computational cost.

Build $P^{\sigma_i}$ according to (19) using the current mixing factor $\alpha$. Build an appropriately-sized zero block $P^{\sigma_{i+1}}$ once the dimensions of $P^{\sigma_i}$ are known.

Subspace-expand $M^{\sigma_i}$ with $P^{\sigma_i}$ and $M^{\sigma_{i+1}}$ with the zero block $P^{\sigma_{i+1}}$.

Apply an SVD to $\tilde M^{\sigma_i}$ and truncate its right basis down to $m_i$ again, resulting in the left-normalised tensor $A^{\sigma_i}$.

Multiply the remainder of the SVD ($S V^\dagger$) into $\tilde M^{\sigma_{i+1}}$.

Build $L_i$ from $L_{i-1}$, $A^{\sigma_i}$ and $W^{\sigma_i \sigma_i^\prime}$.

Calculate a new energy value after truncation based on $L_i$, $R_{i+1}$ and the updated $M^{\sigma_{i+1}}$. Use this energy value and $\lambda$ to adapt the current value of $\alpha$ (cf. Section VI).

Continue on site $i+1$.
Of these, steps 2 and 3 implement the actual subspace expansion, whereas all others are identical to standard single-site DMRG.
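A single local update along the lines of the steps above can be strung together as a didactic sketch. Here the effective Hamiltonian is built densely and diagonalised exactly (a real implementation would use an iterative sparse eigensolver), and all dimensions and the value of $\alpha$ are illustrative; the toy environments are symmetrised so that the effective problem is Hermitian:

```python
import numpy as np

rng = np.random.default_rng(5)
m, w, d, alpha = 4, 2, 2, 1e-3

# Hermitian toy environments and MPO tensor (index order as in the text)
Lenv = rng.standard_normal((m, w, m)); Lenv = Lenv + Lenv.transpose(2, 1, 0)
Renv = rng.standard_normal((m, w, m)); Renv = Renv + Renv.transpose(2, 1, 0)
Wt = rng.standard_normal((w, w, d, d)); Wt = Wt + Wt.transpose(0, 1, 3, 2)
M2 = rng.standard_normal((m, d, 1))             # next-site tensor

# Step 1: solve the local eigenvalue problem (39), dense for illustration
Heff = np.einsum('pba,bcts,qcr->ptqasr', Lenv, Wt, Renv).reshape(m*d*m, m*d*m)
evals, evecs = np.linalg.eigh(Heff)
lam, M1 = evals[0], evecs[:, 0].reshape(m, d, m)

# Steps 2+3: build the expansion term P and subspace-expand, Eq. (19)
P = alpha * np.einsum('pba,asr,bcts->ptcr', Lenv, M1, Wt).reshape(m, d, w * m)
M1e = np.concatenate([M1, P], axis=2)
M2e = np.concatenate([M2, np.zeros((w * m, d, 1))], axis=0)

# Steps 4+5: SVD, truncate back to m, carry the remainder to the right
U, S, Vh = np.linalg.svd(M1e.reshape(m * d, -1), full_matrices=False)
A = U[:, :m].reshape(m, d, m)                   # new left-normalised tensor
M2new = ((S[:m, None] * Vh[:m]) @ M2e.reshape(-1, d)).reshape(m, d, 1)

assert np.allclose(np.einsum('asp,asq->pq', A, A), np.eye(m))
```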
It is important to note that the only change from standard single-site DMRG is the addition of an enrichment step via subspace expansion. Therefore, this method does not interfere with e.g. real-space parallelised DMRG,Depenbrock (2013); Stoudenmire and White (2013) the use of non-abelian symmetriesMcCulloch and Gulácsi (2002); McCulloch (2007) or multigrid methods.Dolfi et al. (2012)
To analyse the computational cost, we have to take special care to ensure an optimal ordering of the multiplications during each eigensolver iteration in (39). The problem is to contract $L_{i-1}$, $W^{\sigma_i \sigma_i^\prime}$ and $R_{i+1}$ with $M^{\sigma_i}$, with $L_{i-1}$ and $R_{i+1}$ of dimension $m \times w \times m$, $W^{\sigma_i \sigma_i^\prime}$ of dimension $w \times w \times d \times d$ and $M^{\sigma_i}$ of dimension $m \times d \times m$. The optimal ordering is then:

Contract $L_{i-1}$ and $M^{\sigma_i}$ over the left MPS bond at cost $d w m^3$.

Multiply in $W^{\sigma_i \sigma_i^\prime}$ over the physical bond of $M^{\sigma_i}$ and the left MPO bond at cost $d^2 w^2 m^2$.

Finally contract with $R_{i+1}$ over the right MPO and MPS bonds at cost $d w m^3$.
The total cost of this procedure to apply $\hat H$ to $|\psi\rangle$ is $2 d w m^3 + d^2 w^2 m^2$. Assuming large $m$, the $d^2 w^2 m^2$ term is small, which gives a speedup in the eigensolver multiplications of $(d+1)/2$ over the CWF approach, which takes $(d^2 + d)\, w m^3$.
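The three contraction steps can be written out explicitly with `numpy.einsum`; the snippet below checks the staged contraction against a single contraction over all tensors (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, w, d = 6, 3, 2
Lenv = rng.standard_normal((m, w, m))    # L_{i-1}[a', b_l, a]
Renv = rng.standard_normal((m, w, m))    # R_{i+1}[a'', b_r, c]
Wt = rng.standard_normal((w, w, d, d))   # W[b_l, b_r, s', s]
M = rng.standard_normal((m, d, m))       # M[a, s, c]

# Step 1: contract L with M over the left MPS bond        -- O(d w m^3)
T = np.einsum('pba,asc->pbsc', Lenv, M)
# Step 2: multiply in W over s and the left MPO bond      -- O(d^2 w^2 m^2)
T = np.einsum('pbsc,bwts->ptwc', T, Wt)
# Step 3: contract with R over right MPO and MPS bonds    -- O(d w m^3)
out = np.einsum('ptwc,qwc->ptq', T, Renv)

# reference: one contraction over everything
ref = np.einsum('pba,asc,bwts,qwc->ptq', Lenv, M, Wt, Renv)
assert np.allclose(out, ref)
```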
In addition to this speedup, the subspace expansion is considerably cheaper than the density matrix perturbation. Since the perturbation/truncation step can often take up to 30% of total computational time, improvements there also have a high impact. At the same time, the number of sweeps at large $m$ needed to converge does not seem to increase compared to the CWF approach (cf. Section VII) and sometimes even decreases.
VI Adaptive Choice of the Mixing Factor
Both density matrix perturbation and subspace expansion generally require some small mixing factor $\alpha$ to moderate the contributions of the perturbation terms. The optimal choice of this $\alpha$ depends on the number of states available and those required to represent the ground state, as well as the current speed of convergence. Too large values of $\alpha$ hinder convergence by destroying the improvements made by the local optimiser, whereas too small values lead to the calculation being stuck in local minima with vital states not added, for the reasons given in Section III.2. The correct choice of $\alpha$ hence affects calculations to a large degree, but is also difficult to estimate before the start of the calculation.
Fig. 1 displays the individual steps within a single update from the energy perspective: let $\Delta E_{\mathrm{opt}}$ denote the gain in energy during the optimisation step and let $\Delta E_{\mathrm{trunc}}$ denote the subsequent rise in energy during the truncation following the enrichment step. $\Delta E_{\mathrm{trunc}}$ only occurs if some enrichment (either via density matrix perturbation or subspace expansion) has occurred, otherwise there would be no need for any sort of truncation. We can hence control the approximate value of $\Delta E_{\mathrm{trunc}}$ via $\alpha$, which leads to a simple, adaptive and computationally cheap algorithm:
If $\Delta E_{\mathrm{trunc}}$ was very small or even negative (possible after changing the optimised state by expansion of its right basis) during the current update, we can increase $\alpha$ during the next update step on the next site. If, on the other hand, $\Delta E_{\mathrm{trunc}} \approx \Delta E_{\mathrm{opt}}$, that is, if the error incurred during truncation nullified the gain in energy during the optimisation step, we should reduce the value of $\alpha$ at the next iteration to avoid making this mistake again.
In practice, it seems that keeping the ratio $\Delta E_{\mathrm{trunc}} / \Delta E_{\mathrm{opt}}$ small but finite gives the fastest convergence. Given the order-of-magnitude nature of $\alpha$, it is furthermore best to increase/decrease it via multiplication by some factor greater/smaller than one, as opposed to adding or subtracting fixed values.
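As a sketch of this adaptive scheme, assuming an illustrative target ratio, update factor and bounds (none of these specific values is prescribed in the text):

```python
def update_alpha(alpha, dE_opt, dE_trunc, target=0.3, factor=2.0,
                 a_min=1e-12, a_max=1.0):
    """Multiplicatively adapt the mixing factor alpha.

    `target`, `factor`, `a_min` and `a_max` are illustrative choices.
    dE_opt is the energy gain of the optimisation step (negative change),
    dE_trunc the subsequent rise in energy due to truncation.
    """
    gain = abs(dE_opt)
    if gain == 0.0:                 # converged or stuck: handled separately
        return alpha
    ratio = max(dE_trunc, 0.0) / gain
    if ratio < target:              # truncation barely hurt: expand more boldly
        return min(alpha * factor, a_max)
    return max(alpha / factor, a_min)   # truncation nullified the gain: back off

# truncation almost free -> alpha grows
assert update_alpha(1e-4, -1e-3, 1e-8) > 1e-4
# truncation eats the whole optimisation gain -> alpha shrinks
assert update_alpha(1e-4, -1e-3, 1e-3) < 1e-4
```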
Some special cases, e.g. a very small $\Delta E_{\mathrm{opt}}$ (stuck in a local minimum or converged to the ground state?) or a vanishing $\Delta E_{\mathrm{trunc}}$, have to be considered, mostly depending on the exact implementation.
It is unclear whether there is a causal relation between the optimal choice of $\alpha$ and the ratio $\Delta E_{\mathrm{trunc}} / \Delta E_{\mathrm{opt}}$ or whether both simply correlate with a progressing DMRG calculation: at the beginning, gains in energy are large and $\alpha$ is optimally chosen large, whereas later on, energy decreases more slowly and smaller values of $\alpha$ are more appropriate.
It is important to note that this adaptation is a tool to reach convergence more quickly. If one is primarily interested in a wavefunction representing the ground state, the calculation of a new $\alpha$ at each iteration comes at essentially zero cost. If, however, the aim is to extrapolate in the truncation error during the calculation, then a fixed value of $\alpha$ is of course absolutely necessary.
VII Numerical Examples
VII.1 DMRG Stuck in a Local Minimum
In this subsection, we will give a short example of how DMRG can get stuck in a local minimum even on a very small system. Consider a chain of spins with isotropic antiferromagnetic interactions and open boundary conditions. The $S_z$ symmetry of the system is exploited on the MPS basis, with the overall $S_z$ forced to be zero. The initial state is constructed from 20 linearly independent states, all supported on sites on the very right and with $S_z = 0$ in total. The quantum number distribution at each bond is plotted in Fig. 2 as black circles.
DMRG3S is run with subspace expansion disabled, i.e. $\alpha = 0$ throughout the calculation. The algorithm “converges” to some high-energy state. The resulting quantum number distribution (red squares in Fig. 2) shows a clear asymmetry both between the left and right parts of the system and between the $\pm S_z$ sectors at any given bond. It is also visible that while some states are removed by DMRG3S without enrichment, it cannot add new states: the red squares only occur together with the black filled circles from the input state.
If we enable enrichment via subspace expansion, i.e. take $\alpha \neq 0$, DMRG3S quickly converges to a much better ground state. The quantum numbers are now evenly distributed between the left and right parts of the system, and the symmetry between the $\pm S_z$ sectors is also restored.
VII.2 Application to Physical Systems
In the following subsections, we will compare the two single-site DMRG algorithms CWF and DMRG3S when applied to four different physical systems: a Heisenberg spin chain with periodic boundary conditions, a bosonic system with an optical lattice potential, a Fermi-Hubbard model at quarter-filling and a system of free fermions at half-filling.
Each algorithm is run at three different values of $m$ from the same initial state and run to convergence. This way, it is possible to observe the behaviour of the methods at both low and high accuracies.
The usual setup in DMRG calculations of starting at small $m$ and increasing $m$ slowly while the calculation progresses unfortunately makes it very difficult to compare between the three methods, because different methods require different configurations to converge optimally. We therefore restrict ourselves to a fixed $m$ throughout an entire calculation, even though all methods could be sped up further by increasing $m$ slowly during the calculation.
Errors in energy compared to a numerically exact reference value are plotted as a function of sweeps and CPU time. It should be stressed that this error in energy is not directly comparable to the truncation error traditionally used in two-site DMRG or the variance sometimes considered in single-site DMRG. Even small differences in energy can lead to vastly different physical states, and reaching maximal accuracy in energy is crucial to ensure that the true ground state has been reached.
Furthermore, a traditional two-site DMRG (2DMRG) calculation without perturbations is done and its error in energy and runtime to convergence are compared to the two single-site algorithms. Here, convergence is defined as a normalised change in energy below a fixed threshold, chosen separately for the smaller and larger values of $m$. The runtime to convergence is the CPU time used until that energy was output by the eigensolver for the first time.
All calculations were performed on a single core of a Xeon E5-2650.
VII.2.1 Heisenberg Chain
Firstly, we consider a Heisenberg spin chain with periodic boundary conditions, implemented on the level of the Hamiltonian as a simple link between the first and last site:
$$\hat H = J \sum_{i=1}^{L} \vec S_i \cdot \vec S_{i+1}, \qquad \vec S_{L+1} \equiv \vec S_1 \qquad (40)$$
Spin symmetries are exploited and the calculations are forced into the zero-spin sector.
Table 1: Energy errors and runtimes to convergence for DMRG3S, CWF and 2DMRG on the Heisenberg chain.
This system is of particular interest as, firstly, it is one of the standard benchmarking systems with well-known analytic values for the ground-state energy. Secondly, it is a one-dimensional system where the case of periodic boundary conditions can still be tackled by DMRG. The larger MPO bond dimension resulting from these PBC similarly arises during the simulation of quasi-two-dimensional systems such as cylinders. The same applies to the non-nearest-neighbour interactions in this system (between the first and last site), which likewise occur in cylindrical systems.
Fig. 3 compares the error in energy with respect to the reference value for DMRG3S and CWF as a function of sweeps and computation time.
During the first three to four sweeps, DMRG3S exhibits a smaller convergence rate per sweep; however, compared to the first sweeps of CWF, these sweeps also cost negligible CPU time. Afterwards, DMRG3S offers a comparable (at medium accuracies) or much improved (at high accuracies) convergence rate per sweep compared to CWF, together with a still reduced average runtime per sweep. Combined, these effects lead to a sizeable speedup of DMRG3S over CWF at the two larger values of $m$ when considering the runtime to convergence.
In comparison, the 2DMRG algorithm does not handle the periodic boundary conditions well and yields energies higher than the single-site algorithms with perturbations (cf. Tab. 1). Its runtime to convergence is hence not comparable.
VII.2.2 Dilute Bosons on an Optical Lattice
We carry on to study bosons in a modulated potential of several unit cells of a few sites each. A cutoff for the local occupation numbers results in a moderate local site dimension $d$. The Hamiltonian is given as
(41) 
This system should be fairly easy for DMRG to handle, as there are only nearest-neighbour interactions. However, the large-scale order due to the modulated potential and the very small energy penalty paid for an uneven distribution of bosons were observed to cause badly converged results.Dolfi et al. (2012) Manual checks of the states returned by each method were hence done to ensure a proper, equal distribution of bosons throughout the whole system.
The state is initialised with a fixed total number of bosons. All algorithms converge to the same energy reference value at the largest $m$.
Fig. 4 compares CWF and DMRG3S, whereas Tab. 2 additionally lists 2DMRG. Since the bond dimensions are relatively small, we do not expect a speedup from faster numerical operations. Instead, the improved convergence behaviour per sweep is responsible for the speedup of DMRG3S over CWF at small $m$. At larger $m$, CWF converges better per sweep, but numerical operations also become cheaper for DMRG3S, resulting in a comparable overall speedup again.
As there are no long-range interactions, 2DMRG also fares well with regard to energy accuracy. However, it takes longer to converge than the single-site methods, especially at large $m$, mainly because the eigenvalue problem in two-site DMRG is of a dimension a factor of $d$ larger than in single-site DMRG. A comparison between DMRG3S and 2DMRG shows a sizeable speedup at the largest $m$.
Table 2: Energy errors and runtimes to convergence for DMRG3S, CWF and 2DMRG for the bosonic system.
VII.2.3 Fermi-Hubbard Model
As a third example, substantially more expensive calculations are carried out for the more strongly entangled Fermi-Hubbard model of 100 sites with Hamiltonian
$$\hat H = -t \sum_{i, \sigma} \left( c^\dagger_{i \sigma} c_{i+1\, \sigma} + \mathrm{h.c.} \right) + U \sum_i n_{i \uparrow} n_{i \downarrow} \qquad (42)$$
Both particle-number and spin symmetries are employed, with the total number of fermions and the overall spin sector enforced through the choice of initial state. Together with the free fermions from the next section, we can use this system to study how criticality and increased entanglement affect the three methods.
Calculations are done for three different values of $m$. All methods converge to the same value at large $m$.
Table 3: Energy errors and runtimes to convergence for DMRG3S, CWF and 2DMRG for the Fermi-Hubbard model.
Fig. 5 compares the two single-site methods while Tab. 3 summarises all three DMRG implementations. Since the system only exhibits local interactions, 2DMRG fares well and all methods generally provide comparable energies. The difference is therefore in the runtime needed to achieve these energies. Compared to CWF, DMRG3S achieves a speedup of up to 2.5 consistently at all $m$, as the smallest $m$ is already large enough to justify the large-$m$ assumption in the speedup of numerical operations. In particular, it continues to converge quickly at high accuracies, whereas CWF develops a long tail of slow convergence. The speedup compared to 2DMRG is smaller at lower values of $m$, but increases to 3.9 at the largest $m$.
VII.2.4 Free Fermions
Table 4: Energy errors and runtimes to convergence for DMRG3S, CWF and 2DMRG for the free-fermion system.
Finally, we consider a model of free fermions on a chain of 100 sites with Hamiltonian
$$\hat H = -t \sum_{i} \left( c^\dagger_{i} c_{i+1} + \mathrm{h.c.} \right) \qquad (43)$$
The maximally delocalised wavefunction found in the ground state of this system is notoriously difficult for MPS formats in general to reproduce faithfully. At the same time, most other parameters are identical or very close to those in the Fermi-Hubbard model from Section VII.2.3. The calculation is done exploiting the same symmetries at half-filling. The choice of $m$ is the same as for the Fermi-Hubbard system. As the reference value, we used the ground-state energy to which all methods converged at the largest $m$.
The results in Tab. 4 and Fig. 6 mostly follow the previous results for locally interacting systems: the accuracies of all methods are essentially identical, whereas the time to convergence varies between the methods. At small $m$, there are some speedups of DMRG3S over CWF, largely due to better convergence behaviour per sweep, whereas a significant advantage of DMRG3S becomes visible at larger $m$, when numerical operations become cheaper compared to the CWF method. Correspondingly, the speedup from CWF to DMRG3S increases with $m$.
Similarly, the larger numerical cost of two-site DMRG becomes more noticeable at larger $m$, with the speedup between 2DMRG and DMRG3S likewise increasing with $m$.
Compared to the non-critical Fermi-Hubbard system from Section VII.2.3, we observe larger errors in energy at fixed $m$, as expected. Correspondingly, as more eigenvalues contribute significantly, convergence of both the eigenvalue solver and the singular value decompositions becomes slower, leading to a slowdown of all three methods.
VIII Conclusions
The new strictly single-site DMRG (DMRG3S) algorithm results in a theoretical speedup of $(d+1)/2$ during the optimisation steps compared to the center-matrix wavefunction formalism (CWF), provided that $w$ is small compared to $m$. Further, convergence rates per sweep are improved in the important and computationally most expensive high-accuracy/large-$m$ phase of the calculation. In addition, auxiliary calculations (enrichment, normalisation, etc.) are sped up and memory requirements are relaxed.
Numerical experiments confirm a speedup within the theoretical expectations compared to the CWF method. The efficiency of single-site DMRG in general compared to traditional two-site DMRG was substantiated further by a large speedup at comparable accuracies in energy.
Acknowledgements.
We would like to thank S. Dolgov, D. Savostyanov and I. Kuprov for very helpful discussions. C. Hubig acknowledges funding through the ExQM graduate school and the Nanosystems Initiative Munich. F. A. Wolf acknowledges support by the research unit FOR 1807 of the DFG.
References
 White (1992) S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
 White (1993) S. R. White, Phys. Rev. B 48, 10345 (1993).
 Schollwöck (2005) U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
 Schollwöck (2011) U. Schollwöck, Ann. Phys. 326, 96 (2011).
 Stoudenmire and White (2013) E. M. Stoudenmire and S. R. White, Phys. Rev. B 87, 155137 (2013).
 McCulloch and Gulácsi (2002) I. P. McCulloch and M. Gulácsi, Europhys. Lett. 57, 852 (2002).
 Dolfi et al. (2012) M. Dolfi, B. Bauer, M. Troyer, and Z. Ristivojevic, Phys. Rev. Lett. 109, 020604 (2012).
 White (2005) S. R. White, Phys. Rev. B 72, 180403 (2005).
 Yan et al. (2011) S. Yan, D. A. Huse, and S. R. White, Science 332, 1173 (2011).
 Depenbrock et al. (2012) S. Depenbrock, I. P. McCulloch, and U. Schollwöck, Phys. Rev. Lett. 109, 067201 (2012).
 Stoudenmire and White (2012) E. Stoudenmire and S. R. White, Annu. Rev. Condens. Matter Phys. 3, 111 (2012).
 Vidal et al. (2003) G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. 90, 227902 (2003).
 Eisert et al. (2010) J. Eisert, M. Cramer, and M. B. Plenio, Rev. Mod. Phys. 82, 277 (2010).
 McCulloch (2007) I. P. McCulloch, J. Stat. Mech. 2007, P10014 (2007).
 Dolgov and Savostyanov (2014) S. Dolgov and D. Savostyanov, SIAM J. Sci. Comput. 36, A2248 (2014).
 Dolgov and Savostyanov (2013) S. V. Dolgov and D. V. Savostyanov, arXiv (2013), arXiv:1312.6542 [math.NA] .
 Depenbrock (2013) S. Depenbrock, “Tensor networks for the simulation of strongly correlated systems,” (2013), PhD Thesis.