Minimally entangled typical thermal states versus matrix product purifications
for the simulation of equilibrium states and time evolution
For the simulation of equilibrium states and finite-temperature response functions of strongly correlated quantum many-body systems, we compare the efficiencies of two different approaches in the framework of the density matrix renormalization group (DMRG). The first is based on matrix product purifications. The second, more recent one, is based on so-called minimally entangled typical thermal states (METTS). For the latter, we highlight the interplay of statistical and DMRG truncation errors, discuss the use of self-averaging effects, and describe schemes for the computation of response functions. For critical as well as gapped phases of the spin-1/2 XXZ chain and the one-dimensional Bose-Hubbard model, we assess the computation costs and accuracies of the two methods at different temperatures. For almost all considered cases, we find that, for the same computation cost, purifications yield more accurate results than METTS – often by orders of magnitude. The METTS algorithm becomes more efficient only for temperatures well below the system’s energy gap. The exponential growth of the computation cost in the evaluation of response functions limits the attainable timescales in both methods, and we find that, in this regard, METTS do not outperform purifications.
PACS numbers: 05.30.-d, 02.70.-c, 75.10.Pq, 05.30.Jp
- I Introduction
- II Matrix product purifications
- III METTS sampling
- IV Comparison for static correlators
- V Algorithms for response functions
- VI Response in the spin-1/2 XXZ chain
- VII Conclusions and discussion
- A Truncations in our tDMRG simulations
- B Choices for the METTS collapse
- C Quantification of computation costs
- D Exact versus quasi-exact reference data
Finite-temperature correlation and response functions of quantum many-particle systems are of great interest. They provide insights into the many-body physics and make it possible to compare theoretical models to experimental results. However, their accurate computation remains challenging. For many relevant models, one has to rely on the development of efficient numerical techniques. The most successful method for the study of strongly correlated one-dimensional (1D) systems is the density-matrix renormalization group (DMRG), which is based on matrix product states (MPS) White1992-11 ; White1993-10 ; Schollwoeck2005 . While DMRG was originally designed to study ground states of 1D systems, its extension to the time evolution of quantum states within tDMRG Vidal2003-10 ; White2004 ; Daley2004 allows for the simulation of quenches and response functions. Based on this extension, quite different methods for simulations at finite temperatures have been developed. One of them rests on a purification of the density matrix Uhlmann1976 ; Uhlmann1986 ; Nielsen2000 [a state on an enlarged Hilbert space is called a purification of the density matrix $\rho$ if tracing out the auxiliary degrees of freedom recovers $\rho$], which can be encoded in matrix product form Verstraete2004-6 ; Zwolak2004-93 . First, this was successfully applied to study static finite-temperature properties of quantum spin chains Feiguin2005-72 ; Barthel2005 . The combination with real-time tDMRG allows for the accurate evaluation of finite-temperature response functions and can be applied to compute spectral functions and to study a variety of experimentally relevant systems Barthel2009-79b ; Feiguin2010-81 ; Karrasch2012-108 ; Barthel2012_12 ; Barthel2013-15 ; Karrasch2013-15 ; Karrasch2013-87 ; Lake2013-111 ; Huang2013-88 . For such purposes, purifications can similarly be combined with a Chebyshev expansion Tiegel2014-90 .
Despite their success, simulations based on purifications are often limited with respect to the reachable inverse temperatures, times, or Krylov expansion orders in frequency domain approaches. This is due to a growth of entanglement, which is accompanied by a corresponding growth of computation costs. The search for complementary approaches led to an algorithm that avoids the direct encoding of the mixed states: Instead of purifying the density matrix, one can sample cleverly chosen pure states, so-called minimally entangled typical thermal states (METTS) White2009-102 ; Stoudenmire2010-12 . While they can be efficiently encoded in matrix product form as their entanglement is relatively low, they represent well the thermal properties of the system at hand. The METTS algorithm has been successfully applied to study static properties and quantum quenches at finite temperature Yao2012 ; Alvarez2013-87 ; Bonnes2014-113 . However, a thorough analysis of its accuracy and efficiency compared to computations using matrix product purifications of the density matrices was lacking.
The extreme cases of infinite and zero temperature are relatively easy to understand. Purifications should prevail at higher temperatures, while METTS should become favorable at lower temperatures when the ground state is approached. The infinite-temperature purification can be written exactly as an MPS of bond dimension 1. This is clearly simpler and more efficient than averaging over several METTS samples (each having in this case the same probability). The argument prevails for high but finite temperatures. As the thermal state of the XXZ model can, at sufficiently high temperatures, be represented accurately as a purification of small bond dimension, the evaluation of an arbitrary product operator is then cheap. In contrast, the number of METTS samples required for a precise estimate should scale exponentially with the size of the spatial support of the considered product operator. On the other hand, at zero temperature and for a system without ground-state degeneracy, (almost) every METTS is simply the ground state $|\psi_0\rangle$. So, one only needs to produce a single METTS with the ground-state bond dimension $m$. In contrast, the zero-temperature purification would correspond to the projector $|\psi_0\rangle\langle\psi_0|$ and have bond dimension $m^2$. Thus, in the zero-temperature limit, METTS are clearly more efficient.
In this work, we discuss how self-averaging can be used to moderately reduce statistical errors in the METTS algorithm and introduce schemes for the evaluation of response functions using METTS. We compare the accuracies and computation costs of the METTS and purification approaches for the evaluation of finite-temperature correlation and response functions. We focus on two paradigmatic models of interacting quantum systems, namely the spin-1/2 XXZ chain Bethe1931 ; Cloizeaux1966-7 ; Mikeska2004 and the 1D Bose-Hubbard model Kuehner1998-58 ; Jaksch1998-81 at critical as well as non-critical points of their phase diagrams. In contrast to indications and expectations expressed in the earlier literature White2009-102 ; Schollwoeck2009-2 ; Bonnes2014-113 , for almost all cases considered here, we find that, for the same computation cost, the purification approach yields more accurate results than METTS – often by orders of magnitude. METTS become more efficient only for temperatures well below the energy gap of the system. It would be interesting to investigate further whether other approaches, such as computations based on the ground state and a few excited states, which can be determined variationally, could outperform the METTS approach for such very low temperatures. For the comparisons, we always use equal total computation costs for both methods, ignoring that METTS simulations can be parallelized more easily than purification simulations by generating independent Markov chains on different computing nodes. This can be taken into account by keeping in mind that, to reduce the presented METTS errors by an order of magnitude, one needs to increase the number of employed computing nodes by at least a factor of 100.
The article is structured as follows. Sections II and III briefly review the algorithms for computing static observables with purifications and METTS, respectively, and discuss the interplay of statistical and truncation errors for METTS (Sec. III.2) as well as self-averaging (Sec. III.3). In section V, we introduce, for METTS, a simple scheme and two more elaborate schemes for the computation of response functions, which are analogous to corresponding schemes based on purifications Barthel2009-79b ; Karrasch2012-108 ; Barthel2012_12 . The main objective of the paper, the efficiency comparison of METTS and purifications, is presented in section IV for static correlation functions in the spin-1/2 XXZ chain and the Bose-Hubbard model, and in section VI for response functions in the XXZ model. Some technical issues are described in the appendices. We summarize and conclude in section VII.
II Matrix product purifications
Let us briefly review how to compute finite-temperature expectation values using matrix product purifications. Here, we work with the canonical ensemble $\rho_\beta = e^{-\beta H}/Z_\beta$ and the partition function $Z_\beta = \mathrm{Tr}\, e^{-\beta H}$.
A state $|\rho\rangle \in \mathcal{H} \otimes \mathcal{H}_{\mathrm{aux}}$ is called a purification of the density matrix $\rho$ on $\mathcal{H}$ if

$\mathrm{Tr}_{\mathrm{aux}}\, |\rho\rangle\langle\rho| = \rho.$   (1)
Choosing the auxiliary Hilbert space isomorphic to the physical Hilbert space $\mathcal{H}$, i.e., $\mathcal{H}_{\mathrm{aux}} \simeq \mathcal{H}$, it is simple to give a purification of the infinite-temperature state $\rho_0 \propto \mathbb{1}$. It is

$|\rho_0\rangle = \bigotimes_i \frac{1}{\sqrt{d}} \sum_\sigma |\sigma\rangle_i \otimes |\sigma\rangle_{\bar{i}},$   (2)
where $\{|\sigma\rangle_i\}$ are orthonormal basis states for lattice site $i$, with local dimension $d$, and $\{|\sigma\rangle_{\bar{i}}\}$ for the corresponding lattice site $\bar{i}$ of the auxiliary system. For the orthonormal basis $\{|\sigma\rangle\}$ of $\mathcal{H}$, let $|A\rangle$ denote the vectorization of an operator $A$ on $\mathcal{H}$ such that

$|A\rangle = \sum_{\sigma,\sigma'} \langle\sigma| A |\sigma'\rangle\, |\sigma\rangle \otimes |\sigma'\rangle_{\mathrm{aux}}.$   (3)
Because $|\rho_0\rangle$, as given in Eq. (2), is a product state, it can be encoded as an MPS with matrices of bond dimension one (cf. appendix A). With $|\rho_0\rangle$ as the initial state, one can employ imaginary-time evolution to obtain purifications of the finite-temperature states $\rho_\beta$,

$|\rho_\beta\rangle = \left( e^{-\beta H/2} \otimes \mathbb{1} \right) |\rho_0\rangle.$   (4)
To this purpose, one can employ the time-dependent DMRG algorithm (tDMRG) White2004 ; Daley2004 or the almost identical time-evolved block decimation (TEBD) Vidal2003-10 . Specifics of our simulations are summarized in appendix A. Exploiting that $\mathrm{Tr}_{\mathrm{aux}}\, |\rho_\beta\rangle\langle\rho_\beta| \propto e^{-\beta H}$,
thermal expectation values can be computed by

$\langle A \rangle_\beta = \frac{ \langle\rho_\beta| \left( A \otimes \mathbb{1} \right) |\rho_\beta\rangle }{ \langle\rho_\beta|\rho_\beta\rangle },$   (5)
where both physical and auxiliary degrees of freedom are summed over.
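To make Eqs. (4) and (5) concrete, the following dense-matrix Python sketch (a toy check with exact linear algebra, not an MPS code; all variable names are our own) verifies for a small XX chain that the purification expectation value coincides with the direct thermal trace.

```python
import numpy as np

# Spin-1/2 operators and a small XX chain (a toy stand-in for Eq. (11) with Delta = 0)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def kron_chain(ops):
    """Tensor product of a list of single-site operators."""
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

L, beta = 3, 1.0
dim = 2 ** L
H = sum(kron_chain([I2] * j + [s, s] + [I2] * (L - 2 - j))
        for j in range(L - 1) for s in (sx, sy))

# e^{-beta H / 2} from the eigendecomposition of the Hermitian H
E, V = np.linalg.eigh(H)
rho_half = V @ np.diag(np.exp(-beta * E / 2)) @ V.conj().T

# Purification |rho_beta> = vec(e^{-beta H/2}) lives on H x H_aux;
# (A x 1) acts on the physical factor only, tracing out H_aux recovers rho_beta
purif = rho_half.reshape(-1)                 # row-major vectorization
A = kron_chain([sz, sz, I2])                 # observable: Sz_1 Sz_2
num = purif.conj() @ np.kron(A, np.eye(dim)) @ purif
Z = (purif.conj() @ purif).real              # <rho|rho> = Tr e^{-beta H}
thermal_purif = (num / Z).real               # Eq. (5)

# Reference: direct thermal trace Tr(rho A) / Tr(rho)
rho = rho_half @ rho_half
thermal_exact = (np.trace(rho @ A) / np.trace(rho)).real
```

The inner product of the vectorized operators reproduces the trace, $\mathrm{vec}(M)^\dagger (A\otimes\mathbb{1})\, \mathrm{vec}(M) = \mathrm{Tr}(M^\dagger A M)$, which is why no explicit sum over the auxiliary sites appears.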
III METTS sampling
III.1 Algorithm for static observables
The strategy employed in the minimally entangled typical thermal states (METTS) algorithm is to approximate thermal expectation values by sampling pure quantum states that have two favorable properties. They represent well the physical properties of the system for the given temperature, and the entanglement of the states is relatively low, which makes DMRG calculations efficient. Expressing the trace for the thermal expectation value using some orthonormal basis of product states

$|n\rangle = |n_1\rangle \otimes |n_2\rangle \otimes \cdots \otimes |n_L\rangle,$   (6)
where $\{|n_i\rangle\}$ are (arbitrary) orthonormal basis states for lattice site $i$, we have

$\langle A \rangle_\beta = \frac{1}{Z_\beta} \sum_n \langle n|\, e^{-\beta H/2}\, A\, e^{-\beta H/2}\, |n\rangle.$   (7)
Defining the METTS $|\phi_n\rangle$ and their probabilities $p_n$,

$|\phi_n\rangle := \frac{1}{\sqrt{P_n}}\, e^{-\beta H/2} |n\rangle \quad\text{and}\quad p_n := \frac{P_n}{Z_\beta} \quad\text{with}\quad P_n = \langle n|\, e^{-\beta H}\, |n\rangle,$   (8)
the thermal average reads

$\langle A \rangle_\beta = \sum_n p_n\, \langle\phi_n| A |\phi_n\rangle.$   (9)
Thus, by sampling the states $|\phi_n\rangle$ according to the probabilities $p_n$, we can approximate $\langle A \rangle_\beta$ by averaging over the measured values $\langle\phi_n| A |\phi_n\rangle$. The computation cost of DMRG is directly related to the entanglement of the quantum state (see appendix C). Hence, product states (6) are a natural choice because their entanglement entropy is zero and it remains reasonably low during imaginary-time evolution. The sampling is accomplished efficiently by generating a Markov chain of METTS as illustrated in Fig. 1. An arbitrary initial product state $|n\rangle$ is evolved in imaginary time to obtain the METTS $|\phi_n\rangle$. (See appendix A for details on the tDMRG evolution.) Then, the METTS is collapsed through a projective measurement with measurement basis (6), yielding a new product state $|n'\rangle$ with probability $T_{n\to n'} = |\langle n'|\phi_n\rangle|^2$, from which one subsequently computes the METTS $|\phi_{n'}\rangle$ and so on. The transition probabilities obey the detailed balance condition

$p_n\, T_{n\to n'} = p_{n'}\, T_{n'\to n}$   (10)
such that the desired distribution is indeed the fixed point of this Markov process.
Note that the projective measurement, $|\phi_n\rangle \to |n'\rangle$, can be carried out sequentially, site by site. Starting at some site $i$, we go from $|\phi_n\rangle$ to $\hat{P}_{n'_i}|\phi_n\rangle / \| \hat{P}_{n'_i}|\phi_n\rangle \|$ with probability $\| \hat{P}_{n'_i}|\phi_n\rangle \|^2$, where $\hat{P}_{n'_i} = |n'_i\rangle\langle n'_i|$ projects site $i$ onto the basis state $|n'_i\rangle$. Measuring subsequently on site $i+1$, we go to the correspondingly collapsed state with the conditional probability for the outcome $n'_{i+1}$ such that, in the end, we arrive at state $|n'\rangle$ indeed with probability $|\langle n'|\phi_n\rangle|^2$.
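The sequential collapse can be checked directly with dense states. The sketch below (our own minimal illustration; `sequential_prob` is a hypothetical helper) verifies that the product of the conditional single-site probabilities equals the full projective-measurement probability $|\langle n|\phi\rangle|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 3
phi = rng.normal(size=2 ** L) + 1j * rng.normal(size=2 ** L)
phi /= np.linalg.norm(phi)               # a random normalized 3-site state

def sequential_prob(phi, outcome):
    """Collapse sites left to right; return the product of conditional probabilities."""
    psi, prob = phi.reshape(2, -1), 1.0
    for j, s in enumerate(outcome):
        p = np.linalg.norm(psi[s]) ** 2  # probability of outcome s on this site
        prob *= p
        psi = psi[s] / np.sqrt(p)        # normalized conditional state of the rest
        if j < len(outcome) - 1:
            psi = psi.reshape(2, -1)
    return prob

# The sequential procedure reproduces |<n|phi>|^2 for every product state |n>
for n in range(2 ** L):
    bits = [(n >> (L - 1 - j)) & 1 for j in range(L)]
    assert abs(sequential_prob(phi, bits) - abs(phi[n]) ** 2) < 1e-12
```

In an MPS implementation the same conditional probabilities are obtained from local reduced density matrices, so the full state never needs to be formed.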
In order to ensure ergodicity and reduce autocorrelation times, it is useful to switch between different measurement bases during the sampling. Details on this and our corresponding choice are described in appendix B.
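The decomposition (8)-(9) can itself be verified exactly on a small system: enumerating all collapse-basis states and weighting $\langle\phi_n|A|\phi_n\rangle$ with $p_n$ must reproduce the thermal average. The following dense toy check (our own illustration, not an MPS code) does this for a 3-site XX chain.

```python
import numpy as np

# Dense-matrix check of the METTS decomposition: with p_n = <n|e^{-beta H}|n>/Z
# and |phi_n> = e^{-beta H/2}|n>/||..||, the p_n-weighted average of
# <phi_n|A|phi_n> over ALL product basis states equals the thermal average.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

L, beta = 3, 2.0
dim = 2 ** L
H = sum(kron_chain([I2] * j + [s, s] + [I2] * (L - 2 - j))
        for j in range(L - 1) for s in (sx, sy))
E, V = np.linalg.eigh(H)
exp_mH = lambda tau: V @ np.diag(np.exp(-tau * E)) @ V.conj().T

A = kron_chain([sz, sz, I2])                 # observable Sz_1 Sz_2
rho = exp_mH(beta)
Z = np.trace(rho).real
thermal_exact = (np.trace(rho @ A) / Z).real

half = exp_mH(beta / 2)
metts_avg, p_tot = 0.0, 0.0
for n in range(dim):                         # enumerate the collapse basis
    e_n = np.zeros(dim); e_n[n] = 1.0
    p_n = (e_n @ rho @ e_n).real / Z         # METTS probability p_n, Eq. (8)
    phi = half @ e_n
    phi /= np.linalg.norm(phi)               # METTS |phi_n>
    metts_avg += p_n * (phi.conj() @ A @ phi).real
    p_tot += p_n
```

In the actual algorithm the sum over $n$ is, of course, replaced by Markov-chain sampling; the point here is only that the identity (9) holds exactly.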
III.2 Errors: Statistics, truncations, and Trotter
There are two error sources for the evolution of MPS in the framework of tDMRG as described in appendix A. The first is due to truncations of low-weight terms in the Schmidt decomposition of the wave function. This error is well controlled by the truncation threshold (chosen separately for purifications and for METTS), which bounds, in every time step, the two-norm deviation of the truncated MPS from the exactly evolved state. We implement tDMRG using fourth-order Trotter-Suzuki decompositions of the evolution operators, with separate time-step sizes for purifications and for METTS. These decompositions are the second error source. The resulting Trotter errors can only become relevant for very large times, and we made sure that they are never dominant for the presented data. Additionally, the accuracy of the METTS sampling algorithm is influenced by a third error source – the statistical error that depends on the number of samples used for averaging.
Figure 2 illustrates the interplay of statistical errors and truncation errors in a METTS computation of a spin correlator for the exactly solvable 1D XX model [$\Delta = 0$ in Eq. (11)] at a fixed inverse temperature, where the reference site is at the center of the chain. The exact solution is briefly described in appendix D. The top panel shows the convergence of the METTS algorithm for different truncation thresholds as a function of the number of samples $N$. For low sample numbers, the statistical error dominates and the truncation error is negligible. Autocorrelation times between subsequent samples are short, and the statistical error is to a good approximation proportional to $1/\sqrt{N}$ and independent of the truncation threshold. Once the sample number reaches a certain threshold-dependent value, the statistical error has reduced to a magnitude that is comparable to the error induced by the truncations. The curves begin to level out as the relative contribution of the truncation error grows. At a certain point, further samples will not enhance the accuracy of the simulation as the truncations prevent further convergence to the correct value. Of course, the less we truncate (the lower the truncation threshold is), the longer the $1/\sqrt{N}$ convergence persists.
The truncation affects the accuracy in two ways. As every METTS is approximated by a truncated MPS, the values one obtains for each sample are not exact. Additionally, the produced samples will not correspond exactly to the correct probability distribution of Eq. (9). This is because the transition probabilities depend on the samples and are thus also affected by the truncations.
While lowering the truncation threshold yields more accurate results, it also increases the computation cost per sample. In the lower panel of Fig. 2, we present the same errors of the METTS algorithm as in the top panel, but here as a function of the total computation cost, which we quantify in an implementation-independent way as described in appendix C. This shifts the simulations with lower truncation thresholds, and thus more costly samples, to the right. Whereas, for a fixed number of samples (top panel), the accuracy is a monotonic function of the truncation threshold, this is not necessarily so for fixed total computation cost (lower panel). When plotted against the total computation cost, accuracy curves of METTS simulations with different truncation thresholds have crossings. For practical simulations, it is therefore important to choose the truncation threshold such that the two error sources are balanced. In the lower panel of Fig. 2, one can easily read off the truncation threshold that is optimal for a given computation cost. While the optimal truncation threshold depends on the specific system studied and the observable that is evaluated, it generally shifts towards lower values with increasing total computation cost.
III.3 Exploiting self-averaging
If the considered model is translation invariant, we are free to choose an arbitrary position in the lattice for the evaluation of an observable. While the average of these expectation values converges to the correct result independent of the position, a single METTS sample itself is not translation invariant. We can thus exploit self-averaging to reduce statistical errors in the METTS algorithm. For a correlation length $\xi$, averaging a local observable over a block of $\ell$ sites corresponds, for high and intermediate temperatures, to roughly $\ell/\xi$ statistically independent samples. Therefore, we can expect the statistical METTS errors to be reduced by a factor of order $\sqrt{\ell/\xi}$. For systems with open boundary conditions, one has to restrict the averaging to sites with a sufficient distance from the boundaries.
We illustrate the effect for the spin-1/2 XXZ chain in Fig. 3, by averaging the correlator over different numbers of central sites and comparing the result to quasi-exact purification data (see appendix D). The observed error reduction is of the expected order of magnitude. For the gapped system at the lower temperature, the impact of self-averaging is rather small for short distances. This is due to the fact that, in this case, the temperature is already well below the gap (cf. Table 1) and all excitations occurring in the METTS are of long wavelength. Hence, short-range correlations in the METTS are almost translation invariant.
As the additional computation cost for the spatial averaging of time-local observables (in equilibrium or quench dynamics) is negligible, it is advisable to enhance the METTS precision through self-averaging whenever finite-size effects are well controlled. Spatial averaging in the evaluation of a response function would, however, require additional real-time evolutions and hence does not seem useful.
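The variance reduction from spatial averaging can be made explicit on the exact METTS ensemble of a tiny chain. The sketch below (a dense toy model of our own; sites and block choices are illustrative) compares the sample-to-sample variance of a single-site observable with that of its average over two central sites.

```python
import numpy as np

# Variance reduction from spatial averaging, checked on the exact METTS
# ensemble of a 4-site XX chain (dense toy model)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

L, beta = 4, 1.0
dim = 2 ** L
H = sum(kron_chain([I2] * j + [s, s] + [I2] * (L - 2 - j))
        for j in range(L - 1) for s in (sx, sy))
E, V = np.linalg.eigh(H)
rho = V @ np.diag(np.exp(-beta * E)) @ V.conj().T
Z = np.trace(rho).real
half = V @ np.diag(np.exp(-beta * E / 2)) @ V.conj().T

Sz = [kron_chain([I2] * j + [sz] + [I2] * (L - 1 - j)) for j in range(L)]

# Per-sample expectation values of Sz on one central site and of the
# spatial average over the two central sites
probs, single, block = [], [], []
for n in range(dim):
    e_n = np.zeros(dim); e_n[n] = 1.0
    p_n = (e_n @ rho @ e_n).real / Z
    phi = half @ e_n
    phi /= np.linalg.norm(phi)
    vals = [(phi.conj() @ S @ phi).real for S in Sz]
    probs.append(p_n)
    single.append(vals[1])            # site 2 (central)
    block.append(np.mean(vals[1:3]))  # average over the two central sites

probs, single, block = map(np.array, (probs, single, block))
var_single = probs @ (single - probs @ single) ** 2
var_block = probs @ (block - probs @ block) ** 2
```

Since the two central sites are related by reflection symmetry, the block variance is bounded by the single-site variance; in the uncorrelated limit it would be reduced by the full factor $1/2$.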
IV Comparison for static correlators
IV.1 Procedures to compare efficiencies
In this section, we compare the efficiencies of METTS and matrix product purifications by studying accuracies of static thermal correlation functions for fixed computation costs. To this purpose, accuracies are quantified by the deviations of the obtained expectation values from exact or quasi-exact data as described in appendix D. For the error of METTS, we generate several sets of subsequent samples and take the root mean square of the average deviations from the reference data in each set. The computation cost due to evolving MPS in imaginary time (and also real time in section V) is quantified as a function of the MPS bond dimensions, in a fashion that is largely independent of the chosen implementation, time step, and platform, as detailed in appendix C. For METTS, we average the computation costs of the sample sets.
To take the interplay between statistical and truncation errors (section III.2) into account in the assessment of the performances of METTS and purifications, we proceed as follows. We choose a truncation threshold for the purification and determine the theoretical computation cost of the simulation. Then we produce METTS samples using different truncation thresholds and fix the sample set size for each threshold by the quotient of the purification cost and the average cost per METTS sample such that, for each truncation threshold, the total computation costs of both methods are equal. In the figures, we only present the results for the truncation threshold that yields the best results, i.e., the one that approximately minimizes the error for fixed computation cost, as well as the results for two nearby threshold values.
Setting the total computation costs of both methods equal ignores that METTS simulations can be parallelized more easily than purification simulations, by generating independent Markov chains on different computing nodes. However, this can be easily taken into account. To reduce the METTS errors, as presented in the following, by an order of magnitude, one has to increase the number of employed computing nodes by at least a factor of 100. This assumes that the extent of thermalization phases at the beginnings of the Markov chains is negligible, and the factor 100 is a lower bound because one also needs to decrease the truncation threshold (hence increasing the computation cost per METTS) when the statistical error is being reduced.
IV.2 Spin-1/2 XXZ chain
The Hamiltonian of the spin-1/2 XXZ chain reads

$H = J \sum_j \left( S^x_j S^x_{j+1} + S^y_j S^y_{j+1} + \Delta\, S^z_j S^z_{j+1} \right)$   (11)

with anisotropy parameter $\Delta$. Let us consider three values of the anisotropy parameter: the exactly solvable non-interacting case $\Delta = 0$ and the isotropic Heisenberg antiferromagnet at $\Delta = 1$, which are both critical (gapless), as well as a point with $\Delta > 1$ in the gapped Néel phase. The lattice with open boundary conditions has a fixed size, and the reference site is at the center of the chain. We apply the METTS and purification algorithms, as described in sections II and III.1, to compute the static correlation function at two different inverse temperatures.
Figure 4 displays the accuracies of both methods for fixed values of the total computation cost. The columns refer to the three values of the anisotropy parameter $\Delta$. The top panels present the absolute value of the quasi-exact correlation function. The panels below display the errors of the purification simulations for two truncation thresholds and the errors of several METTS simulations with different truncation thresholds. All curves that appear within a panel and refer to the same temperature are based on simulations of equal total computation costs. For each panel, the METTS truncation thresholds are chosen such that, for the largest threshold, the truncation error dominates; for the lowest, the statistical error dominates; and for the intermediate one, statistical and truncation errors are balanced in an optimal way such that the error is (approximately) minimized for the given computation cost.
For almost all parameters, the matrix product purification simulations yield more accurate results than the best METTS computations. In many cases, the errors of the methods differ by up to a few orders of magnitude. When lowering the temperature, entanglement and correlation lengths increase. The increased absolute value of the correlation function is reflected in a correspondingly larger absolute error for the simulations based on purifications. The efficiency of the METTS sampling can increase when lowering the temperature, especially when it gets sufficiently below the energy gap. In this case, the dimension of the relevant state space to be sampled by METTS is strongly reduced and hence so is the statistical error. This is confirmed for the lower temperature in Fig. 4. For the critical (gapless) systems, the purification approach is still more accurate. For the gapped system, the lower temperature is already substantially below the energy gap (cf. Table 1), and METTS can in this case indeed outperform the matrix product purification.
In the introduction, we have already explained why one should expect purifications to be more efficient than METTS at higher temperatures. That METTS can become more efficient at low temperatures is most obvious in the zero-temperature limit. In this case, every METTS for which the initial state has nonzero overlap with the ground state will simply be this ground state $|\psi_0\rangle$ (up to truncation errors). The purification, on the other hand, evolves to the purification of the ground-state density matrix $|\psi_0\rangle\langle\psi_0|$. As the tensor product of two MPS of bond dimension $m$ is an MPS with bond dimension $m^2$, for the same accuracy, computation costs for the METTS are reduced by roughly a factor of $m^3$ [computation costs for DMRG simulations with open boundary conditions scale as $O(m^3)$ with the bond dimension $m$].
The error of the purification changes significantly as a function of the distance. Here, the deviation from the (quasi-)exact reference data is mainly due to the truncations. Generally, when we evaluate a correlator based on a truncated matrix product state, the error can grow at short distances, reaching a truncation-dependent maximum, before starting to decay as the absolute value of the correlator itself becomes very small. On the other hand, METTS simulation errors often remain constant at large distances. Even for the exponentially decaying correlation functions, the METTS error stays well above zero. This is clearly a signature of the statistical error. Cases where the METTS error decays with distance are usually situations where the truncation error dominates over the statistical error. For the computation costs chosen in our study, this is seen for temperatures well below the energy gap, but one can also observe this behavior when choosing very large truncation thresholds.
Finally, let us briefly discuss an ergodicity issue in the gapped Néel phase. At the lower temperature, the system is essentially in its ground state. Hence, the weight of the two degenerate Néel states in the thermal state becomes significant. The purified state still obeys the symmetry of the Hamiltonian. Every METTS sample, however, is essentially some linear combination of the degenerate ground states. Depending on the choice of the collapse basis, severe ergodicity problems can occur in the METTS algorithm. As can be shown, even for the rotation-symmetric random bases that we use (described in appendix B), which seem to have very good ergodicity properties at first sight, the transition probabilities from one of the degenerate ground states to the other decay exponentially in the system size. Several remedies are available. One can, for example, avoid this issue by symmetrizing every METTS before measurements. However, in the case of response functions (see section V), this would require an additional real-time evolution and, hence, roughly a doubling of the costs. Alternatively, one could, e.g., alternate between two suitable onsite eigenbases for the METTS collapse. Here, we decided to keep the random collapse bases and implicitly average over the two Néel states by using correspondingly symmetrized observables
such that the measurement is not sensitive to the broken symmetry.
IV.3 1D Bose-Hubbard model
The Hamiltonian of the 1D Bose-Hubbard model reads

$H = -J \sum_j \left( b^\dagger_j b_{j+1} + \mathrm{h.c.} \right) + \frac{U}{2} \sum_j n_j (n_j - 1) - \mu \sum_j n_j$

with ladder operators obeying $[b_i, b^\dagger_j] = \delta_{ij}$, the number operators $n_j = b^\dagger_j b_j$, hopping $J$, onsite interaction $U$, and chemical potential $\mu$.
We fix the interaction and the chemical potential, and consider three values of the hopping parameter: the smallest corresponds to the Mott-insulating phase with one particle per site, the second places the system close to the phase boundary between the Mott-insulating and the superfluid phases, and the largest lies in the superfluid phase. In Fig. 5, we show the comparison between METTS and purifications for the Bose-Hubbard model at two different inverse temperatures. Again, all curves in a given panel that refer to the same temperature share the same computation cost. In general, the results are similar to those for the XXZ model. However, the differences between the METTS and purification errors are larger in the comparison for the Bose-Hubbard model. This can be explained by the smaller energy gaps compared to those in the spin-1/2 XXZ model; see Table 1. For the Bose-Hubbard model at the specified points in the phase diagram, errors of the METTS simulations exceed those of purification simulations by two to three orders of magnitude.
V Algorithms for response functions
We now turn to the evaluation of finite-temperature response functions

$\langle B(t)\, A \rangle_\beta = \mathrm{Tr}\left[ \rho_\beta\, e^{iHt} B\, e^{-iHt} A \right]$   (13)

with $\rho_\beta = e^{-\beta H}/Z_\beta$. For purifications, several schemes have been suggested and analyzed Barthel2009-79b ; Karrasch2012-108 ; Barthel2012_12 ; Barthel2013-15 . They have different properties concerning the dependence of computation costs on the time $t$ and the inverse temperature $\beta$. According to the naming introduced in Refs. Barthel2012_12 ; Barthel2013-15 , we address below computation schemes A, B, and C, introduce corresponding schemes for the METTS framework, and discuss their properties.
In Refs. Barthel2012_12 ; Barthel2013-15 , further optimizable classes of schemes have been studied. In comparison to the near-optimal scheme C, they make it possible to substantially reduce computation costs, for example, for systems with separated energy scales. As there is probably no useful adaptation of them for METTS, they will not be addressed here. In related work, Pižorn et al. Pizorn2014-16 recently discussed the option of working, for matrix product purifications (equivalently, one can think in terms of matrix product operators Barthel2013-15 ), in the Heisenberg picture, i.e., to compute the vectorization of the Heisenberg-evolved operator and the vectorization of the thermal state, and to then obtain the response function from their overlap. One may notice that this is just a special case of the class of optimizable schemes suggested in Eq. (9) of Ref. Barthel2012_12 or Eq. (18) of Ref. Barthel2013-15 . We will not discuss it further because this Heisenberg picture scheme is in general computationally suboptimal and has no direct METTS equivalent. Specifically, the cost analysis in Refs. Barthel2012_12 ; Barthel2013-15 suggests that the Heisenberg picture scheme will typically reach half the maximum times that can be reached by the optimized schemes or the near-optimal scheme C to be described below.
V.1 The simple scheme A
Starting from the matrix product purification [Eq. (4)] of the density matrix $\rho_\beta$, according to evaluation scheme A Barthel2009-79b for the response function (13), we first compute matrix product representations of $e^{-iHt}|\rho_\beta\rangle$ and $e^{-iHt}(A\otimes\mathbb{1})|\rho_\beta\rangle$ using tDMRG. The response function is then given by the matrix element

$\langle B(t)\, A \rangle_\beta = \frac{ \langle\rho_\beta|\, e^{iHt} (B\otimes\mathbb{1})\, e^{-iHt} (A\otimes\mathbb{1})\, |\rho_\beta\rangle }{ \langle\rho_\beta|\rho_\beta\rangle }.$   (14)
The vectorization (3) of operators corresponds to the isomorphism between the space of linear maps on the physical Hilbert space $\mathcal{H}$ and the tensor product space $\mathcal{H}\otimes\mathcal{H}_{\mathrm{aux}}$. It allows us to formulate scheme A equivalently (and more intuitively) in terms of matrix product operators (MPOs). Indicating MPOs with square brackets, scheme A reads in this representation simply

$\langle B(t)\, A \rangle_\beta = \frac{1}{Z_\beta}\, \mathrm{Tr}\left\{ [e^{-iHt} \rho_\beta^{1/2}]^\dagger\, B\, [e^{-iHt} A\, \rho_\beta^{1/2}] \right\}.$   (15)
The METTS equivalent of scheme A is illustrated in Fig. 6. For every sample $|\phi_n\rangle$, we compute

$\langle\phi_n|\, e^{iHt} B\, e^{-iHt} A\, |\phi_n\rangle = \langle\phi_n(t)|\, B\, |(A\phi_n)(t)\rangle$   (16)

with $|\phi_n(t)\rangle = e^{-iHt}|\phi_n\rangle$ and $|(A\phi_n)(t)\rangle = e^{-iHt} A |\phi_n\rangle$,
using real-time evolution, and then average over the values obtained for each sample. In practice, this means that we have to carry out two independent tDMRG simulations, using $|\phi_n\rangle$ and $A|\phi_n\rangle$ as initial states, up to some maximum time. At any intermediate time point that we are interested in, we can evaluate the response function – even for a whole set of operators $B$ if we wish.
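The scheme A prescription can be checked exactly on a small chain: weighting the per-sample matrix elements with the METTS probabilities must reproduce the thermal response function. The following dense sketch (our own toy check with exact linear algebra; operator choices are illustrative) does this for a 3-site XX chain.

```python
import numpy as np

# Scheme A with METTS, dense check: the p_n-weighted average of
# <phi_n(t)|B|(A phi_n)(t)> reproduces the exact <B(t) A> (toy XX chain)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

L, beta, t = 3, 1.0, 0.7
dim = 2 ** L
H = sum(kron_chain([I2] * j + [s, s] + [I2] * (L - 2 - j))
        for j in range(L - 1) for s in (sx, sy))
E, V = np.linalg.eigh(H)
fn_H = lambda f: V @ np.diag(f(E)) @ V.conj().T
rho = fn_H(lambda e: np.exp(-beta * e))
Z = np.trace(rho).real
U = fn_H(lambda e: np.exp(-1j * e * t))      # e^{-iHt}

A = kron_chain([sz, I2, I2])                 # perturbation A = Sz_1
B = kron_chain([I2, sz, I2])                 # probe B = Sz_2

# Exact reference: Tr[rho e^{iHt} B e^{-iHt} A] / Z
exact = np.trace(rho @ U.conj().T @ B @ U @ A) / Z

half = fn_H(lambda e: np.exp(-beta * e / 2))
resp = 0.0 + 0.0j
for n in range(dim):                         # exact sum over the collapse basis
    e_n = np.zeros(dim); e_n[n] = 1.0
    p_n = (e_n @ rho @ e_n).real / Z
    phi = half @ e_n
    phi /= np.linalg.norm(phi)
    ket1 = U @ phi                           # |phi_n(t)>
    ket2 = U @ (A @ phi)                     # |(A phi_n)(t)>
    resp += p_n * (ket1.conj() @ B @ ket2)   # scheme A matrix element, Eq. (16)
```

As in the static case, the exact enumeration over all $n$ stands in for the Markov-chain average; both evolved states are obtained here by applying the same dense propagator that tDMRG would approximate.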
V.2 Scheme B is not useful for METTS
In the context of matrix product purifications, we have the alternative scheme B Karrasch2012-108 , which reads in the MPO representation

$\langle B(t)\, A \rangle_\beta = \frac{1}{Z_\beta}\, \mathrm{Tr}\left\{ [\rho_\beta^{1/2}]^\dagger\, B\, [e^{-iHt} A\, \rho_\beta^{1/2}\, e^{iHt}] \right\}.$   (17)
In comparison to scheme A (15), it corresponds to shifting the real-time evolution operators from the first to the second MPO. In Ref. Barthel2013-15 , it was explained why this scheme has some advantages at higher temperatures and disadvantages at lower temperatures. The METTS equivalent would be to compute

$\langle\phi_n|\, B\, e^{-iHt} A\, e^{iHt}\, |\phi_n\rangle.$   (18)
As exemplified in Fig. 7 and explained in the following, its computation costs are unfortunately strictly higher than those of scheme A (16). Both schemes share the cost for computing the METTS $|\phi_n\rangle$. In scheme A, the evolution of $A|\phi_n\rangle$ has usually about the same cost as that of $|\phi_n\rangle$, and the entanglement in both states typically increases linearly in time everywhere in the system, reaching some value $S(t)$. In scheme B (18), the required forward evolution of $A\, e^{iHt}|\phi_n\rangle$ starts with the high entanglement value $S(t)$. Due to quasi-locality Nachtergaele2007-12a ; Barthel2012-108b , the entanglement will then reduce in regions of the lattice that are at sufficient distance from the spatial support of the operator $A$. It will however remain high or even increase in the vicinity of $A$.
V.3 Reaching longer times with scheme C
Among the classes of optimizable evaluation schemes studied in Refs. Barthel2012_12 ; Barthel2013-15 , the near-optimal scheme C was found to be very useful and reaches about twice the maximum times reachable with schemes A and B. In the MPO representation, it reads

$\langle B(t)\, A \rangle_\beta = \frac{1}{Z_\beta}\, \mathrm{Tr}\left\{ [e^{iHt/2} B^\dagger \rho_\beta^{1/2}\, e^{-iHt/2}]^\dagger\, [e^{-iHt/2} A\, \rho_\beta^{1/2}\, e^{iHt/2}] \right\},$   (19)

where each of the two MPOs is evolved over a time $t/2$ only.
In a similar way, we can compute the response function with METTS, by averaging over

$\langle\phi_n(t/2)|\, B\, e^{-iHt} A\, e^{iHt/2}\, |\phi_n\rangle \quad\text{with}\quad |\phi_n(t/2)\rangle = e^{-iHt/2}|\phi_n\rangle,$   (20)

which makes it possible to reach, for the same computational resources, times approximately twice as large as in scheme A.
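The time-doubling trick behind scheme C can be verified with dense matrices: distributing the real-time evolution over the two operator strings, each over $t/2$ only, still yields the full-time response function, because the thermal state is stationary under the dynamics. The sketch below is our own minimal illustration of this splitting for a toy XX chain (the precise MPO formulation is that of the cited references).

```python
import numpy as np

# Dense check of the scheme-C splitting: both operator strings are evolved
# over t/2 only, yet the overlap equals <B(t) A> at the full time t.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.diag([0.5, -0.5]).astype(complex)
I2 = np.eye(2)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

L, beta, t = 3, 1.0, 0.9
dim = 2 ** L
H = sum(kron_chain([I2] * j + [s, s] + [I2] * (L - 2 - j))
        for j in range(L - 1) for s in (sx, sy))
E, V = np.linalg.eigh(H)
fn_H = lambda f: V @ np.diag(f(E)) @ V.conj().T
rho_half = fn_H(lambda e: np.exp(-beta * e / 2))
Z = np.trace(rho_half @ rho_half).real
U = lambda s: fn_H(lambda e: np.exp(-1j * e * s))   # e^{-iHs}

A = kron_chain([sz, I2, I2])
B = kron_chain([I2, sz, I2])

# Exact reference: <B(t) A> = Tr[rho e^{iHt} B e^{-iHt} A] / Z
exact = np.trace(rho_half @ rho_half @ U(-t) @ B @ U(t) @ A) / Z

# Scheme-C-type strings, each evolved over t/2 only:
M1 = U(t / 2) @ A @ rho_half @ U(-t / 2)            # e^{-iHt/2} (A rho^{1/2}) e^{+iHt/2}
M2 = U(-t / 2) @ B.conj().T @ rho_half @ U(t / 2)   # e^{+iHt/2} (B^dag rho^{1/2}) e^{-iHt/2}
scheme_c = np.trace(M2.conj().T @ M1) / Z
```

The two half-time rotations recombine under the trace, using that $\rho_\beta^{1/2}$ commutes with the propagators; this is the algebraic reason the reachable times roughly double for fixed entanglement resources.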
V.4 Comparison of schemes A, B, and C
Due to the cyclic property of the trace and the (trivial) fact that the evolution operators commute with the density matrix, the results of all three schemes converge, up to differences in truncation and Trotter errors (see appendix A), to the same thermal average (13). As the truncations in the real-time evolutions are well controlled, the errors of the simulations based on different schemes do not differ significantly if all other parameters are left unchanged. Statistical errors can however differ somewhat [for operators and Hamiltonians that are real in the chosen onsite basis, the distribution of individual METTS measurement results in schemes A and B will be the same, as the observable evaluated in scheme A is then just the transpose of that evaluated in scheme B; in general, the distributions of individual measurement results and hence the statistical errors of schemes A, B, and C can differ]. Figure 7 compares the computation costs of the different evaluation schemes in the METTS algorithm for an autocorrelation function in the isotropic Heisenberg antiferromagnet [$\Delta = 1$ in Eq. (11)]. While scheme B proves to be the least efficient for the parameters considered here, scheme C indeed reduces the computation costs significantly and thus allows for the evaluation of longer maximum times with METTS.
Nevertheless, we choose to use scheme A in this work because it makes it possible to evaluate response functions for a whole set of operators and all times (up to the maximum reachable time) with only two tDMRG runs per METTS sample. This is very useful for the analysis of response functions like those studied in section VI. In contrast, scheme C requires, for every METTS sample, separate tDMRG runs for every required time and operator. This is also a drawback in comparison to scheme C for purifications [Eq. (19)], where one obtains results for all times with only two tDMRG runs.
VI Response in the spin-1/2 XXZ chain
Figure 8 compares the accuracies of the response function in the XXZ chain, as computed via METTS and purification simulations, at fixed evolution times. The structure of the plots is as in Fig. 4, i.e., the columns again refer to the three values of the anisotropy parameter already considered in section IV.2, and all curves that appear within a panel and refer to the same temperature are based on simulations with the same total computation cost. As the thermal response functions are complex-valued, we show both the real and the imaginary part of the (quasi-)exact results.
Generally, the error curves closely resemble the results we obtained for the computation of static correlators in Fig. 4. The purifications are more efficient than METTS, except for the gapped system at the lowest temperature considered.
Figure 9 displays the errors and computation costs of both methods as functions of time in the evaluation of response functions at a fixed inverse temperature. Each column refers to one truncation threshold for the METTS algorithm and shows the errors (top) and computation costs (bottom) for different numbers of samples. The purification errors and costs for three different truncation thresholds are shown in each column.
The convergence of the METTS errors is clearly reminiscent of the behavior shown in Fig. 2. As a function of the number of samples N, the error is approximately proportional to 1/\sqrt{N}, until it saturates at an error floor that can be lowered by decreasing the truncation threshold.
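This interplay between the statistical 1/\sqrt{N} decay and a truncation-induced error floor can be illustrated with a toy sampling model. The bias and noise level below are illustrative assumptions, not data from the simulations:

```python
import numpy as np

# Toy model of METTS error convergence: each "measurement" is the true
# value plus statistical noise, shifted by a small systematic bias that
# stands in for the DMRG truncation error.  All numbers are illustrative.
rng = np.random.default_rng(0)
true_value, bias, noise_std = 0.0, 0.01, 1.0

def estimate_error(n_samples):
    samples = true_value + bias + noise_std * rng.standard_normal(n_samples)
    return abs(samples.mean() - true_value)

errors = {n: estimate_error(n) for n in (10**2, 10**4, 10**6)}
# For small n the error falls off roughly as 1/sqrt(n); for large n it
# saturates at the bias floor (~0.01 here), which, in the DMRG context,
# can only be lowered by decreasing the truncation threshold.
```

For small sample numbers the statistical contribution dominates; once the 1/\sqrt{n} term drops below the bias, adding further samples no longer reduces the error.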
The exponentially growing computation costs in both the METTS and the matrix product purification approach limit the reachable maximum times; at least qualitatively, METTS thus seem to have no favorable properties in this respect.
VII Conclusions and discussion
We have studied properties of the METTS algorithm for the simulation of strongly correlated systems at finite temperatures. There is an interplay of statistical and DMRG truncation errors that one should take into account for efficient simulations. While the optimal truncation threshold for METTS depends on the specific system studied and the observable that is evaluated, it generally shifts towards lower values with increasing total computation cost. As demonstrated, one can also exploit the self-averaging of (approximately) translation-invariant systems to reduce statistical errors in METTS simulations of static observables.
We have presented a simple scheme for the evaluation of response functions using METTS and two more elaborate schemes, one of which gives access to longer maximum times but needs a separate simulation for every required point in time.
For spin-1/2 XXZ chains and the 1D Bose-Hubbard model, we have compared the accuracies and computation costs of the METTS and purification approaches for the evaluation of finite-temperature correlation and response functions. For almost all cases considered here, we found, in contrast to indications and expectations expressed in the earlier literature, that, for the same total computation cost, the purification approach yields more accurate results than METTS, often by orders of magnitude. METTS become more efficient only for temperatures well below the energy gap of the system. For both methods, we have discussed the temperature dependence of the accuracies and, for correlators, also their distance dependence.
It would be interesting to investigate further whether other DMRG approaches for the low-temperature regime, such as computations based on the ground state and a few excited states, or a sampling restricted to the complement of the ground-state space, could outperform METTS in these cases.
Lastly, note that the evolution under time-dependent Hamiltonians, starting from thermal initial states, can also be simulated with both purifications and METTS, in complete analogy to the case of pure states. To this end, one can, for example, employ a Magnus expansion of the time-evolution operator Alvermann2011-230 .
Acknowledgements. We gratefully acknowledge discussions with G. Roux, U. Schollwöck, S. R. White, and M. Zvonarev.
Appendix A Truncations in our tDMRG simulations
The results for the observables and computation costs presented here are essentially independent of the chosen time-evolution algorithm. We use time-dependent DMRG (tDMRG) Vidal2003-10 ; White2004 ; Daley2004 and employ a Trotter-Suzuki decomposition Trotter1959 ; Suzuki1976 of fourth order, i.e., we approximate the time-evolution operator for a (real or imaginary) time step \Delta t according to
U(\Delta t) \approx \prod_k e^{-i a_k H_\text{odd} \Delta t} \, e^{-i b_k H_\text{even} \Delta t}
with suitable coefficients a_k and b_k. In this expression, H_\text{odd} and H_\text{even} contain all Hamiltonian terms on odd and even bonds of the lattice, respectively, such that H = H_\text{odd} + H_\text{even}. As all bond terms contained in H_\text{odd} are mutually commuting, it is simple to apply the unitaries e^{-i a_k H_\text{odd} \Delta t} (and analogously those for H_\text{even}) to matrix product states (MPS), which have the form
|\psi\rangle = \sum_{\sigma_1, \dots, \sigma_L} A_1^{\sigma_1} A_2^{\sigma_2} \cdots A_L^{\sigma_L} \, |\sigma_1, \dots, \sigma_L\rangle.
Here, n = 1, \dots, L runs over all lattice sites, and each A_n^{\sigma_n} is an M_{n-1} \times M_n matrix. The dimensions M_n are the so-called bond dimensions, and the bond dimensions at the left and right ends of the chain are M_0 = M_L = 1, such that the matrix product evaluates to a scalar.
When a time-step evolution operator such as e^{-i a_k H_\text{odd} \Delta t} is applied to the MPS, the bond dimensions increase, partly for technical reasons, because the resulting state needs to be brought back to MPS form, and partly because the entanglement in the state can grow. States with higher entanglement generally require larger bond dimensions. Hence, it is necessary to compress the matrix product representation, i.e., to make a controlled approximation such that the bond dimensions are reduced again to some extent. This can be achieved through a Schmidt decomposition of the state Nielsen2000 , which boils down to singular value decompositions of the MPS tensors. The reduced density matrices for the left and right parts of the system share the same nonzero spectrum, given by the squares \lambda_k^2 of the Schmidt coefficients. The bond dimension for the given bipartition of the lattice is then reduced to some smaller value M by retaining only the M largest Schmidt coefficients and truncating all smaller ones.
The resulting two-norm error is given by the discarded weight,
\| |\psi\rangle - |\psi_\text{trunc}\rangle \|^2 = \sum_{k > M} \lambda_k^2,
where \lambda_k are the Schmidt coefficients in descending order and M is the number of retained coefficients.
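This relation between truncation and two-norm error can be checked directly in a small NumPy sketch: for a random bipartite state written as a matrix, the squared norm distance to its rank-M SVD truncation equals the sum of the discarded squared singular values (Schmidt coefficients).

```python
import numpy as np

# Check of the truncation error formula on a random bipartite state:
# truncate its SVD to rank M and compare the squared two-norm error
# with the discarded Schmidt weight.
rng = np.random.default_rng(1)
psi = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
psi /= np.linalg.norm(psi)               # normalized state |psi>

U, s, Vh = np.linalg.svd(psi, full_matrices=False)
M = 3                                    # retained bond dimension
psi_trunc = (U[:, :M] * s[:M]) @ Vh[:M, :]

err_sq = np.linalg.norm(psi - psi_trunc)**2
discarded_weight = np.sum(s[M:]**2)      # sum of discarded lambda_k^2
```

Since the singular values are the Schmidt coefficients of the bipartition, err_sq and discarded_weight coincide up to rounding.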
In our simulations, we control truncation errors by discarding, in every step of the algorithm, only components with density matrix eigenvalues below a predefined truncation threshold. In this way, the bond dimensions become functions of the (inverse) temperature and time, M_n(\beta, t). Separate truncation thresholds are used for the purifications and for the METTS; the employed values, as well as the time-step sizes for the imaginary- and real-time evolutions, are specified in the results sections and the corresponding figures.
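The fourth-order scaling of such a Trotter-Suzuki decomposition can be verified for small matrices. The sketch below composes second-order (Strang) splitting steps using the Forest-Ruth coefficients, one common fourth-order choice; it is an illustration only, and the coefficients used in the actual simulations may differ.

```python
import numpy as np

def expm_herm(H, t):
    """Exact exp(-i H t) for a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def strang(A, B, h):
    """Second-order (Strang) splitting step for exp(-i (A+B) h)."""
    return expm_herm(A, h / 2) @ expm_herm(B, h) @ expm_herm(A, h / 2)

def fourth_order(A, B, h):
    """Fourth-order composition of Strang steps (Forest-Ruth coefficients)."""
    theta = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    return (strang(A, B, theta * h)
            @ strang(A, B, (1 - 2 * theta) * h)
            @ strang(A, B, theta * h))

# Two random Hermitian pieces standing in for the odd- and even-bond parts.
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4)); A = (X + X.T) / 2
Y = rng.standard_normal((4, 4)); B = (Y + Y.T) / 2

def step_error(h):
    return np.linalg.norm(fourth_order(A, B, h) - expm_herm(A + B, h))

# A symmetric fourth-order scheme has local error O(h^5): halving the
# step should shrink the one-step error by roughly a factor of 32.
ratio = step_error(0.1) / step_error(0.05)
```

The observed ratio close to 2^5 confirms the fourth-order accuracy of the composition.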
Appendix B Choices for the METTS collapse
In section III.1, we reviewed the METTS algorithm for a fixed orthonormal basis used in the projective measurements to generate a new product state from the current METTS. In order to ensure ergodicity and reduce autocorrelation times, it is useful to switch between different measurement bases during the sampling Stoudenmire2010-12 . This is possible because it simply corresponds to using an overcomplete basis in the representation of the trace in Eq. (7), i.e., to starting instead from the expression
where the measurement bases are chosen according to a set of probabilities that sum to one. The rest of the derivation remains unchanged, except that the detailed balance condition is extended to the choice of the measurement bases.
In our simulations, we found the following procedure to be robust and efficient. Before every collapse of the wavefunction, we randomly determine new measurement bases for the site Hilbert spaces. For every site, the local basis states are obtained by first drawing random states whose components have real and imaginary parts drawn from the standard normal distribution. Using the Gram-Schmidt process, these states are subsequently orthonormalized.
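For a local Hilbert-space dimension d, this construction can be sketched as follows; the explicit Gram-Schmidt loop is a minimal illustration of the procedure described above.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent complex vectors."""
    basis = []
    for v in vectors:
        for b in basis:
            v = v - np.vdot(b, v) * b   # remove the component along b
        basis.append(v / np.linalg.norm(v))
    return basis

# Draw d random local states with Gaussian real and imaginary parts,
# then orthonormalize them to obtain a random measurement basis.
rng = np.random.default_rng(3)
d = 3
raw = [rng.standard_normal(d) + 1j * rng.standard_normal(d) for _ in range(d)]
basis = gram_schmidt(raw)
```

The resulting vectors form an orthonormal basis of the d-dimensional site Hilbert space, suitable for the projective collapse.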
Appendix C Quantification of computation costs
The main contribution of this work is the comparison of the efficiencies of the METTS and matrix product purification algorithms. In order to make this comparison as independent as possible of specific implementation details and of the computer architecture, we approximate the computation costs associated with the imaginary- and real-time tDMRG evolutions by integrals over a function of the bond dimensions, which were described in detail in appendix A. As the elementary steps in the employed DMRG algorithms are matrix multiplications and singular value decompositions of the MPS matrices [Eq. (22)], the computation cost scales in leading order with the third power of the bond dimensions. However, small bond dimensions can lead to substantial subleading contributions, corresponding to the copying of matrices etc. In a realistic assessment of the actual computation cost, these need to be taken into account, especially as optimal METTS bond dimensions can be quite small in some cases. Therefore, we model the computation cost c(M) associated with a single local tDMRG step as a function of the bond dimension M by
c(M) = a M^3 + b,
with the cubic term capturing the leading cost of the linear algebra operations and the constant b capturing the subleading overhead.
As shown in Fig. 10, a fit to actually measured computation times yields the two coefficients of this cost model. While their exact values depend to some extent on the particular implementation at hand and the employed computer architecture, deviations of the coefficients do not decisively change the results of our study. In essence, the coefficients only depend on the cost scaling of the employed linear algebra routines, which are implemented in the highly optimized LAPACK library LAPACK1999 for the given computer architecture.
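Such a fit can be sketched as a linear least-squares problem in the two coefficients. The model form c(M) = a*M**3 + b and the synthetic timings below are assumptions for illustration; the actually measured times and fitted values are those of Fig. 10.

```python
import numpy as np

# Hypothetical fit of a cost model c(M) = a*M**3 + b to measured
# per-step times.  The synthetic "measurements" are generated from
# known coefficients so the fit can be verified.
M = np.array([4, 8, 16, 32, 64, 128, 256], dtype=float)
t_measured = 2e-9 * M**3 + 5e-6        # synthetic timings (seconds)

# Linear least squares in the coefficients (a, b):
design = np.column_stack([M**3, np.ones_like(M)])
(a, b), *_ = np.linalg.lstsq(design, t_measured, rcond=None)
```

Since the model is linear in (a, b), an ordinary least-squares solve recovers both coefficients; with real timing data one would simply replace t_measured by the measured step times.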
Based on this, we model the computation costs for an imaginary-time evolution up to inverse temperature \beta and a subsequent real-time evolution up to time t, as they occur in the different algorithms (see sections II, III.1, and V), by
cost(\beta, t) = \sum_n \left[ \int_0^{\beta} d\beta' \, c\big(M_n(\beta', 0)\big) + \int_0^{t} dt' \, c\big(M_n(\beta, t')\big) \right],
where n runs over all bonds of the lattice. For schemes that require a certain sequence of imaginary- and real-time evolutions, we sum the corresponding contributions to quantify the total cost. For METTS, we use the average computation cost per METTS sample and, for comparisons, multiply it by the considered number of samples.
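Numerically, such a cost integral can be evaluated with a simple trapezoidal rule over the recorded bond dimensions. The cost-model coefficients and the bond-dimension profile below are illustrative assumptions:

```python
import numpy as np

def step_cost(M, a=1.0, b=30.0):
    # assumed per-step cost model: cubic leading term plus constant overhead
    return a * M**3 + b

def evolution_cost(grid, M_of_grid):
    """Trapezoidal estimate of the integrated cost of one evolution,
    given the bond dimension recorded on a grid of beta (or t) values."""
    c = step_cost(np.asarray(M_of_grid, dtype=float))
    dg = np.diff(np.asarray(grid, dtype=float))
    return np.sum(0.5 * (c[1:] + c[:-1]) * dg)

# Example: a bond dimension growing linearly during an imaginary-time
# evolution from beta = 0 to beta = 2.
beta = np.linspace(0.0, 2.0, 101)
M = 10 + 20 * beta
total = evolution_cost(beta, M)
```

For a full scheme, one would evaluate this for each bond and each imaginary- and real-time segment and sum the contributions, as described above.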
Figure 11 shows that the theoretical computation cost (27) provides a realistic description of the actual DMRG computation cost for all simulations, independent of the specific system parameters. Over several orders of magnitude, the agreement is very good; only for extremely short computation times do slight deviations occur. This additional overhead is mainly due to the computation time needed for the collapse procedure and the evaluation of observables.