Expectations from the Liouville–von Neumann Equation Using Chebyshev Expansion
Abstract
We consider a natural dimension reduction technique for the Liouville–von Neumann equation for a mixed quantum system, based on the evaluation of a trace formula combined with a direct expansion in modified Chebyshev polynomials. This reduction is highly efficient and does not destroy any information. We demonstrate the practical application of the scheme with a model problem and compare it with popular alternatives. The method can be applied to autonomous quantum problems where the desired outcome of quantum simulation is the expectation of an observable.
I. The Liouville–von Neumann Equation
An ensemble of quantum systems each described by a wave function may be expressed in terms of a density operator defined by:

(1) \rho(t) = \sum_k w_k\, |\psi_k(t)\rangle\langle\psi_k(t)|, \qquad \sum_k w_k = 1.
The temporal evolution of this quantity is characterized by the Liouville–von Neumann equation:

(2) \frac{d\rho}{dt} = -i\,[H, \rho],
where we have set \hbar = 1. If \rho is expanded using a (finite) approximate basis set, (2) may be viewed as an ordinary differential equation in a matrix argument. In the time-independent case the solution of (2) may be written:

(3) \rho(t) = e^{-iHt}\, \rho(0)\, e^{iHt}.
It is possible and sometimes preferable to rewrite (3) by introduction of the Liouvillian L = I \otimes H - H^T \otimes I, where I is the identity matrix, allowing us to recast \rho as a vector \tilde\rho = \mathrm{vec}(\rho):

(4) \tilde\rho(t) = e^{-iLt}\, \tilde\rho(0).
The main issue is then to evaluate the exponential of the matrix -iLt. In the typical case the size of L grows exponentially in the number of particles; at the same time, for a large system and using a typical basis set, L (and H) will be a structured sparse matrix.
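For a column-stacking vec convention, vec(AXB) = (Bᵀ⊗A)vec(X), which yields the Kronecker form of the Liouvillian above. A minimal numpy sketch (the small Hermitian matrix here is an arbitrary illustration, not the paper's NMR model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary small Hermitian "Hamiltonian" (illustration only).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2
n = H.shape[0]
I = np.eye(n)

# Column-stacking vec obeys vec(A X B) = (B^T kron A) vec(X), hence
# vec(H rho - rho H) = (I kron H - H^T kron I) vec(rho) =: L vec(rho).
L = np.kron(I, H) - np.kron(H.T, I)

rho = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lhs = L @ rho.reshape(-1, order="F")                 # order="F" stacks columns
rhs = (H @ rho - rho @ H).reshape(-1, order="F")
assert np.allclose(lhs, rhs)
```

Note that L built this way is Hermitian whenever H is, which is what the Lanczos-based steps below rely on.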
In the past decades many methods have been developed for numerical evaluation of the matrix exponential (1). However, for large sparse matrices the methods usually applied are those based on expansion in a Krylov subspace (2); (3); (4). The main idea is to project (4) onto the subspace:

(5) \mathcal{K}_m(L, \tilde\rho(0)) = \mathrm{span}\{\tilde\rho(0),\, L\tilde\rho(0),\, \dots,\, L^{m-1}\tilde\rho(0)\}.
To get a suitable basis set of (5) we may use the Lanczos algorithm (5), as L is Hermitian. The Lanczos method is iterative, a very desirable feature in the context of large sparse matrices; for specific details see Ref. (6) and Ref. (7). However, because all these methods propagate the vectorized density matrix itself, they require matrix (or matrix–vector) operations at each step of the calculation.
While the evolution of the system is described by the density matrix, the outputs we are interested in obtaining from quantum simulations are the expectations of observables, these being the only quantities we can compare with experiments. In the density matrix formalism the expectation value of an observable, associated with an operator O, is written as:

(6) \langle O \rangle(t) = \mathrm{Tr}\big( O\, \rho(t) \big).
Whereas in general quantum simulations the equation of motion is solved for a quantity, \rho, which has the dimension of a matrix, the types of outputs we are generally interested in are just one-dimensional objects (6). In this paper we exploit this fact and design an algorithm that computes (almost) directly the evolution of the expectation value (6), instead of the evolution of the density matrix (3). This approach does not lose any information of the original system (the only errors arise due to truncation), but at the same time the method provides a powerful computational tool, with a potentially dramatic reduction in the computational cost, especially when dealing with large matrices. The main idea of this approach is to exploit features of a Chebyshev expansion for the matrix exponential in (4). It is important to remark that Chebyshev polynomials have been widely used to obtain polynomial approximations of functions of matrices, and especially of the matrix exponential in (3) (8). The terms of the Chebyshev expansion can be constructed iteratively, with the price being a sequence of matrix–matrix multiplications.
Several methods to evaluate (4) are discussed in a recent monograph, see Ref. (9); however our proposed Direct evaluation of the Expectation values via Chebyshev polynomials (DEC) method is different because it does not solve the evolution for the density matrix. Instead it exploits the trace evaluation in (6), and with the evaluation of just one Chebyshev expansion it allows the solution of (6) at any time. DEC can be extremely powerful when we are only interested in the expectation values; if instead it is necessary to evaluate the evolution of the density matrix itself (4), then a traditional approach might be the best choice.
II. The Direct Expectation values via Chebyshev
The preliminary step of our method is to rescale the matrix L within the interval [-1, 1], as outside this interval the Chebyshev polynomials grow rapidly and the expansion becomes unstable; to do that we need to evaluate the two extremes of the spectrum of L. In order to obtain the extreme values we propose, as already mentioned in the literature (10), to perform a few steps of Lanczos iteration, as this provides a good approximation for the extreme eigenvalues at small computational cost. If we define these two values as \lambda_{min} and \lambda_{max}, i.e. \lambda_{min} \le \lambda \le \lambda_{max}, we may rewrite L as L = \alpha \tilde L + \beta I, where \tilde L has spectrum in [-1, 1], \alpha = (\lambda_{max} - \lambda_{min})/2, and \beta = (\lambda_{max} + \lambda_{min})/2. We may then expand the exponential of \tilde L in the Chebyshev polynomials and we arrive at the following equation for e^{-iLt}:
(7) e^{-iLt} \approx e^{-i\beta t} \sum_{k=0}^{m} c_k\, (-i)^k J_k(\alpha t)\, T_k(\tilde L),

with c_0 = 1 and c_k = 2 for k \ge 1. Both the coefficients C_k(t) = c_k (-i)^k J_k(\alpha t) and the matrices T_k(\tilde L) can be calculated iteratively:
(8a) J_{k+1}(\alpha t) = \frac{2k}{\alpha t}\, J_k(\alpha t) - J_{k-1}(\alpha t),

(8b) T_{k+1}(\tilde L) = 2\tilde L\, T_k(\tilde L) - T_{k-1}(\tilde L),
with initial values T_0(\tilde L) = I, T_1(\tilde L) = \tilde L. J_k is the kth Bessel function of the first kind.
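On scalars, (7) reduces to the Jacobi–Anger-type expansion e^{-izx} = J_0(z) + 2 \sum_{k\ge1} (-i)^k J_k(z) T_k(x) for x \in [-1, 1]. A quick self-contained check of this identity (the Bessel values are computed by quadrature of the standard periodic integral representation, so no special-function library is assumed):

```python
import cmath
import math

def bessel_j(k, z, n=256):
    # J_k(z) = (1/2pi) * integral_0^{2pi} cos(k*t - z*sin(t)) dt;
    # the integrand is periodic, so the midpoint rule converges spectrally.
    h = 2 * math.pi / n
    return sum(math.cos(k * t - z * math.sin(t))
               for t in ((i + 0.5) * h for i in range(n))) * h / (2 * math.pi)

def cheb_exp(x, z, m=30):
    # e^{-i z x} = J_0(z) + 2 * sum_{k>=1} (-i)^k J_k(z) T_k(x), for |x| <= 1
    t_prev, t_cur = 1.0, x                     # T_0(x), T_1(x)
    acc = bessel_j(0, z) - 2j * bessel_j(1, z) * x
    for k in range(2, m + 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev   # recursion (8b) on scalars
        acc += 2 * (-1j) ** k * bessel_j(k, z) * t_cur
    return acc

x, z = 0.3, 2.0
approx = cheb_exp(x, z)
exact = cmath.exp(-1j * z * x)
assert abs(approx - exact) < 1e-10
```

The same three-term recursion drives the matrix version, with T_k(x) replaced by T_k(\tilde L) acting on a vector.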
If we insert (7) into (6) we find:

(9) \langle O \rangle(t) = \mathrm{Tr}\Big( O\, e^{-i\beta t} \sum_{k=0}^{m} C_k(t)\, T_k(\tilde L)\, \tilde\rho(0) \Big).
By exploiting the linearity of the trace operation we can pull all the time-dependent parts out of the trace, and evaluate once and for all the coefficients a_k. In fact we may rewrite (9) as:

(10) \langle O \rangle(t) = e^{-i\beta t} \sum_{k=0}^{m} C_k(t)\, a_k, \qquad a_k = \mathrm{Tr}\big( O\, T_k(\tilde L)\, \tilde\rho(0) \big).
This is the key equation of the DEC method, as it is possible to store an array of scalar values a_k. All the time-dependent terms are just scalar values that have to be multiplied by the a_k to get the evolution of \langle O \rangle at any time t:

(11) \langle O \rangle(t) = e^{-i\beta t} \sum_{k=0}^{m} c_k\, (-i)^k J_k(\alpha t)\, a_k.
If more than one observable is required it is still possible to use DEC. The only difference from the single-expectation case is that we need to store a different set of a_k for each operator.
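A compact numpy sketch of the scheme as reconstructed above (small dense matrices, exact spectrum instead of Lanczos estimates, a made-up Hamiltonian and observable, and quadrature-based Bessel values; this is illustrative, not the authors' code):

```python
import math
import numpy as np

def bessel_j(k, z, n=256):
    # J_k(z) via the periodic integral (1/2pi) * int_0^{2pi} cos(k*t - z*sin(t)) dt.
    ts = (np.arange(n) + 0.5) * (2 * math.pi / n)
    return np.cos(k * ts - z * np.sin(ts)).mean()

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                              # stand-in Hermitian Hamiltonian
L = np.kron(np.eye(n), H) - np.kron(H.T, np.eye(n))   # Liouvillian (Hermitian)

# Rescale L to have spectrum in [-1, 1] (exact extremes here; Lanczos in practice).
lam = np.linalg.eigvalsh(L)
alpha, beta = (lam[-1] - lam[0]) / 2, (lam[-1] + lam[0]) / 2
Lt = (L - beta * np.eye(n * n)) / alpha

B = rng.standard_normal((n, n)); O = (B + B.T) / 2     # observable
rho0 = np.diag(rng.random(n)); rho0 /= np.trace(rho0)  # initial density matrix
v = rho0.reshape(-1, order="F").astype(complex)        # vec(rho0)

# Precompute a_k = Tr(O @ mat(T_k(Lt) v)) once, via the vector recursion (8b).
m = 60
mat = lambda w: w.reshape(n, n, order="F")
w_prev, w_cur = v, Lt @ v
a = [np.trace(O @ mat(w_prev)), np.trace(O @ mat(w_cur))]
for _ in range(2, m + 1):
    w_prev, w_cur = w_cur, 2 * (Lt @ w_cur) - w_prev
    a.append(np.trace(O @ mat(w_cur)))

def dec_expectation(t):
    # <O>(t) = e^{-i beta t} sum_k c_k (-i)^k J_k(alpha t) a_k   (eq. (11))
    s = sum((1 if k == 0 else 2) * (-1j) ** k * bessel_j(k, alpha * t) * a[k]
            for k in range(m + 1))
    return (np.exp(-1j * beta * t) * s).real

def exact_expectation(t):
    # Reference: rho(t) = e^{-iHt} rho0 e^{iHt} via eigendecomposition of H.
    w, U = np.linalg.eigh(H)
    Ut = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T
    return np.trace(O @ Ut @ rho0 @ Ut.conj().T).real

for t in (0.5, 1.0, 2.0):
    assert abs(dec_expectation(t) - exact_expectation(t)) < 1e-8
```

Once the a_k are stored, evaluating the expectation at a new time costs only scalar work, which is the point of the method.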
II.1 Stopping Criterion
The number of terms m for the polynomial expansion in (7) depends on a prescribed tolerance \epsilon, and on the time t. In other methods based on Chebyshev approximation (16), the following has been suggested as a stopping criterion:

(12) |J_m(\alpha t)| < \epsilon.
Due to the zeros of the Bessel function J_m, at fixed time t, (12) may hold for some m even though the expansion has not yet reached the convergence regime; it may happen that for some m' > m we have |J_{m'}(\alpha t)| > \epsilon. To avoid this effect it is enough to use as a stopping criterion a combination of two Bessel functions; the cost of such a stopping criterion is that at most we need to perform an extra iteration step (8a). In our numerical tests we have used the following:

(13) |J_m(\alpha t)| + |J_{m+1}(\alpha t)| < \epsilon.
The total time plays a role here, since the larger \alpha t is, the more terms will be needed to get below the threshold \epsilon.
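A small helper implementing the two-term criterion (13) (Bessel values again by quadrature; the function names are ours):

```python
import math

def bessel_j(k, z, n=512):
    # J_k(z) = (1/2pi) * int_0^{2pi} cos(k*t - z*sin(t)) dt, midpoint rule.
    h = 2 * math.pi / n
    return sum(math.cos(k * t - z * math.sin(t))
               for t in ((i + 0.5) * h for i in range(n))) * h / (2 * math.pi)

def chebyshev_terms(z, eps):
    # Smallest m with |J_m(z)| + |J_{m+1}(z)| < eps, i.e. criterion (13).
    # Using two consecutive orders avoids a false stop at a zero of J_m.
    m = 0
    while abs(bessel_j(m, z)) + abs(bessel_j(m + 1, z)) >= eps:
        m += 1
    return m

m = chebyshev_terms(10.0, 1e-10)
# Past the turning point m ~ z the magnitudes decay superlinearly, so m
# lands a modest distance beyond z = alpha*t.
assert m > 10
assert abs(bessel_j(m, 10.0)) + abs(bessel_j(m + 1, 10.0)) < 1e-10
```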
II.2 Computation of the Expansion
In order to optimise the number of terms we evaluate, but without having to check at each step whether we have already evaluated enough terms, we propose to determine the number of terms first, at the final time T, and to store the values of the a_k.
From equation (8a) it is clear that C_k depends on the Bessel functions. If we look at the asymptotic behavior of the Bessel function of the first kind, for fixed argument z we have (11):

(14) J_\nu(z) \sim \frac{1}{\sqrt{2\pi\nu}} \left( \frac{e\, z}{2\nu} \right)^{\nu}, \qquad \nu \to \infty,

where e is the Euler number, so that J_\nu(z) \to 0 as \nu \to \infty. Equation (14) shows that for any \nu, in a neighbourhood of z = 0, J_\nu(z) is increasing monotonically with respect to z. This behavior is maintained on the whole interval [0, j'_{\nu,1}], where j'_{\nu,1} is the first zero of the derivative of J_\nu. It is possible to show (see Ref. (11)) that j'_{\nu,1} > \nu; consequently, if (13) holds for a given m at t = T with \alpha T \le m, then J_m and J_{m+1} are still in their monotonically increasing region, and equation (13) holds also for any t \le T.
The Bessel functions of the first kind of integer order may be evaluated directly by using a three-term recurrence relation:

(15) J_{k+1}(z) = \frac{2k}{z}\, J_k(z) - J_{k-1}(z).
It is well known that (15) becomes numerically unstable for orders larger than the argument, see Ref. (12). To improve the method we may exploit the linear nature of the iterative algorithm. It is possible to use Miller's algorithm, i.e. to solve an inverted form of (15), iterating downward in the order (12). When using Miller's algorithm it is suggested to expand the number of terms (providing a sort of buffer), i.e. to start the backward iteration process from m + s, where m is the actual order of the function we are interested in and s is some small expansion. In this case we need to know already, from an a priori error analysis, how many iterations need to be performed to get below the threshold \epsilon.
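A minimal sketch of Miller's downward recurrence (the buffer size and the normalization via the identity J_0(z) + 2 \sum_{k \ge 1} J_{2k}(z) = 1 are standard; variable names are ours):

```python
def bessel_miller(m, z, buffer=20):
    """Return [J_0(z), ..., J_m(z)] by downward recurrence (Miller's algorithm)."""
    top = m + buffer + (m + buffer) % 2     # even starting order
    jp, j = 0.0, 1e-30                      # arbitrary tiny seed values at the top
    out = [0.0] * (m + 1)
    norm = 0.0
    for k in range(top, 0, -1):
        jp, j = j, (2 * k / z) * j - jp     # downward: J_{k-1} = (2k/z) J_k - J_{k+1}
        if k - 1 <= m:
            out[k - 1] = j
        if (k - 1) % 2 == 0:
            norm += 2 * j                   # accumulate 2 * J_even
    norm -= j                               # J_0 was counted twice above
    return [v / norm for v in out]          # rescale so J_0 + 2*sum J_2k = 1

J = bessel_miller(5, 1.0)
assert abs(J[0] - 0.7651976866) < 1e-9      # J_0(1)
assert abs(J[1] - 0.4400505857) < 1e-9      # J_1(1)
```

The downward direction is stable because the wanted solution J_k dominates the unwanted one as the order decreases, which is exactly what the forward recurrence (15) lacks.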
It is possible to prove that for the rescaled Hermitian matrix \tilde L, when applied to a vector v of unit Euclidean norm, we have (9):

(16) \Big\| e^{-i\tilde L t}\, v - \sum_{k=0}^{m} C_k(t)\, T_k(\tilde L)\, v \Big\|_2 \le 4 \left( \frac{e\, |t|}{2m} \right)^{m}, \qquad m \ge |t|,

where m is the order of the expansion in Chebyshev polynomials. This equation indicates that there is a superlinear decay of the error once m > |t|. The formula may be derived by examining the asymptotic behavior of the Bessel functions (14). We may then use this relation to estimate a priori the number of terms required for a given accuracy.
II.3 Efficient Implementation
The cost of DEC is all in the first step. Note that the cost of the evaluation of any T_k(\tilde L) itself is roughly equivalent to that of a matrix–matrix multiplication, as per the iteration (8b). But what is actually needed in all our calculations is the product T_k(\tilde L)\,\tilde\rho(0). Because of the linearity of the iterative expression, we may multiply T_0 and T_1 by \tilde\rho(0) and then use (8b) directly on the vectors w_k = T_k(\tilde L)\,\tilde\rho(0). The iterated operation is then just a matrix–vector multiplication.
III. Numerical Experiments
Nuclear Magnetic Resonance (NMR) is a spectroscopy technique that exploits the interaction between nuclear spins and electromagnetic fields in order to analyse samples. The temporal evolution of such a system is described via a density matrix whose size grows exponentially with the number of nuclei. This exponential growth impedes the use of simulations when dealing with systems involving more than a few spins. Many attempts have been made to overcome this (see e.g. Ref. (15), Ref. (17) for recent approaches), even using Chebyshev polynomials (16). These algorithms have been developed to simulate both liquid systems, where the Hamiltonian is generally time independent, and Solid-State NMR. In the latter case the Hamiltonian is time dependent because the anisotropic interactions do not average out during the motion of the sample. When the Hamiltonian is time dependent it is not possible to apply DEC, as it is no longer possible to isolate the time-dependent part outside the trace.
As in many other physical systems, nuclear spin dynamics provides a perfect example to test DEC, because the final outcome of the simulations is an observable, the free induction decay (FID) signal, and this result is the sole important quantity, as it is the only data available from experiment.
We have used DEC to evaluate this quantity:

(18) s(t) = \mathrm{Tr}\big( S^{+}\, \rho(t) \big).
S^{+} can be written as a combination of Pauli matrices, and is the shift-up operator: S^{+} = S_x + i S_y. As Hamiltonian we assumed a sum of isotropic chemical shifts and the isotropic term of a pair interaction called the homonuclear J-coupling, which depends on the inner product \mathbf{S}_j \cdot \mathbf{S}_k (13):

(19) H = \sum_j \omega_j S_{z,j} + \sum_{j<k} 2\pi J_{jk}\, \mathbf{S}_j \cdot \mathbf{S}_k.
For the initial density matrix we set \rho(0) \propto \sum_j S_{x,j}, the result of the application of a so-called \pi/2 pulse to a sample already under the effect of a strong constant magnetic field along the z direction (13). This is the usual initial condition when the acquisition of the signal starts.
An illustration of the structure of the Liouvillian matrix is presented in Fig.2. The sparsity depends on the number of interactions among the spins. In most cases the J–coupling interaction matrix is relatively sparse. In our numerical simulations each spin interacts with about half of the other spins.
Due to the fact that our implementation involves only matrix–vector multiplication, techniques developed both for structured and unstructured sparse matrices may be exploited.
For comparison of computational costs we tested this method with an increasing number of spin particles using different methods to evaluate the exponential. In particular, to examine the error we compared DEC with the expm function of MATLAB, which uses a scaling and squaring algorithm with Padé approximation. In this way we evaluate once and for all E = e^{-iL\Delta t}, where \Delta t is the step size of the simulation, and then at each time step we propagate \tilde\rho:

(20) \tilde\rho(t_{n+1}) = E\, \tilde\rho(t_n).
It is well known that in terms of computational costs this simplistic approach performs poorly, so we considered two more realistic model reduction algorithms, both based on a Krylov subspace expansion.
III.1 First Alternate Method: Lanczos Iteration
The first one is well known in the literature (2); (3). It evaluates, through a Lanczos algorithm, an orthonormal basis of the Krylov subspace (5).
An approximation for e^{-iLt}\,\tilde\rho(0) is:

(21) e^{-iLt}\, \tilde\rho(0) \approx \beta\, V_m\, e^{-i T_m t}\, e_1,
where \beta = \|\tilde\rho(0)\|_2, and V_m and T_m come from the Lanczos algorithm. The Lanczos algorithm provides an orthonormal basis set for the Krylov subspace via a three-term recursion (5); (7):

(22) L\, V_m = V_m T_m + \beta_{m+1}\, v_{m+1}\, e_m^T.
T_m is a tridiagonal matrix of size m \times m, and e_1 is the first vector of the canonical basis of size m. This technique is very powerful for short-time simulations, because with few iterations it is possible to obtain remarkably good approximations; for longer times, however, larger Krylov subspaces are needed to stay close to the true solution. If we do not consider enough terms in the Lanczos algorithm, (21) is no longer a reliable approximation at longer times.
It is possible to set a stopping criterion for the Lanczos iterations (3); for a given \epsilon we can find m such that:

(23) \beta\, \beta_{m+1}\, \big| e_m^T\, e^{-i T_m t}\, e_1 \big| < \epsilon.
To assure a good approximation throughout the whole simulation we set m to verify (23) for the total time T of the simulation. In this way we have the certainty that the Krylov approximation error is below \epsilon for all times t \le T. Equation (23) involves the evaluation of the exponential of a tridiagonal matrix at each step of the Lanczos method; when m is large this operation may become a serious bottleneck for the whole simulation. In our numerical tests we found that the required m grows with the size of the Liouvillian; for this reason, and because we were more interested in cost comparison than in error analysis, we fixed m a priori rather than applying (23), although (23) is the proper way to stop the iterations.
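A compact sketch of the Krylov approximation (21) (dense numpy, small sizes; full reorthogonalization is added for numerical safety, which a large-scale implementation would avoid):

```python
import numpy as np

def lanczos_expv(L, v, t, m):
    """Approximate exp(-1j*t*L) @ v for Hermitian L from an m-step Lanczos basis."""
    n = len(v)
    beta0 = np.linalg.norm(v)
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    V[:, 0] = v / beta0
    for j in range(m):
        w = L @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        # Full reorthogonalization: cheap here, avoids loss of orthogonality.
        w = w - V[:, : j + 1] @ (V[:, : j + 1].conj().T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    lam, Q = np.linalg.eigh(T)
    y = Q @ (np.exp(-1j * lam * t) * Q[0, :])   # y = e^{-i T t} e_1
    return beta0 * (V @ y)                      # eq. (21)

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 50))
Lh = (A + A.T) / 2
v = rng.standard_normal(50)
approx = lanczos_expv(Lh, v, t=1.0, m=30)

w0, U = np.linalg.eigh(Lh)
exact = U @ (np.exp(-1j * w0 * 1.0) * (U.T @ v))
assert np.linalg.norm(approx - exact) < 1e-6 * np.linalg.norm(exact)
```

Only the small m-by-m exponential is formed explicitly; the large matrix enters through matrix–vector products alone.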
III.2 Second Alternate Method: Zero Track Elimination
The second method considered for comparison is a technique recently developed especially for NMR simulation called "Zero Track Elimination" (ZTE) (17). This technique is based on the idea of pruning from \tilde\rho the elements that do not take part in the evolution. The full system is evolved for a few initial steps; we monitor the elements of \tilde\rho that stay below a chosen threshold during these first evolution steps and introduce structural zeros based on these observations. The evolution is then performed in the resulting reduced state space.
The idea is extremely appealing: once the propagator for the reduced space is evaluated, all the subsequent steps have the cost of a reduced matrix–vector product, and it is possible to use (20) for the reduced system. It is claimed (17) that the error of such an approximation is similar to what would be obtained by neglecting, in the Krylov expansion, the contributions coming from high powers of the Liouvillian. There are however some drawbacks:

- for this method there is no available convergence theory;
- the performance depends strongly on the initial condition \tilde\rho(0) and on the Hamiltonian. As expected, in our tests the size of the reduced system changed considerably depending on the number of interacting spins.
III.3 Summary of Results
The error-to-cost (measured in CPU time) diagrams are shown in Figure 3 for all the methods described. It is clear in this example that DEC is more than an order of magnitude more efficient than the alternatives. Notably, even when the alternative methods are over-reduced to slash the computational costs, paying a price in terms of error, DEC still maintains the error below the expected tolerance. To avoid instabilities coming from the evaluation of the Bessel functions we set the tolerance conservatively in these numerical tests.
DEC performs at its best for short-time simulations (i.e. when the total time T is small), so that we do not need to evaluate a large number of terms, and when at the same time the use of a small time step is required, as the cost of any step after the first is negligible. For all the other methods the cost of a simulation grows in proportion to the number of steps, so halving the step size doubles the cost; for DEC, by contrast, refining the time grid is essentially free, and there is a gain of an order of magnitude, see Fig. 4.
full size   expm     Lanczos   (reduced)   ZTE      (reduced)   DEC
16          0.12     0.15      4           0.14     8           1.99
64          0.14     0.16      8           0.16     24          2.00
256         0.31     0.20      25          0.20     64          3.17
1024        4.27     0.93      102         0.56     160         4.91
4096        191.4    12.66     409         4.43     432         7.15
16384       --       309.02    1638        298.57   3296        22.34
65536       --       --        --          --       --          138.14
262144      --       --        --          --       --          1064.7

CPU times for the different methods as the size of the full Liouvillian grows; the '(reduced)' columns report the size of the reduced system for the Lanczos and ZTE methods.
IV. Conclusion
In this article we have presented a new method for the simulation of observables in a mixed quantum system. By expanding the exponential of the Liouvillian in Chebyshev polynomials, and exploiting the trace operation performed when evaluating the expectation value of an observable, it is possible to reduce the evolution of any observable to a one-dimensional function that can be evaluated directly.
We also presented an optimal algorithm to perform such a calculation, and showed how this new method can easily compete in terms of computational cost with a variety of model reduction approaches, whilst maintaining the approximation errors below a chosen threshold.
G.M. is very grateful to Arieh Iserles for useful suggestions at the start of this work.
Appendix A The DEC algorithm
We provide here a detailed description of the algorithm.
inputs: Hermitian matrix L, vector \tilde\rho(0), observable O, total time T, tolerance \epsilon, time step \Delta t
outputs: expectation value \langle O \rangle evaluated at t_n = n\Delta t.

1. evaluate \lambda_{min}, \lambda_{max} via Lanczos, s.t. \lambda_{min} \le \lambda \le \lambda_{max};
2. scale L and get \tilde L = (L - \beta I)/\alpha;
3. evaluate w_0 = \tilde\rho(0), w_1 = \tilde L\, \tilde\rho(0);
4. while |J_k(\alpha T)| + |J_{k+1}(\alpha T)| \ge \epsilon:
      w_{k+1} = 2\tilde L\, w_k - w_{k-1};
      store a_k = \mathrm{Tr}(O\, w_k);
   end
5. for each time t_n:
      re-evaluate the coefficients C_k(t_n);
      \langle O \rangle(t_n) = e^{-i\beta t_n} \sum_k C_k(t_n)\, a_k;
   end
References
 C. Moler and C. Van Loan, SIAM Review, 45(1), 3–49 (2003).
 Y. Saad, SIAM J. Num. Anal., 29(1), 209–228 (1992).
 M. Hochbruck and C. Lubich, SIAM J. Num. Anal. 34(5), 1911–1925 (1997).
 J.C. Schulze, P.J. Schmid and J.L. Sesterhenn, Int. J. Numer. Meth. Fluids, 60, 591–609 (2009).
 C. Lanczos, J. Res. Nat. Bur. Standards, 45, 255–282 (1950).
 G.H. Golub and C.F. Van Loan, Matrix Computations, (The Johns Hopkins University Press, Baltimore, 1996).
 J. Cullum and R.A. Willoughby, Lanczos Algorithms for large symmetric eigenvalue computations, (Birkhäuser, Boston, 1985).
 H. Tal–Ezer and R. Kosloff, J. Chem. Phys. 81, 3967–3971 (1984).
 C. Lubich, From Quantum to Classical Molecular Dynamics: Reduced Models and Numerical Analysis, (Zurich Lectures in Advanced Mathematics, Zurich, 2008).
 Y. Zhou, Y. Saad, M.L. Tiago and J.R. Chelikowsky, J. Comp. Phys., 219(1), 172–184, (2006).
 M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, (Dover, New York, 1964).
 F.W.J. Olver and D.J. Sookne, Math. of Comp., 26(120), 941–947, (1972).
 M.H. Levitt, Spin Dynamics, (Wiley, Chichester, 2001).
 I. Kuprov, J. Magn. Reson., 195, 45–51, (2008).
 M. Bak, J.T. Rasmussen and N.C. Nielsen, J. Magn. Reson. 147, 296–330, (2000).
 M. Veshtort and R.G. Griffin, J. Magn. Reson., 178, 248–282, (2006).
 I. Kuprov, J. Magn. Reson., 195, 45–51, (2008).