The accurate numerical solution of the Schrödinger equation with an explicitly time-dependent Hamiltonian
We show how the highly accurate and efficient Constant Perturbation (CP) technique for steady-state Schrödinger problems can be used in the solution of time-dependent Schrödinger problems with explicitly time-dependent Hamiltonians, following a technique suggested by Ixaru [1]. By introducing a sectorwise spatial discretization using bases of accurately CP-computed eigenfunctions of carefully chosen stationary problems, we deal with the possibly highly oscillatory behaviour of the wave function while keeping the dimension of the resulting ODE system low. A very effective CP-based approach can also be used for the time integration of the ODE system.
The constant perturbation (CP) methods form a class of numerical methods which were originally devised for the solution of the time-independent one-dimensional Schrödinger equation. The first ideas were described in the seventies (see [2] for references), but the CP methods still prove to be very efficient not only for the one-dimensional time-independent Schrödinger equation [3] but also for Sturm-Liouville problems [4, 5], coupled-channel Schrödinger equations [6, 7] and two-dimensional Schrödinger problems [1]. The CP approach is very closely related to the application of a modified Neumann method [10]. Modified Neumann methods, as well as CP methods, are very effective for highly oscillatory ordinary differential equations. In the current paper we extend these techniques to the solution of time-dependent Schrödinger problems involving explicitly time-dependent Hamiltonians.
We consider the time-dependent Schrödinger equation (in units where $\hbar = 1$):
$$ i\,\frac{\partial \psi(x,t)}{\partial t} = H(x,t)\,\psi(x,t), \qquad \psi(x,t_0) = \psi_0(x), \qquad (1) $$
with the one-dimensional time-dependent Hamiltonian
$$ H(x,t) = -\frac{1}{2\mu}\,\frac{\partial^2}{\partial x^2} + V(x,t), $$
where $\psi_0(x)$ is the wave function at the initial time $t_0$ and $\mu$ is the reduced mass.
Standard techniques found in the literature reduce (1) to the solution of a linear ordinary differential equation in $t$ of the form
$$ i\,\frac{d\mathbf{c}(t)}{dt} = \mathbf{H}(t)\,\mathbf{c}(t), \qquad (2) $$
where $\mathbf{c}(t)$ is a column vector with complex components $c_i(t)$ and $\mathbf{H}(t)$ is a Hermitian matrix associated with the Hamiltonian. Equation (2) arises when the spatial variable is discretized; the entries of $\mathbf{c}(t)$ are then values of the wave function at the nodes of the spatial grid. Equation (2) can also result from a spectral decomposition in which the solution is expanded over eigenfunctions of a time-independent Hamiltonian, e.g. a harmonic oscillator, and the $c_i(t)$ represent the coefficients in this basis set expansion.
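As a minimal illustration of the first route (a sketch, not the discretization used in this paper), the following builds the Hermitian matrix of equation (2) by a second-order central-difference discretization on a uniform grid with Dirichlet boundary conditions; the sample potential and all names are our own illustrative choices.

```python
import numpy as np

def hamiltonian_matrix(V, t, a=-10.0, b=10.0, n=200, mu=1.0):
    """Central-difference discretization of H(x,t) = -1/(2 mu) d^2/dx^2 + V(x,t)
    on the n interior nodes of a uniform grid over [a, b] (Dirichlet BCs)."""
    x = np.linspace(a, b, n + 2)[1:-1]     # interior grid points
    h = (b - a) / (n + 1)
    # -1/(2 mu) times the tridiagonal second-difference operator
    kinetic = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * mu * h**2)
    return x, kinetic + np.diag(V(x, t))

# illustrative potential: harmonic well plus a small time-dependent drive
V = lambda x, t: 0.5 * x**2 + 0.1 * np.sin(t) * x
x, H = hamiltonian_matrix(V, t=0.3)
# H is real symmetric (hence Hermitian), as required by equation (2)
```

Since the matrix is Hermitian at every frozen time $t$, the exact flow of (2) is unitary, which is the property the propagators discussed below try to respect.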
Instead of taking a fixed set of basis eigenfunctions of a time-independent Hamiltonian over the entire $x$-range, we will use ideas similar to those proposed in [1] to tackle 2D stationary equations in an efficient way. In [1] Ixaru introduced a sectorwise expansion of the 2D solution over eigenfunctions from conveniently tuned 1D Hamiltonians. In a similar way, we divide here the time domain into sectors and use in each sector a different time-independent Hamiltonian to form the set of eigenfunctions. By choosing in each sector a time-independent Hamiltonian which takes into account the form of the potential over this sector, the number of stationary solutions needed for the expansion can be kept low. The efficient solution of stationary Schrödinger problems thus forms a crucial part of the proposed spatial discretization. To compute the eigenvalues and the corresponding eigenfunctions, we make use of the highly accurate and efficient CP methods and codes which are available for solving time-independent Schrödinger problems. We will also use CP-based techniques for the propagation of the time-dependent solution over a sector. The CP approach allows us to tackle the oscillations which can be present both in time and space without the necessity of small steps.
2 The procedure
We take the time-dependent equation (1) and assume that the $x$-range can be restricted to a finite interval $[a,b]$ and that Dirichlet boundary conditions can be imposed at the endpoints $a$ and $b$.
First the $(x,t)$-domain is divided into a succession of rectangular sectors, as illustrated in Figure 1. The mesh points on the $t$-axis need not be equidistant. On each rectangular subdomain the original time-dependent potential function $V(x,t)$ is approximated by a time-independent function $V^{(j)}(x)$. Applying this approximation over each sector, we obtain an approximation of the original $V(x,t)$ with a staircase shape in the $t$-direction, see Figure 2.
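One natural choice for the time-independent sector potential (an assumption of this sketch, not necessarily the exact choice made in the text) is the time average of $V(x,t)$ over the sector:

```python
import numpy as np

def sector_potential(V, x, t_lo, t_hi, n_quad=5):
    """Time-independent approximation of V(x, t) on the sector [t_lo, t_hi]:
    here the time average, computed with Gauss-Legendre quadrature in t."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    ts = 0.5 * (t_hi - t_lo) * nodes + 0.5 * (t_hi + t_lo)
    ws = 0.5 * (t_hi - t_lo) * weights
    return sum(w * V(x, t) for w, t in zip(ws, ts)) / (t_hi - t_lo)

# staircase approximation of an oscillating potential over two time sectors
V = lambda x, t: 0.5 * x**2 + x * np.sin(t)
x = np.linspace(-1.0, 1.0, 11)
V0 = sector_potential(V, x, 0.0, 0.5)   # sector [0, 0.5]
V1 = sector_potential(V, x, 0.5, 1.0)   # sector [0.5, 1.0]
```

Evaluating `V0` and `V1` on the same $x$-grid makes the staircase structure of Figure 2 explicit: each sector carries its own frozen potential.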
Let us focus on the propagation of the solution over one sector, say the $j$th sector $[a,b]\times[t_j,t_{j+1}]$. Assume the solution at time $t_j$ is known, either through the initial condition when $j = 0$ or through the computation of the solution over the previous sector. First we solve the time-independent Schrödinger eigenvalue problem with potential function $V^{(j)}(x)$:
$$ -\frac{1}{2\mu}\,\frac{d^2\phi_k^{(j)}(x)}{dx^2} + V^{(j)}(x)\,\phi_k^{(j)}(x) = E_k^{(j)}\,\phi_k^{(j)}(x), \qquad \phi_k^{(j)}(a) = \phi_k^{(j)}(b) = 0, \qquad (3) $$
where the upper label $(j)$ refers to the specific sector over which we are propagating. The eigenfunctions are normalized such that
$$ \int_a^b \phi_k^{(j)}(x)\,\phi_l^{(j)}(x)\,dx = \delta_{kl}. \qquad (4) $$
The set of eigenfunctions $\{\phi_k^{(j)}\}$ forms an orthonormal basis. We write the desired solution $\psi(x,t)$ over the $j$th sector as an expansion over this set, with $t$-dependent coefficients:
$$ \psi(x,t) = \sum_k c_k^{(j)}(t)\,\phi_k^{(j)}(x). \qquad (5) $$
Substituting (5) into (1) and projecting onto $\phi_k^{(j)}$ yields
$$ i\,\frac{dc_k^{(j)}(t)}{dt} = E_k^{(j)}\,c_k^{(j)}(t) + \sum_l V_{kl}^{(j)}(t)\,c_l^{(j)}(t), \qquad (6) $$
with the coupling integrals
$$ V_{kl}^{(j)}(t) = \int_a^b \phi_k^{(j)}(x)\,\big(V(x,t) - V^{(j)}(x)\big)\,\phi_l^{(j)}(x)\,dx. \qquad (7) $$
The propagation over sector $j$ is then reduced to the solution of the linear system (6) over the interval $[t_j, t_{j+1}]$, with initial values which are derived from the data obtained on the previous sector. Continuity is imposed at $t_j$, that is
$$ \sum_k c_k^{(j)}(t_j)\,\phi_k^{(j)}(x) = \sum_l c_l^{(j-1)}(t_j)\,\phi_l^{(j-1)}(x). $$
After multiplying this equation by $\phi_k^{(j)}(x)$ and integrating over $[a,b]$, we obtain the initial conditions:
$$ c_k^{(j)}(t_j) = \sum_l O_{kl}\,c_l^{(j-1)}(t_j), \qquad (8) $$
where $O_{kl}$ measures the overlap between the eigenfunctions in sector $j$ and the ones in sector $j-1$:
$$ O_{kl} = \int_a^b \phi_k^{(j)}(x)\,\phi_l^{(j-1)}(x)\,dx. \qquad (9) $$
In practice, some upper value $K$ must be imposed for the number of eigenfunctions included in the expansion (5), and thus for the indices $k$ and $l$. This means that over each sector, $K^2$ overlap integrals (9) need to be computed to obtain the initial conditions (8), and integrals of the form (7) to obtain the matrix entries of the coupled system. The system (6) can then be written in matrix form as
$$ i\,\frac{d\mathbf{c}^{(j)}(t)}{dt} = \mathbf{H}^{(j)}(t)\,\mathbf{c}^{(j)}(t). \qquad (10) $$
3 Numerical methods
The procedure described in the previous section reduces the solution of the time-dependent problem to the solution of time-independent Schrödinger problems (3), linear ODE systems of the form (10), and the computation of the integrals (7) and (9).
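The projection step between consecutive sectors, i.e. the overlap integrals (9) and the initial conditions (8), can be sketched as follows; the sine basis stands in for the CP-computed eigenfunctions, and all names are illustrative.

```python
import numpy as np

def inner(f, g, dx):
    """Trapezoidal approximation of the integral of f*g on a uniform grid
    (the paper uses Lobatto/EF quadrature instead)."""
    h = f * g
    return dx * (0.5 * h[0] + h[1:-1].sum() + 0.5 * h[-1])

K = 4
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
# stand-in eigenfunctions: normalized particle-in-a-box sine modes
basis = [np.sqrt(2.0 / np.pi) * np.sin((k + 1) * x) for k in range(K)]

# overlap matrix O[k, l] between the bases of two consecutive sectors;
# with identical bases it must reduce to the identity
O = np.array([[inner(pk, pl, dx) for pl in basis] for pk in basis])

c_old = np.array([1.0, 0.5j, 0.0, -0.25])   # coefficients on the old sector
c_new = O @ c_old                           # initial values on the new sector
```

When the two sector potentials are close, the overlap matrix is close to the identity and the projection loses almost no information; truncating both expansions at the same $K$ is what makes the cost per sector interface $K^2$ integrals.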
3.1 The time-independent 1D Schrödinger eigenvalue problem
It is crucial to have a computationally efficient approximation technique for the solution of the time-independent Schrödinger problems (3), one which attains a uniformly high accuracy for the eigenvalues and eigenfunctions. It is well known that the oscillatory behaviour of the solutions of time-independent Schrödinger problems (3) forces naive integrators to take increasingly smaller steps. Highly accurate and efficient techniques are available in the form of a shooting method employing Constant Perturbation (CP) methods for the solution of the initial value problem. CP methods are based on "approximation of the potential": constant reference potential approximations are used, and perturbation corrections are computed from the difference between the constant reference potential and the original one.
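The core of a CP step can be sketched as follows: over one mesh interval the potential is frozen at a constant reference value, for which the initial value problem has a closed-form solution. Perturbation corrections are omitted in this zeroth-order sketch, and all names are ours.

```python
import numpy as np

def cp0_step(psi, dpsi, E, Vbar, h, mu=1.0):
    """Zeroth-order CP propagation of (psi, psi') over one interval of width h.
    With the potential frozen at Vbar, psi'' = -Z psi where Z = 2 mu (E - Vbar),
    so the transfer over the interval is known in closed form."""
    Z = 2.0 * mu * (E - Vbar)
    if Z > 0.0:                     # oscillatory region: E > Vbar
        w = np.sqrt(Z)
        c, s = np.cos(w * h), np.sin(w * h) / w
    elif Z < 0.0:                   # classically forbidden region: E < Vbar
        w = np.sqrt(-Z)
        c, s = np.cosh(w * h), np.sinh(w * h) / w
    else:
        c, s = 1.0, h
    return c * psi + s * dpsi, -Z * s * psi + c * dpsi

# with a truly constant potential the step is exact for ANY step size:
# propagate psi = sin(x) (E = 0.5, Vbar = 0, mu = 1) in one step of pi/2
psi, dpsi = cp0_step(0.0, 1.0, E=0.5, Vbar=0.0, h=np.pi / 2)
```

The step size is not limited by the oscillation frequency but only by how well the constant (plus perturbation corrections) represents the true potential, which is exactly why CP methods can take large steps on highly oscillatory eigenfunctions.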
In our numerical experiments, we used a CP method of order 12 as implemented in the Matlab package Matslise (and earlier in Fortran form) to compute accurate approximations of the eigenvalues and evaluations of the eigenfunctions and their first-order derivatives in a fixed set of meshpoints. The mesh is formed by subdividing the $x$-range into a number of equidistant steps and taking the 4 Lobatto nodes over each such step as the meshpoints. Choosing the meshpoints in this way is convenient for the fast and accurate approximation of the integrals (7) and (9) over each sector (see further).
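The mesh just described can be sketched as follows; the 4-point Lobatto nodes on the reference interval $[-1,1]$ are $\pm 1$ and $\pm 1/\sqrt{5}$, and the associated weights make the rule exact for polynomials up to degree 5 (names are ours).

```python
import numpy as np

LOBATTO4_NODES = np.array([-1.0, -1.0 / np.sqrt(5.0), 1.0 / np.sqrt(5.0), 1.0])
LOBATTO4_WEIGHTS = np.array([1.0, 5.0, 5.0, 1.0]) / 6.0   # on [-1, 1]

def lobatto4_mesh(a, b, n_steps):
    """Subdivide [a, b] into n_steps equidistant steps and place the 4 Lobatto
    nodes on each step; shared step endpoints are not duplicated."""
    edges = np.linspace(a, b, n_steps + 1)
    mesh = [a]
    for lo, hi in zip(edges[:-1], edges[1:]):
        pts = 0.5 * (hi - lo) * LOBATTO4_NODES + 0.5 * (hi + lo)
        mesh.extend(pts[1:])                  # left endpoint is already there
    return np.array(mesh)

def lobatto4_integrate(f, a, b, n_steps):
    """Composite 4-point Lobatto rule; exact for polynomials up to degree 5."""
    edges = np.linspace(a, b, n_steps + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        pts = 0.5 * (hi - lo) * LOBATTO4_NODES + 0.5 * (hi + lo)
        total += 0.5 * (hi - lo) * np.dot(LOBATTO4_WEIGHTS, f(pts))
    return total

mesh = lobatto4_mesh(0.0, 1.0, 4)             # 4 steps -> 3*4 + 1 = 13 points
val = lobatto4_integrate(lambda x: x**5, 0.0, 1.0, 2)   # exact value: 1/6
```

Because the Lobatto nodes include both step endpoints, eigenfunction values stored on this mesh can be reused directly as quadrature data for the sector integrals.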
3.2 Linear system of first order ODEs
For Schrödinger equations with an explicitly time-dependent Hamiltonian, a difficulty when constructing accurate and efficient propagators is that fast oscillations in the time-dependent fields apparently necessitate small time steps. We propose here an effective discretization method for the solution of the linear system (10) which takes into account the oscillatory nature of the solution. We again use a CP approach, which is strongly related to the application of a modified Neumann scheme. An alternative is to employ a modified Magnus approach; however, the computation of the matrix exponential of the Magnus series terms is exceedingly expensive, especially when one wants to employ higher-order Magnus approximations.
The first step is to approximate the matrix $\mathbf{H}^{(j)}(t)$ in equation (10) by a constant (time-independent) matrix. For the construction of a second-order algorithm, we can use the diagonal matrix of sector eigenvalues as a constant approximation. Note that in this case the constant matrix is diagonal, with entries $E_k^{(j)}$, and that the solution can be easily propagated by
$$ \mathbf{c}^{(j)}(t) = e^{-i\,\mathrm{diag}(E_k^{(j)})\,(t-t_j)}\,\mathbf{c}^{(j)}(t_j). $$
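With a diagonal constant matrix, one propagation step reduces to a phase rotation of each coefficient; a minimal sketch with illustrative values:

```python
import numpy as np

def propagate_diagonal(c, E, h):
    """Propagation step for i c'(t) = diag(E) c(t): each coefficient
    picks up the phase exp(-i E_k h)."""
    return np.exp(-1j * np.asarray(E) * h) * c

E = np.array([0.5, 1.5, 2.5])                 # illustrative sector eigenvalues
c0 = np.array([0.8, 0.6j, 0.0])
c1 = propagate_diagonal(c0, E, h=0.1)
```

Since every factor has modulus one, this step is exactly unitary, which is the norm-conservation property discussed below.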
When constructing higher-order CP methods, we approximate $\mathbf{H}^{(j)}(t)$ by a higher-order polynomial, i.e. by a truncated series over the shifted Legendre polynomials $P_m^*$:
$$ \mathbf{H}^{(j)}(t) \approx \sum_m \mathbf{H}_m\,P_m^*\!\left(\frac{t-t_j}{h}\right), \qquad h = t_{j+1}-t_j, \qquad (11) $$
where the matrix weights $\mathbf{H}_m$ are calculated by quadrature (see [3, 10]). The symmetric matrix $\mathbf{H}_0$ is then diagonalized: let $\mathbf{Q}$ be the orthogonal diagonalization matrix, i.e. $\mathbf{Q}^T\mathbf{H}_0\mathbf{Q} = \mathbf{D}$ with $\mathbf{D}$ diagonal. Our propagation algorithm for the solution then takes the following form
$$ \mathbf{c}^{(j)}(t_{j+1}) = \mathbf{Q}\,\Big(e^{-i\mathbf{D}h} + \sum_{s\ge 1}\mathbf{P}_s\Big)\,\mathbf{Q}^T\,\mathbf{c}^{(j)}(t_j), \qquad (12) $$
where the $\mathbf{P}_s$ are perturbation corrections derived from the perturbation $\Delta\mathbf{H}(t) = \mathbf{H}^{(j)}(t) - \mathbf{H}_0$. In fact, by using perturbation theory one can show that the perturbations satisfy a recursive system of integral equations in terms of $\tilde{\mathbf{H}}(s) = \mathbf{Q}^T\,\Delta\mathbf{H}(t_j+s)\,\mathbf{Q}$, where $s = t - t_j$. This means that $\mathbf{P}_1$ has the following form
$$ \mathbf{P}_1 = -i\int_0^h e^{-i\mathbf{D}(h-s)}\,\tilde{\mathbf{H}}(s)\,e^{-i\mathbf{D}s}\,ds. $$
$\mathbf{P}_1$ corresponds exactly to the first term in a modified Neumann integral series, and the expressions for the higher-order perturbations are equal to the corresponding terms in the modified Neumann series. Since $\Delta\mathbf{H}(t)$ has a polynomial form, the integrals in the Neumann series terms can be computed analytically. For a fourth-order algorithm, for instance, it is sufficient to truncate the series (11) after a few terms and to add the single perturbation correction $\mathbf{P}_1$, which can then be written down in closed form.
In our implementations we used this simple fourth-order scheme and applied it once over the full sector length $h$ or (if needed to ensure accuracy) on a subdivision of the sector $[t_j, t_{j+1}]$.
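A CP/modified-Neumann step of this flavour can be sketched as follows: the matrix is frozen at its time average, diagonalized, and the first Neumann correction is added. For simplicity the correction integral is evaluated here by ordinary Gauss quadrature rather than analytically, and all names are ours, so this is not the authors' exact fourth-order scheme.

```python
import numpy as np

def expm_herm(A, tau):
    """exp(-1j*tau*A) for a Hermitian matrix A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(-1j * tau * w)) @ V.conj().T

def cp_neumann_step(H, t0, h, n_quad=6):
    """One step for i c' = H(t) c over [t0, t0+h]: freeze H at its time
    average H0, diagonalize H0 = Q D Q^T, and add the first Neumann
    correction  P1 = -i * int_0^h e^{-iD(h-s)} dH(s) e^{-iDs} ds,
    where dH(s) = Q^T (H(t0+s) - H0) Q.  Returns the propagators without
    (U0) and with (U1) the correction."""
    xg, wg = np.polynomial.legendre.leggauss(n_quad)
    s = 0.5 * h * (xg + 1.0)                  # quadrature nodes in [0, h]
    w = 0.5 * h * wg
    H0 = sum(wi * H(t0 + si) for wi, si in zip(w, s)) / h
    D, Q = np.linalg.eigh(H0)
    P1 = np.zeros_like(H0, dtype=complex)
    for wi, si in zip(w, s):
        dH = Q.conj().T @ (H(t0 + si) - H0) @ Q
        phase = np.exp(-1j * D[:, None] * (h - si)) * np.exp(-1j * D[None, :] * si)
        P1 = P1 + (-1j) * wi * phase * dH
    U0 = Q @ np.diag(np.exp(-1j * D * h)) @ Q.conj().T
    U1 = Q @ (np.diag(np.exp(-1j * D * h)) + P1) @ Q.conj().T
    return U0, U1

# illustrative 2x2 Hermitian system with a linearly growing coupling
H = lambda t: np.array([[1.0, 0.2 * t], [0.2 * t, 2.0]])
U0, U1 = cp_neumann_step(H, 0.0, 0.5)

# reference propagator from many small midpoint-exponential steps
Uref = np.eye(2, dtype=complex)
dt = 0.5 / 400
for k in range(400):
    Uref = expm_herm(H((k + 0.5) * dt), dt) @ Uref
```

Comparing `U0` and `U1` against the reference shows the effect of the single correction term: the zeroth-order (frozen) propagator is already unitary, and the correction substantially reduces the error over the large step.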
It is often important to discretize Schrödinger equations while preserving unitarity, so that conservation of the norm is ensured. The second-order scheme always preserves the norm; non-unitarity appears only through the correction terms which are added. The loss of norm conservation is consequently likely to be small. Moreover, including correction terms improves the accuracy of the method, i.e. it leads to an approximate solution which is closer to the exact one. We illustrate this in the numerical experiments section.
3.3 Oscillatory integrals
Also for the quadrature of the integrals involved in the procedure it is important to select appropriate rules. The integrands of (7) and (9) contain a product of two wavefunctions of time-independent Schrödinger equations. These wavefunctions have an oscillatory character in the part of the integration domain where the corresponding energy eigenvalue is larger than the potential. An alternative to the standard quadrature methods is provided by contemporary methods for highly oscillatory quadrature, an area that has undergone significant developments over the last ten years. Many of these methods were, however, specially devised for integrals of the form $\int_a^b f(x)\,e^{i\omega g(x)}\,dx$, where the frequency $\omega$ is large and the nonoscillatory functions $f$ and $g$ are known [11, 12, 13, 14]. Eqs. (7) and (9) do not fit in this framework. Moreover, it may be more convenient for the present application to use the same set of quadrature nodes for each integral instead of frequency-dependent ones, in order to minimize the number of CP eigenfunction evaluations. However, we do have some information available which we can use in the formulation of a specially tuned quadrature rule: we have some knowledge about the frequency of each wavefunction, and the CP approach provides us not only with the values of the wavefunctions at the meshpoints but also with those of their first derivatives. We will use this information in the construction of some adapted quadrature rules based on exponential fitting (EF). These EF rules are not only very effective in the oscillatory part of the integration domain but also in the region where an eigenfunction displays an exponential behaviour. Details about EF quadrature can be found in [15].
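To see why classical rules with fixed nodes struggle here, the following sketch applies a plain 4-point Gauss-Legendre rule to $\int_0^1 \cos(\omega x)\,dx$ for increasing $\omega$: the rule is excellent at low frequency and useless once several oscillations fit in the interval.

```python
import numpy as np

def gauss4(f, a, b):
    """Classical 4-point Gauss-Legendre rule on [a, b] (degree of exactness 7)."""
    xg, wg = np.polynomial.legendre.leggauss(4)
    x = 0.5 * (b - a) * xg + 0.5 * (b + a)
    return 0.5 * (b - a) * np.dot(wg, f(x))

# integral of cos(w x) over [0, 1] is sin(w)/w; watch the error grow with w
errs = {w: abs(gauss4(lambda x: np.cos(w * x), 0.0, 1.0) - np.sin(w) / w)
        for w in (1.0, 10.0, 40.0)}
```

An EF rule built for the known frequency keeps the same four nodes but retunes the weights so that the rule is exact precisely for the oscillatory functions present in the integrand.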
We first consider integral (9). The integrand can be generically written as the product $u(x)v(x)$, where $u$ is an eigenfunction of one Schrödinger problem and $v$ an eigenfunction of a different Schrödinger problem. As described in [2], p. 47, the eigenfunction $u$ corresponding to an eigenvalue $E_u$ can be written over the mesh interval as a linear combination of $\cos(\omega_1(x - x_i))$ and $\sin(\omega_1(x - x_i))$ with $\omega_1 = \sqrt{2\mu(E_u - \bar V)}$, where $\bar V$ is a constant approximation of the potential on the current mesh interval (for $E_u < \bar V$ the frequency is imaginary and the behaviour is exponential). Note that we have such a constant approximation of the potential available in the CP algorithm. Similarly, $v$ behaves like a combination of $\cos(\omega_2(x - x_i))$ and $\sin(\omega_2(x - x_i))$, with $\omega_2$ determined by its eigenvalue.
Based on this structure for both $u$ and $v$, we choose to use a 4-point Lobatto-EF algorithm of the form (13), using both the function values and the first derivatives at the nodes, which is exact for products of the oscillatory components of $u$ and $v$, i.e. for the functions $\cos(\omega_\pm x)$ and $\sin(\omega_\pm x)$, where $\omega_\pm = \omega_1 \pm \omega_2$. See [15] for the construction of such an EF rule.
The integrand of integral (7) has a slightly different form, which we generically write as $u(x)\,V(x,t)\,v(x)$, where $u$ and $v$ are again eigenfunctions for which evaluations of the first derivative are available. In many cases the form of the potential function is explicitly known and the first derivative with respect to $x$ of the integrand can then be evaluated as well. In this case, the EF method (13) is used. Otherwise, we propose an EF scheme which does not use derivative information and is exact for a smaller set of oscillatory functions.
4 Numerical results
As a first test problem, we consider an equation with a known analytical solution. For the initial wave function we choose an eigenfunction of the harmonic oscillator, expressible in terms of a Hermite polynomial of degree $n$, so that the analytic solution is available for comparison. We applied our CP-based procedure to approximate the wave function at later times. Figure 3 gives an idea of the shape of the solution; a CP approximation, obtained with the procedure presented in this paper, is shown. Table 1 shows some results for different sets of parameter values: the number $n$ of nodes of the initial wave function, the number $K$ of terms in the expansion (5), the size of the equidistant steps used in the $x$-direction (note that 2 extra inner Lobatto nodes are used in each such step), and the size of the equidistant time steps, which determines the number of rectangular sectors and hence the number of modified Neumann steps needed for the propagation over each sector. The conservation of norm and the error in the approximations are measured by two quantities which are evaluated at the final time:
where the norm integral (14) is evaluated by applying a classical Lobatto quadrature rule with the $x$-mesh forming the quadrature nodes. The results in the table illustrate that high accuracies are reached with small $K$, large sector widths and large step sizes. For a comparison in efficiency, we included some results obtained with the one-step implicit Crank-Nicolson (CN) scheme, which is one of the most widespread numerical schemes for solving time-dependent Schrödinger problems. The CN scheme requires the solution of a system of algebraic equations at each time step. It is clear that the CN scheme cannot be used to obtain high accuracies, similar to the ones obtained with the CP-based scheme, within a reasonable time. All computations were performed in Matlab on a standard desktop computer.
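For reference, one CN step can be sketched as follows (a generic sketch with the Hamiltonian matrix frozen for simplicity, not the authors' implementation); the Cayley-transform step is exactly norm-preserving for Hermitian $H$:

```python
import numpy as np

def crank_nicolson_step(c, H, h):
    """One Crank-Nicolson step for i c' = H c (H frozen over the step):
    solve (I + i h/2 H) c_new = (I - i h/2 H) c_old."""
    n = len(c)
    A = np.eye(n) + 0.5j * h * H
    B = np.eye(n) - 0.5j * h * H
    return np.linalg.solve(A, B @ c)

# illustrative 2x2 Hermitian system, 100 steps of size 0.05
H = np.array([[1.0, 0.3], [0.3, 2.0]])
c = np.array([1.0, 0.0], dtype=complex)
for _ in range(100):
    c = crank_nicolson_step(c, H, 0.05)
```

The cost per step is the linear solve with the matrix $I + \tfrac{ih}{2}\mathbf{H}$, and the accuracy is only second order in the step size, which is why small steps are needed for high accuracy.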
As a second test problem we used the 1D harmonic oscillator with an explicitly time-dependent frequency $\omega(t)$, i.e. the Hamiltonian $H(x,t) = -\tfrac{1}{2}\,\partial^2/\partial x^2 + \tfrac{1}{2}\,\omega(t)^2 x^2$. The frequency was chosen such that the value of the oscillator frequency is quickly doubled over time. The coherent state
was chosen as the initial state of the oscillator. Again we choose the segment $[a,b]$ sufficiently large to suppress the influence of the inaccuracy of the boundary conditions on the approximate solution. The error is evaluated at the final time. Some results are listed in Tables 2 and 3.
As a third test problem, we consider the Walker and Preston model [17] of a diatomic molecule in a strong laser field. This system is described by the one-dimensional Schrödinger equation (1) with the Hamiltonian
$$ H(x,t) = -\frac{1}{2\mu}\,\frac{\partial^2}{\partial x^2} + V(x) + f(x,t). $$
Here $V(x)$ is the Morse potential and $f(x,t)$ accounts for the laser field. This problem is used as a test bench for the time propagation methods presented in e.g. [18] and [19]. As in [17], we take values for the parameters (in atomic units: a.u.) corresponding to the HF molecule, including the reduced mass, the Morse parameters and the laser frequency. We assume the system is defined on a finite interval. As initial condition we take the ground state of the Morse potential, which is known in closed form up to a normalizing constant.
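For concreteness, the Morse potential and its normalized ground state can be sketched as follows; the parameter values and spatial interval below are illustrative stand-ins chosen by us, not the HF values from the text. With $\lambda = \sqrt{2\mu D}/\alpha$ and $z = 2\lambda e^{-\alpha x}$, the ground state is proportional to $z^{\lambda - 1/2}e^{-z/2}$.

```python
import numpy as np

# illustrative stand-in values (atomic units); the actual HF parameters
# are those given in the text, not necessarily these
mu, D, alpha = 1745.0, 0.2251, 1.1741

def morse(x):
    """Morse potential V(x) = D (1 - exp(-alpha x))^2, minimum at x = 0."""
    return D * (1.0 - np.exp(-alpha * x))**2

def morse_ground_state(x):
    """Normalized ground state, proportional to z^(lam-1/2) exp(-z/2) with
    z = 2 lam exp(-alpha x); computed in log form to avoid overflow."""
    lam = np.sqrt(2.0 * mu * D) / alpha
    z = 2.0 * lam * np.exp(-alpha * x)
    logpsi = (lam - 0.5) * np.log(z) - 0.5 * z
    psi = np.exp(logpsi - logpsi.max())          # rescale before exponentiating
    dx = x[1] - x[0]
    norm2 = dx * (0.5 * psi[0]**2 + (psi[1:-1]**2).sum() + 0.5 * psi[-1]**2)
    return psi / np.sqrt(norm2)

x = np.linspace(-1.0, 5.0, 1025)     # illustrative spatial interval
psi0 = morse_ground_state(x)
```

The ground state is sharply localized near the potential minimum and decays to negligible values at both ends of the interval, which is what justifies imposing Dirichlet boundary conditions on a finite segment.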
A natural time unit for the problem is the period of the laser field. In order to illustrate that the approach can also be used over longer time ranges, the integration was carried out over many such periods. The exact solution is accurately approximated using a sufficiently small time step. As in [18, 19], we choose an equidistant grid for the spatial coordinate with 64 steps. The results for different time steps are shown in Table 4. These results were obtained with a modest number of basis functions per sector; when taking a fixed set of basis functions, e.g. eigenfunctions of the time-independent harmonic oscillator, over the entire $x$-range, similar accuracies can only be reached by using substantially larger values for the number of basis functions.
5 Conclusion

To accurately simulate the outcome of quantum dynamical experiments with time-dependent Hamiltonians, accurate and efficient numerical techniques are needed. In this paper, we showed how the successful CP technique can be used in the spatial discretization of a time-dependent Schrödinger equation, following an approach suggested by Ixaru [1], as well as for efficient and accurate time stepping in the resulting ODE system. The effectiveness of the approach was illustrated by some numerical examples.
References

[1] L. Gr. Ixaru, Comput. Phys. Commun. 181 (2010) 1738.
[2] L. Gr. Ixaru, Numerical Methods for Differential Equations and Applications (Reidel, 1984).
[3] L. Gr. Ixaru, H. De Meyer, G. Vanden Berghe, J. Comput. Appl. Math. 88 (1997) 289.
[4] L. Gr. Ixaru, H. De Meyer, G. Vanden Berghe, Comput. Phys. Commun. 118 (1999) 259.
[5] V. Ledoux, M. Van Daele, G. Vanden Berghe, ACM Trans. Math. Software 31 (2005) 532.
[6] L. Gr. Ixaru, Comput. Phys. Commun. 147 (2002) 834.
[7] V. Ledoux, M. Van Daele, Comput. Phys. Commun. 184 (2013) 1287.
[8] I. Degani, J. Schiff, J. Comput. Appl. Math. 193 (2006) 413.
[9] A. Iserles, BIT 44 (2004) 473.
[10] V. Ledoux, M. Van Daele, SIAM J. Sci. Comput. 32 (2010) 563.
[11] D. Huybrechs, S. Vandewalle, SIAM J. Numer. Anal. 44 (2006) 1026.
[12] A. Iserles, S. P. Nørsett, BIT 44 (2004) 755.
[13] S. Olver, IMA J. Numer. Anal. 26 (2006) 213.
[14] D. Huybrechs, S. Olver, Found. Comput. Math. 12 (2012) 203.
[15] L. Gr. Ixaru, G. Vanden Berghe, Exponential Fitting (Kluwer, 2004).
[16] I. V. Puzynin, A. V. Selin, S. I. Vinitsky, Comput. Phys. Commun. 123 (1999) 1.
[17] R. B. Walker, R. K. Preston, J. Chem. Phys. 67 (1977) 2017.
[18] S. Gray, J. M. Verosky, J. Chem. Phys. 100 (1994) 5011.
[19] J. M. Sanz-Serna, A. Portillo, J. Chem. Phys. 104 (1996) 2349.