# Relation of exact Gaussian basis methods to the dephasing representation: Theory and application to time-resolved electronic spectra

## Abstract

We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that further acceleration is possible with cellularization [M. Šulc and J. Vaníček, Mol. Phys. **110**, 945 (2012)]. Here we focus on increasing the accuracy of this approximation by first implementing an exact Gaussian basis method, which benefits from the accuracy of quantum dynamics and efficiency of classical dynamics. Starting from this exact method, the DR is derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These methods include the Gaussian DR, an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussian basis functions evolving classically with an average Hamiltonian. The newly obtained methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine model, and quartic oscillator. Numerical results confirm that both the Gaussian basis method and the Gaussian DR increase the accuracy of the DR. Surprisingly, in chaotic systems the Gaussian DR can outperform the presumably more accurate Gaussian basis method, in which the two bases are evolved separately.

## 1 Introduction

High time resolution is essential for understanding many quantum dynamical processes in chemical physics and has been the main challenge of ultrafast spectroscopy for over two decades.[1] In theoretical studies, in contrast, short time scales should simplify matters by requiring shorter simulations. Still, solving the time-dependent Schrödinger equation (TDSE) is challenging even for short times due to the exponential scaling of its cost with dimensionality. In practice, one must seek a compromise between accuracy and computational efficiency, which is provided, e.g., by semiclassical[4] or time-dependent finite-basis[8] methods. Both approaches benefit from the ultrafast character of the dynamics not only thanks to a lower computational cost, but also because their accuracy deteriorates at longer times. Among these methods, the semiclassical initial value representation[12] and methods employing Gaussian bases[11] have been employed successfully for direct dynamics, in which the electronic structure is evaluated on the fly.

In this paper, we propose an in-principle exact Gaussian basis method that generalizes and increases the accuracy of the *Dephasing Representation*[19] (DR), an efficient semiclassical approximation particularly suited to calculations of time-resolved electronic spectra.[21] In electronic spectroscopy, the DR and closely related approximations are known as *phase averaging*,[23] the Wigner-averaged classical limit, or the linearized semiclassical initial value representation[25] (LSC-IVR, in the generalized sense[28]), and have been used by several authors.[29]

Although the original formulation of the DR pertains to a single electronic potential energy surface, a generalization to multiple surfaces exists.[34] The DR has many other applications: e.g., in inelastic neutron scattering,[36] as a measure of the dynamical importance of diabatic,[37] nonadiabatic,[34] or spin-orbit couplings,[35] and as a measure of the accuracy of quantum molecular dynamics on an approximate potential energy surface.[38] In the field of quantum chaos, the DR successfully describes the local density of states and the transition from the Fermi-golden-rule to the Lyapunov regime of fidelity decay.[40]

The most attractive feature of the DR is its efficiency, which is, as in the forward-backward semiclassical dynamics of Makri and co-workers,[44] partially due to the reduction of the sign problem. Motivated by numerical comparisons with other semiclassical methods,[21] we recently proved analytically[46] that the number of trajectories required for convergence of the DR is independent of the system’s dimensionality, Hamiltonian, or total evolution time. Inspired by Heller’s cellular dynamics,[47] we have further increased the computational efficiency of the DR by formulating a *cellular DR*.[22]

Unlike its efficiency, the accuracy of the DR is not always sufficient. The DR is exact in displaced harmonic oscillators[23] and often accurate in chaotic systems,[19] but due to its perturbative nature, the DR breaks down, e.g., in harmonic oscillators with significantly different force constants. Whereas Zambrano and Ozorio de Almeida proposed to correct the DR with a prefactor,[48] in this paper the accuracy of the DR is increased with a Gaussian basis approach.

Since any quantum dynamics can be performed in a Gaussian basis, methods employing Gaussian bases should be useful also for time-resolved spectroscopy. Any basis-set approach is, in principle, exact; the only inexactness stems from the incompleteness of the basis. As a result, the goal is to find the smallest basis giving sufficiently converged result. A useful way to reduce basis size is to employ time-dependent bases that explore all dynamically important regions of phase space. Such an approach has been explored extensively in the Multi-Configuration Time-Dependent Hartree (MCTDH),[8] Gaussian MCTDH,[10] and Multiple Spawning[11] methods. Here we propose two exact methods that are closely related to these[8] and several other[52] methods employing Gaussian bases, yet are specific to time-resolved spectroscopy. One of the two methods, which we call the Gaussian Basis Method, uses two bases evolving classically with two Hamiltonians corresponding to the two electronic states. The other method employs a single basis evolved classically with the average Hamiltonian. Because of its relation to the DR, we call it the *Gaussian DR*. Our results show that both the Gaussian Basis Method and the Gaussian DR improve the accuracy of DR in time-resolved stimulated emission spectra calculations in a harmonic potential, pyrazine model, and chaotic quartic oscillator.

Moreover, we show that the DR emerges naturally from the exact Gaussian Basis Method by a sequence of three approximations: propagating the basis with the average Hamiltonian (which gives the Gaussian DR), using independent Gaussians, and assuming local approximation for the potential. Since the three approximations may be taken in arbitrary order and the local approximation relaxed to a local harmonic approximation, we derive ten intermediate approximations, potentially useful for future applications. We observe a remarkable property that using the average Hamiltonian for propagating the basis, which is seemingly an approximation, can sometimes outperform the original Gaussian Basis Method. This occurs particularly in chaotic systems and parallels a property of the semiclassical DR.

The rest of this paper is organized as follows: Section ? describes the central theoretical concepts. Section ? uses methods from Sec. ? to compute time correlation functions and time-resolved stimulated emission spectra, and Section ? provides conclusions. An essential part of the paper is the Appendix describing an efficient numerical implementation of the methods from Sec. ?.

## 2 Theory

### 2.1 Time-resolved stimulated emission: spectrum, time correlation function, and Dephasing Representation

To be specific, we restrict the discussion to time-resolved stimulated emission (TRSE). Within the electric dipole approximation, time-dependent perturbation theory, and ultrashort pulse approximation, this spectrum can be computed as a Fourier transform of the following correlation function:[21]

Above, and are the amplitudes of the pump and probe laser pulses, represents the nuclear density operator in the electronic ground state at temperature , is the transition dipole moment operator coupling electronic states and , stands for the time delay between the pump and probe pulses, and is time after the probe pulse. Finally, denotes the nuclear quantum evolution operator

with Hamiltonian , where is the kinetic energy operator and is the th potential energy surface. In all expressions, the hat denotes operators in the Hilbert space of nuclei.

Within the Franck-Condon approximation and in the zero temperature limit, correlation function (Equation 1) simplifies to

where

is a specific time correlation function and the initial state is generally the vibrational ground state of the ground potential energy surface. The TRSE spectrum, given by[59]

is proportional to the wavepacket spectrum obtained[60] via the Fourier transform of :

Correlation function (Equation 3), specific to the stimulated emission, is a special case of the more general concept of *fidelity amplitude*,[61] defined as

where , I, II, is the time evolution operator for a time-dependent Hamiltonian :

Correlation function (Equation 3) for TRSE is obtained from the general fidelity amplitude (Equation 5) if the time-dependent Hamiltonians in Eq. (Equation 6) are

Besides applications in electronic spectroscopy,[32] correlation function (Equation 5) proved useful, e.g., in NMR spin echo experiments[63] and theories of quantum computation,[61] decoherence,[61] and inelastic neutron scattering.[36] The fidelity amplitude was also used as a measure of the dynamical importance of diabatic,[37] nonadiabatic,[34] or spin-orbit couplings,[35] and of the accuracy of quantum molecular dynamics on an approximate potential energy surface.[38]
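As a sanity check of the fidelity-amplitude concept, the following grid-based sketch evaluates the amplitude directly, i.e., propagates the same initial state with two Hamiltonians and takes the overlap of the results. The pair of displaced one-dimensional harmonic surfaces, the displacement `d`, the second-order split-operator propagator, and all numerical parameters are illustrative choices of ours (with ħ = m = 1), not the methods or parameters of this paper; for this model the exact magnitude is |f(t)| = exp[−(d²/2)(1−cos t)].

```python
import numpy as np

# Periodic grid and kinetic-energy grid for a 1D split-operator propagation.
N, L, d = 512, 32.0, 1.0
x = -0.5*L + (L/N)*np.arange(N)
k = 2.0*np.pi*np.fft.fftfreq(N, L/N)
V_I, V_II = 0.5*x**2, 0.5*(x - d)**2          # illustrative displaced harmonic surfaces
psi0 = np.pi**(-0.25)*np.exp(-0.5*x**2)       # ground state of V_I (hbar = m = omega = 1)

def strang_step(psi, V, dt):
    """One second-order step exp(-iV dt/2) exp(-iT dt) exp(-iV dt/2)."""
    psi = np.exp(-0.5j*dt*V)*psi
    psi = np.fft.ifft(np.exp(-0.5j*dt*k**2)*np.fft.fft(psi))
    return np.exp(-0.5j*dt*V)*psi

dt = 0.005
n_steps = int(round(np.pi/dt))                # propagate to t = pi
psi_I, psi_II = psi0.astype(complex), psi0.astype(complex)
for _ in range(n_steps):
    psi_I = strang_step(psi_I, V_I, dt)
    psi_II = strang_step(psi_II, V_II, dt)
f = np.sum(np.conj(psi_I)*psi_II)*(L/N)       # fidelity amplitude f(t = pi)
print(abs(f))
```

At t = π the exact magnitude is exp(−d²), here e⁻¹ ≈ 0.368, which the grid propagation reproduces to the accuracy of the time step.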

In practical calculations, correlation function (Equation 5) is usually approximated, and DR provides an efficient semiclassical approximation.[19] If we denote by the phase-space coordinates at time of a point along a classical trajectory of the *average* [23] Hamiltonian , the DR of fidelity amplitude (Equation 5) is written as

with

Here is the number of degrees of freedom, represents the Wigner transform of the initial density operator , and denotes the action due to the difference along trajectory :

For TRSE (Equation 3),

Throughout this paper, time dependence is denoted by as a superscript or argument in parentheses. Italics subscripts label either nuclear () or electronic () degrees of freedom. Vectors and matrices in the -dimensional vector space of nuclei are denoted by italics: e.g., or . The inner product and contraction of tensors in this space are denoted by , as in .
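To make the trajectory-based evaluation of the DR concrete, the following minimal Monte Carlo sketch (our own illustration, not the production implementation of Refs. [19] and [22]) samples initial conditions from the Wigner distribution of the initial state, propagates each phase-space point classically with the average Hamiltonian, accumulates the action due to the potential difference, and averages the resulting phase factors. The displaced harmonic surfaces with ħ = m = ω = 1 are an illustrative choice; for equal force constants the DR is exact, with |f(t)| = exp[−(d²/2)(1−cos t)].

```python
import numpy as np

hbar = 1.0
k, d = 1.0, 1.0                                  # illustrative force constant and displacement
V_I  = lambda q: 0.5*k*q**2
V_II = lambda q: 0.5*k*(q - d)**2
grad_avg = lambda q: 0.5*(k*q + k*(q - d))       # gradient of the average potential

def dr_correlation(n_traj=2000, t_max=6.0, dt=0.01, seed=0):
    """Monte Carlo DR: f(t) ~ <exp(-i DeltaS(t)/hbar)> over a Wigner sample."""
    rng = np.random.default_rng(seed)
    # Wigner distribution of the ground state of V_I (unit mass and frequency)
    q = rng.normal(0.0, np.sqrt(0.5*hbar), n_traj)
    p = rng.normal(0.0, np.sqrt(0.5*hbar), n_traj)
    n_steps = int(round(t_max/dt))
    dS = np.zeros(n_traj)
    f = np.empty(n_steps + 1, dtype=complex)
    f[0] = 1.0
    for n in range(n_steps):
        p -= 0.5*dt*grad_avg(q)                  # velocity Verlet on the average Hamiltonian
        q += dt*p
        p -= 0.5*dt*grad_avg(q)
        dS += dt*(V_II(q) - V_I(q))              # action due to the potential difference
        f[n + 1] = np.mean(np.exp(-1j*dS/hbar))
    return f

f = dr_correlation()
print(abs(f).min())
```

The minimum of |f(t)| occurs near t = π and should approach exp(−d²) up to Monte Carlo noise, consistent with the exactness of the DR for displaced harmonic oscillators noted above.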

The DR (Equation 7) can be derived[19] by linearization of the semiclassical propagator and improves on a previous method[65] inspired by the semiclassical perturbation theory of Miller and co-workers.[66] Shi and Geva[25] derived the DR without invoking the semiclassical propagator—by linearizing[68] the path integral quantum propagator.

### 2.2 Quantum dynamics and time correlation functions in a classically evolving Gaussian basis

Since our main objective is to improve the accuracy of the DR (Equation 7) by evaluating correlation function (Equation 3) without invoking the semiclassical perturbation approximation, we solve the TDSE in a classically evolving Gaussian basis. Following Heller and co-workers,[71] the initial state and the state at time are expanded as

where the time-dependent basis consists of normalized Gaussians

with

The real symmetric width matrix in Eq. (Equation 13) is assumed to be time-independent as in Heller’s *Frozen Gaussians Approximation* (FGA).[73] The time-dependent parameters and in Eq. (Equation 12) denote the position and momentum of the center of the Gaussian state , which evolves classically:

As in the MCTDH method,[74] the evolution of the basis compensates for its incompleteness. Above and throughout this paper, Greek subscripts, such as , label basis functions or components of vectors in the -dimensional vector space spanned by . Vectors and matrices in this space are denoted with the bold Roman font (e.g., and below); the inner product and contraction of tensors are expressed by juxtaposition of matrices, as in .

Inserting ansatz ( ?) into the TDSE yields a first-order differential equation for the expansion coefficients:

where denotes the time-dependent overlap matrix

and the Hamiltonian matrix

The non-Hermitian time-derivative matrix , defined as

satisfies

Our frozen Gaussian basis functions depend on time only via the classically evolving coordinates :

Analytical formulae for , , , and matrix elements are derived in Appendix ?, while the numerical implementation of the propagation algorithm is described in Appendix ?.
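As a one-dimensional illustration of such closed-form matrix elements, the overlap of two normalized frozen Gaussians can be written down and checked against direct numerical quadrature. The phase convention e^{ip(x−q)} and all parameter values below are our own choices for the sketch; with a different phase convention the sign of the imaginary term changes.

```python
import numpy as np

def frozen_gaussian(x, q, p, a):
    """Normalized 1D frozen Gaussian (hbar = 1), phase convention e^{i p (x - q)}."""
    return (a/np.pi)**0.25*np.exp(-0.5*a*(x - q)**2 + 1j*p*(x - q))

def overlap(qa, pa, qb, pb, a):
    """Closed-form overlap <g_a|g_b> consistent with the convention above."""
    dq, dp = qb - qa, pb - pa
    return np.exp(-0.25*a*dq**2 - 0.25*dp**2/a - 0.5j*(pa + pb)*dq)

x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]
qa, pa, qb, pb, a = 0.3, -0.4, 1.1, 0.8, 1.0           # arbitrary test parameters
numeric = np.sum(np.conj(frozen_gaussian(x, qa, pa, a))
                 * frozen_gaussian(x, qb, pb, a))*dx    # brute-force quadrature
print(numeric, overlap(qa, pa, qb, pb, a))
```

The quadrature and the closed form agree to near machine precision, and the diagonal elements are unity, as required for normalized basis functions.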

Propagation equations (Equation 15) can be obtained also from the Dirac-Frenkel variational principle applied to the ansatz ( ?) with the coefficients playing the role of variational parameters. Note, however, that the centers of individual Gaussians propagate along classical trajectories of the original, classical Hamiltonian. This fact does not follow from the variational principle but is enforced as in Heller’s work.[71] Hence the propagation of the Gaussians is uncoupled,[8] although, in contrast to the Independent Gaussian Approximation of Ref. [52], the propagation of the expansion coefficients does require communication between the Gaussians [see Eq. (Equation 15)]. The Gaussian MCTDH[10] and Minimum Error[52] methods, on the other hand, treat both the expansion coefficients and all Gaussian parameters variationally.[76] The concept of classical trajectories is modified also in the Coupled Coherent States method,[55] where individual Gaussians evolve according to the reordered Hamiltonian, i.e., on a potential energy surface that is averaged over the width of the Gaussian basis function. Finally, the propagation Eq. (Equation 15) is a special case of the central equation of Multiple Spawning,[11] which also considers couplings between electronic states and allows the basis to change its size during dynamics.

Up to this point, the presentation applied to general quantum dynamics. Now we describe how to use the Gaussian basis formalism to evaluate the correlation function (Equation 3) for TRSE. The initial state (Equation 11) must be propagated with the two different propagators to obtain the two final states ( ?). In analogy to Eq. ( ?), these final states[78] are expanded as

where and . The two bases are propagated classically with two separate equations (Equation 14): one for , the other for . Using Eq. (Equation 19 ), fidelity amplitude (Equation 3) becomes

with

Matrix elements differ from overlaps since and in denote basis functions evolved with two different Hamiltonians. We refer to the method specified by Eq. (Equation 20) simply as the *Gaussian Basis Method* (GBM).

### 2.3 Several approximations and derivation of Dephasing Representation from the Gaussian Basis Method

It is often necessary to treat the propagation Eq. (Equation 15) approximately. This is especially true in *ab initio* applications, where the evaluation of the potential becomes expensive. Another reason is the implicit matrix inversion in Eq. (Equation 15). We first discuss approximations relevant for general quantum dynamics in a Gaussian basis.

*Independent Gaussians (IG).* This approximation avoids the inversion problem as well as matrix multiplication by assuming that

which is justified if the basis is sparse enough so that different basis functions have a negligible overlap. As derived in Appendix ?, if Eq. (Equation 21) is satisfied exactly then the and matrices are diagonal. For and matrices this statement follows directly from Eqs. (Equation 39) and (Equation 40). As for , Eqs. (Equation 42)-(Equation 44) demonstrate that the first three moments of the potential are also diagonal under the IG assumption of Eq. (Equation 21). More generally, the text following Eq. (Equation 41) shows that each term of the Taylor expansion of contains a factor of ; hence if then for any well-behaved potential . In practice, however, Eq. (Equation 21) is satisfied only approximately and higher order terms in may lead to significant couplings even when the overlaps are small. Approximate diagonality of must therefore be taken as an additional assumption, which we consider to be an inherent part of the IG approximation.

Employing the IG approximation in Eq. (Equation 15) thus switches off communication between basis functions as in the Independent Gaussian Approximation.[52] More explicitly, since elements of satisfy

propagation equation (Equation 15) decouples,

and one can formally write the solution as

where the kinetic contribution [Eq. (Equation 40)] to is

and denotes the diagonal mass matrix. This result is equivalent to the central equation of Heller’s FGA.[73]

*Approximating* . In *ab initio* applications, the most challenging part of the propagation (Equation 15) is evaluating potential matrix elements . Even for analytical potentials, it is generally impossible to obtain in closed form. Thanks to the local nature of Gaussian basis states (Equation 13), a useful approximation is provided by expanding the potential in a Taylor series and evaluating the resulting integrals analytically. Most frequently used are the following two approximations:

1. *Local Approximation* (LA). Taylor expansion (Equation 41) is truncated after the zeroth order:

where . The LA is equivalent to the zeroth-order saddle-point approximation used in Multiple Spawning.[11]

2. *Local Harmonic Approximation* (LHA). Taylor expansion (Equation 41) is truncated after the second order, using Eqs. (Equation 42) and (Equation 43) for the first and second moments. The diagonal elements, in particular, become

As a result, the LHA requires the expensive Hessian matrix. The authors of Ref. [52] observed that the LHA in conjunction with the variational principle decouples the Gaussian basis functions. In our setting, this decoupling effect[52] does not occur, both because the Gaussians are not treated fully variationally (in particular, we use frozen and not thawed Gaussians) and because the potential is expanded about the average coordinate instead of or [see Eq. (Equation 41)]. Therefore we distinguish between the Independent Gaussian Approximation of Ref. [52] and the IG of Eq. (Equation 21).
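A simple one-dimensional check of the local harmonic idea is possible for a quadratic potential, for which a second-order expansion is exact. In the sketch below the expansion point is taken as the (generally complex) centroid ρ of the Gaussian product, with first central moment zero and second central moment 1/(2a); this is our own convenient choice for the illustration and need not coincide with the expansion point used in the Appendix. All parameter values are arbitrary.

```python
import numpy as np

a = 1.0
x = np.linspace(-14.0, 14.0, 30001)
dx = x[1] - x[0]

def g(q, p):
    """Normalized 1D frozen Gaussian, phase convention e^{i p (x - q)} (hbar = 1)."""
    return (a/np.pi)**0.25*np.exp(-0.5*a*(x - q)**2 + 1j*p*(x - q))

qa, pa, qb, pb = -0.6, 0.9, 0.7, -0.3          # arbitrary test centers
ga, gb = g(qa, pa), g(qb, pb)
S = np.sum(ga.conj()*gb)*dx                    # overlap by quadrature

V = lambda y: 0.5*y**2                         # harmonic test potential, V'' = 1
rho = 0.5*(qa + qb) + 0.5j*(pb - pa)/a         # complex centroid of the Gaussian product
V_exact = np.sum(ga.conj()*V(x)*gb)*dx         # exact matrix element by quadrature
# LHA: V(rho) + 0*V'(rho) + (V''(rho)/2) * <(x - rho)^2>, with <(x - rho)^2> = 1/(2a)
V_lha = S*(V(rho) + 0.25/a)
print(V_exact, V_lha)
```

For this quadratic potential the local harmonic expression reproduces the exact matrix element to quadrature accuracy, confirming that the LHA error stems purely from anharmonicity.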

Our numerical calculations also exploited the fourth-order expansion, permitting exact treatment of the potential in all systems discussed in Sec. ?.

*Evolving the basis with the average Hamiltonian* (AH). Unlike IG or LHA, this approximation is specific to the fidelity amplitude. In analogy to the DR, a single basis is employed for the two propagations and evolved classically with the *average Hamiltonian* according to Eq. (Equation 14). With this assumption and fidelity amplitude (Equation 20) simplifies to

where the subscripts on ’s must be retained since the Hamiltonian matrix elements in Eq. (Equation 15) still depend on the electronic state. Note that if the basis were complete at all times, using the AH would not constitute any approximation. There is an important difference, however, between the GBM (Equation 20) and AH method (Equation 26). In GBM, decays partially due to decreasing overlap between corresponding basis functions, i.e., due to decreasing diagonal elements of . In the AH method (Equation 26), in contrast, a single basis is used, and the diagonal elements of the overlap matrix remain unity at all times. Hence decays exclusively due to interference and not due to basis overlaps. A similar interpretation gave the name to the semiclassical Dephasing Representation (Equation 7), in which decays solely due to dephasing and not due to decreasing classical overlaps.[19] Because of this analogy, we refer to the AH method specified by Eq. (Equation 26) as the *Gaussian Dephasing Representation* (GDR). Note that the idea of using a common basis was also exploited in the single-set version of the MCTDH method [8] and in Shalashilin’s multiconfigurational Ehrenfest method,[79] where the common basis is propagated with a Hamiltonian given by an Ehrenfest-weighted average instead of an arithmetic average of and as in the GDR.
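The logic of the GDR can be sketched end to end in one dimension: a single phase-space grid of frozen Gaussians is carried along classical trajectories of the average Hamiltonian, the two coefficient vectors are propagated with the coupled equations for the two Hamiltonians, and the fidelity amplitude is assembled as c_I†·S·c_II. Everything below (displaced harmonic surfaces with ħ = m = 1, the quadrature-based matrix elements, the regularized inversion of the overlap matrix, and the RK4 stepping) is an illustrative construction of ours, not the implementation of the Appendix; for this model the exact result is |f(t)| = exp[−(d²/2)(1−cos t)].

```python
import numpy as np

d, a = 1.0, 1.0
V_I  = lambda y: 0.5*y**2
V_II = lambda y: 0.5*(y - d)**2
x = np.linspace(-8.0, 9.0, 601)                  # quadrature grid
dx = x[1] - x[0]

# 5 x 5 phase-space grid of basis centers; under the (harmonic) average Hamiltonian
# each center rotates about the minimum q = d/2 with unit angular frequency.
q0, p0 = [w.ravel() for w in np.meshgrid(np.arange(-2.0, 2.1), np.arange(-2.0, 2.1))]

def centers(t):
    Q, P = q0 - 0.5*d, p0
    q = 0.5*d + Q*np.cos(t) + P*np.sin(t)
    p = P*np.cos(t) - Q*np.sin(t)
    return q, p, p, -(q - 0.5*d)                 # q, p, qdot, pdot

def basis(q, p):
    X = x - q[:, None]
    return (a/np.pi)**0.25*np.exp(-0.5*a*X**2 + 1j*p[:, None]*X)

def reg_solve(S, B, eps=1e-8):
    """Eigenvalue-regularized solve S C = B for a possibly ill-conditioned overlap."""
    w, U = np.linalg.eigh(S)
    winv = np.zeros(w.shape)
    keep = w > eps*w.max()
    winv[keep] = 1.0/w[keep]
    return U @ (winv[:, None]*(U.conj().T @ B))

def rhs(t, C):
    """Time derivative of the stacked coefficient vectors C = [c_I, c_II]."""
    q, p, qd, pd = centers(t)
    G = basis(q, p)
    X = x - q[:, None]
    Gc = G.conj()*dx
    S = Gc @ G.T                                                  # overlap matrix
    T = Gc @ (-0.5*((1j*p[:, None] - a*X)**2 - a)*G).T            # kinetic matrix
    D = Gc @ ((qd[:, None]*(a*X - 1j*p[:, None]) + 1j*pd[:, None]*X)*G).T
    R = np.empty(C.shape, dtype=complex)
    R[:, 0] = -1j*((T + Gc @ (V_I(x)*G).T) @ C[:, 0]) - D @ C[:, 0]
    R[:, 1] = -1j*((T + Gc @ (V_II(x)*G).T) @ C[:, 1]) - D @ C[:, 1]
    return reg_solve(S, R)

# expand the initial state (ground state of V_I) in the t = 0 basis
psi0 = (a/np.pi)**0.25*np.exp(-0.5*a*x**2)
qi, pj, _, _ = centers(0.0)
G0 = basis(qi, pj)
c0 = reg_solve(G0.conj()*dx @ G0.T, (G0.conj()*dx @ psi0)[:, None])
C = np.hstack([c0, c0]).astype(complex)

dt, t, f = 0.01, 0.0, []
for _ in range(int(round(np.pi/0.01)) + 1):
    qt, pt, _, _ = centers(t)
    Gt = basis(qt, pt)
    f.append(np.vdot(C[:, 0], (Gt.conj()*dx @ Gt.T) @ C[:, 1]))   # f = c_I^dag S c_II
    k1 = rhs(t, C)                                                # classical RK4 step
    k2 = rhs(t + 0.5*dt, C + 0.5*dt*k1)
    k3 = rhs(t + 0.5*dt, C + 0.5*dt*k2)
    k4 = rhs(t + dt, C + dt*k3)
    C = C + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    t += dt
f = np.array(f)
print(abs(f[0]), abs(f[-1]))
```

With this modest basis, |f(0)| ≈ 1 and |f(π)| approaches exp(−d²) ≈ 0.368, and, as stressed above, the diagonal overlaps of the single basis stay at unity, so the decay comes entirely from interference of the coefficients.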

*Derivation of the DR from GBM.* We now derive the DR from the exact GBM (Equation 20) by a sequence of four approximations: AH, IG, LHA, and LA (see Fig. ?). As shown above, AH approximation yields the GDR (Equation 26), which, together with the IG approximation (Equation 21), gives

The LHA (Equation 25) implies that and

Finally, using the cruder LA (Equation 24) instead of the LHA implies that and

which is a discretized version of the DR (Equation 7) with the square coefficients playing the role of the Monte Carlo sampling weights. Note that the last result could also be obtained by assuming infinitesimally narrow Gaussians, i.e., , in Eq. (Equation 25).

Derivation of the DR from the GBM is summarized in Fig. ?, showing the four elementary approximations involved. Since the three approximations may be taken in arbitrary order, ten intermediate methods exist between the GBM and DR, altogether giving twelve methods ranging from the exact GBM to the semiclassical DR. Above, final expressions were presented for six of the twelve methods: GBM (Equation 20), FGA = IG [Eqs. (Equation 20) and (Equation 23)], GDR = AH (Equation 26), AH+IG (Equation 27), AH+IG+LHA (Equation 28), and DR = AH+IG+LA (Equation 29). The remaining six methods are easily obtained by applying a subset of the four elementary steps (AH, IG, LHA, or LA) to the original GBM (Equation 20).

## 3 Numerical examples

In this section, the methods from Sec. ? are used to compute time correlation functions required for TRSE spectra. While the efficiency of the original DR is most pronounced in high-dimensional systems (due to its dimension-independent convergence rate mentioned in the Introduction), here we focus on few-dimensional systems, which permit benchmark exact quantum calculations using the *Thawed Gaussian Approximation*[71] (TGA) or split-operator methods. As pointed out above, the GDR and GBM constitute a conceptual bridge between computational efficiency and formal accuracy: while both the GDR and GBM converge to the exact quantum result, the number of trajectories required for convergence will certainly increase with dimensionality, yet, as will be clear from the examples below, this growth is typically much slower than the exponential scaling of fixed-grid methods.

### 3.1 Test systems

*Harmonic potential*. In this model, the potential energy surface is represented by a one-dimensional harmonic potential

where is the displacement and the force constant. (For convenience, we set , since nonzero values only shift the spectrum, but do not change its shape.)

*Pyrazine model.* This system is a simplified version of the four-dimensional vibronic coupling model, which takes into account normal modes , , , and of pyrazine.[82] The and surfaces from Ref. are used, but the nonadiabatic coupling between states and is neglected since this coupling is much less important for the excitation than for the often studied excitation. This approximation was justified by two independent exact quantum calculations, with and without the coupling, which yielded only marginally different spectra (not shown). However, even this simplified model requires a nontrivial Duschinsky rotation[83] to transform between normal modes of the ground and excited states.

*Quartic oscillator.* This two-dimensional system, chosen because of its chaotic dynamics,[84] consists of two potential energy surfaces

Chaotic behavior is due to the coupling term since for the Hamiltonian becomes separable and hence integrable.

### 3.2 Computational details

The initial state was a Gaussian representing the ground vibrational state of the ground PES [in the harmonic potential (Equation 30)] or (in the pyrazine model). Initial states used in the quartic oscillator are specified in the figure captions. The Gaussian basis was generated with the Monte Carlo technique (with and , see Appendix ?) except in one-dimensional applications, where an equidistant phase-space grid was used (with , see Appendix ?). The width matrix from Eq. (Equation 12) was always equal to the width matrix of the initial state.

In the quartic oscillator, exact quantum-mechanical (QM) benchmark results were obtained with a fourth-order split-operator method,[21] whereas in the harmonic and pyrazine models, QM results were obtained with Heller’s TGA (Refs. ), which is exact in quadratic potentials. Classical trajectories were evolved with a fourth-order symplectic integrator,[21] while the propagation (Equation 46) of the vector was performed with the exponential method (Equation 48). This was feasible because of the limited size of the basis. The propagation time steps used for the harmonic oscillator, pyrazine, and quartic oscillator were a.u., a.u., and , respectively. Unit mass was used unless stated otherwise.
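A fourth-order symplectic integrator of the kind used for the classical trajectories can be sketched, e.g., with the Yoshida composition of drift and kick maps. This is one standard fourth-order choice of ours for illustration; whether it coincides with the integrator of Ref. [21] is not implied.

```python
import numpy as np

def yoshida4(q, p, grad, dt, n_steps, m=1.0):
    """Fourth-order symplectic (Yoshida) integrator for H = p^2/(2m) + V(q)."""
    x1 = 1.0/(2.0 - 2.0**(1.0/3.0))
    x0 = -2.0**(1.0/3.0)*x1
    cs = (0.5*x1, 0.5*(x0 + x1), 0.5*(x0 + x1), 0.5*x1)   # drift coefficients
    ds = (x1, x0, x1, 0.0)                                 # kick coefficients
    for _ in range(n_steps):
        for ci, di in zip(cs, ds):
            q = q + ci*dt*p/m                              # drift: free motion
            p = p - di*dt*grad(q)                          # kick: force update
    return q, p

# quick check on a unit harmonic oscillator, V(q) = q^2/2, over 100 time units
q, p = yoshida4(1.0, 0.0, lambda q: q, 0.05, 2000)
E = 0.5*p**2 + 0.5*q**2
print(E)
```

Being symplectic, the scheme shows no secular energy drift: the energy oscillates within an O(dt⁴) band around its initial value even over many periods.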

For Gaussian initial states in quadratic potentials, the DR results were computed efficiently with the recently published[22] *Cellular Dephasing Representation* (CDR). This was possible since under these conditions CDR based on a single trajectory is equivalent to the fully converged DR, which would otherwise require thousands of trajectories.[89]

### 3.3 Time-resolved stimulated emission: time correlation functions and spectra

Now we turn to the main results comparing the approximations discussed in Subsec. ?. Three overall conclusions can be drawn from these results: First, the GBM corrects the inaccuracies of the DR. Second, in chaotic systems, a finite basis evolving with the average Hamiltonian can, surprisingly, provide more accurate results than two bases evolved separately. Third, despite its simplicity, even the original DR is useful for computing TRSE spectra. The results are presented in four groups according to whether the DR works and whether the GBM (Equation 20) converges faster than the GDR (Equation 26).

*GBM outperforms GDR and DR works.* This occurs, e.g., in the harmonic model (Equation 30) with a large displacement and only a small change in the force constant [see Figure 1(a)-(b)]. The figure shows that both GBM and GDR converge to the exact QM result. As expected in this simple system, GBM converges faster than GDR, in which the basis evolves with the average Hamiltonian. Convergence of GDR is accelerated by moving the Gaussians closer to one another [compare panels (a) and (b)]. While our grid is regular, a similar effect was observed in compressed Monte Carlo sampling.[90] Even the DR is rather accurate and would be exact[23] for .

*GBM outperforms GDR and DR breaks down.* The DR breaks down in simple systems such as the harmonic surfaces (Equation 30) when the force constants differ significantly. Figure 1(c) shows that the DR captures the initial decay of but not its revivals. Methods employing Gaussian bases fix this failure. GBM converges faster than GDR, although the performance of GDR is, again, improved if the basis functions are closer to one another [compare panels (c) and (d)]. Another way to partially correct the breakdown of DR is to multiply the contributions to the DR by trajectory-dependent prefactors.[48] Fortunately, in real systems with dissipation the recurrences in are damped, which improves the credibility of the DR.

As shown in Fig. ?(a), in the pyrazine model the GBM again converges to the exact QM result faster (with ) than the GDR. Although the DR does not yield correct amplitudes of the peaks, their positions are reproduced remarkably well. This is further confirmed in the TRSE spectrum in Fig. ?(b), which was computed by Fourier transforming multiplied by a phenomenological damping function[8]

As the positions of peaks are well reproduced, one could consider this a success rather than a failure of DR. Note that the negative values present even in the exact QM spectrum in Fig. ?(b) are not numerical artifacts—unlike a continuous-wave spectrum, the TRSE spectrum defined by Eq. (Equation 4) is not guaranteed to be positive for all frequencies, although its integral over all frequencies is positive. The appearance of negative values is due to the nonstationary character of the initial state prepared by the pump pulse on the excited surface, and is discussed in detail in Ref. .
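The Fourier-transform step that produces such spectra can be sketched as follows. The cos² damping window below is an illustrative choice of ours and need not coincide with the phenomenological damping function used for Fig. ?(b); the single-frequency test correlation function merely verifies that a peak appears at the correct frequency.

```python
import numpy as np

def trse_spectrum(f, t, omegas):
    """Discretized sigma(omega) ~ Re int dt f(t) D(t) e^{i omega t},
    with an illustrative cos^2 damping window D(t)."""
    D = np.cos(0.5*np.pi*t/t[-1])**2
    dt = t[1] - t[0]
    return np.array([np.real(np.sum(f*D*np.exp(1j*w*t))*dt) for w in omegas])

# quick check on a single-frequency correlation function f(t) = e^{-i w0 t}, w0 = 3
t = np.linspace(0.0, 40.0, 4001)
f = np.exp(-1j*3.0*t)
omegas = np.linspace(0.0, 6.0, 601)
sigma = trse_spectrum(f, t, omegas)
print(omegas[np.argmax(sigma)])
```

The damping broadens the line and suppresses the finite-time ringing, at the price of a resolution limited by the window length, which is the usual trade-off in such phenomenological damping.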

*GDR outperforms GBM and DR works.* Unexpectedly, in the chaotic quartic oscillator (Equation 31) the seemingly more approximate GDR converges faster than the GBM [see Fig. ?(a)]. Although the rapid divergence of classical trajectories aggravates the incompleteness of the Gaussian basis, this problem is much less severe in GDR. In GBM, the two bases diverge rapidly even for a small change in Eq. (Equation 31). Unless both bases cover essentially the entire available phase space, will decay artificially fast due to the decay of overlaps between the two bases. In contrast, GDR avoids this decay by using a single basis for dynamics on both surfaces. Unlike the GBM, which would only converge when the two bases approached completeness, the GDR converges with a very small basis since the main contribution to the decay of in the chaotic quartic oscillator comes from the decay of the scalar product and hence is relatively insensitive to the off-diagonal elements of the overlap matrix . Similarly, the success of the original DR in chaotic systems relies on the use of a single Hamiltonian for propagating classical trajectories.[19] This explanation is confirmed in Fig. ?(a), in which GBM exhibits a spurious decay, whereas GDR (with ) agrees with the quantum result. Remarkably, due to chaotic motion, increasing up to improves the GBM result only marginally (not shown). Thanks to using the average Hamiltonian, DR matches the quantum dependence as well as the GDR, albeit at the cost of more trajectories. Comparing GDR with and without the LHA for the potential, Fig. ?(b) demonstrates that the widely used LHA breaks down completely in the quartic oscillator. In this system, matrix elements of must be treated exactly.

*GDR outperforms GBM and DR breaks down.* The success of the DR in chaotic systems is not universal. Figure 2 shows the time correlation function for several choices of the parameter , controlling chaoticity, and of the difference between the two surfaces. In Figure 2, the perturbation strength is measured by parameter , defined as

In eight of the nine Figure 2 panels, GDR converges faster than GBM. In accordance with the heuristic arguments presented above, superiority of GDR over GBM increases with increasing chaoticity, i.e., decreasing . This superiority disappears gradually with increasing perturbation due to the increasing error inherent in the propagation of the finite basis with the AH.

## 4 Conclusions

We have shown how the DR (Equation 7), an efficient semiclassical method for computing ultrafast electronic spectra, emerges naturally from the formulation of quantum dynamics in a classically evolving Gaussian basis. This was achieved by a series of three elementary approximations: evolving the basis with the AH, using IG, and applying LA for the potential. Along with the derivation based on linearizing the path integral,[25] this result puts the DR on strong theoretical footing and justifies its place among efficient semiclassical methods for computing specific time correlation functions. Moreover, the accuracy of the DR has been increased by presenting two in-principle exact generalizations of the DR: the GBM and GDR. The GBM is a straightforward application of the concept of a Gaussian basis to time correlation functions of time-resolved spectroscopy. The GDR, in contrast, is a natural generalization of the DR since (i) GDR utilizes a single basis for propagating the quantum state with both Hamiltonians, (ii) this basis propagates classically with the average Hamiltonian, and (iii) the decay of the time correlation function is due to interference and not due to decay of basis overlaps.

As expected, in many situations the GBM converges faster than the GDR. Surprisingly, in chaotic systems the GDR can outperform the GBM in which the two bases evolve separately with the correct Hamiltonians. Numerical results presented in Sec. ? confirm that both methods achieve our main goal of increasing the accuracy of the DR in calculations of ultrafast electronic spectra. As a by-product, ten intermediate methods between the GBM and DR have been obtained, which may be useful for future applications. Relationships between all twelve methods are shown in Fig. ?, which also represents the ubiquitous balancing between formal exactness (achieved typically at a high computational cost) on one hand and computational efficiency on the other. In summary, we believe that our results provide additional insight into the connections between various exact and semiclassical methods, and demonstrate the practical value of semiclassical and Gaussian basis approaches based on classical trajectories.

## A. Gaussian integrals

Here we derive formulae for the overlap, time-derivative, and Hamiltonian matrix elements required in Eq. (Equation 15). All of these matrix elements can be expressed in terms of three basic integrals

where the two matrices in the exponents denote positive definite symmetric complex matrices, while the three shift vectors are complex and of the system's dimension. The first integral is well known:[91]

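For reference, the basic multidimensional Gaussian integral reads, in a generic notation (the matrix $A$ and vector $b$ here are illustrative placeholders and need not coincide with the symbols of Eq. (Equation 34)):

```latex
\int_{\mathbb{R}^{D}} \exp\!\left(-x^{\mathsf T} A\,x + b^{\mathsf T} x\right)\mathrm{d}^{D}x
  = \pi^{D/2}\,(\det A)^{-1/2}\,
    \exp\!\left(\tfrac{1}{4}\,b^{\mathsf T} A^{-1} b\right),
```

valid for a symmetric complex $A$ whose real part is positive definite; differentiating with respect to the components of $b$ generates the moment integrals used below.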
Differentiation of Eq. (Equation 34) with respect to the components of the linear-term vector gives

Similarly one obtains

where the new quantity follows by noting that

As mentioned in Sec. ?, the Gaussian basis functions, labeled by a discrete index, have the form

where the superscript denoting time dependence is omitted for simplicity. The constant real width matrix is assumed to be independent of the basis-function index and to be symmetric positive definite in order that the basis functions (Equation 37) be square integrable.

Calculation of the overlap matrix is simplified by introducing vectors

A special case of the integral [Eq. (Equation 34)] yields

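For illustration, the resulting closed form can be checked numerically. The following sketch (in Python, with an illustrative width parameter and $\hbar = 1$, not the parameters of the actual calculations) compares the analytic overlap of two one-dimensional frozen Gaussians with a direct quadrature:

```python
import numpy as np

HBAR, GAMMA = 1.0, 1.2   # assumed units and an illustrative width parameter

def g(x, q, p):
    """Normalized 1D frozen Gaussian centered at the phase-space point (q, p)."""
    return (GAMMA / np.pi) ** 0.25 * np.exp(
        -GAMMA * (x - q) ** 2 / 2 + 1j * p * (x - q) / HBAR)

def overlap(q1, p1, q2, p2):
    """Analytic overlap <g(q1,p1)|g(q2,p2)> for a shared width GAMMA."""
    dq, dp = q1 - q2, p1 - p2
    return np.exp(-GAMMA * dq**2 / 4
                  - dp**2 / (4 * GAMMA * HBAR**2)
                  + 1j * (p1 + p2) * dq / (2 * HBAR))

# brute-force check of the closed form against a trapezoidal quadrature
x = np.linspace(-20.0, 20.0, 40001)
q1, p1, q2, p2 = 0.3, -0.5, 1.1, 0.8
f = np.conj(g(x, q1, p1)) * g(x, q2, p2)
numeric = (f.sum() - 0.5 * (f[0] + f[-1])) * (x[1] - x[0])
analytic = overlap(q1, p1, q2, p2)
```

The same closed form is reused (vectorized) in the sketches further below.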
Application of the identity

for the time-derivative matrix elements (Equation 18) to Eq. (Equation 38) for the overlap matrix gives

As for the kinetic-energy operator, we assume a slightly generalized form

where the coefficients are matrix elements of a symmetric positive definite matrix. (Nevertheless, all numerical calculations employed a diagonal matrix, namely the inverse of the diagonal mass matrix.) The kinetic-energy matrix elements are given by

with .

It is impossible to write the potential energy matrix elements in a closed form for a general potential. One can, however, obtain a useful approximation by expanding the potential in a truncated Taylor series about the coordinate at which the Gaussian part of the integrand attains its maximum, and by evaluating the resulting integral analytically. Adopting the multi-index notation, the potential is approximated up to the desired order as

Contributions to the potential matrix elements from individual terms in Eq. (Equation 41) are obtained by repeated application of the appropriate differential operator to the overlap matrix. As a result, one obtains the first moment

the second moment

and the third moment

The fourth moment is too lengthy to reproduce here. Nevertheless, it is easily evaluated by using Eq. (Equation 42) together with

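The use of such moments can be illustrated in one dimension: for a quadratic potential, the second-order Taylor expansion is exact, so the moment-based matrix element must agree with a direct quadrature. The following sketch (illustrative parameters, $\hbar = 1$, midpoint expansion; not the actual implementation) verifies this:

```python
import numpy as np

HBAR, GAMMA, OMEGA = 1.0, 1.2, 0.7   # illustrative parameters

V   = lambda x: 0.5 * OMEGA**2 * x**2    # quadratic test potential
dV  = lambda x: OMEGA**2 * x
d2V = lambda x: OMEGA**2

def g(x, q, p):
    return (GAMMA / np.pi) ** 0.25 * np.exp(
        -GAMMA * (x - q) ** 2 / 2 + 1j * p * (x - q) / HBAR)

def V_elem(q1, p1, q2, p2):
    """Second-order (local harmonic) approximation of <g1|V|g2>, built from
    the overlap and the first two moments of the complex Gaussian product,
    expanded about the midpoint x0 = (q1 + q2)/2."""
    dq, x0 = q1 - q2, 0.5 * (q1 + q2)
    S = np.exp(-GAMMA * dq**2 / 4 - (p1 - p2)**2 / (4 * GAMMA * HBAR**2)
               + 1j * (p1 + p2) * dq / (2 * HBAR))
    m1 = 1j * (p2 - p1) / (2 * GAMMA * HBAR)   # first moment <x - x0>
    m2 = m1**2 + 1 / (2 * GAMMA)               # second moment <(x - x0)**2>
    return S * (V(x0) + dV(x0) * m1 + 0.5 * d2V(x0) * m2)

# for a quadratic V the truncated expansion is exact: compare with quadrature
x = np.linspace(-20.0, 20.0, 40001)
q1, p1, q2, p2 = 0.3, -0.5, 1.1, 0.8
f = np.conj(g(x, q1, p1)) * V(x) * g(x, q2, p2)
numeric = (f.sum() - 0.5 * (f[0] + f[-1])) * (x[1] - x[0])
```

For an anharmonic potential the same routine becomes approximate, which is the situation addressed by the higher moments above.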
## B. Efficient numerical implementation

### B.1 Numerical algorithm for the propagation equation (18)

### B.2 Factoring out the semiclassical phase factor in Eq. (18)

Propagation (Equation 15) can be accelerated by evaluating the dominant oscillatory behavior of the basis coefficients semiclassically, which is achieved by factoring out the semiclassical phase factor in expansion (?):

where is the classical action. New coefficients are propagated according to the equation

The modified matrices can be expressed as

where the placeholder stands for any of the three matrices above. A similar factorization was employed by Martínez and co-workers in the Multiple Spawning method[11] and by Shalashilin and co-workers in the Coupled Coherent States method.[55]

### B.3 Solving the propagation Eqs. (18) or (B2)

Classical propagation of the basis (i.e., of and ) and the action is performed with a symplectic algorithm.[92] Quantum propagation (Equation 15) of the basis coefficients is a more involved process: While quite sophisticated algorithms[93] exist for similar problems, here we employed two simple methods based on dividing the propagation range into equal intervals of size .
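As a minimal sketch of the classical part (generic one-dimensional code with assumed names and parameters, not the actual implementation), the velocity Verlet algorithm, a standard second-order symplectic integrator, can propagate each basis-function center while accumulating the classical action by the trapezoidal rule:

```python
import numpy as np

def propagate(q, p, V, dV, m=1.0, dt=0.001, nsteps=1000):
    """Velocity-Verlet (second-order symplectic) propagation of a Gaussian
    center (q, p), accumulating the classical action S = int (T - V) dt."""
    S = 0.0
    for _ in range(nsteps):
        L_old = p**2 / (2 * m) - V(q)
        p -= 0.5 * dt * dV(q)          # half kick
        q += dt * p / m                # drift
        p -= 0.5 * dt * dV(q)          # half kick
        S += 0.5 * dt * (L_old + p**2 / (2 * m) - V(q))  # trapezoidal action
    return q, p, S

# one full period of a unit harmonic oscillator returns (q, p) to the start
V, dV = (lambda q: 0.5 * q**2), (lambda q: q)
dt = 2 * np.pi / 5000
qT, pT, ST = propagate(1.0, 0.0, V, dV, dt=dt, nsteps=5000)
```

For the harmonic test case the action over a full period vanishes exactly, which makes the accumulated value a convenient consistency check.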

Both the *implicit Euler method,*

and the *exact exponential method,*

are clear in the matrix notation, with the subscript denoting the propagation step. In addition, the wave function was renormalized by rescaling the coefficients after each step. While essential only for the implicit Euler method, the renormalization was always performed, since for sufficiently large basis sets its cost is negligible compared to the overall cost of both methods. In the algorithm of Subsec. ?, steps 4, 5, and 6 must be reordered as 6, 4, 5 for the implicit Euler method (Equation 47), since the basis must first be propagated in order to evaluate the matrices at the new step.
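The two schemes can be compared on a generic model problem $i\,\dot c = M c$ with a Hermitian matrix $M$ standing in for the effective Hamiltonian (the overlap matrix is taken as the identity here; this is an illustrative sketch with assumed names, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = 0.5 * (A + A.conj().T)          # generic Hermitian "effective Hamiltonian"
c0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
c0 /= np.linalg.norm(c0)

dt, nsteps = 0.01, 200
w, U = np.linalg.eigh(M)            # eigendecomposition replaces a general expm
step = (U * np.exp(-1j * dt * w)) @ U.conj().T           # exact one-step propagator
exact = (U * np.exp(-1j * dt * nsteps * w)) @ U.conj().T @ c0

c_euler, c_exp = c0.copy(), c0.copy()
I = np.eye(N)
for _ in range(nsteps):
    # implicit Euler with renormalization: solve a linear system, never invert
    c_euler = np.linalg.solve(I + 1j * dt * M, c_euler)
    c_euler /= np.linalg.norm(c_euler)
    c_exp = step @ c_exp            # "exact exponential" method
```

The exponential method reproduces the exact evolution to round-off, while the first-order implicit Euler method accumulates an O(dt) error even with renormalization, illustrating the trade-off discussed above.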

In practice, it is neither necessary nor desirable to compute the inverse of the overlap matrix explicitly. Rather than applying the inverse as indicated, it is preferable to solve the corresponding system of linear equations by any standard method. In this context, Martínez and co-workers suggested[50] using singular value decomposition;[94] a theoretical justification was given later.[96] A different approach, inverting the overlap matrix by an iterative algebraic procedure, was also explored.[97]
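A generic sketch of such a regularized solve (the function name and cutoff value are illustrative assumptions):

```python
import numpy as np

def solve_via_svd(S, b, cutoff=1e-10):
    """Solve S x = b through a singular value decomposition, discarding
    singular values below cutoff * s_max instead of forming S^{-1}."""
    U, s, Vh = np.linalg.svd(S)
    keep = s > cutoff * s[0]
    return Vh[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])

# well-conditioned test case: the SVD solve must agree with a direct solve
rng = np.random.default_rng(1)
N = 8
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
S = G @ G.conj().T + N * np.eye(N)      # Hermitian positive definite
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)
```

The same truncation can be obtained with `np.linalg.lstsq(S, b, rcond=cutoff)`; the explicit form above makes the discarded subspace visible.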

Due to matrix exponentiation, the exponential method (Equation 48) is feasible only for smaller basis sets. Although presumably exact, this method does not necessarily permit a larger time step than the first-order implicit Euler method with renormalization: for a badly conditioned overlap matrix, the eigenvalues of the exponentiated matrix become large unless the time step is very small, and since most numerical methods for matrix exponentiation require these eigenvalues to be small,[98] the time step must be lowered.

It was also proposed[52] to monitor the eigenvalues of the overlap matrix during propagation: the basis size is insufficient if all eigenvalues are close to unity in absolute value. In contrast, if some eigenvalues are very small, some basis functions should be removed in order to restore the regularity of the overlap matrix. Such features, however, have not been implemented here.

In our calculations, the exponential method allows a much larger time step than the implicit Euler method (both in the pyrazine model and in the quartic oscillator, see Fig. ?). Figure ? also suggests that the modified propagation Eq. (Equation 46) from Appendix ? permits increasing the time step in comparison with the original propagation Eq. (Equation 15). This improvement is more pronounced in the implicit Euler method (Equation 47) than in the exponential method (Equation 48).

### B.4 Choice of the basis in Eq. (14)

Another numerical issue is the choice of the basis in Eq. ( ?), which must properly represent the initial state. A small approximation error in Eq. ( ?), however, does not guarantee the quality of the basis at later times. In general, a compromise is required between the size of the basis and its acceptability from the dynamical point of view.

Two methods used for the Sec. ? calculations are described below, with more thorough discussions available elsewhere.[99] To keep the notation simple, we consider the initial state to be a Gaussian of the form (Equation 12):

*Equidistant phase-space basis* functions are constructed as

where

Two integer parameters represent the numbers of grid points in the position and momentum coordinates; the size of the basis is their product.

The position and momentum grid spacings are chosen to ensure a given number of basis functions per unit phase-space area and a constant absolute value of the overlap between neighboring functions along either the position or the momentum grid. These requirements imply

Since the basis is nonorthogonal, ensuring a fixed overlap between neighboring functions does not guarantee constant linear independence of the basis with increasing size, as illustrated in Figure 3, which shows the dependence of the condition number of the overlap matrix on the basis size.
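The degradation of conditioning with basis density can be reproduced with a small sketch (one-dimensional frozen Gaussians with illustrative parameters; the grid layout and values are assumptions, not those of Figure 3):

```python
import numpy as np

def lattice_cond(nq, npts, dq, dp, gamma=1.0, hbar=1.0):
    """Condition number of the overlap matrix of an equidistant
    phase-space grid of 1D frozen Gaussians."""
    qs = dq * (np.arange(nq) - (nq - 1) / 2)
    ps = dp * (np.arange(npts) - (npts - 1) / 2)
    Q, P = np.meshgrid(qs, ps, indexing="ij")
    q, p = Q.ravel(), P.ravel()
    dqm = q[:, None] - q[None, :]
    dpm = p[:, None] - p[None, :]
    # analytic pairwise overlaps of frozen Gaussians with shared width gamma
    S = np.exp(-gamma * dqm**2 / 4 - dpm**2 / (4 * gamma * hbar**2)
               + 1j * (p[:, None] + p[None, :]) * dqm / (2 * hbar))
    return np.linalg.cond(S)

c_sparse = lattice_cond(4, 4, 2.0, 2.0)   # well-separated grid
c_dense = lattice_cond(4, 4, 1.0, 1.0)    # halved spacings, same grid size
```

Halving both spacings raises the neighbor overlaps and therefore the condition number, in line with the trend shown in Figure 3.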

The *Monte Carlo basis* is generated with an algorithm[100] that samples the coherent-state basis functions from the absolute value of the overlap of the initial state (Equation 49) with a basis state of Eq. (Equation 12). The absolute value of this overlap, understood as a function of the basis-state position and momentum, is

The overall procedure is as follows:

The conditional statement in the algorithm is added to improve the condition number of the resulting overlap matrix.[11] A modified approach based on orthogonal projections was proposed[102] in the Matching-Pursuit Split-Operator Fourier-Transform technique.
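A minimal sketch of such a sampling procedure (hypothetical parameter names and overlap threshold; the actual algorithm of Ref. 100 may differ in detail): candidates are rejection-sampled from the Gaussian overlap weight, and any candidate overlapping too strongly with an already accepted center is discarded to keep the overlap matrix regular.

```python
import numpy as np

def sample_centers(n, q0=0.0, p0=0.0, gamma=1.0, hbar=1.0,
                   max_overlap=0.9, box=6.0, seed=7):
    """Rejection-sample n coherent-state centers from |<g(q,p)|psi0>| for a
    Gaussian initial state at (q0, p0), discarding candidates that overlap
    with an accepted center by more than max_overlap (conditioning safeguard).
    Illustrative sketch with assumed parameters."""
    rng = np.random.default_rng(seed)
    absS = lambda q, p, qa, pa: np.exp(
        -gamma * (q - qa)**2 / 4 - (p - pa)**2 / (4 * gamma * hbar**2))
    centers = []
    while len(centers) < n:
        q = q0 + box * (rng.random() - 0.5)
        p = p0 + box * (rng.random() - 0.5)
        if rng.random() >= absS(q, p, q0, p0):
            continue                     # rejection step: sample from |overlap|
        if any(absS(q, p, qa, pa) > max_overlap for qa, pa in centers):
            continue                     # conditioning safeguard
        centers.append((q, p))
    return np.array(centers)

C = sample_centers(20)
```

The accepted centers cluster around the initial state while maintaining a bounded pairwise overlap, which keeps the overlap matrix away from singularity.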

### References
