
# Efficient methods for time-dependence in semiclassical Schrödinger equations

Philipp Bader¹, Arieh Iserles², Karolina Kropielnicka³ & Pranav Singh⁴

¹ La Trobe University, Department of Mathematics, Kingsbury Dr, Melbourne 3086 VIC, Australia.
² Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Rd, Cambridge CB3 0WA, UK.
³ Institute of Mathematics, University of Gdańsk, Wit Stwosz Str. 57, 90-952 Gdańsk, Poland.
⁴ Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Rd, Cambridge CB3 0WA, UK.
###### Abstract

We build an efficient and unitary (hence stable) method for the solution of the semiclassical Schrödinger equation subject to explicitly time-dependent potentials. The method is based on a combination of the Zassenhaus decomposition (?) with the Magnus expansion of the time-dependent Hamiltonian. We conclude with numerical experiments.

## 1 Introduction

Rapid advances in laser technologies over the recent years have led to a significant progress in the control of systems at the molecular level (?). Pioneering work in the control of chemical systems at the quantum level was done in the study of photo-dissociation and bimolecular reactions. Various control techniques such as the pump-dump quantum control scheme (?) and the coherent control schemes (?) have had numerous experimental validations and applications (??).

These experimental successes, together with a dramatic improvement in our ability to shape femtosecond laser pulses over the recent years, have led to a great deal of interest in the development of systematic ways of designing controls (shaping laser pulses) and to a requirement for rigorous mathematical analysis of issues such as controllability (?).

In the case of laser induced breakdown (photo-dissociation) of a molecule, for instance, there is a great deal of interest in designing lasers that achieve efficient breakdown. The fact that the dissociation timescales are often themselves in femtoseconds means that it cannot generally be assumed that the laser pulse causes near-instantaneous and efficient excitation of a molecule sitting in the ground state, having no other influence thereafter – the correct dynamics require taking into account the time-dependent nature of the electric potential (laser) throughout the evolution of the wavefunction.

To analyse the control exerted by these lasers we need efficient means of solving Schrödinger equations featuring time-dependent Hamiltonians; existing strategies are either of low accuracy or become prohibitively expensive at higher orders of accuracy.

The design of laser pulses in optimal control is often posed as an inverse problem that is solved via optimisation schemes requiring repeated solution of Schrödinger equations with modified time-dependent Hamiltonians. The ability to solve these Schrödinger equations efficiently, with moderately large time steps and high accuracy, becomes crucial here, creating a need for high-order methods (?).

In this paper, we are interested in the numerical computation of the linear, time-dependent Schrödinger equation in a semiclassical regime for a nucleus moving in a time-dependent electric field,

$$\partial_t u(x,t)=-\mathrm{i}\varepsilon^{-1}\mathrm{H}u(x,t)=\mathrm{i}\bigl[\varepsilon\Delta-\varepsilon^{-1}V(x,t)\bigr]u(x,t),\qquad x\in[-1,1],\quad t\geq 0, \tag{1.1}$$

equipped with an initial condition $u(x,0)=u_0(x)$ and periodic boundary conditions. We assume that the potential $V(\cdot,t)$ is periodic for every $t\geq 0$.

The equation (1.1) is posed on the Hilbert space $\mathrm{L}^2([-1,1];\mathbb{C})$, and the squared modulus of the solution, $|u(x,t)|^2$, is the probability density of finding the particle at the point $x$ at time $t$. For this reason the initial condition is normalised to one, and it is easy to see that the $\mathrm{L}^2$ norm of the solution is an invariant,

$$\|u(\cdot,t)\|_{\mathrm{L}^2}^2=\int_{-1}^{1}|u(x,t)|^2\,\mathrm{d}x=\|u(\cdot,0)\|_{\mathrm{L}^2}^2.$$

The wave function undergoes unitary evolution, which we wish to preserve under discretisation – both because of physical significance, and since, as we mention in Section 4, it implies stability of the numerical method.

Here the semiclassical parameter $0<\varepsilon\ll 1$ may arise out of a Born–Oppenheimer approximation, via spatio-temporal scaling, or a combination of the two, depending upon the physical system under consideration. The regularity of $V$ depends on the order of desired accuracy, but for convenience we assume that it is smooth in its domain. The initial condition is usually a high-frequency wave packet, but even if it is non-oscillatory it can be shown, cf. the analysis in (?), that the solution to this Schrödinger equation is highly oscillatory, with frequency of at least $\mathcal{O}(\varepsilon^{-1})$. This, as a matter of fact, is the main reason why finding an effective numerical method for (1.1) is such a challenging task. Obviously, the naive approach of finite differences is out of consideration; instead, the usual methodology consists of a semidiscretisation in space via spectral methods followed by an exponential splitting.

The first step in approximating (1.1) is usually spatial discretisation, which yields the following system of ODEs,

$$\mathbf{u}'(t)=\mathrm{i}\bigl(\varepsilon K_2-\varepsilon^{-1}D_{V(\cdot,t)}\bigr)\mathbf{u}(t),\qquad t\geq 0, \tag{1.2}$$

where $K_2$ and $D_{V(\cdot,t)}$ are matrices representing the discretisation of the second derivative and of multiplication by $V(\cdot,t)$, respectively. We understand that $\mathbf{u}(t)$ is a vector representing an approximation to the solution of (1.1) at time $t$, and that $\mathbf{u}(0)$ is derived from the initial condition. A second-order method can be obtained by freezing the matrix $D_{V(\cdot,t)}$ in the middle of the interval and applying the Strang splitting

$$\mathbf{u}(t)=\mathrm{e}^{\frac12\mathrm{i}t\varepsilon K_2}\,\mathrm{e}^{-\mathrm{i}t\varepsilon^{-1}D_{V(\cdot,t/2)}}\,\mathrm{e}^{\frac12\mathrm{i}t\varepsilon K_2}\,\mathbf{u}(0)+\mathcal{O}(t^3).$$
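A single Strang step of this form is straightforward to realise with FFTs on a periodic grid. The following is a minimal sketch, not the authors' implementation; the function name, the grid construction and the callable `V` are our own choices:

```python
import numpy as np

def strang_step(u, h, eps, x, V, t):
    """One Strang step for u' = i(eps*u_xx - (1/eps)*V(x,t)*u) on a periodic grid.

    Freezes the potential at the midpoint t + h/2, as in the second-order
    splitting above.  The kinetic half-steps are diagonal in Fourier space.
    """
    N = len(x)
    L = x[-1] - x[0] + (x[1] - x[0])             # period of the grid, here 2
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # Fourier wavenumbers
    half_kinetic = np.exp(0.5j * h * eps * (-k**2))  # e^{(ih/2) eps d_xx}
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))
    u = np.exp(-1j * h / eps * V(x, t + h / 2)) * u  # diagonal potential phase
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))
    return u
```

Each factor is unitary, so the step conserves the discrete $\mathrm{L}^2$ norm exactly, mirroring the invariant above.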

This splitting has the advantage of separating the scales $\varepsilon$ and $\varepsilon^{-1}$, as well as featuring easily computable exponentials. Using spectral collocation or spectral spatial discretisation methods, the matrices $K_2$ and $D_{V(\cdot,t)}$ are either diagonal (thus exponentiated directly) or circulant (thus their exponentials are approximated by FFT). Higher-order methods can be obtained by representing the solution to (1.2) through a Magnus expansion,

$$\mathbf{u}(t)=\mathrm{e}^{\Theta(t)}\,\mathbf{u}(0),$$

where $\Theta(t)$ is a time-dependent skew-Hermitian matrix obtained as an infinite series, $\Theta(t)=\sum_{k=1}^{\infty}\Theta_k(t)$, with each $\Theta_k$ composed of nested integrals and commutators of the matrices $K_2$ and $D_{V(\cdot,t)}$.

This approach was exploited in ?, where the authors conclude that the Magnus expansion is convergent provided the time step $h$ is such that, for some constant $c$, the inequality $\int_0^h\|\mathcal{A}(\xi)\|\,\mathrm{d}\xi\leq c$ holds. This, as explained later, forces a time step that shrinks with $\varepsilon$. Another serious drawback of this approach lies in the costly approximation of the exponential $\mathrm{e}^{\Theta(t)}$. As it turns out, the exponent $\Theta$ ends up being large (both in norm and in dimension), and neither diagonal nor circulant. Indeed, observe that the highly oscillatory nature of the solution to (1.1) requires a large number of degrees of freedom in the spatial discretisation, $N=\mathcal{O}(\varepsilon^{-1})$. Since the differentiation matrix grows with $N$, the operator $\Theta$, as a sum of nested commutators of $K_2$ and $D_{V(\cdot,t)}$, turns out to be a large matrix which does not possess any favourable structure that could allow an effective approximation of the exponential $\mathrm{e}^{\Theta}$.

Powerful tools like the Zassenhaus splitting or the Baker–Campbell–Hausdorff formula were historically avoided in splitting methods due to the large computational cost of nested commutators. However, as it happens, upon choosing the correct infinite-dimensional Lie algebra for the Schrödinger vector field, these commutators lose their unwelcome features and enable the derivation of effective asymptotic splittings.

In (?), the current authors established a new framework for a numerical approach to the linear time-dependent problem with an autonomous potential,

$$\partial_t u(x,t)=\mathrm{i}\bigl[\varepsilon\Delta-\varepsilon^{-1}V(x)\bigr]u(x,t),\qquad x\in[-1,1],\quad t\geq 0,$$

where the underlying problem is considered to evolve in a certain Lie group, and the splitting of the linear operator on the right-hand side is followed by semidiscretisation. Due to the choice of a suitable Lie algebra, the authors were able to derive a new exponential splitting, the asymptotic exponential splitting, of the following form:

$$\mathrm{e}^{\mathrm{i}h(\varepsilon K-\varepsilon^{-1}D)}=\mathrm{e}^{\frac12 W^{[0]}}\mathrm{e}^{\frac12 W^{[1]}}\cdots\mathrm{e}^{\frac12 W^{[s]}}\,\mathrm{e}^{W^{[s+1]}}\,\mathrm{e}^{\frac12 W^{[s]}}\cdots\mathrm{e}^{\frac12 W^{[1]}}\mathrm{e}^{\frac12 W^{[0]}}+\mathcal{O}(\varepsilon^{2s+2}), \tag{1.3}$$

where

$$\begin{aligned}
W^{[0]}&=W^{[0]}(h,\varepsilon,K,D)=\mathcal{O}(\varepsilon^{0}),\\
W^{[k]}&=W^{[k]}(h,\varepsilon,K,D)=\mathcal{O}(\varepsilon^{2k-2}),\qquad k=1,\ldots,s,\\
W^{[s+1]}&=W^{[s+1]}(h,\varepsilon,K,D)=\mathcal{O}(\varepsilon^{2s}).
\end{aligned}$$

Here $K$ and $D$ are matrices that approximate the differential operator $\partial_x^2$ and multiplication by the potential $V$, respectively. Such asymptotic exponential splittings derived in (?) are superior to standard exponential splittings in a number of ways.

First of all, instead of quantifying the errors in terms of the step size $h$, which could have been misleading due to large hidden constants, the errors are quantified in terms of the inherent semiclassical parameter $\varepsilon$, taking into account the oscillations characteristic of the semiclassical Schrödinger equation.

Secondly, these splittings require far fewer exponentials than classical splittings to attain a given order. To be precise, the number of exponentials is shown to grow linearly, rather than exponentially, with the order. Moreover, the exponents decay increasingly rapidly in powers of $\varepsilon$, yielding an asymptotic splitting.

Thirdly, each of these exponentials can be computed fairly easily. The exponents $W^{[0]}$ and $W^{[1]}$ are either diagonal or circulant matrices, and their exponentials can be computed either directly or through FFT, respectively. The remaining exponents are very small and their exponentials can be computed cheaply using low-dimensional Lanczos methods.
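To illustrate the last point, a generic Krylov–Lanczos approximation of $\mathrm{e}^{W}\mathbf{v}$ for a small skew-Hermitian exponent $W$ might look as follows. This is an assumed, textbook-style sketch rather than the authors' code; the subspace dimension `m` is a free parameter:

```python
import numpy as np

def lanczos_expm(W, v, m=16):
    """Approximate exp(W) @ v for skew-Hermitian W via an m-dimensional
    Krylov (Lanczos) subspace.  Writes W = i*H with H Hermitian, runs
    Lanczos on H and exponentiates the small tridiagonal projection."""
    H = -1j * W                        # Hermitian
    beta0 = np.linalg.norm(v)
    Q = np.zeros((len(v), m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    Q[:, 0] = v / beta0
    for j in range(m):
        w = H @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = np.real(np.vdot(Q[:, j], w))
        w -= alpha[j] * Q[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:        # invariant subspace found: stop early
                Q, alpha, beta = Q[:, :j + 1], alpha[:j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    lam, S = np.linalg.eigh(T)         # exp(iT) e1 via eigendecomposition
    e1 = S.conj().T[:, 0]
    return beta0 * Q @ (S @ (np.exp(1j * lam) * e1))
```

Because the projected exponential $\mathrm{e}^{\mathrm{i}T}$ is unitary, the approximation preserves the norm of `v` exactly, in keeping with the stability discussion above.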

The overall cost is quadratic in the desired order, in contrast to the exponential cost of Yoshida-type splittings, which becomes increasingly prohibitive once the Hamiltonian to be split features more than two terms.

The aim of this paper is to derive asymptotic exponential splittings for Schrödinger equations with time-varying potentials. To develop such a splitting, we must first resort to the Magnus expansion. We follow the approach of (?) in Section 3, discretising the integrals in the Magnus expansion using Gauss–Legendre quadratures. However, unlike the traditional Magnus expansion for ODEs, we work with infinite-dimensional operators when evaluating the commutators. To arrive at such a commutator-free expression, we work in the free Lie algebra of the infinite-dimensional operators $\partial_x^2$ and $V$, discussed in Section 2. Following the framework of (?), a symmetric Zassenhaus splitting is carried out on the commutator-free Magnus expansion to present, eventually, the asymptotic exponential splitting of the fifth order (3.28). Obviously, following this derivation one can obtain a method of any desired order, see Table 1. Implementation and numerical examples are discussed in Section 4.

Convergence and unitarity of our method follow from exactly the same argument that was presented in (?). Namely, it can easily be shown that all the exponents appearing in the derived splitting (3.28) are skew-Hermitian, hence the exponentials are unitary, which suffices for the stability of the method. Given consistency of our method (indeed, our scheme will be shown to be of local accuracy much higher than the first order required for consistency), we can use the Lax equivalence theorem and conclude convergence of the method.

Realistic systems in quantum chemistry could involve time-dependent matrix-valued, highly oscillatory and stochastic potentials, among others. The first of these will require an extension of our Lie algebraic framework and is under active investigation, while extensions of an alternative scheme that was developed in a recent work (?) could prove promising for oscillatory and low regularity potentials. In this approach the integrals appearing in the Magnus expansion are discretised at the very last stage, following a symmetric Zassenhaus splitting.

## 2 Lie-group setting

Following the established framework in (?), we suppress the dependence on $x$ in (1.1) and analyse the following abstract ODE,

$$\partial_t u(t)=\mathcal{A}(t)u(t),\qquad u(0)=u_0, \tag{2.4}$$

where $\mathcal{A}(t)=\mathrm{i}\bigl[\varepsilon\Delta-\varepsilon^{-1}V(\cdot,t)\bigr]$. Because the operator $\mathcal{A}(t)$ belongs to the Lie algebra of (infinite-dimensional) skew-Hermitian operators acting on the Hilbert space $\mathrm{L}^2([-1,1];\mathbb{C})$, its flow is unitary and resides in the corresponding Lie group.

The vector field in the semiclassical Schrödinger equation is a linear combination of the action of two operators, the second derivative $\partial_x^2$ and multiplication by the interaction potential $V(\cdot,t)$. Since our main tools, the Magnus expansion and exponential splitting methods, entail nested commutation, we consider the free Lie algebra,

$$\mathfrak{F}=\mathrm{FLA}\{\partial_x^2,V\},$$

i.e., the linear-space closure of all nested commutators generated by $\partial_x^2$ and $V$. Following (?), we describe their action on sufficiently smooth functions, e.g.

$$[V,\partial_x^2]u=V(\partial_x^2u)-\partial_x^2(Vu)=-(\partial_x^2V)u-2(\partial_xV)\partial_xu,$$

which means that $[V,\partial_x^2]=-(\partial_x^2V)-2(\partial_xV)\partial_x$. In general, we note that all terms in $\mathfrak{F}$ belong to the set

$$\mathfrak{G}=\left\{\sum_{k=0}^{n}y_k(x)\partial_x^k\ :\ n\in\mathbb{Z}_+,\ y_0,\ldots,y_n\in\mathrm{C}_p^\infty[-1,1]\right\},$$

where the subscript $p$ means periodicity in $[-1,1]$. It is trivial to observe that $\mathfrak{G}$ is itself a Lie algebra with the commutator

$$\left[\sum_{i=0}^{n}f_i(x)\partial_x^i,\ \sum_{j=0}^{m}g_j(x)\partial_x^j\right]=\sum_{i=0}^{n}\sum_{j=0}^{m}\sum_{\ell=0}^{i}\binom{i}{\ell}f_i(x)\bigl(\partial_x^{i-\ell}g_j(x)\bigr)\partial_x^{\ell+j}-\sum_{j=0}^{m}\sum_{i=0}^{n}\sum_{\ell=0}^{j}\binom{j}{\ell}g_j(x)\bigl(\partial_x^{j-\ell}f_i(x)\bigr)\partial_x^{\ell+i}. \tag{2.5}$$
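The prototype commutator $[V,\partial_x^2]$ above is easy to verify symbolically. A minimal sympy check, our own and not part of the paper:

```python
import sympy as sp

x = sp.symbols('x')
V = sp.Function('V')(x)
u = sp.Function('u')(x)

# [V, d_x^2] u  ==  -(d_x^2 V) u - 2 (d_x V) d_x u
lhs = V * sp.diff(u, x, 2) - sp.diff(V * u, x, 2)
rhs = -sp.diff(V, x, 2) * u - 2 * sp.diff(V, x) * sp.diff(u, x)
assert sp.simplify(lhs - rhs) == 0
```

The same pattern (apply both sides to an undetermined smooth `u` and simplify) verifies any instance of (2.5).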

In a similar vein to (?), we proceed, in the pursuit of stability, to replace odd powers of $\partial_x$ by even ones. The identities,

$$\begin{aligned}
y\partial_x&=-\frac12\left[\int_0^x y(\xi)\,\mathrm{d}\xi\right]\partial_x^2-\frac12(\partial_xy)+\frac12\partial_x^2\left[\int_0^x y(\xi)\,\mathrm{d}\xi\ \cdot\,\right],\\
y\partial_x^3&=-(\partial_xy)\partial_x^2-\frac14\left[\int_0^x y(\xi)\,\mathrm{d}\xi\right]\partial_x^4+\frac14(\partial_x^3y)-\frac12\partial_x^2\bigl[(\partial_xy)\,\cdot\,\bigr]+\frac14\partial_x^4\left[\int_0^x y(\xi)\,\mathrm{d}\xi\ \cdot\,\right],\\
y\partial_x^5&=\frac43(\partial_x^3y)\partial_x^2-\frac53(\partial_xy)\partial_x^4-\frac16\left[\int_0^x y(\xi)\,\mathrm{d}\xi\right]\partial_x^6-\frac12(\partial_x^5y)+\frac76\partial_x^2\bigl[(\partial_x^3y)\,\cdot\,\bigr]\\
&\quad-\frac56\partial_x^4\bigl[(\partial_xy)\,\cdot\,\bigr]+\frac16\partial_x^6\left[\int_0^x y(\xi)\,\mathrm{d}\xi\ \cdot\,\right],
\end{aligned}$$

where $y$ is a sufficiently smooth function, suffice for our presentation. The general form for expressing $y\partial_x^{2k+1}$ as a linear combination of even derivatives is reported in (?).

In the Zassenhaus splitting for time-independent potentials (?), the commutators arise solely from the symmetric Baker–Campbell–Hausdorff formula, where each commutator has an odd number of letters. In the case of the Schrödinger equation, where our operators $\partial_x^2$ and $V$ are each multiplied by $\mathrm{i}$, this translates into an odd power of $\mathrm{i}$ for each commutator.

The Magnus expansion, however, does not possess such a desirable structure: it features commutators with odd as well as even numbers of letters. As a consequence, odd and even powers of $\mathrm{i}$ accompany our terms, and it is not enough to blindly replace all odd powers of $\partial_x$. Instead, we replace all odd powers of $\partial_x$ that are accompanied by an odd power of $\mathrm{i}$ and all even powers of $\partial_x$ that are accompanied by an even power of $\mathrm{i}$. A general formula for the replacement of even derivatives by odd derivatives can be proven along lines similar to (?). For all practical purposes, however, we only require the identities

$$\begin{aligned}
y&=-\left[\int_0^x y(\xi)\,\mathrm{d}\xi\right]\partial_x+\partial_x\left[\int_0^x y(\xi)\,\mathrm{d}\xi\ \cdot\,\right],\\
y\partial_x^2&=-\frac13\left[\int_0^x y(\xi)\,\mathrm{d}\xi\right]\partial_x^3-\frac23(\partial_xy)\partial_x-\frac13\partial_x\bigl[(\partial_xy)\,\cdot\,\bigr]+\frac13\partial_x^3\left[\int_0^x y(\xi)\,\mathrm{d}\xi\ \cdot\,\right],\\
y\partial_x^4&=-\frac15\left[\int_0^x y(\xi)\,\mathrm{d}\xi\right]\partial_x^5-\frac43(\partial_xy)\partial_x^3+\frac{8}{15}(\partial_x^3y)\partial_x+\frac{7}{15}\partial_x\bigl[(\partial_x^3y)\,\cdot\,\bigr]\\
&\quad-\frac23\partial_x^3\bigl[(\partial_xy)\,\cdot\,\bigr]+\frac15\partial_x^5\left[\int_0^x y(\xi)\,\mathrm{d}\xi\ \cdot\,\right],
\end{aligned}$$

which can be easily verified directly.
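Indeed, the first two replacement identities can be verified in a few lines of sympy by applying both sides to an undetermined smooth test function `u` (our own check; `F` denotes the antiderivative $\int_0^x y$):

```python
import sympy as sp

x, xi = sp.symbols('x xi')
y = sp.Function('y')
u = sp.Function('u')(x)
F = sp.integrate(y(xi), (xi, 0, x))   # F(x) = int_0^x y(xi) d xi, so F' = y

# y u  ==  -F u' + (F u)'                                  (first identity)
lhs1 = y(x) * u
rhs1 = -F * sp.diff(u, x) + sp.diff(F * u, x)

# y u'' == -(1/3)F u''' - (2/3)y'u' - (1/3)(y'u)' + (1/3)(F u)'''
lhs2 = y(x) * sp.diff(u, x, 2)
rhs2 = (-sp.Rational(1, 3) * F * sp.diff(u, x, 3)
        - sp.Rational(2, 3) * sp.diff(y(x), x) * sp.diff(u, x)
        - sp.Rational(1, 3) * sp.diff(sp.diff(y(x), x) * u, x)
        + sp.Rational(1, 3) * sp.diff(F * u, x, 3))
```

Expanding `rhs1 - lhs1` and `rhs2 - lhs2` yields zero, so the differences cancel identically.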

Once appropriate odd and even differential operators are replaced, operators of the form $\frac12\{f\partial_x^k+\partial_x^k[f\,\cdot\,]\}$ start appearing ubiquitously in our analysis. Far from being unique to the Magnus expansion, they are characteristic of the free Lie algebra of $\partial_x^2$ and $V$: these algebraic forms also appear in Zassenhaus splittings for time-independent potentials (?). We introduce a convenient notation,

$$\langle f\rangle_k:=f\bullet\partial_x^k=\frac12\bigl\{f\circ\partial_x^k+\partial_x^k\circ f\bigr\}=\frac12\bigl\{f\partial_x^k+\partial_x^k[f\ \cdot\ ]\bigr\},\qquad f\in\mathrm{C}_p^\infty([-1,1];\mathbb{R}),$$

where $\bullet$ is the Jordan product on the associative algebra of operators, $\circ$ denoting operatorial composition. In this notation $\partial_x^2=\langle 1\rangle_2$ and $V=\langle V\rangle_0$.

It is worth noting that there is a rich algebraic theory behind these structures, which will feature in another publication; not much is lost here by considering them merely a notational convenience. For the purpose of this work we make observations that can be verified using the machinery of (2.5) in conjunction with the odd and even derivative replacement rules. We present identities which suffice for simplifying all commutators appearing in this work,

$$\begin{aligned}
[\langle f\rangle_4,\langle g\rangle_0]&=4\langle f(\partial_xg)\rangle_3-2\langle 3(\partial_xf)(\partial_x^2g)+f(\partial_x^3g)\rangle_1,\\
[\langle f\rangle_3,\langle g\rangle_0]&=3\langle f(\partial_xg)\rangle_2-\tfrac12\langle 3(\partial_xf)(\partial_x^2g)+f(\partial_x^3g)\rangle_0,\\
[\langle f\rangle_2,\langle g\rangle_2]&=2\langle f(\partial_xg)-(\partial_xf)g\rangle_3+\langle 2(\partial_x^2f)(\partial_xg)-2(\partial_xf)(\partial_x^2g)+(\partial_x^3f)g-f(\partial_x^3g)\rangle_1,\\
[\langle f\rangle_2,\langle g\rangle_1]&=\langle 2f(\partial_xg)-(\partial_xf)g\rangle_2-\tfrac12\langle 2(\partial_xf)(\partial_x^2g)+f(\partial_x^3g)\rangle_0,\\
[\langle f\rangle_2,\langle g\rangle_0]&=2\langle f(\partial_xg)\rangle_1,\\
[\langle f\rangle_1,\langle g\rangle_1]&=\langle f(\partial_xg)-(\partial_xf)g\rangle_1,\\
[\langle f\rangle_1,\langle g\rangle_0]&=\langle f(\partial_xg)\rangle_0.
\end{aligned} \tag{2.6}$$
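Any of the identities in (2.6) can be verified mechanically by applying both sides to an undetermined test function. A sympy sketch of our own, checking two of them:

```python
import sympy as sp

x = sp.symbols('x')
f, g, u = (sp.Function(n)(x) for n in 'fgu')

def ang(h, k, w):
    """Apply <h>_k = (1/2){h o d_x^k + d_x^k o h} to a test function w."""
    return (h * sp.diff(w, x, k) + sp.diff(h * w, x, k)) / 2

def commutes_to(k, l, target):
    """Check whether [<f>_k, <g>_l] u equals target (applied to u)."""
    lhs = ang(f, k, ang(g, l, u)) - ang(g, l, ang(f, k, u))
    return sp.simplify(sp.expand(lhs - target)) == 0

# [<f>_2, <g>_0] = 2 <f g'>_1   and   [<f>_1, <g>_1] = <f g' - f' g>_1
ok1 = commutes_to(2, 0, 2 * ang(f * sp.diff(g, x), 1, u))
ok2 = commutes_to(1, 1, ang(f * sp.diff(g, x) - sp.diff(f, x) * g, 1, u))
```

Both checks return `True`; the remaining identities can be confirmed the same way.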

The terms $\mathrm{i}\partial_x^2$ and $\mathrm{i}V$ reside in

$$\mathfrak{H}=\bigl\{\mathrm{i}^{k+1}\langle f\rangle_k\ :\ f\in\mathrm{C}_p^\infty([-1,1];\mathbb{R}),\ k\geq 0\bigr\}$$

and, as evident through a few examples in (2.6), all commutators of elements of $\mathfrak{H}$ also reside in $\mathfrak{H}$. In other words, $\mathfrak{H}$ is a Lie algebra, and it suffices to work directly in $\mathfrak{H}$ using the rules (2.6) instead of proceeding via (2.5) followed by the odd-even derivative replacement rules.

For a real-valued $f$, the operator $\langle f\rangle_k$ is symmetric if $k$ is even and skew-symmetric otherwise. This property is preserved under discretisation once we use spectral collocation on a uniform grid. In that case $\partial_x$ is discretised as a skew-symmetric matrix $K$ and $f$ is discretised as a diagonal matrix $D_f$. The term $\langle f\rangle_k$ is discretised as $\frac12(K^kD_f+D_fK^k)$, which is clearly symmetric when $k$ is even and skew-symmetric otherwise. Consequently, elements of $\mathfrak{H}$ such as $\mathrm{i}^{k+1}\langle f\rangle_k$, which are skew-Hermitian operators, discretise to skew-Hermitian matrices of the form $\frac12\mathrm{i}^{k+1}(K^kD_f+D_fK^k)$.
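This structural claim is easy to test numerically. In the sketch below, a random real skew-symmetric matrix stands in for the spectral differentiation matrix (an assumption made purely for illustration; only skew-symmetry of $K$ and realness of $D_f$ matter for the argument):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
A = rng.standard_normal((N, N))
K = A - A.T                           # real skew-symmetric: stand-in for the
                                      # spectral differentiation matrix of d_x
D = np.diag(rng.standard_normal(N))   # diagonal: multiplication by a real f

def bracket_disc(k):
    """Discretisation of i^{k+1} <f>_k, i.e. i^{k+1} (K^k D + D K^k)/2."""
    Kk = np.linalg.matrix_power(K, k)
    return (1j) ** (k + 1) * 0.5 * (Kk @ D + D @ Kk)
```

For every $k$, `bracket_disc(k)` is skew-Hermitian, so its exponential is unitary, which is exactly the source of the stability discussed below.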

This structural property of $\mathfrak{H}$ is responsible for the unitary evolution and numerical stability of our schemes, since exponentials of skew-Hermitian matrices are unitary.

###### Definition 1

The height of a term is defined as

$$\mathrm{ht}\left(\sum_{i=0}^{n}\alpha_i\langle f_i\rangle_{k_i}\right)=\max\{k_0,\ldots,k_n\}.$$

These terms benefit from a remarkable property of height reduction, which is stated here without proof:

$$\mathrm{ht}\bigl([\langle f\rangle_k,\langle g\rangle_l]\bigr)\leq k+l-1.$$

For the commutators relevant to this work, this property can be verified by a quick inspection of the identities (2.6).

For the largest part, our work will proceed in the language of the undiscretised operators introduced in this section. At the very last stage we will resort to spectral collocation on the uniform grid over $[-1,1]$ for spatial discretisation. For this purpose we will need at least $\mathcal{O}(\varepsilon^{-1})$ points, since (regardless of initial conditions) the solution of the Schrödinger equation develops spatial oscillations of order $\mathcal{O}(\varepsilon^{-1})$ (??). Consequently, $K$ scales like $\mathcal{O}(\varepsilon^{-1})$ and $\langle f\rangle_k=\mathcal{O}(\varepsilon^{-k})$. Keeping the eventual discretisation in mind, we abuse notation and write $\partial_x=\mathcal{O}(\varepsilon^{-1})$.

More formally, following (?) we assume that the solution , which is known to feature oscillations, obeys the bounds,

$$\bigl\|\partial_t^m\partial_x^k u(x,t)\bigr\|\leq C_{m,k}\,\varepsilon^{-m-k},\qquad t\in[0,T]. \tag{2.7}$$

In this context

$$\|\langle f\rangle_k u\|=\tfrac12\bigl\|\bigl(K^kD_f+D_fK^k\bigr)u\bigr\|\leq\|f\|_\infty C_k\,\varepsilon^{-k}=\mathcal{O}(\varepsilon^{-k}).$$

Although it is possible to work in a more rigorous language throughout, the shorthand $\partial_x=\mathcal{O}(\varepsilon^{-1})$ is indeed seen to be based on firm theoretical grounds while simplifying the exposition greatly. We also remind the reader that the growth of derivatives of the potential, while certainly affecting error constants in our splittings (and therefore of concern in the context of moderately small values of $\varepsilon$), is irrelevant in the asymptotic limit $\varepsilon\to 0$, since these constants do not scale with $\varepsilon$ and do not affect the asymptotic analysis carried out here.

The property of height reduction leads to a systematic decrease in the size of terms under commutation,

$$[\langle f\rangle_k,\langle g\rangle_l]=\mathcal{O}(\varepsilon^{-k-l+1}).$$

Going further, we want to analyse all terms in the common currency of the inherent semiclassical parameter $\varepsilon$ and assume that our choice of the time step is governed by $h=\mathcal{O}(\varepsilon^{\sigma})$ for some $\sigma>0$. Larger values of $\sigma$ correspond to very small time steps and are best avoided.

## 3 The solution

### 3.1 The Magnus expansion

To look for the solution of (2.4) one needs to take into account some features of the operator $\mathcal{A}(t)$. First of all, it depends on time and it cannot be assumed that its values at different points in time commute, i.e. in general $\mathcal{A}(t_1)\mathcal{A}(t_2)\neq\mathcal{A}(t_2)\mathcal{A}(t_1)$, so we give up the hope that the solution is of the simple form $\exp\bigl(\int_0^t\mathcal{A}(\xi)\,\mathrm{d}\xi\bigr)u_0$. Secondly, $\mathcal{A}(t)$ evolves in a Lie algebra, so the solution of (2.4) resides in the corresponding Lie group. Both properties can be dealt with elegantly using the famous result from (?) by writing the solution as a single exponential,

$$u(t)=\mathrm{e}^{\Theta(t)}u(0), \tag{3.8}$$

where the infinite series $\Theta(t)=\sum_{k=1}^{\infty}\Theta_k(t)$, also called the Magnus expansion, is an element of the underlying Lie algebra. Its convergence has been shown in (?), (?), (?) for sufficiently small time steps. Obviously, we truncate this series and advance with an adequately small time step $h$,

$$u(t+h)=\mathrm{e}^{\Theta(t+h,t)}u(t), \tag{3.9}$$

starting from the initial step,

$$u(h)=\mathrm{e}^{\Theta(h,0)}u(0), \tag{3.10}$$

where we understand that the operator $\mathrm{e}^{\Theta(t+h,t)}$ is a flow evolving the solution from time $t$ to $t+h$. Let us observe now that the aim of the paper consists in the derivation of an asymptotic exponential splitting for exponentials of this type. This means that the algorithm we are going to present will advance in small time steps (exactly like the method (1.3) does). For clarity of exposition, however, we will focus on the first step of the Magnus expansion, i.e. (3.10), noting that (3.9), when required for any time window $[t,t+h]$, is easily recovered from (3.10) by a straightforward translation of the vector field in time. For convenience we shorten the notation, writing $\Theta(h)$ instead of $\Theta(h,0)$.

Simple differentiation of the ansatz in (3.10), together with elementary algebra (see (?) or (?) for details), leads to the conclusion that the exponent $\Theta$ satisfies the dexpinv equation,

$$\Theta'(t)=\sum_{k=0}^{\infty}\frac{B_k}{k!}\,\mathrm{ad}_{\Theta(t)}^{k}\bigl(\mathcal{A}(t)\bigr),\qquad \Theta(0)=0, \tag{3.11}$$

where $B_k$ are the Bernoulli numbers and the adjoint representation is defined recursively by $\mathrm{ad}_X^0(Y)=Y$ and $\mathrm{ad}_X^{k+1}(Y)=[X,\mathrm{ad}_X^k(Y)]$. The solution of (3.11) is an infinite series and can be obtained using Picard iterations. It was proposed in (?) and widely analysed in (???).

The first few terms of the Magnus expansion ordered by size in are

$$\begin{aligned}
\Theta(h)&=\int_0^h\mathcal{A}(\xi_1)\,\mathrm{d}\xi_1-\frac12\int_0^h\!\!\int_0^{\xi_1}[\mathcal{A}(\xi_2),\mathcal{A}(\xi_1)]\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1+\frac1{12}\int_0^h\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_1}[\mathcal{A}(\xi_2),[\mathcal{A}(\xi_3),\mathcal{A}(\xi_1)]]\,\mathrm{d}\xi_3\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1\\
&\quad+\frac14\int_0^h\!\!\int_0^{\xi_1}\!\!\int_0^{\xi_2}[[\mathcal{A}(\xi_3),\mathcal{A}(\xi_2)],\mathcal{A}(\xi_1)]\,\mathrm{d}\xi_3\,\mathrm{d}\xi_2\,\mathrm{d}\xi_1+\cdots.
\end{aligned} \tag{3.12}$$

We say that a multivariate integral of a nested commutator is of grade $p$ if it is of size $\mathcal{O}(h^p)$ for every sufficiently smooth $\mathcal{A}$. Truncating the Magnus expansion at grade $p$, which yields $\Omega_p$, preserves time symmetry (?), (?). Time symmetry means that not only the exact flow $\varphi$ but also the numerical flow $\Phi$ satisfies

$$\varphi(h,0)\circ\varphi(0,h)=I,\qquad \Phi_{h,0}\circ\Phi_{0,h}=I. \tag{3.13}$$

As one can observe, the time symmetry of the numerical flow is equivalent to the fact that

$$\Omega_p(0,h)=-\Omega_p(h,0). \tag{3.14}$$

Time symmetry is a desirable feature because truncation at a power $h^p$ with odd $p$ leads to a gain of an extra unit of order, see (?). This means that if we aim for a numerical method of order six, it suffices to consider the truncation of the Magnus expansion to the terms listed in (3.12).
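The property (3.14), and the resulting identity $\Phi_{0,h}\circ\Phi_{h,0}=I$, can be seen numerically already for the grade-one truncation $\Omega_1(b,a)=\int_a^b\mathcal{A}(\xi)\,\mathrm{d}\xi$. A small self-contained demonstration with an assumed toy skew-Hermitian $\mathcal{A}(t)$ (names and quadrature are our own):

```python
import numpy as np
from scipy.linalg import expm

S = np.array([[0.0, 1 + 1j], [-1 + 1j, 0.0]])   # S^H = -S: skew-Hermitian
def A(t):
    return np.cos(t) * S

def omega1(b, a, n=200):
    """Grade-1 Magnus truncation Omega_1(b, a) = int_a^b A(xi) d xi (midpoint rule)."""
    xi = a + (b - a) * (np.arange(n) + 0.5) / n
    return (b - a) / n * sum(A(t) for t in xi)

h = 0.3
forward = expm(omega1(h, 0.0))    # numerical flow from 0 to h
backward = expm(omega1(0.0, h))   # numerical flow from h back to 0
```

Composing `backward @ forward` recovers the identity to machine precision, since `omega1(0, h)` is exactly the negative of `omega1(h, 0)`.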

### 3.2 Magnus expansion in practice

It turns out that the multivariate integrals can be efficiently approximated using simple univariate quadrature rules of ?. We will follow their approach and evaluate the potential at the Gauss–Legendre quadrature points $t_1=\bigl(\tfrac12-\tfrac{\sqrt{15}}{10}\bigr)h$, $t_2=\tfrac12 h$, $t_3=\bigl(\tfrac12+\tfrac{\sqrt{15}}{10}\bigr)h$, which is then transformed (?) to obtain a far less costly quadrature. As a result, to obtain an order-six approximation, all the effort of approximating the solution boils down to the following formula,

$$\Theta(h)=B_1+\frac1{12}B_3-\frac1{12}[B_1,B_2]+\frac1{240}[B_2,B_3]+\frac1{360}[B_1,[B_1,B_3]]-\frac1{240}[B_2,[B_1,B_2]]+\frac1{720}[B_1,[B_1,[B_1,B_2]]]+\mathcal{O}(h^7), \tag{3.15}$$

where

$$B_1=h\mathcal{A}(t_2),\qquad B_2=\frac{\sqrt{15}}{3}h\bigl(\mathcal{A}(t_3)-\mathcal{A}(t_1)\bigr),\qquad B_3=\frac{10}{3}h\bigl(\mathcal{A}(t_3)-2\mathcal{A}(t_2)+\mathcal{A}(t_1)\bigr). \tag{3.16}$$

See (?) and (?) for comprehensive information and ways to approximate the Magnus expansion using different quadrature rules and to higher orders. The former could be relevant if the time-dependent potential is only known at certain grid-points as might be the case in some control setups.
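For matrix ODEs, the quadrature (3.15)-(3.16) is a complete, directly implementable method in its own right, and its order is easy to observe experimentally. The following is an illustrative sketch, not the authors' implementation; `A` is any matrix-valued function of time:

```python
import numpy as np
from scipy.linalg import expm

def magnus6_step(A, u, t, h):
    """One step of the sixth-order Magnus method (3.15)-(3.16) for u' = A(t) u."""
    c = np.sqrt(15) / 10
    A1, A2, A3 = A(t + (0.5 - c) * h), A(t + 0.5 * h), A(t + (0.5 + c) * h)
    B1 = h * A2
    B2 = np.sqrt(15) / 3 * h * (A3 - A1)
    B3 = 10.0 / 3 * h * (A3 - 2.0 * A2 + A1)
    com = lambda X, Y: X @ Y - Y @ X
    Theta = (B1 + B3 / 12 - com(B1, B2) / 12 + com(B2, B3) / 240
             + com(B1, com(B1, B3)) / 360 - com(B2, com(B1, B2)) / 240
             + com(B1, com(B1, com(B1, B2))) / 720)
    return expm(Theta) @ u
```

Since `Theta` is a real linear combination of commutators of skew-Hermitian matrices, each step is unitary; halving the step size should reduce the error by roughly $2^6$.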

Substituting the given Hamiltonian as $\mathcal{A}(t)=\mathrm{i}\varepsilon\partial_x^2-\mathrm{i}\varepsilon^{-1}V(\cdot,t)$ and working in the Lie algebra $\mathfrak{H}$, we can derive a commutator-free expansion using the identities (2.6). Keeping the notation of the previous section in mind, we approximate the time derivatives of the potential by central differences, cf. (3.16),

$$V_0=V(t_2),\qquad V_1=\frac{\sqrt{15}}{3h}\bigl(V(t_3)-V(t_1)\bigr),\qquad V_2=\frac{10}{3h^2}\bigl(V(t_3)-2V(t_2)+V(t_1)\bigr),$$

so that

$$B_1=\mathrm{i}h\varepsilon\partial_x^2-\mathrm{i}h\varepsilon^{-1}V_0,\qquad B_2=-\mathrm{i}h^2\varepsilon^{-1}V_1,\qquad B_3=-\mathrm{i}h^3\varepsilon^{-1}V_2.$$
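The central-difference interpretation of $V_1$ and $V_2$ can be confirmed symbolically: as $h\to 0$, $V_1$ tends to $\partial_t V$ at the midpoint, while $V_2$, with the normalisation used here, tends to $\tfrac12\partial_t^2 V$. A sympy check of our own, using a concrete smooth test potential:

```python
import sympy as sp

tau, h = sp.symbols('tau h', positive=True)
V = sp.cos                     # any smooth test potential in time
c = sp.sqrt(15) / 10 * h       # offset of the outer Gauss-Legendre points

V1 = sp.sqrt(15) / (3 * h) * (V(tau + c) - V(tau - c))
V2 = sp.Rational(10, 3) / h**2 * (V(tau + c) - 2 * V(tau) + V(tau - c))

lim1 = sp.limit(V1, h, 0)      # expect  d/dt V  at the midpoint
lim2 = sp.limit(V2, h, 0)      # expect  (1/2) d^2/dt^2 V  at the midpoint
```

This matches the role of $B_3$ in (3.15): $B_3=-\mathrm{i}h^3\varepsilon^{-1}V_2$ carries the second time derivative of the potential.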

Once these are substituted in (3.15), we use the identities (2.6), along with the observation that $\partial_x^2=\langle 1\rangle_2$ and $V=\langle V\rangle_0$, to arrive at a Magnus expansion expressed as a linear combination of terms $\mathrm{i}^{k+1}\langle f\rangle_k$ with real-valued functions $f$.

The grade one commutators of the self-adjoint basis appearing in (3.15), for instance, can be simplified as follows,

$$\begin{aligned}
[B_1,B_2]&=\bigl[\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0,\ -\mathrm{i}h^2\varepsilon^{-1}\langle V_1\rangle_0\bigr] && (3.17)\\
&=h^3[\langle 1\rangle_2,\langle V_1\rangle_0]=2h^3\langle\partial_xV_1\rangle_1,\\
[B_1,B_3]&=\bigl[\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0,\ -\mathrm{i}h^3\varepsilon^{-1}\langle V_2\rangle_0\bigr] && (3.18)\\
&=h^4[\langle 1\rangle_2,\langle V_2\rangle_0]=2h^4\langle\partial_xV_2\rangle_1,\\
[B_2,B_3]&=\bigl[-\mathrm{i}h^2\varepsilon^{-1}\langle V_1\rangle_0,\ -\mathrm{i}h^3\varepsilon^{-1}\langle V_2\rangle_0\bigr] && (3.19)\\
&=0.
\end{aligned}$$

Consequently, the grade two commutators appearing in (3.15) are,

$$\begin{aligned}
[B_1,[B_1,B_3]]&=\bigl[\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0,\ 2h^4\langle\partial_xV_2\rangle_1\bigr] && (3.20)\\
&=2\mathrm{i}h^5\varepsilon[\langle 1\rangle_2,\langle\partial_xV_2\rangle_1]-2\mathrm{i}h^5\varepsilon^{-1}[\langle V_0\rangle_0,\langle\partial_xV_2\rangle_1]\\
&=2\mathrm{i}h^5\varepsilon\bigl(2\langle\partial_x^2V_2\rangle_2-\tfrac12\langle\partial_x^4V_2\rangle_0\bigr)+2\mathrm{i}h^5\varepsilon^{-1}\langle(\partial_xV_2)(\partial_xV_0)\rangle_0\\
&=4\mathrm{i}h^5\varepsilon\langle\partial_x^2V_2\rangle_2-\mathrm{i}h^5\varepsilon\langle\partial_x^4V_2\rangle_0+2\mathrm{i}h^5\varepsilon^{-1}\langle(\partial_xV_2)(\partial_xV_0)\rangle_0,\\[1ex]
[B_2,[B_1,B_2]]&=\bigl[-\mathrm{i}h^2\varepsilon^{-1}\langle V_1\rangle_0,\ 2h^3\langle\partial_xV_1\rangle_1\bigr] && (3.21)\\
&=2\mathrm{i}h^5\varepsilon^{-1}[\langle\partial_xV_1\rangle_1,\langle V_1\rangle_0]\\
&=2\mathrm{i}h^5\varepsilon^{-1}\langle(\partial_xV_1)^2\rangle_0,\\[1ex]
[B_1,[B_1,B_2]]&=\bigl[\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0,\ 2h^3\langle\partial_xV_1\rangle_1\bigr] && (3.22)\\
&=2\mathrm{i}h^4\varepsilon[\langle 1\rangle_2,\langle\partial_xV_1\rangle_1]-2\mathrm{i}h^4\varepsilon^{-1}[\langle V_0\rangle_0,\langle\partial_xV_1\rangle_1]\\
&=2\mathrm{i}h^4\varepsilon\bigl(2\langle\partial_x^2V_1\rangle_2-\tfrac12\langle\partial_x^4V_1\rangle_0\bigr)+2\mathrm{i}h^4\varepsilon^{-1}\langle(\partial_xV_1)(\partial_xV_0)\rangle_0\\
&=4\mathrm{i}h^4\varepsilon\langle\partial_x^2V_1\rangle_2-\mathrm{i}h^4\varepsilon\langle\partial_x^4V_1\rangle_0+2\mathrm{i}h^4\varepsilon^{-1}\langle(\partial_xV_1)(\partial_xV_0)\rangle_0.
\end{aligned}$$

The only grade three commutator that we need is

$$\begin{aligned}
[B_1,[B_1,[B_1,B_2]]]&=\bigl[\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0,\ 4\mathrm{i}h^4\varepsilon\langle\partial_x^2V_1\rangle_2-\mathrm{i}h^4\varepsilon\langle\partial_x^4V_1\rangle_0+2\mathrm{i}h^4\varepsilon^{-1}\langle(\partial_xV_1)(\partial_xV_0)\rangle_0\bigr] && (3.23)\\
&=-4h^5\varepsilon^2\bigl(2\langle\partial_x^3V_1\rangle_3-\langle\partial_x^5V_1\rangle_1\bigr)+2h^5\varepsilon^2\langle\partial_x^5V_1\rangle_1\\
&\quad-4h^5\langle(\partial_x^2V_1)(\partial_xV_0)+(\partial_xV_1)(\partial_x^2V_0)\rangle_1-8h^5\langle(\partial_x^2V_1)(\partial_xV_0)\rangle_1\\
&=-8h^5\varepsilon^2\langle\partial_x^3V_1\rangle_3+6h^5\varepsilon^2\langle\partial_x^5V_1\rangle_1-h^5\langle 12(\partial_x^2V_1)(\partial_xV_0)+4(\partial_xV_1)(\partial_x^2V_0)\rangle_1.
\end{aligned}$$

Substituting (3.17)–(3.23) in (3.15) gives us a truncated Magnus expansion for the Schrödinger equation (1.1) in the Lie algebra $\mathfrak{H}$,

$$\begin{aligned}
\Omega_5&=\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0-\frac1{12}\mathrm{i}h^3\varepsilon^{-1}\langle V_2\rangle_0-\frac16h^3\langle\partial_xV_1\rangle_1-\frac1{120}\mathrm{i}h^5\varepsilon^{-1}\langle(\partial_xV_1)^2\rangle_0\\
&\quad+\frac1{360}\bigl(4\mathrm{i}h^5\varepsilon\langle\partial_x^2V_2\rangle_2-\mathrm{i}h^5\varepsilon\langle\partial_x^4V_2\rangle_0+2\mathrm{i}h^5\varepsilon^{-1}\langle(\partial_xV_2)(\partial_xV_0)\rangle_0\bigr)\\
&\quad+\frac1{720}\bigl(-8h^5\varepsilon^2\langle\partial_x^3V_1\rangle_3+6h^5\varepsilon^2\langle\partial_x^5V_1\rangle_1-h^5\langle 12(\partial_x^2V_1)(\partial_xV_0)+4(\partial_xV_1)(\partial_x^2V_0)\rangle_1\bigr)\\
&=\underbrace{\mathrm{i}h\varepsilon\langle 1\rangle_2-\mathrm{i}h\varepsilon^{-1}\langle V_0\rangle_0}_{\mathcal{O}(\varepsilon^{\sigma-1})}\underbrace{{}-\frac1{12}\mathrm{i}h^3\varepsilon^{-1}\langle V_2\rangle_0-\frac16h^3\langle\partial_xV_1\rangle_1}_{\mathcal{O}(\varepsilon^{3\sigma-1})}+\underbrace{\frac1{360}\mathrm{i}h^5\varepsilon^{-1}\bigl\langle 2(\partial_xV_2)(\partial_xV_0)-3(\partial_xV_1)^2\bigr\rangle_0}_{\mathcal{O}(\varepsilon^{5\sigma-1})}\\
&\quad\underbrace{{}-\frac1{180}h^5\bigl\langle(\partial_xV_1)(\partial_x^2V_0)+3(\partial_xV_0)(\partial_x^2V_1)\bigr\rangle_1}_{\mathcal{O}(\varepsilon^{5\sigma-1})}+\underbrace{\frac1{360}\mathrm{i}h^5\varepsilon\bigl(4\langle\partial_x^2V_2\rangle_2-\langle\partial_x^4V_2\rangle_0\bigr)+\frac1{720}h^5\varepsilon^2\bigl(6\langle\partial_x^5V_1\rangle_1-8\langle\partial_x^3V_1\rangle_3\bigr)}_{\mathcal{O}(\varepsilon^{5\sigma-1})}\\
&=\Theta+\mathcal{O}(\varepsilon^{7\sigma-1}).
\end{aligned}$$

For sufficiently large $\sigma$, the last two terms in $\Omega_5$, which are $\mathcal{O}(\varepsilon^{5\sigma-1})$, become negligible at the accuracy sought and can be discarded. After discarding these terms, the Magnus expansion reduces to

 Ω5 = O(εσ−1)ihε∂2x−ihε−1V0−O(ε3σ−1)112ih3ε−1V2−16h3⟨∂xV1⟩1 +O(ε5σ−1)1360ih5ε−1(2(∂xV2)(∂xV0)−3(∂xV1)2)

We note that, due to the property of height reduction discussed in Section 2, a grade-$p$ commutator in the Magnus expansion should be $\mathcal{O}(\varepsilon^{p\sigma-1})$. This can indeed be verified in the above expansion. Asymptotically speaking, in terms of $\varepsilon$, the terms in the expansion decrease in size with increasing grade for any $\sigma>0$, so that convergence of the Magnus expansion also occurs for much larger time steps such as