
Beyond the Quantum Adiabatic Approximation: Adiabatic Perturbation Theory

Abstract

We introduce a perturbative approach to solving the time dependent Schrödinger equation, named adiabatic perturbation theory (APT), whose zeroth order term is the quantum adiabatic approximation. The small parameter in the power series expansion of the time-dependent wave function is the inverse of the time it takes to drive the system’s Hamiltonian from the initial to its final form. We review other standard perturbative and non-perturbative ways of going beyond the adiabatic approximation, extending and finding exact relations among them, and also compare the efficiency of those methods against the APT. Most importantly, we determine APT corrections to the Berry phase by use of the Aharonov-Anandan geometric phase. We then solve several time dependent problems allowing us to illustrate that the APT is the only perturbative method that gives the right corrections to the adiabatic approximation. Finally, we propose an experiment to measure the APT corrections to the Berry phase and show, for a particular spin-1/2 problem, that to first order in APT the geometric phase should be two and a half times the (adiabatic) Berry phase.

Perturbation theory; Phases: geometric; dynamic or topological
pacs:
31.15.xp, 03.65.Vf

I Introduction

Aside from interpretation, Quantum Mechanics (QM) is undoubtedly one of the most successful and useful theories of modern Physics. Its practical importance is evidenced at microscopic and nanoscopic scales, where Schrödinger’s Equation (SE) dictates the evolution of the system’s state, i.e., its wave function, from which all the properties of the system can be calculated and confronted with experimental data. However, the SE can be solved exactly only for a few problems. Indeed, there are many reasons that make the solution of such a differential equation a difficult task, such as the large number of degrees of freedom associated with the system one wants to study. Another reason, the one we want to address in this paper, is related to an important property of the system’s Hamiltonian: its time dependence.

For time independent Hamiltonians the solution to the SE can be cast as an eigenvalue/eigenvector problem. This allows us to solve the SE exactly in many cases, in particular when we deal with systems described by finite dimensional Hilbert spaces. For time dependent Hamiltonians, on the other hand, things are more mathematically involved. Even for a two-level system (a qubit) we do not, in general, obtain a closed-form solution given an arbitrary time dependent Hamiltonian, although a general statement can be made for slowly varying Hamiltonians. If a system’s Hamiltonian changes slowly during the course of time, say from $\mathbf{H}(0)$ to $\mathbf{H}(T)$, and the system is prepared in an eigenstate of $\mathbf{H}(0)$ at $t=0$, it will remain in the corresponding instantaneous (snapshot) eigenstate of $\mathbf{H}(t)$ during the interval $0 \leq t \leq T$. This is the content of the well-known adiabatic theorem (1).

But what happens if $\mathbf{H}(t)$ is not varied slowly enough? For how long can we still consider the system to be in a snapshot eigenstate of $\mathbf{H}(t)$, i.e., for how long is the adiabatic approximation reliable? What are the corrections to the adiabatic approximation? One of our goals in this manuscript is to provide practical and useful answers to these questions. We introduce a perturbative expansion about the adiabatic approximation, named adiabatic perturbation theory (APT), using the quantity $1/T$, the inverse of the total evolution time, as our small parameter. This power series expansion in $1/T$ is subsequently used to calculate corrections to the adiabatic approximation for several time dependent two-level systems. It is worth noting that answers to the previous questions can also be seen, under certain provisos, as a way of solving perturbatively any time dependent problem. We should stress that the APT is not related to the time-ordered Dyson series method, since the latter is not a perturbative expansion about the adiabatic approximation in terms of the small parameter $1/T$. Rather, it is an iterative way of getting the unitary operator governing the evolution of a system in terms of a small perturbative potential in the Hamiltonian.

Another goal is to present an exhaustive comparison of all the approximation methods developed so far for solving the SE. In particular, we show the exact equivalence between Garrison’s multi-variable expansion method (2) (which solves an extended set of partial differential equations) and the APT. However, it is important to stress that the APT, being an algebraic method, is straightforward to use, while Garrison’s approach is very hard to extend beyond first order. We also provide an extension to Berry’s iterative method (3) where, contrary to the original approach, we keep all terms of the new Hamiltonian obtained after each iteration. We then discuss the possibility of choosing other types of iteration (unitary transformations) to potentially do better than Berry’s prescription.

Furthermore, it is known that if the conditions of the adiabatic theorem are satisfied and $\mathbf{H}(T)=\mathbf{H}(0)$, it follows that the state describing the system at $t=T$ is given by the initial state multiplied by a phase that can be split into dynamical and geometrical parts (4). This raises another question we address here, not independent from the ones above: what are the corrections to the Berry phase (4) as the system deviates from the adiabatic approximation? To provide an answer we make use of the Aharonov-Anandan (AA) geometric phase (5), which is a natural extension of the Berry phase having a geometric meaning whenever the initial state returns to itself, even for a non-adiabatic evolution. We thus compute the AA phase for the corrections to the adiabatic approximation which, therefore, possess the geometrical and gauge invariance properties of any AA phase. We then show, for a particular spin-1/2 example, that whenever $\mathbf{H}(T)=\mathbf{H}(0)$ and the evolving state corrected up to first order returns to itself (up to a phase) at $t=T$, we obtain a geometric phase that is two and a half times Berry’s phase value.

In order to provide a clear and complete analysis of the questions raised above we structure our paper as follows. (See Fig. 1 for a structural flowchart of the paper.)

Figure 1: Different approximation methods for solving the time-dependent Schrödinger equation. APT: Adiabatic perturbation theory (Garrison, Ponce, this paper); IRBM: Iterative rotating-basis method (Kato, Garrido, Nenciu, Berry); TDPT: Time-dependent perturbation theory (Dirac); SA: Sudden approximation (Messiah); AA: Adiabatic approximation (Born and Fock).

In Sec. II we review the adiabatic approximation, highlighting the conditions that the snapshot eigenvectors and eigenvalues of $\mathbf{H}(s)$ must satisfy for this approximation to be valid. In Sec. III we review many strategies that may be employed to find corrections to the adiabatic approximation as well as to the Berry phase. As shown later, those methods are unsatisfactory since either they do not furnish all the terms that correct the geometrical phase and the adiabatic approximation or they cannot be seen as a perturbation in terms of the small parameter $1/T$. In Sec. IV we present our perturbation method, i.e., the APT, in its full generality and provide explicit corrections to the adiabatic approximation up to second order. In Sec. V we deal with corrections to the geometric phase using the previous method, presenting its first order correction. In Sec. VI we compare all other methods with the APT, emphasizing the main differences among them. In Sec. VII we review the exact and analytical solution of a time dependent problem and expand it in terms of the small parameter $1/T$. Then we show that our perturbative method is the only one that gives all the terms obtained from the expansion of the exact solution. We also propose an experiment where APT corrections to the Berry phase can be measured. In Sec. VIII we solve numerically three other time dependent problems and compare them with our perturbative method. Finally, in Sec. IX we provide our concluding remarks.

II The adiabatic approximation

Let us start by rewriting the time dependent SE in terms of the rescaled time $s = t/T$, where $T$ is the relevant time scale of our Hamiltonian $\mathbf{H}(t)$. We then formally solve the SE, emphasizing the assumptions imposed on the spectrum of $\mathbf{H}(s)$, and show the conditions the instantaneous (snapshot) eigenvectors of $\mathbf{H}(s)$ must satisfy for the adiabatic approximation to be valid.

The time dependent SE is written as

(1)

where $|\Psi(t)\rangle$ is the state describing our system at time $t$. Since we want to work with the rescaled time $s = t/T$, for which $d/dt = (1/T)\,d/ds$, Eq. (1) becomes

(2)
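For reference, the rescaling step just described can be sketched as follows (a sketch assuming the standard conventions $s = t/T$ and the state notation $|\Psi\rangle$):

```latex
s = \frac{t}{T}, \qquad \frac{d}{dt} = \frac{1}{T}\,\frac{d}{ds},
\qquad
i\hbar\,\frac{\partial}{\partial t}\,|\Psi(t)\rangle = \mathbf{H}(t)\,|\Psi(t)\rangle
\;\longrightarrow\;
\frac{i\hbar}{T}\,\frac{\partial}{\partial s}\,|\Psi(s)\rangle = \mathbf{H}(s)\,|\Psi(s)\rangle .
```

In this form the large factor $T$ multiplies only the dynamical phases, which is what makes $1/T$ a natural expansion parameter.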

Building on the knowledge that the adiabatic phase can be split into a geometrical and a dynamical part (4), we may write down the solution as

(3)

in which the $b_n(s)$ are time dependent coefficients to be determined later on. The sum over $n$ includes all snapshot eigenvectors of $\mathbf{H}(s)$,

(4)

with eigenvalue $E_n(s)$ ($n=0$ represents its ground state (GS)). The Berry phase associated with the eigenvector $|n(s)\rangle$ is

(5)

while

(6)

defines its dynamical phase. Let us start by assuming that $\mathbf{H}(s)$ has a non-degenerate spectrum during the whole evolution. Note that the initial ($s=0$) conditions on the state are encoded in the coefficients $b_n(0)$. Therefore, if the initial state is the GS $|0(0)\rangle$ we will have $b_n(0) = \delta_{n0}$, where $\delta_{n0}$ is the Kronecker delta. In this case, as we will see below, the spectrum needs to satisfy only the less restrictive condition $E_n(s) \neq E_0(s)$, $\forall\, n \neq 0$, for our perturbation method to work. In other words, our method will work whenever one starts the evolution at the GS and there is no level crossing between the GS and any other state (even though the excited state part of the spectrum may display level crossings). Similar types of conditions can be shown to apply to states living in subspaces spectrally separated from the rest.

Replacing Eq. (3) into (2), using Eq. (4), and left multiplying by a snapshot eigenvector leads to

(7)

where the dot means differentiation with respect to $s$ and the indices were exchanged. Here we have also defined

(8)

So far no approximation has been invoked and in principle the time dependence of the coefficients can be found by solving the system of coupled differential equations given in (7). General numerical methods to solve such equations will face the computational difficulty of integrating highly oscillatory terms, whose frequencies grow with $T$, making the approach numerically unstable. Later on we show that our perturbative method gets rid of this problem.
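The numerical difficulty just mentioned can be illustrated with a toy integral. The sketch below (our own illustration, not from the paper) integrates the stand-in oscillatory factor $e^{iTs}$ with a fixed-step trapezoid rule and compares it with the exact answer; at fixed resolution the error grows sharply once the oscillation outruns the grid:

```python
import cmath

def trapezoid_osc(T, N):
    """Fixed-step trapezoid estimate of the toy integral int_0^1 e^{iTs} ds."""
    h = 1.0 / N
    total = 0.5 * (1.0 + cmath.exp(1j * T))  # endpoint terms
    for k in range(1, N):
        total += cmath.exp(1j * T * k * h)
    return total * h

def exact(T):
    """Closed form of the same integral: (e^{iT} - 1)/(iT)."""
    return (cmath.exp(1j * T) - 1.0) / (1j * T)

# Same 100-point grid, slow vs fast phase accumulation
err_slow = abs(trapezoid_osc(10.0, 100) - exact(10.0))
err_fast = abs(trapezoid_osc(1000.0, 100) - exact(1000.0))
```

For $T = 1000$ the integrand oscillates far faster than the 100-point grid can resolve, so the quadrature error dwarfs the slow case, mirroring the instability described above.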

The adiabatic approximation consists in neglecting the coupling terms in (7), i.e., setting the off-diagonal terms to zero,

(9)

Replacing Eq. (9) into (3) we obtain,

(10)

where the superscript zero indicates that the adiabatic approximation will be the zeroth order term in the perturbative method developed later. In the case the system starts at the GS,

(11)

For the sake of completeness, let us analyze some general properties of the overlaps between snapshot eigenvectors and their derivatives. Since the eigenvectors of $\mathbf{H}(s)$ are orthonormal we have $\langle n(s)|m(s)\rangle = \delta_{nm}$. Taking the derivative with respect to $s$ we get $\langle \dot{n}(s)|n(s)\rangle + \langle n(s)|\dot{n}(s)\rangle = 0$, which implies that $\langle n(s)|\dot{n}(s)\rangle$ is a purely imaginary number, as it should be since the Berry phase is real. When $n \neq m$, by taking the derivative of Eq. (4) with respect to $s$ and left multiplying by $\langle m(s)|$ one gets

(12)

where the difference of snapshot energies appears in the denominator. This last expression indicates that the adiabaticity condition is related to the existence of a gap. Discussions on the validity of the adiabatic approximation can be found in Refs. (6); (7); (8); (9); (10).
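The purely imaginary character of $\langle n(s)|\dot{n}(s)\rangle$ can be checked numerically on a concrete snapshot eigenstate. The sketch below is our own illustration: the spin-1/2 state $(\cos\frac{\theta}{2},\, e^{i\phi}\sin\frac{\theta}{2})$ with $\phi(s) = 2\pi s$ is an assumed example, not taken from the paper. A central-difference derivative confirms the vanishing real part:

```python
import cmath
import math

def ket(s, theta=math.pi / 3):
    """Snapshot spin-1/2 eigenstate with fixed polar angle and phi(s) = 2*pi*s."""
    phi = 2 * math.pi * s
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def overlap(bra, ket_vec):
    """Inner product <bra|ket> on two-component states."""
    return sum(b.conjugate() * k for b, k in zip(bra, ket_vec))

s, h = 0.3, 1e-5
# Central-difference derivative of the state with respect to s
dket = tuple((p - m) / (2 * h) for p, m in zip(ket(s + h), ket(s - h)))
m_nn = overlap(ket(s), dket)  # analytically i * 2*pi * sin^2(theta/2) = i*pi/2
```

The result has a negligible real part and imaginary part $2\pi\sin^2(\theta/2)$, consistent with the orthonormality argument above.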

III Corrections to the adiabatic approximation

We can classify all the strategies to find corrections to the adiabatic approximation into two groups. The first one includes those methods that perform a series expansion of the wave function in terms of the small parameter $1/T$, with $T$ representing the time scale for adiabaticity. In this group we include the pioneering approach of Garrison (2) and the seminal work of Ponce et al. (11). The second group includes those methods that aim to approximate the solution to the time dependent SE without relying on a formal series expansion of the wave function (3); (12); (13); (14), but using the adiabatic approximation as their zeroth-order step. In this section we review two methods belonging to the first group and one to the second, called adiabatic iteration by Berry (3). We then comment on a possible extension of the latter.

III.1 Examples of the first group

We first show how to manipulate Eq. (7) in order to get a series expansion in terms of the small parameter $1/T$, which we call the standard (textbook) approach. We then discuss the multi-variable expansion method of Garrison (2), who also dubbed it APT.

The standard approach

One can formally integrate Eq. (7) to obtain

(13)

where

(14)

The integral inside the sum in Eq. (13) can be written as

(15)

Our goal here is to expand this integral in powers of $1/T$. This can be done by using the mathematical identity

(16)

Replacing Eq. (16) into (15) we arrive at

(17)

One can apply the identity (16) again to the integrand of the last term,

(18)

with the last symbol standing for the higher order remainder term.

One can similarly continue the iteration to obtain higher order terms, but the first two are already enough for our purposes. We should note that, strictly speaking, the procedure just described is not a genuine power series expansion in terms of the small parameter $1/T$. This is because at all orders we have a purely imaginary phase contribution depending on $T$. This term is related to the dynamical phase of our system and, together with the Berry phase, will play an important role in the APT developed in Sec. IV.
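The mechanism generating the $1/T$ expansion is a repeated integration by parts against the fast phase. A generic sketch of one such step, in our notation, with $f$ a smooth coefficient and $\omega$ a phase whose derivative does not vanish:

```latex
\int_0^s f(s')\, e^{iT\omega(s')}\, ds'
= \left[ \frac{f(s')\, e^{iT\omega(s')}}{iT\,\dot{\omega}(s')} \right]_0^{s}
- \frac{1}{iT}\int_0^s \frac{d}{ds'}\!\left( \frac{f(s')}{\dot{\omega}(s')} \right) e^{iT\omega(s')}\, ds' ,
```

each application extracting one more explicit power of $1/T$ from the integral.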

Using Eq. (18) in (13) and keeping terms up to first order in $1/T$, we obtain, after substituting the values of the coefficients defined above,

Note that we have to solve this equation iteratively, keeping terms up to first order in $1/T$. This is equivalent to substituting the zeroth order solution on the right-hand side of Eq. (19),

Finally, substituting Eq. (20) into (3) we get the (unnormalized; normalization introduces higher order corrections in $1/T$) state that corrects the adiabatic approximation up to first order via the standard approach,

(21)

where the zeroth order term is given by Eq. (10) and

If the system is at the GS at $s=0$, i.e., $b_n(0) = \delta_{n0}$, Eq. (22) reduces to

(23)

which displays no correction linear in $1/T$ to the GS component (the sum starts at $n=1$). As shown in Sec. IV, there is a missing term correcting the coefficient multiplying the GS that naturally appears in the APT. Also, the correction vanishes at $s=0$, as we would expect since we must recover the initial state at $s=0$.

Multi-variable expansion method

To obtain a time dependent multi-variable SE we consider the rapidly varying phases as independent variables (2). They are called fast variables, in contrast to the rescaled time $s$, which is the slow variable. In this language the total differential operator $d/ds$ is replaced by a sum of partial derivatives, where

and the modified SE is written as,

(24)

To solve Eq. (24) we write the wave function as follows

(25)

where the first argument collectively represents the fast variables and

(26)

Note that the wave function is written as a power series in $1/T$ and our goal is to obtain the expansion coefficients to all orders. Using Eq. (26) we can rewrite (25) as

(27)

Substituting Eq. (27) in the modified SE (Eq. (24)), carrying out the derivatives, and taking the scalar product with a snapshot eigenvector, we get

(28)

Noting that the last term of the previous equality can be written as

we can rewrite Eq. (28) in the following form

(29)

where we have exchanged the summation indices. A sufficient condition for the validity of Eq. (29) is obtained when we set

(30)

and

(31)

Hence, we can calculate the coefficients by solving the partial differential Eqs. (30) and (31). Note that to seek the solution of order $p$ we need to have the previous, $(p-1)$-th, order solution. Furthermore, as we increase the order, the partial differential equations become more cumbersome, constituting a practical limitation of this method. The APT developed in Sec. IV, on the other hand, does not rely on any differential equations whatsoever. All corrections to the adiabatic approximation of order $p$ are obtained via algebraic recursive relations that involve coefficients of order $p-1$. This will allow us to derive in a relatively straightforward manner explicit expressions up to second order in the small parameter $1/T$.

In what follows we derive explicit expressions for the zeroth and first order coefficients. To zeroth order, Eq. (30) tells us that the coefficient does not depend on the fast variables. Moreover, since at $s=0$ we have the initial condition fixed, it immediately follows that

(32)

To have the adiabatic approximation as the zeroth order term in the power series solution we must have (cf. Eq. (3) with (27))

(33)

which according to Eq. (31) leads to

(34)

But Eq. (33) together with (5) constrains this coefficient further. Thus, Eq. (34) becomes

(35)

and we now want to solve this equation.

Following Garrison (2) we write

(36)

with the assumption that (averaging over the fast variables)

(37)

In other words, we have separated the dependence of the first order coefficient into two contributions; the first depends only on $s$ and is called the average term; the second one depends on both $s$ and the fast variables, but with the additional condition that its average over the fast variables is zero. Substituting Eq. (36) into (35) we get

(38)

and solving for the fluctuating part we obtain

(39)

Note that adding to this solution any term independent of the fast variables also solves Eq. (38). However, since we imposed the zero-average condition (37), the only possible value for such a term is zero.

If the initial state is the GS one gets

(40)

and since the only dependence on the fast variables in Eq. (39) enters through the phases, we get

(41)

We are now able to determine the average term. Inserting Eq. (36) into (31) we get, for the first order,

where we have used the lower order results. Averaging over the fast variables and using Eq. (41), we obtain

(42)

We can recast the average (using Eq. (39)) as

(43)

in which we have used the zero-average condition. Equation (43) then implies that Eq. (42) can be written as

(44)

where

(45)
(46)

and whose well known general solution is

(47)
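For reference, the generic first-order linear ODE of this type and its integrating-factor solution can be sketched as follows (our notation: $y$, $p$, and $q$ stand for the average term and the coefficients of Eqs. (45) and (46)):

```latex
\dot{y}(s) + p(s)\, y(s) = q(s)
\quad\Longrightarrow\quad
y(s) = e^{-\int_0^s p(s')\, ds'} \left[ y(0) + \int_0^s e^{\int_0^{s'} p(s'')\, ds''}\, q(s')\, ds' \right] .
```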

It is interesting to note that the integrating factor is related to the Berry phase. Inserting Eqs. (45) and (46) into Eq. (47) we get

(48)

We can now write down the expression for the first order coefficient, given its fluctuating part (Eq. (39)) and its average part (Eq. (48)),

(49)

To determine the remaining initial condition we use Eq. (32), which guarantees that the adiabatic approximation is obtained at zeroth order,

(50)

Finally, expressing the state as given in Eq. (21) and using Eqs. (27), (49), and (50), we get for the first order correction to the adiabatic approximation,

in which

(52)

Note that now we are again writing explicitly the dependence of the coefficients on time. For completeness, we write down the first order correction when we start at the GS ($b_n(0) = \delta_{n0}$)

where we have relabeled the summation index in the first sum.

Comparing Eqs. (51) and (53) with Eqs. (22) and (23) we immediately see that now we have a new extra term in the first order correction, the one proportional to the average part. We would like to remark, though, that in Garrison’s original work (2) he only obtained the first line in Eq. (53), and thus our presentation constitutes an elaboration on his general idea. Going beyond first order in $1/T$ within Garrison’s approach is an extraordinary tour de force. Fortunately, we will see in Sec. IV that not only does the extra term appear in our APT but, moreover, it is quite easy to obtain higher order corrections. Indeed, we will prove the mathematical equivalence between the two methods.

III.2 Example of the second group

The iterative method proposed by Berry (3) consists of successive unitary operations that hopefully rotate the original basis or axes (the eigenvectors of the original Hamiltonian) closer and closer to the evolving state. In the most optimistic scenario a finite number of rotations would bring us to a moving frame in which the Hamiltonian, as seen from this new frame, becomes time independent (this is the case in the simple single spin problem of Ref. (21)). Then we can solve the transformed Hamiltonian using well developed time independent techniques and, by reversing the transformations, we would have the answer to the original problem.
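A concrete instance of such a rotation making the Hamiltonian time independent is the textbook spin-1/2 in a circularly rotating field. The sketch below is our own illustration with assumed parameters $\omega_0$, $\omega_1$, $\omega$ (it is not claimed to be the specific problem of Ref. (21)): transforming with $U(t) = e^{-i\omega t\sigma_z/2}$ and including the frame term $-iU^{\dagger}\dot{U}$ yields a constant Hamiltonian.

```python
import cmath

# Pauli matrices and minimal 2x2 complex-matrix helpers
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(A, B, ca, cb):
    """Linear combination ca*A + cb*B of 2x2 matrices."""
    return [[ca * A[i][j] + cb * B[i][j] for j in range(2)] for i in range(2)]

def dag(A):
    """Hermitian conjugate."""
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def H_lab(t, w0=1.0, w1=0.2, w=0.8):
    """Lab-frame Hamiltonian: static z field plus a field rotating at w (hbar = 1)."""
    xy = lin(SX, SY, cmath.cos(w * t), cmath.sin(w * t))
    return lin(SZ, xy, w0 / 2, w1 / 2)

def H_rot(t, w0=1.0, w1=0.2, w=0.8):
    """Rotating-frame Hamiltonian H' = U^dag H U - i U^dag dU/dt, U = exp(-i w t SZ/2)."""
    u = cmath.exp(-1j * w * t / 2)
    U = [[u, 0], [0, u.conjugate()]]
    # For this diagonal U, the frame term -i U^dag dU/dt equals -(w/2) SZ
    return lin(mul(dag(U), mul(H_lab(t, w0, w1, w), U)), SZ, 1, -w / 2)
```

Evaluating `H_rot` at any two times gives the same matrix, $\frac{\omega_0-\omega}{2}\sigma_z + \frac{\omega_1}{2}\sigma_x$, so a single rotation already produces a time independent problem; Berry's iteration generalizes this idea when no finite sequence of rotations succeeds.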

Berry (3) was only interested in the corrections to the geometric phase that can be obtained by such a procedure. He showed that this strategy leads to successive corrections to the Berry phase, although only in an asymptotic sense, i.e., after, say, the $k$-th rotation, the following terms cannot improve the result achieved up to this iteration; rather, they spoil any possible useful correction. In Ref. (3) it was also shown, and we will review it here, that this iterative process is not an expansion in the small parameter $1/T$, since every iteration contains $1/T$ to infinite orders. We should also note that, as stated in Ref. (14), Berry’s iterative method is equivalent to the ones of Refs. (12); (13); (14).

In what follows we will extend Berry’s approach to include corrections to the wave functions. For ease of notation, and since we will be dealing with successive iterations, we will label the original Hamiltonian, its eigenvalues, and its eigenvectors with an iteration index zero; after $k$ iterations we will have the corresponding transformed quantities with index $k$. Also, as in previous sections, the initial state is taken to be the GS.

The main idea behind Berry’s approach lies in the realization that the unitary operator that generates the snapshot eigenvectors of the original Hamiltonian, i.e.,

(54)

can be used to construct the state

(55)

whose time evolution is determined to be

(56)

with

(57)

Repeating the previous argument with a new unitary operator, which gives the snapshot eigenvectors of the transformed Hamiltonian,