The variational-relaxation algorithm for finding quantum bound states

Abstract

I describe a simple algorithm for numerically finding the ground state and low-lying excited states of a quantum system. The algorithm is an adaptation of the relaxation method for solving Poisson’s equation, and is fundamentally based on the variational principle. It is especially useful for two-dimensional systems with nonseparable potentials, for which simpler techniques are inapplicable yet the computation time is minimal. (To be published in the American Journal of Physics.)

I Introduction

Solving the time-independent Schrödinger equation for an arbitrary potential energy function is difficult. There are no generally applicable analytical methods. In one dimension it is straightforward to integrate the equation numerically, starting at one end of the region of interest and working across to the other. For bound-state problems for which the energy is not known in advance, the integration must be repeated for different energies until the correct boundary condition at the other end is satisfied; this algorithm is called the shooting method.(1); (2); (3); (4)

For a nonseparable(5) potential in two or more dimensions, however, the shooting method does not work because there are boundary conditions that must be satisfied on all sides. One can still use matrix methods,(6); (7); (8) but the amount of computation required can be considerable and the diagonalization routines are mysterious black boxes to most students.

This paper describes a numerical method for obtaining the ground state and low-lying excited states of a bound system in any reasonably small number of dimensions. The algorithm is closely related to the relaxation method(9); (10); (11) for solving Poisson’s equation, with the complication that the equation being solved depends on the energy, which is not known in advance. The algorithm does not require any sophisticated background in quantum mechanics or numerical analysis. It is reasonably intuitive and easy to code.

The following section explains the most basic version of the algorithm, while Sec. III derives the key formula using the variational method. Section IV presents a two-dimensional implementation of the algorithm in Mathematica. Section V generalizes the algorithm to find low-lying excited states, and Sec. VI presents two nontrivial examples. The last two sections briefly discuss other related algorithms and how such calculations can be incorporated into the undergraduate physics curriculum.

II The algorithm

A standard exercise in computational physics(9); (10); (11) is to solve Poisson’s equation,

 \nabla^2\phi(\vec r) = -\rho(\vec r), \qquad (1)

where ρ(r⃗) is a known function, by the method of relaxation: Discretize space with a rectangular grid, start with an arbitrary function that matches the desired boundary conditions, and repeatedly loop over all the grid points that are not on the boundaries, adjusting each value in relation to its nearest neighbors to satisfy a discretized version of Poisson’s equation. To obtain that discretized version, write each term of the Laplacian operator in the form

 \frac{\partial^2\phi}{\partial x^2} \approx \frac{\phi(\vec r + \delta\hat x) + \phi(\vec r - \delta\hat x) - 2\phi(\vec r)}{\delta^2}, \qquad (2)

where δ is the grid spacing and x̂ is a unit vector in the x direction. Solving the discretized Poisson equation for the value at the current grid location then gives the needed formula,

 \phi_0 = \bar\phi_{nn} + \frac{1}{2d}\,\rho_0\,\delta^2, \qquad (3)

where ϕ₀ and ρ₀ are the values of ϕ and ρ at the current grid location, d is the dimension of space, and ϕ̄ₙₙ is the average of the ϕ values at the 2d nearest-neighbor grid locations. As this formula is applied repeatedly at all grid locations, the array of ϕ values “relaxes” to the desired self-consistent solution of Poisson’s equation that matches the fixed boundary conditions, to an accuracy determined by the grid resolution.
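As a concrete illustration (my own sketch, not code from the paper), the relaxation update of Eq. (3) for a two-dimensional grid (d = 2), with the boundary values held at zero, can be written as:

```python
import numpy as np

def relax_poisson(rho, delta, sweeps=1000):
    """Gauss-Seidel relaxation for the discretized Poisson equation
    grad^2 phi = -rho on a 2D grid, with phi = 0 on the boundary.
    Each interior value is set to the average of its four nearest
    neighbors plus rho * delta^2 / (2d), as in Eq. (3)."""
    n = rho.shape[0]
    phi = np.zeros_like(rho)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                nn = 0.25 * (phi[i+1, j] + phi[i-1, j]
                             + phi[i, j+1] + phi[i, j-1])
                phi[i, j] = nn + rho[i, j] * delta**2 / 4.0  # 1/(2d), d = 2
    return phi
```

Because each value is updated in place, a change at one site immediately feeds into its neighbors' updates within the same sweep.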

What is far less familiar is that this method can be adapted to solve the time-independent Schrödinger equation. To see the correspondence, write Schrödinger’s equation with only the Laplacian operator term on the left-hand side:

 \nabla^2\psi(\vec r) = -2\,\bigl(E - V(\vec r)\bigr)\,\psi(\vec r), \qquad (4)

where E is the energy eigenvalue, V(r⃗) is the given potential energy function, and I am using natural units in which ℏ and the particle mass are equal to 1. Discretizing the Laplacian gives a formula of the same form as Eq. (3),

 \psi_0 = \bar\psi_{nn} + \frac{1}{d}\,(E - V_0)\,\psi_0\,\delta^2, \qquad (5)

where the subscripts carry the same meanings as in Eq. (3). The appearance of ψ₀ on the right-hand side creates no difficulty at all, because we can solve algebraically for ψ₀:

 \psi_0 = \frac{\bar\psi_{nn}}{1 - (E - V_0)\,\delta^2/d}. \qquad (6)

The more pressing question is what to do with E, the energy eigenvalue that we do not yet know. The answer is that we can replace it with the energy expectation value

 \langle E\rangle = \frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}, \qquad (7)

where H is the Hamiltonian operator. We then update this expectation value after each step in the calculation. (The denominator in Eq. (7) is needed because the algorithm does not maintain the normalization of ψ.) As the relaxation process proceeds, ⟨E⟩ will steadily decrease, and we will eventually obtain a self-consistent solution for the ground-state energy and wave function.

The inner products in Eq. (7) are integrals, but we can compute them to sufficient accuracy as ordinary sums over the grid locations. The denominator is simply

 \langle\psi|\psi\rangle = \sum_i \psi_i^2\,\delta^d, \qquad (8)

where the index i runs over all grid locations and I have assumed that ψ is real. Similarly, the potential energy contribution to the numerator is

 \langle\psi|V|\psi\rangle = \sum_i V_i\,\psi_i^2\,\delta^d. \qquad (9)

To obtain the kinetic energy (K) contribution we again discretize the derivatives as in Eq. (2), arriving at the expression

 \langle\psi|K|\psi\rangle = -d \sum_i \left(\psi_i\,\bar\psi_{nn} - \psi_i^2\right)\delta^{d-2}. \qquad (10)

Each of these inner products must be updated after every change to one of the ψ values, but there is no need to evaluate them from scratch. When we change ψ₀,old to ψ₀,new, the corresponding changes to the inner products are

 \Delta\langle\psi|\psi\rangle = \left(\psi_{0,\mathrm{new}}^2 - \psi_{0,\mathrm{old}}^2\right)\delta^d, \qquad (11)

 \Delta\langle\psi|V|\psi\rangle = V_0\left(\psi_{0,\mathrm{new}}^2 - \psi_{0,\mathrm{old}}^2\right)\delta^d, \qquad (12)

 \Delta\langle\psi|K|\psi\rangle = -2d\left(\psi_{0,\mathrm{new}} - \psi_{0,\mathrm{old}}\right)\bar\psi_{nn}\,\delta^{d-2} + d\left(\psi_{0,\mathrm{new}}^2 - \psi_{0,\mathrm{old}}^2\right)\delta^{d-2}, \qquad (13)

where the factor of 2 in the first term of Eq. (13) arises because there is an identical contribution of this form from the terms in the sum of Eq. (10) in which i labels one of the neighboring grid locations.
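To make Eqs. (11)–(13) concrete, the following Python sketch (my own illustration, with d = 2 and array names of my choosing) computes both the full sums of Eqs. (8)–(10) and the incremental changes produced by updating a single site:

```python
import numpy as np

def inner_products(psi, V, delta, d=2):
    """Brute-force sums for <psi|psi>, <psi|V|psi>, and <psi|K|psi>
    (Eqs. (8)-(10)) on a 2D grid; psi is assumed zero on the boundary."""
    nn = np.zeros_like(psi)
    nn[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                             + psi[1:-1, 2:] + psi[1:-1, :-2])
    norm = np.sum(psi**2) * delta**d
    pot = np.sum(V * psi**2) * delta**d
    kin = -d * np.sum(psi * nn - psi**2) * delta**(d - 2)
    return norm, pot, kin

def inner_product_changes(old, new, V0, nn, delta, d=2):
    """Changes produced by replacing one value old -> new, per
    Eqs. (11)-(13); nn is the average of the site's 2d nearest neighbors."""
    dnorm = (new**2 - old**2) * delta**d
    dpot = V0 * (new**2 - old**2) * delta**d
    dkin = (-2.0*d*(new - old)*nn + d*(new**2 - old**2)) * delta**(d - 2)
    return dnorm, dpot, dkin
```

Recomputing the sums from scratch after a single-site change should reproduce exactly the increments returned by the second function.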

The algorithm, then, is as follows:

1. Discretize space into a rectangular grid, placing the boundaries far enough from the region of interest that the ground-state wave function will be negligible there.

2. Initialize the array of ψ values to represent a smooth, nodeless function such as the ground state of an infinite square well or a harmonic oscillator. All the ψ values on the boundaries should be zero and will remain unchanged.

3. Use Eqs. (8)–(10) to calculate ⟨ψ|ψ⟩, ⟨ψ|V|ψ⟩, and ⟨ψ|K|ψ⟩ for the initial array.

4. Loop over all interior grid locations, setting the ψ value at each location to

 \psi_0 = \frac{\bar\psi_{nn}}{1 - (\langle E\rangle - V_0)\,\delta^2/d}. \qquad (14)

Also use Eqs. (11)–(13) to compute the changes to the inner products that result from this change to ψ₀, and use these quantities to update the value of ⟨E⟩ before proceeding to the next grid location.

5. Repeat step 4 until ⟨E⟩ and ψ no longer change, within the desired accuracy.

The simplest procedure, as just described, is to update each ψ value “in place,” so that a change at one grid location immediately affects the calculation for the next grid location. In the terminology of relaxation methods, this approach is called the Gauss-Seidel algorithm.(9); (10); (11)
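The paper's own implementation (Sec. IV) is in Mathematica; as an independent sketch, the five steps above might be coded in Python as follows, for the two-dimensional harmonic oscillator test potential described later. All function and variable names here are my own choices:

```python
import numpy as np

def variational_relax(V, delta, sweeps):
    """Variational relaxation for the 2D time-independent Schrodinger
    equation in natural units (hbar = m = 1, d = 2).  V is the potential
    on an n x n grid spanning the unit square; psi is held at zero on
    the boundary.  Returns (energy, psi)."""
    d = 2
    n = V.shape[0]
    x = np.linspace(0.0, 1.0, n)
    # Step 2: smooth, nodeless start (infinite-square-well ground state).
    psi = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
    psi[0, :] = psi[-1, :] = psi[:, 0] = psi[:, -1] = 0.0

    def sums():  # step 3: Eqs. (8)-(10)
        nn = np.zeros_like(psi)
        nn[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                 + psi[1:-1, 2:] + psi[1:-1, :-2])
        norm = np.sum(psi**2) * delta**d
        pot = np.sum(V * psi**2) * delta**d
        kin = -d * np.sum(psi * nn - psi**2) * delta**(d - 2)
        return norm, pot, kin

    norm, pot, kin = sums()
    for _ in range(sweeps):              # steps 4-5
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                E = (kin + pot) / norm   # current <E>, Eq. (7)
                nn = 0.25 * (psi[i+1, j] + psi[i-1, j]
                             + psi[i, j+1] + psi[i, j-1])
                old = psi[i, j]
                new = nn / (1.0 - (E - V[i, j]) * delta**2 / d)  # Eq. (14)
                psi[i, j] = new
                # Eqs. (11)-(13): incremental inner-product updates.
                norm += (new**2 - old**2) * delta**d
                pot += V[i, j] * (new**2 - old**2) * delta**d
                kin += (-2.0*d*(new - old)*nn
                        + d*(new**2 - old**2)) * delta**(d - 2)
        norm, pot, kin = sums()          # guard against round-off drift
    return (kin + pot) / norm, psi
```

With the anisotropic oscillator potential of Sec. IV (angular frequencies 40 and 60), the returned energy should approach the analytical ground-state value of 50 after roughly a hundred sweeps.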

III Variational interpretation

In the previous section I asserted, but did not prove, that ⟨E⟩ will steadily decrease during the relaxation process. To see why this happens, it is instructive to derive Eq. (14) using the variational method of quantum mechanics.(12) The idea is to treat each local value ψ₀ as a parameter on which the trial function ψ depends, and repeatedly adjust these parameters, one at a time, to minimize the energy expectation value ⟨E⟩. So let us consider how the expression for ⟨E⟩ in Eq. (7) depends on ψ₀.

Focusing first on the denominator of Eq. (7), we discretize the integral as in Eq. (8), but rewrite the sum as

 \langle\psi|\psi\rangle = \psi_0^2\,\delta^d + s, \qquad (15)

where s is an abbreviation for the terms in the sum that do not depend on ψ₀. Similarly, the discretization of Eqs. (9) and (10) allows us to write the numerator of Eq. (7) as

 \langle\psi|H|\psi\rangle = -2d\,\psi_0\,\bar\psi_{nn}\,\delta^{d-2} + d\,\psi_0^2\,\delta^{d-2} + V_0\,\psi_0^2\,\delta^d + h, \qquad (16)

where the factor of 2 in the first term is the same as in Eq. (13) and h is an abbreviation for all the terms that do not depend on ψ₀.

Figure 1: Mathematica code to implement the basic variational-relaxation algorithm for a two-dimensional quantum system. Here the potential energy function is that of a harmonic oscillator, for which the solutions are known analytically.

Inserting Eqs. (15) and (16) into Eq. (7) gives

 \langle E\rangle = \frac{h - 2d\,\delta^{d-2}\,\psi_0\,\bar\psi_{nn} + \left(d\,\delta^{d-2} + V_0\,\delta^d\right)\psi_0^2}{s + \delta^d\,\psi_0^2}, \qquad (17)

where I have written the h and s terms first because they are larger than the others by a factor on the order of the total number of lattice points.(13) We are looking for the value of ψ₀ that minimizes this expression. Differentiating with respect to ψ₀ and setting the result equal to 0 gives a complicated equation, but in the limit of a large lattice it is a valid approximation to keep only the leading terms in h and s. With that approximation, after some algebra, the extremization condition reduces to

 \psi_0 = \frac{\bar\psi_{nn}}{1 - \left[(h/s) - V_0\right]\delta^2/d}. \qquad (18)

The ratio h/s is equal to ⟨E⟩ in the limit of an infinite lattice, so this result is effectively the same as Eq. (14). By similarly focusing on the leading nontrivial terms in h and s it is straightforward to show that this extremum is a minimum, if the lattice spacing is sufficiently small.

We can therefore be confident that each step of the algorithm will reduce the value of ⟨E⟩. This result suggests, but does not prove, that the algorithm will converge to the system’s ground state. In fact every energy eigenfunction is a stationary point of the energy functional,(12) so there can be situations in which the algorithm converges (or almost converges) to an excited state instead of the ground state. But the excited states are unstable to small perturbations, and they can be avoided entirely by choosing an initial wave function that is sufficiently similar to the ground state. Once the algorithm brings ⟨E⟩ below every excited-state energy, the ground state is the only possible result after sufficiently many iterations.(14)

IV An implementation

Figure 1 shows a basic implementation of the variational-relaxation algorithm in Mathematica,(15) for a two-dimensional potential well. Translating this example to other computer languages should be straightforward.

The first four lines of the code define the resolution of the lattice (here 50), initialize the wave function to the ground state of an infinite square well, and then plot the initial wave function using a custom plotting function that maps the array of lattice points to a square in the xy plane extending from 0 to 1 in each direction. Notice that the array size is one element larger in each dimension than the nominal lattice size (51×51 rather than 50×50 in this case), so that the edges can be mapped to exactly 0 and 1, where the wave function will be held fixed throughout the calculation. Notice also that an offset of 1 is required when indexing into the array, because Mathematica array indices start at 1 rather than 0.

Lines 5 and 6 define and plot an array of values to represent the potential energy function. Here, for testing purposes, this function is a harmonic oscillator potential with a classical angular frequency of 40 (in natural units) in the x direction and 60 in the y direction. The rest of the code is sufficiently versatile, however, that almost any potential energy function can be used, as long as its discrete representation is reasonably accurate.

Lines 7–13 compute the inner products ⟨ψ|ψ⟩, ⟨ψ|V|ψ⟩, and ⟨ψ|K|ψ⟩ according to Eqs. (8) through (10). Because the width of the two-dimensional space is one unit, the lattice spacing δ is the reciprocal of the lattice size (1/50). Line 14 displays the initial value of ⟨E⟩.

The algorithm itself is implemented in the relax function (lines 15–24), whose argument is the number of times to iterate the algorithm for each lattice site. For each iteration step we loop over all the lattice sites and for each site, save the old wave function value, calculate the new value from Eq. (14), and update the inner products and ⟨E⟩ using Eqs. (11)–(13). When everything is finished we display the final value of ⟨E⟩ and then plot the final wave function. To actually execute this function we type something like relax[100] for 100 iteration steps. We can do this repeatedly, checking the results for convergence.

For this harmonic oscillator example using a 50×50 lattice, 100 iteration steps results in an energy value of 49.97, within less than 0.1% of the analytically known value of 50 (that is, one-half the sum of the x and y angular frequencies). After another 100 steps the energy converges to 49.94, slightly below the analytical value due to the lattice discretization. The calculated wave function has the familiar Gaussian shape. On a typical laptop computer, Mathematica can execute 100 iteration steps for a 50×50 lattice in just a few seconds. This execution speed, along with the brevity of the code, brings two-dimensional calculations of this type well within the reach of a typical undergraduate homework assignment.

V Extensions

An easy trick for speeding up the algorithm is to use over-relaxation,(9); (10); (11) in which we try to anticipate subsequent iterations by “stretching” each change to a ψ value by a factor between 1 and 2. If we call the value of expression (14) ψ₀,nominal, then the formula to update ψ₀ becomes

 \psi_{0,\mathrm{new}} = \psi_{0,\mathrm{old}} + \omega\left(\psi_{0,\mathrm{nominal}} - \psi_{0,\mathrm{old}}\right), \qquad (19)

where the “stretch factor” ω is called the over-relaxation parameter. Figure 2 shows how the rate of convergence depends on ω, for the two-dimensional harmonic oscillator example described in the preceding section.

Figure 2: The energy expectation value ⟨E⟩ as a function of the iteration number, for the two-dimensional harmonic oscillator example used in Sec. IV. The different data sets are for different values of the over-relaxation parameter ω. The basic algorithm without over-relaxation corresponds to ω=1. In this example, with a lattice size of 50×50, the optimum ω is about 1.8.
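In code, the over-relaxed update of Eq. (19) is a one-line change to the basic algorithm (a sketch; ψ₀,nominal denotes the value that Eq. (14) would assign):

```python
def over_relaxed_update(old, nominal, omega):
    """Eq. (19): stretch the nominal change by the over-relaxation
    parameter omega.  omega = 1 recovers the basic Gauss-Seidel update;
    1 < omega < 2 overshoots in the direction of the nominal change."""
    return old + omega * (nominal - old)
```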

After finding the ground state of a particular system, we can go on to find its first excited state with only a minor modification of the algorithm. The idea is the same as with other variational solutions,(12) namely, to restrict the trial wave function to be orthogonal to the ground state. To do this, we periodically project out any contribution of the ground state to the trial function during the course of the calculation. More explicitly, the procedure is as follows:

1. Normalize and save the just-determined ground-state wave function as ψgs.

2. Initialize a new trial wave function that crudely resembles the first excited state, with a single node. The first excited state of an infinite square well or a harmonic oscillator would be a reasonable choice. It may be necessary to try different orientations for the node of this function.

3. Proceed as in the basic algorithm described in Sec. II, but after each loop through all the grid locations, calculate the ψgs component of the trial function as the inner product

 \langle\psi_{\mathrm{gs}}|\psi\rangle = \sum_i \psi_{\mathrm{gs},i}\,\psi_i\,\delta^d. \qquad (20)

Multiply this inner product by ψgs and subtract the result from ψ (point by point). Then recalculate the inner products of Eqs. (8)–(10) before proceeding to the next iteration.
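The projection step can be sketched in Python as follows (illustrative names; psi_gs is assumed normalized so that Σ ψ²gs δᵈ = 1):

```python
import numpy as np

def project_out(psi, psi_gs, delta, d=2):
    """Remove the ground-state component from a trial wave function:
    compute the overlap of Eq. (20), then subtract overlap * psi_gs
    point by point.  psi_gs must be normalized on the grid."""
    overlap = np.sum(psi_gs * psi) * delta**d   # Eq. (20)
    return psi - overlap * psi_gs
```

After this subtraction the trial function is orthogonal to the saved ground state, up to round-off.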

The orientation of the initial state’s node matters because we want it to resemble the first excited state more than the second. For example, the first excited state of the anisotropic harmonic oscillator potential used in Sec. IV has a node line parallel to the y axis, so a good choice for the initial state would be sin(2πx) sin(πy), rather than the orthogonal state sin(πx) sin(2πy). If the latter state is used the algorithm will become stuck, for a rather long time, on the second excited state (with energy 110) before finally converging to the first excited state (with energy 90).

After finding the first excited state we can find the second excited state in a similar way, this time projecting out both the ground-state contribution and the first-excited-state contribution after each loop through all the grid locations. We could then go on to find the third excited state and so on, but if many states are needed it may be easier to use matrix methods.(6); (7); (8)

VI Examples

Figure 3: A “Flaming W” potential energy well (upper left) and the three lowest-energy wave functions and corresponding energies for a particle trapped in this well. The potential energy is zero inside the W and +200 (in natural units) in the flat area surrounding it. The grid resolution is 64×64.

To illustrate the versatility of the variational-relaxation algorithm, Fig. 3 shows results for an intricate but contrived potential energy well based on an image of the Weber State University “Flaming W” logo.(16) As before, the units are such that ℏ, the particle mass, and the width of the square grid region are all equal to 1. In these units the sine-wave starting function (that is, the ground state of an infinitely deep two-dimensional box of this size) has a kinetic energy of π² ≈ 9.9, so the well depth of 200 is substantially larger than this natural energy scale. All three of the states shown are bound, with energies less than 200. As expected, the ground-state wave function spreads to fill the oddly shaped potential well, but is peaked near the center. The two lowest excited states are relatively close together in energy, with nodal curves that are roughly orthogonal to each other.

Figure 4: The interparticle potential energy (upper left) and the three lowest-energy wave functions and corresponding energies for a pair of equal-mass but distinguishable particles trapped in a one-dimensional infinite square well, repelling each other according to Eq. (21). The grid resolution is 50×50 and the maximum potential energy is 80 in natural units.

For a second example, note that the Schrödinger equation for a single particle in two dimensions is mathematically equivalent to that for two equal-mass particles in one dimension. We can therefore adapt our results to the latter system by renaming the coordinates (x, y) → (x₁, x₂). Consider, then, a pair of equal-mass (but distinguishable) particles trapped in a one-dimensional infinite square well, exerting a repulsive force on each other.(17) A smooth potential for modeling such a force is a Gaussian,(18)

 V(x_1, x_2) = V_{\max}\,e^{-(x_1 - x_2)^2/a^2}, \qquad (21)

and for illustration purposes I will take Vmax = 80 in natural units, with a suitable choice for the interaction width a. This potential and its three lowest-energy stationary states are shown in Fig. 4. Interpreting these two-particle wave function plots takes a little practice; for example, the two peaks in the ground-state wave function correspond not to the two particles, but rather to two equally probable configurations for both particles, one with particle 1 near one end of the well and particle 2 near the other end, and the other with the particles interchanged. This is an “entangled” state, because a measurement of one particle’s position changes the probability distribution for the other particle’s position. Notice that the first excited state, with a node along the line x₁ = x₂, has an almost identical probability density and only slightly more energy, as is typical of double-well potentials. In contrast, the second excited state tends to put one particle or the other near the middle of the well and has considerably more energy.
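For readers who want to set up this example, Eq. (21) can be sampled on the two-particle configuration grid as follows (a sketch; the width value used below is an arbitrary placeholder, not the paper's choice):

```python
import numpy as np

def gaussian_repulsion(n, Vmax, a):
    """Sample Eq. (21), V = Vmax * exp(-(x1 - x2)^2 / a^2), on an n x n
    grid covering the unit square of configurations (x1, x2) for two
    particles in a one-dimensional box of unit width."""
    x = np.linspace(0.0, 1.0, n)
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    return Vmax * np.exp(-((x1 - x2)**2) / a**2)
```

Combining this array with the fixed ψ = 0 boundary condition on the grid edges (the infinite square well) gives the system of Fig. 4.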

These two examples are merely meant to suggest the wide range of possible uses of the variational-relaxation algorithm. The algorithm should be applicable to real-world systems such as quantum dots(7); (19) and other nano-structures that can be modeled as two-dimensional or three-dimensional potential wells. For a system of two particles in one dimension, one could investigate other interaction potentials, repulsive or attractive, as well as other confining potentials.

VII Related algorithms

The algorithm described in this paper cannot possibly be new, because it is such a minor adaptation of the familiar relaxation algorithm for Poisson’s equation. However, I have been unable to find a published description of it.(20); (21)

Giordano and Nakanishi(22) describe a closely related algorithm that also uses a rectangular lattice and the variational principle, but takes a Monte Carlo approach. Instead of looping over all lattice points in order, they choose successive lattice points at random. And instead of computing the value of ψ₀ that minimizes ⟨E⟩ using Eq. (14), they consider a random change to ψ₀, compute the effect of this change on ⟨E⟩, and then accept the change if ⟨E⟩ will decrease but reject it if ⟨E⟩ would increase. This Monte Carlo approach inspired the algorithm described in this paper. However, the Monte Carlo algorithm is much less efficient, at least when fixed, uniform distributions are used for the random numbers.

Koonin and Meredith(23) describe an alternative algorithm that evolves an initial trial function in imaginary time, according to the Schrödinger-like equation

 \frac{\partial\psi}{\partial\tau} = -H\psi, \qquad (22)

whose formal solution is

 \psi(\tau) = e^{-H\tau}\,\psi(0). \qquad (23)

If we imagine expanding ψ(0) as a linear combination of eigenfunctions of the Hamiltonian H, then we see that the ground-state term in the expansion decreases the most slowly (or grows the most rapidly if its energy is negative), so eventually this evolution in “fake time” will suppress all the remaining terms in the expansion and yield a good approximation to the ground state.(24) An advantage of the imaginary-time approach is that its validity rests on the fundamental argument just given, rather than on the more subtle variational principle.
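As a minimal one-dimensional illustration of this idea (my own sketch using first-order Euler steps in imaginary time, not Koonin and Meredith's code), the evolution of Eq. (22) can be written as:

```python
import numpy as np

def imaginary_time_ground_state(V, dx, dtau, steps):
    """Evolve a 1D trial function by the forward-Euler discretization of
    Eq. (22), psi <- psi - dtau * H psi, renormalizing at each step
    (natural units, hbar = m = 1).  Returns (energy, psi)."""
    n = len(V)
    psi = np.ones(n)
    psi[0] = psi[-1] = 0.0                        # crude nodeless start
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (psi[2:] + psi[:-2] - 2.0 * psi[1:-1]) / dx**2
        Hpsi = -0.5 * lap + V * psi               # H = -(1/2) d^2/dx^2 + V
        psi = psi - dtau * Hpsi
        psi /= np.sqrt(np.sum(psi**2) * dx)       # keep psi normalized
    lap = np.zeros(n)
    lap[1:-1] = (psi[2:] + psi[:-2] - 2.0 * psi[1:-1]) / dx**2
    energy = np.sum(psi * (-0.5 * lap + V * psi)) * dx
    return energy, psi
```

The time step dtau must be small enough for stability of the explicit Euler scheme, roughly dtau < 2/(1/dx² + Vmax); the excited-state components then decay exponentially with imaginary time.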

The speed and coding complexity of an imaginary-time algorithm depend on the specific method used for the imaginary-time evolution. Koonin and Meredith use a basic first-order forward-time Euler integration, resulting in an algorithm that is just as easy to code as the variational-relaxation algorithm, and that requires about the same execution time as the latter without over-relaxation. Their algorithm is therefore a strong candidate for use in an undergraduate course, especially if students are more familiar with time-evolution algorithms than with relaxation algorithms (and if teaching relaxation algorithms is not a course goal).

Faster imaginary-time algorithms also exist, but may be too sophisticated for many educational settings. Simply switching to a centered-difference approximation for the time derivative, which is quite effective for the actual time-dependent Schrödinger equation,(25) yields an algorithm that is unstable no matter how small the time step.(26) Implicit algorithms(27) would solve the stability problem, but these require working with large matrices. One reviewer of early drafts of this paper strongly recommends an imaginary-time adaptation of the split-operator algorithm described in Sec. 16.6 of Ref. (2), which uses a fast Fourier transform to switch back and forth between position space and momentum space during each time step.

VIII Classroom use

Students in a computational physics course should be able to code the variational-relaxation algorithm themselves, perhaps after practicing on the ordinary relaxation algorithm for Poisson’s or Laplace’s equation. Coding the algorithm in just one spatial dimension can also be a good warm-up exercise, keeping in mind that it is usually easier to solve one-dimensional problems by the shooting method.

In an upper-division undergraduate course in quantum mechanics, it may be better to provide students with the basic code shown in Fig. 1 (or its equivalent in whatever programming language they will use). Typing the code into the computer gives students a chance to think about each computational step. After verifying that the algorithm works for a familiar example such as the two-dimensional harmonic oscillator, students can be asked to modify it to handle other potential energy functions, over-relaxation, and low-lying excited states.

Even in a lower-division “modern physics” course or the equivalent, I think there is some benefit in showing students that the two-dimensional time-independent Schrödinger equation, for an arbitrary potential energy function, can be solved. For the benefit of students and others who are not ready to code the algorithm themselves, and for anyone who wishes to quickly explore some nontrivial two-dimensional stationary states, the electronic supplement(28) to this paper provides a JavaScript implementation of the algorithm with a graphical user interface, runnable in any modern web browser.

In any of these settings, and in any other physics course, introducing general-purpose numerical algorithms can help put the focus on the laws of physics themselves, avoiding an over-emphasis on idealized problems and specialized analytical tricks.

Acknowledgements.
I am grateful to Nicholas Giordano, Hisao Nakanishi, and Saul Teukolsky for helpful correspondence, to Colin Inglefield for bringing Ref. (7) to my attention, to the anonymous reviewers for many constructive suggestions, and to Weber State University for providing a sabbatical leave that facilitated this work.

References

1. N. J. Giordano and H. Nakanishi, Computational Physics, 2nd ed. (Pearson Prentice Hall, Upper Saddle River, NJ, 2006), Sec. 10.2.
2. H. Gould, J. Tobochnik, and W. Christian, An Introduction to Computer Simulation Methods, 3rd ed. (Pearson, San Francisco, 2007), http://www.compadre.org/osp/items/detail.cfm?ID=7375, Sec. 16.3.
3. M. Newman, Computational Physics, revised edition (CreateSpace, Seattle, 2013), Sec. 8.6.
4. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes, 3rd ed. (Cambridge University Press, Cambridge, 2007), Sec. 18.1.
5. When the potential energy function can be written as a sum of one-dimensional potentials, e.g., , the time-independent Schrödinger equation can be solved by separation of variables, reducing its solution to the one-dimensional case. However, a generic multidimensional potential does not have this property.
6. F. Marsiglio, “The harmonic oscillator in quantum mechanics: A third way,” Am. J. Phys. 77(3), 253–258 (2009); R. L. Pavelich and F. Marsiglio, “Calculation of 2D electronic band structure using matrix mechanics,” Am. J. Phys. 84(12), 924–935 (2016).
7. P. Harrison and A. Valavanis, Quantum Wells, Wires and Dots, 4th ed. (John Wiley & Sons, Chichester, UK, 2016).
8. R. Schmied, Introduction to Computational Quantum Mechanics (unpublished lecture notes, 2016), http://atom.physik.unibas.ch/teaching/CompQM.pdf.
9. See Giordano and Nakanishi, Ref. (1), Sec. 5.2; Gould, Tobochnik, and Christian, Ref. (2), Sec. 10.5; Newman, Ref. (3), Secs. 9.1–9.2; and Press et al., Ref. (4), Sec. 20.5.
10. S. E. Koonin and D. C. Meredith, Computational Physics: Fortran Version (Addison-Wesley, Reading, MA, 1990), Sec. 6.2.
11. A. L. Garcia, Numerical Methods for Physics, 2nd ed., revised (CreateSpace, Seattle, 2015), Sec. 8.1.
12. The variational method is discussed in most quantum mechanics textbooks. Especially thorough treatments can be found in C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics (John Wiley & Sons, New York, 1977), Vol. II, Complement E; E. Merzbacher, Quantum Mechanics, 3rd ed. (John Wiley & Sons, New York, 1998), Sec. 8.1; and R. Shankar, Principles of Quantum Mechanics, 2nd ed. (Springer, New York, 1994), Sec. 16.1. Each of these texts shows more generally that every discrete stationary state is an extremum of the energy functional, Eq. (7).
13. The claim that h is much larger than the other terms in the numerator of Eq. (17) is valid if V is always positive. The potential energy can always be shifted so that this condition holds.
14. The trivial function ψ = 0 is also a solution of Eq. (14), but it is unstable under small perturbations so the algorithm will never converge to this solution.
15. Wolfram Mathematica, http://www.wolfram.com/mathematica/.
17. When the two-particle potential consists only of an interaction term of the form V(x₁ − x₂), the Schrödinger equation is separable in terms of center-of-mass and relative coordinates. Adding a confining potential, such as the infinite square well used here, makes the problem irreducibly two-dimensional.
18. For simulation results of scattering interactions between two particles in one dimension interacting via a Gaussian potential, see J. J. V. Maestri, R. H. Landau, and M. J. Páez, “Two-particle Schrödinger equation animations of wave packet-wave packet scattering,” Am. J. Phys. 68(12), 1113–1119 (2000).
19. For other approaches to analyzing quantum dots, see Z. M. Schultz and J. M. Essick, “Investigation of exciton ground state in quantum dots via Hamiltonian diagonalization method,” Am. J. Phys. 76(3), 241–249 (2008); B. J. Riel, “An introduction to self-assembled quantum dots,” Am. J. Phys. 76(8), 750–757 (2008); D. Zhou and A. Lorke, “Wave functions of elliptical quantum dots in a magnetic field,” Am. J. Phys. 83(3), 205–209 (2015).
20. Section 18.3 of Numerical Recipes, Ref. (4), presents a general relaxation algorithm for systems of ordinary differential equations, and Sec. 8.0.1 shows how to treat eigenvalue problems by adding another equation to the system. For the special case of the one-dimensional time-independent Schrödinger equation the approach taken in this paper is much simpler. Reference (4) does not discuss eigenvalue problems for partial differential equations.
21. See, for example, W. F. Ames, Numerical Methods for Partial Differential Equations, 2nd ed. (Academic Press, New York, 1977). This standard reference includes a section on eigenvalue problems for partial differential equations, but does not describe any algorithm that I can recognize as equivalent to the one in this paper.
22. Giordano and Nakanishi, Ref. (1), Sec. 10.4.
23. Koonin and Meredith, Ref. (10), Sec. 7.4.
24. We can interpret the imaginary time parameter τ as an inverse temperature, and the exponential factor e^{−Hτ} as a Boltzmann probability factor in the canonical ensemble. Then the limit τ → ∞ corresponds to lowering the reservoir temperature to zero, putting the system into its ground state.
25. P. B. Visscher, “A fast explicit algorithm for the time-dependent Schrödinger equation,” Computers in Physics 5 (6), 596–598 (1991). This algorithm is also described in Ref. (1), Sec. 10.5, and Ref. (2), Sec. 16.5; I learned it from T. A. Moore in 1982.
26. See Ames, Ref. (21), Sec. 2-4. The argument given there is easily adapted to Eq. (22).
27. See Koonin and Meredith, Ref. (10), Secs. 7.2–7.3; Garcia, Ref. (11), Chap. 9; or Press et al., Ref. (4), Secs. 20.2–20.3.
28. See the “Quantum Bound States in Two Dimensions” web app at http://physics.weber.edu/schroeder/software/BoundStates2D.html.