
Universitat Politècnica de Catalunya

Departament de Física i Enginyeria Nuclear

Doctoral Thesis in Physics

MONTE CARLO STUDY

OF QUANTUM PHASE TRANSITIONS

AT ZERO TEMPERATURE

Directors: Prof. Dr. Jordi Boronat, Dr. G. E. Astrakharchik

Candidate: O. N. Osychenko

PhD Program in Computational and Applied Physics

23 October 2012

## Introduction

A phase transition is a common term for a wide range of phenomena generally described as a transition between different states of matter under the variation of one or more physical parameters of the system. The transition is accompanied by an abrupt change of some of its physical quantities or their derivatives, whereas the relevant physical magnitudes change continuously within a given phase. The simplest example of a phase transition is the melting of a crystal into a fluid when its temperature is increased, which produces a discontinuous behavior of its density and some other physical properties. Historically, the first classification of phase transitions was given by P. Ehrenfest [Ehr33] and relies on the definition of a phase as a state with the minimum thermodynamic free energy. A first-order transition in this framework is a transition with an abrupt change of the first derivative of the system's free energy with respect to a certain parameter. Second-order transitions are those in which the first derivative has a cusp when the parameter is changed, that is, when a finite discontinuity appears in the second derivative of the free energy. Ehrenfest's original proposal was later extended to the cases of infinite discontinuities of physical parameters. Higher-order transitions are defined similarly, as the ones possessing a discontinuity in the $n$-th ($n > 2$) derivative of the free energy with respect to the parameter. Landau theory [LL80] describes second-order phase transitions as a result of symmetry breaking, with a rapid change of a so-called order parameter characterizing the symmetry properties of a phase. Well-known examples of second-order phase transitions are the transition between a normal fluid and a superfluid, with the superfluid fraction being the order parameter, or the ferromagnetic-paramagnetic transition, with the magnetization as order parameter.

Quantum phase transitions are a broad subclass of these phenomena related to quantum matter, most generally described as a transition between phases at zero or low enough temperature, where quantum effects play an important role. The profound difference of quantum phase transitions from classical phase transitions lies in the absence of entropy, due to the Nernst heat theorem [Ner07]. A classical description of a zero-temperature system can prescribe only one phase (an ideal crystal), whilst a quantum system is capable of undergoing a transition, but only with the change of a certain non-thermal parameter, for instance its density. The role that entropy plays in classical systems is played in quantum phases by quantum fluctuations. One of the first experimental evidences of a quantum phase transition was the solidification of helium into an hcp solid at low temperature as the pressure was increased, observed by Keesom [Kee42]. The recent advances in methods of manipulation of ultracold matter, especially in the topics of cooling and trapping of atoms [Chu98, CT98, Phi98] and Feshbach resonances [TTHK99, CGJT10], demonstrated possibilities to produce systems with unique and highly tunable interparticle potentials [CTGO11]. The tunability of the interaction in terms of non-thermal parameters, which was achieved in a number of experiments, plays a key role for quantum phase transitions. One of the first theoretical proposals for a quantum phase transition was the bosonic superfluid-Mott insulator transition, based on the Bose-Hubbard model [FWGF89, JBC98], which was finally confirmed experimentally in the work of Greiner et al. [GME02] and in a number of subsequent experimental set-ups [OTF01, TOPK06, TCF11]. Recently, the phase diagram and essential thermodynamics of the three-dimensional Bose-Hubbard model were obtained in quantum Monte Carlo simulations by Capogrosso-Sansone et al. [CSPS07].

The energy of a quantum system, described by the Schrödinger equation in a state of a certain symmetry, can be obtained with the help of Quantum Monte Carlo methods. From the equations of state corresponding to the different phases, one can find the pressure as a function of the relevant parameters. The double-tangent Maxwell construction, based on the equality of pressures and chemical potentials along the transition line, allows one to obtain the first-order transition point and the width of the transition zone.
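As an illustration, once the equations of state of the two phases are fitted, the double-tangent construction reduces to solving two equations for two densities. The sketch below uses toy model equations of state (hypothetical placeholders, not the fits of this Thesis), with the zero-temperature relations $P = n^2\,de/dn$ and $\mu = e + P/n$:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical model equations of state e(n) (energy per particle vs density);
# illustrative placeholders only, chosen so that a first-order transition exists.
def e_gas(n):   return n**2
def e_solid(n): return 1.0 + (n - 2.0)**2

def pressure(e, n, h=1e-6):
    # P = n^2 de/dn at zero temperature (central finite difference).
    return n**2 * (e(n + h) - e(n - h)) / (2.0 * h)

def mu(e, n):
    # mu = e + P/n = d(n e)/dn.
    return e(n) + pressure(e, n) / n

def maxwell(x):
    # Equal pressure and equal chemical potential in the two phases.
    ng, ns = x
    return [pressure(e_gas, ng) - pressure(e_solid, ns),
            mu(e_gas, ng) - mu(e_solid, ns)]

# Densities bounding the coexistence (transition) zone.
ng, ns = fsolve(maxwell, [0.6, 2.1])
```

The two roots `ng` and `ns` bound the coexistence region: between them the system phase-separates rather than following either single-phase equation of state.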

Quantum Monte Carlo (QMC) techniques are ab initio quantum calculation algorithms that might provide deep insight into the design of quantum matter, with a capability to describe a multitude of relevant properties and phenomena of the system. Among them are the possibility to locate quantum and temperature phase transitions, and to quantify correlations in the system (e.g. the pair correlation function, structure factors, and even non-local properties, such as the superfluid fraction and the Bose–Einstein condensate). Bose-Einstein condensation (BEC), i.e., a macroscopic occupation of the zero-momentum quantum state of a system, despite being proposed by A. Einstein and S. N. Bose in the mid-twenties of the previous century [Ein24, Ein25], used to be considered for many decades more as a mathematical abstraction than an achievable state of matter. The superfluid properties of He at low temperatures, found in the experiments of Kapitza, Allen and Misener [Kap38, AM38], are believed to be related to the presence of a Bose–Einstein condensate. BEC-like phase transitions have also been observed in excitonic systems [LW93]. Long-lasting efforts of numerous experimental groups to actually observe a signature of a condensed phase in ultracold gases finally gave a positive result: in 1995 a Bose-Einstein condensate was found by E. Cornell and C. Wieman [AEM95] in gaseous Rb, and later the same year in the other alkali vapours of Na (W. Ketterle et al. [DMA95]) and Li (R. Hulet et al. [BSTH95]). The experimental set-ups to produce a condensate are generally quite complex, partially due to the strict requirements of extremely low temperatures. From the theoretical side, QMC simulations can yield accurate predictions about the properties of a Bose–Einstein condensate, provided the interaction in the system is known.

Let us explain in more detail the techniques, challenges and results that we present in this thesis. We are usually concerned with the properties of a bulk system in its thermodynamic limit, but its QMC description is generally performed with periodic boundary conditions (p.b.c.) applied to a finite-size system. Therefore any Quantum Monte Carlo method yields results with a certain error related to the size of the simulated system, and one has to study the limit $N \to \infty$, with $N$ the number of particles. The properties of a system in the thermodynamic limit are therefore found by extrapolating the data for limited system sizes to infinity. In the condensed systems that we consider, the convergence of the energy goes as $1/N$. The dependence of a certain physical quantity is then found for a set of different numbers of particles, and the result is extrapolated to infinity. Nevertheless, the lowest number of particles for which the asymptotic behavior is reached within an acceptable precision is a priori unknown. In practice, a number of probe simulations is performed in order to observe the needed linear dependence on $1/N$, but sometimes the accessible system sizes are too small to obtain the thermodynamic limit correctly. In some cases this problem can be greatly relieved by using the Ewald summation technique. In the framework of the Ewald method the potential energy is calculated for the infinite system, consisting of periodically replicated copies of the original simulation box. Although the method makes the calculation more time-consuming, its use shows that in particular systems, especially when long-range interactions are present, the finite-size correction is substantially reduced, making the extrapolation to the thermodynamic limit possible.
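The extrapolation itself is a linear fit of the energy per particle against $1/N$; the intercept is the thermodynamic-limit estimate. A minimal sketch with synthetic energies (illustrative numbers, not simulation output):

```python
import numpy as np

# Synthetic DMC-like energies per particle for increasing system sizes,
# obeying the expected E(N) = E_inf + c/N convergence (illustrative values).
N = np.array([32, 64, 128, 256])
E = -5.0 + 0.8 / N          # stand-in for simulation results

# Linear fit of E against 1/N; the intercept is the thermodynamic-limit value.
c, E_inf = np.polyfit(1.0 / N, E, 1)
```

In a real study each `E` carries a statistical error bar, and a weighted fit (e.g. `numpy.polyfit` with `w=1/sigma`) would be used instead.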

The Ewald technique is a well-known simulation tool, often used with some modifications in a number of applications involving molecular dynamics, Monte Carlo and other algorithms. Despite its popularity, the scope of its utilization is mostly limited to the summation of Coulomb interactions, where its use is essential due to the long-range nature of the potential. Nonetheless, conceptually the Ewald technique may be applied to a broad variety of pairwise potentials, for example of the generic power-law type $1/r^n$. In this Thesis we present a detailed step-by-step derivation of the Ewald sums for power-law interaction potentials, and for all the terms we give explicit formulas, ready to be used straightforwardly in actual simulations. The derived expressions have been used in the simulations of systems consisting of Rydberg atoms and of particles interacting via the Yukawa interaction potential.
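To illustrate the splitting into a short-range, erfc-screened real-space sum and a rapidly converging reciprocal-space sum, the sketch below applies the standard Coulomb Ewald formulas (not the power-law generalization derived in this Thesis) to the rock-salt structure; the parameters (splitting parameter, cutoffs, units with unit charges and nearest-neighbour distance 1) are illustrative. A useful sanity check is that the sum recovers the NaCl Madelung constant $M \approx 1.7476$.

```python
import math
import itertools

L = 2.0                # cubic cell side (nearest-neighbour distance = 1)
V = L**3
alpha = 2.0            # Ewald splitting parameter (convergence control)

# Rock-salt structure: 4 cations and 4 anions per cubic cell.
na = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
cl = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
ions = [(p, +1.0) for p in na] + [(p, -1.0) for p in cl]

# Real-space part: erfc-screened interactions summed over periodic images.
E_real = 0.0
for (ri, qi), (rj, qj) in itertools.product(ions, ions):
    for n in itertools.product(range(-2, 3), repeat=3):
        if ri == rj and n == (0, 0, 0):
            continue
        r = math.dist(ri, [rj[k] + L * n[k] for k in range(3)])
        E_real += 0.5 * qi * qj * math.erfc(alpha * r) / r

# Reciprocal-space part: smooth Gaussian charges summed over k-vectors.
E_recip = 0.0
for m in itertools.product(range(-6, 7), repeat=3):
    if m == (0, 0, 0):
        continue
    k = [2.0 * math.pi * mi / L for mi in m]
    k2 = sum(ki * ki for ki in k)
    re = sum(q * math.cos(sum(ki * xi for ki, xi in zip(k, r))) for r, q in ions)
    im = sum(q * math.sin(sum(ki * xi for ki, xi in zip(k, r))) for r, q in ions)
    E_recip += (2.0 * math.pi / V) * math.exp(-k2 / (4.0 * alpha**2)) / k2 * (re**2 + im**2)

# Self-energy correction for the Gaussian screening charges.
E_self = -alpha / math.sqrt(math.pi) * sum(q * q for _, q in ions)

madelung = -(E_real + E_recip + E_self) / 4.0   # 4 ion pairs per cell
```

The result is independent of `alpha` once both sums are converged, which is the standard internal consistency check of any Ewald implementation.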

The Yukawa potential has been used in the past as the simplest model interaction in atomic nuclei, in dusty plasmas and in other systems, but recently this interaction has appeared in the field of ultracold gases. In recent experiments [TVA08, HTY11] ultracold systems made up of two kinds of fermions, one heavy and one light, have been realized in actual setups, with an effective cooling achieved by means of an additional bosonic component. A theoretical treatment of quasi-two-dimensional systems with this kind of fermionic mixture has been done in Ref. [PAP07]. It is argued that the effective interaction potential between light-heavy pairs of fermions is of Yukawa (screened Coulomb) type, with the feasibility of reaching a gas-crystal phase transition in two-dimensional geometry. In the present Thesis we extend the study of crystallization to the fully three-dimensional case at zero temperature. A similar problem was pursued before [CCK76], but unfortunately it was not solved entirely, and an approximate Lindemann criterion was used instead of full-scale quantum simulations. We find, by means of the diffusion Monte Carlo method, the exact phase diagram as a function of the relevant parameters, that is, the density and the mass ratio between the two Fermi species. The obtained diagram provides valuable information on the minimum requirements for the mass ratio to achieve a phase transition in actual experiments. Thanks to advances in the field of optical lattices, there arises a possibility to produce particle mixtures with extremely high ratios of effective masses. In this Thesis we argue that certain existing setups involving optical lattices allow one to increase the effective mass ratio enough for potentially reaching the crystallization.

In the last decade there has been a new wave of interest in ultracold systems consisting of Rydberg atoms, and a number of interesting experiments have been performed [HRB07, LWK09, HLW11]. A Rydberg atom is a neutral atom with a single electron excited to a high orbital. The important properties of this quantum object are its simplicity and its similarity to a hydrogen atom. Furthermore, its unique property of possessing very strong and controllable interactions over long distances, together with the novel techniques of ultracold atom manipulation, attracted a great deal of attention due to the prospectively rich behavior of mixtures of excited and unexcited atoms. In a typical experiment a trapped cloud of cold atoms is exposed to a laser field, exciting a small fraction of the atoms to a particular Rydberg state. These excited states interact in a much stronger way among themselves than with the unexcited background. The possibility of tuning these forces of large magnitude, and of turning them on and off, as well as a number of other advantageous properties, suggest the use of such systems as a quantum gate, that is, a basic element of quantum circuits. Currently, there is a wide range of proposals for physical systems to realize quantum information processing units: trapped ions [BW08], linear optics [KMN07], superconductors [CW08], quantum dots [LWS03], and so on. The one based on systems made up of Rydberg atoms is unique in terms of the range and the amplitude of the interaction, the working frequency and other advantageous properties [SWM10]. The basic principles of a trapped Rydberg system as a quantum gate stem from the idea of the so-called Rydberg blockade, that is, when a single excited atom shifts the energy levels of the nearby unexcited atoms out of resonance with the driving laser pulse. Further excitations, injected into the cloud, can bring a macroscopic fraction of the cloud to a blockaded mode, allowing for a partial or complete saturation.
In actual experiments, the cloud accommodates many excitations before the suppression of new ones appears [HRB07, TFS04, SRLA04]. The actual physical phase of the excited atoms cannot be accessed directly in the reported experiments, although their arrangement is considered relevant information, both as a standalone physical problem and for the implementation of the quantum gate. A direct observation of a quantum phase transition and of the presence of long-range ordering is argued to be a feasible task in similar systems [WLPB08, LWK09]. In the field of quantum computation, Pohl et al. [PDL10] proposed that the presence of a crystal-like ordering could provide a better control over the quantum states. Insight into the spatial ordering in a cloud of Rydberg atoms may also shed light on the phenomenon of the so-called “antiblockade” [APPR07].

As mentioned before, the behavior of an ultracold mixture of excited Rydberg atoms and unexcited background is profoundly rich and complex. It can also depend substantially on particular experimental conditions, like the cloud geometry, the laser field properties, etc. We perform a study of a model system, in which we neglect the interactions related to the unexcited background and use a pairwise repulsive van der Waals potential for the excited atoms. The general aim of this study is to fully understand the phase diagram of the system. A prospective comparison with future experimental results can demonstrate how well the properties of the system can be derived from this simple model. Since the number of Rydberg atoms typically present in current experimental works is of the order of thousands or greater, in our simulations we look for all the relevant results in the thermodynamic limit. There is also a variety of possible crystal packings which might be realized in the solid phase, hence we discuss which of them are energetically preferable.
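At the static (classical, zero-point-motion-free) level, which packing is preferred can already be checked with a direct lattice sum, since the repulsive $1/r^6$ tail converges absolutely. The sketch below compares fcc and bcc at equal density (illustrative units with unit density and unit interaction strength, both assumptions); the quantum answer of course requires the full QMC treatment described later.

```python
import itertools

# Static energy per particle, E = (1/2) * sum over lattice sites of 1/r^6,
# for a Bravais lattice given by primitive vectors, rescaled to unit density.
def energy_per_particle(prim, shells=8):
    a1, a2, a3 = prim
    vol = abs(a1[0] * (a2[1] * a3[2] - a2[2] * a3[1])
            - a1[1] * (a2[0] * a3[2] - a2[2] * a3[0])
            + a1[2] * (a2[0] * a3[1] - a2[1] * a3[0]))
    s = vol ** (-1.0 / 3.0)        # rescale to one particle per unit volume
    e = 0.0
    for i, j, k in itertools.product(range(-shells, shells + 1), repeat=3):
        if (i, j, k) == (0, 0, 0):
            continue
        r2 = sum((s * (i * a1[d] + j * a2[d] + k * a3[d]))**2 for d in range(3))
        e += 0.5 / r2**3           # 1/r^6 pair energy, half to avoid double count
    return e

fcc = energy_per_particle([(0.0, 0.5, 0.5), (0.5, 0.0, 0.5), (0.5, 0.5, 0.0)])
bcc = energy_per_particle([(-0.5, 0.5, 0.5), (0.5, -0.5, 0.5), (0.5, 0.5, -0.5)])
```

For $1/r^6$ the static energies of the close-packed candidates are extremely close, which is precisely why a careful energetic comparison is needed.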

Another physically relevant system considered in this Thesis is bulk molecular para-hydrogen (p-H$_2$). This system in the quantum regime (at low temperature) was proposed theoretically as a possible candidate for superfluidity, but it crystallizes at a temperature substantially higher than the expected transition temperature, making it impossible to observe a transition to the superfluid phase. In this work, our Group has studied a metastable non-crystalline phase of bulk p-H$_2$ by means of Quantum Monte Carlo methods, in order to find out the temperature at which this system still contains a noticeable superfluid fraction. The ultimate goal that our Group pursued was to frustrate the formation of the crystal in the simulated system and to calculate the temperature dependence of the one-body density matrix and of the superfluid fraction. I present the study of the limit of zero temperature using the diffusion Monte Carlo method. Results for the energy, condensate fraction, and structure of the metastable liquid phase at zero temperature are reported and compared with those of the stable solid phase. The simulation at zero temperature is used by our Group as a starting point for the simulation of the system at low temperatures using the Path Integral Monte Carlo technique.

The structure of the Thesis is as follows.

In Chapter \thechapter we discuss the analytical approaches and approximations used in the subsequent Chapters; we also describe the general concepts of the two-particle scattering problem as a tool to construct Jastrow terms in trial wave functions. Chapter \thechapter explains in detail the Quantum Monte Carlo methods employed in our calculations, from the theoretical and practical points of view. In Chapter \thechapter we explain the Ewald summation technique, applied to a power-law interaction potential, and a generic approach to obtain the Ewald terms. The expressions obtained in this analytic work are implemented in simulations of different physically relevant systems (Rydberg atoms and Yukawa particles). Chapter \thechapter is devoted to the modelling of a system governed by the repulsive model potential between Rydberg atoms. The phase diagram of the system is obtained for a relevant range of densities and temperatures, combining quantum simulations at low temperature and a classical treatment at higher temperature. Special attention is paid to the classical description of this system, composed of Rydberg atoms, and to its comparison with the quantum system. In Chapter \thechapter we present the simulation of a system with the Yukawa interaction potential. The following Chapter \thechapter presents the results of the Quantum Monte Carlo simulations of molecular para-hydrogen at zero and finite temperatures, performed in our Group. Conclusions are drawn in Chapter \thechapter.

## Chapter \thechapter Tools

### 1 Introduction

This Chapter is intended to provide the theoretical basis for the following Chapters. The quantities characterizing the properties of a quantum system (correlation functions, static structure factor, and so on) are introduced here and are later used in the subsequent chapters. We also discuss the two-body scattering problem in a three-dimensional system geometry, which sheds light on the short-range properties of many-body systems. The two-body scattering solution can be used in the development of the trial wave functions needed in the Quantum Monte Carlo algorithms.

The structure of the Chapter is the following.

In Section 2 we introduce experimentally relevant quantities and functions that characterize a quantum system. First of all, we consider their analytic forms in first and second quantization (Secs. 2.1, 2.2). Special attention is given to the relation between correlation functions and mean values of quantum operators. Some correlation functions may be greatly simplified in the case of a homogeneous system, which is presented in Section 2.2. The definitions of and general comments on the static structure factor and the momentum distribution are given in Section 2.3.

Section 3 is devoted to the study of two-body scattering processes in three-dimensional geometry. The solutions of the two-body scattering problem are provided for a number of physically relevant interaction potentials. The main aim of this last Section is to give an efficient tool to construct two-body Jastrow terms of the trial wave function for Quantum Monte Carlo simulations (for details on QMC methodology see Chapter \thechapter).

### 2 Correlation functions

#### 2.1 Second quantization form

A quantum system of identical particles of variable number is generally described with the help of annihilation and creation operators. The commonly used notations for the auxiliary field operators are $\hat\Psi^\dagger(\mathbf{r})$ for an operator creating a particle at the position $\mathbf{r}$, and $\hat\Psi(\mathbf{r})$ for an operator destroying a particle in the same position. By means of the creation operator for the $k$-th state, $\hat a_k^\dagger$, that puts a particle into an orbital $\varphi_k(\mathbf{r})$, and the annihilation operator, $\hat a_k$, that removes a particle from the orbital $\varphi_k(\mathbf{r})$, these field operators can be easily represented in the following form:

$$\hat\Psi(\mathbf{r}) = \sum_k \varphi_k(\mathbf{r})\,\hat a_k, \qquad \hat\Psi^\dagger(\mathbf{r}) = \sum_k \varphi_k^*(\mathbf{r})\,\hat a_k^\dagger \qquad (1)$$

If we consider a uniform gaseous system within a volume $V$, the single-particle states are evidently plane waves, $\varphi_k(\mathbf{r}) = e^{i\mathbf{k}\mathbf{r}}/\sqrt{V}$. Bosonic operators (1) obey the commutation relations $[\hat a_k, \hat a_{k'}^\dagger] = \delta_{kk'}$, $[\hat a_k, \hat a_{k'}] = [\hat a_k^\dagger, \hat a_{k'}^\dagger] = 0$, while fermionic operators obey anticommutation relations.

First of all, let us discuss the relation between the correlation functions and the mean values of one- and two-body quantum mechanical operators. Let us consider the simplest case when the Hamiltonian of the system is a sum of only one-body and two-body operators

$$\hat H = \hat F^{(1)} + \hat F^{(2)}, \qquad (2)$$

where the one-body operator $\hat F^{(1)}$ stands for a sum of the one-body terms, and the two-body operator $\hat F^{(2)}$ is a sum of the corresponding two-body terms, depending on the pairs of coordinates $\mathbf{r}_i$, $\mathbf{r}_j$:

$$\hat F^{(1)} = \sum_{i=1}^{N} \hat f^{(1)}_i, \qquad (3)$$

$$\hat F^{(2)} = \frac{1}{2} \sum_{i \neq j}^{N} \hat f^{(2)}_{i,j}. \qquad (4)$$

Obvious examples of one-body operators are an external potential field, depending only on the particle coordinates, $\hat f^{(1)}_i = V_{\rm ext}(\mathbf{r}_i)$, or the kinetic energy, $\hat f^{(1)}_i = -\hbar^2 \Delta_i / 2m$. The first operator is diagonal in coordinate space, while the second one is diagonal in momentum representation. A typical example of a two-body operator is a pairwise interaction potential, given in coordinate space: $\hat f^{(2)}_{i,j} = V(|\mathbf{r}_i - \mathbf{r}_j|)$.

The representation of the one- and two-body operators $\hat F^{(1)}$ and $\hat F^{(2)}$ in terms of the field operators (1) is straightforward:

$$\hat F^{(1)} = \int\!\!\int \hat\Psi^\dagger(\mathbf{r})\, f^{(1)}(\mathbf{r},\mathbf{r}')\, \hat\Psi(\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}' \qquad (5)$$

$$\hat F^{(2)} = \frac{1}{2} \int\!\!\int \hat\Psi^\dagger(\mathbf{r})\, \hat\Psi^\dagger(\mathbf{r}')\, f^{(2)}(\mathbf{r},\mathbf{r}')\, \hat\Psi(\mathbf{r}')\, \hat\Psi(\mathbf{r})\, d\mathbf{r}\, d\mathbf{r}' \qquad (6)$$

where the factor $1/2$ is introduced to take into account the double summation.

Up to now, we have not restricted ourselves to local one-body operators (that is, those satisfying the relation $f^{(1)}(\mathbf{r},\mathbf{r}') = f^{(1)}(\mathbf{r})\,\delta(\mathbf{r}-\mathbf{r}')$ for the quantum averages), but we also consider non-local operators, that is, ones allowed to depend on two arguments in the corresponding integral of (5).

Correlation functions can be introduced in terms of the field operators in the following way:

$$C_1(\mathbf{r},\mathbf{r}') = \langle \hat\Psi^\dagger(\mathbf{r})\, \hat\Psi(\mathbf{r}') \rangle, \qquad (7)$$

$$C_2(\mathbf{r},\mathbf{r}') = \langle \hat\Psi^\dagger(\mathbf{r})\, \hat\Psi^\dagger(\mathbf{r}')\, \hat\Psi(\mathbf{r}')\, \hat\Psi(\mathbf{r}) \rangle, \qquad (8)$$

Note that in the one-body correlation function (7) we allow for a non-local dependence, so it has two arguments. At the same time we consider only local two-body operators, which is why we keep two arguments instead of four in (6).

The quantum averages of the operators $\hat F^{(1)}$ and $\hat F^{(2)}$ may be obtained from $C_1$ and $C_2$, when the correlation functions are known:

$$\langle \hat F^{(1)} \rangle = \int f^{(1)}(\mathbf{r})\, C_1(\mathbf{r},\mathbf{r})\, d\mathbf{r} \qquad (9)$$

$$\langle \hat F^{(2)} \rangle = \frac{1}{2} \int\!\!\int f^{(2)}(\mathbf{r},\mathbf{r}')\, C_2(\mathbf{r},\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}'. \qquad (10)$$

The correlations of the field operator between two distinct points $\mathbf{r}$ and $\mathbf{r}'$ are characterized by the one-body correlation function $C_1(\mathbf{r},\mathbf{r}')$. The diagonal component of (7) yields the density of the particles, $\rho(\mathbf{r}) = C_1(\mathbf{r},\mathbf{r})$, hence the sum over the diagonal terms, i.e., the trace of the matrix $C_1$, is equal to the total number of particles $N$. The two-body correlation function $C_2(\mathbf{r},\mathbf{r}')$ correspondingly defines the density correlations between the particles at positions $\mathbf{r}$ and $\mathbf{r}'$.

It is convenient to introduce dimensionless versions of these functions (7) and (8):

$$c_1(\mathbf{r},\mathbf{r}') = \frac{C_1(\mathbf{r},\mathbf{r}')}{\sqrt{\rho(\mathbf{r})}\,\sqrt{\rho(\mathbf{r}')}} \qquad (11)$$

$$c_2(\mathbf{r},\mathbf{r}') = \frac{C_2(\mathbf{r},\mathbf{r}')}{\rho(\mathbf{r})\,\rho(\mathbf{r}')} \qquad (12)$$

It can be seen that for bosons the range of any of the functions (11-12) is $[0,1]$, and the function $c_1(\mathbf{r},\mathbf{r}')$ can be interpreted as the probability to remove a particle from the position $\mathbf{r}$ and place it at the position $\mathbf{r}'$. The obvious relation $c_1(\mathbf{r},\mathbf{r}) = 1$ reflects the fact that there is always the possibility to put the particle back in its initial location. If no Bose-Einstein condensate is present, the non-diagonal terms asymptotically vanish in the long-range limit $|\mathbf{r}-\mathbf{r}'| \to \infty$. The function $c_2(\mathbf{r},\mathbf{r}')$ can be understood as the joint probability to find one particle at the point $\mathbf{r}$ and another one at the point $\mathbf{r}'$.

Additional information on correlation functions can be found in Refs. [Fee67, Mah00, Gla63, NG99, GS03].

#### 2.2 First quantization form

The physical meaning of the correlation functions written in the form of second quantization has been briefly discussed in Section 2.1. We will use the Monte Carlo methods in order to evaluate averages over the wave function of the system. For that one needs to represent the averages as integrals of the operators over the wave function $\psi(\mathbf{R})$, with $\mathbf{R} = (\mathbf{r}_1, \ldots, \mathbf{r}_N)$. We will look for the mean values of the operators in forms similar to those of (9) and (10). In first quantization the expectation value of a one-body operator reads

$$\langle A^{(1)} \rangle = \frac{\int \psi^*(\mathbf{R})\, A^{(1)}\, \psi(\mathbf{R})\, d\mathbf{R}}{\int |\psi(\mathbf{R})|^2\, d\mathbf{R}} = \frac{\sum_{i=1}^{N} \int \psi^*(\mathbf{r}_1,\ldots,\mathbf{r}_N)\, a^{(1)}_i\, \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\, d\mathbf{R}}{\int |\psi(\mathbf{R})|^2\, d\mathbf{R}} = \frac{N \int a^{(1)}(\mathbf{r}_1)\, |\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2\, d\mathbf{R}}{\int |\psi(\mathbf{R})|^2\, d\mathbf{R}} = \int a^{(1)}(\mathbf{r})\, C_1(\mathbf{r},\mathbf{r})\, d\mathbf{r}, \qquad (13)$$

with the notation $C_1(\mathbf{r},\mathbf{r}')$ used for the expression

$$C_1(\mathbf{r},\mathbf{r}') = N\, \frac{\int \psi^*(\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N)\, \psi(\mathbf{r}',\mathbf{r}_2,\ldots,\mathbf{r}_N)\, d\mathbf{r}_2 \ldots d\mathbf{r}_N}{\int \psi^*(\mathbf{r}_1,\ldots,\mathbf{r}_N)\, \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\, d\mathbf{r}_1 \ldots d\mathbf{r}_N} \qquad (14)$$

An average of a two-body operator (10) can be expressed in terms of the two-body correlation function (8) in the following way:

$$\langle A^{(2)} \rangle = \frac{\int \psi^*(\mathbf{R})\, A^{(2)}(\mathbf{R})\, \psi(\mathbf{R})\, d\mathbf{R}}{\int |\psi(\mathbf{R})|^2\, d\mathbf{R}} = \frac{\frac{1}{2} \sum_{i \neq j} \int \psi^*(\mathbf{r}_1,\ldots,\mathbf{r}_N)\, a^{(2)}(\mathbf{r}_i,\mathbf{r}_j)\, \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\, d\mathbf{R}}{\int |\psi(\mathbf{R})|^2\, d\mathbf{R}} = \qquad (15)$$

$$= \frac{N(N-1)}{2}\, \frac{\int a^{(2)}(\mathbf{r}_1,\mathbf{r}_2)\, |\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2\, d\mathbf{R}}{\int |\psi(\mathbf{R})|^2\, d\mathbf{R}} = \frac{1}{2} \int\!\!\int a^{(2)}(\mathbf{r}_1,\mathbf{r}_2)\, C_2(\mathbf{r}_1,\mathbf{r}_2)\, d\mathbf{r}_1\, d\mathbf{r}_2, \qquad (16)$$

with the following expression for the first quantization form of the two-body correlation function

$$C_2(\mathbf{r}',\mathbf{r}'') = N(N-1)\, \frac{\int |\psi(\mathbf{r}',\mathbf{r}'',\mathbf{r}_3,\ldots,\mathbf{r}_N)|^2\, d\mathbf{r}_3 \ldots d\mathbf{r}_N}{\int |\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2\, d\mathbf{r}_1 \ldots d\mathbf{r}_N} \qquad (17)$$

The situation becomes easier if we restrict ourselves to the case of a homogeneous system, as it possesses translational symmetry. In the case of a one-body operator, the diagonal element of Eq. (14) is just a constant, whose value is fixed by the density: $C_1(\mathbf{r},\mathbf{r}) = n$. The non-diagonal elements of the normalized matrix of the one-body operator in first quantization (or simply the one-body density matrix) read

$$g_1(r) = \frac{N}{n}\, \frac{\int \psi^*(\mathbf{r},\mathbf{r}_2,\ldots,\mathbf{r}_N)\, \psi(\mathbf{0},\mathbf{r}_2,\ldots,\mathbf{r}_N)\, d\mathbf{r}_2 \ldots d\mathbf{r}_N}{\int |\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2\, d\mathbf{r}_1 \ldots d\mathbf{r}_N} \qquad (18)$$

where $n = N/V$ is the average density of the homogeneous system. The normalized two-body density matrix, also called the pair distribution function, is represented by

$$g_2(r) = \frac{N(N-1)}{n^2}\, \frac{\int |\psi(\mathbf{r},\mathbf{0},\mathbf{r}_3,\ldots,\mathbf{r}_N)|^2\, d\mathbf{r}_3 \ldots d\mathbf{r}_N}{\int |\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2\, d\mathbf{r}_1 \ldots d\mathbf{r}_N} \qquad (19)$$

The basic properties of the pair distribution function in the zero-temperature limit can be deduced in the following way. In a gas phase, density-density correlations vanish for large interparticle distances, which corresponds to $C_2(\mathbf{r}_1,\mathbf{r}_2) \to n(\mathbf{r}_1)\,n(\mathbf{r}_2)$; hence in the thermodynamic limit $g_2(r)$ asymptotically tends to 1.

One faces the opposite situation at short distances, where the particle correlations are strong and the value of $g_2$ can vary depending on the interaction potential. For instance, in the case of a repulsive potential $g_2(0) < 1$; on the contrary, for a purely attractive potential $g_2(0) > 1$; and in the case of a hard-core interaction, when the particles are not allowed to overlap, $g_2(r) = 0$ when $r$ is smaller than the hard-core diameter.
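In a simulation the pair distribution function is estimated as a normalized histogram of interparticle distances accumulated over configurations. A minimal sketch for a cubic box with periodic boundary conditions (illustrative parameters; random ideal-gas configurations stand in for QMC samples, for which $g_2(r) \approx 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, nbins, nconf = 100, 10.0, 25, 200
edges = np.linspace(0.0, L / 2, nbins + 1)   # histogram up to L/2 only
hist = np.zeros(nbins)

for _ in range(nconf):
    pos = rng.uniform(0.0, L, size=(N, 3))   # stand-in for one QMC sample
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                 # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(N, 1)]
    hist += np.histogram(r, bins=edges)[0]

# Normalize by the ideal-gas pair count expected in each spherical shell.
shells = 4.0 * np.pi / 3.0 * np.diff(edges ** 3)
expected = nconf * N * (N - 1) / 2 * shells / L**3
g2 = hist / expected
```

Restricting the histogram to $r \le L/2$ keeps the spherical shells entirely inside the minimum-image cell, so no geometric correction is needed.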

Let us consider the expectation value of a two-body operator (10) and see how it can be simplified in a homogeneous system:

$$\langle A^{(2)} \rangle = \frac{n^2}{2} \int\!\!\int g_2(|\mathbf{r}_1-\mathbf{r}_2|)\, a^{(2)}(|\mathbf{r}_1-\mathbf{r}_2|)\, d\mathbf{r}_1\, d\mathbf{r}_2 = \frac{nN}{2} \int g_2(r)\, a^{(2)}(r)\, d\mathbf{r} \qquad (20)$$

In the integration performed above we made use of the mentioned fact that the operator depends only on the distance between the particles, allowing one of the coordinates to be integrated out.

In the simplest case of a contact delta-potential, $V(r) = g\,\delta(\mathbf{r})$ (where $g$ stands for a coupling constant defining the interaction strength), the potential energy is simply proportional to the value of the pair distribution function at zero distance:

$$\frac{E_{\rm pot}}{N} = \frac{1}{2}\, g\, n\, g_2(0) \qquad (21)$$

#### 2.3 Static structure factor

By virtue of the field operator (1) the momentum distribution is represented as

$$n_k = \langle \hat\Psi_k^\dagger\, \hat\Psi_k \rangle, \qquad (22)$$

The field operator in momentum space, $\hat\Psi_k$, is in fact the Fourier transform of $\hat\Psi(\mathbf{r})$:

$$\hat\Psi_k = \int e^{-i\mathbf{k}\mathbf{r}}\, \hat\Psi(\mathbf{r})\, \frac{d\mathbf{r}}{\sqrt{(2\pi)^D}}, \qquad \hat\Psi(\mathbf{r}) = \int e^{i\mathbf{k}\mathbf{r}}\, \hat\Psi_k\, \frac{d\mathbf{k}}{\sqrt{(2\pi)^D}} \qquad (23)$$

where $D$ stands for the dimensionality of the system. Applying the relation (23) to equation (22), one finds the following form of the momentum distribution

$$n_k = \frac{1}{(2\pi)^D} \int\!\!\int e^{i\mathbf{k}\mathbf{s}}\, C_1\!\left(\mathbf{R} + \frac{\mathbf{s}}{2},\, \mathbf{R} - \frac{\mathbf{s}}{2}\right) d\mathbf{s}\, d\mathbf{R} \qquad (24)$$

It can be noticed that in a homogeneous bulk system the center-of-mass motion can be integrated out separately, as the dependence on the momentum is defined by the distance between the particles, not by the positions themselves. Performing this integration for a homogeneous system yields

$$n_k = n \int e^{-i\mathbf{k}\mathbf{r}}\, g_1(r)\, d\mathbf{r} \qquad (25)$$

For example, for a fully Bose-condensed gas the off-diagonal terms of the one-body density matrix are constant, $g_1(r) = 1$, which results in a singular momentum distribution, $n_k \propto \delta(\mathbf{k})$. In this case all particles are condensed in the $\mathbf{k} = 0$ state.

Another useful quantity is the dynamic structure factor of the system, which characterizes a scattering process corresponding to the exchange of energy and momentum in the scattering event. The dynamic structure factor can be expressed at zero temperature by virtue of the $\mathbf{k}$-component of the density operator

$$\hat\rho_k = \sum_{i=1}^{N} e^{-i\mathbf{k}\mathbf{r}_i} \qquad (26)$$

in the following form

$$S(k,\omega) = \sum_i \left| \langle i |\, \hat\rho_k^\dagger - \langle \hat\rho_k^\dagger \rangle \,| 0 \rangle \right|^2 \delta(\hbar\omega - \hbar\omega_i), \qquad (27)$$

where $\omega_i$ is the frequency of the $i$-th stationary state. The static structure factor is proportional to the frequency integral of the dynamic structure factor; that is, it characterizes the overall probability of scattering of a probe particle with momentum transfer $\hbar\mathbf{k}$. The integration over $\omega$ gives

$$S(k) = \frac{\hbar}{N} \int_0^\infty S(k,\omega)\, d\omega = \frac{1}{N} \left( \langle \hat\rho_k\, \hat\rho_{-k} \rangle - |\langle \hat\rho_k \rangle|^2 \right) \qquad (28)$$

The latter expression (28) can be sampled directly in Quantum Monte Carlo simulations. The other way to write the static structure factor is to relate it to the two-body density matrix by means of equations (8) and (26), keeping in mind the commutation properties of the field operators

$$S(k) = 1 + \frac{1}{N} \int\!\!\int e^{i(\mathbf{r}_2 - \mathbf{r}_1)\mathbf{k}} \left( C_2(\mathbf{r}_1,\mathbf{r}_2) - n(\mathbf{r}_1)\, n(\mathbf{r}_2) \right) d\mathbf{r}_1\, d\mathbf{r}_2 \qquad (29)$$

In the case of a homogeneous system, the positions of the particles enter the two-body density matrix only through the interparticle distance $\mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1$; thus the static structure factor can be rewritten in terms of the Fourier transform of the pair distribution function (12) of the system:

$$S(k) = 1 + n \int e^{i\mathbf{r}\mathbf{k}} \left( g_2(r) - 1 \right) d\mathbf{r} \qquad (30)$$

The static structure factor can yield valuable information on the arrangement and the order of the system, and its value can be directly accessed in spectroscopy experiments.
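Expression (28) suggests the direct estimator used in simulations: accumulate $\hat\rho_k$ over configurations on the grid of box-compatible wave vectors $k = 2\pi m/L$. A minimal sketch (illustrative parameters; uncorrelated random configurations stand in for QMC samples, for which $S(k) = 1$ at $k \neq 0$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, nconf = 64, 8.0, 400
ks = 2.0 * np.pi * np.arange(1, 6) / L        # box-compatible k along one axis

rho = np.zeros((nconf, ks.size), dtype=complex)
for c in range(nconf):
    x = rng.uniform(0.0, L, size=N)           # x-coordinates of one sample
    rho[c] = np.exp(-1j * np.outer(ks, x)).sum(axis=1)   # rho_k estimator

# S(k) = (<|rho_k|^2> - |<rho_k>|^2) / N, cf. expression (28).
S = (np.mean(np.abs(rho) ** 2, axis=0) - np.abs(rho.mean(axis=0)) ** 2) / N
```

Only wave vectors commensurate with the box are meaningful under periodic boundary conditions; other $k$ mix in spurious boundary contributions.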

### 3 Scattering problem

#### 3.1 Introduction

The construction of a trial wave function for a many-body problem is in most cases a very complex task, since the exact solution is generally unknown (rare exceptions are the 1D gases of hard rods and of hard points (the Tonks-Girardeau gas), where analytic solutions are known). A typical approach to developing a trial wave function therefore consists in matching the long-range behavior with a two-body solution at short distances. In this Section we are concerned with two-body scattering problems in three-dimensional systems, whose solutions are then used in the many-body calculations presented in the following chapters.

#### 3.2 Scattering problem in three-dimensional geometry

General approach. In this Section we formulate a generic two-body scattering problem in a 3D geometry. For a three-dimensional system in the low-density regime, the interparticle interaction is supposed to be correctly described by two-body collisions.

Consider two particles with coordinates $\mathbf{r}_1$ and $\mathbf{r}_2$ and respective masses $m_1$ and $m_2$, staying close enough to see the process as a two-particle collision. We suppose that the system is not confined externally, hence the problem may be treated as translationally invariant, with the center of mass moving at a constant speed. Our purpose is to obtain the stationary solution of the Schrödinger equation

$$\left[ -\frac{\hbar^2}{2m_1} \Delta_{\mathbf{r}_1} - \frac{\hbar^2}{2m_2} \Delta_{\mathbf{r}_2} + V(|\mathbf{r}_1 - \mathbf{r}_2|) \right] p(\mathbf{r}_1, \mathbf{r}_2) = E_{12}\, p(\mathbf{r}_1, \mathbf{r}_2). \qquad (31)$$

In the center-of-mass frame the coordinate variables separate, thus the Schrödinger equation for the motion of the center of mass takes a simple form,

$$-\frac{\hbar^2}{2 m_{\rm total}} \Delta_{\mathbf{R}}\, f_R(\mathbf{R}) = E_R\, f_R(\mathbf{R}), \qquad (32)$$

with $m_{\rm total} = m_1 + m_2$ standing for the total mass. The solution of Eq. (32) is of a free-wave form. Omitting the normalization, the solution reads $f_R(\mathbf{R}) = e^{i \mathbf{k}_R \mathbf{R}}$, with $\mathbf{k}_R$ being the initial center-of-mass wave number of the system, with the corresponding energy $E_R = \hbar^2 k_R^2 / 2 m_{\rm total}$.

The equation for the position difference $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$ contains the pairwise interaction potential,

$$\left( -\frac{\hbar^2}{2\mu} \Delta_{\mathbf{r}} + V(|\mathbf{r}|) \right) f(\mathbf{r}) = E\, f(\mathbf{r}), \qquad (33)$$

with

$$\mu = \frac{m_1 m_2}{m_1 + m_2} \qquad (34)$$

denoting the reduced mass. Once the solutions of Eqs. (32-33) are found, it is possible to obtain the needed solution of the scattering problem (31) as

$$p(\mathbf{r}_1, \mathbf{r}_2) = f_R(\mathbf{R})\, f(\mathbf{r}), \qquad E_{12} = E + E_R \qquad (35)$$

We will assume that the energy of the incident particle is small enough and the solution is spherically symmetric, $f(\mathbf{r}) = f(r)$. Under these prescriptions, and using the spherically symmetric representation of the Laplacian, $\Delta = \frac{1}{r^2} \frac{\partial}{\partial r} \left( r^2 \frac{\partial}{\partial r} \right)$, one can conveniently rewrite equation (33) by introducing the auxiliary function

 f(r)=\frac{w(r)}{r} \qquad (36)

in a way that its analytic form is similar to a one-dimensional Schrödinger equation:

 -\frac{\hbar^2}{2\mu}w''(r)+V(r)w(r)=E\,w(r) \qquad (37)

with the additional requirement of the boundary condition

 w(0)=0 \qquad (38)

At distances large compared to the range of the potential, one can neglect the $V(r)$ term in Eq. (37), which leaves a free-wave differential equation:

 -\frac{\hbar^2}{2\mu}w''(r)=E\,w(r) \qquad (39)

The solution of this equation is a free wave,

 w(r)=A\sin(k_s r+\delta(k_s)) \qquad (40)

where

 \hbar k_s=\sqrt{2\mu E} \qquad (41)

stands for the momentum of the incident particle, $\delta(k_s)$ is the scattering phase shift, and $A$ is an arbitrary constant.

Properties of the low-energy scattering depend on a single parameter, namely the $s$-wave scattering length $a_{3D}$. Its value can be obtained from the phase shift as the limit

 a_{3D}=-\lim_{k_s\to 0}\frac{\delta(k_s)}{k_s} \qquad (42)

In the asymptotic low-momentum limit (slow particles) the scattering solution (40) may be expanded as

 f(r)\to {\rm const}\left(1-\frac{a_{3D}}{r}\right) \qquad (43)

and it is easy to see that it has a node at a distance $r=a_{3D}$. The position of the first node of the positive-energy scattering solution in the limit of low momenta can thus be taken as an equivalent definition of the scattering length in a three-dimensional system.
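The extraction of the scattering length from the phase shift can be illustrated numerically. The sketch below (not part of the thesis) integrates the radial equation (37) at a small incident momentum, in assumed reduced units $\hbar=2\mu=1$ so that the equation reads $w''=(V-k^2)w$, for an example square-barrier potential whose scattering length is known in closed form, $a=R-\tanh(\varkappa R)/\varkappa$ with $\varkappa=\sqrt{V_0}$; the phase shift is read off by matching to $w=A\sin(kr+\delta)$ at large distance.

```python
import math

def phase_shift(k, V0=10.0, R=1.0, r_max=20.0, h=1e-3):
    """Integrate w'' = (V(r) - k^2) w (reduced units hbar = 2*mu = 1)
    for a square barrier V = V0 (r < R), 0 (r >= R), and extract delta(k)
    by matching to w(r) = A sin(k r + delta) at large r."""
    def V(r):
        return V0 if r < R else 0.0
    def deriv(r, w, wp):
        return wp, (V(r) - k * k) * w
    r, w, wp = 1e-9, 1e-9, 1.0        # regular solution: w ~ r near the origin
    while r < r_max:                  # classical RK4 step
        k1w, k1p = deriv(r, w, wp)
        k2w, k2p = deriv(r + h / 2, w + h / 2 * k1w, wp + h / 2 * k1p)
        k3w, k3p = deriv(r + h / 2, w + h / 2 * k2w, wp + h / 2 * k2p)
        k4w, k4p = deriv(r + h, w + h * k3w, wp + h * k3p)
        w += h / 6 * (k1w + 2 * k2w + 2 * k3w + k4w)
        wp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        r += h
    # from w = A sin(kr + delta), w' = A k cos(kr + delta): tan(kr+delta) = k w / w'
    delta = math.atan2(k * w, wp) - k * r
    return (delta + math.pi / 2) % math.pi - math.pi / 2   # wrap to (-pi/2, pi/2]

k = 0.01
a_numeric = -phase_shift(k) / k                  # Eq. (42) at small momentum
kappa = math.sqrt(10.0)
a_exact = 1.0 - math.tanh(kappa) / kappa         # known square-barrier result
assert abs(a_numeric - a_exact) < 1e-2
```

The same routine works for any short-range repulsive potential; only the function `V(r)` needs to be replaced.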

In the low-density regime of a weakly interacting gas, the interparticle distance is large compared to the range of the potential. Under such conditions the exact shape of the interaction potential is not important, and the description in terms of the $s$-wave scattering length is universal.

In the next several sections we will consider the scattering problem for the hard-sphere potential (3.2.2) as the simplest example; afterwards the same problem will be solved for the Yukawa potential (3.2.3) and for a common potential between Rydberg atoms (3.2.4). We will obtain explicit expressions for the scattering functions, which are of great importance, since in many cases they can give a deep physical insight into the properties of a many-body system. Indeed, under particular conditions a relation between the correlation functions and the scattering wave function can be found. Another important point is that a many-body trial wave function is often taken as a product of two-body scattering functions $\prod_{i<j}f(r_{ij})$. Hence, these calculations are essential for their subsequent implementation in the quantum Monte Carlo algorithms.

**Scattering on a hard-sphere potential.** As mentioned in Sec. 3.2.1, in the limit of low-energy collisions the interaction potential is characterized exclusively by one parameter, the $s$-wave scattering length. This means that the scattering on any potentials possessing equal scattering lengths is the same, and the scattering is said to become universal. If scattering on a repulsive potential is considered, the easiest choice is the hard-sphere (HS) potential:

 V_{hs}(r)=\begin{cases}+\infty, & r<a \\ 0, & r\ge a\end{cases} \qquad (44)

This potential is controlled by a single parameter, which we denote $a$ in the definition (44). Clearly this value can be treated as the range of the potential. At the same time it has the meaning of the scattering length, as defined in (42). This comes out naturally from the solution of the scattering problem, since by definition the $s$-wave scattering length is the position of the first node of the analytic continuation of the large-distance free-wave solution. For the hard-sphere potential this free-wave solution is valid for $r>a$, with the position of the node given by $r=a$.

The Schrödinger equation (37) can be rewritten as (for equal masses the reduced mass is $\mu=m/2$)

 -\frac{\hbar^2}{m}w''(r)+V_{hs}(r)w(r)=E\,w(r) \qquad (45)

A particle is unable to penetrate the potential wall of the hard core, thus the solution vanishes at distances smaller than the hard-sphere radius. Notice that outside the core the energy has only a kinetic component, and the interaction potential does not enter explicitly; nevertheless, it affects the boundary condition of the solution:

 w''(r)=-k^2 w(r), \quad r>a, \qquad w(a)=0 \qquad (46)

with the convenient substitution $k=\sqrt{mE}/\hbar$ for the frequency. The solution of the differential equation (46) is easily obtained. Combining it with (36), one gets:

 f(r)=\begin{cases}0, & r<a \\ A\,\dfrac{\sin(k(r-a))}{r}, & r\ge a\end{cases} \qquad (47)

with $A$ an arbitrary constant. The phase shift $\delta(k)=-ka$ is linearly related to the wave vector of the incident particle, so according to (42) the range $a$ of the potential (44) has indeed the meaning of the three-dimensional scattering length, as stated above in this same Section.
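The hard-sphere result $a_{3D}=a$ can also be seen as the limit of a finite (square-barrier) repulsive core of growing height. A minimal sketch (not from the thesis, in assumed reduced units $\hbar=2\mu=1$): matching the zero-energy interior solution $\sinh(\varkappa r)$ to the exterior linear solution $\propto(r-a)$ gives the standard closed form $a = R - \tanh(\varkappa R)/\varkappa$, which tends to the core radius $R$ as the barrier becomes impenetrable.

```python
import math

def soft_sphere_a(V0, R=1.0):
    # Zero-energy scattering length of a square barrier of height V0
    # (reduced units hbar = 2*mu = 1): a = R - tanh(kappa R)/kappa, kappa = sqrt(V0)
    kappa = math.sqrt(V0)
    return R - math.tanh(kappa * R) / kappa

heights = [10.0, 1e2, 1e4, 1e8]
lengths = [soft_sphere_a(V0) for V0 in heights]

# the scattering length grows monotonically toward the hard-sphere limit a -> R
assert all(x < y for x, y in zip(lengths, lengths[1:]))
assert abs(soft_sphere_a(1e8) - 1.0) < 1e-3
```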

**Scattering on the Yukawa potential.** The solution of the two-body scattering problem, as mentioned above, is used to construct trial wave functions for subsequent use in quantum Monte Carlo simulations. The need for a correctly posed short-distance approximation of the trial wave function stems from the fact that the majority of physically relevant pairwise interactions contain a repulsive core, which can vary in hardness. Since the diffusion Monte Carlo method is based on sampling from the particle distribution, an inaccurate choice of the trial wave function at small $r$ can result in a substantial growth of the overall error of the calculation. This may manifest itself as the need to increase the number of walkers to reach convergence, as well as a growth of the overall statistical variance. On the other hand, errors in the long-range part are usually easily "corrected" by the DMC algorithm, as this range is statistically well represented, and the actual discrepancy between the trial wave function and the ground-state solution there is relatively small.

The two-body scattering problem for the Yukawa potential is solved in a manner similar to that used for the system of hard spheres. The solution of the initial Schrödinger equation is considered spherically symmetric and is therefore rewritten as a one-dimensional equation in a single argument, the interparticle distance $r$. Looking for the solution in a form convenient for subsequent use as a Bijl-Jastrow term in Monte Carlo calculations, in accordance with Eq. (37) we write

 -\frac{\hbar^2}{m}w''(r)+V_0\frac{e^{-2r}}{r}w(r)=E\,w(r) \qquad (48)

The solution of the latter equation at short distances can be obtained by approximating the exponential factor $e^{-2r}$ by unity. One possibility is to consider the scattering at a finite energy and fix the value of $E$ by continuous matching with the long-range behavior of the Bijl-Jastrow term; Equation (48) can then be solved in terms of hypergeometric functions. A less precise description is obtained by setting $E=0$; still, the resulting Bijl-Jastrow term is well suited for DMC calculations. The case

 -\frac{\hbar^2}{m}w''(r)+\frac{V_0}{r}w(r)=0 \qquad (49)

results in a simpler solution, which is a linear combination of the modified Bessel functions of the first kind and of the second kind with particular square-root factors,

 f(r)=C_I\,\frac{I_1(2\sqrt{r/\varkappa})}{\sqrt{r}}+C_K\,\frac{K_1(2\sqrt{r/\varkappa})}{\sqrt{r}} \qquad (50)

with $\varkappa=\hbar^2/(mV_0)$. The first component is finite as $r\to 0$, while the second one diverges in this limit. The overall arbitrary constant is irrelevant for the QMC algorithms that we use, therefore we stick to the following form of the two-body scattering solution:

 f(r)=\frac{I_1(2\sqrt{r/\varkappa})}{\sqrt{r}}=\frac{1}{\sqrt{\varkappa}}+\frac{r}{2\varkappa^{3/2}}+\frac{r^2}{12\varkappa^{5/2}}+O(r^3) \qquad (51)

A more precise analytic solution can be found by using a higher-order expansion of the Yukawa exponential factor, that is $e^{-2r}\approx 1-2r$; however, using only the first term of the expansion is usually enough for practical purposes.
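The short-range expansion of the Bessel-function solution can be checked directly from the power series of $I_1$, namely $I_1(x)=\sum_k (x/2)^{2k+1}/(k!\,(k+1)!)$, which gives $I_1(2\sqrt{r/\varkappa})/\sqrt{r}=\varkappa^{-1/2}+r/(2\varkappa^{3/2})+r^2/(12\varkappa^{5/2})+O(r^3)$. A small self-contained sketch (the value of $\varkappa$ is an arbitrary choice for illustration):

```python
import math

def f_bessel(r, kappa, terms=30):
    """f(r) = I1(2*sqrt(r/kappa)) / sqrt(r), via the power series of I1."""
    x_half = math.sqrt(r / kappa)   # x/2, where x = 2*sqrt(r/kappa)
    s = sum(x_half ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
            for k in range(terms))
    return s / math.sqrt(r)

def f_series(r, kappa):
    """Short-range expansion: 1/sqrt(kappa) + r/(2 kappa^(3/2)) + r^2/(12 kappa^(5/2))."""
    return kappa ** -0.5 + r / (2 * kappa ** 1.5) + r ** 2 / (12 * kappa ** 2.5)

kappa = 0.7   # arbitrary illustrative value
for r in (1e-3, 1e-2, 1e-1):
    assert abs(f_bessel(r, kappa) - f_series(r, kappa)) < 1e-3
```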

The diffusion Monte Carlo study of Yukawa systems by Ceperley et al. [CCK78] was based on the following form of the Jastrow term of the trial wave function, typically used in nuclear-matter calculations:

 u(r)=A\,e^{-Br}\left(1-e^{-r/D}\right)/r \qquad (52)

It can be shown that this expression can be made to coincide, in the leading terms, with the expansion of the solution (51) of the two-body scattering problem:

 e^{-u(r)}=e^{-A/D}+\frac{A(1+2BD)\,e^{-A/D}}{2D^2}\,r+\frac{A\left(3A+12ABD+12AB^2D^2-4D-12BD^2-12B^2D^3\right)e^{-A/D}}{24D^4}\,r^2+O(r^3) \qquad (53)

One can note that the Jastrow term (53) coincides with the solution (51) of the two-body scattering problem for a particular choice of the variational parameters $A$, $B$, $D$. Notice that these parameters are subject to variational optimization within a quantum Monte Carlo framework, although the trial wave function constructed from a two-body scattering solution is fixed for any given choice of the particle mass and of the interaction strength. In practice, however, it can be convenient to keep the functional form of the solution and to treat its characteristic coefficients as variational parameters. A typical Jastrow term of this symmetrized functional form is shown in Fig. 2 in the Section devoted to the Monte Carlo methodology.
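The short-range Taylor coefficients of $e^{-u(r)}$ for the Jastrow exponent of Eq. (52) can be verified numerically. In the sketch below (not from the thesis) the coefficients are recomputed independently from the expansions $e^{-Br}\approx 1-Br+B^2r^2/2$ and $1-e^{-r/D}\approx r/D-r^2/(2D^2)+r^3/(6D^3)$; the parameter values are an arbitrary illustrative choice.

```python
import math

A, B, D = 1.0, 0.5, 1.0   # sample variational parameters (arbitrary choice)

def u(r):
    # Jastrow exponent of the nuclear-matter form: u(r) = A exp(-B r)(1 - exp(-r/D))/r
    return A * math.exp(-B * r) * (1.0 - math.exp(-r / D)) / r

# Taylor coefficients of exp(-u(r)) around r = 0
c0 = math.exp(-A / D)
c1 = A * (1 + 2 * B * D) * c0 / (2 * D ** 2)
c2 = A * (3 * A + 12 * A * B * D + 12 * A * B ** 2 * D ** 2
          - 4 * D - 12 * B * D ** 2 - 12 * B ** 2 * D ** 3) * c0 / (24 * D ** 4)

for r in (1e-3, 1e-2):
    series = c0 + c1 * r + c2 * r ** 2
    assert abs(math.exp(-u(r)) - series) < 1e-5   # agreement up to O(r^3)
```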

It is worth noticing that in the case of the Yukawa potential the pair distribution function at zero distance can be finite, as happens for the Coulomb potential. This is easily confirmed by taking a series expansion of Eq. (52). The typical value of the leading exponent $A/D$ in the conditions of our problem is of the order of 10, therefore the value of the trial wave function at zero is very small and practically indistinguishable from zero, as one can see from Fig. 2.

**Scattering on a repulsive van der Waals potential.** A similar derivation can be applied to obtain the solution of the two-body scattering problem for a system of Rydberg atoms interacting via the repulsive van der Waals potential $C_6/r^6$. The Schrödinger equation for the two-body Hamiltonian in the reduced units of this problem (see Sec. \thechapter), written in the center-of-mass frame, reads

 -w''(r)+\frac{C_6}{r^6}w(r)=E\,w(r) \qquad (54)

where we used the familiar substitution $f(r)=w(r)/r$. The finite low-energy solution of the latter equation has the following form (normalization is omitted):

 f(r)=\frac{1}{\sqrt{r}}\,K_{1/4}\!\left(\frac{\sqrt{C_6}}{2r^2}\right) \qquad (55)

where $K_{1/4}$ stands for the modified Bessel function of the second kind of order $1/4$. Its expansion in powers of $r$ is given by

 \frac{1}{\sqrt{r}}\,K_{1/4}\!\left(\frac{\sqrt{C_6}}{2r^2}\right)=e^{-\frac{\sqrt{C_6}}{2r^2}}\left[\frac{\sqrt{\pi}}{C_6^{1/4}}\sqrt{r}+O(r^{5/2})\right] \qquad (56)

The short-range behavior of the function is dominated by the exponential term $e^{-\sqrt{C_6}/(2r^2)}$, which smoothly approaches zero as $r\to 0$. As in the case of the Yukawa potential, the interaction constant $C_6$ and the power of $r$ in the exponent can conveniently be treated as variational parameters, while keeping the overall functional form of the trial wave function intact. A typical form of the trial wave function and of the pair distribution function for this system is presented in Fig. 1.

Figure 1: Typical forms of the trial wave functions for (a) the Yukawa potential, and (b) the model potential between Rydberg atoms.

## Chapter \thechapter Quantum Monte Carlo methods

### 4 Introduction

Quantum Monte Carlo (QMC) methods are very efficient and versatile tools for the investigation of quantum many-body systems (for a detailed review of the methods see, for instance, [Cep95][Gua98]). The use of quantum Monte Carlo techniques provides deep insight into the microscopic behavior of quantum states of matter. QMC methods are essentially ab initio, relying on a microscopic description of a system and gathering valuable information on its properties through numerical simulation. In certain cases this technique turns out to be the only tool for studying complex problems at a reasonable computational cost. Indeed, in order to obtain a model that can be solved analytically in an exact way, a physicist usually has to make approximations of some kind; this can be avoided to a large extent by means of quantum Monte Carlo simulations. As an example, the applicability of perturbation theory is limited to small values of the perturbation parameter, while QMC methods present no restrictions of this kind. Quantum Monte Carlo techniques allow one to find the ground-state solution of the many-body Schrödinger equation at zero temperature. As follows from the name, quantum Monte Carlo methods are based on stochastic numerical algorithms of various sorts, which by nature are especially advantageous when the system in question possesses many degrees of freedom. As for any other stochastic procedure, QMC methods provide results with a certain statistical error, which can be reduced by performing longer measurement series.

We are interested in studying the quantum properties of a given system. Quantum effects manifest themselves most strongly when they are not disturbed by thermal motion, that is at the lowest temperatures, when the system remains in its ground state. In this case a possible method of choice is the diffusion Monte Carlo (DMC) method. For a bosonic system this method allows one to obtain the exact result for the ground-state energy, as well as for any diagonal property of the state.

In this chapter we first discuss the variational Monte Carlo method as the simplest one. Then we discuss the bosonic diffusion Monte Carlo method and give a detailed explanation of how trial wave functions are constructed. Finally, we present the main ideas for the implementation of the sampling of the quantities of interest.

### 5 Variational Monte Carlo

#### 5.1 General notes

The simplest of the quantum Monte Carlo methods is the variational method (VMC). The general idea behind this technique is to find an approximate wave function $\psi_{trial}$, called the trial wave function (or, sometimes, the variational wave function), and then, by sampling from the probability distribution

 p(r)=|\psi_{trial}(r)|^2 \qquad (57)

calculate averages of physical quantities. It can be shown that $E_T$, the expectation value of the Hamiltonian, is an upper bound to the ground-state energy $E_0$. By expanding the normalized trial wave function in the basis of the normalized eigenfunctions of the Hamiltonian,

 \psi_T=\sum_{i=0}^{\infty}c_i\phi_i \qquad (58)

 \sum_{i=0}^{\infty}|c_i|^2=1 \qquad (59)

one can rewrite the variational energy as

 E_T=\frac{\langle\psi_T|\hat{H}|\psi_T\rangle}{\langle\psi_T|\psi_T\rangle} \qquad (60)
 =\Big\langle\sum_{i=0}^{\infty}c_i\phi_i\Big|\hat{H}\Big|\sum_{j=0}^{\infty}c_j\phi_j\Big\rangle=\sum_{i,j=0}^{\infty}c_i^{*}c_j\langle\phi_i|\hat{H}|\phi_j\rangle \qquad (61)
 =\sum_{i,j=0}^{\infty}c_i^{*}c_j\,\delta_{i,j}E_i=\sum_{i=0}^{\infty}E_i|c_i|^2\ge E_0\sum_{i=0}^{\infty}|c_i|^2=E_0 \qquad (62)

where $E_i$ stands for the corresponding eigenvalue (the energy of the $i$-th state). By minimizing the variational energy with respect to the parameters entering the trial wave function, one can optimize it within the given class of wave functions. The only situation in which zero variance can be reached is when the wave function is known exactly.
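The variational bound (60-62) can be illustrated on a toy problem. The following sketch (not from the thesis) evaluates $\langle\hat{H}\rangle$ for a Gaussian trial function $\psi=e^{-\alpha x^2}$ in the 1D harmonic oscillator (in assumed units $\hbar=m=\omega=1$, so $E_0=1/2$) by simple grid integration; analytically $E(\alpha)=\alpha/2+1/(8\alpha)$, minimal at $\alpha=1/2$.

```python
import math

def variational_energy(alpha, n=4000, L=8.0):
    """<H> for trial psi(x) = exp(-alpha x^2) with H = -1/2 d^2/dx^2 + x^2/2
    (harmonic oscillator, hbar = m = omega = 1), by grid integration."""
    dx = 2 * L / n
    num = den = 0.0
    for i in range(n + 1):
        x = -L + i * dx
        p2 = math.exp(-alpha * x * x) ** 2
        # local energy E_L = -psi''/(2 psi) + x^2/2 = alpha + (1/2 - 2 alpha^2) x^2
        e_loc = alpha + (0.5 - 2 * alpha ** 2) * x * x
        num += p2 * e_loc * dx
        den += p2 * dx
    return num / den

E0 = 0.5   # exact ground-state energy
for alpha in (0.2, 0.5, 1.0, 2.0):
    assert variational_energy(alpha) >= E0 - 1e-9   # upper-bound property
assert abs(variational_energy(0.5) - E0) < 1e-9     # bound saturated by exact psi
```

Only the exact choice $\alpha=1/2$ saturates the bound, in which case the local energy is a constant, anticipating the zero-variance property mentioned above.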

#### 5.2 Usage of the VMC method

If the wave function corresponding to the ground state is known exactly, sampling by means of the variational Monte Carlo method permits the exact evaluation of any static property of the system, within some statistical error. Such systems are scarce; the ground state of a system of hard rods [KMJ99] and the Tonks-Girardeau gas [Gir60] are among the most famous ones. In this case the variational Monte Carlo method can provide the correlation functions, which might not be accessible directly through the wave function.

The variational Monte Carlo approach not only provides a valuable description of quantum systems, but can also be used as a first step to deliver good-quality input for the diffusion Monte Carlo method. The efficiency, and even the applicability, of the latter depends substantially on the optimization of the trial wave function within a chosen class of functions.

#### 5.3 Notes on algorithmic realization

Let us stick to the coordinate representation in the following description, since it is the easiest way to work with external or interparticle potentials. Consider a common $D$-dimensional Euclidean space containing a system of $N_p$ particles. The probability distribution function of this system is a function of the variables $r_1,\dots,r_{N_p}$. The mean value of an arbitrary operator $Z$ is therefore calculated as an integral of $DN_p$ dimensions in the following form:

 \langle Z\rangle=\frac{\int\dots\int Z(r_1,\dots,r_{N_p})\,p(r_1,\dots,r_{N_p})\,dr_1\dots dr_{N_p}}{\int\dots\int p(r_1,\dots,r_{N_p})\,dr_1\dots dr_{N_p}} \qquad (63)

It is clear that the complexity of estimating this integral with conventional non-stochastic methods based on a grid grows extremely fast with the number of particles, and already for a few dozen particles the calculation becomes intractable. On the other hand, the stochastic procedure on which the Monte Carlo methods are based is not strongly affected by the growth of the dimensionality of the problem. The basic idea of the variational technique is to generate configuration states $R_j$ with the probability distribution $p(R)$ and, by means of these states, estimate the average value of the operator $Z$,

 \langle Z\rangle\approx\frac{1}{m}\sum_{j=1}^{m}Z(R_j) \qquad (64)
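The estimator (64) can be sketched in one line for a case where samples from $p$ are available directly (a Gaussian distribution, chosen here only as an illustration; the statistical error shrinks as $1/\sqrt{m}$):

```python
import random

random.seed(1)

# Estimate <Z> = <x^2> over p(x) ~ exp(-x^2/2) by direct sampling, as in Eq. (64)
m = 200_000
estimate = sum(x * x for x in (random.gauss(0.0, 1.0) for _ in range(m))) / m

# exact value is 1; statistical error is of order sqrt(2/m) ~ 0.003
assert abs(estimate - 1.0) < 0.02
```

For a general $p$ known only up to normalization, direct sampling is not available, and one resorts to the Metropolis algorithm described next.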

Every state is obtained only from its preceding configuration, thus this set of states is indeed a Markov chain. The Metropolis algorithm [MRR53], despite its simplicity, is a very efficient tool to produce a chain with the desired probability distribution $p$. The transition from the old configuration $R_{old}$ to the new one $R_{new}$ is accepted with the probability $P_{acc}$, expressed by the formula

 P_{acc}=\min\left(1,\ \frac{p(R_{new})}{p(R_{old})}\right) \qquad (65)

In a quantum system, the probability distribution is given by the square of the wave function. The way we construct the particular trial wave function for different quantum systems will be discussed in Sec. 7.

There are distinct approaches to performing transitions (or trial moves) between different particle configurations. A straightforward way of creating a new state is to move one particle at a time, or all the particles at once, where the displacements are a set of randomly chosen vectors with a certain upper bound $\Delta$ on their amplitude. The limiting amplitude $\Delta$ can be adjusted in order to obtain a desired acceptance rate. It is readily seen that for very small values of $\Delta$ the trial moves are accepted almost all the time, but the adjacent states are strongly correlated, which increases the variance of the overall statistics; in the limit $\Delta\to 0$ all the states are the same, making the whole calculation pointless. On the other hand, steps of very large amplitude are accepted only a small fraction of the time, which also leads to poor performance. The value of $\Delta$ can be optimized to ensure the maximum total displacement of the whole system by means of a set of benchmark calculations, but in practice the simple rule of thumb of keeping the acceptance rate of the trial moves in a moderate range provides good enough results.
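The Metropolis sampling with an adjustable step amplitude can be sketched for a toy system (not from the thesis): a single particle in the 1D harmonic oscillator with the Gaussian trial function $\psi=e^{-\alpha x^2}$, in assumed units $\hbar=m=\omega=1$. For the exact choice $\alpha=1/2$ the local energy is constant, illustrating the zero-variance property; any other $\alpha$ yields an upper bound to $E_0=1/2$.

```python
import math
import random

random.seed(7)

def vmc_energy(alpha, steps=40_000, delta=1.0):
    """Metropolis sampling of p(x) = |psi(x)|^2 with psi = exp(-alpha x^2)
    for the 1D harmonic oscillator (hbar = m = omega = 1)."""
    x, e_sum, accepted = 0.0, 0.0, 0
    for _ in range(steps):
        x_new = x + random.uniform(-delta, delta)          # trial move, Eq. (65)
        # acceptance ratio p(x_new)/p(x) = exp(-2 alpha (x_new^2 - x^2))
        if random.random() < math.exp(-2 * alpha * (x_new ** 2 - x ** 2)):
            x, accepted = x_new, accepted + 1
        # local energy E_L(x) = alpha + (1/2 - 2 alpha^2) x^2
        e_sum += alpha + (0.5 - 2 * alpha ** 2) * x * x
    return e_sum / steps, accepted / steps

e_exact, rate = vmc_energy(0.5)
assert abs(e_exact - 0.5) < 1e-12    # exact trial function: zero variance
e_bad, _ = vmc_energy(0.3)
assert e_bad > 0.5                   # non-optimal alpha: upper bound to E0
```

Varying `delta` and watching `rate` reproduces the trade-off discussed above between strongly correlated moves and rarely accepted ones.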

A generalization of these two strategies of particle displacement, in which only a particular fraction of the particles is shifted in one step, can bring even faster performance. The group of particles to move in one step can be chosen randomly. The advantage of this technique lies in the possibility of fine-tuning the calculation parameters in order to reach an optimal point in the interplay between the correlation of the states and the acceptance rate.

### 6 Diffusion Monte Carlo technique

The diffusion Monte Carlo (DMC) method is a stochastic computational technique applied to systems at zero temperature, when all thermal motion can be neglected. The key point of the DMC method is to solve the time-dependent Schrödinger equation in imaginary time, whose solution is known to decay exponentially to the ground-state solution in the limit of long times. By means of the diffusion Monte Carlo method the equation of state of the system, as well as its diagonal properties, can be calculated exactly, at the only cost of a controlled statistical noise.

#### 6.1 Schrödinger equation

The wave function of a quantum system obeys the Schrödinger equation

 i\hbar\frac{\partial}{\partial t}\psi(R,t)=\hat{H}\psi(R,t) \qquad (66)

Our aim is to find the ground-state properties of the system, rather than its actual time evolution. Substituting the time variable by an imaginary one, $\tau=it/\hbar$, one arrives at another representation of the Schrödinger equation,

 -\frac{\partial}{\partial\tau}\psi(R,\tau)=(\hat{H}-E)\psi(R,\tau) \qquad (67)

where $E$ stands for a constant energy shift close to the expected ground-state energy. The latter equation (67) has the formal solution $\psi(R,\tau)=e^{-(\hat{H}-E)\tau}\psi(R,0)$. One can expand this solution as a sum over the eigenstates $\phi_i$ of the Hamiltonian, with the eigenstates ordered so that the eigenvalues grow with ascending index, that is, $E_0$ is the lowest eigenvalue. Performing the expansion,

 \psi(R,\tau)=\sum_{i=0}^{\infty}a_i\phi_i(R)\,e^{-(E_i-E)\tau} \qquad (68)

It is easily seen that each exponential in the sum decays or grows depending on whether $E_i-E$ is positive or negative. For large enough $\tau$ the only component of the sum (68) that survives is the one corresponding to the ground state; all the other terms decay exponentially fast in time (we suppose the spectrum to be discrete):

 \psi(R,\tau)\to a_0\,\phi_0(R)\,e^{-(E_0-E)\tau}, \quad \tau\to\infty \qquad (69)
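The imaginary-time projection can be demonstrated deterministically, without any sampling, on a discretized toy Hamiltonian (not from the thesis): the 1D harmonic oscillator on a grid, in assumed units $\hbar=m=\omega=1$. Repeatedly applying the short-time propagator, approximated to first order as $1-\Delta\tau(\hat{H}-E)$, filters out the excited states and drives an arbitrary initial state toward $\phi_0$ with energy $E_0=1/2$.

```python
import math

# 1D harmonic oscillator on a uniform grid with Dirichlet boundaries
n, L = 200, 10.0
dx = L / n
xs = [-L / 2 + (i + 0.5) * dx for i in range(n)]
V = [0.5 * x * x for x in xs]

def apply_H(psi):
    """H psi with a 3-point finite-difference Laplacian."""
    out = []
    for i in range(n):
        left = psi[i - 1] if i > 0 else 0.0
        right = psi[i + 1] if i < n - 1 else 0.0
        lap = (left - 2.0 * psi[i] + right) / dx ** 2
        out.append(-0.5 * lap + V[i] * psi[i])
    return out

psi = [1.0] * n                        # arbitrary initial state
dtau, E_shift = 1e-3, 0.5              # dtau small enough for stability
for _ in range(5_000):
    h = apply_H(psi)
    psi = [p - dtau * (hp - E_shift * p) for p, hp in zip(psi, h)]
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]      # renormalize each step

# the Rayleigh quotient approaches the ground-state energy E0 = 1/2
energy = sum(p * hp for p, hp in zip(psi, apply_H(psi))) * dx
assert abs(energy - 0.5) < 1e-2
```

In the actual DMC method the same projection is realized stochastically, with walkers sampling the propagated distribution instead of a grid.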

A general expression for the Hamiltonian of a many-body system of