Lecture notes on Diagrammatic Monte Carlo for the Fröhlich polaron

J. Greitemann1, L. Pollet1*

1 Department of Physics and Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, Theresienstr. 37, 80333 Munich, Germany

* Lode.Pollet@physik.uni-muenchen.de

August 3, 2019

Abstract

These notes are intended as a detailed discussion on how to implement the diagrammatic Monte Carlo method for a physical system which is technically simple and where it works extremely well, namely the Fröhlich polaron problem. Sampling schemes for the Green function as well as the self-energy in the bare and skeleton (bold) expansion are disclosed in full detail. We discuss the Monte Carlo updates, possible implementations in terms of common data structures, as well as techniques on how to perform the Fourier transforms for functions with discontinuities. Control over the variety of parameters, especially in the bold scheme, is demonstrated. Sample codes are made available online along with extensive documentation. Towards the end, we discuss various extensions of the method and their applications. After working through these notes, the reader will be well equipped to explore the richness of the diagrammatic Monte Carlo method for quantum many-body systems.

 


1 Introduction

These notes originate from a series of lectures taught at international summer schools intended for researchers interested in numerical methods and strongly correlated systems. They introduce the diagrammatic Monte Carlo (DiagMC) method, a quantum Monte Carlo method for strongly correlated systems in which one, simply put, samples over all Feynman diagrams. Feynman diagrams are versatile and constitute a universal language used in high-energy as well as in condensed matter physics. DiagMC is one of the most promising methods still under active development to deal with generic fermionic models in high dimensions. The goal is to give an introduction to and a flavor of this method.

Prerequisites for a thorough understanding of this text are familiarity with the basics of quantum mechanics and elementary quantum field theory (notions such as the interaction picture, Wick’s theorem, Green function formalism, etc.), statistical mechanics (partition function, solving two-level systems, etc.), and undergraduate computational physics (curve fitting, root solving, interpolation techniques, etc.) including classical Monte Carlo methods (notion of detailed balance, Markov chain Monte Carlo, Metropolis algorithm, etc.).

Let us summarize the main idea of the method, and how it differs from other quantum Monte Carlo schemes – admittedly, different researchers use the notion of diagrammatic Monte Carlo in quite different contexts. To this end, we must discuss the type of expansion, the sampling space, and the nature of the sampled series. Newcomers may skip the remainder of this paragraph in a first reading.

The starting point is a very general perturbative expansion of the form,

(1)

We compute a function depending on external coordinates (for example, the Green function with momentum and imaginary time ) which has a perturbative expansion. At every order there are internal coordinates (these are the internal momenta and imaginary times) which can be discrete or continuous, and are summed or integrated over. The different topologies have different kernels (cf. Fig. 5 below). The starting point in DiagMC is a weak-coupling expansion, i.e., an expansion in the interaction. Let us decompose the Hamiltonian as where contains all one-body terms (and hence constitutes a quadratic Hamiltonian) and the interactions. Our basis states are the eigenstates of . Similar choices are made in (lattice) determinant Monte Carlo simulations for fermions, and in fermionic impurity solvers such as CT-INT and CT-AUX (see Ref. [1] for a recent review). By contrast, a “strong-coupling” expansion is used in path-integral Monte Carlo simulations [2], in the worm algorithm [3] and the fermionic impurity solver CT-HYB [1]. In these schemes one perturbs in the kinetic hopping term whereas the solvable system (the potential energy term) is diagonal in the chosen Fock or real-space basis but not quadratic – it corresponds to the atomic limit.

An expansion of the partition function at inverse temperature and volume in the sense of a weak-coupling expansion and in the spirit of Eq. 1 reads

(2)

where in the second line we worked out the time-ordering operator of the first line. This expansion leads to nothing but a Taylor expansion in the interaction , namely with the coupling strength amplitude of and the coefficients that can be determined by evaluating all the integrals in Eq. 2 order by order, and which remain independent of . Methods such as lattice determinant Monte Carlo and the impurity solvers CT-INT and CT-AUX (but also the Monte Carlo methods referred to as strong-coupling expansions) evaluate physical quantities in thermodynamic equilibrium as

(3)

and give it the following statistical meaning: Sample configurations are obtained, which are distributed according to the partition function with respective weights , and in which the quantity is evaluated. Hence,

(4)

The unbiased estimator for the expectation value of the quantity is then to sum up over all independent configurations and divide by the number of independent measurements. The normalization through the partition function is here manifest. As long as the system volume and its inverse temperature are finite, Eq. 2 is the expansion of an entire function and hence always convergent (with the finiteness of the system we explicitly exclude all possible UV divergences that may still arise, as e.g. in Sec. 8.2). The finiteness of the system ensures that no true spontaneous symmetry breaking can occur, which is at the heart of such methods as finite-size scaling.

When physicists use the term DiagMC in the sense of the expression “sampling over all Feynman diagrams” it implies a number of differences compared to the previous paragraph: The thermodynamic limit is taken from the start, the partition function is usually not used for normalization (instead, the lowest order diagram is often chosen (see below)), nor does the sampling necessarily take place in the space of the partition function diagrams: The method (usually) relies on the cancellation of disconnected diagrams when computing correlation functions as can be found in standard textbooks [4, 5, 6, 7, 8]. This can equivalently be considered an expansion of the free energy .

These differences allow us to sketch some of the key properties of Feynman diagrams, which can be considered their advantages: All diagrams are topologically distinct and the magnitude of the prefactor is always 1 [6]. The language of Feynman diagrams is universal in all fields of physics. Feynman diagrams factorize over internal building blocks, such as particle propagators (single particle Green functions), interactions, and vertices. Consequently, the diagram weight also factorizes, which is a prerequisite for successfully developing a Markov chain Monte Carlo method. Analytical treatments of low orders or limiting cases can be built in. In DiagMC one does not attempt to write down all diagrams explicitly (since the number of diagrams grows factorially with expansion order, this is only possible for the lowest expansion orders anyway) but one instead develops algorithmic rules that allow one to sample over all diagrams. This implies changing the internal integration variables, but also the topology and the expansion order. Non-perturbative features are accessible via skeleton series [9] and (partial) resummations of a certain class of diagrams. This takes us away from the bare expansion, and we will also see how this works for the Fröhlich polaron. In fact, any analytical treatment known from the literature can be built in. Ideally, the Monte Carlo sampling should only deal with featureless functions originating from high-dimensional integrals whereas any intricacy related to the field theory is dealt with analytically a priori.

The aforementioned differences bring us at the same time to the first main difficulty in the development of the DiagMC method, which is the series convergence: It is usually unknown whether a series converges or not. The series is guaranteed to diverge at a phase transition, but divergence may set in sooner. In fact, most series in physics are asymptotic, which can be established rigorously in a number of cases. A well-known argument, first formulated in the context of quantum electrodynamics, is Dyson’s collapse argument [10]: When rotating the electric charge from to in the complex plane around the origin, one sees that the system is unstable to collapse (the potential energy scales quadratically with the number of particles, which is faster than the kinetic energy), rendering the convergence radius zero. The same holds for any interacting bosonic field theory: No matter how small in magnitude the attraction in the potential energy is, it beats the kinetic energy for large enough particle numbers, leading to a collapse. The asymptotic nature of the series can sometimes be dealt with using resummation methods [8], but, in general, the issue of a non-convergent series is an open problem and in our view the most difficult one that DiagMC faces.

The second main difficulty in the development of DiagMC is the sign problem. Sign alternations are often inherent (and necessary) to the issue of convergence – without sign alternations the factorial growth in the number of diagrams could never lead to a meaningful result for an asymptotic series. Nor is the sign extensive in the system volume, as in path integral Monte Carlo simulations, which would prohibit us from finding the full solution [11]. Nevertheless, the sign problem puts, in practice, a limit on the expansion orders that can be reached. DiagMC hence features a tacit assumption that the sign problem is sufficiently weak such that sufficiently high expansion orders can be reached in order to extrapolate in a reliable way to infinite expansion orders (often in combination with a resummation scheme that is powerful enough). Unfortunately, this assumption can only be checked a posteriori.

The third difficulty is dealing with multi-dimensional objects such as a multi-legged vertex in the Bethe-Salpeter equation. Despite active research in the fields of self-adaptive grids and concise data storage formats, this is equally an unsolved problem. However, only in cases where an explicit expression for the whole object (or a high-dimensional subpart) is required (such as in self-consistency schemes) can this be considered a problem; otherwise one can just sample over such an object without ever evaluating it in full.

In these notes we consider a model where these three problems do not occur: the Fröhlich polaron model is sign-positive in the imaginary time formalism and its Green function is convergent for all finite values of the imaginary time. Due to the rotational symmetry of free space, the Green function can be stored as a two-dimensional object, which is easy to histogram and manipulate. There are other simplifying factors, which are related to the absence of vacuum polarization diagrams, or, equivalently, the observation that Feynman diagrams for polaron (and impurity) problems can be mapped onto path integrals (cf. the structure of a backbone line in Fig. 5 below). Indeed, the analytical properties of mesoscopic systems such as polarons and impurity systems appear to be much simpler than those of true many-body problems. Furthermore, for almost all problems of this type very accurate variational approaches (and wavefunctions) are known. The Fröhlich polaron is hence ideal to get acquainted with the DiagMC method. Not surprisingly, it was also the first model to which the method was applied 20 years ago [12, 13].

This text is structured as follows. After discussing perturbative expansions with continuous variables in Sec. 2, the main body of this text deals with the Fröhlich polaron problem, whose Green function is obtained from a bare expansion in Sec. 3, the self-energy from a bare expansion in Sec. 4 and from a bold expansion in Sec. 5. The source codes are made publicly available as discussed in Sec. 6. In Sec. 8 some related physical systems (of the polaron or impurity type) are listed where the acquired techniques can be (and have been) applied without going into detail about the physics. For completeness, we mention that the method has also been successfully applied to a number of problems that cannot be considered of the polaron or impurity type, leading to deeper insight into notoriously hard problems. We mention resonant fermions [14, 15, 16], frustrated magnetism [17, 18, 19], and physics found in the Hubbard model [20, 21, 22, 23], among others.

2 Continuous-Time Monte Carlo

It is quite common to have discrete as well as continuous variables in quantum field theory. In this first section we explain, by means of the celebrated two-level system, how continuous variables and variable expansion orders can be dealt with in a Monte Carlo sampling. We employ the path integral representation here.

2.1 Model

Figure 1: (Color online) Magnetization along the and the axis for , and variable .

Consider a two-level system with Hamiltonian,

(5)

where and are the usual Pauli matrices in the -basis with basis states and . The -field tries to orient the spin along the -axis, which is countered by the -field which tries to orient the spin along the -axis. This system can be solved exactly, with the solutions (shown in Fig. 1)

(6)

which makes this system a good model to get acquainted with continuous-time Monte Carlo. From the symmetry of the Hamiltonian we see that we can swap if we also swap .

2.2 Perturbative Expansion

Starting from the partition function

(8)

we notice that the operator is diagonal in this basis. In order to prepare for a perturbative expansion in the term, we introduce the Heisenberg operators

(9)

and rewrite the partition function as

(10)

This is an explicit formulation of Eq. 2, .

Figure 2: (Color online) (a) The order contributions to the partition function of the two-level system. Spin-up worldlines are represented by a full line, spin-down worldlines by a dashed line. (b) The second order contributions, and (c) the general structure. Worldlines are periodic over the imaginary time period .

To lowest order there are just 2 contributions, . Graphically, this can be depicted as a continuous worldline from to (see panel (a) in Fig. 2). We use a full line for spin-up and a dashed line for spin-down. Note that worldlines are continuous and periodic in because of the cyclical properties of the trace. For this reason, there are no non-zero contributions for , nor for any odd value of . This means that the term in Eq. 10 is always positive and we do not have to worry about a sign problem. In second order (see panel (b) in Fig. 2) we have

(11)

Note that there is no factor of because it cancelled with the number of equivalent contributions from the time ordering operator and the corresponding changes in the time integration boundaries [6]. Although the higher order terms can be written in the same fashion, the integrals quickly become too complicated to evaluate explicitly. We therefore switch to a stochastic approach, for which it is easiest to think in terms of a graphical depiction, as shown in panel (c) of Fig. 2. To a vertex we attribute a factor , and to each segment of length (measured taking the periodic boundary conditions in into account) we attribute a weight with the sign depending on the spin state.

Analyzing the limiting cases: at high temperatures, we expect to find, with almost equal probability, worldlines dominated by either of the spin states, containing few vertices. At low temperatures, we expect a dashed line with few kinks for , whereas for the spin wants to orient along the -direction, which graphically translates into having many vertices, and for which our chosen basis along the -direction is a poor choice. Our main task when designing a Monte Carlo scheme is hence to reach high expansion orders at low temperature with good efficiency.

2.3 Monte Carlo updates

Figure 3: (Color online) Illustration of the INSERT/REMOVE pair of updates for the two-level system.

There exist many equivalent ways to sample this system. The choice we make here resembles the updates later used in the Fröhlich polaron code, with similar design criteria. A minimal ergodic set of updates consists of the pair INSERT/REMOVE. If the INSERT update is chosen, we attempt to insert a new pair of vertices as shown in Fig. 3. We therefore select a random time chosen uniformly over . Looking in the direction of positive imaginary times, we determine the time interval counted from over which the spin occupation does not change. The second vertex is placed at a time chosen uniformly over the interval . For the reverse update, the pair to be removed is chosen by randomly selecting a vertex and taking the subsequent one in the direction of positive time. The weight of the diagram segment between and in the old (i.e., before the INSERT update) configuration is , with the spin occupation at time in the old configuration. The weight of the corresponding segment in the new configuration is . With equal probabilities of selecting the INSERT and REMOVE updates, the probability factors are and with the number of vertices present in the old configuration. The update INSERT is accepted according to the Metropolis algorithm with probability where the acceptance factor is given by . For the REMOVE update the acceptance factor is . The differentials and enter the formulas for and as a consequence of working with a continuous variable but they drop out in the acceptance factor .
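To make this concrete, the following minimal sketch shows how the INSERT acceptance step could look in code. All names are ours; in particular, the symbols h (diagonal field) and Gamma (vertex coupling) as well as the sign convention of the segment weight are assumptions, not the reference implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

std::mt19937 rng{42};
std::uniform_real_distribution<double> uniform(0.0, 1.0);

// Sketch of the INSERT acceptance step. `spin` (+1/-1) is the occupation of
// the chosen segment, `tau_max` the interval over which it does not change,
// `n_vertices` the number of vertices in the old configuration.
bool attempt_insert(double h, double Gamma, int spin, double tau_max,
                    double beta, int n_vertices) {
    double dtau = tau_max * uniform(rng);  // second vertex, uniform in [0, tau_max)
    // Weight ratio: two new vertices (one factor Gamma each) and a segment of
    // length dtau whose spin is flipped (sign convention assumed).
    double weight_ratio = Gamma * Gamma * std::exp(2.0 * spin * h * dtau);
    // A-priori probabilities: INSERT picks a time in [0, beta) and one in
    // [0, tau_max); REMOVE picks one of the n_vertices + 2 vertices afterwards.
    double proposal_ratio = beta * tau_max / (n_vertices + 2);
    return uniform(rng) < std::min(1.0, weight_ratio * proposal_ratio);  // Metropolis
}
```

Note how the differentials never appear in the code: only the finite ratio of weights and proposal densities is evaluated, exactly as stated above.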

2.4 Estimators

The observables of interest are the expectation value of the spin magnetization along the - and the -axis. There are two ways to measure the magnetization along the -axis (up to an irrelevant minus sign). The first one consists of evaluating the magnetization at a fixed time , which is an integer number. The second one evaluates the integral , which is a floating point number. Perhaps the reader thinks that the second way is far superior because it contains information from all times, but we will see that this is not true: the second way of measuring has only slightly lower error bars for the same runtime, whereas it is a considerably more expensive operation to perform, scaling linearly in the number of interaction vertices (even if done “on the fly” after every update). The magnetization along the -direction can be measured as , as can be seen from Eq. 10. It is equally straightforward to obtain estimators for quantities such as , but we will not discuss this further.
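A sketch of these estimators, under the same naming assumptions as before; in particular, the vertex-count form of the x-magnetization estimator is our reading of the elided formula derived from Eq. 10.

```cpp
#include <vector>

struct Segment { int spin; double length; };  // spin = +1 or -1

// (1) Fixed-time estimator of the z-magnetization: evaluate the spin at a
// single time, say tau = 0; yields an integer per measurement.
int sz_fixed(const std::vector<Segment>& worldline) {
    return worldline.front().spin;
}

// (2) Integrated estimator: the time average over the whole worldline;
// yields a float and costs O(n) in the number of vertices.
double sz_integrated(const std::vector<Segment>& worldline, double beta) {
    double acc = 0.0;
    for (const auto& s : worldline) acc += s.spin * s.length;
    return acc / beta;
}

// (3) x-magnetization from the vertex count n: the estimator n / (beta*Gamma)
// is our assumption for the elided expression.
double sx(int n_vertices, double beta, double Gamma) {
    return static_cast<double>(n_vertices) / (beta * Gamma);
}
```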

2.5 Results

Let us start at low temperature with a strong magnetic field in the -direction. We take as parameters . After an initial thermalization phase of one million updates, we perform 10,000,000 updates, measuring after each one. After just a few seconds we see that we reproduce the exact result with error bars between and . The integrated autocorrelation times are about . We spend about of the time in the zeroth order diagram, and close to of the time in fourth order, although the code has occasionally gone to 16th order. So we sampled over quite a large Hilbert space and the code performed very well. There is no reason to optimize further.

At low temperature and strong magnetic field in the -direction ( and same runtime parameters as before) the autocorrelation times are also about . The error bars on are typically an order of magnitude smaller than in the case , which is explained by the fact that our basis is better suited. The error bars on are only slightly larger than before. In other words, the code is still behaving as expected.

At high temperature ( and same runtime parameters as before) the magnetization along either direction is about and hence very weak. Perhaps surprisingly, we see that the error bar on is of the order of , which is 50 times larger than the error bar on and one to two orders of magnitude larger than what we had at low temperatures – whereas low temperatures should be much more difficult to simulate. This is also reflected in the integrated autocorrelation times, which are about (it could well be worse because it is not clear if the code has converged) for the magnetization along the -axis and only along the -axis. Physically, the system has rotational symmetry in spin space, but this is clearly not respected in our updating procedure. As expected, the code spends of the time in the zeroth order diagram and the acceptance ratio for our INSERT-REMOVE updates is . What could be the reason for such bad autocorrelation times in an essentially uninteresting regime? The worldlines are straight a large fraction of the time, but the up and down orientations are almost equally probable because of the high temperature. Our current update scheme only allows one to change the orientation of the magnetization via the insertion of kinks, which is highly inefficient at high temperature. To cure this problem, we add another update SPIN-FLIP which, for simplicity, is only allowed in the zero-vertex sector and which attempts to swap between the up and down orientations of the spin. Adding this update cures the problem. It is good practice to keep the code as simple (and local) as possible, and to optimize or write extra updates only in case problems pop up.
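A sketch of the SPIN-FLIP update, reusing the rng and uniform helpers of the earlier sketch; the sign convention of the worldline weight is again an assumption.

```cpp
#include <algorithm>
#include <cmath>

// SPIN-FLIP, only allowed in the zero-vertex sector: the single unbroken
// worldline has weight exp(spin * h * beta) (convention assumed), so the
// Metropolis ratio for flipping the spin is exp(-2 * spin * h * beta).
bool attempt_spin_flip(int& spin, double h, double beta, int n_vertices) {
    if (n_vertices != 0) return false;   // only in the zero-vertex sector
    double R = std::exp(-2.0 * spin * h * beta);
    if (uniform(rng) < std::min(1.0, R)) { spin = -spin; return true; }
    return false;
}
```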

With this we close the discussion on sampling continuous variables and different expansion orders and proceed to the main part.

3 Fröhlich polaron: Bare expansion for the Green function

The Fröhlich polaron problem describes the interaction between an itinerant electron and longitudinal, optical phonons. Historically, it was the first problem to which diagrammatic Monte Carlo was applied [12, 13, 24]; the method provided definite answers regarding the polaron spectrum and arbitrarily precise polaron energies for any coupling strength. The Hamiltonian for a system in a volume is given by

(12)

The operators and are annihilation operators for electrons of mass with momentum and phonons with momentum , respectively. The phonon frequency can be taken momentum-independent for optical, longitudinal phonons. The dimensionless coupling constant is . Typical values for range from for InSb through for CdTe to for AgCl (and are thus rather weak) [25]. We will work in units and take the continuum limit .

It is not the purpose of these notes to give an overview of the physics of the Fröhlich polaron, whose thermodynamics is now well understood (but questions remain for transport). We refer to the lecture notes by J. Devreese [25] for a pedagogical introduction. The basic competition in the model is between the electron kinetic energy trying to delocalize the particle and the phonons trying to localize it. The system can lower its energy by dressing the electron with phonons, resulting in the formation of a polaron. Its residue can be very low and the effective mass very high, but the polaron is never fully localized or fully self-trapped; there is hence no transition in this model.

For its historical importance and to illustrate the connection with path integrals, let us remark that the Hamiltonian is quadratic in the phonon operators, which can hence be integrated out. This results in a retarded one-particle propagator for the electron,

(13)

where is in the basis of position and imaginary time. Thus, Eq. 13 conveys the intuitive idea of obtaining the probability amplitude for an electron to return to its initial position after an imaginary time evolution up to inverse temperature by integrating over all possible trajectories (‘paths’) through imaginary time.

This path integral expression served as the basis of Feynman’s variational ansatz [26] which is remarkably accurate for the polaron energy for all coupling strengths. This path integral is, because of the retarded self-interaction, not as easy to simulate as the two-level system of the previous section, and will hence not be used for actual computations.

The structure of this section is as follows: We start with reviewing the necessary field-theoretical formulas to study quasi-particle properties, followed by the description of the algorithm used to simulate the polaronic Green function using a bare expansion. Next, we show some results that can be obtained with this code. In the following section the self-energy is computed using the bare expansion, with special emphasis on Fourier transforms and an illustration for the first-order diagram. Finally, the bold expansion of the self-energy is introduced, again splitting the discussion between the first-order diagram and higher order ones.

3.1 Digest of many-body theory

The central object of our analysis is the full single-particle Green function, which is related to the bare Green function and the self-energy via the Dyson equation as

(14)

For the polaron problem, we will work at zero temperature. To avoid instabilities due to poles, it is more convenient to work in imaginary time than with Matsubara frequencies in the sampling. For impurity problems, the bare Green function is just

(15)

with the Heaviside function, the dispersion, and an energy shift which is used as a tuning parameter (see below). In Matsubara representation the bare Green function takes the form

(16)

The full Green function will have a pole at where is the self-consistent solution to

(17)

given that the imaginary part of vanishes. We may then expand the self-energy around the pole position,

(18)

allowing us to rewrite the full Green function approximately as

(19)

with the quasi-particle residue

(20)

The approximation Eq. 19 holds as long as the quasi-particle pole is sufficiently far away from the dissipative continuum; we call this separation . Transforming back to imaginary time, the quasi-particle energy and residue (which is the modulus squared of the overlap between the quasi-particle state and the free electron state) can be extracted from the large behavior of the full Green function under the same assumptions,

(21)

We will solve the problem of obtaining for fixed by diagrammatic Monte Carlo and are left with the task of finding such that Eq. 17 is satisfied. This can be done by a root-solving algorithm in combination with one-dimensional integration. When is found self-consistently, Eq. 20 determines the corresponding residue. The dispersion of the quasi-particle is given by analyzing as a function of .
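As an illustration, the root solving can be done by simple bisection. In this sketch the callable re_sigma, which would hide the one-dimensional numerical integration of the measured self-energy, is an assumed ingredient, not part of the published codes.

```cpp
#include <functional>

// Find E such that f(E) = E - eps_k - Re Sigma(k, E) = 0 (cf. Eq. 17) by
// bisection, assuming [lo, hi] brackets the root.
double polaron_energy(const std::function<double(double)>& re_sigma,
                      double eps_k, double lo, double hi, double tol = 1e-10) {
    auto f = [&](double E) { return E - eps_k - re_sigma(E); };
    while (hi - lo > tol) {
        double mid = 0.5 * (lo + hi);
        if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
    }
    return 0.5 * (lo + hi);
}
```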

3.2 Algorithm

Figure 4: Bare diagrammatic elements for the Fröhlich polaron. From left to right, shown are the bare Green function, the phonon propagator, and the 2 vertices where a phonon is emitted (absorbed) causing a shift of the electron momentum.
Figure 5: Bare expansion of the full Green function for the Fröhlich polaron. A non-interacting Green function is denoted by a thin full ‘backbone’-line, a phonon propagator by a dashed arc and a vertex by a dot. The expansion order is given by counting the number of arcs (here, ). Only connected diagrams are considered. This diagram is a typical example of a ‘configuration’ whose weight is the product of the weights of the composing Green functions, vertices and phonon propagators.

The simplest way to solve the Fröhlich polaron problem is by considering the bare expansion of the full Green function using the expansion elements shown in Fig. 4. This was also presented in the original solution by Prokof’ev and Svistunov [12]. Wick’s theorem tells us that there can be no unpaired phonon creation and annihilation operators, i.e., all phonon operators pair into ‘arcs’, the number of vertices is always even and in Eq. 12 only enters as a product with its complex conjugate. Graphically, the expansion is illustrated in Fig. 5. We can label the expansion order by counting the number of phonon propagators. In order there are phonon propagators, vertices and impurity Green functions. Our task consists of sampling over all possible diagrams for the Green function , i.e., sampling over all possible expansion orders , all allowed topologies, and integrating over all internal momenta , and vertex times .

Every Feynman diagram is a valid Monte Carlo configuration, with a weight that factorizes into the product of the individual electron propagators , phonon propagators

(22)

and vertices. It is convenient to absorb the vertex dependence into the phonon propagators and constants into the coupling constant , see Eq. 12 and our choice of units.

As an example, the full expression for the weight of the second order diagram with crossing phonon lines (one of the three possible topologies in second order) reads

(23)
(24)

Here and are the external momentum and time, respectively. The independent momenta are chosen as and , whereas , , and follow through momentum conservation. The factorization of the weight into bare Green functions and phonon propagators is now manifest. The extension of such explicit analytical formulas to higher order is however cumbersome in comparison to drawing the Feynman diagrams.

We therefore proceed to how the diagrammatic Monte Carlo sampling can be performed. The updating scheme discussed below differs from the one introduced originally by Prokof’ev and Svistunov. Using the freedom which every designer of a Monte Carlo procedure has, we seek the simplest set of updates that is ergodic and remains as local as possible. By locality we mean that the number of changes to the current configuration is minimal and only involves one diagrammatic element plus its adjacent elements.

External variables – Because of the spherical symmetry of the Hamiltonian, we can choose the orientation of the external momentum to be along the -axis as , in which case the full Green function is a two-dimensional object in space. We can predefine a set of external momenta for which we compute . The simplest choice is a uniform grid, with where the momentum cutoff and the number of momentum points. Other choices for the grid are possible: for low momenta we expect a dispersion akin to with the effective mass of the polaron, which suggests that a quadratic grid might be better. We will not go into further detail. The same procedure can be applied for the external time . Let us follow here another approach and consider to be a continuous coordinate between and , which we bin into a uniform grid of bin length at the expense of a small systematic discretization error if we use the discretized . A logarithmic grid might be a better choice given Eq. 21 (from experience we know that it does not really matter at this stage).

Normalization – We choose the zeroth order diagram for normalization, which is just the bare propagator . Because of the updates CHANGE-P and CHANGE-TAU (see below) the total normalization integral is . All quantities in the Monte Carlo sampling can be normalized by multiplying with where counts how often we are in the zeroth order diagram. This can be seen as follows: The estimator for the normalization diagram is

(25)

The estimator for the full Green function is

(26)

where we used that and all residing in the bin are taken together in entry (it is possible to improve on this by computing the ratio , although there is seldom a need for that). The proportionality constant drops out when normalizing

(27)

with the volume of the time-bin . The same normalization can be applied to other quantities of interest such as the bare Green function and the first-order Green function.
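In code, this normalization amounts to a single rescaling of the histogram. A minimal sketch with our own names: z0_count is the zeroth-order counter and z0_integral the analytically known normalization integral.

```cpp
#include <cstddef>
#include <vector>

// Eq. 27 in code: convert raw histogram counts into the Green function by
// normalizing against the zeroth-order diagram and the time-bin volume.
std::vector<double> normalize(const std::vector<long>& hist, long z0_count,
                              double z0_integral, double bin_width) {
    std::vector<double> G(hist.size());
    for (std::size_t i = 0; i < hist.size(); ++i)
        G[i] = hist[i] * z0_integral / (z0_count * bin_width);
    return G;
}
```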

CHANGE-P – This update is only allowed if the expansion order is 0. In this update, which is its own reverse, we uniformly select a new from the set of allowed external momenta and accept it according to the Metropolis algorithm as with acceptance factor . We can also opt to keep the external momentum fixed in a single run.

CHANGE-TAU – This update is only allowed if the expansion order is 0. In this update, which is its own reverse, we select a new external time using an exponential distribution. If the dispersion is with the external momentum and a random number uniformly chosen between , then we construct and accept it as the new external time of the diagram if .
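A sketch of CHANGE-TAU by inversion sampling, reusing the rng and uniform helpers from the two-level sketch (names are ours; eps_p denotes the dispersion at the external momentum). Because the proposal distribution matches the diagram weight exactly, the move is accepted whenever the new time does not exceed the cutoff.

```cpp
#include <cmath>

// Draw tau from the normalized distribution ~ exp(-(eps_p - mu) * tau) via
// tau = -ln(u) / (eps_p - mu) and accept it if it stays below tau_max.
bool change_tau(double& tau, double eps_p, double mu, double tau_max) {
    double u = uniform(rng);
    double tau_new = -std::log(u) / (eps_p - mu);  // requires eps_p > mu
    if (tau_new >= tau_max) return false;          // reject, keep old tau
    tau = tau_new;
    return true;
}
```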

  

Figure 6: Illustration of the INSERT-REMOVE pair of updates.

INSERT – This update attempts to increase the number of phonon propagators by one (its reverse is REMOVE, see below and Fig. 6) and is constructed as follows: Select a random electron propagator and identify its left and right endpoints. Let us call this propagator . Select with uniform probability a time , which serves as the time of the left vertex of the new phonon propagator. The time is obtained as with chosen uniformly. If the update is rejected. The three components of the momentum are obtained from the Box–Muller algorithm as Gaussian random numbers with mean 0 and variance . The acceptance factor is given by where stands for the old configuration and for the new one, for their respective weights, and for their a priori transition probabilities. They are given by

(28)

Here, and are the probabilities to select the INSERT and REMOVE updates, respectively. Note in particular that all differentials cancel in the acceptance factor . The reader will notice that further cancellations occur in the acceptance factor, such as the electron propagators between and , as well as any -dependence. Those cancellations are only exact if function calls are used; for tabulated objects in combination with interpolation techniques there are tiny deviations from these cancellations.
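For the Gaussian proposal of the phonon momentum, std::normal_distribution can take the role of the Box–Muller step. In this sketch, the variance 1/Δτ per component (appropriate for our units with unit electron mass, so that the Gaussian matches the kinetic part of the electron propagators and cancels in the acceptance factor) is our assumption.

```cpp
#include <array>
#include <cmath>
#include <random>

// Propose the three Cartesian components of the phonon momentum as Gaussian
// random numbers with mean 0 over the time span dtau covered by the new arc.
std::array<double, 3> propose_phonon_momentum(double dtau, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, std::sqrt(1.0 / dtau));
    return {gauss(rng), gauss(rng), gauss(rng)};
}
```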

REMOVE – This is the reverse update of INSERT. We uniformly select a phonon arc and check if its vertices are consecutive elements in the time-ordered configuration (see above). If this is not the case, the update is rejected. The acceptance factor is the inverse of the one determined above for the INSERT update.

  

Figure 7: Illustration of the SWAP update. The phonon propagators carrying momentum and are understood to connect back to the fermion line at vertices at times and , respectively. and may each be either before or after and . After the update, the central fermion propagator carries a momentum .

SWAP – The INSERT and REMOVE updates allow one to change the expansion order but are insufficient to generate all possible topologies because they do not allow phonon arcs to cross. The SWAP update allows one to change the topology within a given expansion order . With the notation of Fig. 7 we randomly select a vertex excluding the last one. If it has a time and the next one a time , we attempt to swap the end points of their respective phonon propagators. In order to conserve momentum at every vertex, the momentum of the electron propagator between and changes – in line with our design criterion of finding updates that are local. The acceptance factor is given by with and .

EXTEND – Although this update is not needed for ergodicity, it is a useful one to improve the sampling. It changes the duration of the rightmost electron propagator in a similar fashion as the CHANGE-TAU update.

3.3 Implementation

The number of diagrams grows as . Only the lowest expansion orders can be evaluated explicitly, but the Monte Carlo algorithm manages to sample over the most important contributions in any order. The parameter requires some fine-tuning: Its magnitude needs to be sufficiently large such that the full Green function decays exponentially. The closer is chosen to the unknown polaron energy (i.e., ) the less rapid this decay will be and the more accurately the fit (cf. Eq. 21) can be performed. For sufficiently strong , and chosen close to , the expansion order can be 100 or more.

Other authors prefer the use of a cyclical implementation [13] instead of a backbone line. The aim is to treat the electrons and the phonons on equal footing. It is also the structure that naturally arises at finite temperature. At zero temperature, we see little advantages for polaron problems and have not used cyclical diagrams in our codes.

3.4 Data structure

Let us now discuss the data structure. There are various equivalent ways to store the diagram. E.g., one may either (i) store the intervals between the emission and absorption of a single phonon along with its momentum, or (ii) store the vertices. We choose the latter approach. The information needed to specify a vertex is its time, a pointer to the vertex that it connects to via the phonon propagator, the phonon momentum, and at least one momentum interacting at the vertex such that all momenta can be inferred from momentum conservation. If we choose, say, to store only the phonon momenta, all electron momenta in the diagram can be computed from the given external electron momentum and by invoking momentum conservation at every vertex, but this is obviously a costly operation scaling linearly with the number of vertices. In the present implementation we decided to redundantly store all three momenta at each vertex for reasons of simplicity and memory-locality. A configuration is then specified by a time-ordered collection of such vertex objects.

When choosing the data structure, one should be conscious of the operations required by the update scheme and their respective complexity. Obviously, the ability to INSERT and REMOVE vertices efficiently while retaining the time ordering as well as the ability to seek forward and backward along the electronic backbone line are crucial, thus ruling out plain contiguous array-like data structures. Likewise, the INSERT update needs to randomly pick an electron backbone segment, the REMOVE update randomly picks a phonon propagator, and the SWAP update randomly selects a pair of adjacent vertices. All three of these ultimately draw a vertex uniformly from the set of all vertices (or in case of SWAP from all but one).

We implemented a number of different data structures to meet these requirements to varying degrees and gauge their impact.

  1. A doubly-linked list as provided by std::list satisfies the first criterion with O(1) insertion and removal but requires one to start at the beginning and iterate through the list to reach a randomly picked vertex, thus resulting in O(N) scaling (with N the number of vertices).

  2. A self-balancing binary search tree, e.g. an AVL or red-black tree, provides O(log N) insertion and removal and in principle also allows for true random access of an ordered sequence in O(log N) when nodes keep track of the number of nodes in their subtrees. Search trees will automatically enforce ordering which we however do not benefit from as the update scheme is designed in a way that retains time ordering anyway. While std::map is usually implemented in terms of binary search trees, it cannot be used off-the-shelf here as it hides its tree implementation and does not allow for the kind of additional bookkeeping required to achieve fast random access. For testing, we implemented an AVL search tree with a function to randomly access elements by index.

  3. A doubly-linked list may be combined with a contiguously stored array (a std::vector) of iterators to the list elements that serves as a lookup table. Upon insertion, an iterator to the newly created list element is pushed to the back of the array. The list element is likewise tagged with the index of its iterator in the array. When removing a list element, its iterator in the array swaps places with the last one (updating the tag of its list element) before it is popped. This procedure retains the O(1) complexity of insertion and removal operations and keeps an up-to-date array containing iterators to all the list elements contiguously, albeit not in time order. Thus, we do not get proper random access but gain the ability to pick a random element in O(1). Care has to be taken when applying this to the SWAP update. A sketch of this combination is given after this list.
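A condensed sketch of the list-plus-lookup-array combination of option 3. The Vertex type is reduced to a bare minimum; a full implementation would also store the momenta and the link to the phonon partner.

```cpp
#include <cstddef>
#include <list>
#include <vector>

struct Vertex {
    double time;
    std::size_t lookup_index;  // position of our iterator in the lookup array
    // ... phonon momentum, pointer to the partner vertex, etc.
};

struct Diagram {
    std::list<Vertex> vertices;                       // kept in time order
    std::vector<std::list<Vertex>::iterator> lookup;  // contiguous, unordered

    void insert(std::list<Vertex>::iterator pos, Vertex v) {
        v.lookup_index = lookup.size();
        lookup.push_back(vertices.insert(pos, v));    // O(1)
    }

    void remove(std::list<Vertex>::iterator it) {
        // swap-and-pop keeps the lookup array contiguous, O(1)
        lookup[it->lookup_index] = lookup.back();
        lookup[it->lookup_index]->lookup_index = it->lookup_index;
        lookup.pop_back();
        vertices.erase(it);
    }

    std::list<Vertex>::iterator random_vertex(std::size_t i) const {
        return lookup[i];  // i drawn uniformly in [0, lookup.size())
    }
};
```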

The performance impact of the choice of data structure depends on the average order that is reached in the course of the simulation which in turn depends on the system parameters. In our benchmark, Fig. 8, we decided to keep fixed and vary the chemical potential to probe a number of mean expansion orders.

Figure 8: (Color online) Benchmark of different diagram data structures. To provoke large expansion orders, has been used and imaginary times reached as far as . The chemical potential has been tuned from very close to the polaron energy, (high expansion order), to (low expansion order).

In situations where the average order was below 10, the search tree (implemented as an AVL tree) performed badly compared to the list-based data structures due to the added overhead. It would only become a feasible alternative outperforming the plain list when orders beyond 40 were reached as can be seen from Fig. 8. In contrast, the list-array combination barely shows any scaling with the diagram order and was consistently faster than the plain list indicating that the overhead added due to the lookup array is very light. For models with a sign problem where only low expansion orders can be reached, it does not matter how the data structure is implemented.

3.5 Error bars

The estimation of the error bars on the Green function is complicated by the fact that the normalization itself is estimated from the same simulation. We employ the jackknife resampling technique to account for that. This requires knowledge of the time series. Sampling after every single update would result in excessive memory demand and post-processing time due to many highly-correlated samples and negate the efficiency of the local update scheme. Thus, we group updates into bunches of elementary updates. After each elementary update, we increment histograms for the Green function, the zeroth order counter used for normalization, etc. After elementary updates, the histograms are measured, i.e. recorded in the time series, and subsequently reset. The choice of can be guided by the estimated autocorrelation time obtained from a binning analysis.
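For illustration, here is a generic binning-analysis sketch; this is not the ALPSCore implementation mentioned below, and all names are ours.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Binning analysis on a (scalar) time series: successively merge neighboring
// entries into larger bins; the error estimate of the mean grows with bin
// size until the bins decorrelate and it saturates at the true error.
double binned_error(std::vector<double> ts) {
    double err = 0.0;
    while (ts.size() >= 32) {  // stop when too few bins remain
        double mean = std::accumulate(ts.begin(), ts.end(), 0.0) / ts.size();
        double var = 0.0;
        for (double x : ts) var += (x - mean) * (x - mean);
        var /= ts.size() * (ts.size() - 1);          // variance of the mean
        err = std::max(err, std::sqrt(var));
        std::vector<double> halved;                  // double the bin size
        for (std::size_t i = 0; i + 1 < ts.size(); i += 2)
            halved.push_back(0.5 * (ts[i] + ts[i + 1]));
        ts.swap(halved);
    }
    return err;
}
```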

Within the framework provided by the ALPSCore library (cf. Sec. 6), we chose to rely on the FullBinningAccumulator to perform the above binning analysis for us. Further, any derived quantities calculated from the observables are automatically resampled using the Jackknife method.

3.6 Results

For , and a runtime of about 1 minute on a single-core laptop one can extract the polaron energy with an accuracy better than a percent. Stronger couplings are a little bit harder to simulate: We show the Green function for , and zero momentum in Fig. 9(a). By fitting the exponential tail according to Eq. 21, one can extract the polaron energy () and residue (). Here, the fit took data from onwards into account. However, for the Green function decays rather quickly. This limits the maximum time that can be accessed with reasonable accuracy. Additionally, when taking the error bars into account in the fit, the data close to impact the fit more strongly due to their higher accuracy. This leads to a heightened sensitivity towards systematic errors from the non-asymptotic fast initial decay with respect to the choice of .

In order to get a more reliable estimate of the polaron energy, we tuned the chemical potential to achieve longer imaginary times along with a less severe growth of the error bars. Choosing (Fig. 9(b)), was accessible, allowing us to probe the asymptotic regime over time scales many times that of the initial fast decay. The inset in Fig. 9(b) displays results for the polaron energy estimated from fits with different . This has been done for the data from each of the 28 independent MPI processes to yield an error on that estimate. It can be seen that after an initial influence from the non-asymptotic onset, the results beyond stay consistent within their error bars. Our final result reads and . The polaron energy was thus found with a relative accuracy of at modest computational effort.

The polaron energy is remarkably close to the value predicted by Feynman’s variational ansatz despite the rather strong coupling . Feynman’s trial action is parametrized by parameters and , the latter of which was assumed to have only a mild influence on the end result [26]. Feynman then optimized for at fixed , treating parts of the integrals approximately. From the expressions he gave in the strong-coupling regime, we find (for ) and (for ). With today’s readily available numerical integration and optimization tools, we also optimized for and simultaneously without taking any approximations to the integrals. This results in an improved variational energy of (, ). Thus, our Monte Carlo estimate is lower, in accordance with the variational principle.

The dispersion for is shown in Fig. 10. In this calculation we recalculated one of the first hallmarks of DiagMC [12]. It shows that the perturbative result incorrectly predicts an endpoint to the dispersion, whereas the DiagMC results show that the binding energy can be seen up to zero energy. In passing we note that other formulas for the computation of the effective mass and the group velocity exist, see Refs. [13, 24]. The histogram over the expansion orders for is shown in Fig. 11. For large enough expansion orders , decays exponentially. The acceptance factors are about for INSERT and REMOVE and for SWAP. Those numbers are acceptable. If deemed too low, or if the frequency of visiting the normalization diagram is too low, reweighting and flat histogram techniques should be used.

(a)
(b)
Figure 9: (Color online) Logarithm of the Green function as a function of imaginary time for and different values of . Error bars are indicated by the shaded area. The exponential decay of the Green function is fitted starting from in order to obtain the polaron ground state energy and its residue, cf. Eq. 21. The inset in panel (b) shows the polaron energy (computed from the slope of the fit) for different fit windows . Error bars on these data have been obtained by considering 28 independent simulations. One can see that is sufficiently great to probe the asymptotic regime. We switched off the CHANGE-P update and used a uniform grid in the -direction. The code ran for 36 (a) and 467 (b) CPU-hours with the number of updates totalling and respectively.
Figure 10: (Color online) Dispersion of the Fröhlich polaron for compared to the perturbative result  [25]. Error bars are shown but may be too small to be visible. Although the full solution shows the existence of an end point where , the perturbative result shows unphysical behavior for large momenta. The effective mass, obtained by fitting to the polaron energy curve for low values of the momentum , is , also in reasonable agreement with the perturbative result. This calculation was historically one of the first successes of DiagMC [12].
Figure 11: Histogram over the expansion orders for the same system as in Fig. 9.

4 Fröhlich polaron: Self-Energy

It is often advantageous to compute the self-energy instead of the full Green function and resort to the Dyson equation (Eq. 14) to obtain the latter. However, a Fourier transform from imaginary times to (Matsubara) frequencies is needed to cast the Dyson equation in algebraic form; otherwise, it is a convolution. Below we first discuss how to perform such Fourier transforms by considering the first-order diagram, and then proceed with the diagrammatic Monte Carlo computation of the full self-energy. In this text, the self-energy is always understood as the one-particle irreducible self-energy [5].

4.1 Fourier Transforms explained for the first-order self-energy

Figure 12: The first-order self-energy and one of the terms in the corresponding Green function .
Figure 13: (Color online) Computation of the Green function via the first-order self-energy. The result is shown for different number of Matsubara frequencies (powers of 2 are indicated) and compared to the result where is directly sampled (cf. Sec. 3 with minor modifications: the INSERT update is only allowed on the final electron propagator (which has zero phonon coverage) and REMOVE can only remove the last phonon arc. The SWAP update is disabled). The error bars on the latter correspond (or are smaller than) the line width. The inset shows a zoomed-in version of the last-time region. Error bars on a subset of the data sampled from the bare expansion are shown.
Figure 14: (Color online) Same as in Fig. 13 but instead of the self-energy we compute the convolution for , which behaves as for . Compared to Fig. 13, treating the divergence of the self-energy more carefully allowed us to substantially reduce the required number of Matsubara frequencies.

The first-order self-energy is shown in Fig. 12 together with the Green function related to this diagram via the Dyson equation. The first-order self-energy can be computed analytically for zero external momentum,

(29)

When applying Eq. 17 and using a root solver, we find that the polaron energy is given by for in our units. The first-order self-energy for non-zero external momenta can be computed as,

(30)

where is the Dawson function, . For small values of the argument, it behaves as . For large values of , it behaves as . This formula for is the Fourier transform of the self-energy in real space , with

(31)

and

(32)

which can be seen as a retarded Coulombic-like potential.

In principle, all we need to do is apply the Dyson equation (Eq. 14) and Fourier transform back to the imaginary time domain. However, as we will see, this must be done very carefully.

Shape of the self-energy – The most important observation is that the self-energy diverges as for for any . This is in fact quite a common situation in continuous space, originating from the momentum integral over the bare electron Green function. The first-order self-energy very often has limits that need to be analyzed analytically. Is this divergence a problem? On the one hand, any integral is convergent; in particular, there is no problem with the existence of a proper Fourier transform. On the other hand, a Taylor expansion of the self-energy around the middle of the -bin shows that the binning process has uncontrollable systematic error bars for sufficiently small values of . The solution is not difficult: One can choose to refrain from sampling and compute the self-energy analytically for fixed, discrete values, thereby circumventing the binning issue. If this is not an option and sampling remains essential, then one should make a measurement of , which is a featureless function of . Whenever the discretized is needed, it is obtained from . Unfortunately, this is not the only problem associated with the divergence as we will see in the discussion on the Fourier transforms.
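A sketch of this workaround, under the assumption that the divergence is of the 1/√τ form found in the first-order diagram; all names are ours.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Accumulate sqrt(tau) * (weighted contribution) instead of the contribution
// itself, so that the binned function is smooth near tau = 0 (assumes
// tau < n_bins * bin_width).
void accumulate(std::vector<double>& hist, double tau, double weight,
                double bin_width) {
    std::size_t bin = static_cast<std::size_t>(tau / bin_width);
    hist[bin] += std::sqrt(tau) * weight;  // featureless in tau
}

// Whenever the discretized self-energy is needed, divide the divergence back out.
double sigma_at(const std::vector<double>& hist_normalized, double tau,
                double bin_width) {
    std::size_t bin = static_cast<std::size_t>(tau / bin_width);
    return hist_normalized[bin] / std::sqrt(tau);
}
```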

Fourier transforms – One of the most fundamental differences between classical mechanics and quantum mechanics is the occurrence of non-commuting operators in the latter, which in turn leads to the time ordering inherent to quantum field theory. This is already apparent from the Heaviside function in Eq. 15. It has the following frequency representation

(33)

It is this behavior which explains the structure in Eq. 16. Note that the coefficient of the term in Eq. 16 is exactly 1 (which is identical to the jump in the Green function for and ) thanks to the (anti-)commutation relations of bosonic (fermionic) annihilation and creation operators [5]. Therefore, the same asymptotic behavior holds not only for but for any Green function . In the inverse Fourier transform , brute-force summing (or integrating) over all frequencies in the hope of restoring the Heaviside function is hopeless. One therefore needs to treat the large-frequency tails carefully. At finite temperature, Fourier transforms assume that the function can be periodically continued, which is likewise violated. The easiest solution is to (i) only use the analytic formulations Eq. 15 and Eq. 16 for the bare Green function, and (ii) never perform a Fourier transform on any full Green function but only on differences . In doing so, the leading asymptotic behavior in frequency is compensated and the discontinuity in imaginary time is taken care of. It is also possible to treat the in the same fashion: Its coefficient in frequency space corresponds to the (sum of the) slope(s) of the Green function at (and ) in imaginary time.

In the literature one can also find formulas for the term, but we have never seen a case where this is necessary for the success of the calculation.
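A sketch of the subtraction recipe discussed above (our names; transform stands for any numerical Fourier transform of a smooth, decaying function, e.g. the FFT-based one discussed below).

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Fourier transform only the smooth difference G - G0 numerically and add G0
// back through its analytic Matsubara form (cf. Eq. 16), so that no jump or
// slowly decaying tail ever enters the numerical transform.
std::vector<std::complex<double>> full_g_matsubara(
        const std::vector<double>& g_tau,   // measured G(tau) on a grid
        const std::vector<double>& g0_tau,  // analytic G0 on the same grid
        const std::vector<std::complex<double>>& g0_iw,
        std::vector<std::complex<double>> (*transform)(const std::vector<double>&)) {
    std::vector<double> diff(g_tau.size());
    for (std::size_t i = 0; i < g_tau.size(); ++i)
        diff[i] = g_tau[i] - g0_tau[i];     // continuous, rapidly decaying
    auto g_iw = transform(diff);
    for (std::size_t i = 0; i < g_iw.size(); ++i)
        g_iw[i] += g0_iw[i];                // restore the analytic bare part
    return g_iw;
}
```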

When we rely explicitly on the jump being in between and , we need to make sure in the Monte Carlo sampling that we choose large enough such that since by construction we have . As we have seen before, this means that must be quite large when is chosen close to the polaron energy . The region where is sizable may well appear smaller than this. Let us now use the imaginary-time and Matsubara formalisms for the Fourier transforms; specifically,

(34)
(35)

with the Matsubara frequencies for bosons and for fermions. Here, we have introduced a fictitious inverse temperature to impose the discretization in frequency space. A single electron obviously has no statistics, so we can use either bosonic or fermionic frequencies, or just – it should not matter as long as we use a transformation that turns the Dyson equation into an algebraic equation. From the Dyson equation, the full Green function can be written as

(36)
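In this algebraic form every Matsubara frequency decouples, and the Dyson equation reduces to a one-liner in code:

```cpp
#include <complex>

// Eq. 36: the full Green function at one Matsubara frequency from the
// analytic bare propagator and the (Fourier-transformed) self-energy.
std::complex<double> dyson(std::complex<double> g0_iw,
                           std::complex<double> sigma_iw) {
    return 1.0 / (1.0 / g0_iw - sigma_iw);
}
```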

Observing the decay of over many decades (as a function of ) requires in turn a huge number of Matsubara frequencies. The naive implementation of the Fourier transform scales as where is the number of points in the time/frequency domain and becomes too costly. Although alternative approaches exist [27], let us explain here how fast Fourier transforms (FFT) can be used, with a scaling as . For simplicity and efficiency, we rely on the open-source package FFTW [28]. Although our input data are real (implying that ) and we could use the function calls r2c and c2r (cf. the FFTW documentation; we could save a factor 2 in storage) we consider this advantage negligible and use instead the easier function call dft for complex input and output. At this point the reader should keep in mind that FFT is only a tool for solving the equations Eq. 34 and Eq. 35. It performs

(37)

between input data and output data , and this is not identical to Eqs. 34 and 35. Recall what FFT really computes (cf. the FFTW documentation): first, the phase used in the forward (backward) Fourier transform corresponds to the backward (forward) sign convention in Eqs. 34 and 35; second, the forward transform immediately followed by the backward transform multiplies the input by ; and third, the positive frequencies are stored in the first half of the output and the negative frequencies are stored in backwards order in the second half of the output.
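A minimal wrapper illustrating this usage (assuming FFTW3 is linked; the reinterpret_cast between std::complex&lt;double&gt; and fftw_complex is the approach sanctioned by the FFTW documentation):

```cpp
#include <complex>
#include <vector>
#include <fftw3.h>

// One-dimensional complex-to-complex transform as in Eq. 37.  Remember the
// three caveats above: FFTW's FORWARD sign convention is opposite to that of
// Eqs. 34/35, a forward-backward round trip multiplies the data by N, and the
// second half of the output holds the negative frequencies in reverse order.
std::vector<std::complex<double>> dft(std::vector<std::complex<double>> in,
                                      int sign /* FFTW_FORWARD or FFTW_BACKWARD */) {
    std::vector<std::complex<double>> out(in.size());
    fftw_plan plan = fftw_plan_dft_1d(
        static_cast<int>(in.size()),
        reinterpret_cast<fftw_complex*>(in.data()),
        reinterpret_cast<fftw_complex*>(out.data()),
        sign, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);
    return out;  // unnormalized, cf. the factor N above
}
```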

FFT assumes an equidistant grid where the input data are located exactly on the grid points. If we need more Matsubara frequencies than we have grid points in imaginary time, or if we use a non-uniform grid, we need interpolation methods. In practice, quadratic or spline interpolation is used. After binning the data for the self-energy, we have discretized values