
# Order-N Cluster Monte Carlo Method for Spin Systems with Long-range Interactions

Kouki Fukui and Synge Todo, Department of Applied Physics, University of Tokyo, Tokyo 113-8656, Japan; CREST, Japan Science and Technology Agency, Kawaguchi 332-0012, Japan
###### Abstract

An efficient cluster Monte Carlo method for Ising models with long-range interactions is presented. Our novel algorithm does not introduce any cutoff for the interaction range and thus strictly fulfills the detailed balance. The realized stochastic dynamics is equivalent to that of the conventional Swendsen-Wang algorithm, which requires O(N²) operations per Monte Carlo sweep if applied to long-range interacting models, whereas the present method requires only O(N). In addition, it is shown that the total energy and the specific heat can also be measured in O(N) time. We demonstrate the efficiency of our algorithm over the conventional method and the O(N log N) algorithm by Luijten and Blöte. We also apply our algorithm to the classical and quantum Ising chains with inverse-square ferromagnetic interactions, and confirm with high accuracy that a Kosterlitz-Thouless phase transition, associated with a universal jump in the magnetization, occurs in both cases.

###### keywords:
long-range interaction, cluster algorithm, O(N) method, Ising model, quantum Monte Carlo, Kosterlitz-Thouless transition
###### Pacs:
02.50.Ga, 02.70.Ss, 05.10.Ln, 64.60.De


## 1 Introduction

Systems with long-range interactions exhibit more involved phase diagrams and richer critical phenomena than those with only nearest-neighbor interactions. One of the most prominent examples is the Ising model with long-range interactions, whose Hamiltonian is defined as

 −βH = ∑_{i<j} βJ_{ij} σ_i σ_j, (1)

where β is the inverse temperature, J_{ij} is the coupling constant between the ith and jth sites, σ_i = ±1 is the Ising spin variable, and N is the total number of spins. Among the models described by Eq. (1), the one-dimensional chain with algebraically decaying interactions has been studied most intensely so far. The interaction of the model is written as

 J_{ij} = 1/r_{ij}^{1+α}, (2)

where r_{ij} is the distance between sites i and j, and α is the parameter characterizing the range of the interaction. (Note that α should be positive to assure the energy convergence.) In spite of the extremely simple form of its Hamiltonian, the model is known to exhibit various critical behavior with respect to the parameter α: When α is sufficiently large (α > 1), the system belongs to the same universality class as the nearest-neighbor model, i.e., it has no finite-temperature phase transition [1, 2]. At α = 1, however, the system exhibits a Kosterlitz-Thouless phase transition at a finite temperature [3, 4, 5]. In the regime 1/2 < α < 1, the critical exponents of the system change continuously as α is decreased [6]. Finally, when α is equal to or smaller than 1/2, the system shows the critical exponents of the mean-field universality [6]. Such rich and nontrivial phenomena associated with the long-range interactions have attracted much interest, and much research has been done both theoretically and numerically.

In numerical research on long-range interacting spin models, however, the standard Monte Carlo techniques encounter a serious problem, i.e., a quadratic increase of CPU time per Monte Carlo sweep as the system size increases. This is simply because there are N(N−1)/2 different pairs of spins to be considered in an N-spin system. For unfrustrated spin models, the cluster methods, such as the Swendsen-Wang [7] or Wolff [8] algorithms, are the methods of choice, since they almost completely eliminate correlations between succeeding spin configurations on the Markov chain. Unfortunately, the cluster algorithms share the same difficulty with the single-spin-flip update. In 1995, however, Luijten and Blöte introduced a very efficient cluster algorithm [9]. What they focused on was that, on average, only O(N) among the O(N²) bonds contribute to cluster construction. By employing a rejection-free method based on binary search on cumulative probabilities, they succeeded in drastically reducing the number of operations per Monte Carlo sweep to O(N log N). Recently, the same strategy has been applied to the quantum Monte Carlo method for long-range ferromagnetic Ising models in a transverse external field [10].

In the present paper, we propose a still faster cluster Monte Carlo algorithm for long-range interacting ferromagnets. Our method is based on the extended Fortuin-Kasteleyn representation of the partition function [11, 12] and an extremely effective technique for integer random number generation, so-called Walker's method of alias [13, 14]. Like the method of Luijten and Blöte, the proposed algorithm does not introduce any cutoff for the interaction range and realizes stochastic dynamics identical to the original Swendsen-Wang method. The CPU time per Monte Carlo sweep is, on the other hand, merely proportional to N instead of N² or N log N, and is an order of magnitude shorter than that of Luijten and Blöte for sufficiently large systems. In addition to its speed, our algorithm has several advantages: First, it is quite robust, that is, it works efficiently both for short-range and long-range interacting models as it stands. Second, the calculation of the total energy and the specific heat is also possible in O(N) time without any extra cost. Third, our Monte Carlo algorithm is straightforwardly extended to quantum models, such as the transverse-field Ising model, the Heisenberg model, etc.

The organization of the present paper is as follows: In Sec. 2, we briefly review the Swendsen-Wang cluster algorithm and its variant by Luijten and Blöte. In Sec. 3, we present our new algorithm in detail. We also show how the total energy and the specific heat are calculated in O(N) time, and the extension of the algorithm to the transverse-field Ising model. In Sec. 4, a benchmark test of our new algorithm is presented. As an application of the algorithm, the Kosterlitz-Thouless transition of the Ising chain with inverse-square interaction is investigated in Sec. 5. Especially, we confirm the universality between the classical and quantum Ising models with high accuracy. Section 6 includes a summary and discussion, followed by appendices on some technical details about Walker's method of alias.

## 2 Conventional Cluster Algorithms for Ising model with Long-range Interactions

### 2.1 Swendsen-Wang Method

In this section, first we briefly review the cluster Monte Carlo method by Swendsen and Wang [7]. Each Monte Carlo sweep of the Swendsen-Wang algorithm consists of two procedures, graph assignment and cluster flip. In the former procedure, one inspects all the bonds sequentially, and each bond is activated or deactivated with probability

 P_{ij} = δ_{σ_i,σ_j} [1 − exp(−2βJ_{ij})] (3)

and 1 − P_{ij}, respectively. Then, after the trials for all the bonds, each cluster of spins connected by active bonds is flipped at once with probability 1/2, and a terminal configuration is generated.
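For reference, the two procedures above can be sketched in a few lines of Python (a minimal illustration, not the authors' code; the function name `swendsen_wang_sweep` and the union-find bookkeeping are our own choices):

```python
import math
import random

def swendsen_wang_sweep(spins, J, beta, rng=random.random):
    """One naive Swendsen-Wang sweep: O(N^2) bond trials, then cluster flips."""
    n = len(spins)
    parent = list(range(n))          # union-find forest holding the clusters

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    # Graph assignment: activate bond (i, j) with probability
    # P_ij = delta(s_i, s_j) * [1 - exp(-2 beta J_ij)]  [Eq. (3)]
    for i in range(n):
        for j in range(i + 1, n):
            if spins[i] == spins[j] and rng() < 1.0 - math.exp(-2.0 * beta * J[i][j]):
                parent[find(i)] = find(j)

    # Cluster flip: flip each cluster independently with probability 1/2
    flip = {}
    for i in range(n):
        r = find(i)
        if r not in flip:
            flip[r] = rng() < 0.5
        if flip[r]:
            spins[i] = -spins[i]
    return spins
```

For a long-range model every pair (i, j) must be visited, which is exactly the O(N²) cost discussed below.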

The stochastic process achieved by the Swendsen-Wang algorithm is ergodic. It is also proved, with the help of the Fortuin-Kasteleyn representation of the partition function [11, 12], that the algorithm satisfies the detailed balance. The partition function of the Hamiltonian (1) is written as

 Z = ∑_c ∏_{i<j} e^{βJ_{ij}σ_iσ_j} = ∑_c ∏_{ℓ=1}^{N_b} e^{βJ_ℓσ_ℓ}, (4)

where N_b (= N(N−1)/2) denotes the total number of bonds, ℓ is the bond index, and J_ℓ and σ_ℓ are the coupling constant and the product of the spin states of the two ends of the bond ℓ, respectively. We first extend the original phase space of Ising spins {c} to the direct product of the phase spaces of spins {c} and graphs {g}. A graph g is defined by a set of variables g_ℓ (ℓ = 1, 2, …, N_b), each of which is defined on each bond (or link). The graph variable g_ℓ describes whether the ℓth bond is activated (g_ℓ = 1) or not (g_ℓ = 0). By using the extended phase space, the partition function (4) is expressed as

 Z=C∑c∑gω(c,g) (5)

with

 ω(c,g) = ∏_{ℓ=1}^{N_b} Δ(σ_ℓ,g_ℓ) V_ℓ(g_ℓ), (6)

where C is a constant and the summation ∑_g runs over all possible graph configurations. The weight functions Δ(σ_ℓ,g_ℓ) and V_ℓ(g_ℓ) are defined as

 Δ(σ_ℓ,g_ℓ) = { 0 if g_ℓ = 1 and σ_ℓ = −1; 1 otherwise }, (7)
 V_ℓ(g_ℓ) = (e^{2βJ_ℓ} − 1)^{g_ℓ}, (8)

respectively. The equality between Eqs. (4) and (5) is verified by carrying out the summation with respect to g in the latter:

 ∑_g ω(c,g) = ∑_g ∏_{ℓ=1}^{N_b} Δ(σ_ℓ,g_ℓ)V_ℓ(g_ℓ) = ∏_{ℓ=1}^{N_b} (Δ(σ_ℓ,0)V_ℓ(0) + Δ(σ_ℓ,1)V_ℓ(1)) = ∏_{ℓ=1}^{N_b} (1 + δ_{σ_ℓ,1}(e^{2βJ_ℓ} − 1)) = e^{β∑_{ℓ=1}^{N_b}J_ℓ} ∏_{ℓ=1}^{N_b} (e^{−βJ_ℓ} + δ_{σ_ℓ,1}(e^{βJ_ℓ} − e^{−βJ_ℓ})) = e^{β∑_{ℓ=1}^{N_b}J_ℓ} ∏_{ℓ=1}^{N_b} e^{βJ_ℓσ_ℓ}, (9)

and thus C = e^{−β∑_{ℓ=1}^{N_b}J_ℓ}. From Eq. (5), we can consider ω(c,g) as a weight of the configuration (c,g) in the extended phase space.

The Swendsen-Wang method is a procedure to update the spin configuration dynamically by going through an intermediate graph configuration. Consider that the initial spin configuration is c. We assign a graph g to the spin configuration with the following probability:

 P(g|c) = ω(c,g)/∑_{g′}ω(c,g′) = ∏_{ℓ=1}^{N_b} V_ℓ(g_ℓ)Δ(σ_ℓ,g_ℓ) / ∑_{g′_ℓ}V_ℓ(g′_ℓ)Δ(σ_ℓ,g′_ℓ). (10)

That is, for each bond we assign g_ℓ = 1 with probability

 P(g_ℓ=1|σ_ℓ) = V_ℓ(1)Δ(σ_ℓ,1) / [V_ℓ(1)Δ(σ_ℓ,1) + V_ℓ(0)Δ(σ_ℓ,0)] = δ_{σ_ℓ,1}(1 − e^{−2βJ_ℓ}). (11)

This probability turns out to be the same as the one in Eq. (3). The cluster construction in the Swendsen-Wang algorithm is thus equivalent to assigning graph variables in the Fortuin-Kasteleyn language. The second procedure (cluster flip) in the Swendsen-Wang algorithm is also represented clearly in the Fortuin-Kasteleyn representation: Under a given graph configuration g, a new spin configuration c′ is selected according to the probability

 P(c′|g) = ω(c′,g)/∑_c ω(c,g) = ∏_{ℓ=1}^{N_b} Δ(σ′_ℓ,g_ℓ) / ∑_c ∏_{ℓ=1}^{N_b} Δ(σ_ℓ,g_ℓ). (12)

Note that ∏_ℓ Δ(σ′_ℓ,g_ℓ) takes either 1 or 0, depending on whether all the active bonds have σ′_ℓ = 1, or not. The last equation means that among all the allowed spin configurations, which have nonzero ω(c′,g) for a given g, a configuration is chosen with equal probability. This is equivalent to flipping each cluster of spins connected by active bonds independently with probability 1/2.

The detailed balance condition of the Swendsen-Wang method is thus represented in a concrete form:

 P(c′|c)ω(c) = ∑_g P(c′|g)P(g|c)ω(c) = ∑_g ω(c,g)ω(c′,g)/ω(g), (13)

where ω(c) = ∑_g ω(c,g) and ω(g) = ∑_c ω(c,g). Since the rightmost side of Eq. (13) is symmetric under the exchange of the initial and terminal spin states c and c′, the detailed balance is satisfied automatically.

### 2.2 O(N log N) Method by Luijten and Blöte

It has turned out that the Swendsen-Wang cluster algorithm works quite well for a wide variety of systems without frustration. Especially, it almost completely removes the so-called critical slowing down near a continuous phase transition point. Since there is no constraint on the range of interactions in its construction, the Swendsen-Wang algorithm is also applicable to long-range interacting systems without any modification. However, since the number of bonds is O(N²) in such systems, the number of operations required for one Monte Carlo sweep is proportional to N², which is significantly more expensive than for the nearest-neighbor models.

A nifty solution for drastically reducing the number of operations from O(N²) to O(N log N) was devised by Luijten and Blöte [9]. What they noticed is that the activation probability can be separated into two parts:

 P_ℓ = p_ℓ δ_{σ_ℓ,1}, (14)
 p_ℓ = 1 − exp(−2βJ_ℓ). (15)

If one chooses candidate bonds with probability p_ℓ and then activates them with probability δ_{σ_ℓ,1} afterward, the probability P_ℓ is realized eventually. For choosing the candidate bonds, one can use a more efficient method than the exhaustive search, since p_ℓ is independent of the spin state and is predetermined statically at the beginning of the Monte Carlo simulation. Indeed, the number of candidate bonds is typically much smaller than N_b. The average number of candidate bonds is evaluated as

 ∑_{ℓ=1}^{N_b} p_ℓ = (1/2)∑_{i=1}^N ∑_{j≠i} (1 − e^{−2βJ_{ij}}) ∼ (1/2)∑_{i=1}^N ∫_1^{N^{1/d}} dr r^{d−1}(1 − e^{−2βJ(r)}) ∼ βN ∫_1^{N^{1/d}} dr r^{d−1} J(r). (16)

Here we assume translational invariance and that J_{ij} depends only on the distance, i.e., J_{ij} = J(r_{ij}). If J(r) decays faster than r^{−d}, which is equivalent to the condition of energy convergence for the ferromagnetic models, the integral in the last expression converges to a finite value in the thermodynamic limit. Thus, at a fixed temperature, the number of candidate bonds increases as O(N) instead of O(N²).

For choosing candidate bonds, Luijten and Blöte adopted a kind of rejection-free method, which is based on the binary search of cumulative probability tables. Let us define

 q^{(0)}_m = p_m ∏_{ℓ=1}^{m−1}(1 − p_ℓ) (q^{(0)}_1 = p_1), (17)
 C^{(0)}_m = ∑_{ℓ=1}^m q^{(0)}_ℓ (C^{(0)}_0 = 0 and C^{(0)}_{N_b+1} = 1), (18)

where q^{(0)}_m is the probability that the mth bond is eventually chosen as the first candidate after the failures for the first, second, …, and (m−1)th bonds, and C^{(0)}_m is the cumulative distribution of q^{(0)}_ℓ. When a uniform real random number r (0 ≤ r < 1) is generated, it satisfies C^{(0)}_{m−1} ≤ r < C^{(0)}_m with probability q^{(0)}_m. The first candidate bond m can then be chosen directly by searching for the first element of the table {C^{(0)}_m} larger than r. After the mth bond is activated or deactivated depending on its spin state σ_m, one can continue the same procedure using the tables

 q^{(m)}_n = p_n ∏_{ℓ=m+1}^{n−1}(1 − p_ℓ) (q^{(m)}_{m+1} = p_{m+1}), (19)
 C^{(m)}_n = ∑_{ℓ=m+1}^n q^{(m)}_ℓ (C^{(m)}_m = 0 and C^{(m)}_{N_b+1} = 1). (20)

In practice, one does not have to prepare the table C^{(m)}_n for every m, since C^{(m)}_n is readily expressed in terms of C^{(0)}_n and C^{(0)}_m as

 C^{(m)}_n = (C^{(0)}_n − C^{(0)}_m) / ∏_{ℓ=1}^{m}(1 − p_ℓ) = [p_m / ((1 − p_m) q^{(0)}_m)] (C^{(0)}_n − C^{(0)}_m). (21)

In other words, comparing C^{(m)}_n to a fresh random number r (0 ≤ r < 1) is equivalent to comparing C^{(0)}_n to C^{(0)}_m + r ∏_{ℓ=1}^{m}(1 − p_ℓ) in the original table.

Besides the initial table setup, which requires O(N_b) operations, searching for an element in the table can be performed very quickly by using the binary search algorithm. The number of operations required for each search is O(log N), which is significantly smaller than the O(N) of the naive sequential search. Since the average number of candidate bonds is O(N), a whole Monte Carlo sweep is accomplished by O(N log N) operations on average.
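As an illustration, the search described by Eqs. (17)-(21) might be sketched as follows (hypothetical Python, not the authors' code; the generator form, the table names, and the treatment of the tail past the last bond as termination are our own choices):

```python
import bisect
import random

def build_tables(p):
    """C[m] = C^(0)_m (cumulative first-candidate probabilities) and
    S[m] = prod_{l<=m} (1 - p_l) (survival factors)."""
    C, S = [0.0], [1.0]
    for pm in p:
        C.append(C[-1] + pm * S[-1])
        S.append(S[-1] * (1.0 - pm))
    return C, S

def candidate_bonds(p, rng=random.random):
    """Yield the 0-based candidate-bond indices of one sweep by binary search,
    reusing the single cumulative table via the shift/rescale trick of Eq. (21)."""
    C, S = build_tables(p)
    nb, m = len(p), 0
    while True:
        # map a fresh uniform number into the tail of the table past bond m
        x = C[m] + S[m] * rng()
        if x >= C[nb]:
            return                      # all remaining bonds fail
        m = bisect.bisect_right(C, x)   # first index with C[m] > x
        yield m - 1
```

Each `yield` costs one O(log N_b) binary search, which is the origin of the log N factor removed by the method of the next section.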

## 3 New O(N) Cluster Algorithm

### 3.1 Formulation of O(N) Method

The factor log N in the method of Luijten and Blöte is due to the fact that they use the binary search algorithm in looking for the next candidate. This factor might be removed if one could use some O(1) method instead of the binary search. Walker's method of alias [13, 14] has long been known as such an O(1) method to generate integer random numbers according to an arbitrary probability distribution (Appendix A), and is a potential candidate for the replacement. Unfortunately, with the Walker method one cannot use the smart trick presented in Eq. (21) for reducing the number of tables. It means that one has to prepare a table of length O(N_b) for each m before starting the simulation. The total amount of memory for storing all the tables is thus O(N_b²), which is not acceptable in practice. In the following, we present a different approach based on the extended Fortuin-Kasteleyn representation, which solves the storage problem and enables us to use the efficient method by Walker with a reasonable storage requirement, O(N²) (or O(N) for systems with translational invariance).
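Since the whole algorithm hinges on it, a minimal sketch of Walker's method may help here (a standard two-table formulation in Python; the names `build_alias` and `draw` are our own, see Appendix A for the derivation):

```python
import random

def build_alias(weights):
    """Walker's alias tables: O(K) setup for an arbitrary finite distribution."""
    k, total = len(weights), sum(weights)
    prob = [w * k / total for w in weights]   # rescaled so the mean is 1
    alias = [0] * k
    small = [i for i, q in enumerate(prob) if q < 1.0]
    large = [i for i, q in enumerate(prob) if q >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                   # the tall column l tops up the short one s
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def draw(prob, alias, rng=random.random):
    """O(1) sampling: pick a column uniformly, then use the stored alias."""
    i = int(rng() * len(prob))
    return i if rng() < prob[i] else alias[i]
```

Every draw costs exactly two random numbers and one comparison, independent of the table size, which is what removes the log N factor.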

Our central idea is assigning a nonnegative integer k_ℓ to each bond instead of a binary variable (active or inactive). The integer to be assigned is generated according to the Poisson distribution. The probability that a Poisson variable takes an integer value k is given by

 f(k;λ) = e^{−λ} λ^k / k!, (22)

where λ is the mean of the distribution. Note that f(0;λ) = e^{−λ} and therefore

 ∑_{k=1}^∞ f(k;λ) = 1 − e^{−λ}, (23)

which is equal to p_ℓ in Eq. (15) if one puts λ to be 2βJ_ℓ. That is, if one generates an integer according to the Poisson distribution with λ_ℓ = 2βJ_ℓ, it will take a nonzero value with probability p_ℓ. Thus the conventional procedure for activating bonds in the Swendsen-Wang algorithm can be modified as follows: Generate a Poisson variable for each bond with mean λ_ℓ = 2βJ_ℓ, then activate the bond only when the variable is nonzero and the spins are parallel. At first glance, it seems that the situation is getting worse, since a Poisson random number, instead of a binary one, is needed for each bond. At this point, however, we leverage an important property of the Poisson distribution: a superposition of independent Poisson processes is again a Poisson process, with no statistical correlation between any two events. This allows us to realize the whole distribution by generating just one Poisson random variable with the mean λ_tot = ∑_ℓ λ_ℓ. The following identity clearly represents the essence:

 ∏_{ℓ=1}^{N_b} f(k_ℓ;λ_ℓ) = f(k_tot;λ_tot) [k_tot! / (k_1! k_2! ⋯ k_{N_b}!)] ∏_{ℓ=1}^{N_b} (λ_ℓ/λ_tot)^{k_ℓ}, (24)

where k_tot = ∑_ℓ k_ℓ. This identity is verified in a straightforward way by substituting Eq. (22) into both sides. The left-hand side of Eq. (24) is the probability that k_ℓ is assigned to each bond ℓ. The right-hand side, on the other hand, stands for the probability of generating a single Poisson number k_tot and then distributing the k_tot events among the bonds with weight proportional to λ_ℓ. Distributing each event can be carried out in a constant time using Walker's method of alias. Since generating a Poisson number with the mean λ_tot takes only O(λ_tot) time on average, the number of operations of the whole procedure is also proportional to λ_tot, which is O(N) for energy-converging models.
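The equivalence expressed by Eq. (24) can be checked numerically; the sketch below (our own illustration, with a Knuth-style Poisson generator and a plain linear scan standing in for Walker's method) compares the per-bond means obtained by the two routes:

```python
import math
import random

def poisson(lam, rng):
    """Knuth-style Poisson generator; expected O(lam) time."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
lams = [0.3, 0.7, 1.0]      # the means 2*beta*J_l of three fictitious bonds
lam_tot = sum(lams)
n_samples = 20000

# Route A: an independent Poisson number for every bond.
mean_a = [0.0, 0.0, 0.0]
for _ in range(n_samples):
    for i, lam in enumerate(lams):
        mean_a[i] += poisson(lam, rng)
mean_a = [m / n_samples for m in mean_a]

# Route B: one Poisson number with mean lam_tot, then each event is
# assigned to bond i with probability lam_i / lam_tot  [Eq. (24)].
mean_b = [0.0, 0.0, 0.0]
for _ in range(n_samples):
    for _ in range(poisson(lam_tot, rng)):
        r, acc = rng.random() * lam_tot, 0.0
        for i, lam in enumerate(lams):
            acc += lam
            if r < acc:
                mean_b[i] += 1.0
                break
mean_b = [m / n_samples for m in mean_b]
# Both routes reproduce the exact means lam_i within statistical noise.
```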

Before closing this section, let us describe our algorithm in terms of an extended Fortuin-Kasteleyn representation. Introducing a configuration of integers k = {k_ℓ} instead of the graph g in the original representation, the partition function is expressed as

 Z = ∑_c ∑_k ∏_{ℓ=1}^{N_b} Δ(σ_ℓ,k_ℓ)V_ℓ(k_ℓ) = ∑_c ∏_{ℓ=1}^{N_b} ∑_{k_ℓ=0}^∞ Δ(σ_ℓ,k_ℓ)V_ℓ(k_ℓ) (25)

with

 Δ(σ,k) = { 0 if k ≥ 1 and σ = −1; 1 otherwise }, (26)
 V_ℓ(k) = e^{−βJ_ℓ} (2βJ_ℓ)^k / k!. (27)

The original partition function (4) is easily recovered by performing the summation over the k_ℓ's first.

### 3.2 Concrete Procedure

One Monte Carlo sweep of the algorithm is described as follows.

1. Generate a nonnegative integer k_tot according to the Poisson distribution with the mean λ_tot = 2βJ_tot, where J_tot = ∑_{ℓ=1}^{N_b} J_ℓ.

2. Repeat the following procedure k_tot times:

   (a) Choose a bond ℓ with probability

 J_ℓ / ∑_{ℓ′=1}^{N_b} J_{ℓ′} (28)

   by using Walker's method of alias.

   (b) If σ_ℓ = 1, then activate the bond ℓ. If the bond is already activated, just do nothing.

3. Flip each cluster with probability 1/2.

For a system with translational invariance, step (2-a) in the above procedure can be replaced by

   (a′) Choose a site i with probability 1/N, then choose another site j with probability

 J_{ij} / ∑_{j′≠i} J_{ij′}. (29)

In this way, the size of the tables for the modified probability and the alias number in the Walker method (see Appendices A and B for details) can be reduced from O(N²) down to O(N).
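The steps above can be sketched as follows (a hypothetical Python illustration, not the authors' code; `build_alias` is a minimal Walker-table constructor as described in Appendix A, and the union-find bookkeeping is our own choice):

```python
import math
import random

def build_alias(weights):
    """Walker alias tables for O(1) bond selection with weight J_l / J_tot."""
    k, total = len(weights), sum(weights)
    prob, alias = [w * k / total for w in weights], [0] * k
    small = [i for i, q in enumerate(prob) if q < 1.0]
    large = [i for i, q in enumerate(prob) if q >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sweep(spins, bonds, J, beta, rng):
    """One O(N) cluster sweep: Poisson event count, alias-sampled bonds,
    union-find clusters, then independent cluster flips."""
    n, j_tot = len(spins), sum(J)
    prob, alias = build_alias(J)      # in a real simulation, built once and reused
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Step 1: draw k_tot from the Poisson distribution with mean 2*beta*J_tot
    L, k_tot, p = math.exp(-2.0 * beta * j_tot), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            break
        k_tot += 1

    # Step 2: distribute the k_tot events over the bonds [Eq. (28)]
    for _ in range(k_tot):
        i = int(rng.random() * len(J))
        b = i if rng.random() < prob[i] else alias[i]
        u, v = bonds[b]
        if spins[u] == spins[v]:      # activate only bonds with parallel spins
            parent[find(u)] = find(v)

    # Step 3: flip each cluster with probability 1/2
    flip = {}
    for s in range(n):
        r = find(s)
        if r not in flip:
            flip[r] = rng.random() < 0.5
        if flip[r]:
            spins[s] = -spins[s]
    return k_tot
```

Note that only the k_tot sampled bonds are ever touched, so the cost per sweep is O(N + k_tot) rather than O(N²).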

### 3.3 Total Energy and Specific Heat Measurement

Measuring the total energy is also costly for long-range interacting models. In Ref. [15], Krech and Luijten proposed a method based on the fast Fourier transform. In this section, however, we show that the total energy and the specific heat can also be calculated in O(N) time in the present algorithm. Indeed, both quantities are obtained free of charge during the Monte Carlo sweeps.

Let us consider the expression for the energy in the extended Fortuin-Kasteleyn representation. Differentiating the partition function (25) with respect to the inverse temperature, we obtain

 E = −(∂/∂β) ln Z = ⟨J_tot − (1/β)∑_ℓ k_ℓ⟩_MC = J_tot − (1/β)⟨∑_ℓ k_ℓ⟩_MC, (30)

where J_tot = ∑_{ℓ=1}^{N_b} J_ℓ, and ⟨⋯⟩_MC denotes the Monte Carlo average of an observable in the present algorithm. Thus, in order to calculate the total energy, nothing more than the information one uses during the Monte Carlo sweeps is needed. The same applies to the specific heat. Differentiating the right-hand side of Eq. (30) once again, one obtains the following expression

 C = −(β²/N) dE/dβ = −(β²/N)[(1/β²)⟨∑_ℓ k_ℓ⟩_MC − ⟨(J_tot − (1/β)∑_ℓ k_ℓ)²⟩_MC + ⟨J_tot − (1/β)∑_ℓ k_ℓ⟩²_MC] = (1/N)[⟨(∑_ℓ k_ℓ)²⟩_MC − ⟨∑_ℓ k_ℓ⟩²_MC − ⟨∑_ℓ k_ℓ⟩_MC] (31)

for the specific heat, which is not simply the variance of the energy (30) but has an extra term −⟨∑_ℓ k_ℓ⟩_MC/N. We note that the expressions for the total energy (30) and the specific heat (31) have a close relation with those for the quantum Monte Carlo method in the continuous imaginary-time path integral or the high-temperature series representations [16].
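A hedged sketch of the corresponding estimators (plain Python; the function name is ours, with E the total energy of Eq. (30) and C the specific heat per spin of Eq. (31), fed by the per-sweep event totals k_tot that the algorithm produces anyway):

```python
def energy_and_specific_heat(k_samples, j_tot, beta, n_spins):
    """Estimate E [Eq. (30)] and C [Eq. (31)] from per-sweep totals k_tot."""
    m = len(k_samples)
    k_mean = sum(k_samples) / m
    k2_mean = sum(k * k for k in k_samples) / m
    energy = j_tot - k_mean / beta                            # Eq. (30)
    specific_heat = (k2_mean - k_mean ** 2 - k_mean) / n_spins  # Eq. (31)
    return energy, specific_heat
```

No extra pass over the bonds is needed, which is the "free of charge" property noted above.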

### 3.4 Quantum Cluster Algorithm for the Transverse-field Ising Model

The Monte Carlo algorithm can be extended quite naturally to quantum spin systems with long-range interactions. In this section, as the simplest example, we present a quantum cluster algorithm for the long-range Ising model in a transverse external field. Application to other quantum spin models, such as the Heisenberg or XY models, is also straightforward.

The Hamiltonian of the transverse-field Ising model with long-range interactions is defined as

 H = −∑_{i<j} J_{ij} σ^z_i σ^z_j − Γ ∑_{i=1}^N σ^x_i, (32)

where Γ denotes the strength of the transverse external field, and σ^x_i and σ^z_i are the Pauli operators at site i. According to the standard prescription [17], we start by dividing the Hamiltonian (32) into two parts, the bond terms H_b = −∑_{i<j} J_{ij} σ^z_i σ^z_j and the site terms H_s = −Γ∑_i σ^x_i. The partition function is then expanded as

 Z = Tr e^{−βH} = lim_{M→∞} ∑_{ϕ_1,…,ϕ_M} ∏_{m=1}^M ⟨ϕ_m| e^{−βH_b/M} e^{−βH_s/M} |ϕ_{m+1}⟩, (33)

where M is the number of Suzuki-Trotter slices along the imaginary-time axis. In Eq. (33), we inserted the identities 1 = ∑_{ϕ_m} |ϕ_m⟩⟨ϕ_m| between the operators. The basis set {|ϕ_m⟩} is chosen so that the σ^z_i are diagonalized (and so is H_b), and H_b|ϕ_m⟩ = E_m|ϕ_m⟩. We impose the periodic boundary conditions in the imaginary-time direction: |ϕ_{M+1}⟩ = |ϕ_1⟩. Expanding the exponential operators of the site Hamiltonian to the first order, we obtain the following discrete imaginary-time path integral:

 Z = lim_{M→∞} ∑_{ϕ_1,…,ϕ_M} ∏_{m=1}^M e^{−(β/M)E_m} ⟨ϕ_m| ∏_{i=1}^N [1 + (βΓ/M)(σ^+_i + σ^−_i)] |ϕ_{m+1}⟩ = C lim_{M→∞} ∑_{ϕ_1,…,ϕ_M} ∏_{m=1}^M [∏_{i<j} e^{(β/M)J_{ij}σ_{i,m}σ_{j,m}} ∏_{i=1}^N e^{(1/2)ln(M/βΓ) σ_{i,m}σ_{i,m+1}}], (34)

where σ^±_i are the spin ladder operators, σ_{i,m} = ±1 denotes the eigenvalue of σ^z_i in the mth slice, and C is a constant. Thus, the partition function of the transverse-field Ising chain of N sites is represented by that of a two-dimensional classical Ising model of N × M sites, where the interactions are long-ranged along one axis (real-space direction) and short-ranged along the other axis (imaginary-time direction). The coupling constants in the two directions are J_{ij}/M and (1/2β) ln(M/βΓ), respectively, where β is regarded as a fictitious inverse temperature of the mapped system. The cluster algorithm presented in the previous subsection can then be applied to this classical Ising model straightforwardly.

Furthermore, it has been shown that one can take the Trotter limit (M → ∞) in Eq. (34) and perform Monte Carlo simulations directly in the imaginary-time continuum [18, 19]. This is possible because the coupling constant along the imaginary-time axis increases as M does. The average number of antiparallel pairs (or kinks) remains finite even in the continuous-time limit, and therefore one does not have to take configurations with an infinite number of kinks into account. Specifying the number of kinks by n and their space-time positions by (s_p, τ_p) (p = 1, 2, …, n), we obtain the continuous-time path integral representation of the partition function:

 Z = ∑_{ϕ_0} [e^{−βE_0} + ∑_{n=1}^∞ ∑_{{s_p}} ∫_0^β dτ_1 ∫_{τ_1}^β dτ_2 ⋯ ∫_{τ_{n−1}}^β dτ_n Γ^n ∏_{p=1}^{n+1} e^{−(τ_p−τ_{p−1})E_{p−1}}], (35)

where τ_0 = 0 and τ_{n+1} = β, and E_p (p = 0, 1, …, n) is the diagonal energy of the spin configuration between the pth and (p+1)th kinks. In Fig. 1(a) an example of a path integral configuration is shown.

The cluster algorithm is also defined directly in the Trotter limit. Since the activation probability of a temporal bond with parallel spins, 1 − βΓ/M, becomes almost unity for M → ∞, the probability of finding n inactive links (open squares in Fig. 1) in a uniform temporal segment of unit imaginary time, which contains M/β Trotter slices, is given by a Poisson distribution,

 _{M/β}C_n (1 − βΓ/M)^{M/β−n} (βΓ/M)^n ≈ f(n;Γ). (36)

Similarly, the probability of finding n spatial candidate links (horizontal dashed lines in Fig. 1) between parallel spins at sites i and j in unit imaginary time is f(n;2J_{ij}). After all, the overall probability of finding n events in total at some site or bond in unit imaginary time is given by f(n;Λ) with

 Λ=NΓ+2Jtot. (37)

Since these events are statistically independent of each other, a series of events is generated successively by using the exponential distribution for the temporal interval between two adjacent events:

 p(t)dt=Λe−Λtdt. (38)

At each event time, a site or a bond is then chosen according to the probability Γ/Λ (for each site) or 2J_{ij}/Λ (for each bond), respectively. This is again done in constant time by using the Walker method. If a site is chosen, the temporal bond is deactivated, i.e., the clusters are disconnected at this space-time position. If a bond is selected (and if the spins on its ends are parallel), on the other hand, a spatial link is inserted, i.e., the two sites are connected at this imaginary time (horizontal solid lines in Fig. 1). At the space-time points where the spin changes its direction (open circles in Fig. 1), we always deactivate the temporal bond. By repeating this procedure until the imaginary time β is reached, the whole lattice is divided into several clusters [Fig. 1(b)]. Finally, each cluster is flipped with probability 1/2 to generate a terminal configuration.
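The event generation described above might be sketched as follows (hypothetical Python; the event labels 'cut'/'link' and a linear scan in place of the O(1) Walker lookup are our own simplifications):

```python
import math
import random

def generate_events(beta, n_sites, gamma, J_pairs, rng):
    """Generate the space-time events of one continuous-time sweep:
    exponential waiting times with total rate Lambda = N*Gamma + 2*J_tot
    [Eqs. (37), (38)], each event labelled 'cut' (site) or 'link' (bond)."""
    j_tot = sum(w for _, _, w in J_pairs)
    lam = n_sites * gamma + 2.0 * j_tot
    events, tau = [], 0.0
    while True:
        tau += -math.log(1.0 - rng.random()) / lam   # exponential interval
        if tau >= beta:
            return events
        if rng.random() < n_sites * gamma / lam:
            site = int(rng.random() * n_sites)
            events.append(("cut", tau, site))         # deactivate temporal bond
        else:
            # choose a spatial bond with weight 2*J_ij / (2*J_tot); a linear
            # scan here stands in for the constant-time Walker lookup
            r, acc = rng.random() * j_tot, 0.0
            for i, j, w in J_pairs:
                acc += w
                if r < acc:
                    events.append(("link", tau, (i, j)))
                    break
```

The cluster construction then consumes this ordered event list while walking up the imaginary-time axis.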

The number of operations per Monte Carlo sweep is proportional to the number of generated events. Its average is given by βΛ = β(NΓ + 2J_tot), which is proportional to the system size, as in the algorithm for classical models. We note that the quantum cluster algorithm presented in this section can also be formulated in the same way in the high-temperature series representation [10].

## 4 Performance Test

In order to demonstrate the efficiency of the present method, we carried out Monte Carlo simulations for the classical mean-field (or infinite-range) model of various system sizes. We use the naive Swendsen-Wang and Luijten-Blöte methods as benchmarks. The coupling constants of the mean-field model are given by

 J_{ij} = 1/N (39)

for all pairs i < j. The denominator N is introduced to prevent the energy density of the system from diverging in the thermodynamic limit. We choose the mean-field model as the severest test case for these algorithms, though simpler and faster algorithms, and even exact analytic results, exist for this specific model. The benchmark test was performed on a PC workstation (CentOS Linux 5.1, Intel Xeon 3.2 GHz, 1 MB cache, GNU C++ 4.1.1).

We confirmed that all these algorithms produce the same results, for the total energy, specific heat, magnetization density squared, Binder cumulant, etc., within the error bars in the whole temperature range we simulated. The CPU time spent for one Monte Carlo sweep at the critical temperature is shown in Fig. 2. For the naive Swendsen-Wang algorithm, as one expects, the CPU time grows rapidly as N². On the other hand, it is clearly seen in Fig. 2 that the present algorithm has a different scaling, linear in the system size, and is indeed much faster than the Swendsen-Wang method except for very small system sizes. The present and Luijten-Blöte methods exhibit a similar scaling behavior, but the former is faster for all the system sizes we simulated. To see the difference in scaling behavior in detail, we plot the relative speed of the present algorithm to the latter in the inset of Fig. 2. For moderate system sizes, it scales as log N, which is consistent with the performance difference between the Walker and the binary search algorithms. For the largest system sizes, however, the results for the present algorithm start to deviate from the N-linear scaling. Those for the Luijten-Blöte method show a similar anomalous behavior, but the situation is much worse in this case, as seen in the inset of Fig. 2. We attribute these anomalies to the occurrence of cache misses, for the spin configuration of a very large system does not fit in the cache memory, whose size is typically a few MB. The naive Swendsen-Wang method should also suffer from the same problem, but in the present benchmark test its effect seems to be hidden under the quadratic growth in the number of operations.

In summary, among the three existing algorithms the present method is the fastest except for very small system sizes. Especially, it outperforms the Swendsen-Wang method by four orders of magnitude and the Luijten-Blöte method by about a factor of twenty at the largest system sizes. This efficiency of the present method enables us to simulate much larger systems or further improve statistics as compared with the previous Monte Carlo studies, as demonstrated in the next section.

## 5 Kosterlitz-Thouless Transition in Ising Chain with Inverse-square Interactions

In this section, as a nontrivial example, we apply our cluster algorithm to the phase transition of the one-dimensional Ising model with inverse-square interactions [Eq. (2) with α = 1]. As mentioned in the introduction, among the models with algebraically decaying interactions, this model is special as a boundary case, i.e., it has the weakest (or shortest-ranged) interactions that can trigger a finite-temperature phase transition. What is more, this phase transition belongs to the same universality class as the Kosterlitz-Thouless transition [3, 4, 5], where logarithmic excitations brought about by the formation of domain walls compete with the entropy generation. The Kosterlitz-Thouless transition is characterized by an exponential divergence of the correlation length toward the critical temperature and a finite jump in the magnetization. Especially, the magnitude of the magnetization jump at the critical point is conjectured to satisfy the following universal relation:

 2m² = T_KT, (40)

where m² = lim_{L→∞} ⟨m²_L⟩, ⟨m²_L⟩ being the square of the magnetization density of the finite system. For the classical Ising chain with inverse-square interactions, it has been confirmed by an extensive Monte Carlo study [20] that a phase transition of Kosterlitz-Thouless universality occurs.

We perform Monte Carlo simulations by using the O(N) cluster algorithm for a series of chain lengths L. We impose periodic boundary conditions. In order to minimize the effect of the boundary conditions, we use the following renormalized coupling constant

 ~J_{ij} = ∑_{n=−∞}^∞ 1/(i − j − nL)² = π² / [L² sin²(π(i−j)/L)], (41)

in which the contribution from all periodic images is taken into account. It reduces to the bare coupling constant (2) in the thermodynamic limit L → ∞. Measurement of physical quantities is performed for 524288 Monte Carlo sweeps after discarding 8192 sweeps for thermalization.
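The image-sum identity (41) is easy to check numerically (a small Python sketch with our own truncation choice for the image sum):

```python
import math

def renormalized_coupling(i, j, L, n_images=20000):
    """Direct image sum  sum_n 1/(i - j - n L)^2, truncated at +-n_images."""
    return sum(1.0 / (i - j - n * L) ** 2
               for n in range(-n_images, n_images + 1))

def closed_form(i, j, L):
    """Closed form  pi^2 / (L^2 sin^2(pi (i - j) / L))  of Eq. (41)."""
    return math.pi ** 2 / (L ** 2 * math.sin(math.pi * (i - j) / L) ** 2)
```

The truncation error of the direct sum falls off as 1/(n_images L²), so the two expressions agree to several digits already for modest cutoffs.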

In Fig. 3, we show the temperature dependence of the magnetization density squared m² for (a) Γ = 0 and (b) Γ = 1. For the classical system (Γ = 0), our results coincide quite well with the previous Monte Carlo study [20]. In both cases, m² decreases monotonically as the temperature increases. At high temperatures, m² vanishes quite rapidly as the system size increases, while it seems to converge to a finite value in the low-temperature regime, though the convergence is rather slow. This suggests an emergence of long-range order at some finite critical temperature. At low temperatures, m² of the quantum system is smaller than that of the classical one; it remains below unity even in the zero-temperature limit for Γ = 1, in contrast to the classical case. This is due to quantum fluctuations introduced by the transverse external field. In a previous quantum Monte Carlo study [10], intersections of magnetization curves for different system sizes at intermediate temperatures were reported. In the present study, however, we do not observe such nonmonotonic behavior regardless of the system size. We attribute this discrepancy to a relaxation problem in the Monte Carlo calculation in Ref. [10], where only a local flip scheme is used for updating spin configurations.

In order to discuss the critical behavior in detail, next we perform a finite-size scaling analysis. As is well known, the standard finite-size scaling technique does not work in the case of the Kosterlitz-Thouless transition, for the correlation length exhibits an exponential divergence. Instead of the ordinary finite-size scaling, which depends on an algebraic divergence of the correlation length, an alternative scaling form for the magnetization has been suggested from the renormalization group equations [5, 21, 22]:

 2m²T^{−1} − 1 = ℓ^{−1} F(tℓ²), (42)

where F is a scaling function, t = T − T_KT, ℓ = ln L + c, and c is a constant. It has been confirmed that this finite-size scaling assumption works well for the two-dimensional XY model [22], in which the helicity modulus, instead of the magnetization, is the quantity exhibiting a universal jump.

In Fig. 4, we show the scaling plots for Γ = 0 and 1. Both data sets are scaled excellently by using the same scaling form (42), where we have only two fitting parameters, T_KT and c. This strongly supports that the magnetization shows the universal jump (40) at the critical point. From these scaling plots we conclude

 T_KT = { 1.52780(9) for Γ = 0; 1.38460(25) for Γ = 1 } (43)

for the Kosterlitz-Thouless critical temperature. The result for the classical case can be compared with that of the previous Monte Carlo study [20], from which it differs slightly, beyond the error bar. This tiny discrepancy might be due to the difference in the way of the scaling analysis. In Ref. [20] the magnetization data at low temperatures are first extrapolated to the thermodynamic limit and then further extrapolated towards the critical point, whereas the Monte Carlo data are used directly to estimate the critical temperature in the present finite-size scaling analysis. Thus, we expect that the present estimate for T_KT is more reliable.

As for the quantum system (Γ = 1), the critical temperature is lower than the classical one due to the quantum fluctuations. However, the finite-size scaling analysis confirms that the phase transition belongs to the Kosterlitz-Thouless universality class, as in the classical case. The finite-size scaling plots shown in Fig. 4 suggest that the scaling function itself is universal as well. As Γ is increased further, T_KT decreases monotonically, and it finally vanishes at a critical transverse field Γ_c [23], where a quantum phase transition occurs. At this point some exotic quantum critical behavior with a nontrivial dynamical exponent is expected, since the (1+1)-dimensional system is extremely anisotropic, i.e., in the real-space direction the system has long-range interactions, whereas the interaction in the imaginary-time direction is still short-ranged. More detailed analyses of the quantum criticality of the transverse-field Ising model will be presented elsewhere [23].

## 6 Summary and Discussion

We presented an O(N) cluster algorithm for Ising models with long-range interactions. The algorithm proposed in the present paper is exact, i.e., it does not introduce any cutoff for the interaction range and thus strictly fulfills the detailed balance. Our algorithm is formulated based on the extended Fortuin-Kasteleyn representation, in which each bond variable takes a nonnegative integer value instead of a binary one. For each bond, an integer is generated according to a Poisson distribution. However, this does not mean that each Poisson variable has to be generated one by one. We showed that generating the total number of activation events from a single overall Poisson distribution, and assigning them to individual bonds afterwards using Walker's method of alias, is statistically equivalent to the naive Swendsen-Wang method. In Sec. 4, we demonstrated the N-linear scaling of the CPU time for the mean-field model.
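To make this concrete, the following Python sketch (our own illustrative code, not the authors' implementation) performs one such bond-activation step: the total number of activation events is drawn from a single Poisson distribution, and the events are then distributed among the bonds in proportion to their means. Here `random.choices`, which internally performs a cumulative-weight search, merely stands in for a genuine Walker alias table, which would make each assignment O(1); the Poisson sampler uses Knuth's textbook method, adequate for modest means.

```python
import math
import random

def poisson(lam, rng=random):
    """Sample from Poisson(lam) by Knuth's method (fine for modest lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def activate_bonds(lam, rng=random):
    """One bond-activation step: draw the total number of events from a
    single Poisson distribution with mean sum(lam), then distribute the
    events among the bonds with probabilities lam[b] / sum(lam)."""
    lam_tot = sum(lam)
    n_events = poisson(lam_tot, rng)
    counts = [0] * len(lam)
    # Walker's alias method would make each of these draws O(1);
    # random.choices is used here only for brevity.
    for b in rng.choices(range(len(lam)), weights=lam, k=n_events):
        counts[b] += 1
    return counts  # counts[b] = nonnegative integer bond variable
```

Per bond, the resulting integer is Poisson distributed with mean lam[b], exactly as if each bond had been treated one by one, but the work per sweep is proportional to the number of events rather than to the number of bonds.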

The present method has several advantages over the existing methods, such as the Metropolis method, the Swendsen-Wang algorithm [7], the improvement by Luijten and Blöte [9], or the recently proposed method [24]: (a) The CPU time per Monte Carlo sweep is O(N). (b) It works effectively both for short-range and long-range interacting models. (c) It is a cluster algorithm and free from critical slowing down near the critical point. (d) It is possible to formulate a single-cluster variant [8]. (e) It is very easy to implement the algorithm based on an existing Swendsen-Wang code. (f) It can also be used for systems without translational invariance, though initializing the lookup tables then costs extra time, of the order of the number of bonds, once at the beginning. (g) Calculation of the total energy and the specific heat can be done together in O(N) time. (h) It can be applied to Potts, XY, and Heisenberg models with the help of Wolff's embedding technique [8]. (i) Extension to quantum models, such as the transverse-field Ising model or the Heisenberg model, is also straightforward.

In Sec. 5, we applied our new algorithm to the phase transition of the Ising model with inverse-square interactions, and saw that the method works ideally for both the classical and quantum systems. It is confirmed with high accuracy that the phase transition belongs to the same universality class as the Kosterlitz-Thouless transition.

Finally, let us discuss the efficiency of the present algorithm at very low temperatures. For a fixed system size N, the calculation cost of the present method grows linearly as the inverse temperature β increases, whereas that of the naive method is constant regardless of the temperature for classical Ising models. This indicates that below some threshold temperature the naive method outperforms the present one. At extremely low temperatures, almost all the bonds are activated; the present method then activates such bonds many times, which is the cause of the slowing down. For quantum systems the β-linear increase of CPU time is inevitable, since the standard quantum Monte Carlo algorithms for short-range models also suffer from the same slowing down. For classical models, however, one can adopt a "hybrid" scheme to optimize the calculation cost at intermediate temperatures. Suppose the couplings J_ℓ are sorted in descending order, and we use the naive method for the first n bonds and the O(N) method for the others. The CPU time per Monte Carlo sweep is then estimated as

 C(n) \simeq A n + B \sum_{\ell = n+1}^{N_b} 2\beta J_\ell, \qquad (44)

where A and B are some constants. The optimal value of n is then determined by the condition dC(n)/dn = 0, i.e., J_{n_opt} ≃ A/(2Bβ). For the one-dimensional model with algebraically decaying interactions (2), for example, we have

 \frac{n_{\mathrm{opt}}}{N_b} \approx N^{-1} \left( \frac{2B\beta}{A} \right)^{\frac{1}{1+\alpha}}. \qquad (45)

The threshold is defined as the inverse temperature at which n_opt = N_b, that is, β_th = (A/2B) N^{1+α}, which grows as the system size increases.
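The cost model (44) and the resulting optimum can be checked numerically with a short sketch; the constants A and B, the inverse temperature, and the couplings below are arbitrary illustrative values, not taken from the simulations in this paper.

```python
# Toy evaluation of the hybrid-scheme cost C(n) of Eq. (44).
# A, B, beta, and the couplings are arbitrary illustrative values.

def optimal_n(J_sorted, A, B, beta):
    """Return the n minimizing C(n) = A*n + B * sum_{l > n} 2*beta*J_l,
    for couplings J_sorted given in descending order."""
    Nb = len(J_sorted)
    # suffix[n] = sum over the couplings handled by the O(N) method
    suffix = [0.0] * (Nb + 1)
    for n in range(Nb - 1, -1, -1):
        suffix[n] = suffix[n + 1] + 2.0 * beta * J_sorted[n]
    costs = [A * n + B * suffix[n] for n in range(Nb + 1)]
    return min(range(Nb + 1), key=costs.__getitem__)
```

For A = B = 1, β = 10, and couplings J_ℓ = ℓ^(−2), the marginal saving 2βJ_{n+1} drops below A between ℓ = 4 and ℓ = 5, so the minimum sits at n = 4, in agreement with the condition J_{n_opt} ≃ A/(2Bβ).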

Part of the simulations in the present paper was performed using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo. The simulation code has been developed based on the ALPS/looper library [25, 26, 27]. The authors acknowledge support by the Grant-in-Aid for Scientific Research Program (No. 15740232) from JSPS, and also by the Grand Challenge to Next-Generation Integrated Nanoscience, Development and Application of Advanced High-Performance Supercomputer Project from MEXT, Japan.

## Appendix A Walker’s Method of Alias

Consider a random variable which takes an integral value i (i = 1, 2, …, N) with probability p_i (p_i ≥ 0 and Σ_i p_i = 1). In this appendix, we discuss how to generate such random numbers efficiently. One of the simplest and most well-known methods is based on rejection:

Rejection Method

1. Generate a uniform integral random variable i (1 ≤ i ≤ N).

2. Generate a uniform real random variable r (0 ≤ r < 1).

3. If r is smaller than p_i / p_max, then accept i; otherwise repeat from (1).

Here, p_max = max_i p_i. Since the acceptance rate in step (3) is p_i / p_max (≤ 1), the probability of eventually obtaining i is p_i. Notice that the number of iterations is N p_max on average, and therefore each generation would take O(N p_max) time. In particular, the efficiency decreases quite rapidly as the variance of the p_i increases. One may reduce the number of operations down to O(log N) by employing a binary search on the table of cumulative probabilities (see Sec. 2.2). However, there exists an even more effective method, called "Walker's method of alias" [13, 14], which is rejection free and generates a random integer in a constant time.
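In Python (our own notation, with 0-based indices), the rejection procedure can be sketched as:

```python
import random

def sample_rejection(p, rng=random):
    """Draw i in {0, ..., N-1} with probability p[i] by rejection.
    Each trial accepts with probability p[i] / p_max, so the average
    number of trials is N * p_max."""
    N = len(p)
    p_max = max(p)
    while True:
        i = rng.randrange(N)       # step (1): uniform integer
        r = rng.random()           # step (2): uniform real in [0, 1)
        if r < p[i] / p_max:       # step (3): accept i, else retry
            return i
```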

The Walker algorithm requires two tables of size N, which need to be calculated in advance. One is the table of integral alias numbers A_i (1 ≤ A_i ≤ N) and the other is that of modified probabilities P_i (0 ≤ P_i ≤ 1). Using these tables, a random integer is generated by the following procedure:

Walker’s Method of Alias

1. Generate a uniform integral random variable i (1 ≤ i ≤ N).

2. Generate a uniform real random variable r (0 ≤ r < 1).

3. If r is smaller than P_i, then accept i; otherwise take the alias, i ← A_i.

This procedure contains no iterations and thus completes in a constant time. The meaning of the tables P_i and A_i, and the correctness of the algorithm, are readily understood from the following example:

| i   | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | 11   | 12   |
|-----|------|------|------|------|------|------|------|------|------|------|------|------|
| p_i | 0    | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |
| P_i | 0    | 1/3  | 2/3  | 3/3  | 3/3  | 2/3  | 2/3  | 1/3  | 2/3  | 0    | 2/3  | 1/3  |
| A_i | 10   | 9    | 8    | *    | *    | 5    | 6    | 6    | 7    | 7    | 8    | 8    |

The modified probabilities P_i are determined from the p_i (see Appendix B); P_i gives the probability of accepting the firstly chosen number i rather than taking the alias number A_i. Let us consider, for example, the probability of obtaining i = 9. There are two possibilities: one is that i = 9 is generated in step (1) and accepted in step (3), and the other is that i = 2 is generated and rejected, since A_2 = 9. The sum of these two probabilities is

 \frac{1}{12}\left[ P_9 + (1 - P_2) \right] = \frac{1}{9}, \qquad (46)

which is equal to p_9 = 4/36, as expected. One can confirm that the P_i and A_i in the example are chosen so that

 p_i = \frac{1}{N}\left[ P_i + \sum_{j=1}^{N} (1 - P_j)\, \delta_{i, A_j} \right] \qquad (47)

holds for every i. Together with the ordinary requirement for probabilities, 0 ≤ P_i ≤ 1, Eq. (47) is the necessary condition that the P_i and A_i must satisfy.

In practice, when N is not a power of two, we expand the size of the tables from N to 2^m, where m is the smallest integer satisfying 2^m ≥ N, and assume p_i = 0 for the added entries. In this way, generating i in step (1) is optimized as a bit-shift operation on a 32- or 64-bit integral random number [14]. Furthermore, steps (2) and (3) can be replaced by a comparison between two integral variables by preparing a table of 32-bit (or 64-bit) integers instead of the floating-point numbers P_i, by which a costly conversion from an integer to a floating-point variable can also be avoided.

In summary, by using the Walker method, integral random numbers distributed according to arbitrary probabilities can be generated in a constant time. This extreme efficiency is essential for the present cluster Monte Carlo method. In the next appendix, we describe how to prepare the tables P_i and A_i.
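The three steps translate directly into code. The sketch below (our own notation, 0-based indices) uses the P_i and A_i tables of the 12-entry example above; the two entries marked "*" in the text never take the alias branch, since their P_i equals unity, so they are simply aliased to themselves.

```python
import random

# Walker tables for the 12-entry example, converted to 0-based indices.
P = [0, 1/3, 2/3, 1, 1, 2/3, 2/3, 1/3, 2/3, 0, 2/3, 1/3]
A = [9, 8, 7, 3, 4, 4, 5, 5, 6, 6, 7, 7]

def sample_alias(P, A, rng=random):
    """Walker's method of alias: constant time per draw."""
    i = rng.randrange(len(P))        # step (1): uniform integer
    r = rng.random()                 # step (2): uniform real in [0, 1)
    return i if r < P[i] else A[i]   # step (3): accept i or take alias
```

One can verify directly that these tables satisfy Eq. (47), i.e., that the probability of returning i is exactly p_i of the example.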

## Appendix B Preparation of Modified Probabilities and Aliases

In the original paper by Walker [13] and also in the standard literature [14], only a naive method is presented for initializing P_i and A_i. Here we propose, for the first time, an efficient alternative procedure, which takes only O(N) time.

Consider again the following table of probabilities for the N = 12 example of Appendix A:

| i   | 1 | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | 11   | 12   |
|-----|---|------|------|------|------|------|------|------|------|------|------|------|
| p_i | 0 | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |
| P_i | 0 | 1/3  | 2/3  | 3/3  | 4/3  | 5/3  | 6/3  | 5/3  | 4/3  | 3/3  | 2/3  | 1/3  |

Here P_i is initially set to the tentative value N p_i for each i. First we rearrange the table so that all the elements with P_i ≥ 1 precede those with P_i < 1:

| i   | 4   | 5   | 6   | 7   | 8   | 9   | 10 ▽ | 12  | 11  | 3   | 2   | 1 ▼ |
|-----|-----|-----|-----|-----|-----|-----|------|-----|-----|-----|-----|-----|
| P_i | 3/3 | 4/3 | 5/3 | 6/3 | 5/3 | 4/3 | 3/3  | 1/3 | 2/3 | 2/3 | 1/3 | 0   |

The rearrangement can be done in O(N) steps, in contrast to a perfect sorting, which is an O(N log N) procedure. The white triangle points to the rightmost element with P_i ≥ 1 and the black triangle points to the rightmost element of the rearranged table.

Next, we determine the alias numbers sequentially from the right. We fill the "shortfall" of the element pointed to by the solid triangle using the one pointed to by the white triangle. The latter always has enough weight, since P_i ≥ 1 there by definition. In the present example, the shortfall of the rightmost element, 1 − P_1 = 1, is first filled by the element with i = 10. The alias number A_1 is then set to 10 and P_10 is replaced by P_10 − (1 − P_1) = 0. Since P_10 is no longer larger than or equal to unity, we shift the white triangle to the left by one. We repeat the same for the next "unfilled" element. After five iterations, the table is transformed as follows:

| i   | 4   | 5   | 6   | 7 ▽ | 8   | 9   | 10 ▼ | 12  | 11  | 3   | 2   | 1   |
|-----|-----|-----|-----|-----|-----|-----|------|-----|-----|-----|-----|-----|
| P_i | 3/3 | 4/3 | 5/3 | 6/3 | 1/3 | 2/3 | 0    | 1/3 | 2/3 | 2/3 | 1/3 | 0   |
| A_i |     |     |     |     |     |     |      | 8   | 8   | 8   | 9   | 10  |

Here the solid and white triangles have been shifted five and three times from their original positions, respectively. The above procedure is repeated until the black triangle points to the same element as the white one, i.e., until all the elements are filled. Note that the black triangle always moves by one after each iteration, whereas the white one may stay on the same element, depending on whether or not P_i ≥ 1 holds after the step. The whole procedure is thus completed after at most N iterations. In the present example, after 10 iterations we end up with

| i   | 4   | 5 ▼▽ | 6   | 7   | 8   | 9   | 10  | 12  | 11  | 3   | 2   | 1   |
|-----|-----|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| P_i | 3/3 | 3/3  | 2/3 | 2/3 | 1/3 | 2/3 | 0   | 1/3 | 2/3 | 2/3 | 1/3 | 0   |
| A_i | *   | *    | 5   | 6   | 6   | 7   | 7   | 8   | 8   | 8   | 9   | 10  |

Sorting the entries back by i reproduces the tables P_i and A_i given in Appendix A.
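The procedure described above can be condensed into a short routine. The following Python sketch (our own implementation, not the authors' code) is a stack-based variant of the same idea: entries are partitioned into those with tentative weight N p_i ≥ 1 and those below unity, and each deficient entry has its shortfall filled from a currently "large" entry, which is demoted once its own weight drops below one. The whole construction takes O(N) time.

```python
def build_alias_tables(p):
    """Build Walker's tables (P, A) from probabilities p in O(N) time."""
    N = len(p)
    P = [N * x for x in p]                 # tentative values N * p_i
    A = list(range(N))                     # default: alias to itself
    large = [i for i in range(N) if P[i] >= 1.0]   # can donate weight
    small = [i for i in range(N) if P[i] < 1.0]    # have a shortfall
    while small and large:
        s = small.pop()                    # element to be filled
        l = large[-1]                      # current donor
        A[s] = l                           # fill s's shortfall from l
        P[l] -= 1.0 - P[s]
        if P[l] < 1.0:                     # donor depleted: demote it
            large.pop()
            small.append(l)
    return P, A
```

For the 12-entry example of Appendix A this routine yields tables satisfying Eq. (47); the particular alias assignments may differ from those shown in the text, since Eq. (47) does not determine them uniquely.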