An in-depth view of the microscopic dynamics of Ising spin glasses at fixed temperature

Abstract

Using the dedicated computer Janus, we follow the nonequilibrium dynamics of the Ising spin glass in three dimensions over eleven orders of magnitude in time. The use of integral estimators for the coherence and correlation lengths allows us to study dynamic heterogeneities and the presence of a replicon mode, and to obtain safe bounds on the Edwards-Anderson order parameter below the critical temperature. We obtain good agreement with experimental determinations of the temperature-dependent decay exponents for the thermoremanent magnetization. This quantity is observed to scale with the much harder to measure coherence length, a potentially useful result for experimentalists. The exponents for energy relaxation display a linear dependence on temperature and reasonable extrapolations to the critical point. We conclude by examining the time growth of the coherence length, with a comparison of critical and activated dynamics.

Keywords:
Spin glasses, nonequilibrium dynamics, characteristic length scales

1 Introduction

Below their glass temperature, Spin Glasses (1) (SG) are perennially out of equilibrium. The understanding of their sophisticated dynamical behavior is a long-standing challenge for both theoretical and experimental physics.

Aging (2) is a feature of SG dynamics that shows up even in the simplest experimental protocol, the direct quench. In these experiments, the SG is cooled as fast as possible to the working temperature $T$, below the critical one, $T_\mathrm{c}$. It is then let to equilibrate for a waiting time, $t_\mathrm{w}$, its properties being probed at a later time, $t + t_\mathrm{w}$. For instance, one may cool the SG in the presence of an external field, which is switched off at time $t_\mathrm{w}$. The so-called thermoremanent magnetization decays with time but, the larger $t_\mathrm{w}$ is, the slower the decay. In fact, it has been claimed that, if the cooling is fast enough, the thermoremanent magnetization depends upon $t$ and $t_\mathrm{w}$ only through the combination $t/t_\mathrm{w}$, at least for $t \sim t_\mathrm{w}$ and $t_\mathrm{w}$ in the range 50 s to $10^4$ s (3). In other words, the only characteristic time scale is the sample's own age as a glass, $t_\mathrm{w}$ (this behavior is named Full Aging). Note, however, that there is some controversy regarding the natural time variable, which could rather be $t/t_\mathrm{w}^{\mu}$ with $\mu$ slightly less than one (4).
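
In scaling form (a schematic summary of the claims in (3) and (4), written in standard, though not necessarily this paper's, notation), full aging and its sub-aging variant read

$$ M_\mathrm{TRM}(t, t_\mathrm{w}) \simeq \mathcal{M}\!\left( \frac{t}{t_\mathrm{w}^{\mu}} \right)\,, \qquad \mu = 1 \ \text{(full aging)}\,, \qquad \mu \lesssim 1 \ \text{(sub-aging)}\,. $$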

The time evolution is believed to be caused by the growth of coherent spatial domains. Great importance is ascribed to the size of these domains, the coherence length $\xi(t_\mathrm{w})$, which is accessible to experiments through estimates of Zeeman energies (5). The time evolution of $\xi(t_\mathrm{w})$ plays a crucial role in the droplets theory of SG nonequilibrium isothermal dynamics (6). Perhaps unsurprisingly, it also plays a central role in still incipient attempts to rationalize memory and rejuvenation experiments (see (7); (8); (9); (10) and references therein), where the experimentalist probes the glassy state by playing with the working temperature.

Even for the simplest direct quench experiment, there is some controversy regarding the growth law of $\xi(t_\mathrm{w})$: some theories advocate a logarithmic growth (6), while a power law describes numerical simulations (11) or experiments (5) better (a somewhat intermediate scaling has been proposed by the Saclay group (12) and found useful in experimental work (8), see also Sect. 6 below). Nevertheless, two facts are firmly established: (i) the lower $T$ is, the slower the growth of $\xi(t_\mathrm{w})$, and (ii) $\xi$ reaches only of the order of one hundred lattice spacings, even for $T$ close to $T_\mathrm{c}$ and $t_\mathrm{w}$ as large as $10^4$ s (5). Hence, the study of SG in thermal equilibrium seems confined to nanometric samples, or to numerical simulations.

There is clear evidence, both experimental (13) and theoretical (14); (15), for a thermodynamic origin of this sluggish dynamics. A SG phase appears below the critical temperature, $T_\mathrm{c}$. Several theories propose mutually contradictory scenarios for the equilibrium SG phase: the droplets (16), replica symmetry breaking (RSB) (17), and the intermediate Trivial-Non-Trivial (TNT) picture (18). Even if this equilibrium phase is experimentally unreachable (at least on human time scales), we now know (19) that it is nevertheless relevant to the nonequilibrium dynamics probed by experiments.

Droplets expects two equilibrium states related by global spin reversal. The SG order parameter, the spin overlap $q$ (precise definitions are given below in Sect. 2), takes only two values, $\pm q_\mathrm{EA}$. In the RSB scenario an infinite number of pure states influence the dynamics (17); (20); (21), so that all $|q| \le q_\mathrm{EA}$ are reachable. TNT (18) describes the SG phase similarly to an antiferromagnet with random boundary conditions: even if $q$ behaves as for RSB systems, TNT agrees with droplets in the vanishing surface-to-volume ratio of the largest thermally activated spin domains (i.e. the link overlap defined below takes a single value).

Droplets' isothermal aging (6) is that of a disguised ferromagnet. Indeed, superuniversality, the emerging picture of isothermal aging, has been found useful for the study of basically all coarsening systems. In this picture, the growing domains are compact geometrical objects. Even if the surface of these domains might be fractal, their surface-to-volume ratio vanishes as $\xi$ diverges, see Eq. (13) below. Inside them, the spin overlap coherently takes one of its two possible equilibrium values, $\pm q_\mathrm{EA}$. Time dependencies are entirely encoded in the growth law of these domains, since correlation functions (in principle depending on time $t_\mathrm{w}$ and distance $r$) are universal functions of $r/\xi(t_\mathrm{w})$.

We are not aware of any investigation of the dynamical consequences of the TNT picture. Nevertheless, the antiferromagnet analogy suggests that TNT systems will show coarsening behavior.

As for the RSB scenario, equilibrium states with a vanishing order parameter do exist. Hence, the nonequilibrium dynamics starts, and remains forever, with a vanishing order parameter. Furthermore, the replicon, a Goldstone mode analogous to magnons in Heisenberg ferromagnets, is present for all $T < T_\mathrm{c}$ (22). As a consequence, the spin overlap is expected to vanish inside each domain in the limit of a large $\xi$. Furthermore, $q$ is not a privileged observable (overlap equivalence (20)): the link overlap displays equivalent aging behavior.

In order to be quantitative, these theoretical pictures of nonequilibrium dynamics need numerical computations for model systems. Indeed, several investigations have been carried out even for the simplest cooling protocol, the direct quench (9); (10); (11); (23); (24); (25); (26); (27); (28); (29). A major drawback of this approach, however, is the shortness of the reachable times. Indeed, one Monte Carlo Step (MCS) corresponds to roughly $10^{-12}$ s (1). The experimental scale of hours thus corresponds to some $10^{15}$ MCS, while typical nonequilibrium simulations reach merely $10^{-4}$ to $10^{-3}$ s. The problem has been challenging enough to compel physicists to design high-performance computers for SG simulations (30); (31); (32).

The situation has dramatically changed thanks to Janus (32), an FPGA-based computer that allows us to simulate the dynamics of a reasonably large SG system over eleven orders of magnitude in time, from picoseconds to tenths of a second. Thanks to Janus, we have recently performed a study of the nonequilibrium dynamics of the Ising Spin Glass (33). We introduced novel analysis techniques that allow the computation of the coherence length in a model-independent way. This was crucial to obtain evidence for a replicon correlator. Furthermore, we showed how to investigate overlap equivalence and presented evidence for it.

In this work, we shall concentrate on the simplest protocol, the direct quench, for an Ising SG. We present a detailed study of dynamic heterogeneities, an aspect not touched upon in (33) in spite of its relevance (27); (28); (29). We show the first conclusive numerical evidence for a growing correlation length in the nonequilibrium dynamics, and we explore its relationship with the coherence length. Furthermore, we compute the anomalous dimension for the two-time, two-site propagator (see definitions below). Due to their central role, a systematic way of extracting coherence (or correlation) lengths from numerical data is called for, but this issue has scarcely been investigated in the past (see however (10); (11); (23)). This is why we take here the occasion to give full details on our integral estimators (33) (see also (23)).

The layout of the rest of this paper is as follows. In Sect. 2 we define the model as well as the correlation functions and time sectors. We describe our simulations, which have been extended as compared with (33), and discuss the difficult topic of extracting best-fit parameters from extremely correlated data. Sect. 3 is devoted to the integral estimators of the coherence (or correlation) lengths. To the best of our knowledge, the investigation of this technical (but crucial) issue was started in the context of lattice field theories (34). These integral estimators were instrumental in developing modern Finite Size Scaling techniques for equilibrium critical phenomena (35); (36) and, therefore, in establishing the existence of the Spin Glass phase in three dimensions (14); (15); (37). In the context of nonequilibrium dynamics, new aspects (and opportunities) appear. In Sect. 4 we investigate the dynamic heterogeneities. In Sect. 5, the information gathered on length scales is used to analyze time correlation functions (and to extrapolate them to infinite time). We also study the thermoremanent magnetization. The crucial issue of the time growth of the coherence length is considered in Sect. 6. We present our conclusions in Sect. 7.

2 Model, correlation functions, time sectors

2.1 Model

The Edwards-Anderson Hamiltonian is defined in terms of two types of degrees of freedom: dynamical and quenched. The dynamical ones are the Ising spins $\sigma_x = \pm 1$, which are placed on the nodes, $x$, of a cubic lattice of linear size $L$ with periodic boundary conditions. The nondynamical (or quenched) ones are the coupling constants $J_{xy}$, assigned to the lattice links that join pairs of lattice nearest neighbors, $\langle x, y \rangle$. In this work, $J_{xy} = \pm 1$ with $50\%$ probability. Each assignment of the $\{J_{xy}\}$ will be named a sample and, once it is fixed, it will never be varied (1).

The interaction energy for the spins is

$$ \mathcal{H} = - \sum_{\langle x, y \rangle} J_{xy}\, \sigma_x \sigma_y\,. \qquad (1) $$

($\langle x, y \rangle$ denotes lattice nearest neighbors). The choice $|J_{xy}| = 1$ fixes our energy units. We work in the so-called quenched approximation: any quantity (either a thermal mean value or a time-dependent magnitude, see below) is supposed to be computed first for a given assignment of the couplings. Only afterwards is the resulting value averaged over the $\{J_{xy}\}$, an average which we denote by $\overline{(\cdots)}$.

The spins evolve in time with Heat-Bath dynamics (see, e.g., (38)), which belongs to the universality class of physical evolution. The starting spin configuration is taken to be fully disordered, to mimic the experimental direct quench protocol.
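
As an illustration, here is a minimal (and deliberately unoptimized) Python sketch of one Heat-Bath sweep for the model (1). It is not the Janus implementation; the array layout, update order and function names are our own choices:

    import numpy as np

    def heat_bath_sweep(spins, J, beta, rng):
        # spins: (L, L, L) array of +-1; J[d]: +-1 couplings on the links in
        # direction d = 0, 1, 2, so J[d][x, y, z] joins site (x, y, z) with its
        # forward neighbor along d (periodic boundary conditions).
        L = spins.shape[0]
        shifts = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
        for x in range(L):
            for y in range(L):
                for z in range(L):
                    h = 0.0  # local field on site (x, y, z)
                    for d, (dx, dy, dz) in enumerate(shifts):
                        # forward neighbor, through the link stored at this site
                        h += J[d][x, y, z] * spins[(x + dx) % L, (y + dy) % L, (z + dz) % L]
                        # backward neighbor, through the link stored at that site
                        h += (J[d][(x - dx) % L, (y - dy) % L, (z - dz) % L]
                              * spins[(x - dx) % L, (y - dy) % L, (z - dz) % L])
                    # Heat Bath rule: the new spin is +1 with probability
                    # exp(beta*h) / [2 cosh(beta*h)], regardless of its old value
                    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
                    spins[x, y, z] = 1 if rng.random() < p_up else -1

    # Direct quench: fully disordered start, quenched +-1 couplings
    L, T = 8, 0.7
    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(L, L, L))
    J = [rng.choice([-1, 1], size=(L, L, L)) for _ in range(3)]
    heat_bath_sweep(spins, J, 1.0 / T, rng)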

Note that the Hamiltonian (1) has a global $\mathbb{Z}_2$ symmetry (if all spins are simultaneously reversed, the energy is unchanged). This symmetry gets spontaneously broken in three dimensions upon lowering the temperature at the SG transition, at $T_\mathrm{c} = 1.109(10)$ (39).

Finally, let us recall that the average over the coupling constants induces a nondynamical gauge symmetry (40). Let us choose a random sign per site, $\epsilon_x = \pm 1$. Hence, the energy (1) is invariant under the transformation

$$ \sigma_x \longrightarrow \epsilon_x\, \sigma_x\,, \qquad J_{xy} \longrightarrow \epsilon_x\, J_{xy}\, \epsilon_y\,. \qquad (2) $$

Since the gauge-transformed couplings are just as probable as the original ones, the quenched mean value of an arbitrary function of the spins is identical to that of its gauge average, which typically is an uninteresting constant value. Constructing nontrivial gauge-invariant observables is the subject of the next subsection.

2.2 Observables

A standard way of forming operators that are gauge-invariant under (2) is to consider real replicas. These are two statistically independent systems, $\{\sigma^{(1)}_x\}$ and $\{\sigma^{(2)}_x\}$, evolving in time with the very same set of couplings. Their (obviously gauge-invariant) overlap field at time $t$ is

$$ q_x(t) = \sigma^{(1)}_x(t)\, \sigma^{(2)}_x(t)\,. \qquad (3) $$

A slight modification consists in using just one of the real replicas, say $\sigma^{(1)}$, but considering times $t_\mathrm{w}$ and $t + t_\mathrm{w}$:

$$ c_x(t, t_\mathrm{w}) = \sigma^{(1)}_x(t_\mathrm{w})\, \sigma^{(1)}_x(t + t_\mathrm{w})\,. \qquad (4) $$

In many of the quantities defined below using $c_x(t,t_\mathrm{w})$, one may obviously gain statistics by averaging over the two real replicas. We have done so whenever possible, but this will not be explicitly indicated. We consider three types of quantities (a minimal numerical sketch of the main estimators follows the list):

  1. Single-time global quantities:

    • Time-dependent energy density ($N = L^3$ is the number of spins in the lattice):

      $$ e(t) = -\frac{1}{N} \sum_{\langle x, y \rangle} \overline{J_{xy}\, \sigma_x(t)\, \sigma_y(t)}\,. \qquad (5) $$

      Recall that $\langle x, y \rangle$ indicates summation restricted to lattice nearest neighbors.

    • The spin glass susceptibility is defined in terms of the SG order parameter

      $$ q(t) = \frac{1}{N} \sum_x q_x(t)\,. \qquad (6) $$

      Of course, the quenched mean value $\overline{\langle q(t) \rangle}$ vanishes in the nonequilibrium regime, where the system size $L$ is much larger than the coherence length $\xi(t)$. The susceptibility

      $$ \chi_\mathrm{SG}(t) = N\, \overline{\langle q^2(t) \rangle} \qquad (7) $$

      steadily grows with the size of the coherent domains. Note that fluctuation-dissipation relations imply that $\chi_\mathrm{SG}$ is basically the nonlinear magnetic susceptibility.

  2. Two-time global correlation functions:

    • Spin-spin correlations:

      $$ C(t, t_\mathrm{w}) = \frac{1}{N} \sum_x \overline{\langle c_x(t, t_\mathrm{w}) \rangle}\,. \qquad (8) $$

      The function $C(t, t_\mathrm{w})$ carries many meanings:

      1. If the waiting time $t_\mathrm{w}$ is held fixed, and $C$ is studied as $t$ grows, it is just the thermoremanent magnetization. Indeed, because of the symmetry (2), the uniform configuration that would have been enforced by holding the spin glass in a strong external magnetic field can be gauged to the spin configuration found at time $t_\mathrm{w}$ after a random start.

      2. On the other hand, in the pseudoequilibrium regime $t \ll t_\mathrm{w}$, the (real part of the) magnetic susceptibility at frequency $\omega \sim 1/t$ is given by the fluctuation-dissipation formula $\chi \approx [1 - C(t, t_\mathrm{w})]/T$.

      3. Another point we shall be concerned with is the computation of the SG order parameter. It may be defined from the translationally invariant time sector

        $$ C_\infty(t) \equiv \lim_{t_\mathrm{w} \to \infty} C(t, t_\mathrm{w}) \qquad (9) $$

        as

        $$ q_\mathrm{EA} = \lim_{t \to \infty} C_\infty(t)\,. \qquad (10) $$

      The computation of $q_\mathrm{EA}$ is notoriously difficult (26). Note that other authors (27) subtract $q_\mathrm{EA}$ from $C(t,t_\mathrm{w})$, in such a way that the resulting correlation tends to zero for large $t$.

    • The link correlation function

      $$ C_\mathrm{link}(t, t_\mathrm{w}) = \frac{1}{3N} \sum_{\langle x, y \rangle} \overline{\langle c_x(t, t_\mathrm{w})\, c_y(t, t_\mathrm{w}) \rangle} \qquad (11) $$

      carries information on interfaces. Indeed, consider a coherent spin flip of a domain half the system size. This will induce a dramatic change in $C(t,t_\mathrm{w})$. On the other hand, the change in $C_\mathrm{link}(t,t_\mathrm{w})$ will be concentrated on the lattice links that are cut by the surface of the flipped domain. If the geometry of this flipped region is that of a compact object with a vanishing surface-to-volume ratio, $C_\mathrm{link}$ will remain basically unchanged.

  3. Space dependent correlation functions:

    • Single-time correlation function:

      $$ C_4(\mathbf{r}, t) = \frac{1}{N} \sum_x \overline{\langle q_x(t)\, q_{x+\mathbf{r}}(t) \rangle}\,. \qquad (12) $$

      The long-distance decay of $C_4$, with $r = |\mathbf{r}|$, defines the coherence length $\xi(t)$:

      $$ C_4(r, t) \simeq \frac{1}{r^{a}}\, f\!\left(\frac{r}{\xi(t)}\right)\,. \qquad (13) $$

      The exponent $a$ is relevant because, at distances $r \ll \xi(t)$, $C_4$ tends to zero as $1/r^{a}$. For coarsening systems, $a = 0$ because the order parameter does not vanish inside a domain. For the Ising SG in three dimensions the exponent was found to be temperature dependent (33); (41) (see Sect. 3.2 for details). The long-distance damping function $f$ seems to decay faster than exponentially (10); (11). Note as well that, at the critical point, $a$ is related to the anomalous dimension, $a(T_\mathrm{c}) = 1 + \eta$, the latest estimate being $\eta = -0.375(10)$ (39). The physical origin of a nonzero $a$ below $T_\mathrm{c}$ lies in the replicon mode. In fact, it was conjectured in (22) that $a$ remains nonzero for all $T < T_\mathrm{c}$; in (33) we found values compatible with this expectation. Note that the exponent $a$ is discontinuous at $T_\mathrm{c}$ (45).

    • The two-time spatial correlation function (see (27); (28)):

      $$ C(\mathbf{r}; t, t_\mathrm{w}) = \frac{1}{N} \sum_x \overline{\langle c_x(t, t_\mathrm{w})\, c_{x+\mathbf{r}}(t, t_\mathrm{w}) \rangle} \qquad (14) $$

      (one could also subtract $C^2(t,t_\mathrm{w})$, but due to the self-averaging character of $C(\mathbf{r};t,t_\mathrm{w})$ this leads to the same thermodynamic limit). This correlation function is rather natural for the structural-glasses problem, see for instance (42), where an adequate order parameter is unknown.

      There is a simple probabilistic interpretation of $C(\mathbf{r}; t, t_\mathrm{w})$. Let us call a defect a site where $c_x(t,t_\mathrm{w}) = -1$, and let $\rho$ be the density of these defects, so that $C(t,t_\mathrm{w}) = 1 - 2\rho$. The conditional probability of having a defect at site $x + \mathbf{r}$, knowing that there already is a defect at site $x$, is $\rho\, g_\mathrm{d}(r)$, where $g_\mathrm{d}(r)$ is the defects' pair-correlation function. Hence, $C(\mathbf{r}; t, t_\mathrm{w}) - C^2(t, t_\mathrm{w})$ is just $4\rho^2\,[\,g_\mathrm{d}(r) - 1\,]$.

      The long-distance decay of $C(\mathbf{r}; t, t_\mathrm{w})$ defines the correlation length $\zeta(t, t_\mathrm{w})$:

      $$ C(r; t, t_\mathrm{w}) \simeq \frac{1}{r^{b}}\, \tilde f\!\left(\frac{r}{\zeta(t, t_\mathrm{w})}\right)\,. \qquad (15) $$

      Basically nothing is known about the exponent $b$ or the long-distance damping function $\tilde f$. In (27); (28) this decay was fitted to a simple exponentially damped form, but the smallness of the correlation lengths found there does not permit strong claims. In the structural-glasses context (42), one tries to interpret $\zeta$ as a coherence length, such as $\xi$, rather than as a correlation length. As we shall empirically show, this might be very reasonable in the limit $C(t,t_\mathrm{w}) \to 0$.

      In an RSB framework, the relaxation within a single state corresponds to the range $C(t,t_\mathrm{w}) > q_\mathrm{EA}$ (the further decay of $C$ corresponds to the exploration of new states). This regime is quite naturally identified with the condition that the defects do not percolate. In fact, $C = q_\mathrm{EA}$ would yield the (correlated) percolation threshold for defects.

      Sometimes we will find it useful to change variables from $t$ to $C(t,t_\mathrm{w})$. This is always feasible, because $C(t,t_\mathrm{w})$ is a monotonically decreasing function of $t$ at fixed $t_\mathrm{w}$. The accuracy of our numerical data allows this change of variables without difficulty (we have used a cubic spline, since the function was sampled at a selected set of times).

      Finally, note that $C(\mathbf{r}; t, t_\mathrm{w}) - C^2(t, t_\mathrm{w})$ is the difference of two statistically correlated quantities (hence, the statistical error in the difference may be expected to be smaller than that of each of the two terms). This can be adequately taken into account by means of a jackknife procedure (see, e.g., (38)).
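
The numerical sketch announced above: single-sample versions of the main global estimators, written in Python under our own (hypothetical) array conventions. The quenched averages of Eqs. (5)-(11) are taken over samples outside this function:

    import numpy as np

    def global_observables(s1_tw, s2_tw, s1_t):
        # s1_tw, s2_tw: the two real replicas at time t_w; s1_t: replica 1 at
        # time t + t_w. All are (L, L, L) arrays of +-1 (one sample).
        N = s1_tw.size
        q_field = s1_tw * s2_tw                   # overlap field, Eq. (3)
        q = q_field.mean()                        # SG order parameter, Eq. (6)
        chi_sg = N * q ** 2                       # single-sample version of Eq. (7)
        c_field = s1_tw * s1_t                    # two-time field, Eq. (4)
        C = c_field.mean()                        # spin-spin correlation, Eq. (8)
        # link correlation, Eq. (11): average of c_x c_y over nearest neighbors
        C_link = np.mean([(c_field * np.roll(c_field, -1, axis=d)).mean()
                          for d in range(3)])
        return q, chi_sg, C, C_link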

All the quantities defined so far are self-averaging (i.e., their relative errors at a fixed number of samples decrease as $(\xi(t)/L)^{3/2}$), with the notorious exception of $\chi_\mathrm{SG}$. This fact justifies the standard strategy in nonequilibrium studies (both numerical and experimental!) of averaging results over very few samples.

Self-averageness stems from the fact that the computed/measured quantities are averages of local observables taken over the full system (which provides a number of statistically independent summands of the order of $(L/\xi)^3$). The exception, $\chi_\mathrm{SG}$ (7), is actually nonlocal, as it is the integral over the whole system of $C_4(\mathbf{r},t)$. Indeed, the central limit theorem suggests that the probability distribution function of $q(t)$ should tend to a Gaussian when $L/\xi \to \infty$. Hence, the relative variance of $\chi_\mathrm{SG}$ stays of order one in the limit of a large system.
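
The central-limit argument can be made explicit with a one-line computation (a sketch under the Gaussian assumption, not a derivation from this paper): if $q$ is Gaussian with zero mean and variance $\sigma^2$, then $\mathrm{E}[q^4] = 3\sigma^4$ and

$$ \frac{\mathrm{Var}(N q^2)}{[\mathrm{E}(N q^2)]^2} = \frac{\mathrm{E}[q^4] - \mathrm{E}[q^2]^2}{\mathrm{E}[q^2]^2} = \frac{3\sigma^4 - \sigma^4}{\sigma^4} = 2\,, $$

independently of the system size: the relative fluctuations of $\chi_\mathrm{SG}$ never die out.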

2.3 Simulation details

The Janus computer (32) can be programmed to simulate single-spin-flip Heat-Bath dynamics for a very large number of lattice sweeps (our unit of Monte Carlo time), in systems of linear sizes well beyond the $L = 80$ used here.

We have spent the most effort in simulating the dynamics of the model described by Eq. (1) in the direct-quench protocol described in the Introduction, in several runs of about a hundred samples of linear size $L = 80$, up to $10^{11}$ Monte Carlo steps (we recall that a single step corresponds to roughly 1 picosecond of real time). Details of our simulations are given in Table 1. We extend here the analysis of the simulations reported in (33), but additional simulations have also been carried out. Most notably, we simulated new samples of size $L = 80$ at $T = 0.7$, up to $10^{10}$ in Monte Carlo time, which have been useful to improve and test the statistical accuracy in some aspects of our analysis.

We wrote to disk the spin configurations at all times of the form $[2^{i/4}]$, with integer $i$ (the square brackets stand for the integer part). Hence, our $t$ and $t_\mathrm{w}$ are of the form $[2^{i/4}]$. Nevertheless, we computed the spatial correlation functions only for times that are powers of two, due to the increased computational effort.

A final note on the time span of our runs. Much to our surprise, we found in (33) that, even for our very large systems, finite-size effects in the coherence length can be resolved within our statistical accuracy. In this work, we have restricted ourselves to the time window that is not affected by them. The single exception will be the analysis of energy relaxation, Sect. 5.2, where this range is too short. Nevertheless, we have explicitly checked that the energy suffers from smaller finite-size effects than the coherence length.

L     T      N_samples   MC steps
80    0.6    96          $10^{11}$
80    0.8    96          $10^{11}$
80    0.9    32          …
80    1.15   32          …
80    0.7    768 (*)     $10^{10}$
Table 1: Parameters of our simulations ($L$ is the lattice linear size; the last column gives the number of lattice sweeps). The overall wall-clock time needed was less than six weeks. We mark with (*) the simulations performed after completion of (33). Recall that we take the critical temperature from (39), $T_\mathrm{c} = 1.109(10)$. The full analysis of spin configurations was performed offline.

2.4 Fits for extremely correlated data

Computing best-fit parameters and estimating their errors from extremely correlated data sets is a common, and still not satisfactorily solved, difficulty in many numerical studies. For instance, our study of $\xi(t)$ requires considering many more correlated random variables than the number of available samples. The standard approach, computing the covariance matrix and inverting it, fails because this matrix is necessarily singular. In this paper we shall follow an empirical procedure: we consider only the diagonal part of the covariance matrix when minimizing $\chi^2$ in fits. Unless otherwise indicated, we shall always use this diagonal $\chi^2$ in the rest of the paper. Yet, in order to take correlations into account, we perform this procedure for each jackknife block and later on compute error estimates from their fluctuations. As we have run simulations with both 63 (from (33)) and 768 samples at $T = 0.7$, we are in a position to test this method by comparing the results obtained with the full statistics and the ones to be expected for 63 samples (see Sect. 3.2).
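
A minimal Python sketch of this procedure (the function and variable names are ours, and the error model is simplified; the power law is the one used in Sect. 3.2):

    import numpy as np
    from scipy.optimize import curve_fit

    def jackknife_fit(t, samples, model, p0):
        # samples: (N_samples, N_times) array of single-sample measurements.
        # Fits use only the diagonal of the covariance matrix; correlations
        # enter through the fluctuations of the jackknife-block fits.
        n = samples.shape[0]
        mean = samples.mean(axis=0)
        sigma = samples.std(axis=0, ddof=1) / np.sqrt(n)   # diagonal errors
        p_best, _ = curve_fit(model, t, mean, p0=p0, sigma=sigma)
        p_jack = np.empty((n, len(p0)))
        for i in range(n):                                  # leave-one-out blocks
            block = (n * mean - samples[i]) / (n - 1)
            p_jack[i], _ = curve_fit(model, t, block, p0=p_best, sigma=sigma)
        # jackknife formula: errors from the fluctuations among the blocks
        p_err = np.sqrt((n - 1) * np.var(p_jack, axis=0))
        return p_best, p_err

    # Usage, e.g. for the growth law of Sect. 3.2: xi(t) = A * t**(1/z)
    power_law = lambda t, A, zinv: A * t ** zinv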

The main drawback of this approach is that the standard $\chi^2$ test of fit likelihood cannot be applied blindly. Of course, were the exact fitting function known, the average value of the diagonal $\chi^2$ would be one per degree of freedom. Yet, since the obtained fitting function may coherently fluctuate with the numerical data, we shall encounter anomalously low values of the diagonal $\chi^2$. We examine this problem in Sect. 3.2, empirically finding that $\chi^2$ behaves as if there were many fewer degrees of freedom than one would expect.

3 Integral estimators of characteristic length scales

The need to estimate characteristic length scales, such as $\xi(t)$ or $\zeta(t,t_\mathrm{w})$, is a recurrent theme in lattice gauge theory and statistical mechanics. The most straightforward method is to consider a particular functional form for the long-distance damping function in Eqs. (13) or (15). One of the problems with this approach, already identified in the study of equilibrium critical phenomena, is that it is extremely difficult to extract simultaneously from numerical data the length scale and the exponent of the algebraic decay. Note that quite often computing the exponent is as important as extracting the length scale for drawing physical conclusions. The situation worsens if the functional form is only an educated guess, which is precisely our case. Furthermore, numerical data for the correlation function at different lattice sites suffer from dramatic statistical correlations, which complicates fitting procedures.

A different approach, the use of integral estimators, has been known since the 1980s (34), but only in the mid 1990s (see e.g. (35); (36)) was it realized that it provides an enormous simplification. The use of integral estimators for the length scale enables determinations of exponents, such as $a$ in Eq. (13), that are completely independent of the functional form of the long-distance damping. The only place left for systematic errors is in finite-size effects or in scaling corrections (when the considered range of variation of length scales such as $\xi$ is too small). As for the determination of the length scale itself, integral estimators are guaranteed to produce numbers that scale as the inaccessible true $\xi$, provided it is large enough.

The fact that the correlation functions considered here, $C_4(\mathbf{r},t)$ and $C(\mathbf{r};t,t_\mathrm{w})$, are self-averaging in a nonequilibrium context provides an impressive error reduction, which is not accessible to equilibrium studies.

Our chosen example to explain the method will be that of $C_4(r,t)$ and the determination of the coherence length $\xi(t)$ and of the exponent $a$.

3.1 The coherence length

Cooper et al. (34) suggested the second-moment determination of the characteristic length,

$$ \xi^{(2)}(t) = \frac{1}{2 \sin(k_\mathrm{min}/2)} \left[ \frac{\hat C_4(0, t)}{\hat C_4(k_\mathrm{min}, t)} - 1 \right]^{1/2}\,, \qquad (16) $$

where $\hat C_4(\mathbf{k}, t)$ is the Fourier transform of $C_4(\mathbf{r}, t)$, and $\mathbf{k}_\mathrm{min}$ is the minimal nonvanishing wave vector allowed by the boundary conditions ($\mathbf{k}_\mathrm{min} = (2\pi/L, 0, 0)$ or permutations). Notice that $\hat C_4(0, t)$ is just the SG susceptibility $\chi_\mathrm{SG}(t)$. As can be readily seen, in the thermodynamic limit this is equivalent to

$$ \xi^{(2)}(t) = \left[ \frac{\int \mathrm{d}^3 r\; r^2\, C_4(r, t)}{6 \int \mathrm{d}^3 r\; C_4(r, t)} \right]^{1/2}\,. \qquad (17) $$

The denominator in this equation is just the SG susceptibility (7), which, as we said in Sect. 2.2, does not self-average (and neither does the numerator). Because of this, if one were to follow this method, a very large number of samples would be needed.

We would like to have a definition of $\xi$ with better statistical behavior. In order to get it, we start by considering the integrals

$$ I_k(t) \equiv \int_0^{\infty} \mathrm{d}r\; r^k\, C_4(r, t)\,. \qquad (18) $$

As we are going to work in the thermodynamic limit, we are interested in the regime $\xi(t) \ll L$, so we can safely reduce the upper integration limit from $\infty$ to $L/2$.

With this notation, and assuming rotational invariance, the second-moment coherence length is just

$$ \xi^{(2)}(t) = \left[ \frac{I_4(t)}{6\, I_2(t)} \right]^{1/2}\,. \qquad (19) $$

We also recall that in (23) it was proposed to identify $\xi(t)$ with $I_0(t)$, but this would only be appropriate for $a = 0$. For a correlation function following the scaling law (13), one can use a more general definition, because $I_k(t) \propto \xi^{k+1-a}(t)$:

$$ \xi_{k,k+1}(t) \equiv \frac{I_{k+1}(t)}{I_k(t)} \propto \xi(t)\,. \qquad (20) $$

Definitions such as (16) and (20) suffer from systematic errors, because equation (13) is only an asymptotic formula for large $r$. The systematic errors in these definitions can therefore be reduced by considering a large value of $k$ (since the factor $r^k$ suppresses the deviations at short distances). However, there is also the issue of statistical errors to consider. As we can see in Fig. 1, a large power of $r$ pushes the maximum of $r^k C_4(r,t)$ into the large-distance region, where the signal-to-noise ratio of the correlation function is extremely low. Because of this, a compromise in the choice of $k$ is needed. Our preferred option is $\xi_{1,2}(t) = I_2(t)/I_1(t)$.

Figure 1: The spatial autocorrelation of the overlap field at fixed time, for three subcritical temperatures, as computed on our $L = 80$ lattice. On the left panel we show $r\, C_4(r,t)$, recall (18), while on the right one we show $r^2\, C_4(r,t)$ (mind the different scales). While the signal-to-noise ratio of both quantities falls equally rapidly, the problem is less severe for the computation of $I_1$, since the maximum there is not in the noise-dominated region. The curve for $T = 0.7$ is the average of 768 samples, while those of the other two temperatures are computed from 96 samples.

Even though our use of $I_1$ instead of $I_2$ already mitigates the statistical problems in the integration of $C_4(r,t)$, we can still improve the computation. As can be plainly seen in Fig. 1, $r\, C_4(r,t)$ starts having very large fluctuations only at its tail, where the contribution to the integral is minimal. To take advantage of this fact, we use a self-consistent integration cutoff (a method applied before in the study of correlated time series (43)). We only integrate our data for $r^k C_4(r,t)$ up to the cutoff point where this function first becomes less than three times its own statistical error. Of course, while this method provides a great reduction in statistical errors, it does introduce a systematic one. To avoid it, we estimate the small contribution of the tail with a fit to

$$ C_4(r, t) = \frac{A}{r^{a}}\, \exp\!\left[ -\left( \frac{r}{\xi(t)} \right)^{\beta} \right]\,. \qquad (21) $$

Notice that this is just the scaling form (13), using a stretched (or compressed) exponential as our damping function. Of course, while this fit is used to estimate the contribution of the interval beyond the cutoff, we actually perform the fitting in the region below the cutoff, where the signal is still good. This last step is important for large $t$ (see Fig. 2).
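
In code, the self-consistent cutoff amounts to very few lines. A sketch (with our own, hypothetical names; on the unit lattice grid the integrals reduce to sums, and the tail extrapolation with Eq. (21) is omitted for brevity):

    import numpy as np

    def xi_12(r, C4, C4_err):
        # r, C4, C4_err: axis-averaged correlation and its statistical error.
        # Integrate only up to the first point where the signal drops below
        # three times its own error (the self-consistent cutoff).
        noisy = np.nonzero(C4 < 3.0 * C4_err)[0]
        rc = noisy[0] if noisy.size else len(r)
        I1 = np.sum(r[:rc] * C4[:rc])         # I_1, Eq. (18), unit grid spacing
        I2 = np.sum(r[:rc] ** 2 * C4[:rc])    # I_2
        return I2 / I1                        # xi_{1,2}, Eq. (20)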

Figure 2: Left: Result of computing $\xi_{1,2}(t)$ in two different ways for one of our 96-sample runs. In the orange curve (color online) we stop the integration at the cutoff point, where the relative error of $r\, C_4(r,t)$ grows beyond one third. In the blue curve (color online) we estimate the contribution of the tail from that point on, extrapolating with a fit to (21). The difference is small, but with the second method the power-law behavior of $\xi_{1,2}(t)$ lasts longer. Right: Same plot for our 768 samples at $T = 0.7$. With the increased statistics this extrapolation is not as important, and both curves are compatible for the whole simulation. With the 63 samples of (33), the tail contribution is as significant as in the left panel.
Figure 3: The SG susceptibility for our 768 samples at $T = 0.7$, computed both from Eq. (7) and from the integral $I_2$ (assuming rotational invariance, $\chi_\mathrm{SG} = 4\pi I_2$). The main difference between the two determinations is that the second one has been computed with a self-consistent cutoff. As we can see, even though both curves are compatible, the integral one is much more precise. The inset details the upper right corner.

As a consistency check of this method, and as a demonstration of its enhanced precision, we can consider the SG susceptibility (7). This observable, $\chi_\mathrm{SG}(t)$, coincides with $4\pi I_2(t)$ in the presence of rotational invariance. We have plotted both expressions as a function of time in Fig. 3. The only systematic discrepancy between the two is at short times, when the system cannot be considered rotationally invariant (see Sect. 3.3). However, the integral determination is much more precise over the whole span of our simulation.

As a second check, we have plotted in Fig. 4 the integral estimators $\xi_{0,1}(t)$ and $\xi_{1,2}(t)$, together with the traditional second-moment estimate $\xi^{(2)}(t)$ and the result of a fit to (21). As we can see, all determinations are indeed proportional, but the integral estimators are much more precise.

Figure 4: Comparison of our integral estimators $\xi_{0,1}$ and $\xi_{1,2}$, Eq. (20), with the second-moment estimate $\xi^{(2)}$ and the result of a fit to (21). All the curves become parallel at large $t$, but the integral estimators have much smaller errors. All curves are for our 768 samples at $T = 0.7$.
Figure 5: Left: Ratio of the errors in $\xi_{1,2}(t)$ for the simulations at $T = 0.7$ with the 63 samples of (33) and with our 768 new samples (see text for discussion). The extrapolation to include the tails in the integrals is immaterial for this ratio. The horizontal line is the Gaussian expectation, $\sqrt{768/63} \approx 3.5$. Right: Cutoff of the integrals as a function of time for both simulations.
Figure 6: Difference between the coherence length $\xi_{1,2}(t)$ computed with the 63 samples of (33) and with the 768 new samples (the errors are the quadratic sum of those for each simulation). Both curves are compatible over the whole time range. Mind the dramatic statistical correlation in the sign of this difference.

Finally, we address the issue of our error estimates by comparing our data at $T = 0.7$ for the 63 samples of (33) with our new simulations with 768 samples. We have explicitly checked that the errors in $\xi_{1,2}(t)$, computed with the jackknife method, scale as the inverse square root of the number of samples, within errors (for Gaussian-distributed data, the relative statistical error in the error estimate itself is $1/\sqrt{2(N_\mathrm{samples}-1)}$). The fact that this basic expectation is verified demonstrates that large deviations, which would be missed with a small number of samples, do not appear. On the other hand, the behavior of statistical errors in integrals with a dynamically fixed cutoff is not that simple (43). In our case, the ratio of the errors in $\xi_{1,2}$ (Fig. 5, left) stays somewhat below the Gaussian expectation. We have checked that a similar effect arises in the computation of $\chi_\mathrm{SG}$, and that the effect of the tails is immaterial. In fact, this deviation is entirely due to the difference in the dynamical cutoffs of the two simulations, Fig. 5, right. Whenever the cutoffs coincide, the error ratio is in the expected region around $\sqrt{768/63}$. The overall consistency of our error determinations is demonstrated by Fig. 6.

3.2 The algebraic prefactor

One of our main goals is to provide a precise estimate of the exponent $a$ for the algebraic part of $C_4(r,t)$, see Eq. (13). Equilibrium methods (36) are not well suited to a nonequilibrium study in the thermodynamic limit. Instead, we introduce here the method used, but not explained, in (33).

The starting point is the realization that $I_1(t) \propto \xi^{2-a}(t)$, which indicates that $a$ can be obtained from a power-law fit of $I_1$ as a function of the coherence length. Furthermore, the large statistical correlation between $I_1(t)$ and $\xi_{1,2}(t)$ can be used to reduce the statistical errors in $a$. However, such a fit would be quite problematic, as we would have errors on both coordinates (the correlation of the data already poses a nontrivial problem with errors in just one coordinate, see Sect. 2.4). Instead, we fit separately $\xi_{1,2}(t)$ and $I_1(t)$ to power laws in the waiting time. This way, if

$$ \xi_{1,2}(t) = A\, t^{1/z}\,, \qquad I_1(t) = B\, t^{x}\,, \qquad (22) $$

we have $a = 2 - xz$. This relation holds for each jackknife block, which lets us take full advantage of the correlations.
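
The starting relation $I_1 \propto \xi^{2-a}$ follows from a one-line substitution of the scaling law (13) (a sketch that neglects short-distance corrections):

$$ I_1(t) = \int_0^{\infty} \mathrm{d}r\; r\, C_4(r, t) \simeq \int_0^{\infty} \mathrm{d}r\; r^{1-a}\, f\!\left(\frac{r}{\xi}\right) = \xi^{2-a} \int_0^{\infty} \mathrm{d}u\; u^{1-a} f(u)\,, $$

where the last integral is a time-independent constant.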

Following this method, we obtain the results in Table 2. In (33) (first rows of Table 2) we quoted the results for a fixed fitting range in $\xi_{1,2}(t)$, which is perfectly adequate for $T = 0.6$ and $T = 0.8$. As we can see, the $\chi^2$/d.o.f. appears to be a bit too large for $T = 0.7$, but if we narrow the fitting range it becomes reasonable and $a$ does not change.

As this is a very important quantity, in this work we have increased the number of samples at $T = 0.7$ by a factor of twelve with respect to (33) (from 63 to 768 samples, with the extra simulations stopping at $10^{10}$ Monte Carlo steps, rather than $10^{11}$). This not only allows us to provide a better estimate of $a$, but we are now also able to check the soundness of the statistical procedure.

The first difficulty is that, with the corresponding reduction in statistical errors, the original fitting window no longer provides reasonable values of our diagonal $\chi^2$ estimator. Instead, we have pushed the lower limit of the window to larger $\xi_{1,2}$ (see Table 2 for details). The new value of $a$ at $T = 0.7$ is

$$ a = 0.397(12)\,. \qquad (23) $$
T      N_samples   z           a           χ²/d.o.f. ($\xi_{1,2}$)   χ²/d.o.f. ($I_1$)
0.6    96          14.06(25)   0.359(13)   41.7/82                   49.0/82
0.7    63          11.84(22)   0.355(15)   82.7/81                   131/81
0.7    63          12.03(27)   0.355(17)   52.7/71                   75.5/71
0.8    96          9.42(15)    0.442(11)   17.1/63                   12.2/63
1.15   32          6.86(16)    0.585(12)   18.7/46                   26.1/46
0.7    768         11.45(10)   0.395(8)    86.9/76                   269/76
0.7    768         11.56(13)   0.397(10)   46.6/66                   101/66
0.7    768         11.64(15)   0.397(12)   40.1/58                   60.4/58
0.7    768         11.75(20)   0.394(14)   29.6/50                   35.8/50
Table 2: Values of the dynamic exponent $z$ and of the algebraic-prefactor exponent $a$ for several temperatures (successive rows at equal $T$ and $N_\mathrm{samples}$ correspond to increasingly restrictive fitting windows in $\xi_{1,2}$). The fitting range that worked for the smaller number of samples we had in (33) does not give good fits for $I_1$ with our enlarged statistics at $T = 0.7$ (the fits for the coherence length itself are still good). Nevertheless, if we restrict the window to get reasonable values of χ²/d.o.f., we see that the estimate of $a$ does not change.
Figure 7: Left: Probability density function of the estimate of the exponent $a$, Eq. (13), as obtained from sets of 63 samples. The dots with horizontal error bars are, from top to bottom: our best estimate with 768 samples, the value with the 63 samples of (33), and the mean and standard deviation of the distribution. The continuous line is a Gaussian distribution with the same mean and variance as the histogram. Center: As in the left panel, for the jackknife errors of $a$. The vertical line marks the standard deviation of the distribution of $a$ estimates. Right: Histogram of the χ²/d.o.f. parameter for the fit. In all three panels the fitting range to obtain $a$ was held fixed.

In line with the analysis of Fig. 5, the error has not decreased by the full factor $\sqrt{768/63}$ with the increase in statistics. One could say that the dynamic cutoff procedure has traded statistical uncertainty for a reduction in systematic errors. Another contributing factor to the large statistical error is the raising of the minimal coherence length included in the fit.

To understand whether the discrepancy between the new estimate of $a$ and the one in (33) is due to a systematic effect or to a large fluctuation, we can use a Monte Carlo method. The probability distribution function of the estimates of $a$, as computed with 63 samples, can be obtained easily from our set of 768 samples. One randomly picks sets of 63 different samples and determines $a$ and its jackknife error for each of these sets (mind that there are $\binom{768}{63}$ possible combinations). We have done this 10 000 times and computed normalized histograms of both quantities (Fig. 7). Clearly enough, the estimate in (33) was a large fluctuation, though not an unbelievable one. On the other hand, we see that the jackknife method tends to slightly underestimate the error on $a$ for 63 samples (Fig. 7, center). Note as well that there seems to be a small bias (smaller than the error) in the estimate of $a$ with only 63 samples (Fig. 7, left; compare the histogram with the uppermost horizontal point). There are two possible reasons for this. One is that $a$ is obtained from the raw data through a nonlinear operation. The other is that the larger the number of samples, the smaller the cutoff effects in the computation of the $I_k$.

It is amusing to compute as well the probability density of the χ²/d.o.f. for fits with 63 samples. We show in Fig. 7, right, that the χ²/d.o.f. for the fit of the coherence length can be much larger than what one would naively expect for a fit with that many degrees of freedom.

3.3 Isotropy of $C_4(\mathbf{r}, t)$

Figure 8: Level curves of $C_4(\mathbf{r}, t)$ at two fixed values of the correlation (dashed and solid lines; dotted lines are circles, for visual reference). We have restricted ourselves to a coordinate plane of the lattice, for clarity. The innermost curve corresponds in both cases to the shortest time, and the succeeding ones correspond to geometrically growing times. As we can see, the deviations from isotropy are mainly due to lattice discretization, even though there is also a small dependence on time for curves of similar radii. The errors are smaller than the thickness of the lines (the interpolation to draw these continuous curves from our lattice data was performed with Mathematica TM).

In the previous section we neglected the issue of the isotropy of $C_4(\mathbf{r}, t)$. At all times we worked with the radial function $C_4(r,t)$, obtained by averaging the correlation at distance $r$ along the three coordinate axes. In doing this, we ignored most of the points $\mathbf{r}$ with $|\mathbf{r}| = r$. The main motivation for doing so was avoiding the computation of the whole correlation function, a task which in a naive implementation requires $O(N^2)$ operations. Of course, due to the Wiener-Khinchin theorem, we can reduce this computation to the evaluation of two Fourier transforms which, using an implementation of the FFT algorithm (44), is an $O(N \log N)$ task.
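
A Python sketch of this FFT shortcut (our own minimal implementation, not the analysis code actually used):

    import numpy as np

    def c4_full(q_field):
        # q_field: (L, L, L) overlap field of +-1 entries, one sample.
        # Wiener-Khinchin: the inverse FFT of |FFT|^2 is the (periodic)
        # autocorrelation, C4(r) = (1/N) sum_x q_x q_{x+r}, in O(N log N).
        N = q_field.size
        q_hat = np.fft.fftn(q_field)
        return np.fft.ifftn(np.abs(q_hat) ** 2).real / N

    def c4_axes(c4):
        # Axis-averaged radial correlation, as used in the previous section.
        return (c4[:, 0, 0] + c4[0, :, 0] + c4[0, 0, :]) / 3.0

    # Usage with a disordered overlap field (two independent replicas):
    L = 32
    rng = np.random.default_rng(1)
    q_field = (rng.choice([-1, 1], size=(L, L, L))
               * rng.choice([-1, 1], size=(L, L, L)))
    C4 = c4_full(q_field)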

We shall examine in this section whether the complete correlation functions are isotropic, and whether we can take advantage of them to reduce the errors in our determination of the coherence length. The first question is answered by Fig. 8, where we compute the level curves of $C_4(\mathbf{r},t)$ for several values of the correlation and of $t$. As we can see, isotropy is recovered at quite small distances (remember that we are only concerned with distances of the order of the coherence length).

In order to use our integral method for a three-dimensional $C_4(\mathbf{r},t)$, we must first average it over spherical shells. We do this by defining

$$ C_4^{\mathrm{sph}}(r, t) = \frac{\sum_{\mathbf{r}'} C_4(\mathbf{r}', t)\; \delta\big(r, [\,|\mathbf{r}'|\,]\big)}{\sum_{\mathbf{r}'} \delta\big(r, [\,|\mathbf{r}'|\,]\big)}\,, \qquad (24) $$

where the square brackets again denote the integer part and $\delta$ is a Kronecker delta, so that the division by the number of points is what averages over the spherical shell. Now we can use $C_4^{\mathrm{sph}}(r,t)$ in the same way we used $C_4(r,t)$ in the previous section. The resulting coherence length would be expected to coincide with $\xi_{1,2}(t)$ in the large-$\xi$ limit, but to have much smaller errors, due to the large increase in statistics. As we can see from Fig. 9, however, the correlation among the points is so great that the gain in precision is insignificant.
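
The shell average itself is cheap once the full $C_4(\mathbf{r},t)$ is available (a sketch; shells are indexed by the integer part of $|\mathbf{r}|$, as in Eq. (24)):

    import numpy as np

    def spherical_average(c4):
        # c4: full (L, L, L) correlation. Returns its average over shells
        # of integer radius, dividing by the number of points per shell.
        L = c4.shape[0]
        g = np.arange(L)
        g = np.minimum(g, L - g)               # periodic minimum-image distance
        r2 = (g[:, None, None] ** 2 + g[None, :, None] ** 2
              + g[None, None, :] ** 2)
        shell = np.sqrt(r2).astype(int).ravel()
        sums = np.bincount(shell, weights=c4.ravel())
        counts = np.bincount(shell)
        return sums / np.maximum(counts, 1)    # guard against empty shells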

We can conclude from this section that the usual approximation of considering $C_4$ isotropic is a well-founded one, and that it is safe, and statistically almost costless, to restrict ourselves to correlations along the axes, as we did in (33) and in the previous section. Nevertheless, the computation of the whole $C_4(\mathbf{r},t)$ could be rewarding if one were to study estimators $\xi_{k,k+1}$ with larger $k$, which weight the noisy long-distance region more heavily.

Figure 9: Left: Comparison of the spherically averaged $r\, C_4^{\mathrm{sph}}(r,t)$ with the axis-averaged $r\, C_4(r,t)$. The first quantity is obtained by averaging over spherical shells, while the second one considers only correlations along the axes. The behavior of the former is better at the tails, but this does not imply any gain in practice, as both functions are equally well behaved up to the cutoff point. Right: The coherence length computed with the whole correlation function and with correlations along the axes. Both estimates coincide for large times.

4 Characteristic length scales for dynamical heterogeneities

Figure 10: Left: The two-time spatial correlation function $C(r; t, t_\mathrm{w})$ (see Eq. (14)) as a function of $r$, for a large value of $t_\mathrm{w}$ and several values of $t$. Right: $C(r; t, t_\mathrm{w})$ as a function of $C(t, t_\mathrm{w})$ for several values of $r$, fixing $t_\mathrm{w}$. Both figures are for our 63-sample simulation at $T = 0.7$.
Figure 11: Left: Correlation length $\zeta(t, t_\mathrm{w})$ as a function of $t_\mathrm{w}$ for several values of $C(t,t_\mathrm{w})$ at $T = 0.7$ (63 samples). Right: As the left figure, for $T = 0.8$ (96 samples).

The concept of heterogeneous dynamics has been recently borrowed from structural glasses to describe non-local aging in Spin Glasses (27); (28). A coarse-grained spin correlation function may be defined, for which the microscopic average is not performed over the whole volume but over cells of linear size $\ell$. Non-uniform aging of the system results in spatial fluctuations of the coarse-grained correlation functions, which can be used to define a two-time-dependent correlation length. In fact, at fixed system size and temperature, one may study the two-time-dependent distribution of the coarse-grained correlation function values or, equivalently, that distribution as a function of $\ell$ and of the global spin-spin correlation function $C(t,t_\mathrm{w})$, Eq. (8): if the coarse-graining size is larger than the correlation length, then one should observe Gaussian statistics, while strong deviations from a Gaussian distribution are present in the case of small $\ell$ (28). Such dependence of the statistics on the coarse-graining size defines a crossover length, interpreted as an aging (two-time) correlation length.

It has been observed (28) that such an aging correlation length may be obtained from the spatial decay of the two-time, two-site correlation function $C(r; t, t_\mathrm{w})$, Eq. (14); still, the authors of reference (28) could not measure values greater than two lattice spacings. In this section we present data from our simulations on Janus showing correlation lengths for dynamical heterogeneities of up to order ten lattice units. We show $C(r; t, t_\mathrm{w})$ at fixed $t_\mathrm{w}$ and for several values of $t$ at temperature $T = 0.7$ in the left panel of Fig. 10 (as for $C_4$, we denote by $C(r; t, t_\mathrm{w})$ the correlations along the axes). As one can see in this figure, at large times correlations grow up to several lattice spacings.

In what follows, it is convenient to eliminate the time dependence in favor of the global spin-spin correlation function $C(t,t_\mathrm{w})$. For given $t_\mathrm{w}$ and $C$ values, we obtain $t$ easily, because the monotonic time dependence of the (discretely sampled) correlation function can be smoothly interpolated by means of cubic splines. Then, for each value of $r$, we must interpolate $C(r; t, t_\mathrm{w})$ as a function of $C(t,t_\mathrm{w})$. As one can see in the right panel of Fig. 10, there is a sharp change in the $C$-derivative of $C(r; t, t_\mathrm{w})$, so we had to resort to a linear interpolation in order to avoid the strong oscillations that a spline would have suffered from. Once $C(r; t, t_\mathrm{w})$ has been interpolated at all $r$ values, the methods of Sect. 3 allow us to estimate the correlation length $\zeta(t,t_\mathrm{w})$.
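
The change of variables is simple to implement (a Python sketch with hypothetical array names; note the use of a spline for the smooth $C(t,t_\mathrm{w})$ and of plain linear interpolation, np.interp, for the less smooth $C(r; t, t_\mathrm{w})$):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def t_at_fixed_C(times, C_global, C_target):
        # C_global decreases monotonically with times at fixed t_w, so its
        # inverse exists; a spline of log(t) versus C smooths the sampling.
        logt = np.log(np.asarray(times, dtype=float))
        C = np.asarray(C_global, dtype=float)
        spline = CubicSpline(C[::-1], logt[::-1])   # needs increasing abscissas
        return np.exp(spline(C_target))

    def Cr_at_fixed_C(C_global, Cr_at_r, C_target):
        # Linear interpolation of C(r; t, t_w) in the variable C(t, t_w),
        # avoiding spline oscillations near the sharp change of derivative.
        return np.interp(C_target, C_global[::-1], Cr_at_r[::-1])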

In the large-correlation sector, i.e. $C(t,t_\mathrm{w}) > q_\mathrm{EA}$, and for large $t_\mathrm{w}$, the correlation length $\zeta(t,t_\mathrm{w})$ approaches a $t_\mathrm{w}$-independent value. On the other hand, one expects that for small values of $C$ (that is, $C < q_\mathrm{EA}$) and large $t_\mathrm{w}$, $\zeta(t,t_\mathrm{w})$ diverges as the coherence length $\xi(t_\mathrm{w})$ defined in Eq. (13) (29). Such behavior is represented in Fig. 11, in which we plot $\zeta(t,t_\mathrm{w})$ as a function of $t_\mathrm{w}$, at temperatures $T = 0.7$ and $T = 0.8$, for some values of $C$. It is also interesting to consider the ratio $\zeta(t,t_\mathrm{w})/\xi(t_\mathrm{w})$ and study how its behavior as a function of $C$ changes with $t_\mathrm{w}$. As pointed out above, for small values of $C$ and large $t_\mathrm{w}$ we expect $\zeta \sim \xi$. Moreover, since the coherence length diverges for large $t_\mathrm{w}$, the ratio should vanish at large waiting times when $C > q_\mathrm{EA}$. In Fig. 12 we show $\zeta/\xi$ at temperatures $T = 0.6$, $T = 0.7$ and $T = 0.8$. An interesting feature is the crossover between the two sectors $C < q_\mathrm{EA}$ and $C > q_\mathrm{EA}$. At the highest of these temperatures, $q_\mathrm{EA}$ is too small (see Table 4 in Sect. 5.1 below) to let us observe the small-$C$ behavior described above. At the lowest one, on the other side, the data for $\zeta/\xi$ quickly approach a constant value for small correlations. It seems that the larger $t_\mathrm{w}$, the faster the convergence to a constant, determining in this way a crossing point. However, these data suffer from large fluctuations that do not allow us to make any strong speculation on the crossings among curves at large values of $t_\mathrm{w}$. Indeed, we know that $\zeta/\xi$ should vanish for large $t_\mathrm{w}$ in the sector $C > q_\mathrm{EA}$, but, to our knowledge, there is no reason for $\zeta/\xi$, as a function of $C$, to converge faster to a constant when $t_\mathrm{w}$ increases. In the bottom panels of Fig. 12 we show the same data at $T = 0.7$, averaged over both 63 (left) and 768 samples (right). Even if the simulations with the larger sample statistics are not as long as those of the first 63 samples, the smoothing of the curves does not improve the definition of the crossing. In addition, we have not been able to find any clear scaling behavior of the crossing points with the waiting time.

Figure 12: Behavior of the ratio $\zeta(t,t_\mathrm{w})/\xi(t_\mathrm{w})$ as a function of $C(t,t_\mathrm{w})$ for several values of $t_\mathrm{w}$. Top left: our lowest temperature, $T = 0.6$. Top right: $T = 0.8$. Bottom left: $T = 0.7$, averaged over 63 samples. Bottom right: same plot as bottom left, averaging over 768 samples.

Since at large times and small correlations $\zeta(t,t_\mathrm{w})$ and $\xi(t_\mathrm{w})$ differ only by a constant factor (which is, moreover, close to one), one may expect the behavior of the two-time, two-site correlators (Eq. (14)) to be analogous to that of the four-point correlation function (Eq. (12)). We can then probe the long-distance scaling of $C(r; t, t_\mathrm{w})$ with the same analysis performed in reference (33) (and in Sect. 3.2) for $C_4(r,t)$, Eq. (13). In particular, we can extract the exponent $b$ of the algebraic prefactor in Eq. (15) and the corresponding dynamic exponent $z_\zeta$, assuming a power-law growth for the correlation length:

$$ \zeta(t, t_\mathrm{w}) \propto t_\mathrm{w}^{1/z_\zeta(T)}\,. \qquad (25) $$

We fix $C(t,t_\mathrm{w})$ to a $T$-dependent value, small enough to be below $q_\mathrm{EA}$ at all considered temperatures, $T = 0.6$, $0.7$ and $0.8$ (the chosen values of $C$ should be compared with the $q_\mathrm{EA}$ estimates given in Sect. 5.1), and large enough to have the necessary number of points to obtain fair fits. We also had to impose a lower cutoff in $r$ to avoid the effects of lattice discretization. We did not impose any upper limit on $\zeta$, whose values hardly reach ten lattice spacings at the highest temperature (even less at colder temperatures), and we expect that, in the time sector considered, the constraint imposed on $\xi$ in reference (33) and in Sect. 3.2 would work here as well.

Table 3: Values of the dynamic exponent $z_\zeta$ and of the exponent $b$ of the algebraic prefactor, Eq. (15), for three subcritical temperatures, together with the χ²/d.o.f. of the two power-law fits. We limited the fitting window by constraining the fits to large $\zeta$. Data at $T = 0.7$ are averaged over 768 samples.

We summarize the obtained values of $z_\zeta$ and $b$ in Table 3, reporting results for the smallest attainable $C$ (that is, still permitting reasonable fits) at all temperatures, as well as for larger values of $C$, for which the number of points allows for quite good fits. We see that $z_\zeta$ and $b$ are slightly different from the values of the exponents $z$ and $a$ of the four-point correlators presented in Sect. 3.2. Unfortunately, our data do not permit a more precise determination of these exponents in the deep $C < q_\mathrm{EA}$ sector. In this respect, the determination of $q_\mathrm{EA}$ is a crucial issue (see Sect. 5.1).

5 The time correlation functions

5.1 The stationary part of $C(t, t_\mathrm{w})$

The naive computation of the stationary part of $C(t,t_\mathrm{w})$, Eq. (9), suffers from an essential problem: how do we know when $t_\mathrm{w}$ is large enough? The consideration of characteristic length scales may simplify this problem, as the limit $t_\mathrm{w} \to \infty$ is equivalent to $\xi(t_\mathrm{w}) \to \infty$. However, the approach to this limit will be acutely $t$-dependent. Hence, it is better to consider a dimensionless variable,

$$ x \equiv \frac{\zeta(t, t_\mathrm{w})}{\xi(t_\mathrm{w})}\,. \qquad (26) $$

As we saw in Sect. 4, the correlation length $\zeta(t,t_\mathrm{w})$ quickly reaches a $t_\mathrm{w}$-independent limit, so $x$ is essentially $t$ measured in its natural units for each $t_\mathrm{w}$. In fact, see Fig. 13 for our data at $T = 0.7$, the plot of $C(t,t_\mathrm{w})$ against $x$ is quite smooth for small $x$. Furthermore, the curves for different $t_\mathrm{w}$ become parallel as $t_\mathrm{w}$ grows, which suggests the existence of a smooth scaling function. We have fitted the curves for each $t_\mathrm{w}$ in a fixed range of $x$ to a quadratic polynomial (see inset to Fig. 13),

$$ C(t, t_\mathrm{w}) = c_0 + c_1\, x + c_2\, x^2\,. \qquad (27) $$

Unlike our treatment of the exponent $a$ (Sect. 3.2), here we cannot skirt the issue of errors in both coordinates. As the effect of both errors is similar, we have performed a least squares fit and an a posteriori