# Critical behavior of entropy production and learning rate: Ising model with an oscillating field

## Abstract

We study the critical behavior of the entropy production of the Ising model subject to a magnetic field that oscillates in time. The mean-field model displays a phase transition that can be either first or second-order, depending on the amplitude of the field and on the frequency of oscillation. Within this approximation the entropy production rate is shown to have a discontinuity when the transition is first-order and to be continuous, with a jump in its first derivative, if the transition is second-order. In two dimensions, we find with numerical simulations that the critical behavior of the entropy production rate is the same, independent of the frequency and amplitude of the field. Its first derivative has a logarithmic divergence at the critical point. This result is in agreement with the lack of a first-order phase transition in two dimensions. We analyze a model with a field that changes at stochastic time-intervals between two values. This model allows for an information-theoretic interpretation, with the system as a sensor that follows the external field. We calculate numerically a lower bound on the learning rate, which quantifies how much information the system obtains about the field. Its first derivative with respect to temperature is found to have a jump at the critical point.

## 1 Introduction

Nonequilibrium systems can display phase transitions, with a number of well understood universality classes [1, 2, 3]. Some features not observed in equilibrium systems can occur if detailed balance is not fulfilled, e.g., correlations with a power law decay far from criticality and phase transitions with short range interactions in one dimension. Examples of nonequilibrium phase transitions include boundary induced phase transitions [4, 5], phase transitions into absorbing states [1], real space condensation [6], and a transition to collective motion in active systems [7, 8].

The production of entropy is a signature of systems out of equilibrium. This entropy production can be defined for many nonequilibrium models, which can be related to quite different phenomena. Such a feature makes the investigation of the critical behavior of entropy production appealing. It is intriguing to wonder whether nonequilibrium phase transitions can be classified with respect to the critical behavior of the entropy production.

The critical behavior of the entropy production rate has been analyzed in the majority vote model [9], in a two-dimensional Ising model in contact with two heat baths [10, 11], and in a model for nonequilibrium wetting [12]. For the first two models, the first derivative of the entropy production rate with respect to the control parameter was found to diverge at the critical point. For the third model the first derivative of the entropy production rate was found to be discontinuous at criticality. This discontinuity in the first derivative was also found within a mean-field approximation of the first model [9]. Furthermore, the entropy production rate of a model for population dynamics with a non-equilibrium phase transition has also been analyzed in [13].

In this paper we investigate the critical behavior of the entropy production rate in an Ising model driven by a magnetic field that oscillates deterministically in time. This model displays a phase transition characterized by an order parameter given by the magnetization integrated over a period [14, 15]. For the mean-field model, the phase transition can be either first or second-order, depending on the field amplitude and frequency. For the two dimensional model, the same kind of phase diagram with first and second-order phase transitions has been observed [15]. However, a more careful numerical analysis indicates that for the two dimensional model the transition is always second-order [16], with critical exponents compatible with the Ising universality class [17, 18, 19]. In contrast to the models listed in the previous paragraph, this model is driven by an external protocol and, therefore, reaches a periodic steady state [20, 21, 22, 23].

We analyze both a mean-field Ising model where all spins interact with each other and a two-dimensional Ising model with nearest-neighbor interactions. Within the mean-field approximation, the entropy production rate is found to have a kink at the critical point for a second-order phase transition and is found to be discontinuous at criticality for a first-order phase transition. For the two-dimensional model the first derivative of the entropy production rate is found to diverge at the critical point, independent of the frequency and amplitude of the field, which is in agreement with the absence of a first-order phase transition in two dimensions.

An Ising model with a field that changes at stochastic time-intervals between two values is also considered. The critical behavior of the entropy production rate does not change in relation to the one observed in the model with a deterministic field. However, this model allows for a further perspective related to the relation between information and thermodynamics [24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. The field can be seen as a signal and the system as a sensor that follows this signal. An information theoretic observable that quantifies the rate at which the system obtains information about the field is the learning rate [32, 33]. This learning rate appears in a second law inequality for bipartite systems [29, 30], being bounded by the thermodynamic entropy production rate that quantifies heat dissipation [32].

We study the critical behavior of the learning rate. Specifically, we introduce a lower bound on the learning rate that can be calculated in numerical simulations. Its first derivative with respect to the temperature is found to be discontinuous at the critical point.

The paper is organized in the following way. In Sec. 2 we calculate the entropy production rate for the mean-field model. The two-dimensional model with a deterministic field is analyzed in Sec. 3. In Sec. 4 we introduce the model with a field that changes at stochastic time-intervals and investigate the critical behavior of the learning rate. We conclude in Sec. 5.

## 2 Mean-field approximation

### 2.1 Model definition and phase diagram

We consider a Curie-Weiss mean-field Ising model of $L^2$ spins subjected to a time-dependent external field of strength $h(t)$. The time-dependent Hamiltonian is given by

$$H(t)\equiv-\frac{J}{L^2}\sum_{ij}s_is_j-h(t)\sum_i s_i,\tag{1}$$

where the first term on the right hand side involves a sum over all pairs of spins. The size-dependent pre-factor $1/L^2$ in this first term makes the Hamiltonian extensive. The external field varies periodically with time as

$$h(t)\equiv h_0\cos(\omega t),\tag{2}$$

i.e., the field oscillates with an amplitude $h_0$ and a frequency $\omega$.

We consider a model with Markovian dynamics. The transition rate from configuration $s$ to configuration $s'$ is denoted by $w_{ss'}(t)$. The probability of a state $s$ at time $t$ is denoted $P_s(t)$. The average magnetization at time $t$ is given by

$$m(t)\equiv\sum_s s_i P_s(t),\tag{3}$$

where the above definition is independent of $i$ due to homogeneity.

Even though the model does not reach an equilibrium state due to the periodic variation of the external protocol, the transition rates at a fixed time fulfill the detailed balance condition

$$\frac{w_i(s_i)}{w_i(-s_i)}=e^{-2h_i(t)s_i/T},\tag{4}$$

where $w_i(s_i)$ is the rate at which spin $i$ flips from $s_i$ to $-s_i$, $T$ is the temperature, the Boltzmann constant is set to $k_B=1$ throughout, and

$$h_i(t)\equiv\frac{J}{L^2}\sum_j s_j+h(t).\tag{5}$$

Assuming Glauber transition rates, i.e.,

$$w_i(s_i)=\frac{1}{2\tau}\left[1-s_i\tanh(h_i/T)\right],\tag{6}$$

where $\tau$ sets the time-scale for a spin flip, it is possible to show that the magnetization follows the equation [14]

$$\tau\frac{dm}{dt}=-m(t)+\left\langle\tanh[h_i(t)/T]\right\rangle.\tag{7}$$

In the thermodynamic limit this equation is simplified by the relation $\langle\tanh[h_i(t)/T]\rangle\to\tanh\{[Jm(t)+h(t)]/T\}$. Hence, in this limit, Eq. (7) becomes

$$\tau\frac{dm}{dt}=-m(t)+\tanh[Jm(t)/T+h(t)/T].\tag{8}$$

The solution of Eq. (8) reaches a periodic steady state, i.e., $m(t)=m(t+2\pi/\omega)$, that is independent of the initial condition.

This model has a phase transition at a critical temperature $T_c$ that depends on the amplitude $h_0$ of the field and the frequency $\omega$. The order parameter of this transition is the magnetization integrated over a period in the periodic steady state

$$M\equiv\frac{\omega}{2\pi}\int_0^{2\pi/\omega}m(t)\,dt.\tag{9}$$

Below (above) the critical temperature the integrated magnetization is $M\neq0$ ($M=0$). A phase diagram obtained with numerical integration of Eq. (8) is shown in Fig. 1. This phase diagram has been obtained in [14] and is shown here for illustrative purposes. Depending on $h_0$ and $\omega$ the phase transition can be first-order, with a discontinuity in $M$, or second-order.
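The periodic steady state and the order parameter $M$ can be obtained from a straightforward numerical integration of Eq. (8). The sketch below uses an explicit Euler scheme; all parameter values are illustrative and are not those used to produce Fig. 1.

```python
from math import cos, tanh, pi

def period_averaged_magnetization(T, h0=0.2, omega=2.0 * pi, J=1.0,
                                  tau=1.0, n_transient=100, n_steps=2000):
    """Order parameter M of Eq. (9): integrate the mean-field equation
    of motion, Eq. (8), with an explicit Euler scheme, discard transient
    periods, and average m(t) over one period of the field."""
    period = 2.0 * pi / omega
    dt = period / n_steps
    m, t = 1.0, 0.0  # start on the ordered branch
    # relax to the periodic steady state m(t) = m(t + 2*pi/omega)
    for _ in range(n_transient * n_steps):
        m += dt / tau * (-m + tanh((J * m + h0 * cos(omega * t)) / T))
        t += dt
    # integrate m(t) over one period, Eq. (9)
    M = 0.0
    for _ in range(n_steps):
        m += dt / tau * (-m + tanh((J * m + h0 * cos(omega * t)) / T))
        t += dt
        M += m * dt
    return M / period
```

Below $T_c$ the function returns a value close to the spontaneous magnetization; above $T_c$ the magnetization oscillates symmetrically around zero and the period average vanishes.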

### 2.2 Heat and Work

Taking the time derivative of the internal energy per spin $u\equiv-Jm^2/2-h(t)m(t)$, we obtain

$$\frac{du}{dt}=\frac{\partial u}{\partial m}\frac{dm}{dt}+\frac{\partial u}{\partial h}\frac{dh}{dt}.\tag{10}$$

Following the standard definition of work in stochastic thermodynamics [34], we identify the rate of work done on the system as

$$\dot{w}(t)\equiv\frac{\partial u}{\partial h}\frac{dh}{dt}.\tag{11}$$

The expression for the dissipated heat follows from the first law

$$\dot{q}(t)\equiv\dot{w}(t)-\frac{du}{dt}=-\frac{\partial u}{\partial m}\frac{dm}{dt}.\tag{12}$$

Since $u$ is periodic in time in the periodic steady state, we obtain

$$\int_0^{2\pi/\omega}\dot{w}(t)\,dt=\int_0^{2\pi/\omega}\dot{q}(t)\,dt,\tag{13}$$

i.e., the average work done on the system in one period equals the average dissipated heat in one period. The entropy production rate in the periodic steady state is defined as

$$\sigma\equiv\frac{1}{T}\frac{\omega}{2\pi}\int_0^{2\pi/\omega}\dot{q}(t)\,dt=\frac{1}{T}\frac{\omega}{2\pi}\int_0^{2\pi/\omega}\dot{w}(t)\,dt,\tag{14}$$

where we used the first law (13) in the second equality.

We solved equation (8) numerically and calculated the entropy production rate with Eqs. (11) and (14). The results are shown in Fig. 2, where we plot $\sigma$ as a function of the temperature $T$ for two different values of the amplitude $h_0$ and fixed frequency $\omega$. If $h_0$ is such that the phase transition is second-order, the entropy production rate has a kink at the critical point, indicating that the first derivative of $\sigma$ with respect to $T$ has a discontinuity at criticality. If the phase transition is first-order, the entropy production rate itself is discontinuous at the critical point. Hence, within the mean-field model the critical behavior of the entropy production rate is different in the two different regions of the phase diagram. We note that a discontinuity in the first derivative of $\sigma$ with respect to the control parameter has also been observed in a mean-field approximation of the majority vote model [9] and in a model for nonequilibrium wetting [12].
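The calculation of $\sigma$ from Eqs. (11) and (14) can be sketched as follows. With $u=-Jm^2/2-hm$ one has $\partial u/\partial h=-m$, so the rate of work is $\dot w(t)=-m(t)\,dh/dt=m(t)h_0\omega\sin(\omega t)$. A minimal Python sketch (illustrative parameter values):

```python
from math import cos, sin, tanh, pi

def entropy_production_rate(T, h0=0.2, omega=2.0 * pi, J=1.0, tau=1.0,
                            n_transient=100, n_steps=2000):
    """Entropy production rate of Eq. (14), obtained from the rate of
    work w_dot(t) = m(t) h0 omega sin(omega t) of Eq. (11), averaged
    over one period of the periodic steady state."""
    period = 2.0 * pi / omega
    dt = period / n_steps
    m, t = 1.0, 0.0
    # relax to the periodic steady state of Eq. (8)
    for _ in range(n_transient * n_steps):
        m += dt / tau * (-m + tanh((J * m + h0 * cos(omega * t)) / T))
        t += dt
    # accumulate the work done on the system over one period
    work = 0.0
    for _ in range(n_steps):
        m += dt / tau * (-m + tanh((J * m + h0 * cos(omega * t)) / T))
        t += dt
        work += m * h0 * omega * sin(omega * t) * dt
    return work / (T * period)
```

The work done over one period equals the dissipated heat, Eq. (13), so the returned value is nonnegative and decreases far above the critical temperature, where the magnetization barely responds to the field.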

## 3 Two-dimensional Ising model

### 3.1 Model definition

For the two-dimensional Ising model with nearest-neighbor interactions, a configuration $s$ at time $t$ has energy

$$E_s(t)\equiv-J\sum_{\langle ij\rangle}s_is_j-h(t)\sum_i s_i,\tag{15}$$

where the first sum is over nearest neighbors, $h(t)$ is given by (2), and we consider periodic boundary conditions. The Binder cumulant is defined as [38]

$$U_L\equiv1-\frac{\langle M^4\rangle}{3\langle M^2\rangle^2},\tag{16}$$

where $M$ represents the magnetization integrated over a period and the brackets denote an average over stochastic trajectories. We calculated this Binder cumulant with numerical simulations, which are explained below. The critical temperatures are determined from the crossing points of the Binder cumulant in Fig. 3. The lack of a minimum of the Binder cumulant as it crosses from $2/3$ to $0$ in Fig. 3 is an indicator of a second-order phase transition. A minimum of the Binder cumulant at a negative value in Fig. 3 is an indicator of a first-order phase transition, and the transition has indeed been interpreted as first-order on the basis of this kind of numerical result [15]. However, this minimum has been shown to be a finite-size effect, with a solid theoretical argument and extensive numerical simulations showing that the minimum disappears for large enough systems [16] (see Fig. 3). The basic idea of the theoretical argument is that for small systems the temperature at which the system crosses over from the multiple-droplet to the single-droplet regime lies above the critical temperature, leading to major differences in the probability distribution of the order parameter [16]. Hence, convincing numerical evidence supports that in two dimensions the transition is always second-order, independent of $h_0$ and $\omega$. Our numerical results in Fig. 3 indicate a minimum that decreases with system size, in agreement with the results from [16].
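As an illustration, the cumulant of Eq. (16) is estimated from samples of the period-integrated magnetization $M$ collected over independent stochastic trajectories; the sketch below is a minimal version of this estimator.

```python
import numpy as np

def binder_cumulant(M_samples):
    """Binder cumulant U_L = 1 - <M^4>/(3 <M^2>^2), Eq. (16), from
    samples of the magnetization integrated over a period."""
    M = np.asarray(M_samples, dtype=float)
    m2 = np.mean(M ** 2)
    m4 = np.mean(M ** 4)
    return 1.0 - m4 / (3.0 * m2 ** 2)
```

Deep in the ordered phase, where the samples concentrate at $\pm M_0$, the cumulant approaches $2/3$; for Gaussian fluctuations around $M=0$ it approaches $0$, which is why the crossing points of curves for different $L$ locate $T_c$.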

The system reaches a periodic steady state characterized by the probability $P_s(t)$, which has a period $2\pi/\omega$. The entropy production rate per spin in this periodic steady state is defined as [34]

$$\sigma_L\equiv\frac{1}{L^2}\frac{\omega}{2\pi}\int_0^{2\pi/\omega}dt\left(\sum_{ss'}w_{ss'}(t)P_s(t)\frac{E_s(t)-E_{s'}(t)}{T}\right),\tag{17}$$

where $w_{ss'}(t)$ is the transition rate from state $s$ to state $s'$ at time $t$. These transition rates are nonzero only if the configurations $s$ and $s'$ differ by one spin flip and they fulfill the detailed balance relation $w_{ss'}(t)/w_{s's}(t)=e^{[E_s(t)-E_{s'}(t)]/T}$. The factor $1/L^2$ makes $\sigma_L$ finite in the thermodynamic limit, where

$$\sigma\equiv\lim_{L\to\infty}\sigma_L.\tag{18}$$

The average rate of dissipated heat per spin is $T\sigma_L$, which is equal to the rate of work done on the system due to the first law.

Numerical simulations were performed with the following procedure. The initial condition is a random configuration of spins, corresponding to infinite temperature. The time is discretized with the integer variable $n$, in such a way that for $n=L^2$ we have $t=1$, i.e., the time is in units of Monte Carlo steps. We use the standard Metropolis rule for flipping a spin [35]. A randomly chosen spin $i$ may flip depending on the energy difference

$$\Delta E_i(n/L^2)=2s_i\left[J\sum_j s_j+h(n/L^2)\right],\tag{19}$$

where the sum in $j$ is over the four nearest neighbors of spin $i$. If this energy difference is negative the spin flips with probability one, and if it is positive, the spin flips with probability $e^{-\Delta E_i/T}$.

The entropy production rate was computed in the following way. After a certain transient the system reaches a periodic steady state and we compute the change in the entropy of the external medium $\Delta S$ from time $t_1$, after the transient, to time $t_2$, which corresponds to several periods $2\pi/\omega$. If the system jumps from a configuration $s$ to a configuration $s'$ the entropy changes by an amount $[E_s(t)-E_{s'}(t)]/T$. The entropy production rate per spin is then given by

$$\sigma_L=\frac{\Delta S}{(t_2-t_1)L^2}.\tag{20}$$
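A minimal sketch of this procedure in Python is given below; the system size, field parameters, and run lengths are illustrative and much smaller than those of our simulations. The frequency is specified through the period of the field measured in Monte Carlo steps.

```python
import numpy as np

def entropy_production_metropolis(T, L=8, h0=1.0, period_mcs=50,
                                  n_transient=50, n_periods=20, J=1.0,
                                  seed=1):
    """Metropolis estimate of sigma_L, Eq. (20): every accepted flip
    s -> s' adds (E_s - E_s')/T = -Delta E / T to the entropy of the
    medium; the sum is divided by the observation time and by L^2."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))       # random initial state
    omega = 2.0 * np.pi / period_mcs           # frequency in 1/MCS
    n_skip = n_transient * period_mcs * L * L  # transient flip attempts
    n_total = (n_transient + n_periods) * period_mcs * L * L
    dS = 0.0
    for n in range(n_total):
        h = h0 * np.cos(omega * n / (L * L))   # field at time t = n/L^2
        i, j = rng.integers(L, size=2)
        nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2.0 * s[i, j] * (J * nn + h)      # Eq. (19)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
            if n >= n_skip:
                dS += -dE / T                  # entropy of the medium
    return dS / (n_periods * period_mcs * L * L)
```

In the periodic steady state the internal energy is periodic, so the accumulated medium entropy equals the dissipated work over the observation window and the estimate is positive on average.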

### 3.2 Critical behavior of the entropy production

In Fig. 4 we plot the entropy production rate as a function of the temperature for two different values of $h_0$. In both cases the entropy production rate has a maximum above $T_c$. At the critical point, the entropy production rate seems to have an inflection, indicating a divergence of the first derivative of $\sigma$ with respect to $T$ at the critical point, i.e.,

$$\frac{d\sigma}{dT}\sim|T-T_c|^{-\alpha}.\tag{21}$$

Direct numerical evaluation of the exponent $\alpha$ with off-critical simulations turns out to be difficult. As shown in Fig. 5, the first derivative $d\sigma_L/dT$ for a finite system has a maximum that increases with $L$, in agreement with the expectation that $d\sigma/dT$ diverges at criticality. Plotting this maximum as a function of system size in Fig. 6, we obtain

$$\left.\frac{d\sigma_L}{dT}\right|_{\max}\sim\ln L.\tag{22}$$

The exponent $\alpha$ is related to an exponent $\psi$, defined by $\left.d\sigma_L/dT\right|_{\max}\sim L^{\psi}$, through the scaling relation $\alpha=\psi\nu$, where $\nu$ is the critical exponent characterizing the divergence of the correlation length at criticality ($\nu=1$ [15]). Relation (22) implies $\psi=0$, and, therefore, $\alpha=0$, corresponding to a logarithmic divergence of $d\sigma/dT$ at the critical point.

The critical behavior of the entropy production rate is the same in both plots shown in Fig. 6. This result provides further support for a second-order phase transition also for the amplitude for which the Binder cumulant for small systems shows a minimum in Fig. 3, in the sense that with a first-order phase transition the system would explore different regions of phase space, which could lead to a discontinuity in the entropy production at $T_c$. However, the lack of a jump in $\sigma$ cannot be taken as a demonstration that the transition is not first-order: a relation connecting the average magnetization integrated over a period with the entropy production is not known, and, therefore, a discontinuity in the order parameter does not necessarily imply a discontinuity in the entropy production.

The same kind of critical behavior of the first derivative of the entropy production, characterized by a logarithmic divergence, has been observed in a majority vote model [9] and in an Ising model in contact with two heat baths [11].

## 4 Critical behavior of the learning rate

### 4.1 Stochastic external field

We now consider a two-dimensional Ising model with a magnetic field that changes at stochastic times. A similar model has been considered in [36]. This magnetic field changes at a rate $\Gamma$ between the values $h_0$ and $-h_0$. The system and external field together form a bipartite Markov process [29, 30], which has $2\times2^{L^2}$ states.

A state of this bipartite process is characterized by the vector $s=(s_1,s_2,\ldots,s_{L^2})$ and a binary variable $x=\pm1$ that indicates whether the external field is $h_0$ or $-h_0$. The transition rate from a state $(x,s)$ to a state $(x',s')$ is

$$w^{xx'}_{ss'}\equiv\begin{cases}\Gamma&\text{if }x\neq x'\text{ and }s=s'\\ \chi_{ss'}/\left(1+e^{(E^{x}_{s'}-E^{x}_{s})/T}\right)&\text{if }x=x'\text{ and }s\neq s'\\ 0&\text{if }x\neq x'\text{ and }s\neq s',\end{cases}\tag{23}$$

where $\chi_{ss'}$ is $1$ if both configurations differ by a single spin flip and $0$ otherwise. The energy $E^x_s$ is given by

$$E^x_s\equiv-J\sum_{\langle ij\rangle}s_is_j-xh_0\sum_i s_i.\tag{24}$$

A bipartite Markov process has two kinds of jumps, internal ones that lead to a spin flip and external ones that change the external field. The dissipated heat is related to the internal jumps, whereas the external jumps are related to work. Hence, only internal jumps appear in the entropy production rate per spin $\sigma_L$, which is defined as [34]

$$\sigma_L\equiv\frac{1}{L^2}\sum_x\sum_{ss'}P^x_s w^x_{ss'}\frac{E^x_s-E^x_{s'}}{T},\tag{25}$$

where $w^x_{ss'}\equiv w^{xx}_{ss'}$ and $P^x_s$ is the stationary distribution.

We have performed continuous-time Monte Carlo simulations of this model, using a method related to the method introduced in [37]. The main difference is that we also have to account for jumps that lead to a change in the magnetic field. In our algorithm, at each jump, there is a probability $\Gamma/(\Gamma+\Lambda^x_s)$, which depends on the state of the system, that the magnetic field changes, where $\Lambda^x_s$ is the sum of all transition rates that lead to a spin flip. A spin flip happens with the complementary probability, and is executed with the procedure explained in [37].

The parameter $\Gamma$ is written as $\Gamma=\gamma L^2$, where $\gamma$ is kept fixed for different system sizes. The parameter $\Gamma$ must scale as $L^2$ for the following reason. The escape rate of a state $(x,s)$ is $\Gamma+\Lambda^x_s$, where $\Lambda^x_s$ is the sum of all transition rates that lead to a spin flip. Since there are $L^2$ spins that can be flipped, $\Lambda^x_s$ scales with $L^2$, and the parameter $\Gamma$ also has to scale with $L^2$ for the probability of a change in the field to be conserved with a change in system size.

The entropy production is calculated by adding $(E^x_s-E^x_{s'})/T$ to $\Delta S$ every time a jump from $s$ to $s'$ occurs. If the simulation runs for a time $t_2-t_1$, after some transient, the entropy production rate per spin is calculated with expression (20).

The critical point is again determined with the Binder cumulant

$$U_L=1-\frac{\langle m^4\rangle}{3\langle m^2\rangle^2},\tag{26}$$

where the brackets denote an average over stochastic trajectories and $m\equiv L^{-2}\sum_i s_i$ is the magnetization per spin. The Binder cumulant in Fig. 7 indicates a second-order phase transition. The critical behavior of the entropy production rate is the same as in the previous model, as shown in Fig. 7, where the maximum of the first derivative of the entropy production rate follows the behavior in Eq. (22).

### 4.2 Learning rate

The Ising model with a stochastic field allows us to consider a further aspect related to information theory. The external field can be interpreted as a stochastic signal and the system of spins as a sensor that follows the signal. It turns out that there is a quantity, called learning rate [32, 33], that characterizes the rate at which the system obtains information about the signal. Technically, this learning rate is a time derivative of the mutual information between system and external field.

In the stationary state, the learning rate (per spin) is given by [32, 33]

$$\lambda_L=\frac{\Gamma}{L^2}\sum_s\left(P^{+1}_s-P^{-1}_s\right)\ln\frac{P^{+1}_s}{P^{-1}_s}.\tag{27}$$

The second law for a sensor following a signal reads [29, 30, 32]

$$\lambda_L\leq\sigma_L.\tag{28}$$

The learning rate $\lambda_L$, which quantifies how much information the system obtains about the signal, is bounded by $\sigma_L$, which quantifies heat dissipation. This inequality allows for the definition of the informational efficiency $\eta\equiv\lambda_L/\sigma_L\leq1$.

In Fig. 8 we plot the learning rate and the entropy production rate as functions of temperature for a small system. They both have maxima at some intermediate value of $T$. These figures were obtained with the exact calculation of the eigenvector of the stochastic matrix that is associated with the eigenvalue zero, which leads to the stationary distribution $P^x_s$.

For larger values of $L$ we have to use Monte Carlo simulations. The problem that arises for the calculation of $\lambda_L$ in simulations is that the increment in the learning rate after a jump depends on the nonequilibrium stationary probability $P^x_s$, which is not known.

We propose the following lower bound on the learning rate. Instead of the microscopic configuration $s$ we consider some mesoscopic variable $a$. In particular, we consider a variable that gives the "class" of the spin in the continuous-time simulation [37]. This variable takes the orientation of a spin and its nearest neighbors into account and has $10$ possible outcomes: for each spin orientation the number of up nearest neighbors can go from $0$ to $4$. The lower bound is then written as

$$\lambda^{\mathrm{lb}}_L\equiv\frac{\Gamma}{L^2}\sum_a\left(P^{+1}_a-P^{-1}_a\right)\ln\frac{P^{+1}_a}{P^{-1}_a}.\tag{29}$$

This lower bound fulfills $\lambda^{\mathrm{lb}}_L\leq\lambda_L$, which is illustrated in Fig. 8, due to the log sum inequality [39]. The probability $P^x_a$ can be calculated in a Monte Carlo simulation by calculating the density of spins in each one of the ten classes in the steady state.
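Given estimates of the class probabilities, the bound of Eq. (29) is a one-line computation. The sketch below assumes the probabilities $P^{x}_a$ are supplied as arrays (the numbers in the usage example are made up); each term $(P^{+1}_a-P^{-1}_a)\ln(P^{+1}_a/P^{-1}_a)$ is nonnegative, so the bound is a sum of nonnegative contributions.

```python
import numpy as np

def learning_rate_lower_bound(p_plus, p_minus, Gamma, L):
    """Lower bound on the learning rate, Eq. (29), from the stationary
    probabilities p_plus[a] = P_a^{+1} and p_minus[a] = P_a^{-1} of the
    mesoscopic classes a."""
    p_plus = np.asarray(p_plus, dtype=float)
    p_minus = np.asarray(p_minus, dtype=float)
    terms = (p_plus - p_minus) * np.log(p_plus / p_minus)
    return Gamma / L**2 * np.sum(terms)
```

If the two class distributions coincide the bound vanishes: a sensor whose coarse-grained state carries no signature of the field is not guaranteed, at this level of description, to be learning anything.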

It turns out that the lower bound scales as $\lambda^{\mathrm{lb}}_L\sim L^{-2}$, going to zero in the limit of infinite system size. This scaling can be understood with the following heuristic argument. The probability of changing the magnetic field in a transition, instead of flipping a spin, is conserved as $L$ is increased. Therefore, after a change in the field, the average number of spin flips before the next change in the external field is conserved for increasing $L$. However, the number of spin flips needed to equilibrate the system is proportional to $L^2$. As the number of spin flips between field changes is constant and the number of spin flips necessary for equilibration scales as $L^2$, it is reasonable to expect that the difference $P^{+1}_a-P^{-1}_a$ decreases with increasing system size, which we have confirmed numerically. From the expression (29) we then obtain $\lambda^{\mathrm{lb}}_L\sim L^{-2}$.

In Fig. 9 we plot the scaled learning rate $L^2\lambda^{\mathrm{lb}}_L$ as a function of the temperature $T$. For large $L$, this lower bound seems to have a kink, in the form of a local minimum, at the critical point. Hence, our results indicate that the first derivative of $L^2\lambda^{\mathrm{lb}}_L$ with respect to $T$ is discontinuous in the limit $L\to\infty$. The critical behavior of the lower bound on the learning rate is then different from that of the entropy production. We note that the lower bound on the efficiency, $\lambda^{\mathrm{lb}}_L/\sigma_L$, goes to zero in the thermodynamic limit due to the different scaling of $\lambda^{\mathrm{lb}}_L$ in relation to $\sigma_L$.

## 5 Conclusion

We have analyzed the critical behavior of the entropy production rate of a nonequilibrium Ising model subjected to a time-dependent periodic field. For the mean-field model, this entropy production rate is found to have a jump at criticality if the transition is first-order. However, if the transition is second-order, the entropy production rate is continuous but its first derivative is discontinuous at criticality. For the two-dimensional model, for which the transition is second-order, the first derivative of the entropy production rate has a logarithmic divergence at the critical point. The novelties of our results in relation to previous studies on the critical behavior of the entropy production rate [9, 10, 11, 12] are the following. The models analyzed here are driven by an external periodic protocol, in contrast to previous studies that consider models driven by a fixed thermodynamic force; moreover, the entropy production rate was found to have a jump at criticality for the mean-field model in the region with a first-order phase transition, which is a critical behavior that had not been observed in previous studies. Furthermore, the results for the deterministic field support the lack of a first-order phase transition in two dimensions.

We have also investigated the critical behavior of the learning rate for the model with an external field that changes at stochastic time-intervals between two values. It turns out that the calculation of the learning rate within numerical simulations would require the unknown nonequilibrium stationary distribution. We introduced a lower bound on the learning rate that can be calculated within numerical simulations. Our numerical results indicate that the critical behavior of this lower bound is different from that of the entropy production: it has a local minimum at the critical point and its first derivative seems to be discontinuous.

Our results on the Ising model with a stochastic field offer two fresh perspectives. First, most studies on the relation between information and thermodynamics consider small systems. However, the inequalities for bipartite processes [29, 30] are also valid for macroscopic systems with a large number of states, as explicitly illustrated here. For example, it would be interesting to build a model of a macroscopic Maxwell’s demon using the framework for bipartite systems. Second, this model allows for the definition of an informational efficiency. However, the lower bound on the efficiency, which is the quantity we could calculate, turned out to go to zero in the thermodynamic limit. Analyzing the critical behavior of efficiency in nonequilibrium models is also an interesting perspective.

This work and the few previous studies on the critical behavior of entropy production demonstrate that the average entropy production can be a useful observable to determine the critical point of a generic nonequilibrium phase transition. As an interesting direction for future work, higher order moments of the fluctuating entropy production could be even more effective for a precise determination of the critical point. While the entropy production has been found to display three distinct behaviors at the critical line, i.e., a logarithmic divergence of its first derivative, a discontinuity of its first derivative, and a discontinuity of the entropy production itself, the deeper question whether entropy production can be used to classify nonequilibrium phase transitions in a meaningful way remains open.

## Acknowledgements
We thank Shamik Gupta for carefully reading the manuscript and Per Arne Rikvold for pointing out [16].

## References

1. Hinrichsen H 2000 Adv. Phys. 49 815
2. Henkel M, Hinrichsen H and Lübeck S 2008 Non-equilibrium phase transitions vol 1. Absorbing Phase Transitions (Berlin: Springer)
3. Ódor G 2008 Universality in nonequilibrium lattice systems (Singapore: World Scientific)
4. Krug J 1991 Phys. Rev. Lett. 67 1882
5. Blythe R A and Evans M R 2007 J. Phys. A Math. Theor. 40 R333
6. Evans M R and Hanney T 2005 J. Phys. A Math. Theor. 38 R195
7. Vicsek T, Czirók A, Ben-Jacob E, Cohen I and Shochet O 1995 Phys. Rev. Lett. 75 1226
8. Solon A P and Tailleur J 2015 Phys. Rev. E 92 042119
9. Crochik L and Tomé T 2005 Phys. Rev. E 72 057103
10. de Oliveira M J 2011 J. Stat. Mech.: Theor. Exp. P12012
11. Tomé T and de Oliveira M J 2012 Phys. Rev. Lett. 108 020601
12. Barato A C and Hinrichsen H 2012 J. Phys. A Math. Theor. 45 115005
13. Andrae B, Cremer J, Reichenbach T and Frey E 2010 Phys. Rev. Lett. 104 218102
14. Tomé T and de Oliveira M J 1990 Phys. Rev. A 41 4251
15. Chakrabarti B K and Acharyya M 1999 Rev. Mod. Phys. 71 847
16. Korniss G, Rikvold P A and Novotny M A 2002 Phys. Rev. E 66 056127
17. Korniss G, Rikvold P A and Novotny M A 2001 Phys. Rev. E 63 016120
18. Fujisaka H, Tutu H and Rikvold P A 2001 Phys. Rev. E 63 036109
19. Buendía G M and Rikvold P A 2008 Phys. Rev. E 78 051108
20. Sinitsyn N A and Nemenman I 2007 Phys. Rev. Lett. 99 220408
21. Rahav S, Horowitz J and Jarzynski C 2008 Phys. Rev. Lett. 101 140602
22. Astumian R D 2011 Ann. Rev. Biophys. 40 289
23. Raz O, Subaşı Y and Jarzynski C 2016 Phys. Rev. X 6 021022
24. Sagawa T and Ueda M 2012 Phys. Rev. E 85 021104
25. Horowitz J M, Sagawa T and Parrondo J M R 2013 Phys. Rev. Lett. 111 010602
26. Mandal D and Jarzynski C 2012 Proc. Natl. Acad. Sci. U.S.A. 109 11641
27. Barato A C and Seifert U 2014 Phys. Rev. Lett. 112 090601
28. Barato A C and Seifert U 2014 Phys. Rev. E 90 042150
29. Hartich D, Barato A C and Seifert U 2014 J. Stat. Mech. P02016
30. Horowitz J M and Esposito M 2014 Phys. Rev. X 4 031015
31. Parrondo J M, Horowitz J M and Sagawa T 2015 Nature Phys. 11 131
32. Barato A C, Hartich D and Seifert U 2014 New J. Phys. 16 103024
33. Hartich D, Barato A C and Seifert U 2016 Phys. Rev. E 93 022116
34. Seifert U 2012 Rep. Prog. Phys. 75 126001
35. Newman M E J and Barkema G T 1999 Monte Carlo methods in statistical physics (Oxford: Oxford University Press)
36. Acharyya M 1998 Phys. Rev. E 58 174
37. Bortz A, Kalos M and Lebowitz J 1975 J. Comput. Phys. 17 10
38. Landau D P and Binder K 2014 A guide to Monte Carlo simulations in statistical physics (Cambridge: Cambridge university press)
39. Cover T M and Thomas J A 2006 Elements of information theory (Hoboken: Wiley)