# Similarities and Differences Between Nonequilibrium Steady States and Time-Periodic Driving in Diffusive Systems

D. M. Busiello Dipartimento di Fisica ‘G. Galilei’, Universitá di Padova, Via Marzolo 8, 35131 Padova, Italy Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742, USA    C. Jarzynski Institute for Physical Science and Technology, University of Maryland, College Park, Maryland 20742, USA Department of Chemistry and Biochemistry, University of Maryland, College Park, Maryland 20742, USA Department of Physics, University of Maryland, College Park, Maryland 20742, USA    O. Raz Department of Chemistry and Biochemistry, University of Maryland, College Park, Maryland 20742, USA Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot, 76100, Israel
July 14, 2019
###### Abstract

A system that violates detailed balance evolves asymptotically into a nonequilibrium steady state with non-vanishing currents. Analogously, when detailed balance holds at any instant of time but the system is driven through time-periodic variations of external parameters, it evolves toward a time-periodic state, which can also support non-vanishing currents. In both cases the maintenance of currents throughout the system incurs a cost in terms of entropy production. Here we compare these two scenarios for one-dimensional diffusive systems with periodic boundary conditions, a framework commonly used to model biological and artificial molecular machines. We first show that the entropy production rate in a periodically driven system is necessarily greater than that in a stationary system without detailed balance, when both are described by the same (time-averaged) current and probability distribution. Next, we show how to construct both a nonequilibrium steady state and a periodic driving that support a given time-averaged probability distribution and current. Lastly, we show that although the entropy production rate of a periodically driven system is higher than that of an equivalent steady state, the difference between the two entropy production rates can be tuned to be arbitrarily small.

## I Introduction

A system that is coupled to a thermal environment generically relaxes to an equilibrium state, in which its interesting properties can be calculated using the standard tools of statistical mechanics and thermodynamics. A similar unifying theory for all non-equilibrium phenomena is still lacking, although systems out of equilibrium have been investigated from various broad perspectives, including linear response theory Kubo (1957); Zwanzig (2001), relaxation towards equilibrium Montroll and Shuler (1957), fluctuation theorems Seifert (2012); Jarzynski (2011), nonequilibrium steady states (NESS) Ge et al. (2012) and systems with time-periodic driving Astumian (2007); Kolomeisky and Fisher (2007); Ray and Barato (2017). Many interesting results have been established within each of these frameworks, but much remains to be understood about the similarities and the differences between them.

Systems that are constantly maintained away from equilibrium are of particular interest in biology and nano-science. There are two common ways to maintain a system out of equilibrium for arbitrarily long times: in the first, the system of interest is coupled to multiple environments, e.g. baths with different equilibrium properties such as temperature, chemical potential or voltage. In such cases, the constant fluxes between the baths drive the system into a steady state that is out of equilibrium, as it can only be maintained at the cost of thermodynamic resources (heat, fuel, photons, etc.) provided by the baths. These steady states are commonly referred to as Non-Equilibrium Steady States (NESS), and they are used to model a variety of biological processes: from photosynthesis Knox (1969), in which photons are consumed in the carbon fixation process; through the synthesis of ATP by ATP-synthase, where the chemical potential difference of ions across the membrane is used to convert ADP into ATP Yasuda et al. (1998); Gaspard and Gerritsma (2007); to molecular motors such as kinesin Fisher and Kolomeisky (2001), which consume ATP molecules and generate directed motion for the transport of molecular cargo.

An alternative method to maintain a system out of equilibrium is to vary, periodically with time, one or more parameters of the system, environment or the coupling between them. This type of driving is often referred to as stochastic pumping or thermal ratcheting. Stochastic Pumps (SP) provide simple models of classical and quantum heat engines Gingrich et al. (2014); Brandner et al. (2015); Uzdin et al. (2015) or of the driving mechanism in artificial molecular motors Astumian (2007); Browne and Feringa (2006); Hernandez et al. (2004); Kolomeisky and Fisher (2007); Leigh et al. (2003), where periodic changes in macroscopic parameters such as temperature, pressure and pH keep the motor operating.

Both NESS and SP are characterized by the existence of non-vanishing currents, a non-vanishing entropy production in the environment and a non-equilibrium probability distribution. It is therefore natural and potentially fruitful to ask: are SP and NESS essentially equivalent in terms of currents, probabilities and entropy production? In other words – can any current, probability distribution and entropy production achievable with one type of driving also be achieved with the other? In terms of potential applications, this question can be stated as follows: can an artificial molecular motor driven by periodic changes in the environment exactly mimic a biological molecular motor driven by consuming fuel? For finite-state systems, this question has been recently addressed in Raz et al. (2016a), where it was shown that SP and NESS are equivalent – both systems can in principle have the same time-averaged probabilities, currents and entropy production rates. Interestingly, however, they are not equivalent in terms of fluctuations Rotskoff (2017): to match the current fluctuations of a NESS, a SP must have a higher entropy production.

In this manuscript we extend the NESS-SP comparison to overdamped systems that evolve diffusively in one dimension, whose dynamics are described by a Fokker-Planck equation on a ring. For artificial molecular motors, this model is typically more accurate than the discrete state case, which can be viewed as a coarse-grained version of a diffusive system. In the context of “no pumping theorems”, a similar extension from discrete state models Mandal (2014); Maes et al. (2010); Asban and Rahav (2014); Chernyak and Sinitsyn (2008); Mandal and Jarzynski (2011); Rahav (2011); Rahav et al. (2008) to diffusive systems Horowitz and Jarzynski (2009) revealed a complete analogy between the two models.

As we show below, in the context of controllability diffusive systems are quite different from the discrete systems studied in Raz et al. (2016a). In discrete systems one can achieve full control of the system in the following sense: given a desired set of currents, entropy production, and probability distribution (which are time-independent in the case of NESS, or time-averaged over one period of driving in the case of SP), one can determine the parameters of the model required to achieve these targets. By contrast, in diffusive systems full control of averaged currents and probability is possible, but these set a minimal bound on the corresponding entropy production rate, or even uniquely determine it for a NESS. Moreover, diffusive SP always generate more entropy production than NESS, when both drive the same averaged current and probability distribution. This suggests a natural optimization problem: finding the SP that achieves a target current and probability, with minimal averaged entropy production rate.

This manuscript is organized as follows: in Sec. II we introduce the mathematical framework used to model diffusive SP and NESS systems. In Sec. III the entropy production inequality is derived. In Sec. IV we show full controllability of NESS in terms of current and probability distribution. This is done constructively, by obtaining the potential and velocity that generate a given target current and probability distribution. In Sec. V we solve the analogous problem for SP; our construction requires several preliminary steps that illustrate crucial points of the analysis. In Sec. VI we consider the optimization problem of minimizing the entropy production for a given target current and probability distribution. The general case of this problem is discussed in Appendix A. We conclude in Sec. VII with discussions and proposals for further investigations.

## II Mathematical framework

We aim to compare two types of driving in diffusive systems: the first breaks detailed balance in a time-independent system, and the second varies, periodically in time, the parameters of a system that obeys detailed balance. To this end, let us first consider a diffusion process on a ring for which detailed balance holds instantaneously. The state of the system at time $t$ is denoted by $x$; we use units in which the length of the ring is 1, and we identify $x=0$ with $x=1$ (periodic boundary conditions). The time-dependent probability density $P(x,t)$ obeys the Fokker-Planck equation

$$\partial_t P = \gamma^{-1}\partial_x\left[(\partial_x U)P\right] + D\,\partial_{xx}P, \tag{1}$$

where $U(x,t)$ is the time-periodic potential in which the system diffuses, and $\gamma$ and $D$ are the damping coefficient and diffusion constant, respectively. Motivated by the modeling of molecular motors, we assume that the diffusion constant does not depend on position, $\partial_x D = 0$. We also assume that $\gamma$ and $D$ satisfy the fluctuation-dissipation relation $D = (\beta\gamma)^{-1}$, where $\beta$ is the inverse temperature.

For the system to satisfy the detailed balance condition at all times, the potential must be periodic in $x$, namely $U(0,t) = U(1,t)$ for each $t$. Indeed, if the potential were suddenly frozen, the system would relax to an equilibrium state described by the Boltzmann distribution $\propto e^{-\beta U}$, with vanishing probability currents. We denote the period of the driving by $T$, i.e. $U(x,t+T) = U(x,t)$. Eq.(1) sets the basic model for a diffusive system driven by periodic variations of external parameters, commonly referred to as a stochastic pump or as a thermal ratchet Parrondo (1998); Magnasco (1993). In this model, the time dependence of the driving is encoded in the temporal variations of the potential $U(x,t)$. By Floquet theory, the probability distribution of such a system converges with time to a unique solution that is periodic in both $x$ and $t$. We denote this periodic solution by $P^{ps}(x,t)$.
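As an illustration of this model (not part of the original analysis), the relaxation described above can be checked with a minimal finite-difference integration of Eq.(1) on the ring. The grid sizes, frozen potential and parameter values below are arbitrary choices:

```python
import numpy as np

# Explicit Euler integration of Eq.(1) with a frozen, spatially periodic
# potential; the distribution should relax to the Boltzmann form.
N = 200                           # number of grid points on the ring
D, beta = 1.0, 1.0
gamma = 1.0 / (beta * D)          # fluctuation-dissipation relation
x = np.arange(N) / N
dx, dt = 1.0 / N, 5e-6
U = 0.5 * np.cos(2 * np.pi * x)   # an arbitrary frozen potential

def step(P):
    """One Euler step of dP/dt = (1/gamma) d_x[(d_x U) P] + D d_xx P."""
    dU = (np.roll(U, -1) - np.roll(U, 1)) / (2 * dx)
    drift = (np.roll(dU * P, -1) - np.roll(dU * P, 1)) / (2 * dx) / gamma
    diffusion = D * (np.roll(P, -1) - 2 * P + np.roll(P, 1)) / dx**2
    return P + dt * (drift + diffusion)

P = np.ones(N)                    # start from the uniform distribution
for _ in range(40000):            # total time 0.2, several relaxation times
    P = step(P)

P_boltzmann = np.exp(-beta * U)
P_boltzmann /= P_boltzmann.sum() * dx
max_error = np.max(np.abs(P - P_boltzmann))
```

The central differences conserve total probability exactly, and `max_error` measures the distance from the Boltzmann distribution after relaxation.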

A probability distribution evolving under Eq.(1) can be associated to a probability current

$$J(x,t) = -D\left[\partial_x P(x,t) + \beta P(x,t)\,\partial_x U(x,t)\right], \tag{2}$$

such that the probability obeys a continuity equation,

$$\partial_t P(x,t) + \partial_x J(x,t) = 0. \tag{3}$$

The current associated with the periodic solution is

$$J^{ps}(x,t) = -D\left[\partial_x P^{ps}(x,t) + \beta P^{ps}(x,t)\,\partial_x U(x,t)\right]. \tag{4}$$

Integrating both sides of Eq.(3) over one period of driving, at fixed $x$, gives

$$P^{ps}(x,T) - P^{ps}(x,0) + T\,\partial_x \overline{J^{ps}}(x) = 0, \tag{5}$$

where the overbar denotes temporal averaging over one cycle, $\overline{J^{ps}}(x) \equiv T^{-1}\int_0^T J^{ps}(x,t)\,dt$. By the temporal periodicity of $P^{ps}$, the first two terms cancel, hence $\overline{J^{ps}}$ must be independent of $x$. This agrees with the intuitive expectation that, over one cycle, the same total probability flux flows across any point $x$.

In addition to the probability densities and currents, which we consider as the desired outcome of the driving, we are interested in the cost of the driving, given by the environment’s entropy production rate Seifert (2005),

$$\dot S^{ps}(t) = \int_0^1 dx\,\frac{J^{ps}(x,t)^2}{D\,P^{ps}(x,t)}. \tag{6}$$

In order to compare the time-periodic scenario described above with time-independent systems driven by an external force that violates detailed balance, we now introduce a description of the latter. Let us consider the Fokker-Planck equation

$$\partial_t P = \gamma^{-1}\partial_x\left[(\partial_x U)P\right] + D\,\partial_{xx}P - v\,\partial_x P, \tag{7}$$

where $U(x)$ is spatially periodic, $U(0) = U(1)$, as before, but time-independent. The term $-v\,\partial_x P$ violates the detailed balance condition for any $v \neq 0$. The constant $v$ can be interpreted as a characteristic velocity of the probability flow, or alternatively as arising from an additional linear potential $-\gamma v x$ that breaks the spatial periodicity of the total potential, generating a non-conservative force.

For any $P(x,t)$ evolving under Eq.(7), the instantaneous probability current is given by

$$J(x,t) = -D\left[\partial_x P + \beta P\,\partial_x\!\left(U(x) - \frac{v}{\beta D}x\right)\right], \tag{8}$$

such that $P$ and $J$ satisfy the continuity equation, Eq.(3). Note that Eq.(8) reduces to Eq.(2) when $v = 0$.

For finite $v$ and bounded $U(x)$, Eq.(7) has a unique steady state solution, denoted by $P^{ss}(x)$, with an associated probability current

$$J^{ss} = -D\left[\partial_x P^{ss} + \beta P^{ss}\,\partial_x\!\left(U(x) - \frac{v}{\beta D}x\right)\right] = -D e^{-\beta U}\partial_x\!\left(e^{\beta U}P^{ss}\right) + v P^{ss}, \tag{9}$$

which is independent of $x$, since $\partial_t P^{ss} = 0$ implies $\partial_x J^{ss} = 0$ by the continuity equation.

The entropy production rate of a NESS is given by an expression similar to Eq.(6), which simplifies because $J^{ss}$ does not depend on $x$:

$$\dot S^{ss} = (J^{ss})^2\int_0^1 \frac{dx}{D\,P^{ss}(x)}. \tag{10}$$

Our main interest in what follows is the controllability of NESS and SP in terms of probability distribution, current and entropy production. As we have just shown, in contrast with discrete state models, for diffusive systems in NESS the current and probability distribution uniquely define the entropy production, Eq.(10). We next establish that if a given NESS and SP support the same probability and current (after time-averaging in the case of the SP), then the entropy production in the SP is no less than that in the NESS. This implies a lower bound, Eq.(13), for the time-averaged entropy production of a SP.

## III Entropy Production Inequality

To show that the entropic cost of a SP is at least as high as that of a NESS supporting the same averaged current and probability distribution, we first note that given $P^{ps}(x,t)$, $P^{ss}(x)$ and the corresponding currents, the values of the entropy production rates $\dot S^{ps}$ and $\dot S^{ss}$ are fully determined by Eqs.(6,10). Therefore, in diffusive systems with a uniform diffusion constant we cannot impose the entropy production as an independent condition, as was done for discrete state systems Raz et al. (2016a). This constitutes a fundamental difference between continuous and discrete-state systems: there is a minimal cost, in terms of entropy production, for driving a current through a diffusive system, whereas in discrete-state systems currents can have arbitrarily small cost. We note that if the diffusion constant can be varied as a function of position and time, then the analysis in Horowitz and Jarzynski (2009) implies that diffusive systems would behave in the same way as discrete state systems. However, in contrast to the effective potential, which can be manipulated by macroscopic parameters such as light, concentrations, pH, temperature, etc., manipulating the diffusion constant at the nano-scale is experimentally challenging; we therefore limit our discussion to systems in which it is constant.

Let us suppose, for the moment, full controllability of both NESS and SP in terms of their currents and probability distribution. Under this assumption, we can compare the entropy production of the two different scenarios, both supporting the same current and probability distribution. To this aim, consider the integral

$$I = \int_0^T \frac{dt}{DT}\int_0^1 dx\left[\frac{J^{ps}(x,t)}{P^{ps}(x,t)} - \frac{J^{ss}}{P^{ss}(x)}\right]^2 P^{ps}(x,t) \geq 0. \tag{11}$$

Expanding the square in the integrand and rewriting each term using Eqs.(6,10), together with the matching conditions $\overline{J^{ps}} = J^{ss}$ and $\overline{P^{ps}}(x) = P^{ss}(x)$, the following inequality can be derived:

$$I = \frac{1}{T}\int_0^T dt\,\dot S^{ps}(t) - \dot S^{ss} = \overline{\dot S^{ps}} - \dot S^{ss} \geq 0. \tag{12}$$

Thus the entropy production rate of a NESS supporting a given steady state current $J^{ss}$ and probability distribution $P^{ss}(x)$ sets a lower bound on the average entropy production rate of a SP supporting the same current and probability distribution after time-averaging, $\overline{J^{ps}} = J^{ss}$ and $\overline{P^{ps}}(x) = P^{ss}(x)$. Using Eq.(10) we obtain, explicitly,

$$\overline{\dot S^{ps}} \geq \left(\overline{J^{ps}}\right)^2\int_0^1 \frac{dx}{D\,\overline{P^{ps}}(x)}. \tag{13}$$

In what follows we show that for any non-singular potential $U(x,t)$, Eq.(12) is a strict inequality. However, the entropy production under periodic driving can be made arbitrarily close to the bound set by the NESS.

## IV Controllability of NESS

### IV.1 Current and Probability Distribution Controllability

We first show that a NESS can support any target probability distribution and current as its steady state values. To this end, we aim at finding the velocity $v$ and potential $U(x)$ for which $P^{ss}$ and $J^{ss}$, defined in Eq.(9), are equal to the target values. This is achieved by inverting Eq.(9), which can be viewed as a linear equation for $U(x)$. Up to an additive constant, this gives:

$$U(x) = -\frac{J^{ss}}{\beta D}\int^x \frac{dy}{P^{ss}(y)} - \beta^{-1}\log P^{ss}(x) + \frac{v x}{\beta D}. \tag{14}$$

To determine $v$ we impose periodicity on $U(x)$. Using the periodicity of $P^{ss}$, this gives:

$$v = J^{ss}\int_0^1 \frac{dy}{P^{ss}(y)}. \tag{15}$$

These equations show how to build a NESS with desired $P^{ss}(x)$ and $J^{ss}$.
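The construction of Eqs.(14) and (15) can be sketched numerically. In the illustration below (ours, not part of the paper), an arbitrary target distribution and current are chosen, the drive $(U(x),\,v)$ is built from them, and Eq.(7) is then integrated to verify that the system indeed relaxes to the target:

```python
import numpy as np

# Build (U(x), v) from a target NESS via Eqs.(14)-(15), then verify by
# evolving Eq.(7). Targets and parameter values are arbitrary choices.
N = 200
D, beta = 1.0, 1.0
gamma = 1.0 / (beta * D)
x = np.arange(N) / N
dx, dt = 1.0 / N, 5e-6

P_ss = 1.0 + 0.3 * np.sin(2 * np.pi * x)   # normalized target distribution
J_ss = 0.5                                  # target steady-state current

v = J_ss * np.sum(dx / P_ss)                # Eq.(15)
U = (-J_ss / (beta * D) * np.cumsum(dx / P_ss)
     - np.log(P_ss) / beta
     + v * x / (beta * D))                  # Eq.(14); periodic thanks to Eq.(15)

def step(P):
    """Euler step of Eq.(7): dP/dt = (1/gamma) d_x[(d_x U)P] + D d_xx P - v d_x P."""
    dU = (np.roll(U, -1) - np.roll(U, 1)) / (2 * dx)
    drift = (np.roll(dU * P, -1) - np.roll(dU * P, 1)) / (2 * dx) / gamma
    diffusion = D * (np.roll(P, -1) - 2 * P + np.roll(P, 1)) / dx**2
    advection = -v * (np.roll(P, -1) - np.roll(P, 1)) / (2 * dx)
    return P + dt * (drift + diffusion + advection)

P = np.ones(N)
for _ in range(40000):
    P = step(P)

max_error = np.max(np.abs(P - P_ss))   # distance from the target NESS
```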

### IV.2 Minimal entropy production in NESS

As we have just seen, the steady state current and probability distribution of a NESS can be chosen independently – the value of one of them does not constrain the value of the other. It is therefore natural to ask: given $J^{ss}$, what choice of probability distribution minimizes the entropy production? The dual question, namely given $P^{ss}(x)$ what current minimizes the entropy production, is trivial: when $J^{ss} = 0$ (equilibrium conditions) there is no entropy production. The above question can be written as a simple minimization problem:

$$\min_{P^{ss}(x)} \frac{(J^{ss})^2}{D}\int_0^1 \left[\frac{1}{P^{ss}(x)} + \lambda P^{ss}(x)\right]dx, \tag{16}$$

where $\lambda$ is a Lagrange multiplier associated with the normalization of $P^{ss}$. In principle, there is an additional constraint in the problem, the positivity of $P^{ss}$; however, this is a non-holonomic constraint, and as we next show, the optimal solution satisfies it without our having to impose it. The Euler-Lagrange equation for the above optimization problem is given by:

$$-\left(P^{ss}(x)\right)^{-2} + \lambda = 0, \tag{17}$$

which combines with normalization to give $P^{ss}(x) = 1$. Thus the minimal entropy production required to drive a steady current $J^{ss}$ is, by Eq.(10),

$$\dot S^{ss}_{\min} = \frac{(J^{ss})^2}{D}, \tag{18}$$

which is achieved with a flat potential, $U(x) = \mathrm{const}$, and $v = J^{ss}$.
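A quick numerical sanity check of this result (an illustration of ours): for arbitrary smooth perturbations of the uniform distribution, the NESS entropy production of Eq.(10) never drops below the flat-distribution value of Eq.(18).

```python
import numpy as np

# Compare Eq.(10) for perturbed distributions against Eq.(18) for the
# uniform one. The trial perturbations are arbitrary choices.
rng = np.random.default_rng(0)
N, D, J_ss = 512, 1.0, 1.0
dx = 1.0 / N
x = np.arange(N) / N

S_flat = J_ss**2 / D            # Eq.(18): entropy production rate for P_ss = 1

S_perturbed = []
for _ in range(20):
    a, b = rng.uniform(-0.4, 0.4, size=2)
    P = 1 + a * np.sin(2 * np.pi * x) + b * np.cos(4 * np.pi * x)
    P /= P.sum() * dx           # renormalize; stays positive for |a|,|b| < 0.5
    S_perturbed.append(J_ss**2 * np.sum(dx / P) / D)   # Eq.(10)

S_min_perturbed = min(S_perturbed)
```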

## V Current and Probability Controllability in Stochastic Pumps

In the previous section we showed how to construct a NESS with a target current and probability distribution. This task was simple, since the NESS current has an explicit expression (Eq.(9)) in terms of the potential $U(x)$, the steady state distribution $P^{ss}$ and the parameters $v$, $\beta$ and $D$. In the current section, we consider the problem of controlling the time-averaged current and probability distribution in a system driven by a time-periodic potential, in which detailed balance holds instantaneously. Unfortunately, there is no analogous explicit expression for the time-averaged current and probability distribution in terms of $U(x,t)$ that can be inverted as in Sec. IV. However, we have considerable freedom in choosing the potential $U(x,t)$: as shown below, there are many choices that result in the same averaged probability distribution and current. An example of such a protocol can be constructed once the constraints set by detailed balance are taken into account.

To frame this discussion, it will be useful to imagine that we have already constructed a NESS with a desired current $J^{ss}$ and probability distribution $P^{ss}(x)$, and that we want to design a SP that matches these values after time averaging, i.e. we aim to satisfy the conditions

$$\overline{J^{ps}} \equiv \frac{1}{T}\int_0^T J^{ps}(x,t)\,dt = J^{ss}, \tag{19a}$$
$$\overline{P^{ps}}(x) \equiv \frac{1}{T}\int_0^T P^{ps}(x,t)\,dt = P^{ss}(x). \tag{19b}$$

(Recall from Sec. II that $\overline{J^{ps}}$ does not depend on $x$.)

### V.1 Current Loops

In discrete systems, a useful constraint on periodic driving is set by the “no current loops” condition, which states that if a system satisfies the detailed balance condition at a given instant in time, then there can be no instantaneous current loops, regardless of the instantaneous probability distribution (see Raz et al. (2016a) for details). We now show that a similar constraint holds for 1D diffusive systems.

Given instantaneous values of $P(x,t)$ and $J(x,t)$, a simple condition shows whether or not detailed balance is satisfied. Consider the integral

$$\mathcal{J}(t) = \int_0^1 \frac{J(x,t)}{P(x,t)}\,dx, \tag{20}$$

which has a natural physical interpretation: writing the current density as the probability density times a mean local velocity, $J(x,t) = P(x,t)\,v(x,t)$ Seifert (2012), $\mathcal{J}(t)$ is the instantaneous spatial average of this local velocity (a similar idea was recently discussed in Weiss et al. (2017)). Using the spatial periodicity of $P$ and $U$ along with Eq.(8), we obtain

$$\mathcal{J}(t) = -D\int_0^1 \partial_x\!\left[\log P + \beta U - \frac{v}{D}x\right]dx = v, \tag{21}$$

hence detailed balance is satisfied if and only if $\mathcal{J}(t) = 0$.

For the periodically driven SP that we consider here, detailed balance is satisfied at all times by assumption, hence $\mathcal{J}(t) = 0$ at every instant. As $P^{ps}$ is necessarily positive, this condition implies that $J^{ps}(x,t)$ changes its sign as a function of $x$ – this is the no-current-loops condition analogous to the one derived in Raz et al. (2016a). Thus we cannot satisfy Eq.(19a) by demanding that $J^{ps}(x,t) = J^{ss}$; rather, $J^{ps}$ must depend non-trivially on both $x$ and $t$.

An additional consequence of the condition $\mathcal{J}(t) = 0$ is that the entropy production inequality, Eq.(12), is a strict inequality. By Eq.(11), $I = 0$ only when

$$\frac{J^{ps}(x,t)}{P^{ps}(x,t)} = \frac{J^{ss}}{P^{ss}(x)} \tag{22}$$

for all $x$ and $t$, which in turn implies that the sign of $J^{ps}(x,t)$ is everywhere the same as that of $J^{ss}$. This, however, is inconsistent with the requirement that $J^{ps}$ change sign as a function of $x$. We conclude that $I > 0$, hence $\overline{\dot S^{ps}} > \dot S^{ss}$.

Given an instantaneous probability distribution $P(x,t)$ and current density $J(x,t)$ satisfying $\mathcal{J}(t) = 0$, we can use Eq.(2) to obtain, up to an additive constant,

$$U(x,t) = -\frac{1}{\beta D}\int^x \frac{J(y,t)}{P(y,t)}\,dy - \beta^{-1}\log P(x,t), \tag{23}$$

which satisfies the spatial periodicity condition $U(0,t) = U(1,t)$ precisely because $\mathcal{J}(t) = 0$. Eq.(23) gives the time-dependent potential that generates the current pattern $J(x,t)$ for the probability distribution $P(x,t)$.

### V.2 Compatibility of $P(x,t)$ with detailed balance

So far we have discussed the constraint between $P(x,t)$ and $J(x,t)$ imposed by the condition of detailed balance, namely $\mathcal{J}(t) = 0$, and we have shown how to construct $U(x,t)$ from $P$ and $J$ at any instant in time (Eq.(23)). It is natural to ask next: given a smooth, normalized $P(x,t)$, does there always exist a time-dependent potential $U(x,t)$ such that $P(x,t)$ is a solution of Eq.(1)? In other words, is any well-behaved $P(x,t)$ compatible with detailed balance? Naively, one might expect the answer to be negative, as the detailed balance condition sets a constraint on $J(x,t)$ and therefore, through the continuity equation, on the time derivative of $P(x,t)$. Fortunately, this is not the case: as we now show, an arbitrary well-behaved (smooth, positive and normalized) $P(x,t)$ can be driven by a time-dependent periodic potential that satisfies detailed balance.

To establish this result we first construct, given $P(x,t)$, the corresponding current $J(x,t)$ that is compatible with detailed balance. From the continuity equation (Eq.(3)) we have:

$$J(x,t) = J(0,t) - \int_0^x \partial_t P(x',t)\,dx', \tag{24}$$

which necessarily satisfies the periodicity condition $J(1,t) = J(0,t)$, as $\int_0^1 \partial_t P\,dx' = 0$ for normalized probabilities. Thus the continuity equation dictates $J(x,t)$ up to a time dependent function, $J(0,t)$.

Next, we impose the constraint of detailed balance, $\mathcal{J}(t) = 0$. Substituting Eq.(24) into Eq.(20) gives:

$$\mathcal{J}(t) = \int_0^1 \frac{J(0,t) - \int_0^x \partial_t P(x',t)\,dx'}{P(x,t)}\,dx, \tag{25}$$

and by setting the right side to zero we arrive at

$$J(0,t) = \left(\int_0^1 \frac{dx}{P(x,t)}\right)^{-1}\int_0^1 \frac{\int_0^x \partial_t P(x',t)\,dx'}{P(x,t)}\,dx. \tag{26}$$

In other words, under the assumption of detailed balance, $P(x,t)$ uniquely determines $J(x,t)$, and then the two together determine the potential $U(x,t)$, by Eq.(23).
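The chain Eq.(24) → Eq.(26) → Eq.(23) can be sketched numerically. In the illustration below (ours, not from the paper), an arbitrary smooth, positive, normalized $P(x,t)$ is chosen, the detailed-balance current is recovered from continuity and the zero-mean-velocity condition, and the potential follows from Eq.(23):

```python
import numpy as np

# Given P(x,t), recover the unique detailed-balance current (Eqs.24, 26)
# and the generating potential (Eq.23). P(x,t) is an arbitrary choice.
N = 256
D, beta = 1.0, 1.0
x = np.arange(N) / N
dx = 1.0 / N

def P_of(t):
    return 1.0 + 0.3 * np.sin(2 * np.pi * (x - t))

def dPdt_of(t, eps=1e-6):
    # numerical time derivative of P (central difference in t)
    return (P_of(t + eps) - P_of(t - eps)) / (2 * eps)

def current_and_potential(t):
    P = P_of(t)
    cum = np.cumsum(dPdt_of(t)) * dx                # \int_0^x dP/dt dx'
    J0 = np.sum(cum / P) * dx / np.sum(dx / P)      # Eq.(26)
    J = J0 - cum                                    # Eq.(24)
    U = -np.cumsum(J / P) * dx / (beta * D) - np.log(P) / beta   # Eq.(23)
    return P, J, U

P, J, U = current_and_potential(0.3)
mean_local_velocity = np.sum(J / P) * dx   # the integral of Eq.(20); zero under detailed balance
```

By construction the spatially averaged local velocity vanishes at every instant, and the discrete current satisfies the continuity equation with the chosen $\partial_t P$.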

The next challenge is to choose a periodic $P^{ps}(x,t)$ such that (i) its time average is equal to $P^{ss}(x)$ (Eq.(19b)), and (ii) the corresponding time-averaged current is equal to $J^{ss}$ (Eq.(19a)). An explicit construction with these properties is shown in the next subsection.

### V.3 Constructing $U(x,t)$ to generate a desired $\overline{P^{ps}}(x)$ and $\overline{J^{ps}}$

We begin by defining a dimensionless time $s = t/T$, and we consider how both $\overline{P^{ps}}$ and $\overline{J^{ps}}$ scale with the total period of cycling, $T$, for a given choice of $P^{ps}(x,s)$. We obtain:

$$\overline{P^{ps}}(x) = \int_0^1 P^{ps}(x,s)\,ds, \tag{27}$$
$$\overline{J^{ps}} = \frac{1}{T}\int_0^1 J^{ps}(x,s)\,ds, \tag{28}$$

using Eqs.(24) and (26), with $\partial_s$ in place of $\partial_t$, to construct $J^{ps}$ from $P^{ps}$. Thus $\overline{P^{ps}}$ does not vary with $T$, while $\overline{J^{ps}}$ scales as $1/T$. Similarly, the time reversal of $P^{ps}(x,s)$, defined by $\tilde P^{ps}(x,s) \equiv P^{ps}(x,1-s)$, has the same temporal average as $P^{ps}$, but the corresponding averaged current has opposite sign: $\overline{\tilde P^{ps}}(x) = \overline{P^{ps}}(x)$ and $\overline{\tilde J^{ps}} = -\overline{J^{ps}}$. Therefore, to satisfy Eqs.(19a) and (19b), we can choose a probability distribution $P^{ps}(x,s)$ with the desired temporal average and a non-vanishing averaged current, and then match the magnitude of the averaged current by rescaling $T$ and its sign by time reversal. Lastly, given $P^{ps}(x,t)$ and $J^{ps}(x,t)$, we can use Eq.(23) to construct $U(x,t)$.

Importantly, the construction above leaves a lot of freedom: the only constraints on $P^{ps}(x,s)$ are its time average, positivity, smoothness and a non-vanishing averaged current. This freedom implies that there exist many periodic potentials generating the same time-averaged current and probability distribution. We now illustrate this procedure with a simple example.

#### V.3.1 An Example of a Protocol

Let us construct $U(x,t)$ that drives the time-averaged current and probability distribution

$$\overline{J^{ps}} = 1, \qquad \overline{P^{ps}}(x) = 1 + 0.5\sin(2\pi x). \tag{29}$$

As discussed above, $P^{ps}(x,s)$ can be chosen almost arbitrarily, provided it is positive, normalized, has the correct time average and carries a non-vanishing averaged current. The specific choice

$$P^{ps}(x,s) = 1 + 0.5\sin(2\pi x) + 0.1\sin(2\pi(s - x)) \tag{30}$$

gives the desired time average $\overline{P^{ps}}(x) = 1 + 0.5\sin(2\pi x)$. Eq.(24) implies in this case

$$J^{ps}(x,s) = \frac{1}{T}\left(J^{ps}(0,s) + 0.1\left[\sin(2\pi(s - x)) - \sin(2\pi s)\right]\right),$$

where the expression for $J^{ps}(0,s)$, obtained from Eq.(26), although analytical, is cumbersome and is not given explicitly. To match the target $\overline{J^{ps}} = 1$, we further fix $T$ accordingly. Figure 1 shows $P^{ps}(x,s)$ for this example, as well as the corresponding $J^{ps}(x,s)$ and $U(x,s)$; the latter was calculated numerically using Eq.(23), for specific values of $\beta$ and $D$.
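A numerical sketch of this example (ours, not from the paper) evaluates the choice of Eq.(30) on a grid, derives the detailed-balance current from Eqs.(24) and (26) in rescaled time, and checks the key properties: the time-averaged distribution matches the target of Eq.(29), the mean local velocity vanishes at every instant, and the time-averaged current is flat in $x$. The grid sizes are arbitrary.

```python
import numpy as np

# Example protocol of Eq.(30), worked out at T = 1; since the current
# scales as 1/T, the target averaged current is matched by rescaling T.
N, M = 256, 256                          # spatial / temporal grid points
x = np.arange(N) / N
s = np.arange(M) / M
dx = 1.0 / N

P = (1 + 0.5 * np.sin(2 * np.pi * x)[None, :]
       + 0.1 * np.sin(2 * np.pi * (s[:, None] - x[None, :])))
dPds = 0.2 * np.pi * np.cos(2 * np.pi * (s[:, None] - x[None, :]))  # d_s P

cum = np.cumsum(dPds, axis=1) * dx                             # \int_0^x d_s P dx'
J0 = (np.sum(cum / P, axis=1) * dx) / np.sum(dx / P, axis=1)   # Eq.(26) at each s
J = J0[:, None] - cum                                          # Eq.(24)

P_bar = P.mean(axis=0)                        # time-averaged distribution
J_bar = J.mean(axis=0)                        # time-averaged current profile
mean_local_velocity = np.sum(J / P, axis=1) * dx   # integral of Eq.(20) at each s
```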

## VI Optimal Driving Protocol

As we have seen, there is considerable freedom in constructing a protocol that drives a target $\overline{P^{ps}}(x)$ and $\overline{J^{ps}}$. Moreover, in Sec. III it was shown that the entropy production rate of a SP always exceeds that of a NESS, when both share the same time averaged probability distribution and current; see Eq.(12). It is therefore natural to look for the protocol that drives the target averages at the minimal entropy production cost. In other words, we would like to solve the following minimization problem:

$$\min_{U(x,t),\,T}\left[\frac{1}{T}\int_0^T \dot S^{ps}[U(x,t)]\,dt\right] \tag{31}$$

under the constraints:

$$\overline{P^{ps}[U(x,t)]}(x) = P^{\mathrm{target}}(x), \tag{32}$$
$$\overline{J^{ps}[U(x,t)]} = J^{\mathrm{target}}. \tag{33}$$

Solving the optimization problem directly is challenging, but unnecessary: it is possible to construct a specific protocol whose entropy production asymptotically approaches the bound. The construction of this protocol for a generic target is given in Appendix A. In this section a simple example of this construction is demonstrated.

Let us consider driving a given current, $\overline{J^{ps}} = J_0$, with a uniform time-averaged distribution, $\overline{P^{ps}}(x) = 1$. This example is of special interest since for a current $J_0$, the bound on $\overline{\dot S^{ps}}$, given by $\dot S^{ss}$ of a NESS with the same averaged probability and current, is minimal for a uniform probability distribution, as discussed in Sec. IV.2. To construct the driving, we consider a probability distribution of the form $P^{ps}(x,t) = f(x - ut)$ for a positive, normalized, non-uniform function $f$. In other words, we consider a probability distribution with a fixed shape that moves at a constant velocity $u$. The cycle time for this driving is $T = 1/u$, and the translational symmetry implies that the temporal average of $P^{ps}$ is uniform, $\overline{P^{ps}}(x) = 1$. In this case, the continuity condition in Eq.(24) implies that

$$J^{ps}(x,t) = J^{ps}(0,t) + u\left(f(x - ut) - f(-ut)\right), \tag{34}$$

where $J^{ps}(0,t)$ is set by the detailed balance condition, Eq.(26), to be

$$J^{ps}(0,t) = -u\left(\alpha - f(-ut)\right), \qquad \alpha \equiv \left[\int_0^1 \frac{dx}{f(x)}\right]^{-1} \in (0,1). \tag{35}$$

The averaged current is therefore given by

$$\overline{J^{ps}} = u(1 - \alpha). \tag{36}$$

The target current is set to be $\overline{J^{ps}} = J_0$, which gives us

$$u = \frac{J_0}{1 - \alpha}. \tag{37}$$

Substituting the above results into Eq.(6), we get after some straightforward algebra:

$$\dot S^{ps} = \frac{J_0^2}{D(1 - \alpha)}, \tag{38}$$

which – as one might have guessed from translation symmetry – does not depend on time. We see that $\dot S^{ps}$ is minimal when the integral $\int_0^1 dx/f(x)$ is maximal. But this integral is not bounded from above: in the limit in which $f$ approaches a narrow, delta-like profile, the integral diverges, and we then get $\alpha \to 0$, hence $\dot S^{ps} \to J_0^2/D$. Comparing with Eq.(18) we see that, in this limit, the entropy production of the periodically driven state approaches the bound set by the corresponding steady-state value.
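The traveling-profile protocol can be sketched numerically (an illustration of ours; the von Mises-like profile is an arbitrary choice). The instantaneous entropy production of Eq.(6) reproduces Eq.(38), and sharpening the profile drives it toward the NESS bound $J_0^2/D$ of Eq.(18):

```python
import numpy as np

# Traveling profile P(x,t) = f(x - ut) with f ~ exp(kappa cos(2 pi x)).
# Larger kappa means a sharper profile, smaller alpha, lower entropy cost.
N = 4096
D, J0 = 1.0, 1.0
x = np.arange(N) / N
dx = 1.0 / N

def entropy_rate(kappa):
    """Return (S from Eq.(6), S from Eq.(38), alpha) for the given sharpness."""
    f = np.exp(kappa * np.cos(2 * np.pi * x))
    f /= f.sum() * dx                         # normalize \int f = 1
    alpha = 1.0 / np.sum(dx / f)              # Eq.(35)
    u = J0 / (1.0 - alpha)                    # Eq.(37)
    J = u * (f - alpha)                       # Eqs.(34)-(35) at t = 0
    S_numeric = np.sum(J**2 / (D * f)) * dx   # Eq.(6)
    S_predicted = J0**2 / (D * (1.0 - alpha)) # Eq.(38)
    return S_numeric, S_predicted, alpha

S_broad, S_broad_pred, alpha_broad = entropy_rate(3.0)
S_sharp, S_sharp_pred, alpha_sharp = entropy_rate(12.0)
```

The sharp profile's entropy production rate sits just above the steady-state bound, illustrating how the gap in Eq.(12) can be made arbitrarily small.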

## VII Discussion

In this work we discussed similarities and differences between two types of driving that maintain a diffusive system on a ring out of equilibrium: periodic variations of a potential along the ring, and static driving that breaks the detailed balance condition. We have shown that both scenarios can drive any averaged current and probability distribution, but, in contrast to discrete state Markovian systems, there is no full control over the averaged entropy production. Moreover, it was shown that the averaged entropy production of a steady-state driving is smaller than that of a system driven by periodic changes in the potential that achieves the same averaged current and probability distribution. In terms of applications, this implies that the common driving in biological molecular motors – burning fuel and reaching a steady state – is more efficient than the common driving of artificial molecular motors, namely periodic variation of external parameters. This result differs from what was obtained in a coarse-grained description of the same system – discrete state Markovian modeling.

Many important aspects were not discussed in this work and could be subjects of future investigations. These include mappings between NESS and SP that match other features (e.g. heat or work in heat engines Raz et al. (2016b), current fluctuations Rotskoff (2017) or entropy production fluctuations Ray and Barato (2017)), as well as comparisons of these two types of driving to other non-equilibrium scenarios.

## VIII Acknowledgments

We thank R. Zia for pointing out the physical interpretation of the integral in Eq.(20). O.R. is supported by a research grant from Mr. and Mrs. Dan Kane and the Abramson Family Center for Young Scientists. C.J. acknowledges financial support from the U.S. Army Research Office under contract number W911NF-13-1-0390.

## Appendix A Minimizing Entropy Production for Non-Uniform $\overline{P^{ps}}(x)$

In Sec. VI we analyzed a specific example in which the entropy production rate of a stochastic pump can get arbitrarily close to that of a NESS with the same time averaged current and probability distribution. In this appendix we generalize this construction to arbitrary targets $\overline{P^{ps}}(x)$ and $\overline{J^{ps}}$.

Analogously to the construction in Sec. VI, we choose the probability distribution to be a translating profile, $P^{ps}(x,t) = f(x - x_0(t))$, where $x_0(t)$ changes monotonically from $0$ to $1$ over the interval $0 \le t \le T$. We first show that by appropriately choosing $x_0(t)$ we can construct the target time averaged probability:

$$\overline{P^{ps}}(x) = \frac{1}{T}\int_0^T f(x - x_0(t))\,dt = \frac{1}{T}\,f * \frac{1}{\dot x_0}, \tag{39}$$

where $*$ denotes the convolution of the functions $f(x)$ and $1/\dot x_0(x)$, and $\dot x_0(x)$ is the velocity expressed as a function of position. To gain some intuition, consider the example $f(x) = \delta(x)$. By controlling the speed at which this delta function moves across each point we can manipulate the time averaged probability at that point. Specifically, for this example Eq.(39) gives $\overline{P^{ps}}(x) = [T\,\dot x_0(x)]^{-1}$. More generally, applying the convolution theorem of Fourier transforms to Eq.(39), we obtain

$$\frac{1}{\dot x_0(x)} = T\sum_n e^{i2\pi n x}\,\frac{\overline{P^{ps}}_n}{f_n}, \tag{40}$$

where $\overline{P^{ps}}_n$ and $f_n$ are the $n$'th discrete Fourier coefficients of $\overline{P^{ps}}(x)$ and $f(x)$. Note that the above equation shows that not every $f$ can serve for our construction – for example, if $f_n$ vanishes for some $n$ at which $\overline{P^{ps}}_n \neq 0$, then the corresponding $1/\dot x_0$ diverges. This can be intuitively understood by considering the extreme scenario: if $f(x) = 1$, then one cannot match an arbitrary probability distribution by averaging over translated versions of $f$. Nevertheless, given any $\overline{P^{ps}}(x)$, one can always choose an appropriate $f$ which is narrow enough such that the right hand side of Eq.(40) is strictly positive, as is evident from the delta-function example above.

From the function $\dot x_0(x)$, we construct $t(x_0) = \int_0^{x_0} dx/\dot x_0(x)$, and then invert to obtain $x_0(t)$.

Next, let us consider the current. By Eq.(24),

$$J^{ps}(x,t) = J^{ps}(0,t) + \dot x_0(t)\left[f(x - x_0(t)) - f(-x_0(t))\right]. \tag{41}$$

For the detailed balance condition to hold, Eq.(26) implies that

$$J^{ps}(0,t) = \dot x_0(t)\left[f(-x_0(t)) - \alpha\right], \tag{42}$$

where, as in Eq.(35),

$$\alpha \equiv \left[\int_0^1 \frac{dx}{f(x)}\right]^{-1}. \tag{43}$$

Eqs.(41) and (42) then give us

$$J^{ps}(x,t) = \dot x_0(t)\left[f(x - x_0(t)) - \alpha\right]. \tag{44}$$

Let us set the target time averaged current to be $\overline{J^{ps}} = J_0$, for arbitrary $J_0$. With this choice the cycle time $T$ solves the equation

$$J_0 = \frac{1}{T}\int_0^T J^{ps}(x,t)\,dt = \frac{1 - \alpha}{T}, \tag{45}$$

where in the last equality we changed the variable of integration from $t$ to $x_0$.

Lastly, substituting Eq.(44) into Eq.(6), it can be shown that the entropy production rate at each instant is given by

 \dot{S}^{ps}(t) = \frac{\dot{x}_0^2(t)}{D}\,(1 - \alpha). \qquad (46)

The time averaged entropy production rate is therefore given by

 \bar{\dot{S}}^{ps} = \frac{1-\alpha}{TD}\int_0^T \dot{x}_0^2\, dt = \frac{J_0}{D}\int_0^1 \dot{x}_0(x_0)\, dx_0, \qquad (47)

using Eq.(45). In the limit $f(x) \to \delta(x)$, Eqs.(39) and (43) give us

 \dot{x}_0(x) \to \left[T\, \bar{P}^{ps}(x)\right]^{-1}, \qquad \alpha \to 0, \qquad (48)

hence $T \to J_0^{-1}$ and

 \bar{\dot{S}}^{ps} \to \frac{J_0}{DT}\int_0^1 \left[\bar{P}^{ps}(x)\right]^{-1} dx \to \frac{\left(\bar{J}^{ps}\right)^2}{D}\int_0^1 \left[\bar{P}^{ps}(x)\right]^{-1} dx, \qquad (49)

which is the bound on the entropy production rate of a periodically driven system with the corresponding time averaged current and probability (Eq.(13)).
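The approach to this bound can be illustrated numerically. In the sketch below (the target distribution, the wrapped-Gaussian profile widths, and the values of $J_0$ and $D$ are all illustrative assumptions), the entropy production rate of Eq.(47) is computed for profiles of decreasing width and compared with the right hand side of Eq.(13):

```python
import numpy as np

# Delta-function limit: as the profile f narrows, the entropy production
# rate of Eq. (47) approaches the bound (J^2/D) * integral of 1/P_bar.

N = 20000
x = np.arange(N) / N                        # uniform periodic grid
P_bar = 1.0 + 0.3 * np.cos(2 * np.pi * x)   # target time-averaged P
J0, D = 1.0, 1.0
bound = (J0**2 / D) * np.mean(1.0 / P_bar)  # right hand side of Eq. (13)

def entropy_rate(sigma):
    # wrapped Gaussian f, normalized so its integral over [0, 1) is 1
    f = sum(np.exp(-(x - k) ** 2 / (2 * sigma**2)) for k in (-1, 0, 1, 2))
    f /= f.mean()
    alpha = 1.0 / np.mean(1.0 / f)          # Eq. (43)
    T = (1.0 - alpha) / J0                  # cycle time from Eq. (45)
    # Eq. (40) for this single-mode target: 1/x0_dot = T*(1 + (0.3/f_1)*cos),
    # using the wrapped-Gaussian coefficient f_1 = exp(-2 pi^2 sigma^2)
    f1 = np.exp(-2 * np.pi**2 * sigma**2)
    inv_speed = T * (1.0 + (0.3 / f1) * np.cos(2 * np.pi * x))
    # Eq. (47): time-averaged rate (J0/D) * integral of x0_dot over x0
    return (J0 / D) * np.mean(1.0 / inv_speed)

S_wide, S_narrow = entropy_rate(0.05), entropy_rate(0.02)
print(S_wide - bound, S_narrow - bound)     # both positive; narrower f closer
```

Both widths give an entropy production rate above the bound, with the excess shrinking as the profile narrows, consistent with the claim that the difference can be tuned to be arbitrarily small.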
