
# Efficiency bounds for nonequilibrium heat engines

Pankaj Mehta and Anatoli Polkovnikov Department of Physics, Boston University, 590 Commonwealth Ave., Boston, MA 02215
e-mail: asp@bu.edu; tel: 617-358-4394; fax: 617-353-9393
###### Abstract

We analyze the efficiency of thermal engines (either quantum or classical) working with a single heat reservoir, such as the atmosphere. The engine first receives an energy intake, which can be deposited in an arbitrary nonequilibrium way, e.g., by combustion of fuel. The engine then performs work and returns to its initial state. We distinguish two general classes of engines: those in which the working body first equilibrates within itself and then performs work (ergodic engines) and those which perform work before equilibrating (non-ergodic engines). We show that in both cases the second law of thermodynamics limits their efficiency. For ergodic engines we find a rigorous upper bound for the efficiency, which is strictly smaller than the equivalent Carnot efficiency; that is, the Carnot efficiency can never be achieved in single-reservoir heat engines. For non-ergodic engines the efficiency can be higher and can exceed the equilibrium Carnot bound. By extending the fundamental thermodynamic relation to nonequilibrium processes, we find a rigorous thermodynamic bound for the efficiency of both ergodic and non-ergodic engines and show that it is given by the relative entropy of the nonequilibrium and initial equilibrium distributions. These results suggest a new general strategy for designing more efficient engines. We illustrate our ideas using simple examples.

nonequilibrium statistical mechanics — heat engines — Carnot efficiency

Heat engines are systems that convert heat or thermal energy into macroscopic work. They play a major role in modern technology and are crucial to our understanding of thermodynamics reif1965fundamentals (); fermi1936thermodynamics (). Examples of heat engines include conventional combustion engines such as those found in cars and airplanes, various light-emitting devices, as well as naturally occurring engines such as molecular motors magnasco1993forced (). A conventional heat engine consists of two heat reservoirs: a hot reservoir that serves as a source of energy and a cold reservoir that serves as an entropy sink. The efficiency of such engines is fundamentally limited by the second law of thermodynamics, which provides an upper bound given by the efficiency of a Carnot engine operating at the same temperatures reif1965fundamentals (),

 η_c = 1 − T_c/T_h, (1)

with T_c and T_h the temperatures of the cold and hot reservoirs, respectively.

Real engines often differ significantly from the idealized, two-reservoir engines considered in classical thermodynamics. They operate with a single bath, such as the atmosphere, that serves as an entropy sink. Instead of a high-temperature bath, energy is suddenly deposited in the system at the beginning of each cycle and is converted into mechanical work. The most common examples are combustion engines such as those found in cars, where energy is deposited in the system through the combustion of a fuel. Currently, the most realistic models describing combustion engines are based on the Otto cycle reif1965fundamentals (), with a corresponding efficiency which is less than the Carnot efficiency with appropriately chosen temperatures T_c and T_h. One can ask some natural questions: is the Carnot efficiency a good bound for the efficiencies of such single-reservoir engines, or are these engines better described by a different bound? Are there realistic processes that allow one to realize these bounds? Can we overcome the thermodynamic bounds if we use engines which are not completely ergodic?

To address these questions, we generalize the fundamental relations of thermodynamics to describe large, nonequilibrium quenches in systems coupled to a thermal bath. We use these relations to derive new bounds for the efficiency of nonequilibrium engines that operate with a single bath. We analyze our bounds in two different regimes: a local equilibrium regime, where the system quickly thermalizes with itself (but not the bath), and a non-ergodic regime, where the thermalization times are much longer than the time scales on which work is performed. We demonstrate our results using simple examples such as an ideal gas that drives a piston and a magnetic gas engine.

The paper is organized as follows. In Sec. I we formulate generalized thermodynamic identities, which extend the fundamental thermodynamic relations to arbitrary nonequilibrium processes, and introduce the notion of the relative entropy (or Kullback-Leibler divergence). In Sec. II we apply these results to find the maximum efficiency of nonequilibrium engines. We separately discuss bounds for ergodic (equilibrium) engines, non-ergodic (nonequilibrium) incoherent engines, and non-ergodic coherent engines. Then in Sec. III we illustrate our results using simple examples and show that non-ergodic engines can indeed have higher efficiency than ergodic ones. In Sec. IV we give a rigorous derivation of the thermodynamic identities of the paper. Finally, in Sec. V we give the details of the derivation of the efficiencies of the ergodic and non-ergodic engines.

## I Generalized Thermodynamic Identities

Most applications of thermodynamics are connected to the fundamental thermodynamic relation kardar2007statistical ()

 dE = T dS − F dλ, (2)

where E is the energy of the system, T is the temperature, S is the entropy, λ is some external macroscopic parameter, and F is the generalized force. When λ is the volume, F stands for the pressure and the fundamental relation takes its most familiar form, dE = T dS − P dV. The fundamental relation mathematically encodes the fact that the energy of a system in equilibrium is a unique function of the entropy and external parameters. For quasistatic processes, one can associate the first term with the supplied heat and the second term with the work done on the system by changing the parameter λ. The fundamental relation can also be integrated for quasistatic processes, and one can explicitly compute the total work, heat, etc. However, how to generalize these calculations to strongly nonequilibrium processes, where changes in energy, entropy, etc. can be large, is still largely an open question. Using the second law of thermodynamics, one can prove various inequalities. In particular, if we prepare a system in a thermal state with temperature T_A and let it equilibrate with a bath at temperature T, then the second law of thermodynamics implies two related inequalities (see Sec. IV):

 T_A ΔS_A − ΔE_A ≤ 0,  T ΔS_A − ΔE_A ≥ 0. (3)

The first inequality is also applicable to the case where energy is deposited in the system in a nonequilibrium fashion, for example by an external energy pulse (then T_A is the initial temperature of the system), and the second inequality describes the relaxation of a system back to equilibrium. It implies that the free energy of the system can only go down during the relaxation kardar2007statistical ().
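Both inequalities are easy to verify directly for a small system. The following minimal sketch (our own illustration, not from the text; a two-level system with splitting ε = 1 in units k_B = 1) checks them for a relaxation from an initial temperature T_A = 2 to a bath at T = 1:

```python
import math

def thermal_state(eps, T):
    """Occupations, energy, and entropy of a two-level system (levels 0 and eps)."""
    z = 1.0 + math.exp(-eps / T)
    p = [1.0 / z, math.exp(-eps / T) / z]
    E = p[1] * eps
    S = -sum(x * math.log(x) for x in p)
    return E, S

eps, T_A, T = 1.0, 2.0, 1.0          # system prepared at T_A, bath at T
E_A, S_A = thermal_state(eps, T_A)
E_B, S_B = thermal_state(eps, T)
dE, dS = E_B - E_A, S_B - S_A        # changes during relaxation to the bath

# Inequalities (3): T_A*dS - dE <= 0 and T*dS - dE >= 0
print(T_A * dS - dE, T * dS - dE)
```

The two printed numbers are non-positive and non-negative, respectively, for any choice of the two temperatures; the sketch merely makes the inequalities concrete for one parameter set.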

In this work we establish two main, closely related results, which refine the inequalities (3) to arbitrary nonequilibrium protocols using the concept of relative entropy. Relative entropy, or the Kullback-Leibler divergence, is well known in information theory cover1991elements (); mackay2003information () and appears naturally in statistical mechanics within the context of large deviation theory touchette2009large (). In deriving our results, we will use “quantum” notations and restrict ourselves to discrete probability distributions. Our results equally apply to classical systems with continuous probability distributions and can be derived from the corresponding “quantum” results by multiplying all distributions by an appropriately chosen density of states (see Ref. [bunin2011universal, ] and Sec. IV). These general results, valid for both quantum and classical systems, are closely related to those recently obtained by S. Deffner and E. Lutz deffner2010generalized (); deffnerlutz2011 () for quantum systems but deviate in a way that is crucial to our discussion (see Sec. IV for details).

Consider a system with external parameter λ and a λ-dependent energy spectrum E_n(λ) which is coupled to a thermal bath at temperature T. We assume that the bath is insensitive to the parameter λ (Figure 1). Initially, the system is prepared in equilibrium with the bath and is described by a Boltzmann distribution of the form

 p_n^{(1)} = exp[−E_n^{(1)}/T]/Z_1, with Z_1 = Σ_n e^{−βE_n^{(1)}}.

In stage I, the system undergoes an arbitrary process where λ is changed from λ_1 to λ_2, resulting in a new nonequilibrium state characterized by some, generally nonequilibrium, probabilities q_n of occupying the energy eigenstates. We do not assume that during this process the system is thermally isolated. Then, in stage II, the system re-equilibrates with the bath, eventually reaching a new Boltzmann distribution with λ = λ_2,

 p_n^{(2)} = exp[−E_n^{(2)}/T]/Z_2, with Z_2 = Σ_n e^{−βE_n^{(2)}}.

During stage I, the total change in energy of the system can be divided into two parts, adiabatic work, W_I^{ad}, and heat, Q_I,

 ΔE_I = W_I^{ad} + Q_I. (4)

Adiabatic work is defined as the change in energy that would result from adiabatically changing the parameters from λ_1 to λ_2. Physically, it measures changes in total energy stemming from the parameter dependence of the energy spectrum (potential energy). By definition, the heat is the remaining contribution to the change in energy fermi1936thermodynamics (). Thus, in our language heat includes both the non-adiabatic part of the work and the conventional thermodynamic heat. The heat generated during process I can be explicitly calculated (see Sec. IV):

 Q_I = TΔS_I + T S_r(q||p^{(2)}) − T S_r(p^{(1)}||p^{(2)}),  ΔS_I = S(q) − S(p^{(1)}), (5)

where S(q) is the diagonal entropy of the probability distribution q polkovnikov2011microscopic () and

 S_r(q||p) ≡ Σ_n q_n log(q_n/p_n)

is the relative entropy between the distributions q and p. We have shown previously that for large ergodic systems the diagonal entropy is equivalent to the usual thermodynamic entropy polkovnikov2011microscopic (). Note that for a cyclic process, where λ_1 = λ_2, the last term in Eq. (5) vanishes since p^{(1)} = p^{(2)}. During process II, the system re-equilibrates with the bath by exchanging heat, Q_II, with the reservoir. One can show that (see Sec. IV)

 Q_II = TΔS_II − T S_r(q||p^{(2)}),  ΔS_II = S(p^{(2)}) − S(q). (6)

The importance of relative entropy for describing the relaxation of nonequilibrium distributions has been discussed previously for different setups, both in quantum and classical systems levine1978information (); schlogl1980 (); deffnerlutz2011 (); qian2001relative (); mehta2008nonequilibrium (). Taken together, (4), (5), and (6) constitute the nonequilibrium identities that will be exploited next to calculate bounds for the efficiency of engines that operate with a single heat bath.
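Since Eqs. (4)-(6) are exact algebraic identities for any discrete spectrum, they can be verified numerically. The sketch below (a hypothetical four-level system with arbitrarily chosen spectra and nonequilibrium occupations q, in arbitrary units) checks Eqs. (5) and (6) term by term:

```python
import math, random

random.seed(0)
T = 1.0
E1 = sorted(random.uniform(0, 2) for _ in range(4))   # spectrum at lambda_1
E2 = sorted(random.uniform(0, 2) for _ in range(4))   # spectrum at lambda_2

def gibbs(E, T):
    w = [math.exp(-e / T) for e in E]
    z = sum(w)
    return [x / z for x in w]

def S(p):                      # diagonal (Shannon) entropy
    return -sum(x * math.log(x) for x in p)

def S_r(q, p):                 # relative entropy S_r(q||p)
    return sum(a * math.log(a / b) for a, b in zip(q, p))

p1, p2 = gibbs(E1, T), gibbs(E2, T)
q = [x + 0.05 * (i - 1.5) for i, x in enumerate(p1)]  # some nonequilibrium state
q = [x / sum(q) for x in q]

W_ad = sum(a * (e2 - e1) for a, e1, e2 in zip(p1, E1, E2))   # adiabatic work, Eq. (4)
Q_I = sum(a * e for a, e in zip(q, E2)) - sum(a * e for a, e in zip(p1, E1)) - W_ad
Q_II = sum((b - a) * e for a, b, e in zip(q, p2, E2))        # heat released in stage II

# right-hand sides of Eq. (5) and Eq. (6):
lhs_I = T * (S(q) - S(p1)) + T * S_r(q, p2) - T * S_r(p1, p2)
lhs_II = T * (S(p2) - S(q)) - T * S_r(q, p2)
print(abs(Q_I - lhs_I), abs(Q_II - lhs_II))   # both vanish to machine precision
```

The identities hold for any admissible q, not just the perturbed Gibbs state chosen here.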

## II Maximum Efficiency of Engines

Figure 2 summarizes the single-reservoir engines analyzed in this work and compares them with Carnot engines (1). The engine is initially in equilibrium with the environment (bath) at a temperature T_0 and is described by the equilibrium probability distribution p. In the first stage, excess energy, ΔQ, is suddenly deposited into the system. This can be a pulse of an electromagnetic wave, a burst of gasoline, a current discharge, etc. In the second stage, the engine converts the excess energy into work and reaches mechanical equilibrium with the bath. Finally, the system relaxes back to the initial equilibrium state. Of course, splitting the cycle into three stages is rather schematic, but it is convenient for analyzing the work of the engine. Such an engine will only work if the relaxation time of the system and environment is slow compared to the time required to perform the work. Otherwise, the energy will simply be dissipated to the environment and no work will be done (see discussion in Ref. [LL5, ]).

The initial injection of energy, ΔQ, results in a corresponding entropy increase of the system, ΔS_I = S(q) − S(p), where S is the diagonal entropy and q describes the system immediately after the addition of energy. Because by assumption the environment is not affected during this initial stage, the total entropy change of the system and environment is also just ΔS_I. By the end of the cycle, the entropy of the system returns to its initial value. Thus, from the second law of thermodynamics, the increase in entropy of the environment must be greater than or equal to ΔS_I. This implies that the minimal amount of heat that must be dissipated into the environment during the cycle is T_0 ΔS_I. An engine will work optimally if no extra entropy beyond ΔS_I is produced during the system-bath relaxation, since then all of the remaining energy injected into the system is converted to work. Thus, the maximal work that can be performed by the engine during a cycle is W_m = ΔQ − T_0 ΔS_I. For a cyclic process such as the one considered here, substituting (5) into the expression for W_m implies that the maximum efficiency of a nonequilibrium engine, η^m_{ne}, is given by

 η^m_{ne} = W_m/ΔQ = 1 − T_0 ΔS_I/ΔQ = T_0 S_r(q||p)/ΔQ. (7)

Equation (7) is the main result of this paper. It relates the maximum efficiency of an engine to the relative entropy of the intermediate nonequilibrium distribution and the equilibrium distribution. We next consider various limits and applications of this result. We point out that Eq. (5) also allows us to extend the maximum efficiency bound to a more general class of engines, like Otto engines, where during the first stage of the cycle one simultaneously changes the external parameter from λ_1 to λ_2. In this case,

 η^m_{ne} = T_0 [S_r(q||p^{(2)}) − S_r(p^{(1)}||p^{(2)})]/ΔQ, (8)

where p^{(1)} and p^{(2)} stand for the equilibrium Gibbs distributions corresponding to the parameters λ_1 and λ_2 at the beginning and the end of process I, respectively. Since the second term is negative, changing the external parameter during the first stage can only reduce the engine efficiency, though this may be desirable for other practical reasons unrelated to thermodynamics.

### II.1 Efficiency of Ergodic Engines

An important special case of our bound is the limit where the relaxation of particles within the engine is fast compared to the time scale on which the engine performs work (see Figure 2). This is the normal situation in mechanical engines based on compressing gases and liquids. In this case, after the injection of energy the particles in the engine quickly thermalize and can be described by a gas at an effective temperature that depends on the energy of the gas. It is shown in Sec. V that in this case (7) reduces to

 η^m_t = 1 − T_0 ΔS_I/ΔQ = (1/ΔQ) ∫_E^{E+ΔQ} dE′ (1 − T_0/T(E′)). (9)

By definition, η^m_t is the true upper bound for the thermal efficiency of a single-reservoir engine.

It is easy to see that η^m_t is the integrated Carnot efficiency and thus is always smaller than the Carnot efficiency corresponding to the same heating (see Fig. 2). This efficiency bound becomes very simple for ideal gases, where T(E) ∝ E. Assuming that in the beginning of the cycle the system is in equilibrium with the environment, one finds that the maximal efficiency of an equilibrium engine that thermalizes is

 η^m_t = 1 − (1/τ) log(1+τ), (10)

where τ = ΔQ/E. For comparison, the equivalent Carnot efficiency is

 η_c = τ/(τ+1). (11)

It is interesting that the result for η^m_t is valid for arbitrary ideal gases and does not depend on dimensionality, the type of dispersion (linear, quadratic, etc.), or the number of internal degrees of freedom. It is also valid for mixtures of ideal gases with different masses and dispersion relations. The expression (10) can be extended to situations where the initial temperature of the engine is different from that of the environment (see Sec. V).
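The two bounds (10) and (11) are easy to compare numerically; the following sketch simply tabulates both expressions over a range of τ = ΔQ/E:

```python
import math

def eta_mt(tau):     # Eq. (10): single-reservoir ergodic bound
    return 1.0 - math.log(1.0 + tau) / tau

def eta_c(tau):      # Eq. (11): equivalent Carnot efficiency
    return tau / (tau + 1.0)

for tau in (0.5, 1.0, 2.0, 5.0):
    print(f"tau={tau:4.1f}  eta_mt={eta_mt(tau):.3f}  eta_c={eta_c(tau):.3f}")
```

For every τ the ergodic bound lies strictly below the Carnot value; at τ = 1 they are 1 − log 2 ≈ 0.307 and 0.5, respectively.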

### II.2 Higher efficiency bound for non-ergodic distributions

Another interesting limit is when the full thermalization time of the system is long compared to the time required to perform the work. We call engines that work in this parameter regime non-ergodic engines. This situation can be realized in small systems, in integrable or nearly integrable systems with additional conservation laws, or in systems where different degrees of freedom are weakly coupled, e.g., kinetic and spin degrees of freedom of molecules, or electrons and phonons in metals and semiconductors. In such systems the process of relaxation typically occurs in two stages. The system first undergoes a fast relaxation to a quasi-steady-state, prethermalized distribution. Subsequently, the system very slowly relaxes to the true equilibrium distribution. The notion of the prethermalization mechanism was first suggested in the context of cosmology berges2004prethermalization (). Since then, it has been confirmed to occur both experimentally and theoretically in many physical situations, including one- and two-dimensional turbulence gurarie1995probability (), weakly interacting fermions moeckel2010crossover (), and quenches in low-dimensional superfluids gring_12 () (see Ref. [polkovnikov2011colloquium, ] for additional examples).

Prethermalization is well known from standard thermodynamics, where two or more weakly coupled systems first quickly relax to local equilibrium states and then slowly equilibrate with each other. From a microscopic point of view, prethermalization is equivalent to dephasing with respect to a fast Hamiltonian, where the density matrix effectively becomes diagonal with respect to the eigenstates of that Hamiltonian. It was also recently realized that thermalization can be understood as dephasing with respect to the full Hamiltonian of the system through the eigenstate thermalization hypothesis deutsch1991quantum (); srednicki1994chaos (); rigol2008thermalization (). In the language of the kinetic theory of weakly interacting particles, prethermalization implies a fast loss of coherence between particles governed by the noninteracting Hamiltonian, followed by a much slower relaxation of the nonequilibrium distribution function to the Boltzmann form due to small interactions.

The efficiency of a non-ergodic engine is given by (7), with q now representing the prethermalized distribution. A simple minimization shows that the numerator of (7) for a fixed energy increase has a minimum precisely for the Gibbs distribution (see Sec. V). Thus, any nonequilibrium state can only increase the maximum possible efficiency of the engine. Alternatively, this statement can be understood from the fact that the Gibbs distribution maximizes the entropy for a given energy jaynes1957information (). Thus, for thermalizing engines the unavoidable amount of heating of the environment is maximal. Finally, notice that the first equality in (7) implies that the maximum value of the efficiency, which is unity, is achieved for a process in which the prethermalized nonequilibrium state has the same diagonal entropy as the initial state, i.e., where the probabilities q_n are permutations of the probabilities p_n. Thus, in principle, it is possible to create a non-ergodic heat engine with efficiency arbitrarily close to unity even if it is incoherent. We discuss an example of such an engine in the next section (see Fig. 3 and related discussion).
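The permutation argument can be made concrete with a small numerical example. In the sketch below (a three-level system, arbitrary units), q is a cyclic permutation of the Gibbs distribution p, so ΔS_I = 0 and the deposited energy satisfies ΔQ = T_0 S_r(q||p), giving unit efficiency in Eq. (7):

```python
import math

T0 = 1.0
E = [0.0, 1.0, 2.0]
z = sum(math.exp(-e / T0) for e in E)
p = [math.exp(-e / T0) / z for e in E]      # initial Gibbs distribution
q = [p[2], p[0], p[1]]                      # a permutation of p: same entropy

S = lambda r: -sum(x * math.log(x) for x in r)          # diagonal entropy
S_r = sum(a * math.log(a / b) for a, b in zip(q, p))    # relative entropy
dQ = sum((a - b) * e for a, b, e in zip(q, p, E))       # deposited energy

print(S(q) - S(p), dQ, T0 * S_r)   # entropy change vanishes; dQ equals T0*S_r
```

Because ΔS_I = 0, no heat need be dumped into the bath, and the full ΔQ is in principle extractable as work.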

### II.3 Maximum efficiency of coherent non-ergodic engines

Finally, we briefly discuss the efficiency bound for coherent engines, which preserve coherence between particles while performing macroscopic work. Such engines are sensitive not only to conserved or approximately conserved quantities (like the energy and velocity distribution for a weakly interacting gas) but also to non-conserved degrees of freedom (like precise positions of particles at a given moment of time). In practice, such engines can be realized only for very small, non-interacting systems with long coherence times or in systems where some macroscopic degrees of freedom are decoupled from the rest, e.g., the center-of-mass motion in solids. Such engines have the highest efficiency bound, still given by Eq. (7), but with S_r standing for the full relative entropy of the nonequilibrium density matrix with respect to the equilibrium one (see Sec. IV):

 S^{vn}_r(ρ_q||ρ_p) = Tr[ρ_q(log ρ_q − log ρ_p)]. (12)

We will not further discuss such engines, since they are not thermal. We only point out that this bound, based on the relative von Neumann entropy, explains why mechanical engines can have arbitrarily high efficiency. Indeed, the von Neumann entropy of a system of particles does not change if they start moving collectively, implying that the bound given by Eq. (7) can reach unity.

## III Some simple examples

### III.1 Ideal Gas Engine

#### III.1.1 Ergodic engine

Let us start with the simplest ideal gas single-reservoir engine, which pushes a piston. The engine undergoes the Lenoir cycle illustrated in Figure 2. First, a pulse of energy is deposited into the gas via, e.g., a gasoline burst. The gas immediately thermalizes at a new temperature corresponding to the added energy and a new pressure. Then the gas undergoes adiabatic expansion, pushing the piston and performing work until the pressure drops to the atmospheric value; finally, the system relaxes back to the initial state at constant pressure as the atmosphere pushes the piston back. In practice, engines based on the Lenoir cycle are not very efficient for reasons unrelated to thermodynamics. We use this cycle to illustrate our results because it is conceptually the simplest single-heat-reservoir engine.

The Lenoir cycle consists of three processes. Initially, the gas has pressure P_0, volume V_0, and temperature T_0. Next, energy is injected at constant volume, so the effective temperature and pressure must rise. After the energy deposition, the system is described by pressure P_H, volume V_0, and temperature T_H. The system then performs work by adiabatically expanding until the pressure equalizes. The system is then described by pressure P_0, volume V_∗, and temperature T_∗. Finally, the system relaxes back to the initial state, with temperature and volume dropping back to T_0 and V_0 at constant pressure P_0. To calculate the efficiency, we calculate the work the system performs and divide by the total heat added:

 η = W/ΔQ. (13)

Denote the heat capacity ratio of the ideal gas by γ. It is related to the number of degrees of freedom per molecule, f, by γ = 1 + 2/f. We can write

 ΔQ = (f/2) nR(T_H − T_0) = nR(T_H − T_0)/(γ−1). (14)

By definition, the work is

 W = ∫_{V_0}^{V_∗} (P(V) − P_0) dV. (15)

To calculate the work during the adiabatic expansion, we use the fact that for an adiabatic process the product PV^γ is constant. Thus, we can rewrite the equation above as

 W = P_0 V_∗^γ ∫_{V_0}^{V_∗} dV/V^γ − P_0(V_∗ − V_0). (16)

Explicitly performing the integral yields,

 W = (P_0 V_∗/(γ−1)) ((V_∗/V_0)^{γ−1} − 1) − P_0(V_∗ − V_0). (18)

We now use the relation P_H V_0^γ = P_0 V_∗^γ and the ideal gas law to find V_∗ = V_0 (1+τ)^{1/γ}, where τ = ΔQ/E = (T_H − T_0)/T_0. Then

 W = P_0 V_0 [τ/(γ−1) − (γ/(γ−1))((1+τ)^{1/γ} − 1)]. (19)

Finally, rewriting Eq. (14) as ΔQ = P_0 V_0 τ/(γ−1), we find:

 η_γ = 1 − (γ/τ)((1+τ)^{1/γ} − 1). (20)

The efficiency η_γ is bounded by the maximum thermodynamic efficiency (10), as it should be, approaching this bound as γ → ∞; for γ → 1 (the minimal possible value) the maximum efficiency goes to zero. For a monoatomic gas we have γ = 5/3, and the corresponding efficiency is plotted in Fig. 2. For the typical value τ = 1, where the temperature increases by a factor of 2 during the pulse, for a monoatomic gas we find η_γ ≈ 0.14 and η^m_t ≈ 0.31, i.e., the efficiency of such an engine is significantly below the thermodynamic bound (which in turn is considerably less than the Carnot bound η_c = 0.5). For τ = 2, i.e., when the temperature jumps by a factor of three, the situation is somewhat better: η_γ ≈ 0.22 while η^m_t ≈ 0.45. For more complicated molecules, with γ closer to one, the efficiency is even lower.
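These numbers follow directly from Eqs. (20) and (10); a quick numerical check for a monoatomic gas, γ = 5/3:

```python
import math

def eta_lenoir(tau, gamma):          # Eq. (20)
    return 1.0 - (gamma / tau) * ((1.0 + tau) ** (1.0 / gamma) - 1.0)

def eta_mt(tau):                     # thermodynamic bound, Eq. (10)
    return 1.0 - math.log(1.0 + tau) / tau

g = 5.0 / 3.0
# tau=1: eta ~ 0.14, bound ~ 0.31; tau=2: eta ~ 0.22, bound ~ 0.45
for tau in (1.0, 2.0):
    print(tau, round(eta_lenoir(tau, g), 2), round(eta_mt(tau), 2))
```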

#### III.1.2 Non-ergodic engine

We now analyze the performance of a non-ergodic ideal gas engine of the following form. Consider the scenario where an energy pulse generates a fraction of very fast particles moving horizontally, which thermalize very slowly with the rest of the particles. In this case these fast particles can be treated effectively as a one-dimensional gas with γ = 3, such that Eq. (20) applies. Microscopically, this result can be understood using the conservation of adiabatic invariants LL1 (). Indeed, during the slow motion of the piston the fast particles approximately conserve the adiabatic invariant equal to the product of the momentum, p, and twice the distance between the piston and the wall, which we denote V (since in our setup the area of the piston does not change, the length and the volume are equivalent). This implies

 pV = C_1, (21)

with C_1 a constant. Thus, we expect that

 p ∝ V^{−1}. (22)

Furthermore, the pressure, P, of such a gas can be thought of as the force per unit area or, equivalently, the energy density per unit volume,

 P ∝ p²/V. (23)

Taken together, these relations imply that

 PV³ = const. (24)

This is precisely the relationship for an adiabatically expanding gas with γ = 3. Thus, the efficiency is equivalent to that of an adiabatic 1D gas with γ = 3. The efficiency of this non-ergodic engine is still below the thermodynamic bound, because the latter does not depend on dimensionality, but it is much higher than the efficiency of the ergodic engine, in line with our general expectations (see Fig. 2). In particular, η_{γ=3}(τ=1) ≈ 0.22 and η_{γ=3}(τ=2) ≈ 0.34, i.e., we are getting an approximately 50% improvement of the efficiency compared to the ergodic gas. With this simple design it is impossible to exceed the thermodynamic bound, because the pressure is only sensitive to the overall kinetic energy, not to the details of the energy distribution.
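The claimed improvement can be checked by evaluating Eq. (20) at γ = 3 (non-ergodic, effectively one-dimensional) against γ = 5/3 (ergodic, monoatomic):

```python
def eta_lenoir(tau, gamma):          # Eq. (20)
    return 1.0 - (gamma / tau) * ((1.0 + tau) ** (1.0 / gamma) - 1.0)

for tau in (1.0, 2.0):
    e3, e53 = eta_lenoir(tau, 3.0), eta_lenoir(tau, 5.0 / 3.0)
    # relative gain of the non-ergodic engine over the ergodic one
    print(tau, round(e3, 2), round(e53, 2), round(e3 / e53 - 1.0, 2))
```

The relative gain is roughly 50% for both values of τ, consistent with the estimate in the text.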

Interestingly, η_γ given by Eq. (20) also describes the efficiency of a photon engine, where the piston is pushed by the photon pressure created by some light source like a bulb. In this case one should use γ = 4/3 in the ergodic case, where the photon gas is equivalent to black-body radiation at a higher temperature, and γ = 2 in the non-ergodic case, where the photons are effectively one-dimensional. Again, the non-ergodic setup allows one to increase the engine efficiency.

### III.2 Magnetic Gas Engine

It is possible to exceed the thermodynamic efficiency by considering more complicated engines with an additional magnetic degree of freedom. As we show below, one can then create a non-ergodic engine with efficiency higher than the thermodynamic bound, and which can be arbitrarily close to unity. Assume that we have a gas composed of atoms which have an additional magnetic degree of freedom, like a spin. For simplicity we assume that the spin is equal to 1/2, i.e., there are two magnetic states per atom. As will be clear from the discussion, this assumption is not needed for the main conclusion, and the calculations can easily be generalized to the case where we consider electric dipole moments or some other discrete or continuous internal degree of freedom instead of the spin.

The Hamiltonian of the system is then

 H_0 = Σ_j [m v_j²/2 − h_z σ^z_j], (25)

where σ^z_j are the Pauli matrices. To simplify notations, we absorbed the Bohr magneton and the g-factor into the magnetic field. The first term in the Hamiltonian is just the usual kinetic energy, and the second term is due to the interaction of the spin degrees of freedom with an external field in the z-direction. Initially the system is in equilibrium at a temperature T_0 and a fixed magnetic field h_z.

Now let us assume that via some external pulse we pump energy into the atoms by flipping their spins with some probability. This can be done by a resonant laser pulse or, e.g., by a Landau-Zener process where we adiabatically turn on a large magnetic field in the z-direction, then suddenly switch its sign and slowly decrease it back to zero. Ideally, this process creates a perfectly inverted population of atoms (i.e., the numbers of spin-up and spin-down particles are exchanged), but in practice there will always be some imperfections. In a general unitary process, the new occupation numbers can be obtained from a single parameter R describing the flipping rate, 0 ≤ R ≤ 1, with

 q_↑ = p_↑(1−R) + p_↓R,  q_↓ = p_↓(1−R) + p_↑R. (26)

During such a process, the energy added to the system is

 ΔQ = 2Nμh_zR(p_↑ − p_↓) = 2Nμh_zR tanh(μh_z/T_0). (27)

As expected, this energy is non-negative. As before, we first discuss the ergodic and then the non-ergodic engine.

#### III.2.1 Ergodic engine

In the equilibrium (ergodic) case, the atoms are first allowed to relax to a thermal distribution corresponding to the new energy. This results in a higher effective temperature T_H for the magnetic gas. This temperature can be found from the equation relating temperature to energy:

 (3/2)N T_H − Nμh_z tanh(μh_z/T_H) = E + ΔQ, (28)

where E is the initial equilibrium energy of the system and ΔQ is given by Eq. (27).

Work can be extracted in a similar manner to the ideal gas engine considered above by letting the gas adiabatically expand and push a piston until the pressures equilibrate. We know that the work done during such a process is

 W = ∫_{V_0}^{V_∗} (P(V) − P_0) dV, (29)

where we have adopted the notation of the last section.

During an adiabatic expansion the entropy must be conserved. The entropy as a function of the temperature and volume of the magnetic gas is given by

 S(T,V)/N = C + log V + log T/(γ−1) + log[2 cosh(μh_z/T)] − (μh_z/T) tanh(μh_z/T), (30)

where C is an unimportant constant and V is the volume. Notice that the entropy has contributions from both the kinetic and magnetic sectors. Additionally, we know that for the gas,

 P(V,T)=NT(V)/V, (31)

where the temperature, T(V), is now considered a function of the volume during the adiabatic expansion. T(V) can be found from the self-consistency condition for adiabatic expansion

 S(T(V), V) = S(T_H, V_0). (32)

Together, these relations allow us to numerically solve for the work performed by the engine during the adiabatic expansion; the results are shown in Figure 3. Notice that the additional spin contribution to the entropy makes the engine somewhat less efficient than the ideal gas engine, because the additional entropy is eventually released in the form of heat. However, this difference can be very small if the initial temperature is small compared to the Zeeman energy splitting (see Figure 3).
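The numerical procedure just described can be sketched in a few lines. The following is a minimal illustration (not the code behind Figure 3), in units k_B = μ = N = 1, with illustrative parameter values T_0 = 1, h_z = 1, R = 0.9 and a monoatomic kinetic sector, γ = 5/3:

```python
import math

T0, hz, R, gamma = 1.0, 1.0, 0.9, 5.0 / 3.0   # illustrative values
V0 = 1.0
P0 = T0 / V0                                   # ideal gas law with N = 1

def energy(T):
    """Kinetic plus Zeeman energy per particle."""
    return 1.5 * T - hz * math.tanh(hz / T)

def entropy(T, V):
    """Eq. (30) per particle, dropping the constant C."""
    x = hz / T
    return (math.log(V) + math.log(T) / (gamma - 1.0)
            + math.log(2.0 * math.cosh(x)) - x * math.tanh(x))

def bisect(f, a, b, n=100):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(n):
        m = 0.5 * (a + b)
        if f(m) * fa > 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

dQ = 2.0 * hz * R * math.tanh(hz / T0)                           # Eq. (27)
T_H = bisect(lambda T: energy(T) - energy(T0) - dQ, T0, 20.0)    # Eq. (28)
S_H = entropy(T_H, V0)

def T_of_V(V):
    """Adiabat T(V) from S(T(V), V) = S(T_H, V0), Eq. (32)."""
    return bisect(lambda T: entropy(T, V) - S_H, 0.05, T_H)

V_star = bisect(lambda V: T_of_V(V) / V - P0, V0, 50.0)          # P(V*) = P0

# work, Eq. (29), by trapezoidal integration of P(V) - P0 along the adiabat
n = 400
Vs = [V0 + (V_star - V0) * i / n for i in range(n + 1)]
dPs = [T_of_V(V) / V - P0 for V in Vs]
W = sum(0.5 * (dPs[i] + dPs[i + 1]) * (Vs[i + 1] - Vs[i]) for i in range(n))
eta = W / dQ
print(T_H, V_star, eta)
```

Hand-rolled bisection is used to keep the sketch dependency-free; any standard root finder would serve equally well.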

#### III.2.2 Non-ergodic engine

In the non-ergodic setup, the spins are allowed to do work before they relax with the kinematic degrees of freedom. The easiest setup we can imagine is to rotate the spins around the x-axis by the angle π when q_↓ > q_↑, i.e., when there is an inverted spin population. This can be done, e.g., by applying a strong magnetic field along the x-axis for exactly half the period of the Larmor precession. This extra field does not do any work by itself, since the magnetization is orthogonal to the x-axis. We can extract magnetic work from the system by coupling the magnetization of the spins to the source of the external z-component of the magnetic field. The amount of “magnetic” work generated from such a device is

 W_mag = N|M|μh_z = N(2q_↓ − 1)μh_z. (33)

Note that the same maximum work can be extracted from the system in the form of a coherent light pulse. The remaining excess energy can be used as in the previous section, by allowing the gas to adiabatically expand and drive a piston. Thus, an additional work, W_ad, can be extracted by using the equilibrium equations from the last section. Note that we always have W_mag < ΔQ. This allows us to calculate the efficiency for this system using the expression

 η = (W_mag + W_ad)/ΔQ. (34)

The resulting efficiency is plotted in Figure 3. This efficiency, as it should be, is always bounded by the maximum nonequilibrium efficiency given by the relative spin entropy:

 η^m_{ne} = T_0 S_r(q||p)/ΔQ = (T_0 N/ΔQ)[q_↑ log(q_↑/p_↑) + q_↓ log(q_↓/p_↓)], (35)

with q_↑, q_↓ given by Eq. (26) and ΔQ given by Eq. (27). Note that as the flip rate R approaches unity, i.e., when the up and down spins are exactly exchanged, the efficiency of this engine approaches unity. This is related to the fact that such a pulse does not generate additional entropy in the system, and once the spins are rotated by half the Larmor period around the x-axis, performing the macroscopic work, they are in an equilibrium state with no additional relaxation required. It is easy to see that any imperfections, like R < 1 or a small disorder in the spin Larmor frequencies, will decrease the engine efficiency, because there is always excess magnetization and excess entropy compared to the initial equilibrium state.
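The bound (35) is straightforward to evaluate per particle. The sketch below (illustrative values T_0 = 1, μh_z = 1) confirms that η^m_{ne} → 1 as R → 1:

```python
import math

T0, hz = 1.0, 1.0                            # illustrative values, mu*hz -> hz
b = hz / T0
p_up = math.exp(b) / (2.0 * math.cosh(b))    # equilibrium occupations
p_dn = math.exp(-b) / (2.0 * math.cosh(b))

def eta_max(R):
    """Eq. (35): maximum nonequilibrium efficiency after a flip with rate R."""
    q_up = p_up * (1.0 - R) + p_dn * R       # Eq. (26)
    q_dn = p_dn * (1.0 - R) + p_up * R
    dQ = 2.0 * hz * R * (p_up - p_dn)        # Eq. (27), per particle
    S_r = q_up * math.log(q_up / p_up) + q_dn * math.log(q_dn / p_dn)
    return T0 * S_r / dQ

for R in (0.5, 0.9, 0.99, 1.0):
    print(R, round(eta_max(R), 4))           # efficiency grows toward unity
```

At R = 1 the occupations are exactly exchanged, S(q) = S(p), and the bound reaches unity, exactly as argued in the text.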

## IV Derivation of Thermodynamic Identities

In this section, we derive the nonequilibrium fundamental thermodynamic relations established earlier. As shown in Figure 1, we consider a process that proceeds in two parts. In part I), the system is prepared in equilibrium and is characterized by a Gibbs distribution with temperature T and parameters λ_1. We denote the initial distribution by p^{(1)}. The system then undergoes an arbitrary process, which brings it to a new, possibly nonequilibrium, state q while simultaneously changing λ from λ_1 to λ_2. Since the energy distribution only depends on the probabilities q_n of occupying the eigenstates, which form the so-called diagonal ensemble, we will be concerned only with these probabilities. The distribution q can correspond to an effective thermal distribution at a higher temperature, or to a prethermalized, nonequilibrium distribution. For example, for a weakly interacting gas of particles, q is described by their possibly nonequilibrium momentum distribution. In stage II), the system prepared in the nonequilibrium state relaxes to a new equilibrium state with the bath and is described by an equilibrium distribution p^{(2)} with λ = λ_2. In principle, during this relaxation one can still perform work on the system, but for simplicity we assume this does not happen.

We start by defining several important concepts. The first is the diagonal entropy of a distribution $p_n$, which we label $S(p)$. The diagonal entropy is the same as the Shannon entropy of the distribution and reduces to the thermodynamic entropy for large systems polkovnikov2011microscopic (). It is explicitly given by

\[ S(p) = -\sum_n p_n \log p_n. \tag{36} \]

For stationary distributions the diagonal entropy coincides with the von Neumann entropy $S_{vn} = -\mathrm{Tr}[\rho\log\rho]$, with $\rho$ the corresponding density matrix. A second related concept is the relative entropy $S_r(q\|p)$ between two distributions $q$ and $p$, which is defined as

\[ S_r(q\|p) = \sum_n q_n \log\frac{q_n}{p_n}. \tag{37} \]
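These two entropies are straightforward to evaluate numerically. The following minimal Python sketch (not part of the original derivation; the distributions are made-up illustrative values) computes the diagonal entropy of Eq. (36) and the relative entropy of Eq. (37), and illustrates the non-negativity of the latter:

```python
import math

def shannon_entropy(p):
    """Diagonal (Shannon) entropy S(p) = -sum_n p_n log p_n."""
    return -sum(pn * math.log(pn) for pn in p if pn > 0)

def relative_entropy(q, p):
    """Relative entropy S_r(q||p) = sum_n q_n log(q_n / p_n)."""
    return sum(qn * math.log(qn / pn) for qn, pn in zip(q, p) if qn > 0)

# Two normalized distributions over four states (illustrative values)
p = [0.4, 0.3, 0.2, 0.1]
q = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(p))      # below log(4), since p is not uniform
print(relative_entropy(q, p))  # positive (Gibbs' inequality)
print(relative_entropy(p, p))  # zero for identical distributions
```

The non-negativity of $S_r$ is exactly what drives all the efficiency bounds derived below.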

In general, the notion of relative entropy can be extended to full density matrices, $\rho_q$ and $\rho_p$, rather than their diagonal parts:

\[ S^{vn}_r(\rho_q\|\rho_p) = \mathrm{Tr}\left[\rho_q\left(\log\rho_q - \log\rho_p\right)\right]. \tag{38} \]

As we already mentioned, in this work we will be interested only in the diagonal part of the distribution and thus in the associated entropy $S_r$, not $S^{vn}_r$.

Another related concept is the adiabatic work, $W_{ad}$ (see also a related discussion in Ref. [polkovnikov2008heat, ]). Consider a system parameterized by $\lambda$ with a $\lambda$-dependent energy spectrum $E_n(\lambda)$. Let us assume that the system is initially in equilibrium with a bath at temperature $T$, described by a Boltzmann distribution

\[ p^{(1)}_n = \frac{e^{-\beta E^{(1)}_n}}{Z^{(1)}} \tag{39} \]

with $Z^{(1)}$ the usual partition function. Now consider a process where the external parameter is adiabatically changed from $\lambda^{(1)}$ to $\lambda^{(2)}$. Since this is done adiabatically, the probability distribution does not change during the process. We define the total change in energy of the system during such an adiabatic process as the adiabatic work, $W^{I}_{ad}$. It represents the minimum amount of work that can be done in changing the parameter from $\lambda^{(1)}$ to $\lambda^{(2)}$ and is given by

\[ W^{I}_{ad} = \sum_n p^{(1)}_n \left( E^{(2)}_n - E^{(1)}_n \right). \tag{40} \]
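As a sanity check on this definition, the adiabatic work for a toy spectrum can be computed directly. In the sketch below (an illustration only; the linear spectrum $E_n = n\omega$ and all parameter values are assumptions, not from the paper), the occupations are frozen while the level spacing is changed, so $W_{ad} = (\omega_2 - \omega_1)\langle n\rangle$:

```python
import math

def gibbs(energies, beta):
    """Boltzmann distribution p_n = exp(-beta E_n) / Z."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

def adiabatic_work(p1, e1, e2):
    """W_ad = sum_n p_n^(1) (E_n^(2) - E_n^(1)); occupations are frozen."""
    return sum(p * (b - a) for p, a, b in zip(p1, e1, e2))

# Truncated oscillator-like spectrum E_n = n * omega, n = 0..20
n = range(21)
e1 = [k * 1.0 for k in n]   # omega_1 = 1
e2 = [k * 2.0 for k in n]   # omega_2 = 2
p1 = gibbs(e1, beta=1.0)

w_ad = adiabatic_work(p1, e1, e2)
# Consistency check: for E_n = n*omega, W_ad = (omega_2 - omega_1) * <n>
mean_n = sum(p * k for p, k in zip(p1, n))
print(w_ad, (2.0 - 1.0) * mean_n)
```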

After defining these concepts, let us consider process I. The total energy change during this process is

\[ \Delta E_I = \sum_n \left( q_n E^{(2)}_n - p^{(1)}_n E^{(1)}_n \right). \tag{41} \]

Now notice that we can rewrite

\begin{align}
\Delta E_I &= W^{I}_{ad} + \sum_n \left( q_n - p^{(1)}_n \right) E^{(2)}_n = W^{I}_{ad} - T \sum_n \left( q_n - p^{(1)}_n \right) \log p^{(2)}_n \nonumber\\
&= W^{I}_{ad} + T\sum_n q_n \log\frac{q_n}{p^{(2)}_n} - T\sum_n p^{(1)}_n \log\frac{p^{(1)}_n}{p^{(2)}_n} + T\sum_n p^{(1)}_n \log p^{(1)}_n - T\sum_n q_n \log q_n \nonumber\\
&= W^{I}_{ad} + T S_r(q\|p^{(2)}) - T S_r(p^{(1)}\|p^{(2)}) + T\left[ S(q) - S(p^{(1)}) \right] \nonumber\\
&= W^{I}_{ad} + T S_r(q\|p^{(2)}) - T S_r(p^{(1)}\|p^{(2)}) + T\Delta S_I \tag{42}
\end{align}

By definition, the heat is just

\[ Q_I = \Delta E_I - W^{I}_{ad} = T\Delta S_I + T S_r(q\|p^{(2)}) - T S_r(p^{(1)}\|p^{(2)}). \tag{43} \]

This is the first of the thermodynamic identities. For a cyclic process, or a standard heating process where the parameter $\lambda$ does not change, we have $p^{(1)} = p^{(2)}$ and the last term vanishes, so

\[ Q_I = \Delta E_I - W^{I}_{ad} = T\Delta S_I + T S_r(q\|p^{(2)}). \tag{44} \]

Because the relative entropy is non-negative, we immediately see that $Q_I \geq T\Delta S_I$, which is the first inequality in Eq. (3). We emphasize again that $T$ here is the initial temperature of the system.

In process II, when the system relaxes, the total change in energy is by definition the heat, $Q_{II}$, exchanged with the reservoir. In this case, we can write

\begin{align}
Q_{II} &= \sum_n \left( p^{(2)}_n - q_n \right) E^{(2)}_n = T\sum_n \left( q_n - p^{(2)}_n \right) \log p^{(2)}_n + T\sum_n q_n \log q_n - T\sum_n q_n \log q_n \nonumber\\
&= T S(p^{(2)}) - T S(q) - T S_r(q\|p^{(2)}) = T\Delta S_{II} - T S_r(q\|p^{(2)}) \tag{45}
\end{align}

This yields the second thermodynamic identity, from which the second inequality in Eq. (3) immediately follows (now $T$ is the temperature of the bath to which the initial distribution relaxes). This result implies that the relative entropy between arbitrary nonequilibrium and equilibrium distributions has the physical meaning of the total entropy generated in the system and bath during the relaxation of the nonequilibrium distribution to equilibrium. Indeed, by energy conservation the heat dissipated to the bath is $Q_B = -Q_{II}$. Because the temperature of the bath does not change, we can use the standard thermodynamic identity $\Delta S_B = Q_B/T$. Combining this with Eq. (45) and changing the notation $\Delta S_{II} \to \Delta S_A$, we indeed find that

\[ S_r(q\|p^{(2)}) = \Delta S_A + \Delta S_B. \tag{46} \]

As expected, in accord with the second law of thermodynamics, the total entropy change in the system and the bath is always non-negative, since $S_r(q\|p^{(2)}) \geq 0$ for arbitrary $q$.
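The identities (45) and (46) are exact and can be verified numerically for any toy model. The following sketch (the spectrum, temperature and nonequilibrium state $q$ are arbitrary illustrative choices, not from the paper) checks both:

```python
import math

def gibbs(E, beta):
    """Boltzmann distribution over spectrum E at inverse temperature beta."""
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [wi / z for wi in w]

def S(p):   # diagonal entropy, Eq. (36)
    return -sum(x * math.log(x) for x in p if x > 0)

def Sr(q, p):  # relative entropy, Eq. (37)
    return sum(a * math.log(a / b) for a, b in zip(q, p) if a > 0)

T, beta = 1.0, 1.0
E = [0.0, 0.5, 1.3, 2.0]        # toy spectrum
p2 = gibbs(E, beta)             # equilibrium distribution
q = [0.1, 0.2, 0.3, 0.4]        # arbitrary nonequilibrium state

# Heat absorbed by the system during relaxation, Q_II = sum (p2 - q) E
Q_II = sum((b - a) * e for a, b, e in zip(q, p2, E))
dS_II = S(p2) - S(q)

# Identity (45): Q_II = T*dS_II - T*Sr(q||p2)
print(Q_II, T * dS_II - T * Sr(q, p2))

# Identity (46): total entropy generated in system + bath equals Sr >= 0
dS_bath = -Q_II / T
print(dS_II + dS_bath, Sr(q, p2))
```

Both printed pairs agree to machine precision, and the total generated entropy is positive, as Eq. (46) requires.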

Let us finally point out that the relations (43) and (45) are also valid if we use the von Neumann entropy (in contrast to the diagonal entropy) to define $S$, together with the corresponding von Neumann relative entropy given by Eq. (38). In this form similar relations were obtained earlier in Ref. [deffnerlutz2011, ]; see also Ref. [schlogl1980, ] for similar results in classical systems.

### Quantum-classical correspondence

In this section, we discuss the relationship between the "quantum" notation used in the paper and the classical thermodynamic quantities that usually depend only on energies. For macroscopic systems, we can replace the probability $p_n$ to be in a state $n$ by the probability $W_p(E)$ that the system has energy $E$ by multiplying by an appropriate density of states $\Omega(E)$ (see Ref. [bunin2011universal, ]). We make the usual identification

\[ W_p(E) = p(E)\,\Omega(E) \tag{47} \]

and replace sums by integrals

\[ \sum_n \to \int dE\,\Omega(E). \tag{48} \]

This identification allows us to translate all expressions in the text into the usual thermodynamic expressions. As an illustration, consider the diagonal entropy

\[ S = -\sum_n p_n \log p_n. \tag{49} \]

Under the identification above this becomes

\[ S = -\int dE\, W_p(E)\log W_p(E) + \int dE\, W_p(E)\log\Omega(E). \tag{50} \]

For macroscopic systems, we know that the energy distribution is well approximated by a Gaussian (see Ref. [bunin2011universal, ])

\[ W_p(E) = \frac{e^{-(E-\bar E)^2/2\delta E^2}}{\sqrt{2\pi\,\delta E^2}}. \tag{51} \]

Therefore, we can calculate the entropy and one finds

\begin{align}
S(\bar E) &= \log\left(\sqrt{2\pi e}\,\delta E\right) + \int dE\, W_p(E)\log\Omega(E) \tag{52}\\
&\approx \log\left(\sqrt{2\pi e}\,\delta E\,\Omega(\bar E)\right). \tag{53}
\end{align}

As discussed in Section II.D of Ref. [polkovnikov2011microscopic, ], this is precisely the thermodynamic entropy. We note that there is a recent different derivation of this result based on the saddle-point approximation gurarie2012entropy ().
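The first term of Eq. (52) can be checked by direct numerical integration. The sketch below (the mean and width are arbitrary illustrative values) compares a Riemann sum of $-\int dE\, W_p(E) \log W_p(E)$ against the closed form $\log(\sqrt{2\pi e}\,\delta E)$:

```python
import math

def gaussian(E, mean, dE):
    """Gaussian energy distribution W_p(E) of Eq. (51)."""
    return math.exp(-(E - mean) ** 2 / (2 * dE ** 2)) / math.sqrt(2 * math.pi * dE ** 2)

mean, dE = 10.0, 0.7   # illustrative mean energy and width

# Riemann sum of -int dE W(E) log W(E) over +-10 widths around the mean
h = 1e-4
steps = int(20 * dE / h)
num = -sum(
    gaussian(mean - 10 * dE + h * i, mean, dE)
    * math.log(gaussian(mean - 10 * dE + h * i, mean, dE))
    * h
    for i in range(steps)
)

exact = math.log(math.sqrt(2 * math.pi * math.e) * dE)  # log(sqrt(2*pi*e)*dE)
print(num, exact)
```

The two numbers agree to the accuracy of the discretization, confirming the Gaussian entropy term in Eq. (52).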

## V Maximum Efficiency of Thermal Engines

### V.1 Ergodic engines: general thermodynamic considerations

We start our analysis from the most straightforward setup, where the system is macroscopic and the relaxation time within the system is the fastest time scale in the problem. This is the natural situation in gases, liquids and solids, where the relaxation times are extremely fast. In this situation the initial energy increase $\Delta Q_I$ results in a corresponding entropy increase $\Delta S_I$. Because by assumption the bath is not affected during this initial stage, this entropy increase equals the total entropy change of the system and environment. For the remainder of the engine's cycle, the total entropy change of the system and environment must be non-negative due to the second law of thermodynamics. By the end of the cycle, the entropy of the system returns to its initial value. Consequently, the entropy of the environment must increase by $\Delta S_I$ or more. Thus, the minimal heat dissipated to the environment during the cycle is $T_0\,\Delta S_I$, where $T_0$ is the temperature of the environment, and the maximum amount of work that can be performed during the cycle is $\Delta Q_I - T_0\,\Delta S_I$. Hence, the maximum thermodynamic efficiency of our engine is

\[ \eta^{t}_m = 1 - \frac{T_0\,\Delta S_I}{\Delta Q_I} = \frac{1}{\Delta Q_I}\int_E^{E+\Delta Q_I} dE' \left( 1 - \frac{T_0}{T(E')} \right), \tag{54} \]

where $T(E')$ is the equilibrium temperature of the system at energy $E'$. We emphasize that in deriving this result we never assumed that the initial energy change in the system is quasi-static. We have only used the fact that the equilibrium entropy is a unique function of energy. The result (9) is very general, since it is obtained with the single assumption of fast equilibration within the system. This assumption is justified for most practical heat engines like combustion engines. As discussed earlier, this efficiency bound becomes very simple for ideal gases, where the temperature is linear in the energy. Assuming that in the beginning of the cycle the system is in equilibrium with the environment, $T = T_0$, we recover the bound given by Eq. (10).
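The two forms of the bound (54) can be compared numerically for an ideal-gas-like caloric relation $T(E) = E/C$. In the sketch below (the heat capacity $C$, temperature and heat pulse are hypothetical values chosen for illustration), the closed-form entropy change reproduces the direct integral:

```python
import math

C = 1.5          # heat capacity (e.g. ideal gas: C = (3/2) N k_B), illustrative
T0 = 300.0       # environment temperature
E0 = C * T0      # start in equilibrium with the environment
dQ = 500.0       # injected heat

def T_of_E(E):
    """Ideal-gas-like caloric equation of state, T(E) = E / C."""
    return E / C

# Closed form: dS = C log(1 + dQ/E0), so eta = 1 - T0 * dS / dQ
dS = C * math.log((E0 + dQ) / E0)
eta_closed = 1 - T0 * dS / dQ

# Direct midpoint-rule evaluation of the integral in Eq. (54)
n = 200000
h = dQ / n
eta_integral = sum(
    (1 - T0 / T_of_E(E0 + (i + 0.5) * h)) * h for i in range(n)
) / dQ

print(eta_closed, eta_integral)
```

Both expressions give the same efficiency, as they must, since $\Delta S_I = \int dE'/T(E')$.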

The result (9) still applies if in the beginning of the cycle the system is at some temperature $T$ which is not the same as the temperature of the environment $T_0$. This setup is realized e.g. in Otto engines, which are the closest prototypes of real combustion engines. From the conceptual point of view this can happen if the engine does not have time to fully relax to equilibrium with the environment during one cycle. It is easy to see that in the limit when the injected heat is small, i.e. when the excess energy pulse does not substantially change the temperature of the engine, the maximum efficiency is given by the Carnot limit. Practically, this situation is probably not advantageous, since an engine which is hot most of the time will constantly radiate heat to the atmosphere.

For ideal gases with $T \neq T_0$, the result (9) generalizes to

\[ \eta^{t}_m = 1 - \frac{\tau_i}{\tau}\log(1+\tau), \tag{55} \]

where $\tau$ is the injected heat measured in units of the initial thermal energy of the gas and $\tau_i = T_0/T$ is the ratio of the environment and initial engine temperatures. When $\tau_i = 1$, Eq. (55) obviously reduces to Eq. (10).

### V.2 Explicit derivation from general expression

In this section, we re-derive these results starting with the general expression

\[ \eta = \frac{T\,S_r(q\|p_{eq})}{\Delta Q}. \tag{56} \]

Rather than repeating and expanding the "quantum" derivation given earlier, we will work here directly with continuous distributions, to give the reader a feel for how these results can be derived directly from classical statistical physics. In this derivation we make no assumptions about the form of the nonequilibrium distribution $q(E)$ and assume that the equilibrium distribution obeys the Boltzmann form:

\[ p(E) = \frac{1}{Z}\exp[-\beta E], \tag{57} \]

where $Z$ is the partition function.

The key assumption in our derivation is that after the initial pulse of energy the engine reaches a stationary state with respect to the fast Hamiltonian, i.e. the relaxation with respect to this Hamiltonian is the fastest time scale in the problem. If the Hamiltonian is interacting and the system is large, then such relaxation is equivalent to thermalization of the engine at some higher temperature. But if the Hamiltonian is noninteracting, or the engine is very small, consisting of very few degrees of freedom, then such relaxation leads to some stationary nonequilibrium state. For integrable systems such a state can be well described by a generalized Gibbs ensemble rigol2007GGE (). For a single particle in a chaotic cavity such relaxation means loss of memory about the position and direction of the momentum; for a particle moving along a periodic orbit in a regular cavity the relaxation implies loss of memory of the coordinate along the trajectory (see Ref. [bunin2011universal, ] for additional discussion). The mathematical result important for us, which extends the Araki-Lieb subadditivity theorem arakilieb1970 () to the diagonal entropy, is that if two systems are prepared in stationary states and then coupled in an arbitrary way, the sum of their diagonal entropies can only increase or stay the same polkovnikov2011microscopic (). Thus

\[ \Delta S_{II} + \Delta S_B \geq 0. \tag{58} \]

In the same work (see also Sec. IV) it was also proven that for large systems like the bath the diagonal entropy reduces to the usual thermodynamic entropy and obeys the fundamental thermodynamic relation $\Delta S_B = Q_B/T$. From energy conservation we have $Q_B = -Q_{II}$, thus we find

\[ Q_{II} \leq T\,\Delta S_{II}. \tag{59} \]

So the minimal amount of heat dissipated to the bath equals $T$ times the difference between the diagonal entropies of the system in the nonequilibrium and equilibrium states. Therefore the maximum efficiency of any engine, equilibrium or nonequilibrium, is given by

\[ \eta_m = \max\left[ \frac{Q_I + Q_{II}}{Q_I} \right] = \frac{Q_I + T\Delta S_{II}}{Q_I}. \tag{60} \]

Now let us evaluate these terms explicitly

\begin{align}
Q_I &= \int dE\,\Omega(E)\,E\,(q(E) - p(E)) = \int dE\,E\,(W_q(E) - W_p(E)) \nonumber\\
&= T\int dE\,(W_p(E) - W_q(E))\log p(E), \tag{61}
\end{align}

where $W_q(E) = q(E)\,\Omega(E)$ is the nonequilibrium energy distribution after the initial energy pulse. Similarly (see Ref. [polkovnikov2011microscopic, ])

\[ T\Delta S_{II} = T\int dE\left[ W_q(E)\log q(E) - W_p(E)\log p(E) \right]. \tag{62} \]

Thus we find

\[ Q_I + T\Delta S_{II} = T\int dE\,W_q(E)\log\left[\frac{q(E)}{p(E)}\right] = T\int dE\,W_q(E)\log\left[\frac{W_q(E)}{W_p(E)}\right] = T\,S_r(q\|p). \tag{63} \]

This completes the proof of Eq. (56).
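This chain of identities can be illustrated with a small discrete spectrum. In the sketch below (the spectrum and temperatures are arbitrary illustrative choices), the ergodic engine thermalizes after the pulse at an effective temperature $T_h$; the efficiency computed from Eq. (60) coincides with the relative-entropy form of Eq. (56) and lies strictly below the Carnot value $1 - T/T_h$:

```python
import math

def gibbs(E, beta):
    """Boltzmann distribution over spectrum E at inverse temperature beta."""
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [x / z for x in w]

def S(p):   # diagonal entropy
    return -sum(x * math.log(x) for x in p if x > 0)

def Sr(q, p):  # relative entropy
    return sum(a * math.log(a / b) for a, b in zip(q, p) if a > 0)

E = [0.0, 0.4, 1.0, 1.7]        # toy spectrum
T, Th = 1.0, 2.0                # bath and effective "hot" temperatures
p = gibbs(E, 1.0 / T)           # initial equilibrium state
q = gibbs(E, 1.0 / Th)          # ergodic engine: thermalized at Th after the pulse

Q_I = sum((b - a) * e for a, b, e in zip(p, q, E))   # injected heat
dS_II = S(p) - S(q)                                  # entropy change during relaxation

eta_m = (Q_I + T * dS_II) / Q_I   # Eq. (60)
eta_sr = T * Sr(q, p) / Q_I       # Eq. (56): same number, via relative entropy
eta_carnot = 1 - T / Th

print(eta_m, eta_sr, eta_carnot)  # eta_m equals eta_sr and is strictly below Carnot
```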

We can also generalize the result to "coherent" engines, where the quantum coherence between particles is maintained. As we explained in the main text, such engines either work on time scales faster than the relaxation even with respect to the fast Hamiltonian, or preserve coherence during the evolution, e.g. when all particles perform a collective oscillatory motion. For this class of engines the proof above stands if we use the von Neumann entropy instead of the diagonal entropy for the system in Eqs. (58) and (59), as well as the full relative entropy $S^{vn}_r$ (see Eq. (38)) instead of $S_r$ in Eq. (63). The proof relies on the subadditivity of the von Neumann entropy arakilieb1970 (), which states that the entropy of the combined system and bath does not exceed the sum of their individual entropies. Combining this with the results of Ref. [polkovnikov2011microscopic, ], one proves Eq. (63) with $S_r$ replaced by $S^{vn}_r$. It is easy to check that the relative entropy can only increase if the density matrix develops off-diagonal elements, so, as expected, engines which preserve coherence during the work cycle can be more efficient than engines which lose coherence due to dephasing.

### V.3 Increased efficiency of non-ergodic engines

The physical reason for inevitable losses in engines is the second law of thermodynamics, which states that the total entropy of the system and bath can only increase or stay the same. In the class of engines we consider, the entropy is generated during the initial pulse of energy, and its dissipation to the bath leads to heating losses. Thus, it is intuitively clear that an engine can be made more efficient if the entropy added to the system during the initial energy pulse is minimal. Because the equilibrium Gibbs distribution maximizes entropy for a given energy, it is clear that nonequilibrium engines with smaller entropy have the potential to be more efficient than equilibrium engines. Here we mathematically prove that this is indeed the case using the general result (56). In particular, below we prove that for a given energy change the relative entropy between the $q$ and $p$ distributions is minimal when $q$ is of the Gibbs form. The minimum of the relative entropy can be found by extremizing the expression

\[ \sum_n q_n\left[\log(q_n) - \log(p_n)\right] + \alpha_1\sum_n E_n\left(q_n - p_n\right) + \alpha_2\sum_n q_n \tag{64} \]

with respect to the set of $q_n$, where $\alpha_1$ and $\alpha_2$ are the Lagrange multipliers enforcing the fixed energy and the probability conservation. Differentiating the expression above with respect to a particular $q_m$, we find

\[ \log(q_m) + 1 - \log(p_m) + \alpha_1 E_m - \alpha_2 = 0 \;\Rightarrow\; q_m = C\exp\left[-(\beta + \alpha_1)E_m\right]. \tag{65} \]

That is, the distribution $q$ which extremizes the relative entropy with respect to the Gibbs distribution is another Gibbs distribution, corresponding to a different temperature fixed by the energy change. The fact that this extremum is a minimum is obvious from considering the case of zero energy change. It can also be checked explicitly by taking the second derivatives of the relative entropy with respect to the set of $q_n$.
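This variational statement is easy to probe numerically: perturbing a Gibbs state within the fixed-energy manifold should only increase the relative entropy. The sketch below (the four-level spectrum, temperatures, and perturbation direction are illustrative choices) uses a perturbation $v$ that conserves both the normalization ($\sum_n v_n = 0$) and the mean energy ($\sum_n v_n E_n = 0$):

```python
import math

def gibbs(E, beta):
    """Boltzmann distribution over spectrum E at inverse temperature beta."""
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [x / z for x in w]

def Sr(q, p):
    """Relative entropy S_r(q||p)."""
    return sum(a * math.log(a / b) for a, b in zip(q, p) if a > 0)

E = [0.0, 1.0, 2.0, 3.0]
p = gibbs(E, 1.0)            # reference equilibrium distribution at beta = 1
qstar = gibbs(E, 0.6)        # Gibbs state at a higher temperature
E_target = sum(a * e for a, e in zip(qstar, E))

# Perturbation with sum(v) = 0 and sum(v*E) = 0: same norm, same mean energy
v = [1.0, -2.0, 1.0, 0.0]
for eps in [0.0, 0.01, 0.03, 0.05]:
    q = [a + eps * b for a, b in zip(qstar, v)]
    mean_E = sum(a * e for a, e in zip(q, E))
    print(eps, round(mean_E - E_target, 12), Sr(q, p))
# S_r is smallest at eps = 0: among all states with this mean energy,
# the Gibbs distribution minimizes the relative entropy.
```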

## VI Discussion

In this work, we extended the fundamental thermodynamic relation to a large class of nonequilibrium phenomena where energy is suddenly injected into a system coupled to a thermal bath. Using these relations, we have derived new thermodynamic bounds for engines that operate with a single reservoir. In particular, we have shown that the efficiency can be related to the relative entropy of the nonequilibrium distribution immediately following the injection of energy. Our new bound has several striking implications. First, we find that the true thermal efficiency of single-reservoir engines is below the corresponding Carnot bound. This gives a better metric for measuring the efficiency of actually existing engines. Second, the efficiency of engines can be increased by using nonequilibrium (prethermalized) distributions. Taken together, this suggests some broad guidelines for building more efficient engines.

The Carnot engine has served as a major source of intuition for increasing the efficiency of real engines. It is our hope that the nonequilibrium efficiency (7) can provide similar intuition for nonequilibrium engines. Of course, the technological challenge of constructing a specific engine which can harness the additional efficiency possible with nonequilibrium engines remains an open question. Nonetheless, as illustrated with the simple examples considered here, this represents a tantalizing new possibility for future engine designs.

###### Acknowledgements.
We would like to acknowledge Luca D’Alessio, Sebastian Deffner, Shyam Erramilli, Yariv Kafri, and Clemens Neuenhahn for useful discussions and feedback on the paper. AP and PM were partially funded by a Sloan Research Fellowship. Work of AP was funded by BSF 2010318, NSF DMR-0907039, AFOSR FA9550-10-1-0110, and the Simons Foundation.

## References

• (1) Reif F (1965) Fundamentals of Statistical and Thermal Physics (McGraw-Hill Series in Fundamentals of Physics) (McGraw-Hill Science/Engineering/Math).
• (2) Fermi E (1956, c1936) Thermodynamics (Dover).
• (3) Magnasco M (1993) Forced thermal ratchets. Physical Rev. Lett. 71:1477–1481.
• (4) Kardar M (2007) Statistical physics of particles (Cambridge Univ Pr).
• (5) Cover T, Thomas J, Wiley J, et al. (1991) Elements of information theory (Wiley Online Library) Vol. 6.
• (6) MacKay D (2003) Information theory, inference, and learning algorithms (Cambridge Univ Pr).
• (7) Touchette H (2009) The large deviation approach to statistical mechanics. Physics Reports 478:1–69.
• (8) Bunin G, D’Alessio L, Kafri Y, Polkovnikov A (2011) Universal energy fluctuations in thermally isolated driven systems. Nature Physics 7:913–917.
• (9) Deffner S, Lutz E (2010) Generalized clausius inequality for nonequilibrium quantum processes. Phys. Rev. Lett. 105:170402.
• (10) Deffner S, Lutz E (2011) Nonequilibrium entropy production for open quantum systems. Phys. Rev. Lett. 107:140404.
• (11) Polkovnikov A (2010) Microscopic diagonal entropy and its connection to basic thermodynamic relations. Annals of Physics 326:486–499.
• (12) Levine R (1978) Information theory approach to molecular reaction dynamics. Annual Review of Physical Chemistry 29:59–92.
• (13) Schlögl F (1980) Stochastic measures in nonequilibrium thermodynamics. Phys. Rep. 62:267.
• (14) Qian H (2001) Relative entropy: free energy associated with equilibrium fluctuations and nonequilibrium deviations. Phys. Rev. E 63:042103.
• (15) Mehta P, Andrei N (2008) Nonequilibrium quantum impurities: From entropy production to information theory. Phys. Rev. Lett. 100:86804.
• (16) Landau L, Lifshitz E (1980) Statistical Physics Part I (Butterworth-Heinemann).
• (17) Berges J, Borsanyi S, Wetterich C (2004) Prethermalization. Physical review letters 93:142002.
• (18) Gurarie V (1995) Probability density, diagrammatic technique, and epsilon expansion in the theory of wave turbulence. Nuclear Physics B 441:569.
• (19) Moeckel M, Kehrein S (2010) Crossover from adiabatic to sudden interaction quenches in the hubbard model: prethermalization and non-equilibrium dynamics. New Journal of Physics 12:055016.
• (20) Gring M, Kuhnert M, Langen T, Kitagawa T, Rauer B, Schreitl M, Mazets I, Smith DA, Demler E, Schmiedmayer J (2012) Relaxation and pre-thermalization in an isolated quantum system. Science 337:1318.
• (21) Polkovnikov A, Sengupta K, Silva A, Vengalattore M (2011) Colloquium: Nonequilibrium dynamics of closed interacting quantum systems. Reviews of Modern Physics 83:863.
• (22) Deutsch J (1991) Quantum statistical mechanics in a closed system. Physical Review A 43:2046.
• (23) Srednicki M (1994) Chaos and quantum thermalization. Phys. Rev. E 50:888.
• (24) Rigol M, Dunjko V, Olshanii M (2008) Thermalization and its mechanism for generic isolated quantum systems. Nature 452:854–858.
• (25) Jaynes E (1957) Information theory and statistical mechanics. ii. Phys. Rev. 108:171.
• (26) Landau L, Lifshitz E (1980) Classical Mechanics (Butterworth-Heinemann).
• (27) Polkovnikov A (2008) Microscopic expression for the heat in the diagonal basis. Phys. Rev. Lett. 101:220402.
• (28) Gurarie V (2012) Large time dynamics and the generalized Gibbs ensemble. arXiv:1209.3816.
• (29) Rigol M, Dunjko V, Yurovsky V, Olshanii M (2007) Relaxation in a completely integrable many-body quantum system: An ab initio study of the dynamics of the highly excited states of 1d lattice hard-core bosons. Phys. Rev. Lett. 98:050405.
• (30) Araki H, Lieb EH (1970) Entropy inequalities. Comm. Math. Phys. 18:160.