Optimized finite-time information machine

Michael Bauer, Andre C. Barato and Udo Seifert
II. Institut für Theoretische Physik, Universität Stuttgart, 70550 Stuttgart, Germany
Abstract

We analyze a periodically operating, optimal finite-time two-state information-driven machine that extracts work from a single heat bath by exploiting imperfect measurements. Two models are considered: a memory-less one that ignores past measurements and an optimized model for which the feedback scheme consists of a protocol depending on the whole history of measurements. Depending on the precision of the measurement and on the period length, the optimized model displays a phase transition to a phase where measurements are judged as non-reliable. We obtain the critical line exactly and show that the optimized model leads to more work extraction in comparison with the memory-less model, with the gain parameter being larger in the region where the frequency of non-reliable measurements is higher. We also demonstrate that the model obeys two second law inequalities, with the extracted work being bounded by the change of the entropy of the system and by the mutual information.

pacs:
05.70.Ln, 05.10.Gg

1 Introduction

The thermodynamics of information processing is a very active area of research. Whereas central concepts in this field were developed decades ago [1, 2, 3], more recently the fluctuation relation obtained by Sagawa and Ueda [4] has shown that stochastic thermodynamics [5] provides a convenient framework for studying the relation between information and thermodynamics. Moreover, ingenious experiments with small systems [6, 7], verifying second law inequalities that involve information, have played an important role in triggering the recent avalanche of papers. These works deal with the derivation of fluctuation relations and second law inequalities [8, 9, 10, 4, 11, 12, 13, 14, 15, 16, 17, 18, 19] and with the study of specific models [20, 21, 22, 23, 24, 25, 26, 27, 28, 29].

In finite-time thermodynamics the issue of optimal protocols is of central importance. A recent result within stochastic thermodynamics has been the observation that the optimal protocol has discontinuities at the beginning and end of the finite-time process [30, 31, 32, 33, 34, 35, 36]. In information processing, optimal protocols have so far been analyzed for the maximal work extraction in a feedback driven system described by a one-dimensional over-damped Langevin equation [37] and for the minimum dissipated heat in an erasure process [38, 39].

In this paper we study a paradigmatic discrete two-state model [28, 34, 40, 41], where the work extraction, performed by lifting and lowering one energy level, is driven by feedback. Besides applying the optimal protocol leading to the maximal work during one period, this information machine will also be optimized in the sense that the protocol takes the whole history of measurements into account.

We show that this optimized feedback strategy leads to more work extraction in comparison to a simple memory-less machine. Moreover, we observe a phase transition, where in one phase the machine always lifts the state indicated by the last measurement as empty and in the other phase the state measured as occupied is lifted with a certain frequency. The extracted work is observed to be bounded by two quantities: the familiar mutual information between system and controller and the change of the entropy of the system. While the second bound is valid for every measurement trajectory, the first becomes valid only after an average over measurement trajectories is taken. Finally, we show that the memory-less model allows for a different physical interpretation of the system interacting with a tape, i.e., a sequence of bits. This memory-less model then corresponds to a generalization of the model recently introduced in [42] (see also [41, 43, 44, 45]).

The paper is organized as follows. In Sec. 2 we obtain the optimal protocol for a single period. The full feedback driven models are defined in Sec. 3. The phase transition and gain parameter for the optimized model are studied in Sec. 4. In Sec. 5 the different second law inequalities valid for the model are analyzed. We conclude in Sec. 6.

2 Two-state finite-time process

2.1 Model

The model analyzed in this paper is a two-state system where the time dependent energy of the upper level can be controlled by an external protocol in the time interval . The lower level has always energy zero. This system is connected to a heat bath at temperature and a work reservoir. We consider a finite-time process with duration , where both energy levels are zero immediately before starting and immediately after finishing , see Fig. 1. These initial and final jumps of are a generic feature of optimal protocols [30].

Figure 1: Representation of the finite-time process. Initially, at , the entropy of the system is . At the level with lower probability is lifted. For this energy level is lowered with protocol . At time this energy level is set from back to extracting work if this level had been occupied at .

Denoting the occupation probability of the upper level by , the time derivative of the average internal energy reads

(1)

where the dot represents a time derivative throughout the paper. This is the first law of thermodynamics, where is identified as the rate of extracted work and as the rate of absorbed heat. This identification means that if a jump occurs heat is exchanged with the heat bath and if the energy level changes work is exchanged with the work reservoir. The extracted work in the time interval then becomes

(2)

where the boundary terms come from the discontinuities in represented in Fig. 1. Since the variation of the internal energy is zero, the extracted work equals the heat absorbed from the heat bath, i.e.,

(3)

Even though the system is connected to a single heat bath and the variation of the internal energy is zero, it is still possible to extract work due to the increase in the entropy of the system. More precisely, the second law for such an isothermal process establishes that the extracted work is bounded by the change of the entropy of the system, i.e.,

(4)

where . In this paper we set and, in order to have work extraction, we restrict to the case .
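The entropy bound above involves the Shannon entropy of a two-state system (with the Boltzmann constant set to one, as in the text). A minimal sketch, computing this entropy and the maximal entropy change attainable when the final distribution relaxes toward the uniform one, might look as follows; the function names are illustrative, not from the paper.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats, k_B = 1) of a two-state system with
    upper-level occupation probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def max_entropy_change(p0):
    """Largest possible entropy increase starting from occupation p0:
    the final entropy is at most that of the uniform state, ln 2."""
    return math.log(2.0) - shannon_entropy(p0)

# With k_B T = 1 the extracted work per period is bounded by this
# entropy change, e.g. roughly 0.19 nats starting from p0 = 0.2.
print(max_entropy_change(0.2))
```

The bound is tightest for a sharply known initial state (small initial occupation probability), consistent with the observation in Sec. 2.2 that full knowledge of the initial state maximizes the extractable work.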

2.2 Optimal Protocol

The optimal protocol that leads to the maximal work extraction for given time interval and initial occupation probability is calculated in the remainder of this section. The master equation reads

(5)

where () is the time dependent transition rate to (from) the upper level. These rates must fulfill the detailed balance relation . For convenience, we choose

(6)

Following the analysis for a symmetric choice of rates [34], the optimal protocol and the corresponding maximal extracted work is found by considering the Lagrangian

(7)

where the work is then written as

(8)

Since does not explicitly depend on , we have the following constant of motion,

(9)

Introducing the variable , equation (9) becomes

(10)

The solution of this equation is

(11)

where is the lower branch of the product logarithm [46]. Using relations , , and (11), the extracted work (8) becomes a function of the single variable . The maximal work is then obtained for , where . In this way, is given by the solution of the transcendental equation

(12)
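The lower branch of the product logarithm appearing in (11) is available as `scipy.special.lambertw(x, -1)`, but it can also be evaluated with a short Newton iteration, as sketched below under the assumption that the argument lies in the real domain of that branch; the function name is illustrative.

```python
import math

def lambert_w_lower(x, tol=1e-12):
    """Lower branch W_{-1} of the product logarithm: solves w * exp(w) = x
    with w <= -1, real for -1/e <= x < 0."""
    if not (-1.0 / math.e <= x < 0.0):
        raise ValueError("W_{-1} is real only on [-1/e, 0)")
    # Asymptotic initial guess, accurate for x -> 0^- (see Corless et al. [46]).
    L1 = math.log(-x)
    w = L1 - math.log(-L1)
    for _ in range(100):
        e_w = math.exp(w)
        f = w * e_w - x
        w_new = w - f / (e_w * (w + 1.0))   # Newton step
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

w = lambert_w_lower(-0.1)
print(w, w * math.exp(w))  # the second value should reproduce -0.1
```

The transcendental equation (12) itself has no closed form and must likewise be solved numerically, e.g. by bisection on the single remaining variable.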

For convenience the optimal protocol and corresponding maximal work are simply denoted by and , respectively. From (8) the maximal work that can be extracted for fixed and is

(13)

and the corresponding optimal protocol reads

(14)

where is given by the solution of equation (12). In Fig. 2, we plot the maximal work, the power and the discontinuities of the optimal protocol as a function of for given . The optimal work is a decreasing function of , with full knowledge of the initial state leading to the maximal work extraction for fixed . By increasing , the work increases whereas the power decreases, going to zero in the limit . The initial and final energy jumps decrease with , being maximal for . The initial jump increases with , while the final jump decreases with . More precisely, for we have and the difference between the jumps grows with , with reaching its maximal value and for .

Finally it is useful for the following discussion to give for the optimal protocol explicitly as

(15)
Figure 2: Maximal work for different values of on the left, with the power plotted in the inset. The initial energy jump and the final energy jump in the inset are plotted on the right.

3 Feedback driven machine

3.1 Imperfect measurements

An information driven machine periodically repeats the process explained in the previous section using feedback control. Measurements and feedback drive the work extraction by resetting the entropy of the system at the end of the time interval. We denote the state of the system just before starting period by and the measurement by , where () means that the left (right) state is occupied while () corresponds to measuring the left (right) state as being occupied. The conditional probability of the measurement is defined as

(16)

where is the measurement error. The machine never knows the real state of the system and has access only to the history of measurements . Hence, in all calculations that follow the state of the system is always averaged out.
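The measurement model (16) states that the outcome reports the true state with probability one minus the measurement error, and the opposite state otherwise. A minimal sketch of such an imperfect measurement, with illustrative function names, is:

```python
import random

def measure(x, eps, rng):
    """Imperfect measurement of the two-state variable x in {0, 1}:
    returns x with probability 1 - eps and the opposite state with
    probability eps, the measurement error of (16)."""
    return x if rng.random() < 1.0 - eps else 1 - x

rng = random.Random(42)   # fixed seed for reproducibility
eps = 0.2
n = 100_000
errors = sum(measure(1, eps, rng) != 1 for _ in range(n))
print(errors / n)  # empirical error rate, close to eps = 0.2
```

Since the machine only ever sees these outcomes, all quantities below are conditioned on the measurement history with the true state averaged out.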

3.2 Optimized machine

First we consider a feedback procedure with a protocol taking the whole measurement trajectory into account. We are interested in the probability of being at state given the history of measurements , which is denoted . For this feedback scheme the initial occupation probability of the level that will be raised at the beginning of period is

(17)

This means that, contrary to the naive procedure of lifting the level independent of the measurement history, it is also possible to make the unusual choice of lifting the level . In this second case, the level indicated by the last measurement as occupied is lifted: the measurement is judged to be not reliable. Moreover, the machine applies the protocol , which takes into account the whole history of measurements by using the history dependent initial probability .

In A we show that the initial probability fulfills a nonlinear recursion relation. Denoting by the probability at the end of the period that is obtained from the function (15) with initial probability , we define the functions

(18)

and

(19)

where

(20)

The recursion relation for then reads

(21)

As explained in A, the variable has the purpose of identifying whether the measurement outcome corresponds to the upper or the lower level of the interval , with if the upper level is and if the lower level is . We call this machine, which takes the history of measurements into account, the optimized machine because, as we will see in Sec. 4, it leads to more work extraction than a simple memory-less machine, which we define next.
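The core of the history-dependent scheme is a Bayesian update of the occupation probability with the measurement error of (16). The full recursion (21) also folds in the relaxation during one period, which is omitted in the sketch below; the function name and argument convention are illustrative.

```python
def bayes_update(prior, eps, measured_occupied):
    """Posterior probability that a level is occupied after one imperfect
    measurement with error probability eps, given a prior occupation
    probability (the single-measurement analogue of (39))."""
    if measured_occupied:
        like_occ, like_emp = 1.0 - eps, eps
    else:
        like_occ, like_emp = eps, 1.0 - eps
    num = like_occ * prior
    return num / (num + like_emp * (1.0 - prior))

# A perfect measurement (eps = 0) pins the state, while a coin-flip
# measurement (eps = 1/2) leaves the prior unchanged.
print(bayes_update(0.3, 0.0, True))   # -> 1.0
print(bayes_update(0.3, 0.5, True))  # -> 0.3
```

A measurement is judged non-reliable precisely when the posterior of the state last measured as occupied stays below one half, which is what decides which level the optimized machine lifts.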

3.3 Memory-less machine

A memory-less feedback scheme that only takes the last measurement into account would be to simply apply a protocol for which the level raised for the next period is just the state measured as empty. Hence, for a measurement outcome , the level is lifted at the beginning of period . As we show in B, where the memory-less machine is more explicitly defined, the average initial occupation probability of the upper level is , independent of the protocol. Therefore, the appropriate choice for a protocol that must be independent of the whole measurement history and corresponds to the memory-less version of the optimized machine is , which is obtained from (14) with .

4 Gain and phase transition

The work extracted during period with the optimized machine is denoted by . For a given measurement realization we define

(22)

The average work is obtained by considering the limit and averaging over all measurement trajectories, where the brackets denote the average over measurement trajectories. Numerical simulations for large enough indicate that (and other observables we calculate below) is independent of the numerically generated measurement history, i.e., it is self-averaging. Therefore, we calculate the average work by generating a single long measurement history.
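The self-averaging property used here can be illustrated with a stand-in observable: the running average over one long randomly generated trajectory is, within small fluctuations, independent of which trajectory was generated. The i.i.d. stand-in below is only illustrative; in the paper the per-period work is correlated through the recursion (21), but the same single-trajectory estimate applies.

```python
import random

def long_run_average(seed, n=200_000):
    """Running average of a stand-in per-period observable over one long
    randomly generated history (illustrative i.i.d. stand-in for the
    per-period extracted work)."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# Two independently generated long histories give the same average up to
# small fluctuations -- the self-averaging invoked in the text.
a, b = long_run_average(1), long_run_average(2)
print(a, b)
```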

For the memory-less machine the average work is just , as demonstrated in B. The improvement of the optimized machine in relation to the memory-less one is quantified by the gain parameter

(23)

Naively one expects the optimized machine that takes the history of measurements into account to extract more work than the simple machine. This expectation is confirmed by numerical simulations, from which we observe that . For the work extraction in the memory-less model would be the same as in the optimized model and for the work extraction is much higher with the optimized model. In Fig. 3, we plot in the -plane. The gain approaches for small and close to , where non-reliable measurements are more likely to occur.

Figure 3: The gain parameter (left) and the order parameter (right) as functions of the time interval and the measurement error . The results are obtained by numerically generating a measurement trajectory of length . The full black critical line is obtained analytically from (53) and the dotted line on the right panel from (54).

It turns out that the optimized model displays a phase transition. The order parameter for this transition is the frequency at which the state is lifted, i.e.,

(24)

where if the measurement is not reliable and if the measurement is reliable (see A for a precise definition). The numerical calculation of this order parameter is also shown in Fig. 3. We can clearly see a phase transition with below a critical threshold . The numerical results indicate that the transition is of second order.

The optimized machine has two advantages in relation to the memory-less machine: it lifts the level if the last measurement is not reliable and it uses a history dependent protocol . By comparing with in Fig. 3, we see that in the phase there is a substantial gain. This means that the first advantage is the key feature leading to more work extraction for the optimized feedback scheme. Moreover, as shown in A, in the phase the average initial occupation probability is . Hence, in this phase arises from the fact that the function plotted in Fig. 2 is convex, implying , where the average is defined in (45).

As we show in C, the critical line can be obtained analytically from the transcendental equation (53). It is in perfect agreement with numerical results, as shown in Fig. 3.

5 Second law inequalities

5.1 Efficiency and power

The second law for feedback driven systems [10] states that the average extracted work is bounded by the average mutual information between system and controller due to measurements. The mutual information between the system and the controller due to the measurement is defined as

(25)
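For the binary measurement channel of (16), the average mutual information between state and measurement outcome can be computed directly from the joint distribution. The sketch below assumes, for illustration, a given occupation probability of the state; the function name is not from the paper.

```python
import math

def mutual_information(eps, q=0.5):
    """Average mutual information (in nats) between a binary state x
    (occupied with probability q) and its imperfect measurement m,
    which reports x with probability 1 - eps (cf. (16))."""
    # Joint distribution p(x, m) of state and measurement outcome.
    joint = {(x, m): (q if x == 1 else 1.0 - q) *
                     ((1.0 - eps) if m == x else eps)
             for x in (0, 1) for m in (0, 1)}
    px = {x: joint[(x, 0)] + joint[(x, 1)] for x in (0, 1)}
    pm = {m: joint[(0, m)] + joint[(1, m)] for m in (0, 1)}
    return sum(p * math.log(p / (px[x] * pm[m]))
               for (x, m), p in joint.items() if p > 0.0)

# An error-free measurement of an equally likely state acquires ln 2
# nats; a coin-flip measurement (eps = 1/2) acquires none.
print(mutual_information(0.0), mutual_information(0.5))
```

Consistently with Fig. 4, the acquired information shrinks as the error grows, which is why the efficiency can increase with the measurement error even as the extracted work decreases.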

We denote the average mutual information by , so that the efficiency of the optimal machine reads

(26)
Figure 4: The efficiency and the average power as functions of the time interval and the measurement error . The results are obtained by numerically generating a measurement trajectory of length .

In Fig. 4 we show the numerically calculated efficiency and power for the optimized model in the -plane. Increasing the time period increases the efficiency but decreases the power of the machine. For fixed , the efficiency increases for increasing measurement error . Hence, maximum power is obtained for small and small , which is, however, a rather inefficient case with .

5.2 Two second law inequalities

Another bound on the extracted work is provided by the Shannon entropy change, as expressed in (4), which for the optimized model can be written as

(27)

where the Shannon entropy change in interval is

(28)

The inequality (27) is then valid for a fixed measurement trajectory, whereas the standard second law for feedback driven systems is valid only after an average over measurement trajectories is taken. By numerical inspection we observe that (or ) can be smaller than . Furthermore, by taking the average for large we find within numerical errors. This equality can be demonstrated with the following heuristic argument. Rearranging the terms in the mutual information we get

(29)

where the average is defined in (45). From this equation it is clear that the average mutual information and the average Shannon entropy change differ only by boundary terms, which for large should be irrelevant, implying .

5.3 Memory-less machine as a system interacting with a tape

As in B we now consider a memory-less machine using an arbitrary protocol . Equation (29) is also valid for the memory-less case and, therefore, the average Shannon entropy change should be equal to the average mutual information for large . This result was confirmed numerically for the protocol and for , which corresponds to the energy level held fixed during the whole time interval. Similar to (25), the mutual information depending on reads

(30)

where is defined in (50). Denoting by the solution of the master equation (5) with protocol and initial probability we define

(31)

where . Since (and ) is linear in , it follows that , where the average is defined in (51). From the fact that the Shannon entropy is concave we obtain that provides an upper bound on the average mutual information

(32)

As the average extracted work is equal to the work extracted in the first period (see B), if before the first measurement both states are equally probable, the average extracted work is also bounded by the Shannon entropy change in the first period

(33)

Comparing with the other bounds we have and, numerically, for the protocols and , we observe . We conjecture that this entropy change provides the strongest bound on the extracted work.

The inequality

(34)

for the protocol has been recently studied in [41]. In this reference it has been shown that the two-state model can also be interpreted as a tape, i.e., a sequence of bits, interacting with a thermodynamic system. In this interpretation the entropy change is dumped to a tape or information reservoir. The inequality (34) means that the extracted work is bounded by the Shannon entropy change of the tape, which is initially and becomes after the system has written information to it. This model for a tape interacting with a thermodynamic system has been introduced by Mandal and Jarzynski [42], for a model with six instead of two states and a protocol that is also held fixed during the whole time interval. By showing that inequality (34) is valid also for arbitrary protocols, we thus obtain that their model can be generalized to arbitrary time-dependent protocols.

6 Conclusion

We have studied a two-state finite-time optimized information-driven machine. Besides utilizing the optimal protocol, this machine is also optimized in the sense that the feedback scheme takes into account the whole history of measurements. We have shown that the optimized machine leads to more work extraction in comparison to a simple memory-less machine that does not take the full measurement trajectory into account.

This optimized model displays a phase transition with the frequency at which non-reliable measurements occur being the order parameter. In the region of the phase diagram where non-reliable measurements occur with a higher frequency the gain parameter, characterizing the improvement of the optimized in relation to the memory-less machine, was found to be high. Hence the possibility of lifting the state last measured as occupied if the measurement is non-reliable is the key feature that makes the optimized model perform better. Moreover, analyzing the recursion relations for the initial occupation probability of the upper level we have obtained the critical line exactly.

We have shown that the work extraction is bounded both by the Shannon entropy change and the mutual information. While the first bound is valid for every measurement trajectory the second is valid only after averaging over the measurements. In this case, both bounds become the same. Moreover, for the memory-less model we have demonstrated that the average extracted work is bounded by the Shannon entropy change of the first period. This inequality allows for an interpretation of the model as a thermodynamic system interacting with a tape, thus generalizing the model introduced in [42].

Appendix A Iterative relation for the optimized model

In this appendix, we obtain the nonlinear recursive relation for the initial occupation probability of the upper level of the optimized machine (21). From relations

(35)

and

(36)

we obtain

(37)

where we used the definition of measurement error (16). Using the relation

(38)

the conditional probability can then be written as

(39)

Depending on the past measurements the probability on the right side of this equation is either or , where is obtained from and equation (15). From equation (17), the state indicated by the measurement as occupied is lifted provided , which from (39) is equivalent to . Since , a necessary condition for lifting the state at the beginning of period is that .

It is convenient to define the variables , for , and , which takes the value () if the level () is lifted at the beginning of period , i.e.,

(40)

Furthermore, we define . This last variable identifies whether for given the probability is or :

(41)

In words, this equation means that if () then corresponds to the lower (upper) level of period .

For the initial condition before the first period we assume that the states are equally probable, hence, and . Numerical simulations of the measurement trajectory are then performed with the following algorithm:

  1. For period randomly choose a measurement according to the probability given by equations (37) and (41);

  2. with , the variable , and equations (17), (39) and (41) calculate and ;

  3. from relation (15) and calculate . Go back to the first step with the substitution .

This algorithm can be translated into a recursion relation for the initial probability. Using (39) and (41), relation (17) becomes

(42)

where

(43)

and

(44)

with . Note that the function is minimal when , which implies . Only in this case, the state measured as occupied can be lifted at the beginning of period .

Moreover, from (37), (41), and (42), the average initial occupation probability conditioned on is

(45)

If the machine never lifts the level , i.e., for all periods, the above average becomes .

Appendix B Extracted work for the memory-less machine

For the memory-less machine we denote the initial occupation probability at period by . The final occupation probability at period is : as the memory-less machine does not use the optimal protocol, is not obtained from (15) but rather it is the solution of the master equation (5) for a given protocol and initial condition .

Another difference in relation to the optimized model considered in A is that the variable is not necessary for the memory-less machine, since here for all . Hence, for the memory-less machine, equation (41) becomes

(46)

The iterative relation (42) then simplifies to

(47)

where

(48)

and

(49)

with

(50)

Similar to (45), the average initial probability for fixed measurement history is

(51)

We denote by the work that is obtained from (3) with protocol and initial probability . The average work is then given by

(52)

where in the first equality we used the fact that is linear in . Since is independent of the history , it follows that the average work is simply . For the memory-less machine we compare with the optimized one , which is given by (14), and the average work is , which is given by (13). Moreover, assuming that before the first measurement both states are equally probable, which leads to , the average work equals the work extracted in the first period.

Appendix C Critical line

We now obtain the critical line exactly by analyzing the iterative relation for (21). As discussed in A, the level will be lifted at the beginning of interval only if . Hence, in the phase the condition must be fulfilled. The fixed points of these nonlinear maps are obtained from and . The possible trajectories in the cobweb diagram for the first five iterations of relation (21) are shown in Fig. 5. It is clear that does not go below the fixed point . Therefore, the critical line can be obtained analytically by setting

(53)

which leads to a cumbersome transcendental equation. Solving this equation we obtain the full black line in Fig. 3, in perfect agreement with the numerical simulations.
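The fixed-point analysis behind the critical line amounts to iterating a one-dimensional nonlinear map until it settles, as in the cobweb diagram of Fig. 5. Since the explicit map (21) is not reproduced here, the sketch below uses a hypothetical stand-in contraction purely to illustrate how such a fixed point is located numerically.

```python
import math

def iterate_map(f, x0, n=200):
    """Trajectory (the cobweb abscissas) of the iteration x_{k+1} = f(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# Hypothetical stand-in map; the actual recursion (21) involves the
# relaxed probability (15) and the measurement error, omitted here.
f = lambda x: 0.5 * (x + 0.3 / (0.3 + x))

xs = iterate_map(f, 0.9)
x_star = xs[-1]
print(x_star, abs(f(x_star) - x_star))  # fixed point; residual near zero
```

In the paper the critical line follows from requiring the fixed point of the reliable-measurement branch to sit exactly at the reliability threshold, condition (53).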

Figure 5: Cobweb diagram of all possible first five iterations, starting with , of the relation (21). In the left panel and , in the central panel and , and in the right panel and . For the first two cases , whereas in the third case .

Moreover, the phase can be further separated into two distinct regions, where the line (dotted line in Fig. 3) separating these two regions is obtained from

(54)

In the region closer to the critical line with the machine does not lift the state measured as occupied because is not small enough, i.e., (see Fig. 5). However, in this region depending on the initial condition the machine might lift the state measured as occupied during some initial transient.

References

  • [1] Szilard L, 1929 Z. Phys. 53 840 – 856
  • [2] Landauer R, 1961 IBM J. Res. Dev. 5 183–191
  • [3] Bennett C, 1982 Int. J. Theor. Phys. 21 905–940
  • [4] Sagawa T and Ueda M, 2010 Phys. Rev. Lett. 104 090602
  • [5] Seifert U, 2012 Rep. Prog. Phys. 75 126001
  • [6] Toyabe S, Sagawa T, Ueda M, Muneyuki E and Sano M, 2010 Nature Phys. 6 988
  • [7] Bérut A, Arakelyan A, Petrosyan A, Ciliberto S, Dillenschneider R and Lutz E, 2012 Nature 483 187–189
  • [8] Touchette H and Lloyd S, 2000 Phys. Rev. Lett. 84 1156
  • [9] Touchette H and Lloyd S, 2004 Physica A 331 140–172
  • [10] Cao F J and Feito M, 2009 Phys. Rev. E 79 041118
  • [11] Ponmurugan M, 2010 Phys. Rev. E 82 031129
  • [12] Horowitz J M and Vaikuntanathan S, 2010 Phys. Rev. E 82 061120
  • [13] Abreu D and Seifert U, 2012 Phys. Rev. Lett. 108 030601
  • [14] Kundu A, 2012 Phys. Rev. E 86 021107
  • [15] Sagawa T and Ueda M, 2012 Phys. Rev. E 85 021104
  • [16] Sagawa T and Ueda M, 2012 Phys. Rev. Lett. 109 180602
  • [17] Ito S and Sagawa T, 2013 Phys. Rev. Lett. 111 180603
  • [18] Allahverdyan A, Janzing D and Mahler G, 2009 J. Stat. Mech. P09011
  • [19] Hartich D, Barato A C and Seifert U, 2014 J. Stat. Mech. P02016
  • [20] Cao F J, Dinis L and Parrondo J M R, 2004 Phys. Rev. Lett. 93 040603
  • [21] Horowitz J M and Parrondo J M R, 2011 EPL 95 10005
  • [22] Horowitz J M and Parrondo J M R, 2011 New J. Phys. 13 123019
  • [23] Abreu D and Seifert U, 2011 EPL 94 10001
  • [24] Granger L and Kantz H, 2011 Phys. Rev. E 84 061110
  • [25] Kish L B and Granqvist C G, 2012 EPL 98 68001
  • [26] Esposito M and Schaller G, 2012 EPL 99 30003
  • [27] Strasberg P, Schaller G, Brandes T and Esposito M, 2013 Phys. Rev. Lett. 110 040601
  • [28] Horowitz J M, Sagawa T and Parrondo J M R, 2013 Phys. Rev. Lett. 111 010602
  • [29] Granger L and Kantz H, 2013 EPL 101 50004
  • [30] Schmiedl T and Seifert U, 2007 Phys. Rev. Lett. 98 108301
  • [31] Then H and Engel A, 2008 Phys. Rev. E 77 041105
  • [32] Schmiedl T and Seifert U, 2008 EPL 81 20003
  • [33] Gomez-Marin A, Schmiedl T and Seifert U, 2008 J. Chem. Phys. 129 024114
  • [34] Esposito M, Kawai R, Lindenberg K and van den Broeck C, 2010 EPL 89 20003
  • [35] Aurell E, Mejia-Monasterio C and Muratore-Ginanneschi P, 2011 Phys. Rev. Lett. 106 250601
  • [36] Aurell E, Mejia-Monasterio C and Muratore-Ginanneschi P, 2012 Phys. Rev. E 85 020103
  • [37] Bauer M, Abreu D and Seifert U, 2012 J. Phys. A: Math. Theor. 45 162001
  • [38] Diana G, Bagci G B and Esposito M, 2013 Phys. Rev. E 87 012111
  • [39] Zulkowski P R and DeWeese M R, 2014 Phys. Rev. E 89 052140
  • [40] Kumar N, van den Broeck C, Esposito M and Lindenberg K, 2011 Phys. Rev. E 84 051134
  • [41] Barato A C and Seifert U, 2014 Phys. Rev. Lett. 112 090601
  • [42] Mandal D and Jarzynski C, 2012 Proc. Natl. Acad. Sci. U.S.A. 109 11641–11645
  • [43] Mandal D, Quan H T and Jarzynski C, 2013 Phys. Rev. Lett. 111 030602
  • [44] Barato A C and Seifert U, 2013 EPL 101 60001
  • [45] Deffner S and Jarzynski C, 2013 Phys. Rev. X 3 041003
  • [46] Corless R, Gonnet G, Hare D, Jeffrey D and Knuth D, 1996 Adv. Comput. Math. 5 329–359