Role of measurement-feedback separation in autonomous Maxwell’s demons

Naoto Shiraishi, Sosuke Ito, Kyogo Kawaguchi, and Takahiro Sagawa
Department of Basic Science, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, Japan
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
E-mail: shiraishi@noneq.c.u-tokyo.ac.jp
Abstract

We introduce an information heat engine that is autonomous (i.e., without any time-dependent parameter) but has separated measurement and feedback processes. This model serves as a bridge between different types of information heat engines inspired by Maxwell’s demon: from the original Szilard-engine-type systems to the autonomous demonic setups. By analyzing our model on the basis of a general framework introduced in our previous paper [N. Shiraishi and T. Sagawa, Phys. Rev. E 91, 012130 (2015)], we clarify the role of the separation of measurement and feedback in the integral fluctuation theorems.


1 Introduction

“Maxwell’s demon” is a thought experiment proposed by J. C. Maxwell [2]: if a thermodynamic system is subjected to feedback control at the level of thermal fluctuations, the second law can apparently be violated. A prominent example is the Szilard engine [3], which is a composite system of an engine and a memory. The memory measures the state of the engine and performs feedback on the engine. The feedback procedure allows positive work to be extracted from the engine through an isothermal cyclic process, which, under usual circumstances, is prohibited by the second law of thermodynamics. Szilard characterized the two separated steps of measurement and feedback as the processes that change the extent of the correlation between the engine and the memory, and suggested the possibility of an extended framework of thermodynamics for systems with such changes in correlation. The consistency of the second law of thermodynamics with the existence of Maxwell’s demon has been discussed vigorously [4]. It has been suggested that the key to understanding the consistency is the change in the volume of the phase space [5], which characterizes the thermodynamic irreversibility [6]. Modern theories have revealed that this property is captured by the mutual information, which is a quantity that measures the correlation between the engine and the memory. Generalized thermodynamic relations with the mutual information have been discussed for a single feedback process [7, 8], continuous feedback processes [9, 10, 11], feedback cooling [12, 13, 14], and more general information processing [15, 16, 17].

Although these information-theoretic frameworks are expected to be useful for various problems in small fluctuating systems such as biochemical sensing [18, 19] and quantum mesoscopic systems [21, 20], there is a critical remark to be made: the original framework, which is applicable to Szilard-type demons with step-by-step separated measurement and feedback processes, cannot be directly applied to autonomous stochastic information processing. This is because the processes of measurement and feedback can themselves occur stochastically at any time, and thus they are inseparable in autonomous setups. In order to extend the applicability of information thermodynamics to autonomous Maxwell’s demons, intensive research on models of autonomous demons [22, 23, 24, 21] and on information flow [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35] has recently been carried out. Autonomous demons can be modeled as bipartite Markovian systems where the measurement and the feedback processes are not, in general, separated. The integral fluctuation theorem (IFT) obtained for autonomous demons has distinct features from the relation obtained for Szilard-type systems [16, 17]. The crucial difference comes from the fact that in Szilard-type systems the state of the engine (memory) is fixed externally during the measurement (feedback) phase, whereas in autonomous demons it is not.

In this paper, we clarify the difference between Szilard-type demons and autonomous demons on the basis of a general framework introduced in our previous paper [35]. In order to build a bridge between these two different setups, we propose a new model of Maxwell’s demon, which is autonomous but possesses separated measurement and feedback processes. The key ingredient of the model is an additional stochastic variable, which plays the role of separating the measurement phase and the feedback phase. An IFT with the mutual information can be derived for this model, which is more similar to the IFT satisfied by Szilard-engine-type demons [16, 17] than to that for autonomous bipartite demons [35].

This paper is organized as follows. In Sec. 2, we review IFTs for Szilard-type demons and autonomous demons, where the mutual information plays a crucial role. We also discuss a general framework that leads to the IFTs of autonomous and non-autonomous demons in a unified way, on the basis of the concept of the partial entropy production. In Sec. 3, we introduce a model of an autonomous demon with separated measurement and feedback phases, and derive an IFT for this model. On the basis of these discussions, in Sec. 4, we elucidate the role played by the separation of the measurement phase and the feedback phase.

2 Information thermodynamics: a brief review

2.1 Thermodynamics of small systems

Throughout this paper, we consider Markov jump processes during the time interval $0 \le t \le \tau$. The system is in the isothermal condition with inverse temperature $\beta$, and the possible states of the system are denoted as $x, x', \dots$. The transition from $x$ to $x'$ is written as $x \to x'$, where the transition rate at time $t$ is denoted by $w_{x \to x'}(t)$. We assume that for any state $x$ and any time $t$ there exists another state $x'$ such that $w_{x \to x'}(t) \neq 0$. The time evolution of the system is written as the master equation:

$\dfrac{d}{dt} p(x,t) = \sum_{x' (\neq x)} J_{x' \to x}(t),$   (1)

where $p(x,t)$ is the probability distribution of $x$ at time $t$, and $J_{x' \to x}(t) := p(x',t)\, w_{x' \to x}(t) - p(x,t)\, w_{x \to x'}(t)$ represents the probability flux from $x'$ to $x$.
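As a concrete illustration of Eq. (1), the following minimal Python sketch integrates the master equation for a small system with arbitrary, illustrative transition rates (the three-state example and all rate values are our own choices, not taken from the models discussed later):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state system: w[x, x'] is the transition rate x -> x' (arbitrary values).
w = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 3.0],
              [2.5, 0.4, 0.0]])

# Generator L such that dp/dt = L @ p: off-diagonal L[x', x] = w_{x -> x'},
# diagonal chosen so that every column sums to zero (probability conservation).
L = w.T.copy()
np.fill_diagonal(L, -w.sum(axis=1))

p0 = np.array([1.0, 0.0, 0.0])     # initial distribution p(x, 0)
p_t = expm(2.0 * L) @ p0           # p(x, t = 2) obtained from the master equation
print(p_t, p_t.sum())              # the distribution stays normalized
```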

Jumps between states occur stochastically in each trajectory, where we denote the number of jumps by $N$. The $i$-th jump occurs at time $t_i$ ($i = 1, 2, \dots, N$), and correspondingly the state changes from $x_{i-1}$ to $x_i$. We write the initial and final time of the entire dynamics as $t = 0$ and $t = \tau$. The state at time $t$ is written as $\hat{x}(t)$. We define the total entropy production as [36]

$\hat{\sigma} := \sum_{i=1}^{N} \left( -\beta \hat{q}_i \right) + \hat{s}(x_N, \tau) - \hat{s}(x_0, 0),$   (2)

where

$\hat{q}_i := \dfrac{1}{\beta} \ln \dfrac{w_{x_i \to x_{i-1}}(t_i)}{w_{x_{i-1} \to x_i}(t_i)}$   (3)

is the heat absorption by the system from the heat bath with the transition at time $t_i$, and $\hat{s}(x,t) := -\ln p(x,t)$ represents the stochastic Shannon entropy at the state $x$ and time $t$. Here, we assumed the local detailed balance condition [37], and normalized the Boltzmann constant to unity. The total entropy production characterizes the irreversibility, and satisfies the IFT $\langle e^{-\hat{\sigma}} \rangle = 1$ and the second law of thermodynamics $\langle \hat{\sigma} \rangle \ge 0$ [38, 39, 40, 41], where $\langle \cdot \rangle$ represents the ensemble average over all trajectories.
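The IFT and the second law above can be checked numerically. The following sketch simulates trajectories of a driven three-state cycle prepared in its stationary state, accumulates the total entropy production of Eq. (2) along each trajectory, and evaluates both averages; all rate values are arbitrary illustrative choices, and the script is our own check rather than material from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state cycle with broken detailed balance; w[x, x'] is the rate x -> x'.
w = np.array([[0.0, 2.0, 0.5],
              [0.7, 0.0, 1.8],
              [1.6, 0.3, 0.0]])
n = w.shape[0]
escape = w.sum(axis=1)

# Stationary distribution: solve dp/dt = 0 together with normalization.
L = w.T - np.diag(escape)
M = np.vstack([L, np.ones(n)])
rhs = np.zeros(n + 1); rhs[-1] = 1.0
p_ss, *_ = np.linalg.lstsq(M, rhs, rcond=None)
p_ss = p_ss / p_ss.sum()

def entropy_production(tau=2.0):
    """One Gillespie trajectory of duration tau, started from the stationary state;
    returns the total entropy production sigma of Eq. (2)."""
    x = rng.choice(n, p=p_ss)
    t, sigma = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / escape[x])
        if t > tau:
            return sigma
        x_new = rng.choice(n, p=w[x] / escape[x])
        # bath entropy -beta*q plus the jump part of the Shannon-entropy change;
        # p(x, t) = p_ss(x) at all times because the process starts stationary
        sigma += np.log(w[x, x_new] / w[x_new, x]) + np.log(p_ss[x] / p_ss[x_new])
        x = x_new

samples = np.array([entropy_production() for _ in range(20000)])
print("<sigma>       =", samples.mean())           # non-negative (second law)
print("<exp(-sigma)> =", np.exp(-samples).mean())  # close to 1 (the IFT)
```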

2.2 General framework

We now describe a general theoretical framework introduced in Ref. [35], which is applicable to both Szilard-type and autonomous demons in a unified way, as discussed in the subsequent subsections. Under this framework, the entropy production for the total system is divided into contributions from subsets of individual transitions, where each partial entropy production satisfies an IFT.

We divide the set of all possible transitions into two subsets: $\mathcal{A}$ and its complement $\mathcal{A}^{\rm c}$. In Fig. 1(a), for example, the six red arrows represent the transitions included in $\mathcal{A}$ and the other sixteen black arrows represent the transitions included in $\mathcal{A}^{\rm c}$. We divide the total entropy production into the contributions from $\mathcal{A}$ and $\mathcal{A}^{\rm c}$, which are denoted by $\hat{\sigma}_{\mathcal{A}}$ and $\hat{\sigma}_{\mathcal{A}^{\rm c}}$, respectively. Note that the subset $\mathcal{A}$ can be time-dependent in general.

Figure 1: (a): An example of the choice of $\mathcal{A}$. The six red arrows correspond to transitions in $\mathcal{A}$. (b): A schematic of a time series of the stochastic Shannon entropy along a single trajectory. The initial state is $x_0$, and the jumps $x_0 \to x_1$ and $x_1 \to x_2$ occur at times $t_1$ and $t_2$, respectively. The dashed lines represent the change in the stochastic Shannon entropy induced by the jumps, and the solid lines represent that induced by the change in the probability distribution.

First, we divide the probability flux at the state $x$, $J(x,t) := \sum_{x' (\neq x)} J_{x' \to x}(t)$, as $J(x,t) = J^{\mathcal{A}}(x,t) + J^{\mathcal{A}^{\rm c}}(x,t)$, where $J^{\mathcal{A}}(x,t)$ is defined as

$J^{\mathcal{A}}(x,t) := \sum_{x' :\, (x' \to x) \in \mathcal{A}} J_{x' \to x}(t),$   (4)

and $J^{\mathcal{A}^{\rm c}}(x,t)$ is defined in the same manner. Using Eq. (4), we define the partial entropy production as [35]

$\hat{\sigma}_{\mathcal{A}} := -\beta \hat{Q}_{\mathcal{A}} + \Delta \hat{s}_{\mathcal{A}}.$   (5)

The first term of the right-hand side (rhs) of Eq. (5) represents the heat absorption accompanied by the transitions in $\mathcal{A}$:

$\hat{Q}_{\mathcal{A}} := \sum_{i=1}^{N} \hat{q}_i \, \delta_{\mathcal{A}}(x_{i-1} \to x_i),$   (6)

where $\delta_{\mathcal{A}}(x_{i-1} \to x_i)$ takes $1$ if $(x_{i-1} \to x_i) \in \mathcal{A}$ and takes $0$ otherwise. The second term of the rhs of Eq. (5) represents the change in the stochastic entropy induced by the transitions in $\mathcal{A}$:

$\Delta \hat{s}_{\mathcal{A}} := \Delta \hat{s}^{\rm jump}_{\mathcal{A}} - \int_0^{\tau} dt \, \dfrac{J^{\mathcal{A}}(\hat{x}(t), t)}{p(\hat{x}(t), t)}.$   (7)

The first term of the rhs of Eq. (7) is defined as

$\Delta \hat{s}^{\rm jump}_{\mathcal{A}} := \sum_{i=1}^{N} \left[ \hat{s}(x_i, t_i) - \hat{s}(x_{i-1}, t_i) \right] \delta_{\mathcal{A}}(x_{i-1} \to x_i),$   (8)

which quantifies the contributions from jumps (dashed lines in Fig. 1(b)) within $\mathcal{A}$. The second term of the rhs of Eq. (7) evaluates the change induced by the probability flux (the solid lines in Fig. 1(b)) within $\mathcal{A}$, which is confirmed by

$\dfrac{\partial}{\partial t} \hat{s}(x,t) = -\dfrac{1}{p(x,t)} \dfrac{\partial p(x,t)}{\partial t} = -\dfrac{J^{\mathcal{A}}(x,t) + J^{\mathcal{A}^{\rm c}}(x,t)}{p(x,t)}.$   (9)

It is crucial that the partial entropy production satisfies

$\hat{\sigma} = \hat{\sigma}_{\mathcal{A}} + \hat{\sigma}_{\mathcal{A}^{\rm c}},$   (10)

and the IFT [35]:

$\left\langle e^{-\hat{\sigma}_{\mathcal{A}}} \right\rangle = 1.$   (11)

Applying Jensen’s inequality to Eq. (11), we obtain

$\langle \hat{\sigma}_{\mathcal{A}} \rangle \ge 0.$   (12)
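At the ensemble level, Eqs. (10) and (12) can be illustrated with a few lines of linear algebra: in the stationary state, the average rate of the partial entropy production for a bidirectional subset $\mathcal{A}$ reduces to the usual edge-wise entropy-production rate summed over the edges in $\mathcal{A}$. The following sketch uses an arbitrary four-state network; all rate values and the choice of the subset are illustrative choices of ours:

```python
import numpy as np

# Illustrative 4-state network; w[x, x'] is the rate x -> x' (arbitrary values).
w = np.array([[0.0, 1.5, 0.0, 0.8],
              [0.4, 0.0, 2.0, 0.0],
              [0.0, 0.6, 0.0, 1.2],
              [2.2, 0.0, 0.3, 0.0]])
n = w.shape[0]

# Stationary distribution
L = w.T - np.diag(w.sum(axis=1))
M = np.vstack([L, np.ones(n)]); rhs = np.zeros(n + 1); rhs[-1] = 1.0
p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
p = p / p.sum()

def ep_rate(edges):
    """Average entropy-production rate restricted to the given bidirectional edges,
    each edge being an unordered pair (a, b) with nonzero rates in both directions."""
    total = 0.0
    for a, b in edges:
        J = p[a] * w[a, b] - p[b] * w[b, a]           # net stationary current a -> b
        total += J * np.log((p[a] * w[a, b]) / (p[b] * w[b, a]))
    return total

all_edges  = [(0, 1), (1, 2), (2, 3), (3, 0)]
subset_A   = [(0, 1), (1, 2)]                 # a choice of the subset of transitions
complement = [(2, 3), (3, 0)]
print("rate for A      :", ep_rate(subset_A), "(non-negative, cf. Eq. (12))")
print("additivity (10) :", np.isclose(ep_rate(subset_A) + ep_rate(complement),
                                      ep_rate(all_edges)))
```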

In the following two subsections, we show two IFTs, namely that for Szilard-type demons (18) and that for autonomous demons (20), which are particular cases of Eq. (11).

2.3 IFT for the Szilard-type demon

Figure 2: Repeated measurement and feedback processes. The dashed lines indicate frozen phases and the red bold lines indicate movable phases.

In this and the next subsections, we consider a bipartite system that consists of subsystems $X$ and $Y$, where the state is written as $(x, y)$, with the transition rates satisfying $w_{(x,y) \to (x',y')}(t) = 0$ for $x \neq x'$ and $y \neq y'$ (i.e., the two subsystems do not jump simultaneously). We define the entropy production of subsystem $X$ as

$\hat{\sigma}_X := -\beta \hat{Q}_X + \Delta \hat{s}_X,$   (13)

where $\hat{Q}_X$ is the heat absorbed by subsystem $X$ (i.e., the sum of $\hat{q}_i$ over the jumps of $x$), and $\Delta \hat{s}_X$ is the change in the stochastic Shannon entropy of only $X$, with $\hat{s}_X(x,t) := -\ln p_X(x,t)$, where $p_X(x,t) := \sum_y p(x,y,t)$ and $p_Y(y,t) := \sum_x p(x,y,t)$ denote the marginal distributions. In contrast to the averaged entropy production of the total system $\langle \hat{\sigma} \rangle \ge 0$, that of the subsystem $\langle \hat{\sigma}_X \rangle$ can be negative. In the case of the Szilard engine [3], which is a composite system of an engine and a memory, the entropy production of the engine is negative on average, corresponding to the extraction of work from the isothermal cycle.

In previous works [16, 17], it has been discussed that a modified IFT and second law still hold in these setups, if the change in the correlation between the engine and the memory is taken into account. In the case of the Szilard engine, the measurement process increases the correlation, and the feedback process decreases the correlation. In general, the correlation is quantified by the mutual information. The stochastic mutual information between the state of the engine $x$ and the state of the memory $y$ is defined as

$\hat{I}(x; y, t) := \ln \dfrac{p(x,y,t)}{p_X(x,t)\, p_Y(y,t)},$   (14)

whose ensemble average

$I(X; Y) := \langle \hat{I} \rangle = \sum_{x,y} p(x,y,t) \ln \dfrac{p(x,y,t)}{p_X(x,t)\, p_Y(y,t)}$   (15)

is the mutual information [42]. The mutual information is zero if there is no correlation, and becomes larger as the correlation becomes stronger.
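For instance, the stochastic mutual information of Eq. (14) and its average, Eq. (15), can be computed directly from any joint distribution; the 2x2 distribution below is an arbitrary illustrative choice of ours:

```python
import numpy as np

# Illustrative joint distribution p(x, y) of the engine X and the memory Y (arbitrary values).
p_xy = np.array([[0.40, 0.10],
                 [0.15, 0.35]])
p_x = p_xy.sum(axis=1)                      # marginal of X
p_y = p_xy.sum(axis=0)                      # marginal of Y

I_hat = np.log(p_xy / np.outer(p_x, p_y))   # stochastic mutual information, Eq. (14)
I = np.sum(p_xy * I_hat)                    # its average, the mutual information, Eq. (15)

print(I_hat)
print("I(X;Y) =", I)                        # non-negative; zero only without correlation
```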

Figure 3: The choice of $\mathcal{A}$ in the proof of the IFT (18) for Szilard-type demons.

In order to generalize the setup of the Szilard engine, we here suppose that there are $N_{\rm mf}$ rounds of measurement and feedback processes in a composite system of $X$ and $Y$ during the time interval from $0$ to $\tau$. During the time interval $[\tau_{2k-2}, \tau_{2k-1}]$ ($[\tau_{2k-1}, \tau_{2k}]$) with $k = 1, \dots, N_{\rm mf}$ and $0 = \tau_0 \le \tau_1 \le \dots \le \tau_{2N_{\rm mf}} = \tau$, only $Y$ ($X$) can evolve and $X$ ($Y$) is fixed (see Fig. 2), corresponding to the measurement (feedback) phase. These conditions are written as

$w^{y}_{x \to x'}(t) = 0 \quad \text{for } x' \neq x \text{ and } \tau_{2k-2} \le t \le \tau_{2k-1},$   (16)
$w^{x}_{y \to y'}(t) = 0 \quad \text{for } y' \neq y \text{ and } \tau_{2k-1} \le t \le \tau_{2k},$   (17)

where we abbreviated the transition rates as $w^{y}_{x \to x'}(t) := w_{(x,y) \to (x',y)}(t)$ and $w^{x}_{y \to y'}(t) := w_{(x,y) \to (x,y')}(t)$. This setup represents the repeated measurement and feedback processes between the engine $X$ and the memory $Y$. Here, by applying Eq. (11) with setting all transitions in $X$ as $\mathcal{A}$ (see Fig. 3), the entropy production of the engine satisfies the following IFT and the generalized second law [17]:

$\left\langle e^{-\hat{\sigma}_X + \sum_{k=1}^{N_{\rm mf}} \Delta \hat{I}_k} \right\rangle = 1,$   (18)
$\langle \hat{\sigma}_X \rangle \ge \sum_{k=1}^{N_{\rm mf}} \langle \Delta \hat{I}_k \rangle,$   (19)

where $\Delta \hat{I}_k$ represents the change in the mutual information between $X$ and $Y$ before and after the $k$-th feedback process. This IFT implies that if we consume the correlation between the engine and the memory (i.e., $\sum_k \langle \Delta \hat{I}_k \rangle < 0$), the entropy production in the engine can be negative (i.e., $\langle \hat{\sigma}_X \rangle < 0$) up to the amount of the consumption. In contrast, if we establish their correlation, we need the corresponding extra cost. Equality (18) is a special case of a more general IFT that is discussed below.
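As a simple illustration of inequality (19), consider the ideal Szilard engine with a single cycle. An error-free measurement establishes the mutual information $\ln 2$, and the subsequent feedback extracts the work $k_{\rm B} T \ln 2$ from a single heat bath while the engine returns to its initial distribution, so that

$\langle \hat{\sigma}_X \rangle = -\beta Q_X = -\ln 2, \qquad \langle \Delta \hat{I}_1 \rangle = 0 - \ln 2 = -\ln 2,$

and the bound (19) is saturated: the negative entropy production of the engine is exactly compensated by the consumed mutual information.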

2.4 IFT for general measurement and feedback processes

In this subsection, we discuss the case of general measurement and feedback processes with bipartite systems, which is applicable, for example, to autonomous information processing in biological systems [18, 19]. The transition rate of the engine $X$ depends on the state of the memory $Y$, and vice versa, although the time separation of the measurement and feedback [Eqs. (16) and (17)] is not assumed.

Figure 4: The choice of $\mathcal{A}$ in the proof of the IFT (20) for general measurement and feedback processes.

We define the entropy production associated only with $X$ by the same form as Eq. (13). By applying Eq. (11) with setting all transitions in $X$ as $\mathcal{A}$ (see Fig. 4), it has been shown that the following IFT holds [35]:

$\left\langle e^{-\hat{\sigma}_X + \Delta \hat{I}_X} \right\rangle = 1,$   (20)

where $\Delta \hat{I}_X$ represents the change in the mutual information contributed from the dynamics of $X$ [27, 28, 29, 33], defined as

$\Delta \hat{I}_X := \Delta \hat{I}^{\rm jump}_X + \int_0^{\tau} dt \, \dot{\hat{I}}^{\rm flux}_X(\hat{x}(t), \hat{y}(t), t).$   (21)

The first term represents the change in the mutual information with realized jumps of $X$, defined as

$\Delta \hat{I}^{\rm jump}_X := \sum_{i=1}^{N} \left[ \hat{I}(x_i; y_i, t_i) - \hat{I}(x_{i-1}; y_{i-1}, t_i) \right] \delta_{y_i, y_{i-1}},$   (22)

where $\delta$ is the Kronecker delta, which selects the jumps of $x$ owing to the bipartite condition. Here, $(x_i, y_i)$ denotes the state just after the $i$-th jump. The second term involves a time integral over the entire time interval; its integrand represents the change in the mutual information induced by the probability flux of $X$, which is defined as

$\dot{\hat{I}}^{\rm flux}_X(x, y, t) := \dfrac{J^X(x,y,t)}{p(x,y,t)} - \dfrac{\sum_{y'} J^X(x,y',t)}{p_X(x,t)},$   (23)

where $J^X(x,y,t) := \sum_{x'} \left[ p(x',y,t)\, w^{y}_{x' \to x}(t) - p(x,y,t)\, w^{y}_{x \to x'}(t) \right]$.
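At the ensemble level, the average rate of $\Delta \hat{I}_X$ in the stationary state is the information flow discussed in Refs. [27, 28, 29, 33]. The following sketch evaluates it for a small bipartite system in its stationary state and checks that the flows attributed to $X$ and to $Y$ sum to zero there; all rate values are illustrative choices of ours:

```python
import numpy as np

# Bipartite states (x, y), x, y in {0, 1}; only one variable changes per jump.
# wx[y] gives the x-flip rates at fixed y, wy[x] the y-flip rates at fixed x (arbitrary values).
wx = {0: np.array([[0.0, 2.0], [0.5, 0.0]]),
      1: np.array([[0.0, 0.4], [1.5, 0.0]])}
wy = {0: np.array([[0.0, 1.0], [3.0, 0.0]]),
      1: np.array([[0.0, 2.5], [0.7, 0.0]])}

idx = lambda x, y: 2 * x + y
W = np.zeros((4, 4))                            # W[i, j]: rate from state i to state j
for x in range(2):
    for y in range(2):
        W[idx(x, y), idx(1 - x, y)] = wx[y][x, 1 - x]
        W[idx(x, y), idx(x, 1 - y)] = wy[x][y, 1 - y]

L = W.T - np.diag(W.sum(axis=1))                # stationary distribution of the joint chain
M = np.vstack([L, np.ones(4)]); rhs = np.zeros(5); rhs[-1] = 1.0
p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
p = p / p.sum()
pj = {(x, y): p[idx(x, y)] for x in range(2) for y in range(2)}
px = {x: pj[(x, 0)] + pj[(x, 1)] for x in range(2)}
py = {y: pj[(0, y)] + pj[(1, y)] for y in range(2)}

def info_flow_X():
    """Average rate of the mutual-information change due to jumps of X."""
    flow = 0.0
    for y in range(2):
        J = pj[(0, y)] * wx[y][0, 1] - pj[(1, y)] * wx[y][1, 0]   # net current x: 0 -> 1
        flow += J * np.log((pj[(1, y)] / px[1]) / (pj[(0, y)] / px[0]))
    return flow

def info_flow_Y():
    flow = 0.0
    for x in range(2):
        J = pj[(x, 0)] * wy[x][0, 1] - pj[(x, 1)] * wy[x][1, 0]   # net current y: 0 -> 1
        flow += J * np.log((pj[(x, 1)] / py[1]) / (pj[(x, 0)] / py[0]))
    return flow

print(info_flow_X(), info_flow_Y())
print("their sum vanishes in the stationary state:", info_flow_X() + info_flow_Y())
```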

3 Autonomous Maxwell’s demon with separated measurement and feedback processes

In contrast to the IFT (18) for Szilard-type demons, the IFT (20) for general measurement and feedback includes a time-integral term over the entire time interval. The most important difference between these setups is whether the time intervals of the measurement phase and the feedback phase are externally separated. Although autonomous systems are not controlled externally, the two phases can be separated even in autonomous systems by introducing an additional stochastic variable which determines whether the present state is in the measurement phase or in the feedback phase. To clarify this point, we construct an autonomous model with separated measurement and feedback phases, and derive an IFT satisfied in this system.

3.1 Model and setups

Figure 5: (a): The state space of the 4-state model. If a particle is (is not) in the site, the wall tends to move right (left). (b): The state space of the 8-state model. For $v = {\rm M}$ ($v = {\rm F}$), only the state of the wall (the particle) can change. The additional variable $v$ also changes stochastically.

Let us start from simple models with and without measurement-feedback separation. We first consider the autonomous 4-state model introduced in Ref. [35], where the measurement phase and the feedback phase are not separated. The 4-state model is a bipartite system which transports particles from a dense particle bath L to a dilute particle bath R (see Fig. 5(a)). There is a site between the two baths which holds at most one particle, where $x = 1$ ($x = 0$) represents the presence (absence) of a particle at the site. We then introduce a wall which prohibits jumps of particles. If the wall is between the site and bath L (R), we denote the state of the wall as $y = {\rm l}$ ($y = {\rm r}$). The position of the wall tends to be $y = {\rm r}$ ($y = {\rm l}$) if a particle exists (does not exist) at the site. With these processes, particles are transported against the chemical potential gradient.

Now we construct an autonomous model, named the 8-state model, with the measurement-feedback separation, by adding a new stochastic variable $v$ to the 4-state model (see Fig. 5(b)). The role of $v$ is to determine whether the present state is in the measurement phase ($v = {\rm M}$) or the feedback phase ($v = {\rm F}$). When $v = {\rm F}$ ($v = {\rm M}$), only particles (the wall) can move and the wall (particles) is fixed, which corresponds to the feedback (measurement) phase. Since the probability distributions and transition rates are independent of time, we omit the time argument $t$. These conditions are written as $w_{(x,y,{\rm M}) \to (x',y,{\rm M})} = 0$ for $x' \neq x$, and $w_{(x,y,{\rm F}) \to (x,y',{\rm F})} = 0$ for $y' \neq y$. We assume that the transition rates of $v$ are independent of $x$ and $y$, and write them as $\omega_{{\rm M} \to {\rm F}}$ and $\omega_{{\rm F} \to {\rm M}}$.
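For concreteness, the following sketch constructs a rate matrix with exactly this structure and computes its stationary distribution; all numerical rate values (and the convention that the wall tends to track the particle) are illustrative choices of ours rather than values from the paper:

```python
import numpy as np

# States (x, y, v): x in {0, 1} (particle), y in {0, 1} (wall position, 0 = "l", 1 = "r"),
# v in {0, 1} (0 = measurement phase M, 1 = feedback phase F). Rate values are illustrative.
kx = {0: np.array([[0.0, 1.2], [0.8, 0.0]]),   # particle-flip rates at v = F, wall at "l"
      1: np.array([[0.0, 0.3], [2.0, 0.0]])}   # ... and wall at "r"
ky = {0: np.array([[0.0, 0.5], [2.5, 0.0]]),   # wall-flip rates at v = M, no particle
      1: np.array([[0.0, 2.5], [0.5, 0.0]])}   # ... and with a particle (wall tracks it)
w_MF, w_FM = 1.0, 1.5                          # v-flip rates, independent of (x, y)

states = [(x, y, v) for x in range(2) for y in range(2) for v in range(2)]
index = {s: i for i, s in enumerate(states)}

W = np.zeros((8, 8))
for (x, y, v) in states:
    i = index[(x, y, v)]
    if v == 1:                                  # feedback phase: only the particle moves
        W[i, index[(1 - x, y, v)]] = kx[y][x, 1 - x]
    else:                                       # measurement phase: only the wall moves
        W[i, index[(x, 1 - y, v)]] = ky[x][y, 1 - y]
    W[i, index[(x, y, 1 - v)]] = w_MF if v == 0 else w_FM

L = W.T - np.diag(W.sum(axis=1))               # stationary distribution of the 8-state model
M = np.vstack([L, np.ones(8)]); rhs = np.zeros(9); rhs[-1] = 1.0
p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
p = p / p.sum()
for s in states:
    print(s, round(float(p[index[s]]), 4))
```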

We can easily generalize this setup to the case with more than eight states. Suppose a composite system with three variables, $x$, $y$, and an additional variable $v$, which is in the stationary distribution with time-independent transition rates. We assume that only one of $x$, $y$, and $v$ changes within a single transition, and that the possible values of $v$ are divided into two groups $\mathcal{V}_{\rm M}$ and $\mathcal{V}_{\rm F}$, corresponding to the measurement phase and the feedback phase, respectively. The transition rates also satisfy the following conditions (see also Fig. 6(a)):

$w_{(x,y,v) \to (x',y,v)} = 0 \quad \text{for } x' \neq x \text{ and } v \in \mathcal{V}_{\rm M},$   (24)
$w_{(x,y,v) \to (x,y',v)} = 0 \quad \text{for } y' \neq y \text{ and } v \in \mathcal{V}_{\rm F},$   (25)
$w_{(x,y,v) \to (x,y,v')} \ \text{is independent of } (x,y).$   (26)

3.2 IFT for separated autonomous demons

We now discuss an IFT for autonomous demons with separated measurement and feedback (e.g., the 8-state model). The entropy production of the engine $X$ is defined as

$\hat{\sigma}_X := \sum_{i :\, x_i \neq x_{i-1}} \left( -\beta \hat{q}_i \right) + \hat{s}_X(\hat{x}(\tau)) - \hat{s}_X(\hat{x}(0)).$   (27)

By using the stationary distribution $p(x) := \sum_{y,v} p(x,y,v)$, we defined the stochastic entropy of $X$ as $\hat{s}_X(x) := -\ln p(x)$. Under the setup described in Sec. 3.1, $\hat{\sigma}_X$ satisfies the following IFT:

$\left\langle e^{-\hat{\sigma}_X + \hat{\Theta} + \hat{I}_{\rm bd}} \right\rangle = 1.$   (28)

Here, $\hat{\Theta}$ is defined as

$\hat{\Theta} := -\sum_{i :\, {\rm M} \to {\rm F}} \hat{I}(x_i; y_i | {\rm M}) + \sum_{i :\, {\rm F} \to {\rm M}} \hat{I}(x_i; y_i | {\rm F}),$   (29)

which subtracts (adds) the mutual information at the switching time when the state changes from the measurement (feedback) phase to the feedback (measurement) phase. The conditional mutual information is defined as

$\hat{I}(x; y | v) := \ln \dfrac{p(x,y|v)}{p(x|v)\, p(y|v)},$   (30)

where $p(x,y|v) := p(x,y,v)/p(v)$, $p(x|v) := \sum_y p(x,y|v)$, and $p(y|v) := \sum_x p(x,y|v)$. The definition of $\Delta \hat{I}_k$ in Eq. (18) is regarded as the change in the mutual information between the endpoint of the measurement processes and the endpoint of the feedback processes. $\hat{I}_{\rm bd}$ is defined as $\hat{I}_{\rm bd} := -\hat{I}(x_0; y_0 | {\rm F})\, \delta_{\hat{v}(0), {\rm F}} + \hat{I}(x_N; y_N | {\rm F})\, \delta_{\hat{v}(\tau), {\rm F}}$, which counts the mutual information at the initial and the final states if these states are in the feedback phase.

Figure 6: (a): The state space of a system which satisfies the setup in Sec. 3.1. For $v \in \mathcal{V}_{\rm F}$, only $x$ can change, and for $v \in \mathcal{V}_{\rm M}$, only $y$ can change. (b): The same state space seen along the $v$ axis. Since $(x,y)$ does not change at the transitions of $v$, $p({\rm M})\, \omega_{{\rm M} \to {\rm F}} = p({\rm F})\, \omega_{{\rm F} \to {\rm M}}$ holds in the stationary state.

We now derive Eq. (28) from Eq. (11) by setting $\mathcal{A}$ to the set of all transitions whose final state belongs to the feedback phase (see also Fig. 7). In this case, since the system is in the stationary state, the entropy-change part of $\hat{\sigma}_{\mathcal{A}}$ is calculated as

(31)

where we used $J(x,y,v) = 0$ for all $(x,y,v)$. The difference between the two expressions is equal to the heat absorption accompanying the transitions from the measurement phase to the feedback phase:

(32)

where we used the stationary condition for $v$, namely $p({\rm M})\, \omega_{{\rm M} \to {\rm F}} = p({\rm F})\, \omega_{{\rm F} \to {\rm M}}$. Since $x$ is fixed while $v \in \mathcal{V}_{\rm M}$ and $y$ is fixed while $v \in \mathcal{V}_{\rm F}$, it is easy to show that

(33)

We also note that

(34)
(35)

for all $(x, y)$ (see Fig. 6(b)), which follows from the fact that $\omega_{{\rm M} \to {\rm F}}$ and $\omega_{{\rm F} \to {\rm M}}$ are constant and independent of $(x, y)$.

(36)

which implies Eq. (28).
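One stationary property enforced by the constant $v$-flip rates can be stated and checked independently: in the stationary state the conditional distributions of the engine and of the memory are the same in both phases, $p(x|{\rm M}) = p(x|{\rm F})$ and $p(y|{\rm M}) = p(y|{\rm F})$. The following sketch verifies this numerically for randomly drawn rates that respect the constraints of Sec. 3.1; the construction and all rate values are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random rates respecting the structure of Sec. 3.1: x moves only in the feedback
# phase (v = 1), y only in the measurement phase (v = 0), and the v-flip rates do
# not depend on (x, y). All rate values are drawn at random for illustration.
nx, ny = 3, 4
kx = rng.uniform(0.1, 2.0, size=(ny, nx, nx)); kx[:, np.arange(nx), np.arange(nx)] = 0.0
ky = rng.uniform(0.1, 2.0, size=(nx, ny, ny)); ky[:, np.arange(ny), np.arange(ny)] = 0.0
w_MF, w_FM = rng.uniform(0.1, 2.0, size=2)

states = [(x, y, v) for x in range(nx) for y in range(ny) for v in range(2)]
index = {s: i for i, s in enumerate(states)}
W = np.zeros((len(states), len(states)))
for (x, y, v) in states:
    i = index[(x, y, v)]
    if v == 1:
        for x2 in range(nx):
            if x2 != x:
                W[i, index[(x2, y, v)]] = kx[y, x, x2]
    else:
        for y2 in range(ny):
            if y2 != y:
                W[i, index[(x, y2, v)]] = ky[x, y, y2]
    W[i, index[(x, y, 1 - v)]] = w_MF if v == 0 else w_FM

L = W.T - np.diag(W.sum(axis=1))
M = np.vstack([L, np.ones(len(states))])
rhs = np.zeros(len(states) + 1); rhs[-1] = 1.0
p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
p = p / p.sum()

def cond(v):
    """Conditional joint distribution p(x, y | v) as an (nx, ny) array."""
    q = np.array([[p[index[(x, y, v)]] for y in range(ny)] for x in range(nx)])
    return q / q.sum()

pM, pF = cond(0), cond(1)
print(np.allclose(pM.sum(axis=1), pF.sum(axis=1)))   # p(x|M) = p(x|F)
print(np.allclose(pM.sum(axis=0), pF.sum(axis=0)))   # p(y|M) = p(y|F)
```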

Figure 7: The choice of $\mathcal{A}$ in the proof of Eq. (28). All transitions whose final state belongs to the feedback phase are set as $\mathcal{A}$; they are colored red.

3.3 State-space reduction of autonomous demon

In this subsection, we show that the IFT for non-separated autonomous demons (20) in stationary states can be naturally derived from the IFT for separated autonomous demons (28), by considering the reduction of the variable . Suppose that there are two models, and . Model has three variables , and satisfies the setup described in Sec. 3.1. Model has two variables , and satisfies the setup described in Sec. 2.4. Hence, Eq. (28) holds in model and Eq. (20) holds in model . In the following, superscript () represents quantities in model (). By using the transition rates of model , we set the transition rates of model to

(37)
(38)

Here, we introduce a quantity :

(39)

which characterizes the typical rate of the transition in . By taking the limit in model with fixed , , , and , we obtain a reduced Markovian dynamics with the variables , which is equivalent to the dynamics of model .
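The reduction can also be illustrated numerically. In the fast-switching limit, the $(x, y)$ marginal of the separated model approaches a bipartite dynamics whose rates are the phase-weighted rates; the weighting used below is one natural correspondence chosen by us for illustration (the precise mapping intended here is fixed by Eqs. (37) and (38)), and all rate values are again arbitrary:

```python
import numpy as np

def stationary(W):
    n = W.shape[0]
    L = W.T - np.diag(W.sum(axis=1))
    M = np.vstack([L, np.ones(n)]); rhs = np.zeros(n + 1); rhs[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return p / p.sum()

# Illustrative rates (same conventions as the 8-state sketch above).
kx = {0: np.array([[0.0, 1.2], [0.8, 0.0]]), 1: np.array([[0.0, 0.3], [2.0, 0.0]])}
ky = {0: np.array([[0.0, 0.5], [2.5, 0.0]]), 1: np.array([[0.0, 2.5], [0.5, 0.0]])}
w_MF, w_FM = 1.0, 1.5
pF, pM = w_MF / (w_MF + w_FM), w_FM / (w_MF + w_FM)    # stationary weights of the phases

idx = lambda x, y, v: 4 * x + 2 * y + v
jdx = lambda x, y: 2 * x + y

for omega in [1.0, 10.0, 1000.0]:                       # speed up the v-flips
    W8 = np.zeros((8, 8))                               # separated (8-state) model
    W4 = np.zeros((4, 4))                               # reduced bipartite model
    for x in range(2):
        for y in range(2):
            W8[idx(x, y, 1), idx(1 - x, y, 1)] = kx[y][x, 1 - x]
            W8[idx(x, y, 0), idx(x, 1 - y, 0)] = ky[x][y, 1 - y]
            W8[idx(x, y, 0), idx(x, y, 1)] = omega * w_MF
            W8[idx(x, y, 1), idx(x, y, 0)] = omega * w_FM
            W4[jdx(x, y), jdx(1 - x, y)] = pF * kx[y][x, 1 - x]
            W4[jdx(x, y), jdx(x, 1 - y)] = pM * ky[x][y, 1 - y]
    p8 = stationary(W8)
    pxy_8 = np.array([[p8[idx(x, y, 0)] + p8[idx(x, y, 1)] for y in range(2)]
                      for x in range(2)])
    p4 = stationary(W4).reshape(2, 2)
    print(omega, np.abs(pxy_8 - p4).max())              # discrepancy shrinks as omega grows
```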

Figure 8: In the setup of Sec. 3.1, we take the limit that changes quickly, and obtain the reduced model with two variables and .

Now we compare Eq. (28) for model in the limit and Eq. (20) for model . For sufficiently large , the number of back-and-forth transitions between and is per unit time, where the second term represents the fluctuation. Then, the total count of per unit time during the stay at is calculated as

(40)

In the second line, we used Eq. (34). In the third and fifth lines, we used

(41)

for . In the fourth line, we used the stationary condition at :

(42)

In the last line, we used the fact that the second term of the rhs of Eq. (23) is zero in the stationary state due to

(43)

From (40), it is straightforward to find that Eq. (20) for the two-variable model is equivalent to Eq. (28) for the three-variable model in the fast-switching limit. This result indicates that the time-integral term in Eq. (20) appears due to the measurement phase and the feedback phase being unseparated in the setup of Sec. 2.4.

4 Role of separated measurement and feedback in IFTs

We discussed in the previous section that the IFT (28) satisfied in our separated autonomous demon model is similar to that of the Szilard-type systems (18). In contrast, the IFT (20) for general measurement and feedback systems contains a time-integral term, for which we found an interpretation as the mutual information flow in the fast-switching limit, as shown in Eq. (40). We here aim to clarify the relation between the existence of the time-integral term in the IFT and the separation of measurement and feedback dynamics.

Let us first recall how we set $\mathcal{A}$ when we derived the IFTs, from the viewpoint of the general framework in Sec. 2.2. First, we define the set of transitions whose final state is $x$ as $\mathcal{T}_x \subset \mathcal{T}$, where $\mathcal{S}$ represents the set of all possible states and $\mathcal{T}$ represents the set of all possible transitions. Then, the common feature of the choices of $\mathcal{A}$ for Eq. (18) and Eq. (28) is that $\mathcal{T}_x$ satisfies $\mathcal{T}_x \subseteq \mathcal{A}$ or $\mathcal{T}_x \cap \mathcal{A} = \emptyset$ for any $x \in \mathcal{S}$. In the case of Eq. (18), $\mathcal{A}$ is time-dependent, and all $\mathcal{T}_x$ satisfy $\mathcal{T}_x \subseteq \mathcal{A}$ ($\mathcal{T}_x \cap \mathcal{A} = \emptyset$) in the feedback (measurement) phase (see Fig. 3). In the case of Eq. (28), for $\mathcal{T}_{(x,y,v)}$ with $v \in \mathcal{V}_{\rm F}$ ($v \in \mathcal{V}_{\rm M}$), $\mathcal{T}_{(x,y,v)}$ satisfies $\mathcal{T}_{(x,y,v)} \subseteq \mathcal{A}$ ($\mathcal{T}_{(x,y,v)} \cap \mathcal{A} = \emptyset$) (see Fig. 7). This property implies the separation of the measurement phase and the feedback phase. We call this condition the separation condition. Conversely, the measurement phase and the feedback phase are unseparated when we can find some $x$ such that $\mathcal{T}_x \cap \mathcal{A} \neq \emptyset$ and $\mathcal{T}_x \not\subseteq \mathcal{A}$ (see Fig. 4), as in the case of the general measurement and feedback setup, including the reduced dynamics discussed in Sec. 3.3.
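The separation condition is easy to state operationally. The sketch below checks it for a given subset of transitions; the two examples reproduce the two situations above (the general bipartite choice violates it, while the choice with the phase variable satisfies it). The encoding of states and transitions is our own:

```python
def separation_condition(transitions, A):
    """transitions: set of all possible directed transitions (z_from, z_to);
    A: a subset of them. Returns True if, for every state z, the set of transitions
    ending at z is either contained in A or disjoint from A."""
    for z in {b for (_, b) in transitions}:
        T_z = {t for t in transitions if t[1] == z}
        inside = T_z & A
        if inside and inside != T_z:
            return False
    return True

# Example 1: bipartite states (x, y); A = all transitions that change x.
states = [(x, y) for x in range(2) for y in range(2)]
T = {(s, t) for s in states for t in states
     if s != t and (s[0] == t[0]) != (s[1] == t[1])}      # exactly one variable changes
A_x = {(s, t) for (s, t) in T if s[0] != t[0]}
print(separation_condition(T, A_x))    # False: measurement and feedback are unseparated

# Example 2: with the phase variable v, A = all transitions ending in a v = F state.
states3 = [(x, y, v) for x in range(2) for y in range(2) for v in range(2)]
def allowed(s, t):
    dx, dy, dv = s[0] != t[0], s[1] != t[1], s[2] != t[2]
    if dx + dy + dv != 1:
        return False
    if dx:
        return s[2] == 1               # x moves only in the feedback phase (v = 1)
    if dy:
        return s[2] == 0               # y moves only in the measurement phase (v = 0)
    return True                        # v-flips are always allowed
T3 = {(s, t) for s in states3 for t in states3 if s != t and allowed(s, t)}
A_F = {(s, t) for (s, t) in T3 if t[2] == 1}
print(separation_condition(T3, A_F))   # True: the separation condition holds
```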

Figure 9: An example of the division of the state space. The subset of states is $\mathcal{S}_{\mathcal{A}}$. We set the ten red arrows as $\mathcal{A}$ in this case.

To confirm that the separation condition plays a crucial role for the time-integral term in the IFT, we show another IFT for the division of the state space. Corresponding to a subset $\mathcal{A}$ with the separation condition, we divide the possible states into two groups (see Fig. 9):

$\mathcal{S}_{\mathcal{A}} := \{ x \in \mathcal{S} \,|\, \mathcal{T}_x \subseteq \mathcal{A} \},$   (44)
$\mathcal{S}_{\mathcal{A}^{\rm c}} := \{ x \in \mathcal{S} \,|\, \mathcal{T}_x \cap \mathcal{A} = \emptyset \}.$   (45)

Owing to the separation condition, $\mathcal{S}_{\mathcal{A}^{\rm c}}$ is indeed the complement of $\mathcal{S}_{\mathcal{A}}$. Although $\mathcal{A}$ is time-dependent in general, in the following we assume that $\mathcal{A}$ is time-independent for simplicity. We now show that $\hat{\sigma}_{\mathcal{A}}$ can be transformed into a form without any time-integral term. By focusing on the change in the stochastic Shannon entropy, Eq. (7), the integrand of the time-integral term can be written as

$-\dfrac{J^{\mathcal{A}}(\hat{x}(t), t)}{p(\hat{x}(t), t)} = \left. \dfrac{\partial \hat{s}(x,t)}{\partial t} \right|_{x = \hat{x}(t)} \chi_{\mathcal{A}}(\hat{x}(t)),$   (46)

which is the crucial property of the separation condition. Adding the contribution from the jumps, we have

$\hat{\sigma}_{\mathcal{A}} = -\beta \hat{Q}_{\mathcal{A}} + \sum_{i=1}^{N} \hat{h}_i + \hat{s}_{\chi}(\tau) - \hat{s}_{\chi}(0),$   (47)

where $\chi_{\mathcal{A}}(x)$ takes 1 if $x \in \mathcal{S}_{\mathcal{A}}$, and 0 otherwise. Note that $\chi_{\mathcal{A}}(x_i) = \delta_{\mathcal{A}}(x_{i-1} \to x_i)$ holds for any jump with the separation condition. We also introduced the entropy exchange between $\mathcal{S}_{\mathcal{A}}$ and $\mathcal{S}_{\mathcal{A}^{\rm c}}$ through the $i$-th transition

$\hat{h}_i := \hat{s}(x_{i-1}, t_i) \left[ \chi_{\mathcal{A}}(x_{i-1}) - \chi_{\mathcal{A}}(x_i) \right],$   (48)

and the entropy associated with $\mathcal{S}_{\mathcal{A}}$:

$\hat{s}_{\chi}(t) := \hat{s}(\hat{x}(t), t)\, \chi_{\mathcal{A}}(\hat{x}(t)).$   (49)

The entropy exchange represents the entropy at the moments when the trajectory enters and leaves $\mathcal{S}_{\mathcal{A}}$, i.e., the initial and the final entropy of the dynamics in $\mathcal{S}_{\mathcal{A}}$, which corresponds to $\hat{\Theta}$ in Eq. (28). Therefore, the IFT for the case of divided states does not include the time-integral term. We note that the contribution from the entropy exchange in Eq. (47) corresponds to the change in the mutual information, which appeared in the IFTs for Szilard-type systems and for the model with separated measurement and feedback phases.

5 Conclusion

In this paper, we clarified the difference between the Szilard-type demons and autonomous bipartite demons. By introducing another type of autonomous demon, in which the measurement phase and the feedback phase are separated, we showed that the presence of a time-integral term in IFTs is related to the unseparated measurement and feedback phases. Since the Szilard-type demons and the 8-state model have the separated measurement and feedback phases, the IFTs for the Szilard engine type demons [Eq. (18)] and for the 8-state model [Eq. (28)] do not contain any time-integral term. In contrast, since the autonomous bipartite demons have the unseparated measurement and feedback phases, the IFT for the autonomous bipartite demons [Eq. (20)] contains a time-integral term.

On the basis of the general framework in Sec. 2.2, we clarified the concept of separation as the condition on the choice of the subset of transitions . The separation condition leads to the absence of time-integral terms, which is clearly shown in Eq. (47). Understanding the separation of the measurement phase and the feedback phase is important to analyze the difference between ideal information processing systems (e.g., the Szilard engine) and demons in the real world such as biochemical networks.

The authors thank S.-i. Sasa and H. Tasaki for fruitful discussions. The authors also thank U. Seifert for helpful comments. NS, SI, and KK were supported by Grants-in-Aid for JSPS Fellows Numbers 26-7602, 24-8593, and 24-8031, respectively, and TS was supported by JSPS KAKENHI Grant Numbers 25800217 and 22340114. NS and TS are also supported by the Platform for Dynamic Approaches to Living System from MEXT, Japan.

References


  • [2] J. C. Maxwell, Theory of Heat (Longmans, London, 1871).
  • [3] L. Szilard, Z. Phys. 53, 840 (1929).
  • [4] H. S. Leff and A. F. Rex, eds., Maxwell’s Demon: Entropy, Information, Computing (Princeton University Press, New Jersey, 1990).
  • [5] M. O. Magnasco, Europhys. Lett. 33, 583 (1996).
  • [6] R. Kawai, J. M. R. Parrondo, and C. Van den Broeck, Phys. Rev. Lett. 98, 080602 (2007).
  • [7] H. Touchette and S. Lloyd, Phys. Rev. Lett. 84, 1156 (2000).
  • [8] T. Sagawa and M. Ueda, Phys. Rev. Lett. 104, 090602 (2010).
  • [9] F. J Cao and M. Feito, Phys. Rev. E 79, 041118 (2009).
  • [10] J. Horowitz and S. Vaikuntanathan, Phys. Rev. E 82, 061120 (2010).
  • [11] T. Sagawa and M. Ueda, Phys. Rev. E 85, 021104 (2012).
  • [12] K. H. Kim and H. Qian, Phys. Rev. E 75, 022102 (2007).
  • [13] S. Ito and M. Sano, Phys. Rev. E 84, 021123 (2011).
  • [14] T. Munakata and M. L. Rosinberg, Phys. Rev. Lett. 112, 180601 (2014).
  • [15] L. Granger and H. Kantz, Phys. Rev. E 84, 061110 (2011).
  • [16] T. Sagawa and M. Ueda, Phys. Rev. Lett. 109, 180602 (2012).
  • [17] T. Sagawa and M. Ueda, New J. Phys. 15, 125012 (2013).
  • [18] F. Tostevin and P. R. ten Wolde, Phys. Rev. Lett. 102, 218101 (2009).
  • [19] G. Lan, P. Sartori, S. Neumann, V. Sourjik, and Y. Tu, Nature Phys. 8, 422 (2012).
  • [20] J. V. Koski, V. F. Maisi, T. Sagawa, and J. P. Pekola, Phys. Rev. Lett. 113, 030601 (2014).
  • [21] P. Strasberg, G. Schaller, T. Brandes, and M. Esposito, Phys. Rev. Lett. 110, 040601 (2013).
  • [22] K. Sekimoto, Physica D 205, 242 (2005).
  • [23] D. Mandal and C. Jarzynski, Proc. Nat. Ac. Sci. 109, 11641 (2012).
  • [24] J. M. Horowitz, T. Sagawa, and J. M. R. Parrondo, Phys. Rev. Lett. 111, 010602 (2013).
  • [25] H. Marko, IEEE Trans. Commun. 21, 1345 (1973).
  • [26] T. Schreiber, Phys. Rev. Lett. 85, 461 (2000).
  • [27] X. S. Liang and R. Kleeman, Phys. Rev. Lett. 95, 244101 (2005).
  • [28] A. J. Majda and J. Harlim, Proc. Nat. Ac. Sci. 104, 9668 (2007).
  • [29] A. E. Allahverdyan, D. Janzing, and G. Mahler, J. Stat. Mech. P09011 (2009).
  • [30] S. Ito and T. Sagawa, Phys. Rev. Lett. 111, 180603 (2013).
  • [31] A. C. Barato and U. Seifert, Phys. Rev. Lett. 112, 090601 (2014).
  • [32] D. Hartich, A. C. Barato, and U. Seifert, J. Stat. Mech. P02016 (2014).
  • [33] J. Horowitz and M. Esposito, Phys. Rev. X 4, 031015 (2014).
  • [34] J. Horowitz and H. Sandberg, New J. Phys. 16, 125007 (2014).
  • [35] N. Shiraishi and T. Sagawa, Phys. Rev. E 91, 012130 (2015).
  • [36] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
  • [37] N. G. van Kampen, “Stochastic Processes in Physics and Chemistry” (North Holland, 1992).
  • [38] D. J. Evans, E. G. D. Cohen, and G. P. Morriss, Phys. Rev. Lett. 71, 2401 (1993).
  • [39] J. Kurchan, J. Phys. A 31, 3719 (1998).
  • [40] G. E. Crooks, Phys. Rev. E 60, 2721 (1999).
  • [41] C. Jarzynski, J. Stat. Phys. 98, 77 (2000).
  • [42] T. M. Cover and J. A. Thomas, “Elements of Information Theory” (John Wiley and Sons, New York, 1991).