# Uncertainty relation in the presence of information measurement and feedback control

Tan Van Vu    Yoshihiko Hasegawa
###### Abstract

We study the uncertainty of dynamical observables for classical systems manipulated by repeated measurements and feedback control. In the presence of an external controller, the precision of observables is expected to be enhanced, but it should still be limited by the amount of information obtained from the measurements. We prove that the fluctuation of arbitrary observables that are antisymmetric under time reversal is constrained from below by the total entropy production and an informational quantity. This informational term is the sum of the mutual entropy production and a Kullback–Leibler divergence, which characterizes the irreversibility of the measurement outcomes. The result holds for finite observation times and for both continuous- and discrete-time systems. We apply the derived relation to study the precision of a flashing ratchet, a type of Brownian ratchet.

## Introduction

During the last two decades, substantial progress has been made in stochastic thermodynamics (ST), resulting in a comprehensive theoretical framework for studying small systems. ST enables us to investigate the physical properties of nonequilibrium systems, leading to a broad range of applications in physics and biology [1]. One of the central results is the fluctuation theorems [2, 3, 1], which express universal properties relevant to the symmetry of the probability distributions of thermodynamic quantities such as heat, work, and entropy production.

Recently, a trade-off between the precision of currents and the thermodynamic cost, known as the thermodynamic uncertainty relation (TUR), has been found for various Markovian processes [4, 5, 6, 7, 8, 9]. Generally, the TUR states that, at a finite time in a steady-state system, the relative fluctuation of an arbitrary current is lower bounded by the reciprocal of the total entropy production. In other words, the TUR quantifies the fact that higher precision cannot be attained without a greater thermal cost. The TUR has been intensively studied in many different contexts [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. In recent studies [29, 30], we have revealed that a generalized TUR can be derived from the fluctuation theorem, which shows an intimate connection between these universal relations.

Feedback control using an external protocol that depends on the measurement outcome is ubiquitous in physics and biology, and plays an important role in the study of nonequilibrium systems. The thermodynamics of feedback control [31, 32, 33, 34, 35, 36, 37, 38, 39] provides a key framework for analyzing systems in the presence of Maxwell’s demon, who can extract work from the system beyond the limit set by the conventional second law. By utilizing the information obtained from measurements on the system, the system performance can be significantly enhanced. Moreover, it is expected that information can improve the precision of observables such as the displacement of a molecular motor [40, 41]. Therefore, it is pertinent to ask how the relative fluctuation of arbitrary observables is constrained in the presence of feedback control.

In this Letter, we study the TUR for steady-state systems involving repeated measurements and feedback control. In particular, we derive a lower bound for the fluctuation of arbitrary dynamical observables that are antisymmetric under time reversal. We prove that $\mathrm{Var}[O]/\langle O\rangle^2$ is lower bounded by a function of $\langle\sigma\rangle$ for an arbitrary observable $O$, where $\langle O\rangle$ and $\mathrm{Var}[O]$ denote the mean and variance of the observable $O$, respectively, and $\sigma$ [defined below] is a quantity reflecting the thermal cost and the mutual entropy production. Due to the presence of information flow, the fluctuation of observables is bounded from below not only by the thermal energy consumed in the system but also by an informational quantity obtained from the external controller. The inequality is valid for arbitrary observation times and for discrete- or continuous-time systems, since no detailed underlying dynamics is required in the derivation.

We apply the derived result to study a flashing ratchet [42, 43, 44, 45], in which an asymmetric potential is switched on and off to induce directed motion. The flashing ratchet is a nonequilibrium Brownian ratchet, which has been used to model biological processes such as actin polymerization [46] and ion transport [47]. To examine the uncertainty of observables in the presence of feedback control, we consider a flashing ratchet that uses imperfect information about the system’s state to rectify the motion of a diffusive particle. The flashing ratchet acts as a Maxwell’s demon by utilizing the information obtained from measurements to maximize the instantaneous velocity. Besides the mean velocity, which is the most common quantity used to characterize the transport, the relative fluctuation of the displacement, which reflects its precision, is another important attribute. We empirically verify the TUR for the displacement of a discrete-state ratchet. To the best of our knowledge, this is the first time that a lower bound on the precision of an information ratchet has been provided.

## Model

We consider a classical Markovian system manipulated by repeated feedback, where an external controller utilizes information obtained from measurements to drive the system. An observable that depends on the system’s state is measured discretely along a trajectory at predetermined times. The control protocol is updated according to the measurement outcome. The time and the state space of the system can be discrete or continuous. We assume that every transition is reversible, i.e., if the transition probability from one state to another is nonzero, then the reversed transition also occurs with a positive probability.

### Discrete measurement and feedback control

Now, let us apply the measurement and feedback control to the system from time $t=0$ up to time $t=T$. For convenience, we define $\Delta t=T/N$. Assume that we perform $N$ measurements at predetermined times and that the measurement outcomes are $M=\{m_0,\dots,m_{N-1}\}$. The measurement times are $t_i=i\Delta t$ for $i=0,\dots,N-1$, where $\Delta t$ denotes the time gap between consecutive measurements. Let $X=\{x_0,\dots,x_N\}$ be the system states during the control process, where $x_i$ denotes the system state at time $t_i$ for each $i$; then, the measurement and feedback scheme is as follows. First, at time $t_0=0$, the observable is measured with outcome $m_0$, and the system is then driven with the protocol $\lambda(m_0)$ from $t_0$ to $t_1$. At each subsequent time $t_i$, the measurement is performed and the corresponding outcome is $m_i$. Instantly, the protocol is changed from $\lambda(m_{i-1})$ to $\lambda(m_i)$ and remains unchanged until the time $t_{i+1}$. The procedure is repeated up to the final time $t_N=T$, which ends with the protocol $\lambda(m_{N-1})$. Here, it is assumed that the time delay needed for measuring and updating the protocol can be ignored. The measurement error is characterized by a conditional probability $p(m_i|x_i)$, where $x_i$ is the actual system state and $m_i$ is the measurement outcome at time $t_i$. This means that the outcome depends only on the system’s state immediately before the measurement. Hereafter, we assume that the system is in the steady state under the measurement and feedback control.

### Fluctuation theorem

In the forward process, the system is initially in state $x_0$ with probability distribution $P_F(x_0)$. For each $i$, the system evolves from the state $x_i$ at $t_i$ to the state $x_{i+1}$ at time $t_{i+1}$ under the protocol $\lambda(m_i)$ with the transition probability $w(x_{i+1},t_{i+1}|x_i,t_i,m_i)$. Then, the joint probability of observing a trajectory $\{X,M\}$ in the forward process is expressed as

$$P_F(X,M)=P_F(x_0)\prod_{i=0}^{N-1}p(m_i|x_i)\,w(x_{i+1},t_{i+1}|x_i,t_i,m_i).$$

Using the same construction for the reversed process as in Ref. [48], we here derive a strong detailed fluctuation theorem for the system. Let us consider a time-reversed process, in which the measurements are performed at the times $t_i^\dagger=T-t_{N-i}$ for each $i=0,\dots,N-1$. The measurement and feedback control in the reversed process are analogous to those in the forward process, as described in the following. At time $t_0^\dagger=0$, the system starts from a state $x_0^\dagger$, which is chosen with respect to a probability distribution $P_R(x_0^\dagger)$. The observable is then measured with the outcome $m_0^\dagger$, and the control protocol is changed to $\lambda(m_0^\dagger)$, which is kept up to time $t_1^\dagger$. Subsequently, at each time $t_i^\dagger$, the system is in the state $x_i^\dagger$ and the measurement outcome of the observable is $m_i^\dagger$. The control protocol is updated immediately to $\lambda(m_i^\dagger)$, which remains unchanged until $t_{i+1}^\dagger$. At the end time $t_N^\dagger=T$, the system is in state $x_N^\dagger$ and the control protocol is $\lambda(m_{N-1}^\dagger)$. The probability of observing a trajectory $X^\dagger=\{x_0^\dagger,\dots,x_N^\dagger\}$ and measurement outcomes $M^\dagger=\{m_0^\dagger,\dots,m_{N-1}^\dagger\}$ is given by

$$P_R(X^\dagger,M^\dagger)=P_R(x_0^\dagger)\prod_{i=0}^{N-1}p(m_i^\dagger|x_i^\dagger)\,w(x_{i+1}^\dagger,t_{i+1}^\dagger|x_i^\dagger,t_i^\dagger,m_i^\dagger).$$

We note that the measurement distribution $p(m|x)$ and the transition probability $w$ in the reversed process are the same as those in the forward process. For each path $\{X,M\}$ in the forward process, we consider a conjugate counterpart $\{X^\dagger,M^\dagger\}$ in the reversed process, where $x_i^\dagger=x_{N-i}$ and $m_i^\dagger=m_{N-1-i}$. Besides that, we choose $P_F(x_0)=P_{\mathrm{ss}}(x_0)$ and $P_R(x_0^\dagger)=P_{\mathrm{ss}}(x_N)$, as the system is in the steady state. Here, $P_{\mathrm{ss}}$ is the steady-state distribution of the system. Since the control protocol is time-symmetric, the protocol in the reversed process coincides with that in the forward process. Then, by taking the ratio of the probabilities of the forward path and its conjugate counterpart, we obtain

$$\frac{P(X,M)}{P(X^\dagger,M^\dagger)}=e^{\Delta s+\Delta s_m+\Delta s_i},$$

where each term on the right-hand side is expressed as follows:

$$\Delta s=\ln\frac{P_{\mathrm{ss}}(x_0)}{P_{\mathrm{ss}}(x_N)},\qquad
\Delta s_m=\ln\left[\prod_{i=0}^{N-1}\frac{w(x_{i+1},t_{i+1}|x_i,t_i,m_i)}{w(x_i,T-t_i|x_{i+1},T-t_{i+1},m_i)}\right],\qquad
\Delta s_i=\ln\left[\prod_{i=0}^{N-1}\frac{p(m_i|x_i)}{p(m_i|x_{i+1})}\right].$$

The first and second terms represent the changes in the system entropy and the medium entropy, respectively. Let us interpret the meaning of the remaining term, $\Delta s_i$. This term involves the probability $p(m|x)$ that characterizes the error in measurements; thus, it can be considered an informational quantity. In a feedback-controlled system involving information measurement, the mutual information between the trajectories $X$ and $M$ is defined as

$$I(X,M)=\ln\frac{p(m_0|x_0)p(m_1|x_1)\cdots p(m_{N-1}|x_{N-1})}{P(M)}=\ln\frac{P(M|X)}{P(M)},$$

where $P(M)$ is the probability of observing the outcome trajectory $M$ in the system. Then, $\Delta s_i$ can be expressed in terms of the mutual information as

$$\Delta s_i=I_F-I_R+\ln\frac{P(M)}{P(M^\dagger)},$$

where $I_F=I(X,M)$ and $I_R=I(X^\dagger,M^\dagger)$ denote the mutual information in the forward and reversed processes, respectively. The term $I_F-I_R$ represents the difference between the mutual information in the forward process and that in its time-reversed counterpart; it is thus identified as the mutual entropy production [49]. The remaining term, $\ln[P(M)/P(M^\dagger)]$, whose average is the Kullback–Leibler divergence between the distributions $P(M)$ and $P(M^\dagger)$, is a measure of the irreversibility of the measurement outcomes. When $p(m|x)$ is a uniform distribution, the measurement is completely random and does not provide any valuable information. In this case, $\Delta s_i=0$, which indicates that the system does not obtain any information from the measurements. Defining

$$\sigma=\Delta s+\Delta s_m+\Delta s_i,$$

one can obtain a strong detailed fluctuation theorem (DFT) for $\sigma$ as

$$\frac{P(\sigma)}{P(-\sigma)}=e^{\sigma},$$

where $P(\sigma)$ is the probability distribution of $\sigma$, defined by $P(\sigma)=\int\mathcal{D}X\,\mathcal{D}M\,\delta(\sigma-\sigma(X,M))\,P(X,M)$. From this DFT, by applying Jensen’s inequality to $\langle e^{-\sigma}\rangle=1$, one can obtain $\langle\sigma\rangle\ge 0$. This inequality can be considered the second law of thermodynamics for the full system (i.e., the system and the controller), and $\langle\sigma\rangle$ can be identified as its total entropy production. Since $\langle\sigma\rangle=\langle\Delta s+\Delta s_m\rangle+\langle\Delta s_i\rangle\ge 0$, we have $\langle\Delta s+\Delta s_m\rangle\ge-\langle\Delta s_i\rangle$. This implies that the entropy production of the system alone can be negative due to the effect of the measurement and feedback control.

## Uncertainty relation

Here, we derive a lower bound on the fluctuation of arbitrary dynamical observables that are antisymmetric under time reversal. Specifically, we derive a bound on $\mathrm{Var}[O]/\langle O\rangle^2$, where $O$ is an observable that satisfies the antisymmetric condition $O[X^\dagger]=-O[X]$. Current-type observables satisfy this condition.

In Ref. [29], we have demonstrated that a generalized TUR can be derived from the DFT. The derivation does not require detailed underlying dynamics and can be applied flexibly to other systems as long as the strong DFT is valid. Following Ref. [29], we prove that the fluctuation of observables is bounded from below in terms of $\langle\sigma\rangle$ as

$$\frac{\mathrm{Var}[O]}{\langle O\rangle^2}\ge\operatorname{csch}^2\!\left[f\!\left(\frac{\langle\sigma\rangle}{2}\right)\right],$$

where $f(x)$ is the inverse function of $x\tanh(x)$. This bound is the same as that in Ref. [50], where a TUR is derived from exchange fluctuation theorems for heat and particle exchange between multiple systems. The inequality implies that the precision of arbitrary observables is constrained not only by the total entropy production but also by the information obtained from measurements on the system. Such precision–cost trade-offs have attracted significant interest and have been studied in various systems, such as cellular computation [51] and coupled oscillators [52]. The detailed derivation is given in the Appendix.
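As a numerical sanity check, the right-hand side of the bound can be evaluated by inverting $x\tanh(x)$ with a simple bisection, since $f$ has no closed form. The following sketch (helper names are ours, not from the text) also confirms that the bound sits between the discrete-time bound $2/(e^{\langle\sigma\rangle}-1)$ and the conventional bound $2/\langle\sigma\rangle$ mentioned below:

```python
import math

def f(x, lo=0.0, hi=60.0, iters=100):
    """Inverse of g(y) = y * tanh(y) on y >= 0, found by bisection
    (g is strictly increasing, so bisection converges)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tur_bound(sigma):
    """Lower bound csch^2[f(<sigma>/2)] on Var[O]/<O>^2."""
    return 1.0 / math.sinh(f(sigma / 2.0)) ** 2

# The bound lies between the discrete-time bound 2/(e^s - 1)
# and the conventional continuous-time bound 2/s:
for s in (0.5, 2.0, 8.0):
    b = tur_bound(s)
    assert 2.0 / (math.exp(s) - 1.0) <= b <= 2.0 / s
```

The bisection window `hi=60.0` is an arbitrary but safe choice for the entropy-production values considered here.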

Since $\operatorname{csch}^2[f(x/2)]\ge 2/(e^{x}-1)$, the derived bound is tighter than that in Ref. [17], where a TUR was derived for discrete-time Markovian processes in the long-time limit. We note that the derived bound is not as tight as the conventional bound, $\mathrm{Var}[O]/\langle O\rangle^2\ge 2/\langle\sigma\rangle$. This is because the derived inequality holds for both continuous- and discrete-time systems, while the conventional TUR holds only for continuous-time dynamics [53]. In the following, we show that the conventional bound is actually violated for a discrete-time model with some parameter settings.

## Example

We apply the TUR to study the precision of a flashing ratchet, a type of Brownian ratchet. First, we describe the concept of a flashing ratchet in the presence of an external controller that utilizes information obtained from measurements to rectify the motion into a directed one. After that, we present a discrete-state model of the flashing ratchet, which is used to validate the derived TUR. Both continuous- and discrete-time ratchets are considered for the validation.

### Flashing ratchet

Let us introduce a flashing ratchet consisting of an overdamped Brownian particle in contact with an equilibrium heat bath at temperature $T$. The particle evolves under an external asymmetric potential $V(x)$, which can be either on or off depending on the feedback control. The dynamics of the particle are described by the Langevin equation:

$$\gamma\dot{x}(t)=\lambda(t)F(x(t))+\xi(t),$$

where $x(t)$ is the position of the particle at time $t$, $\gamma$ is the friction coefficient, and $\xi(t)$ is zero-mean white Gaussian noise with time correlation $\langle\xi(t)\xi(t')\rangle=2\gamma k_{\mathrm{B}}T\,\delta(t-t')$. The force is given by $F(x)=-\partial_x V(x)$, where $V(x)$ is periodic, $V(x+L)=V(x)$, and $L$ is the period of the potential. The term $\lambda(t)$ is a control protocol that takes the value $1$ or $0$, corresponding to switching the potential on or off.

In previous studies [44, 54], the protocol is determined as $\lambda(t)=\Theta(F(x(t)))$, where $\Theta$ is the Heaviside function, given by $\Theta(z)=1$ if $z\ge 0$ and $\Theta(z)=0$ otherwise. This means that the potential is turned on only when the net force applied to the particle is positive. The measurements in these works are assumed to be perfect, i.e., there is no error in the measurement outcome of the sign of $F$. This feedback strategy was shown to be the best possible one for maximizing the average velocity of a single particle. However, it is not the best strategy for a collective flashing ratchet, where more than one particle exists. Considering a more realistic model, Refs. [55, 56] studied the flashing ratchet with imperfect measurements. The error in the estimate of the sign of $F$ occurs with a probability $r$. Equivalently, the potential is switched wrongly with probability $r$, i.e., the potential can be turned off when $F>0$ or turned on when $F\le 0$ with probability $r$.
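As a rough illustration (our own sketch, not the implementation of the cited works), this control strategy with measurement error can be simulated by an Euler–Maruyama discretization of the Langevin equation above; the sawtooth shape and all parameter values below are illustrative assumptions:

```python
import math, random

def simulate(T=200.0, dt=1e-3, r=0.1, gamma=1.0, kBT=1.0,
             Vm=5.0, L=1.0, a=0.3, seed=1):
    """Overdamped particle under feedback: the potential is on iff the
    (noisily) measured sign of the force is positive. Returns x(T)."""
    rng = random.Random(seed)
    x = 0.0
    noise_amp = math.sqrt(2.0 * gamma * kBT * dt)

    def force(x):
        # Sawtooth potential: rises on [0, aL), falls on [aL, L)
        u = x % L
        return -Vm / (a * L) if u < a * L else Vm / ((1.0 - a) * L)

    for _ in range(int(T / dt)):
        F = force(x)
        measured_positive = (F > 0) ^ (rng.random() < r)  # wrong w.p. r
        lam = 1.0 if measured_positive else 0.0
        x += (lam * F / gamma) * dt + noise_amp * rng.gauss(0.0, 1.0) / gamma
    return x

# With mostly correct measurements the ratchet drifts to the right.
x_final = simulate(T=10.0, r=0.1)
```

For small $r$ the rectification is strong and the particle acquires a positive mean velocity; at $r=1/2$ the measurement carries no information and the drift disappears.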

In the studies discussed so far, the measurements are executed continuously, which is difficult from the viewpoint of experimental realization. Moreover, there is redundancy in the information obtained from continuous measurements, leading to an inefficient implementation in terms of energetic cost. In what follows, we consider a discrete-state flashing ratchet with discretely repeated measurements and feedback control.

### Continuous-time discrete-state model

We consider a one-dimensional discrete-state flashing ratchet, which has been studied in Refs. [43, 57] without feedback control. The ratchet comprises a Brownian particle and discrete states located at positions $x_n=nd$ for $n\in\mathbb{Z}$, where $d$ is the distance between neighboring states. The particle is allowed to jump only between adjacent states, i.e., the particle cannot instantly transit from state $n$ to state $m$ for $|m-n|>1$. The periodic potential is approximated by $N$ states, as illustrated in Fig. 1. For each $n$, $\bar{n}$ is defined as the remainder of the Euclidean division of $n$ by $N$. Suppose that the particle is in state $n$; then the potential should be turned off if $0\le\bar{n}<N_1$ and turned on otherwise (i.e., if $N_1\le\bar{n}<N$). This is the ideal control protocol, which maximizes the instantaneous velocity of the particle. However, when the measurement is performed, there exists an error due to noise, and the potential is switched wrongly with a probability $r$. Specifically, the conditional probability that characterizes the measurement error is given as follows:

$$p(s|x)=\begin{cases}
r & \text{if } s=1 \text{ and } 0\le\bar{x}<N_1,\\
1-r & \text{if } s=0 \text{ and } 0\le\bar{x}<N_1,\\
1-r & \text{if } s=1 \text{ and } N_1\le\bar{x}<N,\\
r & \text{if } s=0 \text{ and } N_1\le\bar{x}<N,
\end{cases}$$

where $s=1$ and $s=0$ indicate that the potential is switched on and off, respectively, and $x$ is the system state at the time of the measurement.

When the potential is on, the transition rate from state to state is given by

$$\Gamma_{n+1,n}=\kappa_1,\quad \Gamma_{n,n+1}=k+\kappa_1^{-1},\qquad \forall n:\ \bar{n}=0,\dots,N_1-1,$$
$$\Gamma_{n+1,n}=\kappa_2^{-1},\quad \Gamma_{n,n+1}=k+\kappa_2,\qquad \forall n:\ \bar{n}=N_1,\dots,N-1,$$
$$\Gamma_{n,m}=0,\qquad \forall\,|m-n|>1.$$

Here, $k\ge 0$ reflects the asymmetry in the transitions due to a load force, $V_m$ is the peak height of the potential, and

$$\kappa_1=\exp\left[-\frac{V_m}{2N_1k_{\mathrm{B}}T}\right],\qquad \kappa_2=\exp\left[-\frac{V_m}{2N_2k_{\mathrm{B}}T}\right].$$

When $k=0$, i.e., there is no load force, the transition rates satisfy the local detailed balance condition

$$\frac{\Gamma_{n+1,n}}{\Gamma_{n,n+1}}=\exp\left(\frac{V_n-V_{n+1}}{k_{\mathrm{B}}T}\right),$$

where $V_n$ is the potential at state $n$, given by

$$V_n=\begin{cases}V_m\bar{n}/N_1 & \text{if } \bar{n}=0,\dots,N_1-1,\\ V_m(N-\bar{n})/N_2 & \text{if } \bar{n}=N_1,\dots,N-1.\end{cases}$$
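The rate table above is easy to check numerically: with $k=0$ and $k_{\mathrm{B}}T=1$, the ratio of forward and backward rates must equal $e^{V_n-V_{n+1}}$ around the whole ring. A minimal sketch (function names and parameter values are our own):

```python
import math

def build_rates(N1, N2, Vm, k=0.0, kBT=1.0):
    """On-potential rates of the N-state ratchet (N = N1 + N2):
    fwd[n] = Gamma_{n+1,n} (jump n -> n+1), bwd[n] = Gamma_{n,n+1}."""
    k1 = math.exp(-Vm / (2 * N1 * kBT))
    k2 = math.exp(-Vm / (2 * N2 * kBT))
    fwd, bwd = {}, {}
    for n in range(N1 + N2):
        if n < N1:                       # rising slope
            fwd[n], bwd[n] = k1, k + 1.0 / k1
        else:                            # falling slope
            fwd[n], bwd[n] = 1.0 / k2, k + k2
    return fwd, bwd

def potential(n, N1, N2, Vm):
    """Sawtooth potential V_n on the periodic ring."""
    nb = n % (N1 + N2)
    return Vm * nb / N1 if nb < N1 else Vm * (N1 + N2 - nb) / N2

# Local detailed balance for k = 0 (kBT = 1): ratio equals e^(V_n - V_{n+1})
N1, N2, Vm = 3, 5, 4.0
fwd, bwd = build_rates(N1, N2, Vm)
for n in range(N1 + N2):
    db = math.exp(potential(n, N1, N2, Vm) - potential(n + 1, N1, N2, Vm))
    assert abs(fwd[n] / bwd[n] - db) < 1e-12
```

Note that the check passes at both slope boundaries because the discrete potential is continuous at the peak and at the minima.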

From now on, we set $k_{\mathrm{B}}T=1$. In the continuous limit, i.e., $N\to\infty$ and $d\to 0$ with $Nd=L$ fixed, the discrete potential converges to the following continuous sawtooth potential:

$$V(x)=\begin{cases}V_m x/(aL) & \text{if } 0\le x\le aL,\\ V_m(L-x)/[(1-a)L] & \text{if } aL<x\le L,\end{cases}$$

where $a=N_1/N$ is a given constant. Let $P_n(t)$ be the probability that the system is in state $n$ at time $t$. Then, the probability distribution is governed by the master equation

$$\partial_t P_n(t)=\sum_m\left[\Gamma_{n,m}P_m(t)-\Gamma_{m,n}P_n(t)\right].$$

When the potential is off, the dynamics of the particle become a continuous-time random walk with forward and backward transition rates equal to $1$ and $1+k$, respectively.

Let $\mathbb{P}(n,0,s;m,\Delta t)$ be the probability that the system is in state $n$ at time $0$ with the measurement outcome $s$ and in state $m$ at time $\Delta t$. Since the system is periodic, we define the probability distribution for $x_0\in\{0,\dots,N-1\}$ and $x_1-x_0\in\mathbb{Z}$ as follows:

$$\mathbb{Q}(x_0,0,s;x_1,\Delta t)=\sum_{m,n}\mathbb{P}(n,0,s;m,\Delta t)\,\delta_{\bar{n},x_0}\,\delta_{m-n,x_1-x_0}.$$

We note that $\mathbb{Q}$ is normalized, i.e., $\sum_{x_0,s,x_1}\mathbb{Q}(x_0,0,s;x_1,\Delta t)=1$. Because the system is in the steady state, the average of the system entropy production is equal to zero. Therefore, the quantity $\langle\sigma\rangle$ can be evaluated as

$$\langle\sigma\rangle=N\left\langle\ln\frac{w(x_1,\Delta t|x_0,0,s)}{w(x_0,\Delta t|x_1,0,s)}+\ln\frac{p(s|x_0)}{p(s|x_1)}\right\rangle_{\mathbb{Q}},$$

where the average is taken with respect to the distribution $\mathbb{Q}$.

### Discrete-time discrete-state model

Here, let us consider a discrete-time model of the flashing ratchet, where the control protocol is the same as in the continuous-time model. Its dynamics are described by a Markov chain,

$$P_n(t+\tau)=\sum_m\Lambda_{n,m}P_m(t),$$

where $\tau$ is the time step and $\Lambda_{n,m}$ is the transition probability from state $m$ to state $n$. For consistency with the continuous-time model, the transition probabilities are set as follows. When the potential is on, the ratchet transits between states with the probabilities

$$\Lambda_{n,m}=\begin{cases}\tau\Gamma_{n,m} & \text{if } m\neq n,\\ 1-\tau\left(\Gamma_{n+1,n}+\Gamma_{n-1,n}\right) & \text{if } m=n.\end{cases}$$

When the potential is off, the ratchet becomes a discrete-time random walk with the transition probabilities given by $\Lambda_{n+1,n}=\tau$, $\Lambda_{n,n+1}=\tau(1+k)$, and $\Lambda_{n,n}=1-\tau(2+k)$ for all $n$. The time step $\tau$ must be chosen properly to ensure the positivity of the transition probabilities, i.e.,

$$\tau\le\min\left\{\frac{1}{2+k},\ \frac{1}{\max_n\left[\Gamma_{n+1,n}+\Gamma_{n-1,n}\right]}\right\}.$$

Besides that, the time gap $\Delta t$ between consecutive measurements should be a multiple of the time step, i.e., $\Delta t=\ell\tau$ with a positive integer $\ell$. The term $\langle\sigma\rangle$ can be evaluated analogously to the continuous-time case.
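The constraint on $\tau$ is simply the requirement that every entry of the column-stochastic matrix $\Lambda$ stay non-negative. A small sketch with illustrative nearest-neighbour rates (not the ratchet's actual values) assembles $\Lambda$ and verifies both stochasticity and positivity:

```python
def transition_matrix(fwd, bwd, tau):
    """Column-stochastic matrix L[n][m] = Lambda_{n,m} = P(m -> n) on a
    nearest-neighbour ring; fwd[n] is the rate n -> n+1 and bwd[n] the
    rate n+1 -> n."""
    N = len(fwd)
    L = [[0.0] * N for _ in range(N)]
    for n in range(N):
        L[(n + 1) % N][n] = tau * fwd[n]      # jump n -> n+1
        L[n][(n + 1) % N] = tau * bwd[n]      # jump n+1 -> n
    for n in range(N):                        # staying probability
        L[n][n] = 1.0 - sum(L[m][n] for m in range(N) if m != n)
    return L

# Illustrative rates on a 4-state ring; tau respects the positivity bound
L = transition_matrix([0.5, 0.5, 2.0, 2.0], [2.0, 2.0, 0.5, 0.5], tau=0.2)
for n in range(4):
    assert abs(sum(L[m][n] for m in range(4)) - 1.0) < 1e-12
    assert min(L[m][n] for m in range(4)) >= 0.0
```

Choosing `tau` above the bound would make some diagonal entry negative, which is exactly what the inequality on $\tau$ rules out.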

### Bound on the precision of the ratchet

Now, we verify the derived TUR for the following observable:

$$O[X]=x_N-x_0.$$

This observable is a current, representing the distance traveled by the particle. The relative fluctuation, $\mathrm{Var}[O]/\langle O\rangle^2$, reflects the precision of the ratchet. We run stochastic simulations for both the continuous- and discrete-time models and numerically evaluate the precision and the bound term $\operatorname{csch}^2[f(\langle\sigma\rangle/2)]$. For each random parameter setting, we collect a large number of realizations for the calculation. The parameter ranges are shown in the caption of Fig. 2. We plot $\mathrm{Var}[O]/\langle O\rangle^2$ as a function of $\langle\sigma\rangle$ in Fig. 2(a), where the circular and triangular points represent the results of the continuous- and discrete-time models, respectively. We depict the saturated cases of the derived TUR and of the conventional TUR by the solid and dashed lines, respectively. As can be seen, all points are located above the solid line, thus empirically verifying the validity of the derived TUR. On the other hand, several triangular points lie below the dashed line, which implies a violation of the conventional bound.

We plot the uncertainty of the observable and the entropy production as functions of the measurement-error probability $r$ in Fig. 2(b). When $r$ decreases, more information is obtained from the measurement, resulting in higher entropy production and lower uncertainty. Interestingly, the uncertainty of the observable declines exponentially as $r$ is reduced.
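A minimal Monte Carlo sketch of the discrete-time ratchet under the measurement and feedback rule described above (with $k=0$, $k_{\mathrm{B}}T=1$, and illustrative parameters; this is our own simplified reimplementation, not the authors' simulation code) estimates the relative fluctuation of the displacement:

```python
import math, random

def run(steps, N1=2, N2=2, Vm=4.0, r=0.1, tau=0.1, seed=0):
    """Discrete-time feedback ratchet (k = 0, kBT = 1): at each step the
    controller noisily senses which slope the particle is on and switches
    the sawtooth potential accordingly; returns the net displacement."""
    rng = random.Random(seed)
    N = N1 + N2
    k1 = math.exp(-Vm / (2 * N1))
    k2 = math.exp(-Vm / (2 * N2))
    x = 0
    for _ in range(steps):
        nb = x % N
        ideal_on = nb >= N1                  # force is rightward on the falling slope
        s = ideal_on != (rng.random() < r)   # measurement, wrong w.p. r
        if s:                                # potential on: ratchet rates
            p_up = tau * (k1 if nb < N1 else 1.0 / k2)
            mb = (nb - 1) % N                # slope of the left neighbour
            p_dn = tau * (1.0 / k1 if mb < N1 else k2)
        else:                                # potential off: symmetric walk
            p_up = p_dn = tau
        u = rng.random()
        if u < p_up:
            x += 1
        elif u < p_up + p_dn:
            x -= 1
    return x

# Estimate the precision Var[O]/<O>^2 of the displacement O = x_N - x_0
samples = [run(2000, seed=i) for i in range(200)]
mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
precision = var / mean ** 2
```

Decreasing `r` in this sketch increases the mean displacement and reduces its relative fluctuation, in line with the trend shown in Fig. 2(b).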

## Conclusion

In summary, we have derived the TUR for steady-state systems in which repeated measurements and feedback control are performed. We have demonstrated that the relative fluctuation of arbitrary observables that are antisymmetric under time reversal is bounded from below by a sum of the total entropy production and the mutual information. We have empirically validated the derived TUR for the displacement of the flashing ratchet.

The TUR has been derived from the fluctuation theorem, which holds for both continuous- and discrete-time systems. Although measurements are performed discretely, we have not observed any violation of the conventional bound in the stochastic simulations of the continuous-time ratchet, i.e., $\mathrm{Var}[O]/\langle O\rangle^2\ge 2/\langle\sigma\rangle$ holds for all parameter settings in the continuous-time model. Proving this inequality would significantly improve the bound and requires further investigation.

This work was supported by MEXT KAKENHI Grant No. JP16K00325.

## Appendix

### Derivation of uncertainty relation

First, we show that the joint probability distribution $P(\sigma,O)$ of $\sigma$ and $O$ obeys the strong DFT. The proof is readily obtained as follows:

$$\begin{aligned}
P(\sigma,O)&=\int\mathcal{D}Z\,\delta(\sigma-\sigma(X,M))\,\delta(O-O[X])\,P(X,M)\\
&=\int\mathcal{D}Z\,\delta(\sigma-\sigma(X,M))\,\delta(O-O[X])\,e^{\sigma(X,M)}P(X^\dagger,M^\dagger)\\
&=e^{\sigma}\int\mathcal{D}Z\,\delta(\sigma-\sigma(X,M))\,\delta(O-O[X])\,P(X^\dagger,M^\dagger)\\
&=e^{\sigma}\int\mathcal{D}Z^\dagger\,\delta(\sigma+\sigma(X^\dagger,M^\dagger))\,\delta(O+O[X^\dagger])\,P(X^\dagger,M^\dagger)\\
&=e^{\sigma}P(-\sigma,-O).
\end{aligned}$$

Here, $\mathcal{D}Z=\mathcal{D}X\,\mathcal{D}M$, and we have used $\sigma(X^\dagger,M^\dagger)=-\sigma(X,M)$ and $O[X^\dagger]=-O[X]$. Inspired by Ref. [58], where statistical properties of entropy production were obtained from the strong DFT, we derive the TUR solely from this relation. By observing that

$$1=\int_{-\infty}^{\infty}d\sigma\int_{-\infty}^{\infty}dO\,P(\sigma,O)=\int_{0}^{\infty}d\sigma\int_{-\infty}^{\infty}dO\,(1+e^{-\sigma})P(\sigma,O),$$

we introduce a probability distribution $Q(\sigma,O)\equiv(1+e^{-\sigma})P(\sigma,O)$, defined over $\sigma\in[0,\infty)$. The first and second moments of $\sigma$ and $O$ can be expressed with respect to the distribution $Q$ as follows:

$$\langle\sigma\rangle=\left\langle\sigma\tanh\frac{\sigma}{2}\right\rangle_Q,\qquad
\langle O\rangle=\left\langle O\tanh\frac{\sigma}{2}\right\rangle_Q,\qquad
\langle O^2\rangle=\langle O^2\rangle_Q,$$

where $\langle\cdot\rangle_Q$ denotes the expectation with respect to $Q(\sigma,O)$. Applying the Cauchy–Schwarz inequality to $\langle O\tanh(\sigma/2)\rangle_Q$, we obtain

$$\langle O\rangle^2=\left\langle O\tanh\frac{\sigma}{2}\right\rangle_Q^2\le\langle O^2\rangle_Q\left\langle\tanh^2\frac{\sigma}{2}\right\rangle_Q=\langle O^2\rangle\left\langle\tanh^2\frac{\sigma}{2}\right\rangle_Q.$$

The last factor on the right-hand side can be further bounded from above. We find that

$$\left\langle\tanh^2\frac{\sigma}{2}\right\rangle_Q=\left\langle\tanh^2\left[f\left(\frac{\sigma}{2}\tanh\frac{\sigma}{2}\right)\right]\right\rangle_Q\le\tanh^2\left[f\left(\frac{\langle\sigma\rangle}{2}\right)\right].$$

The equality follows from the fact that $f(x)$ is the inverse function of $x\tanh(x)$. The inequality can be obtained as follows. First, we show that $\chi(x)\equiv\tanh^2[f(x)]$ is a concave function over $x\ge 0$. Indeed, using the relation $f(x)\tanh[f(x)]=x$ and performing simple calculations, we obtain

$$\frac{d^2\chi(x)}{dx^2}=\frac{4\left(4f(x)-\sinh[4f(x)]\right)}{\left(2f(x)+\sinh[2f(x)]\right)^3}.$$

Since $\sinh(z)\ge z$ for all $z\ge 0$, we have $d^2\chi(x)/dx^2\le 0$; thus, $\chi(x)$ is a concave function. Applying Jensen’s inequality to this function, we obtain

$$\left\langle\tanh^2\left[f\left(\frac{\sigma}{2}\tanh\frac{\sigma}{2}\right)\right]\right\rangle_Q\le\tanh^2\left[f\left(\left\langle\frac{\sigma}{2}\tanh\frac{\sigma}{2}\right\rangle_Q\right)\right]=\tanh^2\left[f\left(\frac{\langle\sigma\rangle}{2}\right)\right].$$
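The concavity of $\chi$ used in this Jensen step can also be checked numerically: with $f$ obtained by bisection, the second differences of $\chi(x)=\tanh^2[f(x)]$ are non-positive on a grid (a quick sketch with our own helper names):

```python
import math

def f(x, lo=0.0, hi=60.0):
    """Inverse of y*tanh(y) on y >= 0, by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def chi(x):
    """chi(x) = tanh^2[f(x)]."""
    return math.tanh(f(x)) ** 2

# Concavity: central second differences of chi are non-positive on x > 0
h = 0.05
xs = [0.1 + h * i for i in range(200)]
for x in xs[1:-1]:
    d2 = chi(x + h) - 2 * chi(x) + chi(x - h)
    assert d2 <= 1e-9
```

The grid and step size are arbitrary; the analytic expression for $d^2\chi/dx^2$ above guarantees the sign for all $x\ge 0$.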

Combining the above relations, we have

$$\langle O\rangle^2\le\langle O^2\rangle\tanh^2\left[f\left(\frac{\langle\sigma\rangle}{2}\right)\right].$$

By rearranging this inequality, we obtain the derived TUR for the observable $O$.

## References

• [1] Seifert U., Rep. Prog. Phys., 75 (2012) 126001.
• [2] Gallavotti G. Cohen E. G. D., Phys. Rev. Lett., 74 (1995) 2694.
• [3] Jarzynski C., Phys. Rev. Lett., 78 (1997) 2690.
• [4] Barato A. C. Seifert U., Phys. Rev. Lett., 114 (2015) 158101.
• [5] Gingrich T. R., Horowitz J. M., Perunov N. England J. L., Phys. Rev. Lett., 116 (2016) 120601.
• [6] Pietzonka P., Barato A. C. Seifert U., Phys. Rev. E, 93 (2016) 052145.
• [7] Polettini M., Lazarescu A. Esposito M., Phys. Rev. E, 94 (2016) 052104.
• [8] Horowitz J. M. Gingrich T. R., Phys. Rev. E, 96 (2017) 020103.
• [9] Dechant A. Sasa S.-i., J. Stat. Mech., 2018 (2018) 063209.
• [10] Barato A. C. Seifert U., J. Phys. Chem. B, 119 (2015) 6555.
• [11] Falasco G., Pfaller R., Bregulla A. P., Cichos F. Kroy K., Phys. Rev. E, 94 (2016) 030602.
• [12] Pietzonka P., Barato A. C. Seifert U., J. Stat. Mech., 2016 (2016) 124004.
• [13] Rotskoff G. M., Phys. Rev. E, 95 (2017) 030101.
• [14] Garrahan J. P., Phys. Rev. E, 95 (2017) 032134.
• [15] Gingrich T. R. Horowitz J. M., Phys. Rev. Lett., 119 (2017) 170601.
• [16] Hyeon C. Hwang W., Phys. Rev. E, 96 (2017) 012156.
• [17] Proesmans K. den Broeck C. V., EPL, 119 (2017) 20001.
• [18] Chiuchiù D. Pigolotti S., Phys. Rev. E, 97 (2018) 032109.
• [19] Brandner K., Hanazato T. Saito K., Phys. Rev. Lett., 120 (2018) 090601.
• [20] Hwang W. Hyeon C., J. Phys. Chem. Lett., 9 (2018) 513.
• [21] Barato A. C., Chetrite R., Faggionato A. Gabrielli D., New J. Phys., 20 (2018) 103023.
• [22] Barato A., Chetrite R., Faggionato A. Gabrielli D., arXiv preprint, (2018) arXiv:1810.11894.
• [23] Hasegawa Y. Vu T. V., arXiv preprint, (2018) arXiv:1809.03292.
• [24] Koyuk T., Seifert U. Pietzonka P., J. Phys. A: Math. Theor., 52 (2019) 02LT02.
• [25] Carollo F., Jack R. L. Garrahan J. P., Phys. Rev. Lett., 122 (2019) 130605.
• [26] Vu T. V. Hasegawa Y., arXiv preprint, (2019) arXiv:1901.05715.
• [27] Proesmans K. Horowitz J. M., arXiv preprint, (2019) arXiv:1902.07008.
• [28] Shreshtha M. and Harris R. J., arXiv preprint, (2019) arXiv:1903.01972.
• [29] Hasegawa Y. Vu T. V., arXiv preprint, (2019) arXiv:1902.06376.
• [30] Vu T. V. Hasegawa Y., arXiv preprint, (2019) arXiv:1902.06930.
• [31] Sagawa T. Ueda M., Phys. Rev. Lett., 100 (2008) 080403.
• [32] Sagawa T. Ueda M., Phys. Rev. Lett., 104 (2010) 090602.
• [33] Horowitz J. M. Parrondo J. M. R., EPL, 95 (2011) 10005.
• [34] Sagawa T. Ueda M., Phys. Rev. E, 85 (2012) 021104.
• [35] Horowitz J. M. Esposito M., Phys. Rev. X, 4 (2014) 031015.
• [36] Hartich D., Barato A. C. Seifert U., J. Stat. Mech., 2014 (2014) P02016.
• [37] Barato A. C. Seifert U., Phys. Rev. Lett., 112 (2014) 090601.
• [38] Parrondo J. M., Horowitz J. M. Sagawa T., Nature Phys., 11 (2015) 131.
• [39] Potts P. P. Samuelsson P., Phys. Rev. Lett., 121 (2018) 210603.
• [40] Serreli V., Lee C.-F., Kay E. R. Leigh D. A., Nature, 445 (2007) 523.
• [41] Lau B., Kedem O., Schwabacher J., Kwasnieski D. Weiss E. A., Mater. Horiz., 4 (2017) 310.
• [42] Schimansky-Geier L., Kschischo M. Fricke T., Phys. Rev. Lett., 79 (1997) 3335.
• [43] Freund J. A. Schimansky-Geier L., Phys. Rev. E, 60 (1999) 1304.
• [44] Cao F. J., Dinis L. Parrondo J. M. R., Phys. Rev. Lett., 93 (2004) 040603.
• [45] Craig E., Kuwada N., Lopez B. Linke H., Ann. Phys., 17 (2008) 115.
• [46] Mogilner A. Oster G., Eur. Biophys. J., 28 (1999) 235.
• [47] Siwy Z. Fuliński A., Phys. Rev. Lett., 89 (2002) 198103.
• [48] Kundu A., Phys. Rev. E, 86 (2012) 021107.
• [49] Diana G. Esposito M., J. Stat. Mech., 2014 (2014) P04010.
• [50] Timpanaro A. M., Guarnieri G., Goold J. and Landi G. T., arXiv preprint, (2019) arXiv:1904.07574.
• [51] Mehta P. Schwab D. J., Proc. Natl. Acad. Sci. U.S.A., 109 (2012) 17978.
• [52] Hasegawa Y., Phys. Rev. E, 98 (2018) 032405.
• [53] Shiraishi N., arXiv preprint, (2017) arXiv:1706.00892.
• [54] Dinis L., Parrondo J. M. R. Cao F. J., EPL, 71 (2005) 536.
• [55] Feito M. Cao F. J., Eur. Phys. J. B, 59 (2007) 63.
• [56] Cao F., Feito M. Touchette H., Physica A, 388 (2009) 113 .
• [57] Zhou Y. Bao J.-D., Physica A, 343 (2004) 515 .
• [58] Merhav N. Kafri Y., J. Stat. Mech., 2010 (2010) P12022.