Transient fluctuation theorems for the currents
and initial equilibrium ensembles
We prove a transient fluctuation theorem for the currents for continuous-time Markov jump processes with stationary rates, generalizing an asymptotic result by Andrieux and Gaspard [J. Stat. Phys. 127, 107 (2007)] to finite times. The result is based on a graph-theoretical decomposition in cycle currents and an additional set of tidal currents that characterize the transient relaxation regime. The tidal term can then be removed by a preferred choice of a suitable initial equilibrium ensemble, a result that provides the general theory for the fluctuation theorem without ensemble quantities recently addressed in [Phys. Rev. E 89, 052119 (2014)]. As an example we study the reaction network of a simple stochastic chemical engine, and finally we digress on general properties of fluctuation relations for more complex chemical reaction networks.
pacs: 05.70.Ln, 02.50.Ga, 82.20.-w, 82.60.-s
Fluctuation theorems (FT’s in the following) have dominated the last twenty years of research in nonequilibrium statistical mechanics. Proceeding from the landmark formulation by Bochkov and Kuzovlev , a host of variations on the theme have been elaborated depending on the theoretical setup, the observables of interest and the time specifics. This paper falls within the line of inquiry of FT’s for stochastic dynamics [2, 3, 4], with special regard to the observables related to the cycle decomposition of Markov processes .
The relevance of cycle currents and their conjugate affinities to nonequilibrium thermodynamics was investigated by Hill and Schnakenberg [6, 7]. The intuitive picture is that a cycling process performed by a system is capable of transducing and transforming energy across the environment. As an example, the Otto cycle in the stationary performance of a car engine transforms the fuel’s chemical energy into the vehicle’s kinetic energy. Hence, a full characterization of the cycle structure of the system allows for the characterization of the thermodynamic behavior of nonequilibrium steady states, e.g. as regards their emergence from a minimum entropy production principle . In this setting, Andrieux and Gaspard have derived an asymptotic FT for the now-called Schnakenberg cycle currents  and applied it to chemical reactions . Further insights on FT’s and large deviations for cycle currents can be found in Refs. [11, 12, 13].
Under the assumption of local detailed balance  for quantum systems coupled with several heat and particle reservoirs, upon which cycle currents acquire a simple physical interpretation, recently Bulnes-Cuetara et al.  have shown that a fluctuation relation for the currents also holds at finite times, provided that the processes are sampled from one specific initial equilibrium ensemble. We also refer to Ref.  for some earlier results, Ref.  for an analysis of heat vs. work FT’s, Ref.  for further elaboration and Ref.  for the derivation of a similar result in a deterministic setting.
In this paper we provide the general theory underlying transient FT’s for time-homogeneous Markov jump processes. In particular, we generalize the result of Andrieux and Gaspard by including in the description certain tidal currents that complement the cycle currents. The result is based on an algebraic graph-theoretical analysis investigated by one of the authors in Ref. . We can then generalize the initial-ensemble result, extending it to time-homogeneous Markov processes on graphs without the requirement of local detailed balance. As an example, we analyze a simple chemical reaction network.
The paper is structured as follows. In Sec. 2 we anticipate the forms taken by the various fluctuation relations. In Sec. 3 we introduce the example of a chemical reaction network. In Sec. 4 we provide preliminary results from graph theory, and in Sec. 5 we give the general results from direct manipulations of the probability density of Markov jump processes, while for completeness in Appendix A the same results are derived in the Feynman-Kac formalism for the moment generating function. In Sec. 6 we look back at the example under a new light, before coming to conclusions.
2 A recap on fluctuation relations
Before moving to the full treatment, it is useful to make the statements in the introduction slightly more precise. The simplest fluctuation relation takes the form
Here, is the value taken by a stochastic variable called the reservoir entropy production of a process (sometimes denoted , etc.), which accounts for the flux of entropy towards the environment. In our setting, the entropy production is a stochastic process with probability , and denotes the long time limit (in the following we will not distinguish between probabilities and probability densities). Then Eq. (1) states that at sufficiently large times the probability of measuring a positive entropy production is exponentially favored with respect to the probability of measuring a negative entropy production. Since the entropy production is odd under time reversal, the fluctuation relation provides a formulation of the second law of thermodynamics and a characterization of the arrow of time.
To the entropy production of a system several mechanisms may contribute. Then, the fluctuation relation can be specialized as follows
where are the values taken by some physical observables that (almost surely) grow linearly in time, e.g. time-integrated heat fluxes, charge or matter currents, or any thermodynamic flux. The quantities are non-fluctuating intensive variables conjugate to the . If one adopts an abstract characterization of thermodynamic processes as generic Markov processes on a discrete state space, then count the net number of times the process has performed certain elementary cyclic paths.
Asymptotic relations can be extended to finite times by conditioning both the forward and the backward processes to some fixed initial state ,
where is a suitable state function. Unfortunately, from an experimental viewpoint conditioning a process to one exact initial state is problematic. However, notice that if one could sample both the forward and the backward processes with probability ( the normalization factor) one obtains an exact FT for the currents valid at all times
where we marginalized out . Yet, again, preparing the system in a given ensemble might also be awkward, unless it is of a very special kind. Indeed, for certain classes of systems it has been found that this ensemble is the equilibrium ensemble of the system where all forces producing cycle currents are momentarily disconnected. Physically, this corresponds to the situation where first one prepares the system by letting it relax to equilibrium, and then all of a sudden connects the external forces.
3 Example: network of chemical reactions
In this section we consider a simple reaction network. We derive a meaningful expression for the total entropy produced after an arbitrary sequence of reactions, writing it in terms of macroscopic physical currents of certain external species called chemostats, and in terms of an equilibrium initial ensemble. The reader eager to learn the full theory might want to skip this section. For the sake of simplicity we set .
Let and be two chemical species of observational interest that partake in three reversible chemical reactions, one that produces or consumes , one that produces or consumes , and one that converts into and vice versa:
Here are (assemblies of) chemostats, that is, substrate species that are independently administered by the environment and whose concentrations do not vary in time. A complete treatment of the thermodynamics of chemostatted networks has been provided by the authors in Ref. . This reaction scheme is a simple model of a molecular engine, where reactions 1 and 2 provide the working substances and , and reaction 3 performs chemical work by transforming molecules of into molecules of , while completing a thermodynamic cycle within the system. The observable of interest is the rate at which this latter reaction proceeds. The reaction network can be represented by a graph whose edges are the complexes of the species of observational interest, as follows
Under several assumptions (Boltzmann’s Stosszahlansatz, well-stirred solution etc.), the number of variable molecules undergoes a continuous-time Markov jump process satisfying the random-time change equation
where we collected the two variable species in a vector , and is the vector of stoichiometric coefficients of the -th reaction,
Each time a reaction proceeds the populations increase by an amount . Hence, the state space where this random process takes place is the lattice (that we call the chemical lattice) generated by the three vectors , limited to the sector of positive populations, as depicted in Fig. 2(a). Notice that the generating vectors are not independent, as
The quantity , counting the number of times reaction occurs up to time , is distributed with a unit-rate Poisson distribution  according to
The ’s are the rates at which reaction proceeds. By the law of mass-action these rates are proportional to the products of the abundances of the reactants,
where for the sake of simplicity we set all proportionality constants to unity.
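As an illustration, the jump process described above can be simulated with a standard Gillespie algorithm. The following is a minimal sketch, assuming (as in the text) that all rate constants and chemostat concentrations are set to unity; the function names, the initial state and the choice to track the net count of reaction 3 are ours.

```python
import random

# Stoichiometric vectors for the six oriented reactions of the scheme above.
# State n = (nX, nY); all rate constants and chemostat concentrations are
# set to unity, as in the text.
STOICH = {+1: (1, 0), -1: (-1, 0),   # reaction 1: produce / consume X
          +2: (0, 1), -2: (0, -1),   # reaction 2: produce / consume Y
          +3: (-1, 1), -3: (1, -1)}  # reaction 3: convert X into Y and back

def propensity(rho, n):
    """Mass-action rates: proportional to the abundances of the reactants."""
    nX, nY = n
    return {+1: 1.0, -1: float(nX),
            +2: 1.0, -2: float(nY),
            +3: float(nX), -3: float(nY)}[rho]

def gillespie(n0, t_max, rng):
    """Simulate up to time t_max; return the final state and the net count
    J3 of reaction 3 (the current of interest)."""
    t, n, J3 = 0.0, n0, 0
    while True:
        a = {rho: propensity(rho, n) for rho in STOICH}
        a_tot = sum(a.values())         # always >= 2: creation never stops
        t += rng.expovariate(a_tot)     # exponential waiting time
        if t > t_max:
            return n, J3
        u = rng.random() * a_tot        # pick a reaction ~ its propensity
        for rho, a_rho in a.items():
            if a_rho <= 0.0:
                continue
            u -= a_rho
            if u <= 0.0:
                break
        n = (n[0] + STOICH[rho][0], n[1] + STOICH[rho][1])
        J3 += (rho == +3) - (rho == -3)
```

Populations stay nonnegative because the consumption propensities vanish at zero population.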
In the following we will drop all explicit time dependencies. We define the currents as the stochastic variables that count the net number of transitions between site and a neighboring site,
Notice that . Each transition decreases the Gibbs free energy of the system by an amount
i.e. , , . Notice that both the currents and the Gibbs free energy differences are antisymmetric by inversion of the orientation of the transition,
A crucial observation is that the Gibbs free energy differences satisfy Kirchhoff’s Loop Law (KLL)
where the affinity is the total Gibbs free energy decrease around a cyclic process that starts at and moves by amount , then , then to return to by virtue of Eq. (9). Quite importantly, it is peculiar to chemical networks with mass-action law that the affinity does not depend on the state where the cycle is based, which will allow a significant simplification.
Finally we introduce the total entropy production
Notice that we restricted the sum to the positive verse of the reactions to avoid double-counting. This expression simplifies in view of KLL,
where we introduced
respectively with the meaning of total increase of species at fixed and total increase of species at fixed . We now introduce the second crucial ingredient, namely Kirchhoff’s Current Law (KCL). Since the trajectory is continuous, the total current out of a given state visited by the trajectory must be zero, except for states and , which are respectively a source and a sink of a unit current. KCL can then be integrated to give
where is the Heaviside step function on a discrete set, defined as , for , with the Kronecker delta (see Fig. 1(b) for clarification). Employing KCL we obtain for the entropy production
To give an interpretation of the latter term, we resort to the chemical master equation that rules the evolution of the probability of being at at time
where on the right-hand side we introduced the generator , which of course depends on the chemostats’ concentrations. The claim in Sec. 2 is that the second term in Eq. (20) should be obtained as the equilibrium distribution of the system where the third reaction is inhibited, which is achieved by setting . We then look for the solution of
which is easily seen to be a Poissonian
satisfying detailed balance
Finally, we obtain the desired result
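The Poissonian equilibrium and its detailed-balance property can be checked directly. The sketch below verifies, for a single species with the third reaction inhibited, that a Poissonian balances the mass-action birth-death rates edge by edge; the unit mean follows from setting all constants to unity, and the general mean `lam` is our illustrative parametrization.

```python
from math import exp, factorial

def poisson(n, lam=1.0):
    """Poissonian weight exp(-lam) * lam**n / n!."""
    return exp(-lam) * lam**n / factorial(n)

# With reaction 3 inhibited, each species performs an independent birth-death
# process: birth rate lam (chemostatted creation) and death rate n
# (mass-action consumption). Detailed balance across the edge n <-> n+1 reads
# pi(n) * w(n -> n+1) = pi(n+1) * w(n+1 -> n); the function returns the gap.
def detailed_balance_gap(n, lam=1.0):
    birth, death = lam, n + 1
    return poisson(n, lam) * birth - poisson(n + 1, lam) * death
```

The gap vanishes identically for every population and every mean, which is the detailed-balance statement above.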
FT’s for the chemical master equation in the form of Eqs. (1)-(4) can be derived by standard techniques, hence supporting the result that exact FT’s for the currents hold when the initial state is sampled from the equilibrium ensemble obtained by disconnecting the mechanisms that drive the system to nonequilibrium. In the next sections we will provide the full theory, and in Sec. 6 we come back to this example to discuss how the general theory allows us to generalize these observations to arbitrary chemical networks.
4 Tools: Cycles and cocycles in network thermodynamics
We will be concerned with continuous-time Markov jump processes on a finite state space. The state space of the system can be viewed as an oriented graph, and the trajectory followed by the jump process as a sequence of oriented edges connecting vertices. All thermodynamic observables associated to the trajectory (current, free energy increase etc.) are weights assigned to every edge of the graph, antisymmetric by inversion of the orientation of the edge. In this section we briefly review the ensuing algebraic graph-theoretical picture; a broader treatment can be found in Ref. .
4.1 Cycle/cocycle decomposition of a graph
The state space of the system is a connected oriented graph (without loops, allowing multiple edges) with oriented edges connecting distinct vertices . Let be the number of edges and that of vertices. The orientation is arbitrary, by we represent the inverse orientation of an edge. The graph is completely characterized by the matrix prescribing the incidence relations between edges and vertices:
We will make constant reference to the following example:
Real combinations of edges are denoted by a vector in Dirac notation . We also introduce the transpose vector and the scalar product .
Cycles are, intuitively, successions of oriented edges (a tail for each tip, at every vertex). Cycles are algebraically characterized as integer right null vectors of the incidence matrix,
Therefore they form a vector space. A preferred basis of cycles can be constructed by a standard procedure that was employed by Schnakenberg for the analysis of network thermodynamics . We briefly review it. A spanning tree is a maximal set of (unoriented) edges that contains no cycles. We choose one such arbitrary spanning tree,
The choice of a spanning tree is arbitrary from a mathematical point of view, while physically it corresponds to the choice of a different set of relevant observables. An important property of spanning trees is that within the spanning tree there exists a unique oriented path connecting any vertex of the graph to any other.
Edges not belonging to the spanning tree (dotted, above) are called the chords . Their number is given by Euler’s formula . Adding chord to the spanning tree identifies a unique cycle with orientation along the verse of the chord:
It can be proven that the set of cycles so generated is a basis for the null space of the incidence matrix .
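Schnakenberg’s construction is purely combinatorial and easy to code. The sketch below builds a spanning tree of a small hypothetical graph (the running example of this section is a figure, so the edge list here is our own) and closes each chord through the tree; one can then check that every fundamental cycle is indeed a right null vector of the incidence matrix.

```python
# Hypothetical oriented graph: 4 vertices, 5 edges given as (tail, head).
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
V, E = 4, len(EDGES)

def incidence():
    """Incidence matrix: -1 where an edge leaves a vertex, +1 where it enters."""
    D = [[0] * E for _ in range(V)]
    for e, (i, j) in enumerate(EDGES):
        D[i][e] -= 1
        D[j][e] += 1
    return D

def spanning_tree():
    """Greedy spanning tree via union-find; returns the set of tree edges."""
    parent = list(range(V))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    tree = set()
    for e, (i, j) in enumerate(EDGES):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.add(e)
    return tree

def tree_path(tree, a, b):
    """Signed edge vector of the unique oriented path a -> b inside the tree."""
    adj = {v: [] for v in range(V)}
    for e in tree:
        i, j = EDGES[e]
        adj[i].append((j, e, +1))
        adj[j].append((i, e, -1))
    stack, seen = [(a, [])], {a}
    while stack:
        v, path = stack.pop()
        if v == b:
            vec = [0] * E
            for e, s in path:
                vec[e] = s
            return vec
        for w, e, s in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append((w, path + [(e, s)]))

def fundamental_cycles():
    """One cycle per chord: the chord plus the tree path that closes it."""
    tree = spanning_tree()
    cycles = []
    for e in range(E):
        if e not in tree:
            i, j = EDGES[e]
            c = tree_path(tree, j, i)  # back from head to tail through the tree
            c[e] = 1                   # oriented along the verse of the chord
            cycles.append(c)
    return cycles
```

For this graph Euler’s formula gives E − V + 1 = 2 chords, hence two fundamental cycles.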
Orthogonal to the set of cycles is the set of cocycles (or cuts), generated by the corresponding cochords. A cochord is an edge belonging to the spanning tree. Their number is . Removing cochord from a spanning tree disconnects the graph into two basins. The set of edges that connect one basin to the other, oriented in the verse of the generating cochord, is a cocycle:
In this example, vertices in the source basins are disks, in the target basins are circles, edges of the spanning tree that connect them are dotted.
We can now give a vector representation of chords, cycles, cochords, and cocycles as linear combinations of edges of the graph. We denote them respectively , , . In our example, we have
A crucial result proven in Ref.  is that the identity over the edge space can be decomposed as
where denotes the outer product of two vectors, yielding an matrix. We will repeatedly employ this identity in the following sections. As a side comment, and are oblique complementary projectors, , , , which gives rise to an elegant formulation of network thermodynamics based on projectors.
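For a small graph the identity Eq. (33) can be checked by brute force. In the sketch below the cycle and cocycle vectors are written down by hand for a hypothetical 4-vertex, 5-edge graph with spanning tree {e0, e1, e2} (our own example, not the figure of this section); the sum of outer products reproduces the identity on the edge space, and cycles and cocycles come out mutually orthogonal.

```python
import numpy as np

# Hypothetical graph: edges (tail, head) are e0=(0,1), e1=(1,2), e2=(2,3)
# (the spanning tree) plus the chords e3=(3,0) and e4=(1,3).
E = 5
def unit(k):
    v = np.zeros(E)
    v[k] = 1.0
    return v

# Fundamental cycles: each chord plus the unique tree path closing it.
c3 = np.array([1.,  1.,  1., 1., 0.])   # e3 closed through e0, e1, e2
c4 = np.array([0., -1., -1., 0., 1.])   # e4 closed through -e2, -e1

# Fundamental cocycles: all edges crossing the cut of each cochord,
# oriented along the verse of the generating cochord.
d0 = np.array([1., 0., 0., -1., 0.])
d1 = np.array([0., 1., 0., -1., 1.])
d2 = np.array([0., 0., 1., -1., 1.])

# Eq. (33): identity on the edge space as a sum of cycle and cocycle
# outer products (|c_alpha><e_alpha| over chords, |e_mu><d_mu| over cochords).
one = (np.outer(c3, unit(3)) + np.outer(c4, unit(4))
       + np.outer(unit(0), d0) + np.outer(unit(1), d1) + np.outer(unit(2), d2))
```

The two sums in Eq. (33) are the oblique cycle and cocycle projectors mentioned in the text.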
4.2 Tidal and cycle currents and their dual variables
In network thermodynamics one assigns two observables to each oriented edge, the current and its conjugate force . They are required to be antisymmetric by edge inversion, , . We collect their values in two vectors and . The entropy production is the bilinear form
We immediately apply the identity decomposition Eq. (33) to the currents to obtain
The second line defines the cycle currents and the tidal currents . The former are well-known from Schnakenberg’s analysis. They quantify the cycling of a process. The latter give the total flux from a set of source vertices to a set of target vertices, as visualized in Eq. (31). This decomposition is somewhat analogous to the Helmholtz decomposition of a vector field into a curl and a gradient (modulo a harmonic term).
Now, plugging Eq. (36) into the entropy production we obtain
where we introduced the affinities as observables conjugate to the cycle currents, and the potential drops as observables conjugate to the tidal currents.
5 Results: Fluctuation theorems for the currents
5.1 Transient FT for joint tidal and cycle currents
We consider a continuous-time Markov jump process , starting at state and performing transitions in time to state . The rate of a jump from state to is . The process visits state for an interval before jumping to state , up to time . The joint probability density of the states visited by the trajectory is given by
where is the Dirac delta and . The marginal probability density for the states is given by
The actual dependence on is difficult to compute and not relevant for what follows.
We define the time-reversed process as that process where the succession of states and time intervals are inverted, and . Notice that for the time-reversed process we have
As a consequence, the following fluctuation relation between forward and backward successions of states holds
The above expression can be further marginalized. We define the (time-integrated) edge current along as a stochastic variable counting the net number of transitions from to ,
It satisfies the antisymmetry relations and . Eq. (43) can then be written in terms of the currents as follows
where the entries of are the thermodynamic forces . We can then finally marginalize for the currents taking values . It must be noted here that the expressions of the current and of the probability measure are conditioned on a fixed total number of transitions . However, experimentally one usually has access to the total number of transitions between two states irrespective of the total number of transitions that the trajectory performs. The probability of observing values of the currents up to time is given by
where is the probability that a total number of transitions occurs in time . Since the time-reversed process performs the same number of jumps, we do not need to compute it, and we obtain
We can now apply the cycle/cocycle decomposition presented in Sec. 4. To do this, we should first identify a spanning tree of the graph such that the chord currents are currents of physical relevance to the specific model at hand. We can then define stochastic cycle and tidal currents
The first counts the number of times the -th cycle is enclosed, the second counts the number of times the process jumps from the source to the target basin of the -th cut. Since cycle and tidal currents are in one-to-one correspondence with the edge currents, by a simple coordinate transformation (which can be proven, but is here irrelevant, to have unit Jacobian) we can move to the probability of the former, which by Eq. (38) obeys the joint FT
This equation generalizes the result of Andrieux and Gaspard to finite times. An important observation is that we do not need to condition this FT to an initial and a final state, since conditioning is implicit. In fact, knowledge of the tidal currents implies knowledge of the initial and final states.
5.2 Asymptotic FT for the cycle currents
We now focus on the tidal term. An important fact is that a Markov jump process on a graph is continuous, i.e. it can be drawn without lifting the pencil. As an important implication, tidal currents can only take values in , while cycle currents take values in . Intuitively, while a process can wind arbitrarily many times around a cycle in a preferential direction, the only way to increase a tidal current is to move from the source to the target basin of the cocycle, after which by continuity only the inverse can occur, restoring the tidal current to its initial value. In fact, if one orients all edges of the graph in such a way that the initial state is in the source basin of all cocycles, then tidal currents can only take values in .
Let be the rows of the incidence matrix. Then by continuity of the trajectory
This also shows that knowledge of the complete set of currents retains the information about the initial and final states, that is, de facto the FT Eq. (49) is conditioned to its boundary states. Since by definition the kernel of is the cycle space, then the row space of the incidence matrix spans the cocycle space. Then there exists a linear transformation such that , given by
The terms account for the fact that the rows of the incidence matrix are not linearly independent, and therefore one has to adjust for a double counting. Then
which is if both and are in the target or in the source, if is in the source and in the target, and vice versa.
It follows from this discussion that tidal currents are bounded, while cycle currents typically increase with time according to
where is the current per time, and when referred to a stochastic variable means asymptotically, almost surely. Then in the long time limit the asymptotic FT of Andrieux and Gaspard is obtained
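The asymptotic symmetry can be checked numerically by computing the scaled cumulant generating function as the leading eigenvalue of a tilted generator. Below is a minimal sketch for a hypothetical 3-state unicyclic network, with the counting field placed on the single chord; the rate values are arbitrary, and in the sign convention adopted in the code the symmetry reads e(λ) = e(−λ − A).

```python
import numpy as np

# Hypothetical 3-state ring; r[i, j] is the transition rate i -> j.
r = np.array([[0. , 2. , 0.5],
              [1. , 0. , 3. ],
              [0.7, 1.5, 0. ]])
# Cycle affinity: log-ratio of forward to backward rate products.
A = np.log(r[0, 1] * r[1, 2] * r[2, 0] / (r[1, 0] * r[2, 1] * r[0, 2]))

def tilted(lam):
    """Generator tilted by a counting field on the chord 2 -> 0."""
    L = r.T.copy()                     # L[j, i] = rate i -> j
    L[0, 2] *= np.exp(+lam)            # each jump 2 -> 0 counts +1
    L[2, 0] *= np.exp(-lam)            # each jump 0 -> 2 counts -1
    return L - np.diag(r.sum(axis=1))  # diagonal: minus the escape rates

def scgf(lam):
    """Scaled cumulant generating function: leading eigenvalue of tilted(lam)."""
    return float(np.linalg.eigvals(tilted(lam)).real.max())
```

One finds scgf(0) = 0 (conservation of probability) and scgf(λ) = scgf(−λ − A), the Gallavotti-Cohen-type symmetry, since the tilted generators at λ and −λ − A are related by a similarity transformation and hence share their spectrum.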
Unless a restoring force intervenes (e.g. periodic driving, time-dependent protocols etc.), tidal forces are doomed to disappear. Indeed, their effect is so weak that they do not affect any statistical property of the currents .
5.3 Unconditional transient FT for the cycle currents (without ensembles)
Let us define a function over the vertices
From Eq. (52) we obtain
which means that the tidal contribution is a state function. An intuitive way to picture this is the following. Suppose the trajectory moves from state to along the spanning tree. Then the tidal term is increased by the potential drops within the tree and the cycle term is untouched. Now, instead, suppose that and are connected by a chord, and that the trajectory travels along that chord. Then, one will count one full cycle and consequently will have to subtract terms from the tidal accounting. As far as the cocycle term is concerned, the result of these two operations is the same; that is, the tidal term only cares about where the trajectory is and not how it got there, because every time a cycle is enclosed that contribution is thrown into the cycle term.
Then, we can express the joint FT in terms of the cycle currents, conditioned to the boundary states:
Now suppose the initial state is sampled with probability , and that the initial state of the time-reversed processes is sampled with probability . The choice
clearly de-conditions the above expression with respect to the boundary states, which can then be marginalized yielding the finite time FT for the cycle currents, with given initial state
Finally, let us give a clear interpretation of the special distribution from which boundary states must be sampled to attain an exact finite-time FT. By definition is the potential drop across the generating cochord, which belongs to the spanning tree. Fixing a reference state , let be the vector representative of the unique oriented path in the spanning tree that connects to . Then one has
It is then well known that the state is the equilibrium steady state of the network where the chords are completely removed. Changing reference state amounts to shifting the potential by a constant ground value.
Let us recapitulate this important message. Consider a continuous-time Markov jump process on a graph. Choose a spanning tree of the graph. The criterion is that the currents flowing across the chords (i.e. edges not belonging to the spanning tree) should be of particular physical relevance. Then, such cycle currents satisfy an exact transient fluctuation relation if the processes are sampled from the equilibrium ensemble reached by the Markov process with all rates along chords set to zero.
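The recipe can be made concrete in a few lines. For a hypothetical 3-state ring whose spanning tree is the chain 0-1-2 (the rate values are arbitrary and the names are ours), the sketch below sets the chord rates to zero, solves for the stationary distribution of the resulting tree process, and one can check that it satisfies detailed balance along the tree edges, i.e. that it is the equilibrium ensemble from which initial states should be sampled.

```python
import numpy as np

# Hypothetical 3-state ring; r[i, j] is the transition rate i -> j.
# Spanning tree: the chain 0-1-2; the single chord is the edge (2, 0).
r = np.array([[0. , 2. , 0.5],
              [1. , 0. , 3. ],
              [0.7, 1.5, 0. ]])

def equilibrium_without_chords(rates):
    """Stationary distribution of the process with all chord rates set to zero."""
    rt = rates.copy()
    rt[2, 0] = rt[0, 2] = 0.0            # disconnect the chord (2, 0)
    L = rt.T - np.diag(rt.sum(axis=1))   # generator: L[j, i] = rate i -> j
    w, V = np.linalg.eig(L)
    pi = V[:, np.argmin(np.abs(w))].real # kernel vector (eigenvalue 0)
    pi = np.abs(pi)
    return pi / pi.sum()

pi = equilibrium_without_chords(r)
```

Since the chord-removed process lives on a tree, its stationary state is automatically an equilibrium (detailed-balanced) state.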
Finally, as is well-known equilibrium ensembles can be obtained by a maximum entropy procedure  with suitable constraints that incorporate the information available about the system before the experiment is conducted, which is used to build up a prior probability (on the role of priors in nonequilibrium statistical mechanics at a foundational level, see Refs. [26, 27] by one of the authors). Then, it is interesting to note that the ensemble that needs to be prepared for an observation of the FT at all times is precisely the maximum entropy ensemble (the state of lowest information) with respect to the experimental apparatus that is going to measure the currents. The initial ensemble is dictated uniquely by the topology of the graph, expressed by Eq. (51) and by the potential Eq. (55) whose average plays the role of the maximum entropy constraint according to the theory pioneered by Jaynes .
6 Example: network of chemical reactions revisited
The graph-theoretical method presented in Secs. 4 and 5 can be fruitfully applied to the chemical network analyzed in Sec. 3, conjecturing that all results can be extended to the infinite case in some mathematically rigorous way. The chemical lattice admits an infinite number of spanning trees, most of which have no regularity. We choose the comb depicted in Fig. 2(a), consisting of the edges along the axis and of all the vertical edges. With reference to Fig. 2(b), there are two kinds of cycles: cycles of kind generated by chords have null affinity; cycles of kind generated by chords have affinity . Hence the cycle term reads
yielding the first term in Eq. (20).
There are two types of cocycles. Horizontal cochords carrying potential drop generate cocycles of type in Fig. 2(c) with current . As regards the vertical set of cochords of type , notice that all those that are based at the same carry the same potential drop . Then a resummation occurs, as depicted in Fig. 2(d), and one obtains an effective cocycle carrying current . We then obtain
which is the second term in Eq. (20).
Finally, the initial equilibrium ensemble that makes the finite-time FT for the currents hold is the steady state of a Markov process occurring on the comb, which is obtained by eliminating reaction 3 from the reaction scheme. It is interesting to note that, due to the fact that the affinities of type 0 in Fig. 2 all vanish, any spanning tree that only consists of reaction steps of kind 1 and 2 will give rise to the same initial ensemble. Hence, while in principle the choice of the spanning tree affects the FT, chemical networks enjoy certain regularity properties that boil down the great generality of Schnakenberg’s analysis to actual physical currents. In the specific case of chemical networks, this possibility is granted by the mass-action law and by the fact that the topology of the chemical lattice in Fig. 2(a) is simply obtained by shifting and reproducing the chemical reaction network in Eq. (6). While we postpone a full discussion of chemical networks to a future publication, it is interesting to note that not all chemical reaction schemes allow for such great simplification, depending on certain topological properties related to the concept of deficiency of the network .
In this paper we collected several results about finite-time FT’s for the currents for stationary Markov jump processes on a finite state space, giving a unified framework based on certain algebraic graph-theoretical techniques that allow one to decompose any thermodynamic observable in terms of cycle and cocycle observables, by virtue of the fundamental identity Eq. (33). In particular, we generalized the result of Andrieux and Gaspard  for the so-called Schnakenberg currents to finite times, both by direct manipulation of the probability density function of Markov jump trajectories, and by the generating function approach presented in Appendix A.
One major limitation of our results that calls for further generalization is the requirement that transition rates are time-independent. Indeed, the FT without ensemble quantities discussed by Bulnes-Cuetara et al.  was formulated for time-dependent protocols. We mention, without further discussion, that a full generalization of their result to arbitrary Markov processes on finite state spaces in terms of cycles and cocycles is significantly more complicated. The resulting expressions defy a clear physical interpretation. Partial results can be obtained under more restrictive assumptions, e.g. that the affinities are constant in time. Furthermore, as regards linear chemical networks we point out that there exists a finite-time FT with the initial state sampled from the steady nonequilibrium ensemble, with a time-dependent effective affinity . We leave these issues and the treatment of general chemical reaction networks to future inquiry.
We thank G. Bulnes-Cuetara for discussion and A. Wachtel for comments on the manuscript. The research was supported by the National Research Fund Luxembourg in the frame of project FNR/A11/02 and of Postdoc Grant 5856127.
Appendix A Generating function approach
As an addendum, we will prove the “initial ensemble” FT presented in Sec. 5.3 by the commonly employed method of the generating function, generalizing the treatment explored by Bulnes-Cuetara in Ref.  and by Andrieux et al. in Ref. .
Let be a set of counting fields defined on the chords of the graph and the unit vector of length . It is well-known  that the moment generating function for the cycle currents is given by
where evolves by the Feynman-Kac type of equation
Here, is the tilted generator with entries
and the initial condition is given by
where is the initial probability density over states. It is important that does not depend on . Physically, this reflects the fact that the preparation of the system cannot depend on the output of the counting experiment.
The tilted generator obeys a crucial time-reversal symmetry relation. Let us consider the generator with entries
By definition, the -th affinity is the circulation of the force around a cycle comprising chord and the unique path that is internal to the spanning tree and that goes from state to state . Then,
Moreover, notice that for all edges internal to the spanning tree
We then obtain
yielding the symmetry relation
where , being the normalization factor.
Let us also consider
In general, the two are not related unless
in which case
Eq. (74) is nothing but the requirement that the initial state is the equilibrium state described in Sec. 5.3, while Eq. (75) is well known to imply the fluctuation relation when moving from the generating function picture to the probability density picture.
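The whole construction can be verified numerically at finite time. The sketch below uses a hypothetical 3-state ring (rates, names, and the sign convention g(λ, t) = g(−λ − A, t) are ours): it propagates the tilted generator from the equilibrium ensemble of the chord-removed chain and the symmetry of the moment generating function then holds at arbitrary times, not only asymptotically.

```python
import numpy as np

# Hypothetical 3-state ring; r[i, j] is the transition rate i -> j.
# Spanning tree: the chain 0-1-2; counting field on the chord (2, 0).
r = np.array([[0. , 2. , 0.5],
              [1. , 0. , 3. ],
              [0.7, 1.5, 0. ]])
A = np.log(r[0, 1] * r[1, 2] * r[2, 0] / (r[1, 0] * r[2, 1] * r[0, 2]))

# Equilibrium of the chord-removed chain: detailed balance along tree edges.
pi = np.array([1.0,
               r[0, 1] / r[1, 0],
               (r[0, 1] / r[1, 0]) * (r[1, 2] / r[2, 1])])
pi /= pi.sum()

def tilted(lam):
    """Generator tilted by a counting field on the chord 2 -> 0."""
    L = r.T.copy()
    L[0, 2] *= np.exp(+lam)
    L[2, 0] *= np.exp(-lam)
    return L - np.diag(r.sum(axis=1))

def expm(M, t):
    """Matrix exponential exp(t M) via eigendecomposition (generic M)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

def g(lam, t):
    """Moment generating function of the chord current at time t,
    with the initial state sampled from the chord-removed equilibrium."""
    return float(np.ones(3) @ expm(tilted(lam), t) @ pi)
```

The finite-time symmetry g(λ, t) = g(−λ − A, t) follows from the similarity relation between the tilted generators discussed above, with the diagonal similarity matrix given precisely by the equilibrium distribution pi.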
-  Bochkov G. N. and Kuzovlev Y. E., Nonlinear fluctuation-dissipation relations and stochastic models in nonequilibrium thermodynamics: I. Generalized fluctuation-dissipation theorem, 1981 Physica A 106, 443; Nonlinear fluctuation-dissipation relations and stochastic models in nonequilibrium thermodynamics: II. Kinetic potential and variational principles for nonlinear irreversible processes, 1981 Physica A 106, 480.
-  Kurchan J., Fluctuation theorem for stochastic dynamics, 1998 J. Phys. A.: Math. Gen. 31, 3719.
-  Maes C., The fluctuation theorem as a Gibbs property, 1999 J. Stat. Phys. 95, 367.
-  Lebowitz J. L. and Spohn H., A Gallavotti-Cohen-Type Symmetry in the Large Deviation Functional for Stochastic Dynamics, 1999 J. Stat. Phys. 95, 333.
-  Kalpazidou S., 1995 Cycle representations of Markov processes (Berlin: Springer).
-  Hill T. L., 2005 Free Energy Transduction and Biochemical Cycle Kinetics (New York: Dover).
-  Schnakenberg J., Network theory of microscopic and macroscopic behavior of master equation systems, 1976 Rev. Mod. Phys. 48, 571.
-  Polettini M., Macroscopic constraints for the minimum entropy production principle, 2011 Phys. Rev. E 84, 051117.
-  Andrieux D. and Gaspard P., Fluctuation theorem for currents and Schnakenberg network theory, 2007 J. Stat. Phys. 127, 107.
-  Andrieux D. and Gaspard P., Fluctuation theorem and Onsager reciprocity relations, 2004 J. Chem. Phys. 121, 6167.
-  Wachtel A., Vollmer J. and Altaner B., Determining the Statistics of Fluctuating Currents: General Markovian Dynamics and its Application to Motor Proteins, 2014 arXiv:1407.2065.
-  Faggionato A. and Di Pietro D., Gallavotti-Cohen-Type Symmetry Related to Cycle Decompositions for Markov Chains and Biochemical Applications, 2011 J. Stat. Phys. 143, 11.
-  Bertini L., Faggionato A. and Gabrielli D., Flows, currents, and cycles for Markov Chains: large deviation asymptotics, 2014 arXiv:1408.5477.
-  Van den Broeck C. and Esposito M., Three faces of the second law. I. Master equation formulation, 2010 Phys. Rev. E 82, 011143.
-  Bulnes-Cuetara G., Esposito M. and Imparato A., Exact fluctuation theorem without ensemble quantities, 2014 Phys. Rev. E 89, 052119.
-  Andrieux D., Gaspard P., Monnai T. and Tasaki S., The fluctuation theorem for currents in open quantum systems, 2009 New J. Phys. 11, 043014.
-  Kim K., Kwon C. and Park H., Heat fluctuations and initial ensembles, 2014 arXiv:1406.7084.
-  Fogedby H. C. and Imparato A., Heat fluctuations and fluctuation theorems in the case of multiple reservoirs, 2014 arXiv:1408.0537.
-  Campisi M., Hänggi P. and Talkner P., Colloquium: Quantum fluctuation relations: Foundations and applications, 2011 Rev. Mod. Phys. 83, 771.
-  Polettini M., Cycle/cocycle oblique projections on oriented graphs, 2014 arXiv:1405.0899.
-  Seifert U., Stochastic thermodynamics: principles and perspectives, 2008 Eur. Phys. J. B 64, 423.
-  Polettini M. and Esposito M., Irreversible thermodynamics of open chemical networks I: Emergent cycles and broken conservation laws, 2014 J. Chem. Phys. 141, 024117.
-  Ethier S. N. and Kurtz T. G., 1986 Markov processes: Characterization and convergence (New York: John Wiley & Sons).
-  Nakanishi N., 1971 Graph Theory and Feynman Integrals (New York: Gordon and Breach).
-  Jaynes E. T., Information theory and statistical mechanics, 1957 Phys. Rev. 106, 620.
-  Polettini M., Of dice and men. Subjective priors, gauge invariance, and nonequilibrium thermodynamics, 2013 Proceedings of the 12th Joint European Thermodynamics Conference.
-  Polettini M., Nonequilibrium thermodynamics as a gauge theory, 2012 Europhys. Lett. 97, 30003.
-  Feinberg M., Chemical reaction network structure and the stability of complex isothermal reactors-I. The deficiency zero and deficiency one theorems, 1987 Chem. Eng. Sci. 42, 2229.
-  Bulnes-Cuetara G., Fluctuation theorem for quantum electron transport in mesoscopic circuits, 2013 arXiv:1310.0620.
-  Andrieux D. and Gaspard P., Temporal disorder and fluctuation theorem in chemical reactions, 2008 Phys. Rev. E 77, 031137.