Statistical analysis of the first passage path ensemble of jump processes
The transition mechanism of jump processes between two different subsets in state space reveals important dynamical information about the processes and has therefore attracted considerable attention in recent years. In this paper, we study the first passage path ensemble of both discrete-time and continuous-time jump processes on a finite state space. The main approach is to divide each first passage path into nonreactive and reactive segments and to study them separately. The analysis applies to jump processes which are non-ergodic, as well as to continuous-time jump processes whose waiting time distributions are non-exponential. In the particular case that the jump processes are both Markovian and ergodic, our analysis elucidates the relations between the study of first passage paths and the study of transition paths in transition path theory. We provide algorithms to numerically compute statistics of the first passage path ensemble. The computational complexity of these algorithms scales with the complexity of solving a linear system, for which efficient methods are available. Several examples demonstrate the wide applicability of the derived results across research areas.
Key words. jump process, non-ergodic process, non-exponential distribution, first passage path, transition path theory
The (Markov) jump process has been extensively studied in the past decades and has become a standard mathematical model that is widely applied to problems arising in physics, chemistry, biology, etc. [7, 24]. Among the many important topics in the study of jump processes, first passage paths have attracted considerable attention across disciplines, where the main purpose is to understand the transition mechanism of the system between different subsets of the state space and, e.g., to assess how much time the transitions typically take. Examples include reaction networks in chemistry [38, 37], phenotypic switches in cell biology, conformational changes in molecular dynamics [17, 25], disease spreading within certain geographical areas [4, 6], as well as the spread of information on social networks. In these contexts, studies of first passage paths are often very helpful for understanding the underlying processes and for fostering or preventing transition events.
A common situation that one often encounters in many real-world applications is that the system exhibits metastability and transition events become very rare [27, 15, 30]. To study the transition events in these scenarios, transition path theory (TPT) has been developed both for diffusion processes on continuous state space [9, 8, 34] and for Markov jump processes on discrete state space [23, 5]. It provides a probabilistic framework to analyze the statistical properties of the system's reactive trajectories (following the terminology in ) and enables us to answer questions which are important for understanding the transition mechanism of the system. In the discrete state space setting, TPT can be applied to study Markov jump processes which are ergodic with a unique invariant measure [23, 35].
Meanwhile, inspired by the wide applications that have emerged from the rapid development of network science, scientific interest has extended to processes which go beyond ergodic Markov jump processes [26, 29, 28]. One simple example in which ergodicity is violated is the random walk on a directed graph which contains sink-states (states with no outward edges). Another example, of a non-Markovian process, is the continuous-time random walk with possibly non-exponential waiting time distributions. These kinds of processes have been applied to model burst and memory effects in real-world networks [13, 20, 10]. The processes described above are no longer ergodic Markov jump processes and therefore TPT cannot be applied directly. However, interesting insights can be obtained by studying the transition mechanism from one subset to another, determining how much time the transitions typically take, and identifying key nodes or edges for the transition events. To the best of our knowledge, these questions have not been systematically addressed in the non-Markovian and non-ergodic setting (see the related study in ).
In the current work we consider the first passage path ensemble of both continuous-time and discrete-time jump processes on a discrete, finite state space. While we are strongly influenced by TPT and will study similar quantities such as the probability of visiting each node and probability fluxes on each edge, we emphasize that both the subject being studied and the setting are different from the study of TPT [23, 5]. Specifically, we will study the first passage path by dividing it into nonreactive and reactive segments. While in the ergodic case the reactive segments coincide with the reactive trajectories in TPT, in this work we also study the statistics of nonreactive segments and their relations with the statistics of the entire first passage paths. Furthermore, in our study the processes are allowed to be non-ergodic, and in the continuous-time scenario the waiting time at each node can also be non-exponentially distributed. The main contributions of this paper can be summarized as follows. Firstly, statistical properties of both the nonreactive and reactive path ensembles are analyzed, which enables us to compute several important quantities associated with the reactive and nonreactive ensembles. Their relations to the entire first passage path ensemble are obtained subsequently. Secondly, since the processes are no longer necessarily ergodic or stationary, our analysis (of the reactive segments) can be viewed as an extension of TPT to nonequilibrium path ensembles. When further assuming that the processes are ergodic Markov jump processes and the first passage path ensemble is in equilibrium, our analysis recovers TPT and allows us to elucidate the statistical connection between transition paths in TPT and first passage paths. Thirdly, the numerical implementation of the derived results is discussed, which is useful in many applications.
The paper is organized as follows: The first passage path ensemble of discrete-time Markov jump processes is studied in Section LABEL:sec-reactive. The first passage path ensemble of continuous-time jump processes with general waiting time distributions is considered in Section LABEL:sec-ctjp. Connections of our analysis with TPT are discussed in Section LABEL:sec-ergodic, where we recover the results of TPT when the processes are ergodic and the path ensemble is in equilibrium. Algorithmic issues are discussed in Section LABEL:sec-algo and several numerical examples are presented in Section LABEL:sec-example to illustrate our analysis framework. Conclusions and further discussions are presented in Section LABEL:sec-conclusion. Finally, some supplementary analysis related to Section LABEL:sec-reactive and Section LABEL:sec-ctjp is provided in Appendix LABEL:appsec-1 and Appendix LABEL:appsec-2.
For the reader’s convenience, important notations which will be used in the current work are summarized in Table LABEL:tab-notation.
- set of neighbor nodes
- set of sink nodes
- subset of node set
- subset of node set
- committor function
- backward committor function
- invariant measure of discrete process
- normalization constant
- nonreactive trajectory ensemble
- reactive trajectory ensemble
- last hitting time of set
- initial distribution on set
- initial distribution of reactive trajectories
- transition probability of discrete process
- probability density of the waiting time along an edge
- transition probability of the nonreactive trajectories
- average number of times that the first passage paths visit a node
- transition probability of the reactive trajectories
- average number of times that the nonreactive trajectories (except the last node) visit a node
- average number of times that the first passage paths visit an edge
- average number of times that the nonreactive trajectories visit a node
- average number of times that the nonreactive trajectories visit an edge
- average number of times that the reactive trajectories visit a node
- average number of times that the reactive trajectories visit an edge
- probability that the system stays at a node longer than a certain amount of time
- average total time of the first passage paths
- probability density that the system jumps along an edge at a certain time
- average total time of the nonreactive trajectories
- average amount of time that the system stays at a node
- average total time of the reactive trajectories
2 Analysis of the first passage path ensemble: discrete-time Markov jump processes
In this section, we study the first passage path ensemble when the system is a discrete-time Markov jump process. After introducing various useful notations in Subsection LABEL:subsec-prepare, we will analyze the statistics of the nonreactive ensemble and the reactive ensemble in Subsection LABEL:subsec-nonreactive and Subsection LABEL:subsec-reactive, respectively. Their connections with the whole first passage path ensemble are discussed in Subsection LABEL:subsec-fpp.
We also emphasize that although several quantities that we will consider are strongly influenced by TPT, the main difference is that they are related to the first passage path ensemble, which may be out of equilibrium, and therefore do not rely on the existence of an invariant measure. Connections of our analysis with TPT will be further discussed in Section LABEL:sec-ergodic.
We start by introducing the processes we will consider and the notation that will be used in this paper. Let $G = (V, E)$ be the graph representation of a directed network, where $V$ is the set of nodes and $E$ is the set of edges. We assume that $V$ is a finite set and, without loss of generality, $V = \{1, 2, \ldots, n\}$ for some $n \in \mathbb{N}$. We will write $(i,j) \in E$ if there is a directed edge from node $i$ to node $j$, and denote by $N(i)$ the set consisting of all nodes which can be directly reached from node $i$. For simplicity, we assume that there are no loop edges in $E$, i.e. $(i,i) \notin E$ for $i \in V$. Since we are also interested in the case when the network contains sinks, we allow the case when $N(i) = \emptyset$ for some node $i$ and denote by $S$ the set of sink nodes.
Suppose that two disjoint nonempty subsets $A, B \subset V$ are given, and let $\mu$ be a probability distribution on the set $A$. Consider the discrete-time Markov process defined by the jump probabilities $p_{ij}$ for $(i,j) \in E$, where $p_{ij}$ is the probability that the system will jump to state $j$ if its current state is $i$. Given a node $i$, we define two stopping times related to the sets $A$ and $B$
and set (or ) if the process will never reach set (or ) from . In particular, we have for and for . Assume the process starts from some node and consider the path until it reaches set . Denoting the path by , we have , , where and for . Such a path is called a first passage path [6, 21]. We also define the last hitting time of set as
and set , if for . In the following, we will use the notations , and for simplicity if we do not explicitly emphasize the initial state .
Given any first passage path starting from a state in set (therefore ), we can split this path into two segments using the last hitting time . The first segment , which will be termed the nonreactive trajectory or nonreactive segment, consists of the consecutive nodes visited by the process before it eventually leaves set . The second segment , which is called the reactive trajectory in transition path theory , is the transition pathway of the process from set to set (see Figure LABEL:graph-1(a) for illustration). We will call it either the reactive trajectory or the reactive segment. Equivalently, the reactive trajectory is the segment of the first passage path starting from the node in which is hit by the process for the last time. Clearly, we have and for . The ensembles of the nonreactive trajectories and reactive trajectories are denoted by
respectively, where goes over all first passage paths with initial state and .
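To make the splitting concrete, here is a minimal Monte Carlo sketch; the function name and the list-of-lists representation of the transition probabilities are our own illustrative choices, not part of the text. It samples one first passage path from a start node in $A$ to the target set $B$ and splits it at the last hitting time of $A$ into nonreactive and reactive segments, which share the last node hit in $A$.

```python
import random

def first_passage_path(P, A, B, x0, rng):
    """Sample one first passage path from x0 in A until it first hits B,
    then split it at the last visit to A into a nonreactive segment and
    a reactive segment (the transition pathway from A to B)."""
    path = [x0]
    n = len(P)
    while path[-1] not in B:
        i = path[-1]
        # jump to the next node according to row i of the transition matrix
        path.append(rng.choices(range(n), weights=P[i])[0])
    sigma = max(k for k, x in enumerate(path) if x in A)  # last hitting time of A
    return path[:sigma + 1], path[sigma:]                 # nonreactive, reactive
```

Note that the two returned segments overlap in exactly one node, the last node of $A$ visited by the path, matching the splitting described above.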
Given a node , we say that set (or ) is reachable from , if there is a path of finite length , where , such that , (or ), , , and , . Throughout the paper, we will make the following assumption.
For each node , either set or set is reachable from . Furthermore, set is reachable from each node .
Under the above assumption, we can conclude by contradiction that with probability for all . Therefore, the committor function,
corresponding to sets is well defined and will play an important role in the following study. It is known that it satisfies the equations 
We also define two subsets
Clearly, we have and , where the latter is implied by Assumption LABEL:assump-1 together with the following result.
Let be the last hitting time defined in (LABEL:sigma-def) and subsets be defined in (LABEL:v-minus-plus). We have
iff and set is reachable from node .
iff and set is reachable from node .
Proof. We will only prove the first two conclusions since the third one can be obtained using a similar argument as the second one.
By definition of , we know that the two events and are equivalent. It follows from the definition of in (LABEL:committor-q) that .
Suppose . Using the definition of , we know with a positive probability, which implies that set is reachable from . Conversely, suppose and set is reachable along the path . If , then and . Applying equation (LABEL:committor), we obtain . As for , we can repeat the argument and obtain . However, this contradicts the fact that since . Therefore we have .
From Proposition LABEL:prop-0, we know that Assumption LABEL:assump-1 implies and .
2.2 Nonreactive ensemble
In this subsection, we study the nonreactive ensemble defined in (LABEL:ensembles). Given a nonreactive trajectory , where and is the last hitting time defined in (LABEL:sigma-def), it is clear that for (Proposition LABEL:prop-0). Our aim is to define a Markov jump process whose path ensemble coincides with . For this purpose, we first introduce a virtual node to mark the end of the path and consider the extended node set . A jump from some node to node indicates that is the last node of the nonreactive trajectory. We also define the transition probability from node to as
and . Using (LABEL:committor), we can verify that is a probability matrix on set with row sum one. Furthermore, we have
The ensemble can be generated by trajectories (without the end node ) of the Markov jump process on space with transition probability matrix in (LABEL:p-bar) and initial distribution .
Proof. Let be one nonreactive trajectory such that . For and , using the Markov property and the definition of the committor function , we can compute
For , the event is equivalent to the event that the original process (on ) hits set first (before it returns to ) after it leaves set from node . The probability of the latter is equal to and therefore coincides with the probability that the process (on ) jumps from to . The case for follows from a similar argument.
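The construction can be sketched numerically as follows. The sketch assumes that the nonreactive chain is the committor-conditioned chain, i.e. $\bar{p}_{ij} = p_{ij}(1-q_j)/(1-q_i)$ for nodes with $q_i < 1$, and that the jump probability from $i \in A$ to the virtual end node is $\sum_j p_{ij} q_j$; these explicit formulas are our assumptions and should be checked against the precise definitions. With them, the committor equation makes every row sum to one, which the code verifies.

```python
import numpy as np

def nonreactive_chain(P, q, A, B):
    """Extended transition matrix generating the nonreactive ensemble.

    States are the nodes of V \\ B plus one virtual end node (last index)
    that marks the end of a nonreactive trajectory.  Assumes the conditioned
    probabilities pbar_ij = p_ij (1 - q_j) / (1 - q_i) and, for i in A,
    pbar_{i, end} = sum_j p_ij q_j (the process leaves A for good).
    """
    n = P.shape[0]
    keep = [i for i in range(n) if i not in set(B)]  # nodes of V \ B
    m = len(keep)
    Pbar = np.zeros((m + 1, m + 1))
    for a, i in enumerate(keep):
        for b, j in enumerate(keep):
            if q[i] < 1.0:
                Pbar[a, b] = P[i, j] * (1.0 - q[j]) / (1.0 - q[i])
        if i in set(A):
            Pbar[a, m] = np.dot(P[i], q)             # jump to the virtual end node
    Pbar[m, m] = 1.0                                  # the end node is absorbing
    return Pbar, keep
```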
Given , let be the average number of times that node is visited by the nonreactive trajectories, i.e.
where is the indicator function, denotes the ensemble average of , or equivalently, the ensemble average of the first passage paths with the initial distribution . It is straightforward to verify
Furthermore, we have
For , function defined in (LABEL:theta-bar) satisfies the equation
where probabilities are given in (LABEL:p-bar).
Proof. Using Proposition LABEL:prop-1, we can rewrite the transition probabilities using the ensemble average, i.e.
where and we have set , which marks the end of the nonreactive trajectory. Therefore, using the definition of in (LABEL:theta-bar) and summing up , we obtain
Since , we know and therefore
where we have used the definition of again and the fact that .
In particular, summing up in (LABEL:theta-bar-eqn) and using the fact that has row sum one, we can verify the equality
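For numerical purposes, the visit counts of the nonreactive ensemble follow from one linear solve. The sketch below (its names are illustrative) assumes the committor-conditioned probabilities $\bar{p}_{ij} = p_{ij}(1-q_j)/(1-q_i)$ on $V \setminus B$ and an initial distribution $\mu$ supported on $A$; the occupation counts then solve $\bar{\theta} = \mu + \bar{P}^{T}\bar{\theta}$.

```python
import numpy as np

def nonreactive_visits(P, q, A, B, mu):
    """Average number of visits theta_bar(i) by the nonreactive trajectories.

    Builds the conditioned probabilities pbar_ij = p_ij (1-q_j)/(1-q_i) on
    V \\ B (an assumption of this sketch) and solves the occupation-count
    equation theta_bar = mu + Pbar^T theta_bar of the absorbing chain.
    `mu` and the returned vector are indexed by the nodes of V \\ B.
    """
    keep = [i for i in range(P.shape[0]) if i not in set(B)]
    m = len(keep)
    Pbar = np.zeros((m, m))
    for a, i in enumerate(keep):
        if q[i] < 1.0:
            for b, j in enumerate(keep):
                Pbar[a, b] = P[i, j] * (1.0 - q[j]) / (1.0 - q[i])
    theta = np.linalg.solve(np.eye(m) - Pbar.T, mu)
    return theta, keep
```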
It is also useful to introduce
which is the probability distribution of the last hitting node on set and satisfies (see Figure LABEL:graph-1 (b) for illustration). Furthermore, in analogy to (LABEL:theta-bar), we define
which coincides with on and the summation above is interpreted to be zero when . The following relations are straightforward.
Let and the probabilities be defined in (LABEL:theta-bar) and (LABEL:p-bar), respectively. Then for ,
Proof. We only need to prove the first equality in (LABEL:mu-r-theta-bar). Notice that (LABEL:mu-r) can be written as
where we have used the convention that . Similarly as in (LABEL:p-bar-path), applying Proposition LABEL:prop-1, we have
Therefore, we conclude that
We also introduce the nonreactive probability flux, which is defined as
and equals zero for other edges in . From (LABEL:theta-bar) and (LABEL:p-bar-path), can be written as a path ensemble average
i.e. the average number of times that edge is visited by the nonreactive trajectories. Applying Proposition LABEL:prop-2 and Proposition LABEL:prop-3, we have
2.3 Reactive ensemble
In this subsection, we turn to the reactive ensemble in (LABEL:ensembles). First of all, from the definition (LABEL:ensembles), we know that the probability distribution of the initial state of the ensemble coincides with the probability distribution of the end node in the ensemble . See (LABEL:mu-r) and Proposition LABEL:prop-3 (an alternative derivation can be found in Appendix LABEL:appsec-1).
Now recall the definition of set in (LABEL:v-minus-plus), Proposition LABEL:prop-0, and also notice that . In analogy to the previous subsection, our aim is to construct a Markov jump process on space whose path ensemble coincides with . To do so, we define the transition probabilities
where , and is the delta function centered at . We have the following result (the proof is omitted since it is similar to that of Proposition LABEL:prop-1; note that a similar result under the assumption of ergodicity has been obtained in ).
The ensemble can be generated from the trajectories of the Markov jump process on space which is defined by the transition probabilities in (LABEL:p-tilde) and the initial distribution on .
For , let be the average number of times that node has been visited by reactive trajectories, i.e.
where is a first passage path and denotes the corresponding ensemble average. Clearly when . Summing up in (LABEL:theta-tilde), we can obtain
Similar to Proposition LABEL:prop-2, we have
Let be the transition probability defined in (LABEL:p-tilde). We have for , and
Proof. Since nodes of set appear in reactive trajectories only as the starting state with probability distribution , we have on . Equation (LABEL:theta-tilde-2) can be proved in an analogous way to Proposition LABEL:prop-2, by rewriting as the ensemble average and applying Proposition LABEL:prop-4. See (LABEL:p-bar-path). By the definition of the stopping time , we know for . Therefore, from (LABEL:theta-tilde),
Summing up and using the fact that , we conclude
Notice that (LABEL:theta-tilde-2) holds for node as well. For numerical computation, we can first compute for by solving the linear system (LABEL:theta-tilde-2), and then obtain the end node distribution for each from (LABEL:theta-tilde-2).
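A corresponding numerical sketch for the reactive ensemble: it assumes the committor-conditioned reactive probabilities $\tilde{p}_{ij} = p_{ij} q_j / q_i$ for $i \notin A \cup B$ and, on $A$ (where $q$ vanishes), the normalization $\tilde{p}_{ij} = p_{ij} q_j / \sum_l p_{il} q_l$, together with a given distribution $\mu_R$ of the last hitting node on $A$; these formulas and the function name are our assumptions for illustration.

```python
import numpy as np

def reactive_visits(P, q, A, B, mu_R):
    """Average number of visits theta_tilde(i) by the reactive trajectories.

    Uses p_tilde_ij = p_ij q_j / q_i off A (and, on A, normalized by
    sum_l p_il q_l, since q vanishes on A) and solves the occupation-count
    system theta_tilde = mu_R + Ptilde^T theta_tilde over V \\ B, where
    mu_R is the distribution of the last hitting node on A.
    """
    keep = [i for i in range(P.shape[0]) if i not in set(B)]
    m = len(keep)
    Pt = np.zeros((m, m))
    for a, i in enumerate(keep):
        norm = np.dot(P[i], q) if i in set(A) else q[i]
        if norm > 0.0:
            for b, j in enumerate(keep):
                Pt[a, b] = P[i, j] * q[j] / norm
    theta = np.linalg.solve(np.eye(m) - Pt.T, mu_R)
    return theta, keep
```

Transitions into $B$ are dropped from the restricted matrix, which makes the chain absorbing and the linear system solvable.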
For each edge , , , we define the reactive probability flux as
and set it to zero for other edges in . In analogy to (LABEL:j-bar-path), we have the ensemble average representation
where is a first passage path and denotes the corresponding ensemble average. Applying Proposition LABEL:prop-5, we can obtain
Let be the reactive probability flux defined in (LABEL:J-r-flux) and be given in (LABEL:theta-tilde). We have
Proof. It follows directly by applying (LABEL:J-r-flux), Proposition LABEL:prop-5, and the fact that row sums of matrix are one.
2.4 First passage path ensemble
Based on the previous analysis of the nonreactive and reactive trajectories, we study the whole first passage path ensemble in this subsection.
Given , we define as the mean first hitting time of set for the discrete-time Markov jump process starting from , i.e. . Using Markovianity, we can write it as
It is also well known that satisfies the equations
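These equations amount to one linear solve. A minimal sketch (names are illustrative), assuming a row-stochastic NumPy array `P` and target set $B$, with $m = 0$ on $B$ and $m_i = 1 + \sum_j p_{ij} m_j$ elsewhere:

```python
import numpy as np

def mean_hitting_time(P, B):
    """Mean first hitting time m(i) of the set B for a discrete-time chain.

    Solves m_i = 1 + sum_j p_ij m_j for i outside B, with m = 0 on B;
    in matrix form (I - P_CC) m_C = 1, where C = V \\ B.
    """
    n = P.shape[0]
    C = [i for i in range(n) if i not in set(B)]
    m = np.zeros(n)
    M = np.eye(len(C)) - P[np.ix_(C, C)]
    m[C] = np.linalg.solve(M, np.ones(len(C)))
    return m
```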
Accordingly, if the probability distribution of the initial state on set is , the average length of the first passage path is then given by . Also, let be the average number of times that node has been visited by the first passage path with initial distribution , i.e.
In analogy to Proposition LABEL:prop-2 and Proposition LABEL:prop-5, we know it satisfies
Furthermore, taking into account the definitions of , and in (LABEL:theta-bar), (LABEL:theta-bar-prime) and (LABEL:theta-tilde), as well as relations (LABEL:theta-bar-sum), (LABEL:theta-tilde-sum), we can verify the following relations
In fact, the following explicit relations hold.
For any ,
Proof. Clearly if . Hence we only need to consider the case when . Using the Markov property, we can compute the probability from the first passage ensemble. Specifically, let be a first passage path. Conditioning on where , the event is actually equivalent to . Therefore,
which implies . The expression of follows from (LABEL:f-theta-bar-theta-tilde).
We also introduce the probability flux for edge , which is defined as
As in (LABEL:j-bar-path) and (LABEL:j-tilde-path), we have
The following result is straightforward.
For , we have , or equivalently,
Proof. (LABEL:J-flux) follows directly from the ensemble average representations of fluxes in (LABEL:j-flux-path), (LABEL:j-bar-path), (LABEL:j-tilde-path).
3 Analysis of the first passage path ensemble: continuous-time jump processes
In this section, we turn to study the first passage path ensemble of continuous-time jump processes with general waiting time distributions. After introducing the process and some notations, we will derive a discrete-time Markov jump process which encodes the dynamical information of the original continuous-time process. Applying the analysis in Section LABEL:sec-reactive to this discrete-time Markov jump process, we are able to analyze the first passage path ensemble of the continuous-time jump process.
We consider a continuous-time jump process on network . We will follow the derivations in , and some extra notation is needed in order to introduce the process. A related study on continuous-time jump processes can also be found in .
Following Subsection LABEL:subsec-prepare, let us first assume that node and consequently . From node , the system may jump to one of the nodes in after staying at for a certain duration of time. We assume that the jump events at each node are independent of each other, and denote the probability density of the waiting time as , conditioned on the system jumping from to another node . As a probability density function, it satisfies . Let be the probability that the system stays at node for a time longer than before it leaves. Also denote the probability density that the system jumps from node to at time by . Since we have assumed that the jump events to different nodes are independent of each other, it is straightforward to verify that
and they satisfy the equation
We also know that the average time the system will stay at a node is
Let be the probability that the system jumps from node to , regardless of when the jump event occurs, then we have
and , for . From (LABEL:a-b-formula) it follows and, taking the limit in (LABEL:a-b-identity), we can obtain
We also assume that the process will terminate once it reaches one of the sink nodes. Correspondingly, for , we set , , and therefore the equality is still satisfied. In other words, is a probability matrix and therefore defines a discrete-time Markov jump process on . As further presented in Appendix LABEL:appsec-2, this discrete-time Markov jump process encodes the dynamical information of the original continuous-time process and will be useful in Subsection LABEL:subsec-continuous-analysis.
For general probability densities , numerical integration is needed in order to compute the transition probabilities from (LABEL:a-b-formula) and (LABEL:p-b-formula). In the following, we discuss three special cases [10, 32] in which analytical expressions of can be obtained (see Figure LABEL:fig-dist-cmp).
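For instance, the required integrals can be evaluated with standard quadrature. The sketch below considers a single node and takes the waiting-time densities and tail probabilities along its outgoing edges as callables (the function and argument names are illustrative assumptions); it returns the probability that each edge fires first, together with the mean holding time at the node.

```python
import numpy as np
from scipy.integrate import quad

def splitting_probabilities(densities, survivals):
    """Competing independent waiting times along the outgoing edges of a node.

    densities[j](t): density rho_j(t) of the waiting time on edge j
    survivals[j](t): tail probability P(T_j > t) of the same waiting time
    Returns (p, tau): p[j] = int_0^inf rho_j(t) prod_{l != j} P(T_l > t) dt,
    the probability that edge j fires first, and the mean holding time
    tau = int_0^inf prod_l P(T_l > t) dt.
    """
    m = len(densities)
    p = np.zeros(m)
    for j in range(m):
        def integrand(t, j=j):
            val = densities[j](t)
            for l in range(m):
                if l != j:
                    val *= survivals[l](t)  # all other edges still waiting
            return val
        p[j], _ = quad(integrand, 0.0, np.inf)
    tau, _ = quad(lambda t: np.prod([s(t) for s in survivals]), 0.0, np.inf)
    return p, tau
```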
Exponential distributions. The probability densities are
for some , where . It corresponds to the continuous-time Markov jump process and we can define the generator matrix of the process whose entries are
for , and , , when . From (LABEL:a-b-formula), (LABEL:p-b-formula) we can obtain that, for ,
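In the exponential case the integrals reduce to closed forms. A sketch (the function name is our own) assuming the rates are stored in a matrix `K` with entries $k_{ij}$ and zero diagonal, where rows with zero total rate are treated as sink nodes:

```python
import numpy as np

def embedded_chain(K):
    """Embedded jump chain and mean holding times for exponential waiting times.

    K[i, j] >= 0 is the jump rate k_ij along edge (i, j), with K[i, i] = 0.
    With independent exponential clocks, edge (i, j) fires first with
    probability k_ij / sum_l k_il, and the mean holding time at node i is
    1 / sum_l k_il.  Rows with zero total rate are treated as sinks.
    """
    total = K.sum(axis=1)
    P = np.zeros_like(K, dtype=float)
    tau = np.full(K.shape[0], np.inf)
    for i, r in enumerate(total):
        if r > 0.0:
            P[i] = K[i] / r
            tau[i] = 1.0 / r
    return P, tau
```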
Weibull distributions. In this case, we assume
for some and , where . From (LABEL:a-b-formula), (LABEL:p-b-formula) we can obtain that, for ,