
# On the degree distribution of horizontal visibility graphs associated to Markov processes and dynamical systems: diagrammatic and variational approaches

Lucas Lacasa
School of Mathematical Sciences, Queen Mary University of London, Mile End, E1 4NS London, UK
June 29, 2019
###### Abstract

Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical data analysis and signal processing and (ii) characterising classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signal's variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As test cases we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.


## I Introduction

The Horizontal visibility algorithm seminalPRE () is a mapping by which an ordered set of $N$ real numbers $\{x_1, x_2, \dots, x_N\}$ maps into a graph of $N$ nodes with adjacency matrix $A$. Nodes $i$ and $j$ are connected through an undirected edge ($A_{ij} = A_{ji} = 1$) if $x_i$ and $x_j$ have so-called horizontal visibility, that is, if every intermediate datum $x_q$ follows

 x_q < \inf(x_i, x_j), \quad \forall q: i < q < j

The set of graphs spanned by this mapping are called Horizontal Visibility Graphs (HVGs). These are noncrossing outerplanar graphs with a Hamiltonian path Gutin (); prodinger (), subgraphs of a more general mapping pnas () that have been recently used in the context of time series analysis and signal processing review () (see figure 1 for an illustration). The methodology proceeds by analysing the topological properties of the resulting graph and, according to that information, characterising the structure of the series and its underlying dynamics. Periodic dynamics are retrieved from the mean degree of the associated graphs, and some recent applications include the description of correlated stochastic and low-dimensional chaotic series toral (), processes that seem to cluster as HVGs with different exponential degree distributions (we recall that the degree distribution $P(k)$ describes the probability of a node chosen at random to have degree $k$). The canonical routes to chaos (Feigenbaum, quasiperiodic, and Pomeau-Manneville scenarios) have also been described in graph theoretical terms chaos (); quasi (); intermitencia (), and in particular sensitivity to initial conditions has been related, via the Pesin identity, to the Shannon and the block entropies of the degree distribution of the associated graphs. Both the horizontal and the original version of the mapping are currently extensively used for data analysis purposes (as feature extraction algorithms for feature-based classification) in several disciplines, such as biomedicine bio () or geophysics geo () (see review () for a recent review).
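The mapping itself is straightforward to implement. As an illustration (our sketch, not code from the original paper; the function name is ours), the following Python function builds the HVG edge set of a finite series by scanning the visibility criterion directly:

```python
def horizontal_visibility_graph(series):
    """Return the HVG edge set of an ordered series: nodes i < j are
    linked iff every intermediate datum is smaller than min(x_i, x_j)."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))                  # neighbours always see each other
        inter_max = series[i + 1]              # running max of intermediate data
        for j in range(i + 2, n):
            if inter_max < min(series[i], series[j]):
                edges.add((i, j))
            inter_max = max(inter_max, series[j])
            if inter_max >= series[i]:
                break                          # x_i cannot see past a larger datum
    return edges
```

For the series [3, 1, 2, 4] this yields the five edges {(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)}.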

The fingerprint of the arrow of time and time asymmetries in stationary dynamics can be assessed within this framework by redefining the node labelling in the transformation: if in the HVG one distinguishes the ingoing degree $k_{\text{in}}(i)$ of a node $i$ (node $i$ has an ingoing edge from $j$ if $j < i$) from its outgoing degree $k_{\text{out}}(i)$ (node $i$ has an outgoing edge to $j$ if $i < j$), the graph converts into a digraph, the so-called Directed Horizontal Visibility Graph (DHVG) irrev (). In a DHVG the ingoing degree distribution $P_\leftarrow(k)$ (the probability that a node chosen at random has $k$ ingoing links) and the outgoing degree distribution $P_\rightarrow(k)$, defined as the probability that a node chosen at random has $k$ outgoing links, are in general different distributions (see figure 1 for an illustration). Recent works point out that different measures of the distinguishability between $P_\leftarrow(k)$ and $P_\rightarrow(k)$ irrev () (amongst other graph properties irrev2 ()) can distinguish statistically reversible processes (white noise, stochastic processes with linear correlations, measure-preserving chaotic maps) from statistically irreversible stationary processes, such as some dissipative chaotic maps or thermodynamic systems driven out of equilibrium, and are shown to be computationally efficient methods to quantify such asymmetries.
In summary, it appears that the degree distribution of visibility graphs, along with some derived functionals (moments, entropy, etc.), carries much information about the signal and the underlying dynamics.

Despite these recent applications, the majority of results using these transformations are numerical and/or heuristic. In this work we focus on the analytical properties of the degree distribution $P(k)$ (and $P_\rightarrow(k)$), when the HVG (or DHVG) is associated to some important classes of dynamical processes. The number of closed analytical results on these degree distributions is so far scarce, most of them having been found for HVGs which are nontrivial graph theoretical fixed points of some renormalisation group transformation, associated with critical dynamics generated at accumulation points of the canonical routes to chaos chaos (); quasi (); intermitencia (). Almost no exact results exist for other nonperiodic dynamical processes, with the exception of uncorrelated random processes seminalPRE (); irrev (). Here we build on previous results and explore in section II how to compute these degree distributions for any given dynamical process with well defined invariant measure. To this end, we propose a general diagrammatic theory similar in spirit to Feynman's approach in quantum theory. Each degree probability can be computed as a 'perturbative' expansion, where the 'coupling constant' is the number of hidden variables $\alpha$, and for each $k$ a different number of diagrams must be summed up (corrections to the free field). For each $k$, the free field ($\alpha = 0$) can be summed up exactly, and diagrams of order $\alpha$ yield corrections to the probability amplitude that decrease with $\alpha$. In section III we show that the results on uncorrelated processes (for both HVG and DHVG) represent a very special case in this formalism, where all orders of the theory can be summed up exactly, as the joint distributions completely factorise. We then address in section IV the theory for dynamical processes that fulfil the Markov property and give explicit formulae for arbitrary diagrams.
As case studies we focus on stochastic stationary Markovian processes (Ornstein-Uhlenbeck) in section V and on one-dimensional deterministic maps (both chaotic and quasiperiodic) in section VI. For a few terms of the distribution, the diagram expansions can be summed up exactly (up to all orders), but in the general case a convergent perturbative approach should be followed to calculate each degree probability up to arbitrary precision. In the particular case of chaotic maps, the existence of forbidden patterns forbidden () drastically decreases the number of allowed diagrams to a finite number, up to a given order, speeding up the convergence of the perturbative analysis. Then, in section VII we present a general variational approach, by which we can derive analytical results for the entire distribution provided some entropic optimisation hypothesis holds. We show that the entire distributions of chaotic and stochastic Markovian (Ornstein-Uhlenbeck) processes are well approximated if we assume the graphs associated to these processes are maximally entropic. In section VIII we conclude.

## II A general diagrammatic formalism for degree distributions

Consider a stationary dynamical process with an underlying invariant density $f(x)$, $x \in [a, b]$, where $a$ and $b$ can be either finite -finite support- or divergent -unbounded support-. $f(x)$ is just a probability density for stationary stochastic processes, or an invariant measure for dynamical systems. Consider also a series of variables $\{x_0, x_1, \dots, x_n\}$ extracted from this process, which can be either a realisation of a stochastic process with underlying density $f(x)$, or a trajectory of some deterministic dynamical system with invariant measure $f(x)$. In both situations, each time series realisation of $n+1$ variables has a joint probability $f(x_0, x_1, \dots, x_n)$ (we may call this the propagator). For each realisation of the dynamical process, this ordered set will have an associated HVG/DHVG with a given topology, and in particular with a given degree distribution. As already acknowledged, previous numerical research suggests that such a distribution encapsulates and compresses much of the series structure and variability. Consequently, is there any general approach to derive such distributions in a constructive way? The answer is positive in the limit of large $n$ ($n \to \infty$), that is, when we consider bi-infinite series and their associated graphs (note, nonetheless, that finite size effects decrease fast with $n$, and therefore whereas the theory can only be built in the asymptotic limit, finite size systems converge to the asymptotic solution very fast seminalPRE ()).
In what follows we present such a constructive approach. We recall that each datum $x_i$ in the ordered data set is associated with a node with label $i$ in the HVG/DHVG. With a little abuse of language, from now on we will use $x_i$ to label both the datum and the node (and we will refer to it as a variable), although we will make clear when necessary that they represent different mathematical objects.

Consider a datum (node) chosen at random from the bi-infinite sequence, which we label $x_0$ without loss of generality. Calculating the degree distribution $P(k)$ of the HVG (or the outgoing degree distribution $P_\rightarrow(k)$ of the DHVG) is equivalent to calculating the probability that node $x_0$ has degree $k$ (or outdegree $k$), respectively. For each $k$, a different number of variable configurations (relative positions of data at the right-hand side and left-hand side of $x_0$) are allowed. Each of these configurations can in turn be schematised as a different diagram and will contribute with a different correction to the total probability, as will be shown.
For illustrative purposes, consider $P(2)$ of an HVG (as HVGs are undirected and connected, by construction $k \ge 2$). The only configuration that allows $x_0$ to have degree $k = 2$ requires, by construction, that the variables on the left and right hand side of $x_0$ are larger than $x_0$. Label these bounding variables $x_{-1}$ and $x_1$, respectively. Accordingly, this unique configuration (diagram) has an associated contribution to $P(2)$.

Incidentally, note at this point that, by construction, HVGs are outerplanar and have a Hamiltonian path, and therefore $P(2)$ is directly related to the probability that a node chosen at random forms a 3-clique (triplet). Therefore the expression for $P(2)$ also yields the global clustering coefficient of an HVG.

A similar calculation can be made for DHVGs. Indeed, $k \ge 1$ (the DHVG is connected), and computing $P_\rightarrow(1)$ is equivalent to calculating the probability that $x_1 \ge x_0$.

For $k > 2$ (or $k > 1$ in the DHVG case) an arbitrarily large number of different contributions should be taken into account. Consider first the HVG: if $x_0$ has degree $k$, then besides the two bounding variables, $k-2$ (visible) inner variables should be distributed on the right and left hand side of $x_0$. Due to visibility constraints, there are exactly $k-1$ different possible configurations $\{C_i\}_{i=0}^{k-2}$, where the index $i$ determines the number of inner variables on the left-hand side of $x_0$. Accordingly, $C_i$ corresponds to the configuration for which $i$ inner variables are placed at the left-hand side of $x_0$, and $k-2-i$ inner variables are placed at its right-hand side. Each of these possible configurations has an associated probability $p_i \equiv p(C_i)$ that contributes to $P(k)$ such that

 P(k) = \sum_{i=0}^{k-2} p_i. (3)

In the case of a DHVG, there is only one such configuration $C_0$ (all inner variables lie at the right-hand side of $x_0$), such that $P_\rightarrow(k) = p(C_0)$.

Now, among the inner variables, an arbitrary number of so-called hidden variables may appear (see figure 2 for a graphical illustration of this situation). In summary, between each pair of inner variables, an arbitrary (eventually infinite) number of hidden variables may take place. For each $k$, the different combinations of inner and hidden variables yield different contributions. As we will show, in some cases it is very convenient to represent such a set of contributions as a series expansion

 P(k) = \sum_{\alpha} P^{(\alpha)}(k), (4)

where $\alpha$ denotes the number of hidden variables. At each order $\alpha$, a set of different configurations can take place. Each of these configurations is indeed a different diagram. Following Feynman's approach, the free field theory takes into account the contributions spanned by configurations with no hidden variables ($\alpha = 0$), whereas the interaction field introduces corrections of all orders in the number of hidden variables, which plays here the role of the coupling constant. Accordingly, $P^{(0)}(k)$ accounts for the diagrams with no hidden variables, $P^{(1)}(k)$ accounts for all the diagrams with one hidden variable, etc.
The same formalism can be extended to DHVGs, where for concreteness we focus on the out-degree distribution

 P_\rightarrow(k) = \sum_{\alpha} P^{(\alpha)}_\rightarrow(k). (5)

In figures 3 and 4 we represent some contributing diagrams up to third order corrections for both HVG and DHVGs.

Two general questions arise:
(i) Can we explicitly calculate the contribution of any particular diagram contributing to $P(k)$ or $P_\rightarrow(k)$?
(ii) What can we say about the convergence properties of the series in (4, 5)?

Regarding (i), computing closed form solutions for the entire degree distributions $P(k)$ and $P_\rightarrow(k)$ is a hopeless endeavour in the general case, mainly because the $n$-point propagators $f(x_0, x_1, \dots, x_n)$ (where $n$ is arbitrarily large) cannot be calculated explicitly. However, in several particular cases this propagator factorises and therefore some theory can still be constructed. This is for instance the case of an uncorrelated random process with an underlying probability density $f(x)$. For this large class of stochastic processes, the propagator completely factorises:

 f(x_0, x_1, x_2, \dots, x_n) = f(x_0) f(x_1) f(x_2) \cdots f(x_n).

In the next section we show that this simplification is the key aspect that permits us to calculate $P(k)$ and $P_\rightarrow(k)$ in closed form.

In relation to (ii), note that all expressions of the form (4) or (5) should be convergent series, as they are probabilities. In particular, a diagram in the correction to the free field of order $\alpha$ has $\alpha$ hidden variables; such a diagram considers the possibility that the seed $x_0$ and a bounding variable are separated by a number of intermediate variables that grows with $\alpha$. Now, in a previous work seminalPRE () it was shown that, for uncorrelated random processes, the probability that two variables separated by $n$ intermediate variables are connected decreases fast with $n$. For a constant value of the degree $k$, this means that the corrections introduced by the diagrams of order $\alpha$ decrease accordingly, hence the perturbation series is convergent in the uncorrelated case. For other processes this has not been rigorously proved; however, we will see that processes with fast-decaying correlations are likely to also have associated converging perturbation series.

## III Uncorrelated processes: exact results for the asymptotic distribution

When the dynamical process under study is a random uncorrelated one, we are in a special case where we don't actually need explicit diagrams to compute the entire degree distribution of either HVG or DHVG, and therefore we don't need to follow a perturbative approach such as eqs. (4) and (5). We first recall seminalPRE () the result for HVGs, and further extend this to DHVGs.

### III.1 HVG

Theorem 1.
Let $\{x_t\}_{t=-\infty}^{\infty}$ be a bi-infinite real-valued time series of independent and identically distributed random variables with probability density $f(x)$, with $x \in [0, 1]$ without loss of generality, and consider its associated Horizontal Visibility Graph $G$. Then,

 P(k) = \frac{1}{3}\left(\frac{2}{3}\right)^{k-2}, \quad k = 2, 3, \dots, \quad \forall f(x) (6)

Sketch of the proof.
The proof proceeds by induction on $k$. Let us begin by computing some easy terms:

 P(k=2) = \text{Prob}(x_{-1}, x_1 \ge x_0) = \int_0^1 f(x_0)\, dx_0 \int_{x_0}^1 f(x_1)\, dx_1 \int_{x_0}^1 f(x_{-1})\, dx_{-1}. (7)

Now, the cumulative probability distribution function of any probability distribution is defined as

 F(x) = \int_0^x f(x')\, dx', (8)

where $F(0) = 0$ and $F(1) = 1$. In particular, the following relation between $f$ and $F$ holds:

 f(x) F^{n-1}(x) = \frac{1}{n}\frac{dF^n(x)}{dx}. (9)

We can accordingly rewrite and compute equation 7 as

 P(k=2) = \int_0^1 f(x_0)\left[1 - F(x_0)\right]^2 dx_0 = \frac{1}{3}, \quad \forall f(x) (10)

In the case $k = 3$, two different configurations arise: $C_0$, in which $x_0$ has two bounding variables ($x_{-1}$ and $x_2$, respectively) and a right-hand side inner variable ($x_1$), and $C_1$, identical but with the inner variable placed at the left-hand side of the seed:

 P(k=3) = p(C_0) + p(C_1) \equiv p_0 + p_1.

Notice at this point that an arbitrary number of hidden variables can eventually be located between the inner data and the bounding variables, and this fact needs to be taken into account in the probability calculation. The geometrical restriction on the hidden variables is that they must be smaller than the inner variable ($x_1$ for $C_0$ and $x_{-1}$ for $C_1$), since otherwise they would be visible from the seed. Then,

 p_0 = \text{Prob}\left((x_{-1}, x_2 \ge x_0) \cap (x_1 < x_0)\right). (11)

Now, we need to consider every possible hidden variable configuration. In the particular case of an uncorrelated process, all these contributions can be summed up exactly:

 p_0 = \int_0^1 f(x_0)\, dx_0 \int_{x_0}^1 f(x_{-1})\, dx_{-1} \int_{x_0}^1 f(x_2)\, dx_2 \int_0^{x_0} f(x_1)\, dx_1 + \sum_{r=1}^{\infty} \int_0^1 f(x_0)\, dx_0 \int_{x_0}^1 f(x_{-1})\, dx_{-1} \int_{x_0}^1 f(x_2)\, dx_2 \int_0^{x_0} f(x_1)\, dx_1 \prod_{j=1}^{r} \int_0^{x_1} f(n_j)\, dn_j,

where the first term corresponds to the contribution of a configuration with no hidden variables and the second sums up the contributions of $r$ hidden variables. Making use of the properties of the cumulative distribution we arrive at

 p_0 = \int_0^1 f(x_0)\, dx_0 \int_{x_0}^1 f(x_{-1})\, dx_{-1} \int_{x_0}^1 f(x_2)\, dx_2 \int_0^{x_0} \frac{f(x_1)}{1 - F(x_1)}\, dx_1, (12)

where we have also made use of the sum of a geometric series. We find an identical result for $p_1$, since the last integral in equation (12) only depends on $x_0$, and consequently the configuration provided by $C_1$ is symmetrical to the one provided by $C_0$. We finally have

 P(k=3) = 2p_0 = -2\int_0^1 f(x_0)\left(1 - F(x_0)\right)^2 \ln\left(1 - F(x_0)\right) dx_0 = \frac{2}{9}, \quad \forall f(x), (13)

where the last calculation also involves the change of variable $u = F(x_0)$.
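Since equations (10) and (13) are claimed to hold for every density, they can be spot-checked numerically. The sketch below (our illustration, assuming a support rescaled to [0, 1]) evaluates both integrals with a simple midpoint rule for a uniform and a triangular density:

```python
import math

def midpoint(g, n=50000):
    """Midpoint-rule quadrature of g on [0, 1]."""
    h = 1.0 / n
    return h * sum(g((i + 0.5) * h) for i in range(n))

# (f, F) pairs: uniform density, and a triangular density f = 2x with F = x^2
densities = [(lambda x: 1.0, lambda x: x),
             (lambda x: 2.0 * x, lambda x: x * x)]

for f, F in densities:
    p2 = midpoint(lambda x: f(x) * (1.0 - F(x)) ** 2)
    p3 = midpoint(lambda x: -2.0 * f(x) * (1.0 - F(x)) ** 2
                  * math.log(1.0 - F(x)))
    print(round(p2, 4), round(p3, 4))   # 1/3 and 2/9 for both densities
```

Both densities give the same values, illustrating the independence of the result with respect to $f(x)$.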

Hitherto, we can deduce that a given configuration contributes to $P(k)$ with a product of integrals according to the following rules:

• The seed variable [S] provides a contribution of $\int_0^1 f(x_0)\, dx_0$.

• Each bounding variable [B] provides a contribution of $\int_{x_0}^1 f(x_b)\, dx_b = 1 - F(x_0)$.

• An inner variable [I] provides a contribution $\int_0^{x_0} \frac{f(x_i)}{1 - F(x_i)}\, dx_i$.

These 'Feynman rules' allow us to schematise in a formal way the probability associated to each configuration. For instance, in the case $k = 2$, $P(2)$ has a single contribution represented by the formal diagram $[S][B]^2$, while for $k = 3$, $P(3) = p_0 + p_1$, where $p_0$'s diagram is $[S][B]^2[I]$ and $p_1$'s is $[I][S][B]^2$. It seems quite straightforward to derive a general expression for $P(k)$, just by applying the preceding rules for the contribution of each configuration. However, there is still a subtle point to address that becomes evident for the case $k = 4$ (see figure 2). While in this case $C_1$ leads to essentially the same expression as both configurations of $k = 3$ (and in this sense one only needs to apply the preceding rules to derive its contribution), $C_0$ and $C_2$ are geometrically different configurations: they are formed by a seed, two bounding and two concatenated inner variables, and concatenated variables lead to concatenated integrals. For instance, applying the same formalism as for $k = 3$, one comes to the conclusion that for $C_0$,

 p_0 = \int_0^1 f(x_0)\, dx_0 \int_0^{x_0} \frac{f(x_1)\, dx_1}{1 - F(x_1)} \int_{x_1}^{x_0} \frac{f(x_2)\, dx_2}{1 - F(x_2)} \int_{x_0}^1 f(x_3)\, dx_3 \int_{x_0}^1 f(x_{-1})\, dx_{-1}. (14)

While for the case $k = 3$ every integral only depended on $x_0$ (and consequently we could integrate every term independently until reaching the dependence on $x_0$), having two concatenated inner variables in this configuration generates a dependence between the integrals and hence on the probabilities. For this reason, the configurations are not equiprobable in the general case, and thus will not provide the same contribution to the probability ($k = 3$ was an exception for symmetry reasons). In order to weight appropriately the effect of these concatenated contributions, we can make use of the definition of $C_i$. Since $P(k)$ is formed by contributions labelled $C_0, C_1, \dots, C_{k-2}$, where the index denotes the number of inner data present at the left-hand side of the seed, we deduce that in general the inner variables have the following effective contribution to $P(k)$:

• $C_0$ has $k-2$ concatenated integrals (right-hand side of the seed).

• $C_1$ has $k-3$ concatenated integrals (right-hand side of the seed) and an independent inner data contribution (left-hand side of the seed).

• $C_2$ has $k-4$ concatenated integrals (right-hand side of the seed) and another 2 concatenated integrals (left-hand side of the seed).

• $C_{k-2}$ has $k-2$ concatenated integrals (left-hand side of the seed).

Observe that $C_i$ is symmetric to $C_{k-2-i}$ with respect to the seed.

Including this modification in the Feynman rules, we are now ready to calculate a general expression for $P(k)$. Formally,

 P(k) = \sum_{j=0}^{k-2} [S][B]^2 [I]_j [I]_{k-2-j}, (15)

where the sum extends over the $k-1$ configurations, the superindex denotes exponentiation and the subindex denotes concatenation (this latter expression can be straightforwardly proved by induction on the number of inner variables). The concatenation of $n$ inner variable integrals reads

 [I]_n = \int_0^{x_0} \frac{f(x_1)\, dx_1}{1 - F(x_1)} \prod_{j=1}^{n-1} \int_{x_j}^{x_0} \frac{f(x_{j+1})\, dx_{j+1}}{1 - F(x_{j+1})}, (16)

which can be proved by induction (using the properties of the cumulative distribution and appropriate changes of variables) to reduce to

 [I]_n = \frac{(-1)^n}{n!}\left[\ln\left(1 - F(x_0)\right)\right]^n. (17)

According to the formal solution (15) and to equation (17), we finally have

 P(k) = \sum_{j=0}^{k-2} \frac{(-1)^{k-2}}{j!(k-2-j)!} \int_0^1 f(x_0)\left[1 - F(x_0)\right]^2 \left[\ln\left(1 - F(x_0)\right)\right]^{k-2} dx_0 = 3^{1-k} \sum_{j=0}^{k-2} \frac{(k-2)!}{j!(k-2-j)!} = \frac{1}{3}\left(\frac{2}{3}\right)^{k-2}, \quad \forall f(x) \quad \square

Surprisingly, we can conclude that for every probability density $f(x)$, the degree distribution of the associated horizontal visibility graph has the same exponential form (note that this result parallels the one for the spectrum of white noise, which is universally flat independently of the underlying probability density of the noise). This is an exact result for sufficiently long uncorrelated random processes (to avoid finite-size or border effects); that is, we consider bi-infinite series and the calculations address the asymptotic shape of the degree distribution. However, it should be noted that numerical results for finite size series converge fast to the asymptotic solution.
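This fast convergence can be illustrated with a quick Monte Carlo experiment (our sketch; the quadratic visibility scan is naive but adequate here): the empirical degree distribution of an i.i.d. uniform series of a few thousand data already lies close to $\frac{1}{3}(\frac{2}{3})^{k-2}$.

```python
import random
from collections import Counter

def hvg_degrees(series):
    """Degree sequence of the HVG via a direct visibility scan."""
    n = len(series)
    deg = [0] * n
    for i in range(n - 1):
        inter_max = float("-inf")          # max of data strictly between i and j
        for j in range(i + 1, n):
            if inter_max < min(series[i], series[j]):
                deg[i] += 1
                deg[j] += 1
            inter_max = max(inter_max, series[j])
            if inter_max >= series[i]:
                break                      # nothing further is visible from i
    return deg

random.seed(7)
x = [random.random() for _ in range(5000)]
hist = Counter(hvg_degrees(x))
P = {k: hist[k] / len(x) for k in sorted(hist)}
for k in (2, 3, 4, 5):
    print(k, round(P[k], 3), round((1 / 3) * (2 / 3) ** (k - 2), 3))
```

The two printed columns (empirical versus theoretical probabilities) agree to within sampling fluctuations.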

### III.2 DHVG

Theorem 2.
Let $\{x_t\}_{t=-\infty}^{\infty}$ be a bi-infinite sequence of independent and identically distributed random variables extracted from a continuous probability density $f(x)$. Then, the out-degree distribution of its associated directed horizontal visibility graph is

 P_\rightarrow(k) = \left(\frac{1}{2}\right)^k, \quad k = 1, 2, 3, \dots, \quad \forall f(x) (18)

Sketch of the proof.
The proof follows a similar path as for the HVG, such that instead of equation (15) one gets, for a DHVG,

 P_\rightarrow(k) = [S][B][I]_{k-1} = \frac{(-1)^{k-1}}{(k-1)!} \int_0^1 f(x_0)\left(1 - F(x_0)\right)\left[\ln\left(1 - F(x_0)\right)\right]^{k-1} dx_0 = \left(\frac{1}{2}\right)^k, \quad \forall f(x)

where we have used the change of variables $u = F(x_0)$ and the formal solution for the concatenation of inner variable integrals (eq. 17).
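As with the undirected case, Theorem 2 is easy to probe numerically. The following self-contained sketch (our code, not from the paper) estimates the out-degree distribution of an i.i.d. series and compares it with $(1/2)^k$:

```python
import random
from collections import Counter

def dhvg_out_degrees(series):
    """Out-degree sequence of the DHVG: k_out(i) counts the data to the
    right of x_i that x_i sees horizontally."""
    n = len(series)
    k_out = [0] * n
    for i in range(n - 1):
        inter_max = float("-inf")
        for j in range(i + 1, n):
            if inter_max < min(series[i], series[j]):
                k_out[i] += 1
            inter_max = max(inter_max, series[j])
            if inter_max >= series[i]:
                break          # a larger datum blocks all further visibility
    return k_out

random.seed(7)
x = [random.random() for _ in range(5000)]
hist = Counter(dhvg_out_degrees(x))
P_out = {k: hist[k] / len(x) for k in sorted(hist)}
for k in (1, 2, 3, 4):
    print(k, round(P_out[k], 3), round(0.5 ** k, 3))
```

The agreement holds regardless of the marginal density used to generate the series, as the theorem states.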

When the variables are correlated, the propagator does not factorise and clean, closed form solutions are more difficult to obtain. In what follows we present a general theory for correlated Markovian dynamics. From now on, for concreteness, we focus on the DHVG, although a similar formalism can be extended to HVGs.

## IV Markovian dynamics: a constructive solution

The second 'easiest' class of dynamical processes are Markovian processes with an integrable invariant measure $f(x)$. For these systems the joint probabilities (propagators) factorise into products of conditional probabilities:

 f(x_0, x_1, x_2, \dots, x_n) = f(x_0) f(x_1|x_0) f(x_2|x_1) \cdots f(x_n|x_{n-1})

Examples include the Ornstein-Uhlenbeck process kampen () as an example of a stationary stochastic process, the fully chaotic logistic map as an example of an ergodic chaotic map with smooth invariant measure, or an irrational rotation as an example of a zero entropy (nonchaotic), ergodic deterministic process. In what follows we show how to treat this special (although very relevant) class of processes, we develop a formal theory to calculate $P_\rightarrow(k)$ in terms of diagrammatic expansions, and we apply it to compute the distributions in the aforementioned examples.

For the sake of exposition, let us work out a case by case analysis. For the simplest case $k = 1$, the only possible diagram (see figure 4) describes the situation where datum $x_1$ bounds ($x_1 \ge x_0$), and therefore

 P_\rightarrow(1) = \int_a^b f(x_0)\, dx_0 \int_{x_0}^b f(x_1|x_0)\, dx_1 (19)

Second, let us consider $k = 2$. In this case we need to take into account the situations where datum $x_0$ sees $x_1$ and $x_2$, that is: $x_2$ is a bounding variable ($x_2 \ge x_0$) and an arbitrary number of hidden variables can be placed between $x_1$ and $x_2$. There are accordingly an infinite number of diagrams, which can be labelled as $[z_i]$, where $i$ determines the number of hidden variables in each diagram. Accordingly,

 P_\rightarrow(2) = \sum_{i=0}^{\infty} [z_i] \equiv \sum_{\alpha=0}^{\infty} P^{(\alpha)}_\rightarrow(2) (20)

where in this case each correcting order in the perturbative expansion only includes a single diagram, which reads

 [z_0] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \int_{x_0}^b f(x_2|x_1)\, dx_2
 [z_1] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \int_a^{x_1} f(z_1|x_1)\, dz_1 \int_{x_0}^b f(x_2|z_1)\, dx_2
 [z_2] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \int_a^{x_1} f(z_1|x_1)\, dz_1 \int_a^{x_1} f(z_2|z_1)\, dz_2 \int_{x_0}^b f(x_2|z_2)\, dx_2

and, in general, for $i \ge 2$,

 [z_i] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \int_a^{x_1} f(z_1|x_1)\, dz_1 \left[\prod_{p=2}^{i} \int_a^{x_1} f(z_p|z_{p-1})\, dz_p\right] \int_{x_0}^b f(x_2|z_i)\, dx_2. (21)

The third component is slightly more involved, as arbitrarily many hidden variables can appear between $x_1$ and $x_2$ and between $x_2$ and $x_3$. Each diagram accounts for a particular combination with $i$ hidden variables between $x_1$ and $x_2$, and $j$ hidden variables between $x_2$ and $x_3$, with the restriction that $x_3$ is a bounding variable and $x_2$ cannot be a bounding variable. There are an infinite number of such contributions, diagrams that can be labelled as $[z^{(1)}_i \star z^{(2)}_j]$:

 P_\rightarrow(3) = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} [z^{(1)}_i \star z^{(2)}_j] \equiv \sum_{\alpha=0}^{\infty} P^{(\alpha)}_\rightarrow(3) (22)

where the diagram with indices $i$ and $j$ yields a correction

 [z^{(1)}_i \star z^{(2)}_j] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \int_a^{x_1} f(z^{(1)}_1|x_1)\, dz^{(1)}_1 \left[\prod_{p=2}^{i} \int_a^{x_1} f(z^{(1)}_p|z^{(1)}_{p-1})\, dz^{(1)}_p\right] \cdot \int_{x_1}^{x_0} f(x_2|z^{(1)}_i)\, dx_2 \int_a^{x_2} f(z^{(2)}_1|x_2)\, dz^{(2)}_1 \left[\prod_{p=2}^{j} \int_a^{x_2} f(z^{(2)}_p|z^{(2)}_{p-1})\, dz^{(2)}_p\right] \int_{x_0}^b f(x_3|z^{(2)}_j)\, dx_3

As can be seen, in this case more than one diagram contributes to each order in the perturbation expansion. The number of diagrams that contribute to the correction of order $\alpha$ in $P_\rightarrow(k)$ is the number of ways of distributing the $\alpha$ hidden variables among the $k-1$ gaps between consecutive visible variables,

 N(\alpha, k) = \binom{k + \alpha - 2}{\alpha}. (23)
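The count in equation (23) can be confirmed by brute-force enumeration of the weak compositions of $\alpha$ into $k-1$ gaps (our check, not from the paper):

```python
from itertools import product
from math import comb

def n_diagrams(alpha, k):
    """Brute-force count of diagrams of order alpha for out-degree k:
    weak compositions of alpha hidden variables into the k - 1 gaps."""
    return sum(1 for c in product(range(alpha + 1), repeat=k - 1)
               if sum(c) == alpha)

# compare with the multiset coefficient C(k + alpha - 2, alpha)
for k in range(2, 6):
    print(k, [n_diagrams(a, k) for a in range(5)])
```

For instance, for $k = 3$ the code recovers $\alpha + 1$ diagrams at order $\alpha$, matching the double sum in equation (22).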

Although the explicit integral formulae get more and more involved for larger values of the degree $k$, they can be easily expressed diagrammatically. For instance,

 P_\rightarrow(4) = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \sum_{l=0}^{\infty} [z^{(1)}_i \star z^{(2)}_j \star z^{(3)}_l] (24)

describes diagrams with an arbitrary amount of hidden variables located between $x_1$ and $x_2$, between $x_2$ and $x_3$, or between $x_3$ and $x_4$. The general term of this expansion reads:

 [z^{(1)}_i \star z^{(2)}_j \star z^{(3)}_l] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \int_a^{x_1} f(z^{(1)}_1|x_1)\, dz^{(1)}_1 \left[\prod_{p=2}^{i} \int_a^{x_1} f(z^{(1)}_p|z^{(1)}_{p-1})\, dz^{(1)}_p\right] \cdot \int_{x_1}^{x_0} f(x_2|z^{(1)}_i)\, dx_2 \int_a^{x_2} f(z^{(2)}_1|x_2)\, dz^{(2)}_1 \left[\prod_{p=2}^{j} \int_a^{x_2} f(z^{(2)}_p|z^{(2)}_{p-1})\, dz^{(2)}_p\right] \cdot \int_{x_2}^{x_0} f(x_3|z^{(2)}_j)\, dx_3 \int_a^{x_3} f(z^{(3)}_1|x_3)\, dz^{(3)}_1 \left[\prod_{p=2}^{l} \int_a^{x_3} f(z^{(3)}_p|z^{(3)}_{p-1})\, dz^{(3)}_p\right] \cdot \int_{x_0}^b f(x_4|z^{(3)}_l)\, dx_4

Finally, by induction we can prove that in the general case the diagrammatic series reads

 P_\rightarrow(k) = \prod_{\Lambda=1}^{k-1} \left[\sum_{i_\Lambda=0}^{\infty}\right] \bigotimes_{\Lambda=1}^{k-1} \left[z^{(\Lambda)}_{i_\Lambda}\right] \left(\equiv \sum_{\alpha=0}^{\infty} P^{(\alpha)}_\rightarrow(k)\right), (25)

where the general term

 \bigotimes_{\Lambda=1}^{k-1} \left[z^{(\Lambda)}_{i_\Lambda}\right] = \left[z^{(1)}_{i_1} \star z^{(2)}_{i_2} \star \cdots \star z^{(k-1)}_{i_{k-1}}\right]

is a diagram that introduces the following correction in order $\alpha = \sum_{\Lambda=1}^{k-1} i_\Lambda$:

 \bigotimes_{\Lambda=1}^{k-1} \left[z^{(\Lambda)}_{i_\Lambda}\right] = \int_a^b f(x_0)\, dx_0 \int_a^{x_0} f(x_1|x_0)\, dx_1 \prod_{\Lambda=1}^{k-2} \left\{\int_a^{x_\Lambda} f(z^{(\Lambda)}_1|x_\Lambda)\, dz^{(\Lambda)}_1 \left[\prod_{p=2}^{i_\Lambda} \int_a^{x_\Lambda} f(z^{(\Lambda)}_p|z^{(\Lambda)}_{p-1})\, dz^{(\Lambda)}_p\right] \int_{x_\Lambda}^{x_0} f(x_{\Lambda+1}|z^{(\Lambda)}_{i_\Lambda})\, dx_{\Lambda+1}\right\} \cdot \int_a^{x_{k-1}} f(z^{(k-1)}_1|x_{k-1})\, dz^{(k-1)}_1 \left[\prod_{p=2}^{i_{k-1}} \int_a^{x_{k-1}} f(z^{(k-1)}_p|z^{(k-1)}_{p-1})\, dz^{(k-1)}_p\right] \int_{x_0}^b f(x_k|z^{(k-1)}_{i_{k-1}})\, dx_k

Concrete evaluation of each diagram depends on the type of dynamics involved (that is, on the explicit invariant density $f(x)$ and propagator $f(x'|x)$), much in the same vein as the explicit computation of Feynman diagrams depends on the type of field theory involved ramond ().

## V Stochastic Markovian dynamics: the Ornstein-Uhlenbeck process

An Ornstein-Uhlenbeck process kampen () is a stochastic process that describes the velocity of a massive Brownian particle under the influence of friction. It is the only stationary Gaussian Markov process (up to linear transformations). Since it is a Gaussian process, we have

 f(x) = \frac{\exp(-x^2/2)}{\sqrt{2\pi}}

 f(x_2|x_1) = \frac{\exp\left(-(x_2 - K x_1)^2 / 2(1 - K^2)\right)}{\sqrt{2\pi(1 - K^2)}},

where $K = e^{-1/\tau}$ and the correlation function is $C(t) = e^{-t/\tau}$. The variables are Gaussian, therefore $a = -\infty$ and $b = \infty$. Note that in this case we also expect convergent perturbation expansions of the type (5), since the calculation of long distance visibility for short-range correlated processes can be approximated, at large distances, by the uncorrelated case.
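Trajectories of this process can be sampled exactly at unit time steps through the equivalent AR(1) recursion (a standard construction; the code, seed and sizes below are our own illustration):

```python
import math
import random

def ou_series(n, K, seed=0):
    """Exact unit-step discretisation of a stationary OU process:
    x_{t+1} = K x_t + sqrt(1 - K^2) * xi_t, with xi_t ~ N(0, 1).
    The marginal stays N(0, 1) and corr(x_t, x_{t+s}) = K ** s."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]                  # draw x_0 from the stationary law
    s = math.sqrt(1.0 - K * K)
    for _ in range(n - 1):
        x.append(K * x[-1] + s * rng.gauss(0.0, 1.0))
    return x

x = ou_series(20000, math.exp(-1.0))           # correlation time tau = 1
m = sum(x) / len(x)
v = sum((xi - m) ** 2 for xi in x) / len(x)
c1 = sum((x[t] - m) * (x[t + 1] - m) for t in range(len(x) - 1)) / (len(x) * v)
print(round(m, 2), round(v, 2), round(c1, 2))  # near 0, 1, exp(-1)
```

Such sampled trajectories are what the numerical experiments quoted below operate on.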

### V.1 Exact solutions

To begin, consider $P_\rightarrow(1)$. Note that this probability is only spanned by a single diagram (no hidden variables), yielding simply

 P_\rightarrow(1) = \int_{-\infty}^{\infty} dx_0\, f(x_0) \int_{x_0}^{\infty} dx_1\, f(x_1|x_0) = \frac{1}{2},

for any value of $K$.
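This $K$-independence is intuitive: an out-degree of exactly one only requires $x_1 \ge x_0$, an event of probability 1/2 by the symmetry of the stationary bivariate Gaussian. A self-contained Monte Carlo sketch (our code, with an arbitrary choice of $K$):

```python
import math
import random

def out_degrees(series):
    """DHVG out-degrees by direct visibility scan."""
    n = len(series)
    k_out = [0] * n
    for i in range(n - 1):
        inter_max = float("-inf")
        for j in range(i + 1, n):
            if inter_max < min(series[i], series[j]):
                k_out[i] += 1
            inter_max = max(inter_max, series[j])
            if inter_max >= series[i]:
                break
    return k_out

rng = random.Random(3)
K, s = math.exp(-1.0), math.sqrt(1.0 - math.exp(-2.0))
x = [rng.gauss(0.0, 1.0)]
for _ in range(9999):
    x.append(K * x[-1] + s * rng.gauss(0.0, 1.0))

p1 = out_degrees(x).count(1) / len(x)
print(round(p1, 3))   # close to 1/2 for any K
```

Repeating the experiment with other values of $K$ leaves the estimate unchanged, as predicted.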

$P_\rightarrow(2)$ is more involved, as it has corrections to the free field solution at all orders (see figure 4). In the next section we compute a few terms of this diagrammatic expansion. Here we show that a mathematical trick can be used to compute all orders of $P_\rightarrow(2)$ simultaneously, yielding an exact result. The technique is numerical but nonetheless quite sound. We start by rewriting the contribution of each diagram in equation (20) as a recursive relation

 P_\rightarrow(2) = \int_{-\infty}^{\infty} f(x_0) \sum_{p=0}^{\infty} I(p|x_0)\, dx_0,

where $I(p|x_0)$ satisfies

 I(p|x_0) = \int_{-\infty}^{x_0} dx_1\, f(x_1|x_0)\, G_p(x_1, x_1, x_0), (26)

and $G_p$ satisfies:

 G_0(x, y, z) \equiv \int_z^{\infty} f(h|y)\, dh, (27)
 G_p(x, y, z) = \int_{-\infty}^{x} dh\, f(h|y)\, G_{p-1}(x, h, z), \quad p \ge 1. (28)

(These latter recursions can be straightforwardly proved by induction on the number of hidden variables.) The last equation represents a convolution that can once more be formally rewritten as $G_p = T G_{p-1}$, or $G_p = T^p G_0$, with $T$ an integral operator. Accordingly,

 \sum_{p=0}^{\infty} I(p|x_0) = \int_{-\infty}^{x_0} dx_1\, f(x_1|x_0) \sum_{p=0}^{\infty} G_p(x_1, x_1, x_0) \equiv \int_{-\infty}^{x_0} dx_1\, f(x_1|x_0)\, S(x_1, x_1, x_0). (29)

In the last equation the formal sum $S$ is defined as

 S(x, y, z) = \sum_{p=0}^{\infty} G_p(x, y, z) = \sum_{p=0}^{\infty} T^p G_0 = \frac{1}{1 - T} G_0, (30)

where the convergence of the last summation is guaranteed provided the spectral radius of $T$ is smaller than one, that is,

 \lim_{n \to \infty} ||T^n||^{1/n} < 1, (31)

where $||\cdot||$ is the operator norm of $T$. This condition is trivially fulfilled as $f(h|y)$ is a Markov transition probability. Then, equation (30) can be written as $(1 - T)S = G_0$, or more concretely

 S(x, y, z) = G_0(x, y, z) + \int_{-\infty}^{x} dh\, f(h|y)\, S(x, h, z), (32)

which is a Volterra equation of the second kind for $S$. Typical one-dimensional Volterra integral equations can be numerically solved by applying quadrature formulae to approximate the integral operator librodeernesto (). The technique can be easily extended to the case where the integral equation involves more than one variable, as is the case here, using a Simpson-type integration scheme to compute the function $S$. One technical point is that one needs to replace the lower limit $-\infty$ in the integrals by a sufficiently small cutoff $a$, which we have found to be enough for a good convergence of the algorithm. Given a step $\delta$, the recursion relation

 S(a, a + n\delta, z) = G_0(a, a + n\delta, z)
 S(a + k\delta, a + n\delta, z) = G_0(a + k\delta, a + n\delta, z) + \delta \sum_{i=0}^{k-1} f(a + i\delta\,|\,a + n\delta)\, S(a + (k-1)\delta, a + i\delta, z) + O(\delta^2),

where $\delta$ is the step of the algorithm, allows us to compute $S(x_1, x_1, x_0)$ and hence, through equations (29) and (20), the degree probability. We find (for the correlation time considered)

 P_\rightarrow(2) \approx 0.24,

in good agreement with numerical experiments (table 1).
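The quadrature idea is standard for second-kind Volterra equations. As a one-dimensional toy illustration with a known exact solution (our sketch; the three-variable equation (32) is handled analogously on nested grids), the trapezoidal scheme solves $\varphi(x) = g(x) + \int_0^x K(x,t)\,\varphi(t)\,dt$:

```python
import math

def volterra2(g, kernel, x_max, n):
    """Trapezoidal solver for phi(x) = g(x) + int_0^x kernel(x,t) phi(t) dt."""
    h = x_max / n
    xs = [i * h for i in range(n + 1)]
    phi = [g(xs[0])]                            # phi(0) = g(0)
    for k in range(1, n + 1):
        s = 0.5 * kernel(xs[k], xs[0]) * phi[0]
        s += sum(kernel(xs[k], xs[i]) * phi[i] for i in range(1, k))
        phi.append((g(xs[k]) + h * s) / (1.0 - 0.5 * h * kernel(xs[k], xs[k])))
    return xs, phi

# toy check: phi(x) = 1 + int_0^x phi(t) dt has exact solution phi(x) = e^x
xs, phi = volterra2(lambda x: 1.0, lambda x, t: 1.0, 1.0, 200)
print(round(phi[-1], 4))   # close to e
```

At each grid point the implicit trapezoidal weight on $\varphi(x_k)$ is moved to the left-hand side, which is what makes the marching scheme stable at second order in the step.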

### V.2 Perturbative expansions

In general the infinite set of diagrams that contributes to each probability cannot be summed up in closed form, and a perturbative approach must be followed. Here we explore the accuracy and convergence of such a series expansion (eq. 5).

Up to zeroth order (free field theory), we count only diagrams with no hidden variables. Accordingly, the first degree probabilities are

 P(0)↣(1)=∫∞−∞dx0f(x0)∫∞x0dx1f(x1|x0) P(0)↣<