
# The entropy of α-continued fractions: numerical results

Carlo Carminati, Stefano Marmi, Alessandro Profeti, Giulio Tiozzo
28 November, 2009
###### Abstract

We consider the one-parameter family of interval maps arising from generalized continued fraction expansions known as $\alpha$-continued fractions. For such maps, we perform a numerical study of the behaviour of metric entropy as a function of the parameter. The behaviour of entropy is known to be quite regular for parameters for which a matching condition on the orbits of the endpoints holds. We give a detailed description of the set $M$ where this condition is met: it consists of a countable union of open intervals, corresponding to different combinatorial data, which appear to be arranged in a hierarchical structure. Our experimental data suggest that the complement of $M$ is a proper subset of the set of bounded-type numbers, hence it has measure zero. Furthermore, we give evidence that the entropy on matching intervals is smooth; on the other hand, we can construct points outside of $M$ at which it is not even locally monotone.

## 1 Introduction

Let $\alpha \in (0,1]$. We will study the one-parameter family of one-dimensional maps of the interval

$$T_\alpha : [\alpha-1, \alpha] \to [\alpha-1, \alpha]$$

$$T_\alpha(x) = \begin{cases} \dfrac{1}{|x|} - \left\lfloor \dfrac{1}{|x|} + 1 - \alpha \right\rfloor & \textup{if } x \neq 0 \\[4pt] 0 & \textup{if } x = 0 \end{cases}$$

If we let $\epsilon_{n,\alpha} = \operatorname{sign}\!\left(T_\alpha^{n-1}(x)\right)$ and $a_{n,\alpha} = \left\lfloor \frac{1}{\left|T_\alpha^{n-1}(x)\right|} + 1 - \alpha \right\rfloor$, then for every $x \in [\alpha-1,\alpha]$ we get the expansion

$$x = \cfrac{\epsilon_{1,\alpha}}{a_{1,\alpha} + \cfrac{\epsilon_{2,\alpha}}{a_{2,\alpha} + \ddots}}$$

which we call the $\alpha$-continued fraction expansion of $x$. These systems were introduced by Nakada, and are also known in the literature as Japanese continued fractions.
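As a quick illustration (our own sketch, not the authors' code; the function name is ours), the map $T_\alpha$ can be implemented directly from the definition; for $\alpha = 1$ it reduces to the classical Gauss map.

```python
import math

def T(alpha, x):
    """One step of the alpha-continued fraction map on [alpha - 1, alpha]."""
    if x == 0.0:
        return 0.0
    y = 1.0 / abs(x)
    return y - math.floor(y + 1.0 - alpha)
```

For instance, $T_1(1/4) = 0$, while for $\alpha = 0.42$ one step of the orbit of $\alpha$ gives $1/0.42 - 2$.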

The algorithm, analogously to Gauss’ map in the classical case, provides rational approximations of real numbers: the convergents $p_{n,\alpha}/q_{n,\alpha}$ are given by

$$\begin{cases} p_{-1,\alpha}=1, \quad p_{0,\alpha}=0, \quad p_{n+1,\alpha}=\epsilon_{n+1,\alpha}\,p_{n-1,\alpha}+a_{n+1,\alpha}\,p_{n,\alpha}\\ q_{-1,\alpha}=0, \quad q_{0,\alpha}=1, \quad q_{n+1,\alpha}=\epsilon_{n+1,\alpha}\,q_{n-1,\alpha}+a_{n+1,\alpha}\,q_{n,\alpha}\end{cases}$$
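A hedged sketch of the digit extraction and of the convergent recursion above (names and floating-point choices are ours):

```python
import math

def cf_digits(alpha, x, n):
    """First n digit pairs (epsilon_k, a_k) of the alpha-continued fraction of x."""
    out = []
    for _ in range(n):
        if x == 0.0:
            break
        eps = 1 if x > 0 else -1
        a = int(math.floor(1.0 / abs(x) + 1.0 - alpha))
        out.append((eps, a))
        x = 1.0 / abs(x) - a                 # the next orbit point T_alpha(x)
    return out

def convergents(digits):
    """Convergents p_n/q_n via p_{n+1} = eps_{n+1} p_{n-1} + a_{n+1} p_n (same for q)."""
    p_prev, p, q_prev, q = 1, 0, 0, 1        # (p_{-1}, p_0, q_{-1}, q_0)
    out = []
    for eps, a in digits:
        p_prev, p = p, eps * p_prev + a * p
        q_prev, q = q, eps * q_prev + a * q
        out.append((p, q))
    return out
```

For $\alpha = 1$ this recovers the regular continued fraction: the first four digits of $0.42$ are $2,2,1,1$, and the fourth convergent is $5/12$.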

It is known that for each $\alpha \in (0,1]$ there exists a unique invariant measure $\mu_\alpha$ absolutely continuous w.r.t. Lebesgue measure; we denote its density by $\rho_\alpha$.

In this paper we will focus on the metric entropy $h(T_\alpha)$ of the maps $T_\alpha$, which by Rohlin’s formula is given by

$$h(T_\alpha) = -2\int_{\alpha-1}^{\alpha} \log|x|\,\rho_\alpha(x)\,dx$$

Equivalently, entropy can be thought of as the average exponential growth rate of the denominators of the convergents: for $\mu_\alpha$-a.e. $x$,

$$h(T_\alpha) = 2\lim_{n\to\infty}\frac{1}{n}\log q_{n,\alpha}(x)$$

The exact value of $h(T_\alpha)$ has been computed for $\frac12 \le \alpha \le 1$ by Nakada, and for $\sqrt2 - 1 \le \alpha \le \frac12$ by Cassa, Marmi and Moussa.

Luzzi and Marmi computed numerically the entropy as a function of $\alpha$ by approximating the integral in Rohlin’s formula with Birkhoff averages

$$h(\alpha,N,x) = -\frac{2}{N}\sum_{j=0}^{N-1}\log\left|T_\alpha^{j}(x)\right|$$

for a large number of starting points $x_1,\dots,x_M$ and then averaging over the samples:

$$h(\alpha,N,M) = \frac{1}{M}\sum_{k=1}^{M}h(\alpha,N,x_k)$$
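A minimal sketch of this estimator (our own illustration; the actual computations in the paper use many more iterates and handle near-zero orbit points as discussed in section 2):

```python
import math, random

def entropy_birkhoff(alpha, n_iter, x0):
    """Estimate h(T_alpha) by the Birkhoff average -(2/N) * sum_j log|T_alpha^j(x)|."""
    x, total = x0, 0.0
    for _ in range(n_iter):
        if abs(x) < 1e-12:                       # orbit (numerically) hit zero: reseed
            x = random.uniform(alpha - 1.0, alpha)
        total += -2.0 * math.log(abs(x))
        y = 1.0 / abs(x)
        x = y - math.floor(y + 1.0 - alpha)      # next point of the orbit
    return total / n_iter
```

For $\alpha = 1$ the estimate should approach the known value $h = \pi^2/(6\log 2) \approx 2.3731$.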

Their computations show a rich structure in the behaviour of the entropy as a function of $\alpha$: the function appears to be piecewise regular, and it changes monotonicity on the different intervals of regularity.

These features have been confirmed by results of Nakada and Natsui (thm. 2), which concern a matching condition on the orbits of $\alpha$ and $\alpha-1$:

$$T_\alpha^{k_1}(\alpha) = T_\alpha^{k_2}(\alpha-1)\qquad\textup{for some } k_1, k_2 \in \mathbb{N}$$

which allows one to find countable families of intervals on which the entropy is increasing, decreasing or constant (see section 3). It is not difficult to check that the numerical data computed via Birkhoff’s theorem fit extremely well with these matching intervals.

In this paper we will study the matching condition in great detail. First of all, we analyze the mechanism which produces it from a group-theoretical point of view and find an algorithm relating the $\alpha$-continued fraction expansions of $\alpha$ and $\alpha-1$ when a matching occurs. This allows us to understand the combinatorics behind the matchings once and for all, without having to resort to specific matrix identities. As an example, we will explicitly construct a family of matching intervals which accumulate on a point different from zero. In fact we also have numerical evidence that there exist positive parameter values which are cluster points for intervals of all three matching types: with $k_1 < k_2$, $k_1 = k_2$ and $k_1 > k_2$.

We then describe an algorithm to produce a huge quantity of matching intervals, whose exact endpoints can be found computationally, and analyze the data thus obtained. These data show that matching intervals are organized in a hierarchical structure, and we will describe a recursive procedure which should produce this structure.

Let now $M$ be the union of all matching intervals. It has been conjectured (sect. 4, pg. 1213) that $M$ is an open, dense set of full Lebesgue measure. In fact, the correctness of our scheme would imply the following stronger statement.

###### Conjecture 1.1.

All elements of the complement of $M$ in $(0,1]$ have regular continued fraction expansion bounded by a fixed constant.

Since the set of numbers with bounded continued fraction expansion has Lebesgue measure zero, this clearly implies the previous conjecture.

We will then discuss some consequences of these matchings on the shape of the entropy function, coming from a formula in the literature. This formula allows us to recover the behaviour of entropy in a neighbourhood of points where a matching condition is present. First of all, we will use it to prove that entropy has one-sided derivatives at every point belonging to some matching interval, and also to recover the exact value of the entropy on certain intervals. In general, though, to reconstruct the entropy one also has to know the invariant density at one point.

As an example, we shall examine the entropy on an interval on which (by previous experiments, see sect. 3) it was thought to be linearly increasing: we numerically compute the invariant density for a single value of $\alpha$ and use it to predict the analytical form of the entropy on the whole interval, which in fact turns out not to be linear. The data produced with this extrapolation method agree with high precision, and much better than any linear fit, with the values of the entropy computed via Birkhoff averages.

The paper is structured as follows: in section 2 we will discuss numerical simulations of the entropy and provide some theoretical framework to justify the results; in section 3 we shall analyze the mechanisms which produce the matching intervals and in section 4 we will numerically produce them and study their hierarchical structure; in section 5 we will see how these matching conditions affect the entropy function.

## Acknowledgements

This research was partially supported by the project “Dynamical Systems and Applications” of the Italian Ministry of University and Research (PRIN 2007B3RBEY), and by the Centro di Ricerca Matematica “Ennio De Giorgi”.

## 2 Numerical computation of the entropy

Let us examine more closely the algorithm used to compute the entropy. A numerical problem in evaluating Birkhoff averages arises from the fact that the orbit of a point can fall very close to the origin: the computer will not distinguish a very small value from zero. In this case we neglect this point, and complete the (pseudo)orbit restarting from a new random seed. (Another choice is to throw away the whole orbit and restart; this seems to make little difference in the final result.) As a matter of fact, this algorithm produces an approximate value of

$$h_\epsilon(\alpha) := \int_{I_\alpha} f_\epsilon(x)\,d\mu_\alpha(x)\qquad\textup{with}\qquad f_\epsilon(x) := \begin{cases}0 & |x|\le\epsilon\\ -2\log|x| & |x|>\epsilon\end{cases}$$

where $I_\alpha = [\alpha-1,\alpha]$ and $\epsilon$ is the numerical threshold; of course $h_\epsilon(\alpha)$ is an excellent approximation of the entropy $h(\alpha)$, since the difference is of order $\epsilon\log\epsilon^{-1}$. To calculate $h_\epsilon(\alpha)$ we use the Birkhoff sums

$$h_\epsilon(\alpha,N,x) := \frac{1}{N}\sum_{j=0}^{N-1} f_\epsilon\!\left(T_\alpha^{j}(x)\right)$$

and the fourth author proves that for large $N$ the random variable $h_\epsilon(\alpha,N,x)$ is distributed around its mean approximately with normal law and standard deviation $\sigma_\epsilon(\alpha)/\sqrt{N}$, where

$$\sigma_\epsilon^2(\alpha) := \lim_{n\to+\infty}\int_{I_\alpha}\left(\frac{S_n f_\epsilon - n\int f_\epsilon\,d\mu_\alpha}{\sqrt n}\right)^2 d\mu_\alpha$$

which explains the aforementioned results of Luzzi and Marmi.

One of our goals is to study the function $\alpha \mapsto h(\alpha)$; in particular, we ask whether it displays some regularity such as continuity or semicontinuity. To this aim we pushed the same scheme to higher precision:

1. We take a sample of values of $\alpha$ chosen in a particular subinterval of $(0,1]$;

2. For each value of $\alpha$ we choose a random sample $x_1,\dots,x_M$ in $I_\alpha$ (the cardinality $M$ of this sample is fixed in advance);

3. For each $x_i$ ($1 \le i \le M$) we evaluate $h_\epsilon(\alpha,N,x_i)$ as described before, the number $N$ of iterates being fixed as well;

4. Finally, we evaluate the (approximate) entropy and record the standard deviation as well:

$$\hat h_\epsilon(\alpha,N,M) := \frac{1}{M}\sum_{i=1}^{M} h_\epsilon(\alpha,N,x_i)$$
$$\hat\sigma_\epsilon(\alpha) := \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left[h_\epsilon(\alpha,N,x_i) - \hat h_\epsilon(\alpha,N,M)\right]^2}.$$
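The four steps above can be sketched as follows (a simplified sketch with names of our own; $N$ and $M$ are much larger in the actual experiments):

```python
import math, random

def h_eps(alpha, N, x, eps=1e-12):
    """Birkhoff average of f_eps along one (pseudo)orbit of T_alpha."""
    total = 0.0
    for _ in range(N):
        if abs(x) > eps:
            total += -2.0 * math.log(abs(x))     # f_eps(x) = -2 log|x| when |x| > eps
            y = 1.0 / abs(x)
            x = y - math.floor(y + 1.0 - alpha)
        else:                                    # f_eps vanishes near 0: restart the orbit
            x = random.uniform(alpha - 1.0, alpha)
    return total / N

def entropy_with_error(alpha, N, M):
    """Sample mean and standard deviation over M random starting points."""
    hs = [h_eps(alpha, N, random.uniform(alpha - 1.0, alpha)) for _ in range(M)]
    mean = sum(hs) / M
    std = math.sqrt(sum((h - mean) ** 2 for h in hs) / M)
    return mean, std
```

Again for $\alpha = 1$ the mean should be close to $\pi^2/(6\log 2)$, with a small sample standard deviation.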

### 2.1 Central limit theorem

Let us restate more precisely the convergence result for Birkhoff sums. Let us denote by $BV(I_\alpha)$ the space of real-valued, $\mu_\alpha$-integrable, bounded variation functions on the interval $I_\alpha = [\alpha-1,\alpha]$. We will denote by $S_n f$ the Birkhoff sum

$$S_n f = \sum_{j=0}^{n-1} f\circ T_\alpha^{j}$$
###### Lemma 2.1.

Let $\alpha \in (0,1]$ and let $f$ be an element of $BV(I_\alpha)$. Then the sequence

$$M_n = \int_{I_\alpha}\left(\frac{S_n f - n\int f\,d\mu_\alpha}{\sqrt n}\right)^2 d\mu_\alpha$$

converges to a real nonnegative value, which will be denoted by $\sigma^2$. Moreover, $\sigma = 0$ if and only if there exists a function $u$ such that

$$f - \int_{I_\alpha} f\,d\mu_\alpha = u - u\circ T_\alpha \tag{1}$$

The condition given by (1) is the same as in the proof of the central limit theorem for Gauss’ map, and it is known as the cohomological equation. The main statement of the theorem is the following:

###### Theorem 2.2.

Let $\alpha \in (0,1]$ and let $f$ be an element of $BV(I_\alpha)$ such that the cohomological equation (1) has no solutions. Then, for every $v \in \mathbb{R}$ we have

$$\lim_{n\to\infty}\mu_\alpha\!\left(\frac{S_n f - n\int_{I_\alpha} f\,d\mu_\alpha}{\sigma\sqrt n}\le v\right) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{v} e^{-x^2/2}\,dx$$

Since we know that the invariant density is bounded from below by a nonzero constant, we can show that

###### Proposition 2.3.

For every real-valued nonconstant $f \in BV(I_\alpha)$, equation (1) has no solutions. Hence, the central limit theorem holds.

Now, for every $\epsilon > 0$ the function $f_\epsilon$ defined in the previous section is of bounded variation, hence the central limit theorem holds and the distribution of the approximate entropy approaches a Gaussian as $N \to \infty$. As a corollary, for the standard deviation of Birkhoff averages we get

$$\textup{Std}\left[\frac{S_n f_\epsilon}{n}\right] = \mathbb{E}\left[\left(\frac{S_n f_\epsilon}{n} - \int_{I_\alpha} f_\epsilon\,d\mu_\alpha\right)^2\right]^{1/2} = \frac{\sigma}{\sqrt n} + o\!\left(\frac{1}{\sqrt n}\right)$$

### 2.2 Speed of convergence

In terms of numerical simulations it is of primary importance to estimate the difference between the sum computed at the $n$-th step and the asymptotic value: a semi-explicit bound is given by the following

###### Theorem 2.4.

For every nonconstant real-valued $f \in BV(I_\alpha)$ there exists $C > 0$ such that

$$\sup_{v\in\mathbb{R}}\left[\mu_\alpha\!\left(\frac{S_n f - n\int_{I_\alpha} f\,d\mu_\alpha}{\sigma\sqrt n}\le v\right) - \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{v} e^{-x^2/2}\,dx\right] \le \frac{C}{\sqrt n}$$
###### Proof.

It follows from a Berry–Esséen type of inequality (see th. 8.1 of the reference). ∎

### 2.3 Dependence of standard deviation on α

Given these convergence results for the entropy, it is natural to ask how the standard deviation $\sigma_\epsilon(\alpha)$ varies with $\alpha$. In this case not a single exact value is known; however, by using the fact that the natural extensions of the maps $T_\alpha$ are conjugate to each other for $\alpha$ in a suitable range, it is straightforward to prove the

###### Lemma 2.5.

The map $\alpha \mapsto \sigma_\epsilon(\alpha)$ is constant for $\sqrt2 - 1 \le \alpha \le \frac{\sqrt5-1}{2}$.

###### Proof.

See appendix. ∎

The numerical study of this quantity is pretty interesting. We first considered a window where the entropy is non-monotone. On this interval the standard deviation shows quite a strange behaviour: the values we have recorded do not form a cloud clustering around a continuous line (as for the entropy), but they all cluster above it.

One might guess that this is due to the fact that the map $\alpha \mapsto \sigma_\epsilon(\alpha)$ is only semicontinuous, but the same kind of asymmetry takes place also on the interval where $\sigma_\epsilon$ is constant. Indeed, we can observe the same phenomenon also when evaluating $\hat\sigma_\epsilon$ for a fixed value of $\alpha$ but taking several different sample sets.

On the other hand this strange behaviour cannot be detected for other maps, like the logistic map, and it has not yet been explained. Nevertheless, we point out that for suitably regular observables the standard deviation of Birkhoff sums can be proved to be continuous, at least for $\alpha$ in a suitable range.

## 3 Matching conditions

Nakada and Natsui found a condition on the orbits of $\alpha$ and $\alpha-1$ which allows one to predict more precisely the behaviour of the entropy. Let us denote, for $x \in I_\alpha$ and $n \in \mathbb{N}$, by $M_{\alpha,x,n}$ the matrix such that $x = M_{\alpha,x,n}\!\left(T_\alpha^{n}(x)\right)$ as a Möbius transformation, i.e.

$$M_{\alpha,x,n} = \begin{pmatrix}0&\epsilon_{\alpha,1}\\1&c_{\alpha,1}\end{pmatrix}\begin{pmatrix}0&\epsilon_{\alpha,2}\\1&c_{\alpha,2}\end{pmatrix}\cdots\begin{pmatrix}0&\epsilon_{\alpha,n}\\1&c_{\alpha,n}\end{pmatrix}$$

where $(\epsilon_{\alpha,i}, c_{\alpha,i})$ are the digits of the expansion of $x$.

They proved the following

###### Theorem 3.1.

(thm. 2) Let us suppose that there exist positive integers $k_1$ and $k_2$ such that the matching condition (I)

$$T_\alpha^{k_1}(\alpha) = T_\alpha^{k_2}(\alpha-1)$$

holds, together with two further conditions (II) and (III) on the symbolic orbits. Then there exists a neighbourhood of $\alpha$ on which the entropy $h$ is:

• strictly increasing if $k_1 < k_2$

• constant if $k_1 = k_2$

• strictly decreasing if $k_1 > k_2$

It turns out that conditions (I)-(II)-(III) define a collection of open intervals (called matching intervals); Nakada and Natsui also proved that each of the three cases (increasing, constant, decreasing) takes place on at least one infinite family of disjoint matching intervals clustering at the origin, thus proving the non-monotonicity of the entropy function. Moreover, they conjectured that the union of all matching intervals is a dense, open subset of $(0,1]$ with full Lebesgue measure.

In the following we will analyze more closely the mechanism which leads to the existence of such matchings. As a consequence, we shall see that it looks more natural to drop condition (III) from the previous definition and to replace (II) with the algebraic condition

$$\textup{(II}'\textup{)}\qquad M_{\alpha,\alpha,k_1-1} = \pm\begin{pmatrix}1&1\\0&1\end{pmatrix}M_{\alpha,\alpha-1,k_2-1}\begin{pmatrix}1&0\\-1&-1\end{pmatrix}$$

(which implies (I)).

We can now define the matching set as

$$M = \{\alpha\in(0,1]\ \textup{s.t. (I) and (II}'\textup{) hold}\}$$

Note that $M$ is open, since the symbolic codings of the orbit of $\alpha$ up to step $k_1$ and of the orbit of $\alpha-1$ up to step $k_2$ are locally constant.

Moreover, we will see that under this condition it is possible to predict the symbolic orbit of $\alpha-1$ given the symbolic orbit of $\alpha$, and vice versa. As an application, we will construct a countable family of matching intervals which accumulates at a point different from zero.

Let us point out that our definition of matching produces a set slightly bigger than the union of all matching intervals satisfying conditions (I), (II), (III): in fact the difference is just a countable set of points.

### 3.1 Structure of PGL(2, Z)

Let us define $PGL(2,\mathbb{Z})$ to be the group of $2\times2$ integer matrices with determinant $\pm1$, modulo its center $\{\pm I\}$, and $PSL(2,\mathbb{Z})$ to be the subgroup of (classes of) matrices with determinant $1$. We have an exact sequence

$$1\to PSL(2,\mathbb{Z})\to PGL(2,\mathbb{Z})\to\{\pm1\}\to1$$

where the first arrow is the inclusion and the second the determinant; moreover, if we consider the group

$$C = \left\{\begin{pmatrix}\pm1&0\\0&\pm1\end{pmatrix}\right\}\cong\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$$

and denote by $\overline{C}$ its image in $PGL(2,\mathbb{Z})$, then

$$\overline{C}\cap PSL(2,\mathbb{Z}) = \{e\}$$

therefore we have the semidirect product decomposition

$$PGL(2,\mathbb{Z}) = PSL(2,\mathbb{Z})\rtimes\overline{C}$$

Now, it is well known that $PSL(2,\mathbb{Z})$ is the free product

$$PSL(2,\mathbb{Z}) = \langle S\rangle * \langle U\rangle \cong \mathbb{Z}/2\mathbb{Z}*\mathbb{Z}/3\mathbb{Z}$$

where

$$S=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\qquad U=\begin{pmatrix}0&-1\\1&1\end{pmatrix}$$

are such that $S^2 = I$ and $U^3 = I$ in $PSL(2,\mathbb{Z})$. Geometrically, $S$ represents the function $x\mapsto -1/x$, and if we denote by $T$ the element corresponding to the translation $x\mapsto x+1$, we have $U = ST$.

The matrix $V = \begin{pmatrix}-1&0\\0&1\end{pmatrix}$, which represents $x\mapsto -x$, projects to a generator of $\overline{C}$; it satisfies $V^2=I$, $VSV^{-1}=S$ and $VTV^{-1}=T^{-1}$ in $PGL(2,\mathbb{Z})$, so we get the presentation

$$PGL(2,\mathbb{Z}) = \left\{S,T,V\mid S^2=I,\ (ST)^3=I,\ V^2=I,\ VSV^{-1}=S,\ VTV^{-1}=T^{-1}\right\}$$
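These relations can be checked by direct matrix computation; here is a quick sanity check (our own sketch, where equality in $PGL(2,\mathbb{Z})$ means equality up to a global sign):

```python
def mul(A, B):
    """Product of 2x2 integer matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def proj_eq(A, B):
    """Equality in PGL(2, Z), i.e. up to a global sign."""
    return A == B or A == [[-x for x in row] for row in B]

I = [[1, 0], [0, 1]]
S = [[0, -1], [1, 0]]          # x -> -1/x
T = [[1, 1], [0, 1]]           # x -> x + 1
V = [[-1, 0], [0, 1]]          # x -> -x
U = mul(S, T)                  # U = ST
```

Running the checks confirms $S^2 = I$, $(ST)^3 = U^3 = I$, $V^2 = I$, $VSV^{-1} = S$ and $VTV^{-1} = T^{-1}$, all projectively.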

### 3.2 Encoding of matchings

Every step of the algorithm generating $\alpha$-continued fractions consists of an operation of the type

$$z\mapsto\frac{\epsilon}{z}-c,\qquad\epsilon\in\{\pm1\},\ c\in\mathbb{N}$$

which corresponds to the element $T^{-c}SV^{e(\epsilon)}$ with

$$e(\epsilon) = \begin{cases}0 & \textup{if }\epsilon=-1\\1 & \textup{if }\epsilon=1\end{cases}$$

so if $x$ belongs to the cylinder with digits $(\epsilon_1,c_1),\dots,(\epsilon_k,c_k)$ we can express

$$T_\alpha^{k}(x) = T^{-c_k}SV^{e(\epsilon_k)}\cdots T^{-c_1}SV^{e(\epsilon_1)}(x)$$

Now, suppose we have a matching, let $\alpha$ belong to the cylinder $(\epsilon_1,a_1),\dots,(\epsilon_{k_1},a_{k_1})$ and let $\alpha-1$ belong to the cylinder $(\eta_1,b_1),\dots,(\eta_{k_2},b_{k_2})$. Since $\alpha-1 = T^{-1}(\alpha)$, one can rewrite the matching condition as

$$T^{-a_{k_1}}SV^{e(\epsilon_{k_1})}\cdots T^{-a_1}SV^{e(\epsilon_1)}(\alpha) = T^{-b_{k_2}}SV^{e(\eta_{k_2})}\cdots T^{-b_1}SV^{e(\eta_1)}T^{-1}(\alpha)$$

hence it is sufficient to have an equality of the two Möbius transformations

$$T^{-a_{k_1}}SV^{e(\epsilon_{k_1})}\cdots T^{-a_1}SV^{e(\epsilon_1)} = T^{-b_{k_2}}SV^{e(\eta_{k_2})}\cdots T^{-b_1}SV^{e(\eta_1)}T^{-1}$$

We call such a matching an algebraic matching. Now, numerical evidence shows that, if a matching occurs, then

$$\epsilon_1=+1,\qquad \epsilon_i=-1\ \textup{for } 2\le i\le k_1-1,\qquad \eta_i=-1\ \textup{for } 1\le i\le k_2-1$$

If we make this assumption we can rewrite the matching condition as

$$V^{e(\epsilon_{k_1})+1}T^{a_{k_1}(-1)^{e(\epsilon_{k_1})}}ST^{a_{k_1-1}}S\cdots T^{a_1}S = V^{e(\eta_{k_2})}T^{b_{k_2}(-1)^{e(\eta_{k_2})+1}}ST^{-b_{k_2-1}}S\cdots T^{-b_1}ST^{-1}$$

which implies $e(\epsilon_{k_1})+1 \equiv e(\eta_{k_2}) \pmod 2$, i.e. $\epsilon_{k_1} = -\eta_{k_2}$. If for instance $e(\epsilon_{k_1})=1$ and $e(\eta_{k_2})=0$, by substituting $T = SU$ one has

$$(U^2S)^{a_{k_1}}U(SU)^{a_{k_1-1}-2}SU^2\cdots SU^2(SU)^{a_1-2}SUS = (U^2S)^{b_{k_2}-1}US(U^2S)^{b_{k_2-1}-2}US\cdots US(U^2S)^{b_1-2}US$$

Since every element of $PSL(2,\mathbb{Z})$ can be written as a product of $S$ and $U$ in a unique way, one can get a relation between the $a_i$ and the $b_j$. Notice that, since we are interested in small values of $\alpha$, $a_i \ge 2$ and $b_i \ge 2$ for every $i$, hence there is no cancellation in the equation above. By counting the number of $U^2S$ blocks at the beginning of the word, one has $a_{k_1} = b_{k_2}-1$, and by simplifying,

$$(SU)^{a_{k_1-1}-2}SU^2\cdots SU^2(SU)^{a_1-2}SUS = S(U^2S)^{b_{k_2-1}-2}US\cdots US(U^2S)^{b_1-2}US \tag{2}$$

If one has $e(\epsilon_{k_1})=0$ and $e(\eta_{k_2})=1$ instead, the matching condition is

$$(SU)^{a_{k_1}-1}SU^2(SU)^{a_{k_1-1}-2}SU^2\cdots SU^2(SU)^{a_1-2}SUS = (SU)^{b_{k_2}}SU^2S(U^2S)^{b_{k_2-1}-2}US\cdots US(U^2S)^{b_1-2}US$$

which implies $a_{k_1} = b_{k_2}+1$, and simplifying still yields equation (2).

Let us remark that (2) is equivalent to

$$T^{-1}ST^{-a_{k_1-1}}S\cdots T^{-a_1}SV = VST^{-b_{k_2-1}}S\cdots T^{-b_1}ST^{-1}$$

which is precisely condition (II′); by evaluating both sides at $\alpha$ one gets

$$\frac{1}{T_\alpha^{k_1-1}(\alpha)}+\frac{1}{T_\alpha^{k_2-1}(\alpha-1)} = -1$$

Moreover, from (2) one has that to every partial quotient $a_i$ bigger than $2$ there corresponds a block of consecutive partial quotients in the sequence of the $b_j$, whose length is determined by $a_i$, and vice versa. More formally, one can give the following algorithm to produce the coding of the orbit of $\alpha-1$ up to step $k_2-1$, given the coding of the orbit of $\alpha$ up to step $k_1-1$ (under the hypothesis that an algebraic matching occurs, and that at least $k_1$ is known).

1. Write down the coding of $\alpha$ from step $1$ to step $k_1-1$, separated by a star symbol:

$$a_1\star a_2\star\cdots\star a_{k_1-1}$$
2. Subtract $2$ from every $a_i$; if $a_i-2 = 0$, leave the space empty instead of writing $0$:

$$a_1-2\star a_2-2\star\cdots\star a_{k_1-1}-2$$
3. Replace stars with numbers and vice versa (replace the number $n$ with $n$ consecutive stars, and write the number $j$ in place of $j$ stars in a row);

4. Add $2$ to every number you find (counting an empty space as $0$) and remove the stars: you will get the sequence $b_1,\dots,b_{k_2-1}$.

###### Example.

Let us suppose there is a matching with and has initial coding . The steps of the algorithm are:

•  3⋆4⋆4⋆⋯⋆4⋆n times2
•  1⋆2⋆2⋆⋯⋆2⋆n times
•  ⋆1⋆⋆1⋆⋆1…1⋆⋆1n times
•  2 3 2 3 …2 3n times

so the coding of is , and .
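Our reading of the four steps can be condensed into a few lines: writing each $a_i-2$ as a run of marks separated by stars, steps 2–4 amount to exchanging the two symbols and reading the digits back. The following is a sketch under that interpretation (function name is ours; all digits are assumed $\ge 2$); note that the transformation is an involution.

```python
def conjugate_digits(a):
    """Star algorithm: coding (a_1, ..., a_{k_1-1}) of alpha -> coding of alpha - 1.

    Each digit a_i >= 2 becomes a run of (a_i - 2) '#' marks, runs are separated
    by '*' (the star of the algorithm); exchanging '#' and '*' and reading the
    digits back (an empty run counts as 0, then add 2) performs steps 2-4."""
    s = "*".join("#" * (d - 2) for d in a)
    s = s.translate(str.maketrans("#*", "*#"))
    return [len(run) + 2 for run in s.split("*")]
```

With the coding of the example for $n = 2$, i.e. $(3,4,4,2)$, this produces $(2,3,2,3,2,3)$, and applying it twice returns the original coding.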

### 3.3 Construction of matchings

Let us now use this knowledge to construct explicitly an infinite family of matching intervals which accumulates on a non-zero value of $\alpha$. For every $n$, let us consider the values of $\alpha$ such that $\alpha$ belongs to the cylinder $(3,4,4,\dots,4)$ ($n$ fours) with respect to $T_\alpha$, as in the example above. Let us compute the endpoints of such a cylinder.

• The right endpoint is defined by

$$\begin{pmatrix}-4&-1\\1&0\end{pmatrix}^{n}\begin{pmatrix}-3&1\\1&0\end{pmatrix}(\alpha) = \alpha-1$$

i.e.

$$\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}-4&-1\\1&0\end{pmatrix}^{n}\begin{pmatrix}-3&1\\1&0\end{pmatrix}(\alpha) = \alpha$$
• The left endpoint is defined by

$$\begin{pmatrix}-4&-1\\1&0\end{pmatrix}^{n}\begin{pmatrix}-3&1\\1&0\end{pmatrix}(\alpha) = -\frac{1}{\alpha+2}$$

i.e.

$$\begin{pmatrix}-2&-1\\1&0\end{pmatrix}\begin{pmatrix}-4&-1\\1&0\end{pmatrix}^{n}\begin{pmatrix}-3&1\\1&0\end{pmatrix}(\alpha) = \alpha$$

By diagonalizing the matrices and computing the powers one can compute these values explicitly. In particular,

$$\alpha^{1}_{\min} = \frac{\sqrt3-1}{2}+\frac{40\sqrt3-69}{13}\,(2+\sqrt3)^{-2n}+O\!\left((2+\sqrt3)^{-4n}\right)$$
$$\alpha^{1}_{\max} = \frac{\sqrt3-1}{2}+\frac{10\sqrt3-12}{13}\,(2+\sqrt3)^{-2n}+O\!\left((2+\sqrt3)^{-4n}\right)$$

The values of $\alpha$ such that $\alpha-1$ belongs to the cylinder $(2,3,2,3,\dots)$ are defined by the equations

$$\left[\begin{pmatrix}-3&-1\\1&0\end{pmatrix}\begin{pmatrix}-2&-1\\1&0\end{pmatrix}\right]^{n+1}(\alpha-1) = \alpha-1$$

for the left endpoint and

$$\left[\begin{pmatrix}-3&-1\\1&0\end{pmatrix}\begin{pmatrix}-2&-1\\1&0\end{pmatrix}\right]^{n+1}(\alpha-1) = \alpha$$

for the right endpoint, so the left endpoint corresponds to the periodic point such that

$$\left[\begin{pmatrix}-3&-1\\1&0\end{pmatrix}\begin{pmatrix}-2&-1\\1&0\end{pmatrix}\right](\alpha-1) = \alpha-1$$

i.e.

$$\alpha^{2}_{\min} = \frac{\sqrt3-1}{2}$$

and

$$\alpha^{2}_{\max} = \frac{\sqrt3-1}{2}+\frac{33-19\sqrt3}{2}\,(2+\sqrt3)^{-2n}+O\!\left((2+\sqrt3)^{-4n}\right)$$

By comparing the first order terms one gets, asymptotically,

$$\alpha^{2}_{\min}<\alpha^{1}_{\min}<\alpha^{2}_{\max}<\alpha^{1}_{\max}$$

hence the two cylinders intersect for infinitely many $n$, producing infinitely many matching intervals which accumulate at the point $\frac{\sqrt3-1}{2}$. The length of such intervals is

$$\alpha^{2}_{\max}-\alpha^{1}_{\min} = \frac{567-327\sqrt3}{26}\,(2+\sqrt3)^{-2n}+O\!\left((2+\sqrt3)^{-4n}\right)$$
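The fixed-point computations above reduce to solving a quadratic equation; here is a sketch (our own, with names of our choosing) for the left endpoint $\alpha^{2}_{\min}$:

```python
import math

def mat_mul(A, B):
    """Product of 2x2 matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mobius_fixed_points(M):
    """Fixed points of x -> (a x + b)/(c x + d): roots of c x^2 + (d - a) x - b = 0."""
    (a, b), (c, d) = M
    disc = math.sqrt((d - a) ** 2 + 4 * b * c)
    return (a - d - disc) / (2 * c), (a - d + disc) / (2 * c)

# One period of the cylinder (2, 3) acting on alpha - 1:
M = mat_mul([[-3, -1], [1, 0]], [[-2, -1], [1, 0]])
x1, x2 = mobius_fixed_points(M)
fixed = x1 if -1 < x1 < 0 else x2        # the fixed point lying in (-1, 0)
alpha2_min = fixed + 1                   # should equal (sqrt(3) - 1)/2
```

The product matrix is $\begin{pmatrix}5&3\\-2&-1\end{pmatrix}$, and its fixed point in $(-1,0)$ indeed gives $\alpha^{2}_{\min} = \frac{\sqrt3-1}{2}$.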

## 4 Numerical production of matchings

In this section we will describe an algorithm to produce a lot of matching intervals (i.e. find out their endpoints exactly), as well as the results we obtained through its implementation. Our first attempt to find matching intervals used the following scheme:

1. We generate a random seed of values of $\alpha$ belonging to the interval of interest. When high precision is needed (we manage to detect quite tiny intervals) the random seed is composed of algebraic numbers, in order to allow symbolic (i.e. non floating-point) computation.

2. We find numerically candidates for the values of and (if any) simply by computing the orbits of and of up to some finite number of steps, and numerically checking if holds approximately for some and smaller than some bound.

3. Given any triplet $(\alpha, k_1, k_2)$ determined as above, we compute the symbolic orbit of $\alpha$ up to step $k_1-1$ and the symbolic orbit of $\alpha-1$ up to step $k_2-1$.

4. We check that the two Möbius transformations associated to these symbolic orbits satisfy condition (II′):

$$M_{\alpha,\alpha,k_1-1} = \pm\begin{pmatrix}1&1\\0&1\end{pmatrix}M_{\alpha,\alpha-1,k_2-1}\begin{pmatrix}1&0\\-1&-1\end{pmatrix}$$
5. We solve the system of quadratic equations which corresponds to imposing that $x$ and $x-1$ have the same symbolic orbits as $\alpha$ and $\alpha-1$, respectively.

Let us remark that this is the heaviest step of the whole procedure, since we must solve a number of quadratic inequalities; for this reason the value $k_1+k_2$ may be thought of as a measure of the computational cost of the matching interval and will be referred to as the order of the matching.
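Step 2 of the scheme, finding candidate pairs $(k_1,k_2)$ in floating point, can be sketched as follows (the tolerance, bound and function names are ours):

```python
import math

def T(alpha, x):
    """The alpha-continued fraction map."""
    if x == 0.0:
        return 0.0
    y = 1.0 / abs(x)
    return y - math.floor(y + 1.0 - alpha)

def find_matching(alpha, kmax=30, tol=1e-9):
    """Smallest-order pair (k1, k2) with T^k1(alpha) ~ T^k2(alpha - 1), if any."""
    orb_a, orb_b = [alpha], [alpha - 1.0]
    for _ in range(kmax):
        orb_a.append(T(alpha, orb_a[-1]))
        orb_b.append(T(alpha, orb_b[-1]))
    best = None
    for k1 in range(1, kmax + 1):
        for k2 in range(1, kmax + 1):
            if abs(orb_a[k1] - orb_b[k2]) < tol:
                if best is None or k1 + k2 < best[0] + best[1]:
                    best = (k1, k2)
    return best
```

For instance $\alpha = 0.42$ gives $k_1 = k_2 = 2$, while $\alpha = 0.75$ gives $k_1 = 2$, $k_2 = 1$, consistently with the entropy being constant around the former value and decreasing around the latter.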

Following this scheme, we detected more than $10^7$ matching intervals, whose endpoints are quadratic surds; their union still leaves many gaps, each of quite small size. A table with a sample of such data is contained in the appendix. (A more efficient algorithm, which avoids random sampling, will be discussed in subsection 4.1.)

In order to detect some patterns in the data, let us plot the size of these intervals (figure 5): for each matching interval we drew a point whose abscissa is the position of the interval and whose ordinate is its size.

It seems there is some self-similar pattern: in order to understand its structure better it is useful to identify some “borderline” families of points. The most evident family is the one that appears as the highest line of points in the above figure (which we have highlighted in green): these points correspond to matching intervals containing the values $\frac1n$, whose endpoints can be written down explicitly; this is the family already exhibited by Nakada and Natsui. Since the size of these intervals decays roughly like the cube of their position, the points of this family rapidly approach the horizontal axis. This suggests that the family will “straighten” if we replot our data in log-log scale. This is indeed the case, and in fact it seems that there are also other families which get perfectly aligned along parallel lines of slope 3 (see figure 6).

If we consider the ordinary continued fraction expansion of the elements of these families, we realize that they obey some very simple empirical rules (unfortunately, we are still not able to prove all of these rules):

1. the endpoints of any matching interval have purely periodic continued fraction expansions, of the form $[0;\overline{a_1,\dots,a_m,1}]$ and $[0;\overline{a_1,\dots,a_m+1}]$; this implies that the rational number $r = [0;a_1,\dots,a_m,1]$ is a common convergent of both endpoints, and is the rational with smallest denominator which falls inside the matching interval;

2. any endpoint of a matching interval belongs to an infinite family of such endpoints; in particular each family has a member in each cylinder $\{a_1 = n\}$ for $n$ large enough, so that each family will cluster at the origin.

3. other families can be detected in terms of the continued fraction expansion: for instance, on each cylinder $\{a_1 = n\}$ the largest matching interval on which the entropy is decreasing has endpoints with an explicit periodic expansion;

4. matching intervals seem to be organized in a binary tree structure, which is related to the Stern-Brocot tree (sometimes also known as the Farey tree): one can thus design a bisection algorithm to fill in the gaps between intervals, and what is left over is a closed, nowhere dense set. This and the following points will be analyzed extensively in subsection 4.1;

5. if a quadratic surd is the endpoint of some matching interval, then its partial quotients are bounded; this would imply that the values which do not belong to any matching interval must be bounded-type numbers, with partial quotients bounded above by the same constant;

6. it is possible to compute the exponent of a matching from the continued fraction expansion of any one of its endpoints.

Figure 7: A few of the largest matching intervals in the window [1/4, 1/3], and the corresponding nodes of the Stern-Brocot tree. The value on the y axis is the logarithm of the size of each interval.

From our data it is also evident that the size of these intervals decreases as the order increases, and low order matchings tend to disappear as $\alpha$ approaches zero. Moreover, as $\alpha$ tends to $0$ the space covered by members of the “old” families of type (ii) already encountered decreases, hence new families have to appear. One can quantify this phenomenon from figure 6: the size of the matching intervals in any family decreases like the cube of their position, while the cylinder $\{a_1 = n\}$ containing them has size of order only $n^{-2}$: this means that, as $n$ increases, the mass of each cylinder gets more and more split among a huge number of tiny intervals.

This fact compromises our numerical algorithm: it is clear that choosing floating point values at random becomes a hopeless strategy when approaching zero. Indeed, even if there still are intervals bigger than the double-precision threshold, in most cases the random seed will fall in a really tiny interval corresponding to a very high matching order: this amounts to very little gain as the result of a really heavy computation.

We can still try to test numerically the conjecture that the matching set has full measure in $(0,1]$; but we must expect that the percentage of space covered by matching intervals (found numerically) will decrease dramatically near the origin, since we only detect intervals with order bounded by some threshold. The matching intervals we have found so far cover a large fraction of the interval $(0,1]$, and the covered percentage increases further when restricting to subintervals farther from the origin.

The following graph represents the percentage of the interval which is covered by matching intervals of order up to $k$, for different values of $k$ (let us point out that for big values of $k$ the graph does not take into account all matching intervals of that order, but only those we have found so far). It gives an idea of the gain, in terms of the total size covered by matching intervals, one gets when refining the gaps (i.e. considering matching intervals of higher order).

Finally, to have a more precise idea of the relationship between the order of a matching and the size of the matching interval it is useful to look at the following scatter plot: the red dots correspond to matching intervals found using a random seed, and the green ones to intervals found using the bisection algorithm. The two lines bounding the cloud correspond to matching intervals with very definite patterns: the upper line corresponds to the family containing the values $\frac1n$ mentioned above, the lower line to another family with explicit periodic endpoints. The latter intervals converge to $\frac{3-\sqrt5}{2}$, which is the supremum of all values where the entropy is increasing.

Thus numerical evidence shows that, if $I$ is a matching interval of order $k_1+k_2$, then the size of $I$ is bounded below in terms of its order. On the other hand we know for sure that, on the right of 0.0475 (which corresponds to the leftmost matching interval of our list), the biggest gap left by the matching intervals found so far is quite small. So, if a matching interval does not yet belong to our list, either it lies close to the origin (see figure 8), or its size must be smaller than the biggest gap and, by the aforementioned empirical rule, its order must be correspondingly high. Hence, our list should include all matching intervals up to a fairly high order.

Figure 10: The order $k_1+k_2$ versus the logarithm of the size of the first $10^7$ matching intervals found.

### 4.1 The matching tree

As mentioned before, it seems that matching intervals are organized in a binary tree structure. To describe this structure, we will provide an algorithm which allows one to construct all matching intervals by recursively “filling the gaps” between matching intervals previously obtained, similarly to the way the usual middle-third Cantor set is constructed.

In order to do so, let us first notice that every rational value $r \in (0,1)$ has exactly two (standard) continued fraction expansions:

$$r = [0;a_1,a_2,\dots,a_m,1] = [0;a_1,a_2,\dots,a_m+1]$$

One can associate to $r$ the interval $I(r)$ whose endpoints are the two quadratic surds whose continued fractions are obtained by endless repetition of the two expansions of $r$:

###### Definition 4.1.

Given $r \in \mathbb{Q}\cap(0,1)$ with continued fraction expansions as above, we define $I(r)$ to be the interval with endpoints

$$[0;\overline{a_1,a_2,\dots,a_m,1}]\qquad\textup{and}\qquad[0;\overline{a_1,a_2,\dots,a_m+1}]$$

(in any order). The strings $(a_1,\dots,a_m,1)$ and $(a_1,\dots,a_m+1)$ will be said to be conjugate.
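The endpoints of $I(r)$ are fixed points of an explicit Möbius map, hence quadratic surds; a sketch (ours) of their numerical evaluation:

```python
import math

def periodic_cf_value(period):
    """Value of the purely periodic continued fraction [0; period, period, ...]."""
    # The matrix of t -> 1/(a + t) is [[0, 1], [1, a]]; compose one full period.
    A = [[1, 0], [0, 1]]
    for a in period:
        A = [[A[0][1], A[0][0] + A[0][1] * a],
             [A[1][1], A[1][0] + A[1][1] * a]]
    (p, q), (r, s) = A
    # Fixed point of x -> (p x + q)/(r x + s): root of r x^2 + (s - p) x - q = 0.
    disc = math.sqrt((s - p) ** 2 + 4 * q * r)
    for x in ((p - s - disc) / (2 * r), (p - s + disc) / (2 * r)):
        if 0 < x < 1:
            return x
```

For $r = 1/3 = [0;3] = [0;2,1]$ the two endpoints of $I(1/3)$ come out as $(\sqrt{13}-3)/2 \approx 0.3028$ and $(\sqrt3-1)/2 \approx 0.3660$; note that the latter is exactly the accumulation point found in section 3.3.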

Notice that $r \in I(r)$. It looks like all matching intervals are of the form $I(r)$ for some rational $r$. On the other hand,

###### Definition 4.2.

Given an open interval $J$, one can define the pseudocenter of $J$ as the rational number which has the minimum denominator among all rational numbers contained in $J$.

It is straightforward to prove that the pseudocenter of an interval is unique, and that the pseudocenter of $I(r)$ is $r$ itself.
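Computing the pseudocenter is a walk down the Stern-Brocot tree mentioned above: starting from the bounds $0/1$ and $1/1$, one repeatedly takes mediants until a mediant falls inside the interval. A sketch (ours), for intervals contained in $(0,1)$:

```python
from fractions import Fraction

def pseudocenter(lo, hi):
    """Rational with smallest denominator strictly inside (lo, hi), 0 <= lo < hi <= 1."""
    a, b = Fraction(0, 1), Fraction(1, 1)
    while True:
        med = Fraction(a.numerator + b.numerator, a.denominator + b.denominator)
        if med <= lo:
            a = med                  # the interval lies to the right of the mediant
        elif med >= hi:
            b = med                  # the interval lies to the left of the mediant
        else:
            return med               # first mediant inside the interval: done
```

For instance, the pseudocenter of $(0.41, 0.44)$ is $3/7$, and feeding in floating-point approximations of the endpoints of $I(1/3)$ returns $1/3$, as expected.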

We are now ready to describe the algorithm:

1. The rightmost matching interval is $\left(\frac{\sqrt5-1}{2},\,1\right)$; its complement is the gap $\left(0,\frac{\sqrt5-1}{2}\right]$.

2. Suppose we are given a finite set of intervals, called gaps of level $n$, such that their complement is a union of matching intervals. For each gap we determine its pseudocenter $r$. Let us write the continued fraction expansion of $r$ in terms of the finite string containing the first common partial quotients of the two endpoints, the first partial quotient on which the two endpoints differ, and the rest of the expansion, respectively. The pseudocenter will be the rational number with expansions