
# A global approach to the refinement of manifold data

## Abstract

A refinement of manifold data is a computational process that produces a denser set of discrete data from a given one. Such refinements are closely related to multiresolution representations of manifold data by pyramid transforms, and to the approximation of manifold-valued functions by repeated refinement schemes. Most refinement methods compute each refined element separately, independently of the computations of the other elements. Here we propose a global method which computes all the refined elements simultaneously, using geodesic averages. We analyse repeated refinement schemes based on this global approach, and derive conditions guaranteeing strong convergence.

Key Words. Manifold data, geodesic average, convergence analysis.

AMS(MOS) subject classification. 65D99, 40A99, 58E10.

## 1 Introduction

In recent years, many modern sensing devices have emerged that produce data on manifolds, or data that is modelled as points on a manifold. An example of such data is the orientations of a rigid body as a function of time, which can be regarded as data sampled from a function mapping a real interval to the Lie group of orthogonal matrices [29]. The classical methods for the approximation of a function from its samples, such as polynomial or spline interpolation, are linear, and due to the non-linearity of manifolds there is no guarantee that such approximations always produce values on the manifold. Therefore, alternative methods are required.

Contrary to the development of classical approximation and numerical analysis methods for real-valued functions, the development in the case of manifold-valued functions, which is rather recent, was mainly concerned in its first stages with advanced numerical and approximation processes. Examples of such processes are geometric integration of ODEs on manifolds (see e.g. [19]), subdivision schemes on manifolds (see e.g. [34, 37]) and wavelet-type approximation on manifolds (see e.g. [17, 29]).

Subdivision schemes were created originally to design geometrical models [3, 23]. Later, they were recognized as methods for approximation [5, 11]. The important advantage of these schemes is their simplicity and locality. They are defined by repeatedly refining sequences of points, applying in each refinement step simple and local arithmetic averaging. This enables the extension of subdivision schemes to more abstract settings, such as matrices [32] and sets [9].

For manifold-valued data, Wallner and Dyn [36] introduced the concept of adapting linear subdivision schemes to manifold data, and in particular to Lie group data. That paper initiated a new line of research on subdivision schemes for manifold data, e.g., [32, 34]. Adaptation of a linear subdivision scheme can be done in several ways, for example, by rewriting the refinement rules as repeated binary averages, and then replacing each binary average by a geodesic average, see e.g., [32, 36].

Averages play a significant role in the methods for the adaptation of linear subdivision schemes to manifold data. A natural choice of an average of two points on a geodesically complete manifold is the midpoint of the geodesic curve between the two points. In some cases, the geodesic curve is known explicitly, e.g., [14, 16, 18, 25], while in general it can be calculated numerically, e.g., [4, 15, 22, 26].

The weighted geodesic average is induced by the geodesic curve, and generalizes the weighted arithmetic average in Euclidean spaces. For a weight $t\in[0,1]$, it is the point on the geodesic curve connecting the two averaged points which divides this curve segment in the ratio $t:(1-t)$. Furthermore, on several manifolds, the geodesic average can also be extended to weights outside $[0,1]$, that is, extrapolating the geodesic curve of two points beyond these points, e.g., [20]. The geodesic average is also well-defined on more general spaces known as geodesic metric spaces, e.g., [1]. Thus, in such spaces our adaptation method is also valid.

We present here a method for the adaptation of linear subdivision schemes to manifold data based on the idea of replacing weighted arithmetic averages by weighted geodesic averages in a generalized Lane-Riesenfeld algorithm [23]. The refinement step in this proposed generalization consists of an elementary refinement of doubling the data, followed by several rounds of averaging. In each round of averaging the data is replaced by the same weighted average of all pairs of adjacent points in the data. Such an adaptation is discussed shortly in [8, 36]. We term such a refinement step “global refinement”.

Many results, concerning the convergence and smoothness of adapted subdivision schemes, are presented in the literature of the past few years, e.g., [34, 36, 37]. Most of these results are based on proximity conditions. A proximity condition bounds the distance between the operation of an adapted refinement step and the operation of its linear counterpart in terms of the maximal distance between adjacent data points. Such proximity conditions hold since a manifold is locally close to a Euclidean space. Thus, the convergence results are often valid only for "dense enough data", which is, in general, a condition that is hard to quantify and depends on properties of the manifold (such as curvature).

Recently, progress in the convergence analysis has been made in several papers which address the question of convergence from any initial data. Such a result is presented in [13] for subdivision schemes adapted to data in Hadamard spaces. Results for data on the manifold of positive definite matrices are derived in [32]. For the case of interpolatory subdivision schemes there are also results for several different metric spaces, e.g., [20, 21, 35].

Here we prove convergence from all initial data of the above adapted generalized Lane-Riesenfeld algorithm, when the weighted average in each round corresponds to a weight in $(0,1)$, and give conditions for such convergence when some averages have weights outside $[0,1]$. In addition, we extend the above construction to a wider class of linear schemes, by introducing weighted trinary averages based on weighted geodesic averages, and give sufficient conditions for convergence from all initial data. In all these cases, and for manifolds with globally bounded curvature, the convergence guarantees that the limits are $C^1$, based on the proximity analysis in [36].

Three important observations on our adaptation method:

1. It extends the class of linear schemes for which an adapted scheme is known to be convergent from all initial data.

2. It is well-defined and convergent from all data in a wide class of geodesic metric spaces.

3. It leads to computationally feasible subdivision schemes.

The convergence analysis introduced in this paper supplies a new tool for the analysis of linear schemes. In particular, this analysis guarantees the convergence of any linear scheme with a symbol which is a Hurwitz polynomial, up to multiplication by a monomial. The question whether this method can improve our ability to determine the convergence of linear subdivision schemes is beyond the scope of this paper and is still under investigation.

The paper is organized as follows. We start in Section 2 by providing a short survey of the required background, including a summary on the Lane-Riesenfeld algorithm and a short review on geodesics and manifolds. We conclude Section 2 with a short discussion on a sufficient condition for the convergence of adapted subdivision schemes. Section 3 introduces our generalization of the Lane-Riesenfeld algorithm. Then, we give conditions for the convergence of an adapted scheme based on this algorithm, from any initial manifold data, where the corresponding linear scheme has a factorizable symbol over the reals. In Section 4 we further extend the algorithm to the adaptation of general linear schemes, and conclude the paper by the convergence analysis of these schemes.

## 2 Preliminaries

### 2.1 Subdivision schemes and the Lane-Riesenfeld algorithm

Linear, univariate subdivision schemes are defined on sequences of real numbers (the functional setting), and are extended to sequences of vectors by operating on each component separately. In the functional setting, these schemes are approximation operators when the data is sampled uniformly from a continuous function $F$. We denote the sampled data $f_i=F(ih)$, $i\in\mathbb{Z}$, with sampling step $h$, by $\mathbf{f}$. Any subdivision scheme consists of refinement rules that map $\mathbf{f}$ to a new sequence $S(\mathbf{f})$ associated with the values at the points $ih/2$, $i\in\mathbb{Z}$.

Let us denote by $S$ a refinement rule, defined by a finitely supported mask $\mathbf{a}=\{a_i\}_{i\in\mathbb{Z}}$, as

$$S(\mathbf{f})_j=\sum_{i\in\mathbb{Z}}a_{j-2i}f_i. \tag{1}$$

A (stationary) subdivision scheme with a refinement rule $S$ is the repeated application of (1), and is also denoted by $S$.
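As a concrete illustration, here is a minimal Euclidean sketch of one application of the refinement rule (1), using as an example the mask $[1,3,3,1]/4$ of the degree-2 B-spline (Chaikin) scheme; the function name `refine` is ours, not from any reference implementation.

```python
import numpy as np

def refine(f, mask):
    """One application of the linear refinement rule
    S(f)_j = sum_i a_{j-2i} f_i, for finite data f."""
    up = np.zeros(2 * len(f) - 1)
    up[::2] = f                    # place f_i at index 2i ("upsampling")
    return np.convolve(up, mask)   # convolution realizes sum_i a_{j-2i} f_i

# mask of the degree-2 B-spline (Chaikin) scheme, symbol (1+z)^3 / 4
mask = np.array([1, 3, 3, 1]) / 4
print(refine(np.array([0.0, 1.0, 0.0]), mask))
```

For finite data the boundary values of the output are only partially supported by the mask; we keep them for simplicity.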

A subdivision scheme is termed convergent if the sequence of piecewise linear interpolants to the data generated at the refinement levels converges uniformly (see e.g. [7]). By definition, the limit is a continuous function.

The Lane-Riesenfeld (L-R) algorithm is a classical algorithm which executes the refinement rules of a B-spline subdivision scheme [23]. This algorithm replaces each refinement step by an elementary refinement (doubling all the data points) followed by several stages of averaging. In each stage of averaging, the data points are replaced by the mid-points of all pairs of consecutive data points. As a result, the refinement is applied simultaneously to all data points. We term this refinement a global refinement, in contrast to the direct evaluation of (1), where each refined point is calculated independently of the other refined points. The refinement step of the L-R algorithm is presented in Algorithm 1.
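Algorithm 1 itself is not reproduced here; the following is a minimal Euclidean sketch of the step it describes (doubling followed by midpoint-averaging rounds). With two rounds it reproduces the degree-2 B-spline (Chaikin) refinement step.

```python
import numpy as np

def lr_step(f, rounds):
    """One Lane-Riesenfeld refinement step in the Euclidean setting:
    elementary refinement (double every point), then `rounds` stages
    in which the data is replaced by midpoints of adjacent pairs."""
    q = np.repeat(f, 2)               # elementary refinement: double the data
    for _ in range(rounds):
        q = 0.5 * (q[:-1] + q[1:])    # midpoint averaging, done globally
    return q

print(lr_step(np.array([0.0, 1.0, 0.0]), rounds=2))
```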

An important tool in the analysis of convergence and smoothness of subdivision schemes is the symbol, defined as the $z$-transform of the mask $\mathbf{a}$, that is $a(z)=\sum_{i\in\mathbb{Z}}a_iz^i$. For example, the symbol of the B-spline subdivision scheme of degree $n$ is $a_n(z)=2^{-n}(1+z)^{n+1}$. A necessary condition for convergence is $\sum_{i\in\mathbb{Z}}a_{2i}=\sum_{i\in\mathbb{Z}}a_{2i+1}=1$, that is $a(1)=2$ and $a(-1)=0$, implying that the subdivision scheme is invariant to a translation of the data [7, Proposition 2.1]. With the symbol, the refinement rule (1) can be written algebraically as

$$\sum_{j\in\mathbb{Z}}S(\mathbf{f})_jz^j=a(z)\sum_{j\in\mathbb{Z}}f_jz^{2j}, \tag{2}$$

where the equality is in the sense of equal coefficients corresponding to the same power of $z$. The L-R algorithm is an interpretation of (2) with the symbols of the B-spline subdivision schemes. For an explanation see Section 3.1 and in particular (10).

Over the years, several generalizations of the L-R algorithm have been proposed. In [2] each subdivision step consists of a refinement step of a fixed convergent subdivision scheme, followed by a fixed number of "smoothing rounds" based on another subdivision scheme (e.g., applying the insertion rule of an interpolatory scheme to each point). In [10, 31] non-linear averages of numbers replace the arithmetic (linear) averages. A generalization based on a geodesic average goes back to [27, 28], where a corner-cutting subdivision scheme based on geodesic averages is presented and analysed. In [9] the L-R algorithm is adapted to compact sets based on the metric average, which is a geodesic average in the metric space of compact sets with the Hausdorff metric.

In this paper we discuss the adaptation of subdivision schemes from numbers to manifold data. To distinguish between sequences of numbers (or vectors) and sequences on a manifold, we denote by $\mathbf{f}$ a sequence of Euclidean data and by $\mathbf{p}$ a sequence of manifold data.

### 2.2 On manifolds and geodesics

A geodesic (or a geodesic curve) is a fundamental notion in differential geometry. This notion is an extension of the shortest arc on a surface joining two arbitrary points $p_1$ and $p_2$ on the surface. On a plane, the geodesic is simply the line segment connecting $p_1$ and $p_2$, described by

$$(1-t)p_1+tp_2,\quad t\in[0,1]. \tag{3}$$

This line can also be characterized by its zero curvature and its endpoints. For a manifold, this property is generalized by requiring zero geodesic curvature (or constant velocity derived from the first fundamental form). In Riemannian manifolds, the geodesic curve is defined as a solution of the geodesic Euler-Lagrange equations. It turns out that any shortest path between two points is a geodesic curve.

In connected, geodesically complete Riemannian manifolds, the Hopf-Rinow theorem guarantees that geodesic curves connecting any two points are globally well defined and smooth, see e.g., [6]. Such manifolds are also known simply as complete Riemannian manifolds. For such manifolds, one can derive the uniqueness of the geodesic curve connecting any two points, in case one point is outside the cut locus of the other. Henceforth, we use the term geodesic curve for such shortest path curves.

The geodesic curve is of great importance in our adaptation procedures. A natural question is its availability in different manifolds. Indeed, in many cases, the geodesic curve is known explicitly. Here are several examples: on a sphere (e.g., [14]), on an ellipsoid (e.g., [16]), on the cone of positive definite matrices (e.g., [18]), in the Lie group of orthogonal matrices of the same determinant (e.g., [33, Chapter 3]), and in the Heisenberg groups (e.g., [25]). Alternatively, geodesics can be calculated numerically. This can be done by directly solving the Euler-Lagrange equations (e.g., [15]), by fast marching methods (e.g., [22]), by exploiting heat-kernel based methods (e.g., [4]), or by other hypersurface techniques (e.g., [26]), just to name a few.

An important property of the geodesic curve is the metric property. Let $\mathcal{M}$ be a complete Riemannian manifold with associated metric $d$. Then, for any $p_1,p_2\in\mathcal{M}$ the geodesic curve connecting $p_1$ and $p_2$, that is $M_t(p_1,p_2)$, with $M_0(p_1,p_2)=p_1$ and $M_1(p_1,p_2)=p_2$, satisfies

$$d\big(M_t(p_1,p_2),p_2\big)=(1-t)\,d(p_1,p_2),\quad t\in[0,1]. \tag{4}$$

Since $d$ is a metric, we also have the complement formula $d(p_1,M_t(p_1,p_2))=t\,d(p_1,p_2)$. In this paper, we consider data $\mathbf{p}$ such that the geodesic curve between any two adjacent data points in $\mathbf{p}$ is well-defined, and term such data "admissible". Then, the geodesic curve is used as a weighted mean, that is, the manifold analogue of the weighted arithmetic mean (3). In some cases, we may need $M_t$ to be defined for values of $t$ outside $[0,1]$, but close to it. Therefore, we must assume that the geodesic curve is well-defined for these "extrapolation" values. In these cases the metric property (4) is modified, replacing $(1-t)$ by $|1-t|$.
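As an example, on the unit sphere the geodesic average is given by spherical linear interpolation. The sketch below (assuming unit-norm, non-antipodal inputs) checks the metric property (4) numerically; the function names are ours.

```python
import numpy as np

def dist(p, q):
    """Geodesic (great-circle) distance on the unit sphere."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def M(t, p1, p2):
    """Geodesic average M_t(p1, p2) on the unit sphere (slerp),
    with M_0 = p1 and M_1 = p2; p1, p2 assumed not antipodal."""
    theta = dist(p1, p2)
    return (np.sin((1 - t) * theta) * p1 + np.sin(t * theta) * p2) / np.sin(theta)

p1, p2, t = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 0.25
# metric property (4): d(M_t(p1, p2), p2) = (1 - t) d(p1, p2)
print(dist(M(t, p1, p2), p2), (1 - t) * dist(p1, p2))
```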

There are some non-linear spaces, other than Riemannian manifolds, where the geodesic curve connecting any two points is unique. These are the geodesic metric spaces, see e.g., [1]. In such spaces, the differential structure is missing and a geodesic curve is defined as a path satisfying (4). Clearly, this definition agrees with the geodesic curve on Riemannian manifolds. Note that, in general, we do not need the uniqueness of the geodesic curve, but rather a canonical way to choose it, see e.g., [9].

### 2.3 Sufficient conditions for convergence of manifold-valued subdivision schemes

The convergence of manifold-valued subdivision schemes can be defined intrinsically. For that, we define for any data sequence $\mathbf{p}$ a piecewise geodesic interpolant, connecting any pair of consecutive points in $\mathbf{p}$ by their geodesic curve. The manifold-valued subdivision scheme $\tilde{S}$ is convergent if the sequence of piecewise geodesic interpolants to $\tilde{S}^k(\mathbf{p})$, $k=1,2,\ldots$, converges uniformly relative to the metric of the manifold (see [12]).

The analysis of adapted subdivision schemes in many papers is based on the method of proximity, introduced in [36]. This analysis uses conditions that indicate the proximity of the adapted refinement rule $\tilde{S}$ to its corresponding linear refinement rule $S$. The simplest proximity condition is

$$d\big(S(\mathbf{p}),\tilde{S}(\mathbf{p})\big)\le c\,(\delta(\mathbf{p}))^2,\qquad \delta(\mathbf{p})=\sup_{i\in\mathbb{Z}}d(p_i,p_{i+1}),\quad c\in\mathbb{R}_+, \tag{5}$$

where the distance between two sequences is the supremum of the distances between corresponding elements.

In [36] it is proved that if $S$ is a refinement rule of a convergent scheme that generates $C^1$ limits, then condition (5) implies (with additional mild assumptions on the refinement rule $\tilde{S}$) that for $\delta(\mathbf{p})$ small enough, the adapted subdivision scheme $\tilde{S}$, applied to the initial data $\mathbf{p}$, converges to a $C^1$ limit.

The weakness of the proximity method is that convergence is only guaranteed for “close enough” data points. This requirement is typically not easy to quantify and it depends on the manifold and its curvature.

For a linear subdivision scheme $S$, a contractivity factor $\mu$, namely

$$\delta(S(\mathbf{p}))\le\mu\,\delta(\mathbf{p}),\qquad \mu\in(0,1), \tag{6}$$

implies the convergence of the scheme from any initial data, see e.g. [7].

For non-linear subdivision schemes, and in particular for schemes adapted to manifold data, contractivity is not sufficient for convergence, and an additional condition is required, see [12].

###### Definition 2.1 (Displacement-safe).

Let $\tilde{S}$ be a subdivision scheme adapted to manifold data. We say that $\tilde{S}$ is "displacement-safe" if

$$d\big(\tilde{S}(\mathbf{p})_{2i},p_i\big)\le C\,\delta(\mathbf{p}),\quad i\in\mathbb{Z}, \tag{7}$$

for any sequence of manifold data $\mathbf{p}$, where $C$ is a constant independent of $\mathbf{p}$.

In [12], it is proved that

###### Theorem 2.2.

Let $\tilde{S}$ be a displacement-safe subdivision scheme for manifold data with a contractivity factor $\mu\in(0,1)$. Then, $\tilde{S}$ is convergent for any input manifold data.

###### Remark 2.3.

Two concluding remarks:

1. Note that interpolatory schemes satisfy (7) with $C=0$ by definition, and thus are displacement-safe.

2. In [36] it is proved that any adaptation of (1) based on repeated geodesic averages satisfies (5), under mild assumptions on the manifold, such as a globally bounded curvature. This observation implies that for an adapted scheme $\tilde{S}$ satisfying (5), condition (7) is also satisfied. Thus, for such schemes, it is enough to show that the scheme has a contractivity factor to obtain convergence for any initial data and to conclude that the limit is $C^1$.

## 3 Adaptation of generalized L-R algorithms

We present an adaptation method for generalized L-R algorithms, based on geodesic averages. This method was already introduced in [8, 36]. Nevertheless, the convergence result stated there is the one that follows from proximity conditions, which applies only for $\delta(\mathbf{p})$ small enough. First, we discuss our adaptation in detail, and then analyze the resulting schemes, characterizing classes of schemes for which convergence from any initial data is guaranteed.

### 3.1 The algorithm of global refinement

Consider a linear subdivision scheme $S$ of the form (1), with a symbol $a(z)$. The factorization of the symbol plays an important role in the analysis of convergence and smoothness of linear subdivision schemes [7], and is also significant in our adaptation.

We start with a class of convergent linear subdivision schemes having symbols which can be factorized into real linear factors. Recall that a necessary condition for convergence is that $a(1)=2$ and $a(-1)=0$ [7, Proposition 2.1]. Thus, we can write

$$a(z)=z^{-s}(1+z)\,\frac{1+\alpha_1z}{1+\alpha_1}\cdots\frac{1+\alpha_mz}{1+\alpha_m}, \tag{8}$$

where $-\alpha_j^{-1}$, $j=1,\ldots,m$, are the nonzero roots of the symbol (besides $z=-1$) and $s$ is an integer. Note that $z=1$ cannot be a root of a symbol since $a(1)=2$. Thus, $\alpha_j\ne-1$, $j=1,\ldots,m$, and (8) is well-defined. We further define $\alpha_1$ to be the minimizer of

$$\max\left\{\frac{1}{1+\alpha_j},\frac{\alpha_j}{1+\alpha_j}\right\}, \tag{9}$$

among $\alpha_1,\ldots,\alpha_m$. The reason for this choice will become clear later.

The relation between the factorization (8) and the global refinement is based on (2). For the symbol (8) we get from (2) that the linear scheme can be interpreted as

$$\sum_{j\in\mathbb{Z}}S(\mathbf{f})_jz^j=z^{-s}\,\frac{1+\alpha_mz}{1+\alpha_m}\left(\cdots\left(\frac{1+\alpha_1z}{1+\alpha_1}\left((1+z)\sum_{j\in\mathbb{Z}}f_jz^{2j}\right)\right)\cdots\right). \tag{10}$$

By this interpretation, the factor $(1+z)$ indicates an initial elementary refinement step in which the data is duplicated. Then, each of the factors $\frac{1+\alpha_jz}{1+\alpha_j}$, $j=1,\ldots,m$, implies a step of averaging, in which the current data is replaced by the weighted averages with weights $\big(\frac{1}{1+\alpha_j},\frac{\alpha_j}{1+\alpha_j}\big)$ on its pairs of adjacent points. A zero root of the symbol merely changes the value of $s$. This value determines the shift of indices required to be applied at the end of each refinement step. Note that for $\alpha_j=1$, $j=1,\ldots,m$, this interpretation becomes the L-R algorithm. Thus, we consider the global refinement step corresponding to (8) a generalized L-R algorithm.

The adaptation of the global refinement, based on geodesic averages, is summarized in Algorithm 2.

Note that for data sampled from a geodesic curve, all points generated by Algorithm 2 lie on this geodesic curve.
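In the Euclidean setting, the global refinement step corresponding to (8) can be sketched as follows (a simplified stand-in for Algorithm 2, with arithmetic averages in place of geodesic ones and the index shift omitted); with $\alpha_1=\alpha_2=1$ it reproduces the Chaikin refinement step.

```python
import numpy as np

def global_refine(f, alphas):
    """Global refinement step for the symbol
    a(z) = (1+z) * prod_j (1 + alpha_j z) / (1 + alpha_j),
    Euclidean sketch: doubling, then one averaging round per factor,
    replacing each adjacent pair by the weighted average with weights
    (1/(1+alpha_j), alpha_j/(1+alpha_j))."""
    q = np.repeat(f, 2)                       # factor (1+z): duplicate the data
    for a in alphas:
        q = (q[:-1] + a * q[1:]) / (1 + a)    # one global averaging round
    return q

print(global_refine(np.array([0.0, 1.0, 0.0]), [1.0, 1.0]))
```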

### 3.2 Analysis of schemes corresponding to factorizable symbols over the reals

For our first result, we restrict the discussion to the case where the symbol (8) has a full set of real negative roots, namely $\alpha_j>0$, $j=1,\ldots,m$.

###### Theorem 3.1.

Let $S$ be a linear subdivision scheme with the symbol (8), such that $\alpha_j>0$, $j=1,\ldots,m$. Then, the adapted scheme $\tilde{S}$ based on the global refinement step of Algorithm 2 has a contractivity factor $\mu=\max\left\{\frac{1}{1+\alpha_1},\frac{\alpha_1}{1+\alpha_1}\right\}$.

###### Proof.

Following Algorithm 2 we get that after the initial stage of Line 1 and Line 2 we have

$$d(q_{2i,0},q_{2i+1,0})=0,\qquad d(q_{2i-1,0},q_{2i,0})\le\delta(\mathbf{p}),\quad i\in\mathbb{Z}.$$

After the first iteration of the loop of Line 3 we have (see (10))

$$q_{2i,1}=q_{2i,0},\qquad q_{2i+1,1}=M_{\frac{\alpha_1}{1+\alpha_1}}(q_{2i+1,0},q_{2i+2,0}),\quad i\in\mathbb{Z}.$$

By the metric property (4),

$$d(q_{2i,1},q_{2i+1,1})\le\frac{\alpha_1}{1+\alpha_1}\,\delta(\mathbf{p}),\qquad d(q_{2i+1,1},q_{2i+2,1})\le\frac{1}{1+\alpha_1}\,\delta(\mathbf{p}),\quad i\in\mathbb{Z}.$$

Thus, $\delta(\mathbf{q}_1)\le\mu\,\delta(\mathbf{p})$ for the sequence $\mathbf{q}_1=\{q_{i,1}\}_{i\in\mathbb{Z}}$, with $\mu=\max\left\{\frac{1}{1+\alpha_1},\frac{\alpha_1}{1+\alpha_1}\right\}$. The next iterations, $j=2,\ldots,m$, retain the maximal bound of $\mu\,\delta(\mathbf{p})$, since for $i\in\mathbb{Z}$

$$d(q_{i,j},q_{i+1,j})\le d(q_{i,j},q_{i+1,j-1})+d(q_{i+1,j-1},q_{i+1,j})\le\frac{\alpha_j}{1+\alpha_j}\,\mu\,\delta(\mathbf{p})+\frac{1}{1+\alpha_j}\,\mu\,\delta(\mathbf{p})=\mu\,\delta(\mathbf{p}). \qquad\square$$

Note that the contractivity factor $\mu$ of Theorem 3.1 satisfies $\mu\in\left[\frac12,1\right)$, since $\frac{1}{1+\alpha_1}+\frac{\alpha_1}{1+\alpha_1}=1$ and $\alpha_1>0$, with $\mu=\frac12$ for $\alpha_1=1$.

The L-R algorithm satisfies the conditions of Theorem 3.1. Indeed, this theorem is a generalization of a similar result in [9, Lemma 4.1] for the L-R algorithm adapted to compact sets.

Next, we show that the adapted subdivision schemes corresponding to symbols having a full set of real negative roots are displacement-safe.

###### Theorem 3.2.

Let $S$ be as in Theorem 3.1. Denote by $\tilde{S}$ the adapted scheme based on the global refinement of Algorithm 2. Then, $\tilde{S}$ is displacement-safe.

###### Proof.

The proof shows by induction that $d(\tilde{S}(\mathbf{p})_{2i},p_i)\le C\,\delta(\mathbf{p})$, $i\in\mathbb{Z}$, for a constant $C$ independent of $\mathbf{p}$. Denote by $S_j$ the linear subdivision scheme with a symbol obtained from the symbol of $S$ by retaining the first $j$ factors $\frac{1+\alpha_iz}{1+\alpha_i}$, $i=1,\ldots,j$, so that the adapted scheme of $S_j$, denoted $\tilde{S}_j$, uses only $j$ iterations of the loop of Line 3 in Algorithm 2. Obviously $\tilde{S}_m=\tilde{S}$. We use induction on $j$. For $j=1$, after the initial steps of Lines 1 and 2, Algorithm 2 inserts new points on the geodesic curves connecting adjacent data points. Therefore, it is clear that $d(\tilde{S}_1(\mathbf{p})_{2i},p_i)\le\delta(\mathbf{p})$, namely we get the constant $K_1=1$ for the case $j=1$. The induction step assumes

$$d\big(\tilde{S}_j(\mathbf{p})_{2i},p_i\big)\le K_j\,\delta(\mathbf{p}),\quad i\in\mathbb{Z},$$

for a given $j$, with a constant $K_j$ which depends on $j$ and is independent of $\mathbf{p}$. Then, using the triangle inequality we get

$$d\big(\tilde{S}_{j+1}(\mathbf{p})_{2i},p_i\big)\le d\big(\tilde{S}_{j+1}(\mathbf{p})_{2i},\tilde{S}_j(\mathbf{p})_{2i}\big)+d\big(\tilde{S}_j(\mathbf{p})_{2i},p_i\big).$$

By the metric property (4) (see Line 5 in Algorithm 2),

$$d\big(\tilde{S}_{j+1}(\mathbf{p})_{2i},\tilde{S}_j(\mathbf{p})_{2i}\big)\le\delta\big(\tilde{S}_j(\mathbf{p})\big). \tag{11}$$

Since Theorem 3.1 implies that

$$\delta\big(\tilde{S}_j(\mathbf{p})\big)\le\mu\,\delta(\mathbf{p}),\qquad \mu=\max\left\{\frac{1}{1+\alpha_1},\frac{\alpha_1}{1+\alpha_1}\right\}, \tag{12}$$

we can choose $K_{j+1}=K_j+\mu$, and the proof follows. The shift, defined by $s$ in (8) and performed in Line 9 of Algorithm 2, does not affect the above bound, since it is the same for all $i\in\mathbb{Z}$. ∎

We conclude

###### Corollary 3.3.

Let $S$ be a linear subdivision scheme with the symbol (8), such that $\alpha_j>0$, $j=1,\ldots,m$. Then, the adapted scheme $\tilde{S}$ based on the global refinement of Algorithm 2 converges for all admissible input data on the manifold.

The second case analyzed here corresponds to symbols of the form (8) with several positive roots. Positive roots mean negative weights in the averages, namely extrapolating averages in Line 5 of Algorithm 2.

###### Theorem 3.4.

Let $S$ be a linear convergent subdivision scheme with a symbol of the form (8), such that $a(z)$ has at least one negative root in addition to the root $z=-1$. Define

$$\mu_1=\min_{\substack{\alpha_i>0\\ i\in\{1,\ldots,m\}}}\max\left\{\frac{1}{1+\alpha_i},\frac{\alpha_i}{1+\alpha_i}\right\},$$

and renumerate the factors in (8) such that $\mu_1$ is attained at $\alpha_1$. If

$$\mu=\mu_1\prod_{i=2}^{m}\xi(\alpha_i)<1, \tag{13}$$

where

$$\xi(\alpha)=\begin{cases}1, & 0<\alpha,\\[4pt] 1+\dfrac{2|\alpha|}{1+\alpha}, & -1<\alpha<0,\\[4pt] 1+\dfrac{2}{|1+\alpha|}, & \alpha<-1,\end{cases}$$

then the adapted scheme $\tilde{S}$ based on global refinement has a contractivity factor $\mu$, and it converges from any admissible initial data on the manifold.

###### Proof.

The proof basically modifies the proofs of Theorem 3.1 and Theorem 3.2. By assumption, the set $\{\alpha_i:\alpha_i>0\}$ is not empty, and therefore $\mu_1<1$. Similarly to the proof of Theorem 3.1, the application of an averaging step in Line 5 of Algorithm 2 corresponding to $\alpha_j>0$ does not expand the bound on the distances between consecutive points in the data. On the other hand, an averaging step corresponding to $\alpha_j<0$ expands the bound.

To obtain the expanding factor $\xi(\alpha_j)$, note that after the $j$-th step in Line 5 of Algorithm 2 we can bound the distance between consecutive points by

$$d(q_{i,j},q_{i+1,j})\le d(q_{i,j},q_{i,j-1})+d(q_{i,j-1},q_{i+1,j-1})+d(q_{i+1,j-1},q_{i+1,j}). \tag{14}$$

Defining $\mu_j=\mu_1\prod_{i=2}^{j}\xi(\alpha_i)$, $j=2,\ldots,m$, we obtain from (14)

$$d(q_{i,j},q_{i+1,j})\le\xi(\alpha_j)\,\mu_{j-1}\,\delta(\mathbf{p}). \tag{15}$$

This together with assumption (13) shows that $\mu$ is a contractivity factor of the adapted scheme.

To complete the convergence proof, we observe that since $\xi(\alpha)\ge1$, assumption (13) implies that $\mu_j<1$, $j=2,\ldots,m$. Modifying the proof of Theorem 3.2, we get in its notation that (11) is replaced by

$$d\big(\tilde{S}_{j+1}(\mathbf{p})_{2i},\tilde{S}_j(\mathbf{p})_{2i}\big)\le2\,\delta\big(\tilde{S}_j(\mathbf{p})\big).$$

Using the same inductive argument, and the bound (15), we get

$$d\big(\tilde{S}_{j+1}(\mathbf{p})_{2i},p_i\big)\le d\big(\tilde{S}_{j+1}(\mathbf{p})_{2i},\tilde{S}_j(\mathbf{p})_{2i}\big)+d\big(\tilde{S}_j(\mathbf{p})_{2i},p_i\big)\le2\,\delta\big(\tilde{S}_j(\mathbf{p})\big)+K_j\,\delta(\mathbf{p})\le(2\mu_j+K_j)\,\delta(\mathbf{p}).$$

Thus, in this case $K_{j+1}=2\mu_j+K_j$. By (13), $\mu_j<1$ for $j=2,\ldots,m$, and since this implies $K_{j+1}<K_j+2$, we finally arrive at a constant $K_m$ which is independent of $\mathbf{p}$.

We conclude that the adapted scheme obtained from by global refinement is displacement-safe and has a contractivity factor given in (13). Therefore, it converges by Theorem 2.2. ∎
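The contractivity condition (13) is easy to check numerically. Below is a sketch (our own helper names); the choice of $\alpha_1$ follows the renumeration in Theorem 3.4, and `xi` is the expansion factor defined there.

```python
def xi(a):
    """Expansion factor xi(alpha) of Theorem 3.4."""
    if a > 0:
        return 1.0
    if -1 < a < 0:
        return 1 + 2 * abs(a) / (1 + a)
    return 1 + 2 * abs(1 / (1 + a))           # case a < -1

def mu(alphas):
    """Contractivity bound (13): the best factor mu_1 among the
    alphas > 0 (negative roots), times xi of all remaining factors."""
    factors = {a: max(1 / (1 + a), a / (1 + a)) for a in alphas if a > 0}
    a1 = min(factors, key=factors.get)        # renumerated alpha_1
    m = factors[a1]
    rest = list(alphas)
    rest.remove(a1)                           # remove one copy of alpha_1
    for a in rest:
        m *= xi(a)
    return m

# two negative roots (alpha = 1) and one positive root (alpha = -0.2):
print(mu([1.0, 1.0, -0.2]))                   # 0.5 * 1 * 1.5 = 0.75 < 1
```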

###### Remark 3.5.

Two remarks for Section 3.2:

1. As proved in Theorems 3.1 and 3.2, the adaptation of Algorithm 2 leads to convergent subdivision schemes when applied to linear subdivision schemes with positive mask coefficients, such that their symbols have a full set of negative roots. Theorem 3.4 extends the convergence to schemes with symbols having a few positive roots in addition to at least two negative ones, which may correspond to masks with some negative coefficients.

2. Negative coefficients necessarily appear in the masks of smooth interpolatory schemes. However, the adaptation based on global refinement is inappropriate for interpolatory subdivision schemes, since the adapted schemes are not interpolatory any more. Note that for numbers the local refinement and the global refinement coincide, due to the commutativity of the multiplication of the symbol factors.

In the next section we show that the global refinement can be interpreted as local refinements, based on a “pyramid averaging”.

### 3.3 Interpretation of the global refinement as local refinement

Most known adaptation methods of convergent linear subdivision schemes to manifold data are based on first rewriting the average (1) in terms of repeated binary averages, and then replacing the linear averages by some manifold averages, see e.g. [34, 36, 37]. We term the so obtained refinement rules “local refinement”.

Next we show that global refinement can be interpreted as local refinement based on geodesic averages. This observation, together with part 2 of Remark 2.3, leads to the conclusion that the convergence of schemes adapted by global refinement guarantees $C^1$ limits.

We now describe how the global refinement can be interpreted as local refinement. For an even index $i$, $\tilde{S}(\mathbf{p})_i$ in Algorithm 2 can be calculated by a series of repeated averages operating on finitely many points of $\mathbf{p}$. First we replace $\mathbf{p}$ by its doubled sequence, as in Line 1 of Algorithm 2. We take from this sequence the $m+1$ consecutive points involved in the computation of $\tilde{S}(\mathbf{p})_i$, to form the initial level for a "pyramid averaging" of $m$ levels. In the $j$-th level of the pyramid averaging any pair of adjacent points is replaced by its geodesic average with weight $\frac{\alpha_j}{1+\alpha_j}$, $j=1,\ldots,m$. Thus at the $j$-th level there are $m+1-j$ points, and $\tilde{S}(\mathbf{p})_i$ is the only value obtained at level $m$ of the pyramid averaging.

For an odd index $i$, $\tilde{S}(\mathbf{p})_i$ in Algorithm 2 can be calculated similarly, starting the same pyramid averaging from a different sequence: the doubled sequence shifted by one index, from which again $m+1$ consecutive points are taken. For illustrations and an explanation of the pyramid averaging notion see [30].

The global refinement calculates each geodesic average of adjacent points in the data only once, while the same average appears in the calculation of several points by local refinement. Thus, the global refinement is more efficient in terms of computational operations as compared to its local refinement interpretation. Note that it is possible to define a scheme adapted by local refinement which uses the same number of geodesic averages as the global refinement [12].

## 4 Adaptation based on global refinement – the general case

We extend the global refinement algorithm to convergent linear schemes with general symbols. Then, instead of (8), such symbols, which are real polynomials, can be factorized into real linear factors (in addition to $(1+z)$ and $z^{-s}$) and quadratic real factors, irreducible over the reals. Any pair of conjugate complex roots of the symbol corresponds to such a quadratic factor of the form

$$\frac{1+\alpha z}{1+\alpha}\cdot\frac{1+\bar{\alpha}z}{1+\bar{\alpha}}=\frac{1+2\operatorname{Re}(\alpha)z+|\alpha|^2z^2}{1+2\operatorname{Re}(\alpha)+|\alpha|^2}, \tag{16}$$

where $\alpha\in\mathbb{C}\setminus\mathbb{R}$ and $\operatorname{Re}(\alpha)$ is the real part of $\alpha$. The average associated with such a factor has, in the sense of the global refinement algorithm, the following weights:

$$w_1=\frac{1}{1+2\operatorname{Re}(\alpha)+|\alpha|^2},\qquad w_2=\frac{2\operatorname{Re}(\alpha)}{1+2\operatorname{Re}(\alpha)+|\alpha|^2},\qquad w_3=\frac{|\alpha|^2}{1+2\operatorname{Re}(\alpha)+|\alpha|^2}. \tag{17}$$

Note that $w_1+w_2+w_3=1$. Instead of (8) we have in this case the factorization

$$a(z)=z^{-s}(1+z)\left(\prod_{i=1}^{m_1}\frac{1+\alpha_iz}{1+\alpha_i}\right)\left(\prod_{i=m_1+1}^{m_1+m_2}\frac{1+2\operatorname{Re}(\alpha_i)z+|\alpha_i|^2z^2}{1+2\operatorname{Re}(\alpha_i)+|\alpha_i|^2}\right). \tag{18}$$
###### Lemma 4.1.

For any complex $\alpha\notin\mathbb{R}$,

$$1+2\operatorname{Re}(\alpha)+|\alpha|^2>0. \tag{19}$$
###### Proof.

When $\operatorname{Re}(\alpha)\ge0$, (19) holds clearly, while if $\operatorname{Re}(\alpha)<0$ and $\alpha$ is not real, then $|\operatorname{Re}(\alpha)|<|\alpha|$, and

$$1+2\operatorname{Re}(\alpha)+|\alpha|^2>1-2|\alpha|+|\alpha|^2=(1-|\alpha|)^2\ge0. \qquad\square$$

From Lemma 4.1 and (17) we conclude that $w_1$ and $w_3$ are always positive.
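A quick numeric check of the weights (17) and of Lemma 4.1, for a sample non-real $\alpha$ (the value of $\alpha$ is ours, chosen for illustration only):

```python
alpha = 0.5 + 1.0j                             # sample non-real alpha
D = 1 + 2 * alpha.real + abs(alpha) ** 2       # common denominator in (16)-(17)
w1, w2, w3 = 1 / D, 2 * alpha.real / D, abs(alpha) ** 2 / D

print(D > 0)                                   # Lemma 4.1: D > 0
print(abs(w1 + w2 + w3 - 1) < 1e-12)           # the weights sum to 1
print(w1 > 0 and w3 > 0)                       # w1, w3 are always positive
```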

### 4.1 The general algorithm of global refinement

For an irreducible quadratic factor in (18) one is required to average three points on the manifold at once. Motivated by the pyramid averaging of Section 3.3, we define such an average and term it a three pyramid.

###### Definition 4.2.

For three points $p_1,p_2,p_3$ with corresponding weights $w_1,w_2,w_3$, the "three pyramid" is

$$P\big((p_1,p_2,p_3),(w_1,w_2,w_3)\big)=M_r\big(M_{t_2}(p_3,p_2),M_{t_1}(p_2,p_1)\big),$$

where the following constraints must hold:

1. $w_1=r\,t_1$.

2. $w_2=r(1-t_1)+(1-r)\,t_2$.

3. $w_3=(1-r)(1-t_2)$.

###### Remark 4.3.

Two remarks on Definition 4.2:

1. For numbers, the three pyramid coincides with the weighted arithmetic average $w_1p_1+w_2p_2+w_3p_3$.

2. The three constraints of Definition 4.2 are not independent. Since we always assume that $w_1+w_2+w_3=1$, the sum of the three constraints always holds.
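For real numbers, the three pyramid with the parameters of Lemma 4.4 below indeed reproduces $w_1p_1+w_2p_2+w_3p_3$; a minimal check, using the Euclidean average $M_t(p_1,p_2)=(1-t)p_1+tp_2$ as a stand-in for the geodesic one (sample values are ours):

```python
def M(t, p1, p2):
    """Euclidean stand-in for the geodesic average M_t(p1, p2)."""
    return (1 - t) * p1 + t * p2

alpha = 0.5 + 1.0j                # sample non-real alpha
D = 1 + 2 * alpha.real + abs(alpha) ** 2
w1, w2, w3 = 1 / D, 2 * alpha.real / D, abs(alpha) ** 2 / D

r = 1 / (1 + abs(alpha))          # choice of Lemma 4.4
t1 = w1 / r                       # parameters (20), satisfying the
t2 = 1 - w3 / (1 - r)             # constraints of Definition 4.2

p1, p2, p3 = 0.0, 1.0, 4.0
pyramid = M(r, M(t2, p3, p2), M(t1, p2, p1))
print(pyramid, w1 * p1 + w2 * p2 + w3 * p3)   # the two values coincide
```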

The global refinement of Algorithm 2 uses a uniform averaging weight in each level. The following lemma shows that this is not possible for symbols with complex roots.

###### Lemma 4.4.

There is no three pyramid of Definition 4.2 for the weights (17) with $t_1=t_2$. However, such a three pyramid exists with $r=\frac{1}{1+|\alpha|}$.

###### Proof.

For the first claim of the lemma, we rewrite the constraints of Definition 4.2 with $t_1=t_2=t$. The case $r\in\{0,1\}$ is impossible, since by (17) and Lemma 4.1 $w_1,w_3>0$. Therefore, substitution of $r=w_1/t$ (from the first constraint) into the third constraint yields $t^2-(2w_1+w_2)t+w_1=0$, which has no real solution for the weights of (17).

To prove the second claim, one can choose $r=\frac{1}{1+|\alpha|}$ for the weights in (17). This yields a three pyramid with

$$t_1=\frac{w_1}{r}=\frac{1+|\alpha|}{1+2\operatorname{Re}(\alpha)+|\alpha|^2},\qquad t_2=1-\frac{w_3}{1-r}=\frac{1+2\operatorname{Re}(\alpha)-|\alpha|}{1+2\operatorname{Re}(\alpha)+|\alpha|^2}. \tag{20}$$

Note that for a non-real $\alpha$, $\operatorname{Re}(\alpha)<|\alpha|$, and thus in view of Lemma 4.1

$$t_1-t_2=\frac{2\big(|\alpha|-\operatorname{Re}(\alpha)\big)}{1+2\operatorname{Re}(\alpha)+|\alpha|^2}>0. \qquad\square \tag{21}$$

The proof of Lemma 4.4 suggests a choice for the parameters of the three pyramid, for calculating the average of three points at once. This choice, as is shown in Section 4.2, is designed to minimize the bound on the distance between the averages of two adjacent triplets of points.

The adaptation of the global refinement algorithm corresponding to the symbol (18), based on geodesic averages and three pyramid averages, is summarized in Algorithm 3, which replaces Algorithm 2 for symbols having complex roots.

### 4.2 Optimal choice of parameters in the three pyramid

To optimally bound the distance

$$d\Big(P\big((p_1,p_2,p_3),(w_1,w_2,w_3)\big),P\big((p_2,p_3,p_4),(w_1,w_2,w_3)\big)\Big), \tag{22}$$

we start by setting $r=\frac{1}{1+|\alpha|}$. The reasons for this choice are presented in detail in Appendix A.1. For the other parameters, we first prove the following lemma.

###### Lemma 4.5.

Consider a three pyramid of Definition 4.2 for the weights (17), with $r\in(0,1)$. Then, $t_1>t_2$.

###### Proof.

By the constraints of Definition 4.2, $t_1-t_2=\frac{w_1}{r}+\frac{w_3}{1-r}-1=:g(r)$. We show that $g(r)>0$ for all $r\in(0,1)$. Indeed, $g'(r)=-\frac{w_1}{r^2}+\frac{w_3}{(1-r)^2}$, which implies a single minimum point of $g$ at $r=\frac{1}{1+|\alpha|}$, where $\frac{1-r}{r}=\sqrt{w_3/w_1}=|\alpha|$. By (21) we have that $g\big(\frac{1}{1+|\alpha|}\big)>0$, and since this is a minimum point, $g(r)>0$ for all $r\in(0,1)$. ∎
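The function $g$ from the proof can also be examined numerically: the minimum of $g(r)=w_1/r+w_3/(1-r)-1$ over $(0,1)$ is attained at $r=1/(1+|\alpha|)$ and is positive (sample $\alpha$ is ours):

```python
import numpy as np

alpha = 0.5 + 1.0j                            # sample non-real alpha
D = 1 + 2 * alpha.real + abs(alpha) ** 2
w1, w3 = 1 / D, abs(alpha) ** 2 / D

def g(r):
    """g(r) = t1 - t2 as a function of r (proof of Lemma 4.5)."""
    return w1 / r + w3 / (1 - r) - 1

r_star = 1 / (1 + abs(alpha))                 # claimed minimum point
rs = np.linspace(0.01, 0.99, 9801)            # grid on (0, 1)
print(rs[np.argmin(g(rs))], r_star)           # grid minimizer vs r_star
print(g(r_star) > 0)                          # g is positive at the minimum
```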

###### Theorem 4.6.

Consider the three pyramid of Definition 4.2 with the weights (17) and $r\in(0,1)$. Then, for $p_1,p_2,p_3,p_4$ with $d(p_i,p_{i+1})\le\delta(\mathbf{p})$, $i=1,2,3$,

$$d\Big(P\big((p_1,p_2,p_3),(w_1,w_2,w_3)\big),P\big((p_2,p_3,p_4),(w_1,w_2,w_3)\big)\Big)\le\big(2(t_1-t_2)+1\big)\,\delta(\mathbf{p}). \tag{23}$$
###### Proof.

Figure 1 accompanies the proof; its labelled points correspond to the points $p_1,\ldots,p_4$ and to the averages appearing in the derivation below.

We first apply the metric property (4) and the triangle inequality to get (see the schematic illustration in Figure 1(a))

$$d\big(M_{t_2}(p_3,p_2),M_{t_1}(p_2,p_1)\big)\le d\big(M_{t_2}(p_3,p_2),p_2\big)+d\big(p_2,M_{t_1}(p_2,p_1)\big)=(1-t_2)\,d(p_2,p_3)+t_1\,d(p_1,p_2). \tag{24}$$

Note that $P\big((p_1,p_2,p_3),(w_1,w_2,w_3)\big)=M_r\big(M_{t_2}(p_3,p_2),M_{t_1}(p_2,p_1)\big)$ and that $P\big((p_2,p_3,p_4),(w_1,w_2,w_3)\big)=M_r\big(M_{t_2}(p_4,p_3),M_{t_1}(p_3,p_2)\big)$. Similarly we get

$$d\big(M_{t_1}(p_3,p_2),p_2\big)=(1-t_1)\,d(p_2,p_3),$$

and since $t_1>t_2$ by Lemma 4.5, we conclude that $M_{t_1}(p_3,p_2)$ is closer to $p_2$ than $M_{t_2}(p_3,p_2)$ (see Figure 1(b)). Observing that these two averages lie on the geodesic curve connecting $p_2$ and $p_3$, we conclude that

$$d\big(M_{t_1}(p_3,p_2),M_{t_2}(p_3,p_2)\big)=\big((1-t_2)-(1-t_1)\big)\,d(p_2,p_3)=(t_1-t_2)\,d(p_2,p_3). \tag{25}$$

To prove (23) we sum the following three bounds on the lengths of the three parts of the path connecting $P\big((p_1,p_2,p_3),(w_1,w_2,w_3)\big)$ to $P\big((p_2,p_3,p_4),(w_1,w_2,w_3)\big)$ via $M_{t_2}(p_3,p_2)$ and $M_{t_1}(p_3,p_2)$ in Figure 1(c),

$$\begin{aligned} d\Big(P\big((p_1,p_2,p_3),(w_1,w_2,w_3)\big),M_{t_2}(p_3,p_2)\Big)&\le r\big(t_1+(1-t_2)\big)\,\delta(\mathbf{p}),\\ d\big(M_{t_2}(p_3,p_2),M_{t_1}(p_3,p_2)\big)&\le(t_1-t_2)\,\delta(\mathbf{p}),\\ d\Big(M_{t_1}(p_3,p_2),P\big((p_2,p_3,p_4),(w_1,w_2,w_3)\big)\Big)&\le(1-r)\big(t_1+(1-t_2)\big)\,\delta(\mathbf{p}). \end{aligned}$$

The first and third bounds are obtained from Definition 4.2 by (4) and (24); the second bound is (25). ∎

###### Remark 4.7.

Two important conclusions, related to the parameters of the three pyramid:

1. Theorem 4.6 implies that in order to reduce the expansion factor $2(t_1-t_2)+1$ in (23) corresponding to a three pyramid, the function $g(r)=t_1-t_2$ from the proof of Lemma 4.5 has to be minimized. Thus, the parameters $t_1$ and $t_2$ of (20) and $r=\frac{1}{1+|\alpha|}$ are preferred.

2. For the parameters in the first part of the remark, we deduce from Lemma 4.5 that the factor $2(t_1-t_2)+1$ in (23) is bigger than one. This means that the bound on the distances between adjacent points is not preserved after applying the three pyramid.

Note that in the linear case, any averaging step corresponding to a complex root does not expand the distance between consecutive points as long as the weights (17) are positive, that is, as long as the real part of $\alpha$ is positive.

### 4.3 Analysis of convergence

First, we consider the case of symbols of the form (18) having several complex roots, and then discuss in detail the case of a single complex root.

In the case of positive roots, which is analysed in Theorem 3.4, we show an initial contractivity factor $\mu_1$, induced by the factor associated with a negative root, followed by a series of expanding factors $\xi(\alpha_i)$ associated with the positive roots. Equipped with Theorem 4.6, the analysis of the convergence of the schemes adapted by Algorithm 3 is essentially the same.

###### Corollary 4.8.

Let $S$ be a linear subdivision scheme with a symbol of the form (18), with $m_1\ge1$, $m_2\ge1$, and at least one negative root, namely $\alpha_i>0$ for some $i\in\{1,\ldots,m_1\}$. Define

$$\mu_1=\min_{\substack{\alpha_i>0\\ i\in\{1,\ldots,m_1\}}}\max\left\{\frac{1}{1+\alpha_i},\frac{\alpha_i}{1+\alpha_i}\right\},$$

and renumerate the linear factors in (18) such that $\mu_1$ is attained at $\alpha_1$. If

 μ