# Renormalization of the Hutchinson Operator

Yann Demichel
Laboratory MODAL’X - EA 3454
Université Paris Nanterre
200 Avenue de la République
92000 Nanterre, France
###### Abstract.

One of the easiest and most common ways of generating fractal sets in $\mathbb{R}^D$ is as attractors of affine iterated function systems (IFS). The classic theory of IFS’s requires that they are made up of contractive functions. In this paper, we relax this hypothesis by considering a new operator $H_\rho$ obtained by renormalizing the usual Hutchinson operator $H$. Namely, the $H_\rho$-orbit of a given compact set $K_0$ is built from the original sequence $(H^n(K_0))_n$ by rescaling each set by its distance from $0$. We state several results for the convergence of these orbits and give a geometrical description of the corresponding limit sets. In particular, this provides a way to construct some eigensets for $H$. Our strategy for tackling the problem is to link these new sequences to some classic ones, but it will depend on whether the IFS is strictly linear or not. We illustrate the different results with various detailed examples. Finally, we discuss some possible generalizations.

###### Key words and phrases:
Hutchinson operator, Iterated function system, Attractor, Fractal sets
###### 2010 Mathematics Subject Classification:
Primary 28A80, 37C70, 47H10; secondary 37C25, 37E05, 15A99

## 1. Introduction and notation

The theory and the use of fractal objects, introduced and developed by Mandelbrot (see e.g. [19]), still play an important role today in scientific areas as varied as physics, medicine or finance (see e.g. [12] and references therein). Exhibiting theoretical models or solving practical problems requires producing various fractal sets. There is a long history of generating fractal sets using Iterated Function Systems. After the fundamental theoretical works by Hutchinson (see [17]), this method was popularized and developed by Barnsley in the 80s (see [2, 1]). Since then, numerous developments and extensions have been made (see e.g. [4]), further enlarging the already vast literature on these topics. Indeed, the simplicity and the efficiency of this approach have contributed to its success in many domains, notably in image theory (see e.g. [13]) and shape design (see e.g. [15]).

### 1.1. Background

Let us recall the mathematical context and give the main notation used throughout the paper. Let $(M,d)$ be a metric space. For any map $f:M\to M$, we define the $f$-orbit of a point $x_0\in M$ as the sequence $(x_n)_{n\geqslant 0}$ given by

$$x_n=(\underbrace{f\circ\dots\circ f}_{n\ \text{times}})(x_0)=f^n(x_0),$$

where $f^n$ is the $n$th iterate of $f$, with the convention that $f^0$ is the identity function $\mathrm{Id}$. In particular, one has $x_{n+1}=f(x_n)$; hence, if $f$ is continuous and if $(x_n)_n$ converges to $z$, then $z$ is an invariant point for $f$, i.e. $f(z)=z$.

We denote by $\mathcal K_M$ the set of all non-empty compact subsets of $M$. We obtain a metric space by endowing it with the Hausdorff metric $d_H$ defined by

$$\forall K,K'\in\mathcal K_M,\quad d_H(K,K')=\inf\{\varepsilon>0 \mid K\subset K'(\varepsilon)\ \text{and}\ K'\subset K(\varepsilon)\}$$

where $K(\varepsilon)$ denotes the set of points at distance less than $\varepsilon$ from $K$.

For every map $f:M\to M$ and every $K\in\mathcal K_M$ we define the set $f(K)=\{f(x) : x\in K\}$, and we will assume in the sequel that $f(K)\in\mathcal K_M$.

Let $f_1,\dots,f_p$ be $p\geqslant 1$ maps from $M$ to $M$. Then we can define a new map $H:\mathcal K_M\to\mathcal K_M$ by setting

$$\forall K\in\mathcal K_M,\quad H(K)=\bigcup_{i=1}^{p} f_i(K). \tag{1}$$

We say that $H$ is the Hutchinson operator associated with the Iterated Function System (IFS for short) $\{f_1,\dots,f_p\}$ (see e.g. [17, 1, 12]).

Basic questions about an IFS are the following: Does the orbit $(H^n(K_0))_n$ converge for any compact set $K_0$? Does its limit depend on $K_0$? What are the geometrical properties of the limit sets?

The classic theory of IFS’s is based on the Contractive Mapping Principle (see e.g. [17, 1, 12]). Let us recall that a map $f$ is contractive if

$$\lambda_f=\sup\left\{\frac{d(f(x),f(y))}{d(x,y)} : x,y\in M\ \text{with}\ x\neq y\right\}<1.$$

Let us assume that $(M,d)$ is a complete metric space. Then, any contractive map $f$ is continuous, has a unique invariant point $z$, and the $f$-orbit of any $x_0\in M$ converges to $z$ with the basic estimate

$$\forall n\geqslant 0,\quad d(f^n(x_0),z)\leqslant\lambda_f^{\,n}\,d(x_0,z).$$

If $f_1,\dots,f_p$ are contractive then the associated Hutchinson operator $H$ is also contractive, with ratio

$$\lambda_H=\max_{1\leqslant i\leqslant p}\{\lambda_{f_i}\}.$$

Since $\mathcal K_M$ inherits the completeness of $M$, the map $H$ then has a unique invariant point $K^*$, called the attractor of the IFS, and for all $K_0\in\mathcal K_M$ the sequence $(H^n(K_0))_n$ converges to $K^*$. One of the points of interest is that such sets are generally fractal sets.
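This convergence is easy to watch numerically. The following sketch (our own illustration, not taken from the paper) iterates $H$ on a finite subset of $\mathbb R$ for the IFS $\{x\mapsto x/2,\ x\mapsto x/2+1/2\}$, whose attractor is $[0,1]$, and records the Hausdorff distance between consecutive iterates:

```python
def hutchinson(points, ifs):
    """One step K -> H(K) = f_1(K) ∪ ... ∪ f_p(K), for a finite set K."""
    return {f(x) for f in ifs for x in points}

def hausdorff(K, Kp):
    """Hausdorff distance between two finite subsets of R."""
    return max(
        max(min(abs(x - y) for y in Kp) for x in K),
        max(min(abs(x - y) for y in K) for x in Kp),
    )

# Two contractive similitudes with lambda_H = 1/2; the attractor is [0, 1].
ifs = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]

K = {0.3}                     # an arbitrary starting compact set
gaps = []                     # d_H(H^n(K0), H^{n+1}(K0))
for _ in range(10):
    K_next = hutchinson(K, ifs)
    gaps.append(hausdorff(K, K_next))
    K = K_next
```

Since $d_H(H^{n+1}(K_0),H^n(K_0))\leqslant\lambda_H\,d_H(H^n(K_0),H^{n-1}(K_0))$, the recorded gaps decay geometrically with ratio $1/2$.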

In the sequel, the space $M$ will essentially be $\mathbb R^D$, $D\geqslant 1$, endowed with the metric induced by the Euclidean norm $\|\cdot\|$. Writing simply $\mathcal K$ for $\mathcal K_{\mathbb R^D}$, a subset of $\mathbb R^D$ belongs to $\mathcal K$ if and only if it is closed and bounded. In particular, the closed ball with center $x$ and radius $r$ will be denoted by $B(x,r)$.

In this paper, we are interested in affine IFS’s, i.e. when each $f_i$ is defined by $f_i(x)=A_ix+B_i$ with $A_i$ a $D\times D$ matrix and $B_i$ a vector of $\mathbb R^D$. Such a map satisfies $\lambda_{f_i}=\|A_i\|$, where $\|A_i\|$ is the operator norm of $A_i$ given by

$$\|A_i\|=\sup\{\|A_ix\| : x\in\mathbb R^D\ \text{with}\ \|x\|=1\}=\inf\{r>0 \mid \forall x\in\mathbb R^D,\ \|A_ix\|\leqslant r\|x\|\}.$$

In particular, classic IFS’s consist of transformations involving rotations, symmetries, scalings and translations. In this case, if $H$ is contractive, the corresponding attractor $K^*$ is called a self-affine set. One obtains a nice subclass of such IFS’s when the $f_i$’s are homotheties, i.e. when $f_i(x)=\alpha_ix+B_i$ with $0<\alpha_i<1$. Indeed, contrary to a general affine map, $f_i$ then contracts distances by the same ratio $\alpha_i$ in all directions. This enables a precise description of $K^*$. For example, if the sets $f_i(K^*)$ are mutually disjoint then $K^*$ is a Cantor set whose fractal dimension is the solution $s$ of the very simple equation $\sum_{i=1}^p\alpha_i^s=1$ (see [12, 21]). Cantor sets are fundamental and come up naturally when one studies IFS’s. A simple family of Cantor sets in $\mathbb R$ is $(C_\alpha)_{0<\alpha<1/2}$, where $C_\alpha$ is the attractor of the IFS $\{f_1,f_2\}$ with $f_1(x)=\alpha x$ and $f_2(x)=\alpha x+1-\alpha$. For example, $C_{1/3}$ is the usual triadic Cantor set (see [17, 10, 12]). When $\alpha=1/2$, the attractor of the previous IFS becomes the whole interval $[0,1]$. These basic examples will be used extensively in the sequel.
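As a quick sketch of our own, the triadic Cantor set $C_{1/3}$ can be approximated by iterating the IFS $\{x\mapsto x/3,\ x\mapsto x/3+2/3\}$ on the endpoint set $\{0,1\}$; every iterate consists of endpoints of the usual middle-thirds construction, so no point ever falls in the open middle gap $(1/3,2/3)$:

```python
def cantor_step(points):
    """One Hutchinson step for f1(x) = x/3 and f2(x) = x/3 + 2/3."""
    return sorted({x / 3 for x in points} | {x / 3 + 2 / 3 for x in points})

K = [0.0, 1.0]
for _ in range(6):
    K = cantor_step(K)
# K now lists the 128 endpoints of the 64 level-6 intervals.
```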

### 1.2. Motivation

Let us point out two specific situations:

1. When some of the $f_i$’s are not contractive, the previous results become false: typical orbits fail to converge. Basically, the orbits of some points $x_0$ may then satisfy $\|f_i^n(x_0)\|\to\infty$ for some $i$, preventing the sequence $(H^n(K_0))_n$ from being bounded.

2. When all the $f_i$’s are contractive linear maps, the attractor of $H$ is always $\{0\}$, so it does not depend on the fine structure of the $f_i$’s but only on their norms.

However, in these two degenerate situations we can observe an intriguing geometric structure in the sets $H^n(K_0)$. For example, let us consider the IFS $\{f_1,f_2\}$ where the $f_i$’s are the linear maps given by their canonical matrices

$$A_1=\begin{bmatrix} a & a\\ a & 0\end{bmatrix}\quad\text{and}\quad A_2=\begin{bmatrix} a & -a\\ -a & 0\end{bmatrix}$$

with $a>0$. We focus on the $H$-orbit of the unit ball $B=B(0,1)$. For all $a$ large enough we have $\lambda_H>1$ and the sequence $(H^n(B))_n$ is not bounded: the diameter of $H^n(B)$ grows to infinity. On the contrary, for all $a$ small enough we have $\lambda_H<1$. Thus $H$ is now contractive and $(H^n(B))_n$ converges to $\{0\}$: $H^n(B)$ vanishes to a point. Nevertheless, whatever $a$ is, one can observe that the sets $H^n(B)$, suitably rescaled, tend to the same limit shape, looking like a ‘sea urchin’-shaped set (see Figure 1). So one can wonder if there exists a critical value of $a$ for which $(H^n(B))_n$ does not degenerate, making it possible to observe this asymptotic set.
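The experiment behind Figure 1 can be reproduced with a short script (ours, for illustration): iterate the two maps on a sample of the unit circle and rescale each iterate by its radius $\max_K\|x\|$. The raw orbit blows up for large $a$ and collapses for small $a$, while the rescaled orbit always keeps radius $1$:

```python
import math

def apply_maps(points, a):
    """One Hutchinson step for A1 = [[a, a], [a, 0]] and A2 = [[a, -a], [-a, 0]]."""
    out = []
    for x, y in points:
        out.append((a * x + a * y, a * x))     # A1 (x, y)
        out.append((a * x - a * y, -a * x))    # A2 (x, y)
    return out

def radius(points):
    return max(math.hypot(x, y) for x, y in points)

circle = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
          for k in range(16)]

raw_radius, rescaled = {}, {}
for a in (0.8, 0.3):
    raw, norm = circle, circle
    for _ in range(8):
        raw = apply_maps(raw, a)               # unrescaled orbit
        img = apply_maps(norm, a)              # rescaled orbit
        r = radius(img)
        norm = [(x / r, y / r) for x, y in img]
    raw_radius[a] = radius(raw)
    rescaled[a] = norm
```

Plotting `rescaled[0.8]` and `rescaled[0.3]` shows the same ‘sea urchin’ silhouette in both regimes.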

In this paper, we aim to modify the original Hutchinson operator so as to annihilate these two degenerate behaviors. We wish to obtain a limit set even if the IFS is not contractive, and a non-zero limit set for contractive linear IFS’s. Moreover, we would like this new operator to exhibit the typical ‘limit shape’ observed above.

### 1.3. Renormalization with the radius function

Our strategy is to rescale each set by dividing it by its size. The idea of rescaling a sequence of sets to obtain convergence to a non-degenerate compact limit is not new, and is particularly used in stochastic modeling (see e.g. [22, 8] for famous examples of random growth models and, more recently, [20, 18] in the context of random graphs and planar maps). Probabilists usually consider the a posteriori rescaled sets $\frac{1}{a_n}K_n$, where $a_n$ estimates the size of $K_n$, often its diameter.

Here we proceed differently. First, in order to keep dealing with the orbit of an operator, we will perform an a priori renormalization. Secondly, we will measure the size of a compact set by its distance from $0$. Precisely, we consider the radius function $\rho$ defined on $\mathcal K$ by

$$\forall K\in\mathcal K,\quad \rho(K)=\sup\{\|x\| : x\in K\} \tag{2}$$

and we denote by $H_\rho$ the operator defined by

$$\forall K\in\mathcal K,\quad H_\rho(K)=\frac{1}{\rho(H(K))}H(K). \tag{3}$$

The radius function satisfies the following three basic properties:

1. continuity: $\rho$ is continuous with respect to $d_H$;

2. monotonicity: if $K\subset K'$ then $\rho(K)\leqslant\rho(K')$;

3. homogeneity: for all $\lambda\in\mathbb R$, $\rho(\lambda K)=|\lambda|\,\rho(K)$.

Actually $\rho$ is a very nice function because it enjoys an additional stability property:

$$\forall K,K'\in\mathcal K,\quad \rho(K\cup K')=\max\{\rho(K),\rho(K')\}. \tag{4}$$

The subject of interest of this paper is then the $H_\rho$-orbit of sets $K_0\in\mathcal K$. For simplicity, we will write $K_n=H_\rho^n(K_0)$ in the sequel, so that

$$\forall n\geqslant 0,\quad K_{n+1}=\frac{1}{d_n}\bigcup_{i=1}^{p}f_i(K_n) \tag{5}$$

with

$$d_n=\rho\left(\bigcup_{i=1}^{p}f_i(K_n)\right)=\max_{1\leqslant i\leqslant p}\rho(f_i(K_n)). \tag{6}$$

We will assume that $d_n>0$ for all $n\geqslant 0$, i.e. $H(K_n)\neq\{0\}$.

Observe that $\rho(K_n)=1$ for all $n\geqslant 1$, thus:

1. $K_n\subset B(0,1)$, so that the orbit of any set $K_0$ is bounded;

2. there exists at least one $x_n\in K_n$ such that $\|x_n\|=1$, so that $(K_n)_n$ cannot vanish to $\{0\}$.

In particular, if $(K_n)_n$ converges to a set $K$, then $\rho(K)=1$ and $K\neq\{0\}$.

This new operator is then a good candidate to solve the problems discussed in Section 1.2. It acts by freezing the geometrical structure of $H^n(K_0)$ at each step of the construction of the orbit.
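A minimal sketch of one step of $H_\rho$ on finite subsets of $\mathbb R^2$ (our own illustration): the image set is computed as in (5) and divided by the $d_n$ of (6), so every iterate has radius exactly $1$, even for a non-contractive IFS.

```python
import math

def h_rho_step(points, ifs):
    """One step of (5): points is a finite subset of R^2, ifs a list of (A, B)
    pairs with A a 2x2 matrix and B a 2-vector.  Returns (K_{n+1}, d_n)."""
    image = [(A[0][0] * x + A[0][1] * y + B[0],
              A[1][0] * x + A[1][1] * y + B[1])
             for A, B in ifs for x, y in points]
    d = max(math.hypot(u, v) for u, v in image)      # d_n = rho(H(K_n)), eq. (6)
    return [(u / d, v / d) for u, v in image], d

# A deliberately non-contractive IFS: a doubling map and a rotation.
ifs = [(((2.0, 0.0), (0.0, 2.0)), (1.0, 0.0)),       # f1(x) = 2x + (1, 0)
       (((0.0, -1.0), (1.0, 0.0)), (0.0, 0.0))]      # f2 = rotation by 90 degrees

K = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
ds = []
for _ in range(6):
    K, d = h_rho_step(K, ifs)
    ds.append(d)
```

Despite the doubling map, the orbit stays inside the closed unit ball, as announced.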

### 1.4. Eigen-equation problem

Let us point out a very strong connection with the ‘eigen-equation problem’ recently studied in [3] for affine IFS’s. Indeed, if $(K_n)_n$ converges to a set $K$, then $(d_n)_n$ converges to a number $d$, and taking the limit in (5) leads to $H(K)=dK$. Hence $d$ is an eigenvalue of $H$ and $K$ a corresponding eigenset. Existence of solutions for this equation is discussed and proved in [3]. The possible values of $d$ are closely related to the joint spectral radius of the $A_i$’s (see (7)). In particular, for linear IFS’s, the joint spectral radius was interpreted as a transition value for which there exists a corresponding eigenset whose structure is similar to the one described in Section 1.2. Unfortunately, these results do not hold for every IFS. In particular, this rules out simple IFS’s made up only of homotheties, or some more interesting ones made up of stochastic matrices. However, the results stated in [3] provide important clues to determine and study the possible limits of both sequences $(d_n)_n$ and $(K_n)_n$.

When studying the eigen-equation problem, an interesting question is how to approximate any couple $(d,K)$ of solutions of the equation $H(K)=dK$. Let us look at the special case where the IFS consists of only one linear map $f$ with matrix $A$, and set $K_0=\{x_0\}$. Then $K_n=\{x_n\}$ with

$$\forall n\geqslant 0,\quad x_{n+1}=\frac{1}{\|Ax_n\|}Ax_n.$$

One recognizes the famous Power Iteration Algorithm. Under suitable assumptions, it gives a simple way to approximate the unit eigenvector associated with the dominant eigenvalue of $A$, this eigenvalue being the limit of $(\|Ax_n\|)_n$. Therefore, iterating the operator $H_\rho$ from a set $K_0$ is nothing but a generalization of this algorithm, and thus provides a natural procedure to approximate both an eigenvalue of $H$ and one of its associated eigensets.
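For a single linear map, the $H_\rho$-orbit of a singleton is literally the power iteration. A self-contained sketch (with a symmetric $2\times 2$ test matrix of our choosing, eigenvalues $3$ and $1$):

```python
import math

def power_iteration(A, x, n=60):
    """x_{k+1} = A x_k / ||A x_k||; returns (||A x_n||, x_n) for a 2x2 matrix A."""
    lam = 0.0
    for _ in range(n):
        y = (A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1])
        lam = math.hypot(*y)                 # converges to the dominant eigenvalue
        x = (y[0] / lam, y[1] / lam)         # renormalized iterate, ||x|| = 1
    return lam, x

A = ((2.0, 1.0), (1.0, 2.0))                 # eigenvalues 3 and 1
lam, v = power_iteration(A, (1.0, 0.0))
```

Here $\|Ax_n\|\to 3$ and $x_n$ aligns with the dominant eigenvector $(1,1)/\sqrt2$, mirroring the limit pair $(d,K)$ of the eigen-equation.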

From now on we are interested in the convergence of $(K_n)_n$ and the geometric properties of its limit. Typically, $H_\rho$ is not contractive and the classic theory cannot be applied. In particular, $H_\rho$ may have several invariant points, so that the limit of $(K_n)_n$ may no longer be unique but may depend deeply on $K_0$. Furthermore, it is clear that the $H_\rho$-orbits may diverge for some $K_0$ (for example when the $f_i$’s are only rotations). We will expose different ways to state the convergence of $(K_n)_n$ depending on whether the IFS is affine (Section 2) or strictly linear (Section 3). Finally, some generalizations will be shown in the last section (Section 4).

## 2. Results for affine IFS’s

We suppose in this section that the IFS consists of affine maps $f_i$ defined by $f_i(x)=A_ix+B_i$. We denote by $\mathcal M=\{A_1,\dots,A_p\}$ the set of their canonical matrices. Let us recall that the joint spectral radius of $\mathcal M$ is defined by

$$\sigma_{\mathcal M}=\limsup_{n\to\infty}\left(\sup_{1\leqslant i_1,\dots,i_n\leqslant p}\{\alpha(A_{i_1}\cdots A_{i_n})\}\right)^{\frac1n}=\limsup_{n\to\infty}\left(\sup_{1\leqslant i_1,\dots,i_n\leqslant p}\{\|A_{i_1}\cdots A_{i_n}\|\}\right)^{\frac1n} \tag{7}$$

where $\alpha(A)$ denotes the usual spectral radius of the matrix $A$ (see [24]). Finally, we denote by $\mathrm{Sp}(A)$ the set of the eigenvalues of $A$, so that $\alpha(A)=\max\{|\lambda| : \lambda\in\mathrm{Sp}(A)\}$.
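Definition (7) suggests a brute-force numerical bound: for each word length $n$, $\max\|A_{i_1}\cdots A_{i_n}\|^{1/n}$ is an upper bound for $\sigma_{\mathcal M}$. A sketch of our own, for a classic pair of matrices whose joint spectral radius equals the golden ratio $\varphi$ (indeed $\alpha(A_1A_2)^{1/2}=\varphi$ and $\|A_1\|=\|A_2\|=\varphi$, so both bounds coincide):

```python
import itertools
import math

def matmul(A, B):
    return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
            (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

def opnorm(A):
    """Spectral norm of a 2x2 matrix via the eigenvalues of A^T A."""
    (a, b), (c, d) = A
    m11, m12, m22 = a*a + c*c, a*b + c*d, b*b + d*d
    tr, det = m11 + m22, m11*m22 - m12*m12
    return math.sqrt((tr + math.sqrt(max(tr*tr - 4*det, 0.0))) / 2)

def jsr_upper(mats, n):
    """max over words of length n of ||A_{i1}...A_{in}||^(1/n): an upper bound of sigma."""
    best = 0.0
    for word in itertools.product(mats, repeat=n):
        P = word[0]
        for M in word[1:]:
            P = matmul(P, M)
        best = max(best, opnorm(P))
    return best ** (1.0 / n)

mats = (((1.0, 1.0), (0.0, 1.0)), ((1.0, 0.0), (1.0, 1.0)))
phi = (1 + math.sqrt(5)) / 2        # sigma for this pair
```

In general the upper approximation only converges slowly, which reflects how delicate the joint spectral radius is to compute.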

### 2.1. Strategy: a general result

Our strategy consists in linking the convergence of $(K_n)_n$ to the asymptotic behavior of the sequence of positive numbers $(d_n)_n$. If $(K_n)_n$ converges to a set $K$, then $(d_n)_n$ converges to a number $d$, and the eigen-equation $H(K)=dK$ shows that $K$ may be seen as an invariant set of the classical Hutchinson operator associated with the IFS $\{\frac1d f_1,\dots,\frac1d f_p\}$. In particular, if $d>\lambda_H$ then $K$ is unique: it is the attractor of this contractive operator $\frac1d H$.

Conversely, if $(d_n)_n$ is a constant sequence, say $d_n=d$, one has $K_n=\left(\frac1d H\right)^n(K_0)$, so that $(K_n)_n$ converges to the attractor of $\frac1d H$ if $d>\lambda_H$. Actually, when $(d_n)_n$ is no longer constant but converges to a number $d>\lambda_H$, the convergence of $(K_n)_n$ still happens.

###### Theorem 2.1.

Let $K_0\in\mathcal K$. Assume that the sequence $(d_n)_n$ converges to a number $d>\lambda_H$. Then the sequence $(K_n)_n$ converges to the attractor of the IFS $\left\{\frac1d f_1,\dots,\frac1d f_p\right\}$.

###### Proof.

Let us set $K'_0=K_0$ and $K'_{n+1}=\frac1d H(K'_n)$ for all $n\geqslant 0$, so that $(K'_n)_n$ converges to the attractor of $\frac1d H$. We have to prove that $d_H(K_n,K'_n)$ converges to $0$. We can write

$$\begin{aligned} d_H(K_{n+1},K'_{n+1}) &\leqslant d_H\!\left(\tfrac{1}{d_n}H(K_n),\tfrac{1}{d_n}H(K'_n)\right)+d_H\!\left(\tfrac{1}{d_n}H(K'_n),\tfrac{1}{d}H(K'_n)\right)\\ &\leqslant \frac{\lambda_H}{d_n}\,d_H(K_n,K'_n)+\left|\frac{1}{d_n}-\frac{1}{d}\right|\rho(H(K'_n)). \end{aligned}$$

Since $(K'_n)_n$ converges, there exists $m>0$ such that $\rho(H(K'_n))\leqslant m$ for all $n$. Then let us fix $\mu\in\left(\frac{\lambda_H}{d},1\right)$ and $N$ such that $\frac{\lambda_H}{d_n}\leqslant\mu$ for all $n\geqslant N$. We obtain $\varepsilon_{n+1}\leqslant\mu\,\varepsilon_n+m_n$, where $\varepsilon_n=d_H(K_n,K'_n)$ and $m_n=\left|\frac{1}{d_n}-\frac{1}{d}\right|m$. It follows that

$$\forall n>N,\quad 0\leqslant\varepsilon_n\leqslant\mu^{\,n-N}\varepsilon_N+\sum_{k=0}^{n-N-1}\mu^{k}\,m_{n-1-k}.$$

Since $\mu<1$ and $m_n\to 0$, it follows that $\varepsilon_n\to 0$. ∎

Let us emphasize that we used neither the definition of $d_n$ nor the fact that the $f_i$’s are affine. Hence the result is valid for any pair of sequences $(K_n)_n$ and $(d_n)_n$ satisfying (5).

Let us notice that the sequence $(d_n)_n$ depends on $K_0$, so that the two limits $d$ and $K$ may also depend on $K_0$. If $d\leqslant\lambda_H$, the asymptotic behavior of $(K_n)_n$ is more delicate to derive directly from that of $(d_n)_n$. Therefore, in view of Theorem 2.1, we ask the following questions: Does the sequence $(d_n)_n$ always converge? May its limit depend on $K_0$, and may it be smaller than $\lambda_H$?
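Theorem 2.1 can be illustrated on the line with the IFS $\{x\mapsto 0.4x,\ x\mapsto 0.4x+0.6\}$ (a toy example of our own): after the first step $(d_n)_n$ is constant equal to $1>\lambda_H=0.4$, and the orbit converges to the attractor of the (unrescaled) IFS, a Cantor-like set in $[0,1]$ avoiding the middle gap $(0.4,0.6)$.

```python
# f1(x) = 0.4x, f2(x) = 0.4x + 0.6 on R, so lambda_H = 0.4.
ifs = [(0.4, 0.0), (0.4, 0.6)]

def h_rho(points):
    image = {a * x + b for a, b in ifs for x in points}
    d = max(abs(x) for x in image)               # d_n, eq. (6)
    return {x / d for x in image}, d

K, ds = {0.5}, []
for _ in range(10):
    K, d = h_rho(K)
    ds.append(d)
```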

### 2.2. Convergence of $(d_n)_n$

Except in very special cases it is impossible to obtain the exact expression of $d_n$. Therefore we rather seek bounds for $d_n$ and its possible limit $d$. Let us begin with a basic result.

###### Lemma 2.2.

Let $(d_n)_n$ be the sequence defined in (6). Then,

$$\forall n\geqslant 1,\quad \max_{1\leqslant i\leqslant p}\{\|B_i\|-\|A_i\|\}\leqslant d_n\leqslant\max_{1\leqslant i\leqslant p}\{\|A_i\|+\|B_i\|\}. \tag{8}$$

In particular, if $(d_n)_n$ converges to $d$, then $d$ also satisfies (8).

###### Proof.

Let $n\geqslant 1$, so that $\rho(K_n)=1$. Any $x\in f_i(K_n)$ writes $x=A_iy+B_i$ with $\|y\|\leqslant 1$, hence satisfies $\|B_i\|-\|A_iy\|\leqslant\|x\|\leqslant\|A_iy\|+\|B_i\|$, that is

$$\|B_i\|-\|A_i\|\leqslant\|x\|\leqslant\|A_i\|+\|B_i\|.$$

Since $d_n=\max_{1\leqslant i\leqslant p}\rho(f_i(K_n))$, we obtain (8). ∎
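Inequality (8) is easy to test numerically. The sketch below (a small 2-D affine IFS of our own choosing, with $\|A_1\|=0.5$, $\|B_1\|=1$, $\|A_2\|=0.6$, $\|B_2\|=0.2$) checks the bounds along an $H_\rho$-orbit:

```python
import math

ifs = [(((0.5, 0.0), (0.0, 0.5)), (1.0, 0.0)),     # ||A1|| = 0.5, ||B1|| = 1.0
       (((0.0, -0.6), (0.6, 0.0)), (0.0, 0.2))]    # ||A2|| = 0.6, ||B2|| = 0.2

def h_rho_step(points):
    image = [(A[0][0]*x + A[0][1]*y + B[0], A[1][0]*x + A[1][1]*y + B[1])
             for A, B in ifs for x, y in points]
    d = max(math.hypot(u, v) for u, v in image)
    return [(u / d, v / d) for u, v in image], d

lower = max(1.0 - 0.5, 0.2 - 0.6)    # max_i {||B_i|| - ||A_i||} = 0.5
upper = max(0.5 + 1.0, 0.6 + 0.2)    # max_i {||A_i|| + ||B_i||} = 1.5

K, ds = [(0.0, 1.0)], []             # K_0 already has radius 1
for _ in range(9):
    K, d = h_rho_step(K)
    ds.append(d)
```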

The next result provides non-trivial bounds for the possible limit $d$.

###### Proposition 2.3.

If $(d_n)_n$ converges to $d$, then

$$\forall i\in\{1,\dots,p\},\quad 0\leqslant\|B_i\|\leqslant\|d\,\mathrm{Id}-A_i\|. \tag{9}$$

Moreover, if $i$ is such that $d\notin\mathrm{Sp}(A_i)$, then

$$0\leqslant\|(d\,\mathrm{Id}-A_i)^{-1}B_i\|\leqslant 1. \tag{10}$$

In particular, if $d>\lambda_H$ then

$$\max_{1\leqslant i\leqslant p}\left\{\|(d\,\mathrm{Id}-A_i)^{-1}B_i\|\right\}\leqslant 1. \tag{11}$$
###### Proof.

Let $i\in\{1,\dots,p\}$. Pick $x_1\in K_1$ and consider the sequence $(x_n)_{n\geqslant 1}$ defined by $x_{n+1}=\frac{1}{d_n}f_i(x_n)$. One has $x_n\in K_n$, hence $\|x_n\|\leqslant 1$, for all $n\geqslant 1$. Writing $B_i=d_nx_{n+1}-A_ix_n$ for each index and summing, we get

$$nB_i=(d_nx_{n+1}-A_ix_1)+\sum_{k=1}^{n-1}(d_k\,\mathrm{Id}-A_i)x_{k+1}. \tag{12}$$

Therefore, for all $n\geqslant 2$,

$$\begin{aligned}\|B_i\| &\leqslant\frac1n\|d_nx_{n+1}-A_ix_1\|+\frac1n\sum_{k=1}^{n-1}\|(d_k\,\mathrm{Id}-A_i)x_{k+1}\|\\ &\leqslant\frac2n(d_n+\|A_i\|)+\left(\frac{1}{n-1}\sum_{k=1}^{n-1}\|d_k\,\mathrm{Id}-A_i\|\right).\end{aligned}$$

The first term in the sum above goes to $0$ as $n\to\infty$, and Cesàro’s Lemma implies that the term in brackets goes to $\|d\,\mathrm{Id}-A_i\|$. That gives (9).

Now assume that $i$ is such that $d\notin\mathrm{Sp}(A_i)$. Then the matrix $M_i=d\,\mathrm{Id}-A_i$ is invertible and (12) yields

$$n(M_i^{-1}B_i)=M_i^{-1}(d_nx_{n+1}-A_ix_1)+\sum_{k=1}^{n-1}M_i^{-1}(d_k\,\mathrm{Id}-A_i)x_{k+1}.$$

Thus we obtain in a similar way

$$\|M_i^{-1}B_i\|\leqslant\frac2n\|M_i^{-1}\|(d_n+\|A_i\|)+\left(\frac{1}{n-1}\sum_{k=1}^{n-1}\|M_i^{-1}(d_k\,\mathrm{Id}-A_i)\|\right).$$

We conclude as above, using that $\|M_i^{-1}(d_k\,\mathrm{Id}-A_i)\|\to\|M_i^{-1}M_i\|=1$ as $k\to\infty$. That gives (10).

Finally, since $\alpha(A)\leqslant\|A\|$ holds for every matrix $A$, the inequality $d>\lambda_H$ implies that $d\notin\mathrm{Sp}(A_i)$ for every $i$, so that (10) holds for every $i$ and gives (11), which concludes the proof. ∎

We will now show that (11) is an equality when the $f_i$’s are homotheties. Actually, we will prove (11) again but with a very different approach, which can be generalized (see Theorem 4.1). We need the following result. We denote by $\mathrm{ch}(S)$ the convex hull of a non-empty set $S$.

###### Lemma 2.4.

Assume that $\lambda_{f_i}<1$ for all $i$. Denote by $z_i$ the unique invariant point of $f_i$ and by $K^*$ the attractor of the IFS $\{f_1,\dots,f_p\}$. If $f_i(z_j)\in\mathrm{ch}(\{z_1,\dots,z_p\})$ for all $i,j$, then the convex hull of $K^*$ is the polytope $\mathrm{ch}(\{z_1,\dots,z_p\})$.

###### Proof.

Let us write $P=\mathrm{ch}(\{z_1,\dots,z_p\})$. Since $z_i\in K^*$ for all $i$, one has $\{z_1,\dots,z_p\}\subset K^*$ and then $P\subset\mathrm{ch}(K^*)$. To prove the reverse inclusion we have to show that $\mathrm{ch}(K^*)\subset P$; since $P$ is convex, it is enough to prove that $H(P)\subset P$ (then the attractor satisfies $K^*=\lim_n H^n(P)\subset P$), i.e. that $f_i(P)\subset P$ for all $i$. So let $i\in\{1,\dots,p\}$ and let $z=\sum_{j=1}^pt_jz_j$, with $t_j\geqslant 0$ and $\sum_jt_j=1$, be a point in $P$. We have

$$f_i(z)=\sum_{j=1}^{p}t_j(A_iz_j+B_i)=\sum_{j=1}^{p}t_jf_i(z_j),$$

thus $f_i(z)\in P$, as a convex combination of the points $f_i(z_j)\in P$. ∎

###### Proposition 2.5.

If $(d_n)_n$ converges to a number $d>\lambda_H$, then $d$ satisfies the inequality

$$\rho\left(\{(d\,\mathrm{Id}-A_1)^{-1}B_1,\dots,(d\,\mathrm{Id}-A_p)^{-1}B_p\}\right)\leqslant 1. \tag{13}$$

Moreover, if $f_i(x)=\alpha_ix+B_i$ with $\alpha_i>0$ for all $i$, then (13) is an equality. In this case, there is at least one index $i$ such that $B_i\neq 0$.

###### Proof.

Let us write $z_i=(d\,\mathrm{Id}-A_i)^{-1}B_i$ and $f_i^d=\frac1d f_i$. First, $d>\lambda_H\geqslant\alpha(A_i)$ implies that $d\,\mathrm{Id}-A_i$ is invertible and that $z_i$ is the unique invariant point of $f_i^d$. Secondly, Theorem 2.1 implies that $(K_n)_n$ converges to the attractor $L_d$ of the IFS $\{f_1^d,\dots,f_p^d\}$, so $\rho(L_d)=1$. Therefore $\{z_1,\dots,z_p\}\subset L_d$ and, by monotonicity, $\rho(\{z_1,\dots,z_p\})\leqslant\rho(L_d)=1$. That gives (13).

Now, if all the $f_i$’s are homotheties, one has

$$\forall j\in\{1,\dots,p\},\quad f_j^d(z_i)=\frac{\alpha_j}{d}z_i+\left(1-\frac{\alpha_j}{d}\right)z_j.$$

Since $0<\frac{\alpha_j}{d}<1$, one has $f_j^d(z_i)\in\mathrm{ch}(\{z_i,z_j\})$. Thus, it follows from Lemma 2.4 applied to the IFS $\{f_1^d,\dots,f_p^d\}$ that $\mathrm{ch}(L_d)=\mathrm{ch}(\{z_1,\dots,z_p\})$. Since $\rho(\mathrm{ch}(K))=\rho(K)$ for all $K\in\mathcal K$, we obtain

$$1=\rho(L_d)=\rho(\mathrm{ch}(L_d))=\rho(\mathrm{ch}(\{z_1,\dots,z_p\}))=\rho(\{z_1,\dots,z_p\}),$$

hence (13) becomes an equality. Finally, if $B_i=0$ for all $i$ then the left-hand side of (13) is zero, not $1$, hence a contradiction. ∎

Notice that, using the stability property (4) of $\rho$, (13) is exactly (11).

We conclude now by giving other non-trivial bounds for $d$, valid for a particular class of IFS’s. The next result is only a rephrasing of Theorems 2 and 3 in [3].

###### Proposition 2.6.

Assume that the $A_i$’s have no common invariant subspace except $\{0\}$ and $\mathbb R^D$. If $(K_n)_n$ converges to a set $K$, then $(d_n)_n$ converges to

$$d=\max_{1\leqslant i\leqslant p}\rho(f_i(K))\geqslant\sigma_{\mathcal M}$$

and equality holds if $B_i=0$ for all $i$.

The determination of $\sigma_{\mathcal M}$ is delicate, but the basic estimates

$$\max_{1\leqslant i\leqslant p}\{\alpha(A_i)\}\leqslant\sigma_{\mathcal M}\leqslant\max_{1\leqslant i\leqslant p}\{\|A_i\|\}=\lambda_H$$

always hold (see [24]). In particular for homotheties, i.e. when $A_i=\alpha_i\,\mathrm{Id}$ with $\alpha_i>0$, one obtains $\sigma_{\mathcal M}=\max_{1\leqslant i\leqslant p}\{\alpha_i\}$. Unfortunately, this simple case does not fulfill the hypotheses of Proposition 2.6.

### 2.3. Case of homotheties

We can give a complete answer when all the $f_i$’s are homotheties: the sequence $(d_n)_n$ always converges and its limit may be made explicit. First, we show that $(d_n)_n$ converges and we give the possible values of its limit $d$.

###### Lemma 2.7.

Assume that $f_i(x)=\alpha_ix+B_i$ with $\alpha_i>0$ for all $i$. Let $j$ be an index such that $\alpha_j=\max_{1\leqslant i\leqslant p}\{\alpha_i\}$. Then $(d_n)_n$ converges to a number $d$. If $d=\alpha_j$ then $B_j=0$; else $d$ satisfies

$$d=\begin{cases}\max_{1\leqslant i\leqslant p}\{\alpha_i+\|B_i\|\}&\text{if } d>\alpha_j,\\[2pt] \alpha_j-\|B_j\|&\text{if } d<\alpha_j.\end{cases}$$
###### Proof.

For all $n$ we can find an index $i_n$, a point $u_n\in K_{n+1}$ with $\|u_n\|=1$, and a point $y_n\in K_n$ such that $d_nu_n=\alpha_{i_n}y_n+B_{i_n}$. Since $d_{n+1}\geqslant\rho(f_{i_n}(K_{n+1}))\geqslant\|f_{i_n}(u_n)\|$ and $\|y_n\|\leqslant 1$, we obtain

$$d_{n+1}\geqslant\|\alpha_{i_n}u_n+B_{i_n}\|\geqslant\|\alpha_{i_n}u_n+d_nu_n\|-\|d_nu_n-B_{i_n}\|=(\alpha_{i_n}+d_n)\|u_n\|-\alpha_{i_n}\|y_n\|\geqslant d_n.$$

Thus $(d_n)_n$ is increasing and bounded (see (8)), so it converges. Let $d$ be its limit.

For all $n\geqslant 1$, choosing $x_n\in K_n$ such that $\|x_n\|=1$, we get

$$d\geqslant d_n\geqslant\|\alpha_jx_n+B_j\|\geqslant\|\alpha_jx_n\|-\|B_j\|=\alpha_j-\|B_j\|. \tag{14}$$

Inequality (9) with $i=j$ writes $\|B_j\|\leqslant|d-\alpha_j|$, so $d=\alpha_j$ implies $B_j=0$. If $d<\alpha_j$ then $d\leqslant\alpha_j-\|B_j\|$; in addition with (14) we obtain $d=\alpha_j-\|B_j\|$.

If $d>\alpha_j$, it follows from Proposition 2.5 that $d$ is a solution of $\max_{1\leqslant i\leqslant p}\frac{\|B_i\|}{d-\alpha_i}=1$. We can consider only the indices $i$ with $B_i\neq 0$. Then, since $d>\alpha_i$ and the functions $d\mapsto\frac{\|B_i\|}{d-\alpha_i}$ are strictly decreasing on $(\alpha_i,+\infty)$, the unique solution is $d=\max_{1\leqslant i\leqslant p}\{\alpha_i+\|B_i\|\}$. ∎
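Lemma 2.7 can be checked numerically; the sketch below (homotheties of our own choosing, $f_1(x)=0.5x+0.5$ and $f_2(x)=0.3x$ on $\mathbb R$) observes $(d_n)_n$ increasing to $d=\max_i\{\alpha_i+\|B_i\|\}=1$, the case $d>\alpha_j$ of the lemma.

```python
ifs = [(0.5, 0.5), (0.3, 0.0)]          # homotheties x -> alpha_i x + B_i on R

def h_rho(points):
    image = {a * x + b for a, b in ifs for x in points}
    d = max(abs(x) for x in image)       # d_n, eq. (6)
    return {x / d for x in image}, d

K, ds = {0.2}, []
for _ in range(12):
    K, d = h_rho(K)
    ds.append(d)

predicted = max(a + abs(b) for a, b in ifs)   # = 1.0, Lemma 2.7 with d > alpha_j
```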

We can now state the precise result. We denote by $\overline S$ the closure of a non-empty set $S$.

###### Theorem 2.8.

Assume that $f_i(x)=\alpha_ix+B_i$ with $\alpha_i>0$ for all $i$. Let $j$ be an index such that $\alpha_j=\max_{1\leqslant i\leqslant p}\{\alpha_i\}$, and let $d$ be the limit of $(d_n)_n$ given by Lemma 2.7. Then, for all admissible $K_0\in\mathcal K$, the sequence $(K_n)_n$ converges to a set $K$. Precisely,

If $B_j\neq 0$ then

1. Either $K_0=\left\{-\frac{B_j}{\|B_j\|}\right\}$, $d_n=\alpha_j-\|B_j\|$ for all $n$, and in this case $K=K_0$,

2. Or else $K$ does not depend on $K_0$: it is the attractor of the IFS $\left\{\frac1d f_1,\dots,\frac1d f_p\right\}$ with $d=\max_{1\leqslant i\leqslant p}\{\alpha_i+\|B_i\|\}$.

If $B_j=0$ then

1. Either $d=\alpha_j$ and then $K=\overline{\bigcup_{n\geqslant 0}K_n}$,

2. Or else $d>\alpha_j$ and then $K$ does not depend on $K_0$: it is the attractor of the IFS $\left\{\frac1d f_1,\dots,\frac1d f_p\right\}$ with $d=\max_{1\leqslant i\leqslant p}\{\alpha_i+\|B_i\|\}$.

###### Proof.

(i) Assume that $B_j\neq 0$. Hence by Lemma 2.7 we have $d\neq\alpha_j$.

(a) Suppose first that $d<\alpha_j$. Then, use of Lemma 2.7 again shows that $d=\alpha_j-\|B_j\|$. In particular $\|B_j\|=\alpha_j-d>0$. Moreover, since $(d_n)_n$ increases to $d$ while (14) gives $d_n\geqslant\alpha_j-\|B_j\|=d$, we have $d_n=d$ for all $n$. Let $x_n\in K_n$ and consider the sequence $(x_{n+k})_{k\geqslant 0}$ defined by $x_{n+k+1}=\frac1d f_j(x_{n+k})$ for all $k\geqslant 0$. Notice that $x_{n+k}\in K_{n+k}$, so in particular $\|x_{n+k}\|\leqslant 1$. Let us introduce the unique point $u$ such that $f_j(u)=du$, namely $u=\frac{B_j}{d-\alpha_j}=-\frac{B_j}{\|B_j\|}$. Noticing that $x_{n+k+1}-u=\frac{\alpha_j}{d}(x_{n+k}-u)$, we obtain by induction that

$$\forall k\geqslant 0,\quad 2\geqslant\|x_{n+k}-u\|=\left(\frac{\alpha_j}{d}\right)^{k}\|x_n-u\|\geqslant 0.$$

Since $\frac{\alpha_j}{d}>1$, we must have $\|x_n-u\|=0$, i.e. $x_n=u$. It follows that $K_n=\{u\}$ for all $n$; in particular $K_0=\{u\}$. Therefore the conditions of case 1 are all fulfilled. Conversely, if they are satisfied, we obviously have $K_n=\{u\}$ for all $n$, and the result follows.

(b) Suppose now that $d>\alpha_j$. Then Lemma 2.7 implies that $d=\max_{1\leqslant i\leqslant p}\{\alpha_i+\|B_i\|\}$. Since $d>\alpha_j=\lambda_H$, the convergence of $(K_n)_n$ follows from Theorem 2.1. Since $K$ is the attractor of the IFS $\left\{\frac1d f_1,\dots,\frac1d f_p\right\}$, it contains the invariant point of $\frac1d f_j$, which is $\frac{B_j}{d-\alpha_j}$.

(ii) Assume that $B_j=0$. Hence by (14) we have $d\geqslant\alpha_j$.

(a) Suppose first that $d=\alpha_j$. Then, since $(d_n)_n$ is increasing, $d_n\leqslant d=\alpha_j$, and (14) (with $B_j=0$) gives $d_n\geqslant\alpha_j$; hence $d_n=\alpha_j$ for all $n\geqslant 1$. Therefore, for all $n\geqslant 1$,

$$K_{n+1}=\frac{1}{\alpha_j}\bigcup_{i=1}^{p}f_i(K_n)=K_n\cup\bigcup_{i\neq j}\frac{1}{\alpha_j}f_i(K_n).$$

Thus $(K_n)_n$ is increasing. Since it is bounded, it converges to $K=\overline{\bigcup_{n\geqslant 0}K_n}$.

(b) Suppose now that . Assume that . Then inequality (9) with yi