
# The mean field equation for the Kuramoto model on graph sequences with non-Lipschitz limit

## Abstract

The Kuramoto model (KM) of coupled phase oscillators on graphs provides the most influential framework for studying collective dynamics and synchronization. It exhibits a rich repertoire of dynamical regimes. Since the work of Strogatz and Mirollo [20], the mean field equation, derived in the limit as the number of oscillators in the KM goes to infinity, has been the key to understanding a number of interesting effects, including the onset of synchronization and chimera states. In this work, we study the mathematical basis of the mean field equation as an approximation of the discrete KM. Specifically, we extend Neunzert's method of rigorous justification of the mean field equation (cf. [17]) to cover interacting dynamical systems on graphs. We then apply it to the KM on convergent graph sequences with non-Lipschitz limit. This family of graphs includes many graphs that are of interest in applications, e.g., nearest-neighbor and small-world graphs. The approaches for justifying the mean field limit for the KM proposed previously in [10, 3] do not cover the non-Lipschitz case.

## 1 Introduction

The KM of coupled phase oscillators provides a useful framework for studying collective behavior in large ensembles of interacting dynamical systems. It is derived from a weakly coupled system of nonlinear oscillators, which are described by autonomous systems of ordinary differential equations possessing a stable limit cycle [7]. Originally, Kuramoto considered all-to-all coupled systems, in which each oscillator interacts with all other oscillators in exactly the same way. In this case, the KM has the following form:

$$\dot u_{ni} = \omega_i + \frac{K}{n}\sum_{j=1}^{n} \sin(u_{nj} - u_{ni} + \alpha), \quad i \in [n] := \{1, 2, \dots, n\}. \tag{1.1}$$

Here, $u_{ni}(t)$ stands for the phase of oscillator $i$ as a function of time, $\omega_i$ is its intrinsic frequency, $K$ is the coupling strength, and $\alpha$ is a parameter defining the type of interactions.
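To make the model concrete, the following minimal sketch integrates (1.1) with forward Euler; the scheme, the parameter values, and the random initial data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kuramoto_all_to_all(u0, omega, K, alpha, dt, steps):
    """Forward-Euler integration of the all-to-all Kuramoto model (1.1)."""
    u = np.array(u0, dtype=float)
    n = u.size
    for _ in range(steps):
        # (K/n) * sum_j sin(u_j - u_i + alpha), vectorized over i
        coupling = (K / n) * np.sin(u[None, :] - u[:, None] + alpha).sum(axis=1)
        u = u + dt * (omega + coupling)
    return np.mod(u, 2 * np.pi)

rng = np.random.default_rng(0)
n = 100
u = kuramoto_all_to_all(rng.uniform(0, 2 * np.pi, n),
                        rng.normal(0.0, 0.1, n),
                        K=2.0, alpha=0.0, dt=0.01, steps=2000)
# |n^{-1} sum_j exp(i u_j)| is the order parameter: close to 1 when synchronized
r = np.abs(np.exp(1j * u).mean())
```

With coupling well above critical and a narrow frequency distribution, the order parameter settles near 1, which is the synchronization scenario recalled below.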

Despite its simple form, the KM (1.1) features a rich repertoire of interesting dynamical effects. For the purpose of this review, we mention the onset of synchronization in (1.1) with randomly distributed intrinsic frequencies (Fig. 1a, b) (cf. [19]) and chimera states, interesting spatio-temporal patterns combining coherent and incoherent behaviors [9, 1]. The mathematical analysis of these and many other dynamical regimes uses the mean field equation, derived in the limit when the number of oscillators goes to infinity [20]. The mean field equation is a partial differential equation for the probability density describing the distribution of the phases on the unit circle $\mathbb S$. We discuss the mean field equation in more detail below.

Recently, there has been a growing interest in the dynamics of coupled dynamical systems on graphs [18]. In the KM on a graph, each oscillator is placed at a node of an undirected graph $\Gamma_n = \langle V(\Gamma_n), E(\Gamma_n)\rangle$. Here, $V(\Gamma_n)$ stands for the node set of $\Gamma_n$ and $E(\Gamma_n)$ denotes its edge set. The oscillator at node $i$ interacts only with the oscillators at the adjacent nodes:

$$\dot u_{ni} = \omega_i + \frac{K}{n}\sum_{j:\, j\sim i} \sin(u_{nj} - u_{ni} + \alpha), \quad i \in [n], \tag{1.2}$$

where $j \sim i$ is a shorthand for $\{i, j\} \in E(\Gamma_n)$.

Clearly, one cannot expect limiting behavior of solutions of (1.2) as $n \to \infty$, unless the graph sequence $\{\Gamma_n\}$ is convergent in the appropriate sense. In the present paper, we use the following construction of the convergent sequence $\{\Gamma_n\}$. Let $W$ be a symmetric measurable function on the unit square $I^2$, $I := [0, 1]$. $W$ is called a graphon. It will be used to define the asymptotic behavior of $\{\Gamma_n\}$. Further, let

$$X_n = \{x_{n1}, x_{n2}, \dots, x_{nn}\}, \quad x_{ni} = i/n, \quad i \in [n], \tag{1.3}$$

and

$$W_{n,ij} := n^2 \int_{I_{n,i}\times I_{n,j}} W(x, y)\, dx\, dy, \quad I_{n,i} := [x_{n(i-1)}, x_{ni}), \quad i, j \in [n]. \tag{1.4}$$

The weighted graph $\Gamma_n$ on $n$ nodes is defined as follows. The vertex set is $V(\Gamma_n) = [n]$ and the edge set is

$$E(\Gamma_n) = \{\{i, j\} : W_{n,ij} \neq 0,\ i, j \in [n]\}. \tag{1.5}$$

Each edge $\{i, j\} \in E(\Gamma_n)$ is supplied with the weight $W_{n,ij}$.
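The cell averages (1.4) are easy to compute numerically. The sketch below approximates them with a midpoint rule on a subgrid of each cell; the function name and the subgrid resolution `m` are illustrative choices.

```python
import numpy as np

def cell_averages(W, n, m=50):
    """Approximate W_{n,ij} = n^2 * integral of W over I_{n,i} x I_{n,j},
    i.e. the average of W over each grid cell, using a midpoint rule
    on an m-point subdivision of every cell."""
    # pts[i] holds m midpoints inside the (0-indexed) cell [i/n, (i+1)/n)
    pts = (np.arange(n)[:, None] + (np.arange(m) + 0.5)[None, :] / m) / n
    Wn = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            Wn[i, j] = W(pts[i][:, None], pts[j][None, :]).mean()
    return Wn

# for the separable graphon W(x, y) = x*y the cell average is exactly the
# product of the cell midpoints, which the midpoint rule reproduces
Wn = cell_averages(lambda x, y: x * y, n=4)
```

For graphons that are constant on cells (such as the examples below), the subgrid average is exact.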

The KM on $\Gamma_n$ has the following form:

$$\dot u_{ni} = \omega_i + \frac{K}{n}\sum_{j=1}^{n} W_{n,ij} \sin(u_{nj} - u_{ni} + \alpha), \quad i \in [n]. \tag{1.6}$$

For different choices of $W$, (1.6) implements the KM on a variety of simple and weighted graphs. Moreover, it provides an effective approximation of the KM on random graphs. Indeed, let $\overline\Gamma_n$ be a random graph on $n$ nodes, whose edge set is defined as follows:

$$\mathbb P\left(\{i, j\} \in E(\overline\Gamma_n)\right) = W_{n,ij}, \tag{1.7}$$

assuming the range of $W$ is $[0, 1]$. The decision for each pair $\{i, j\}$ is made independently from the decisions on other pairs. $\overline\Gamma_n$ is called a W-random graph [11].

The KM on the W-random graph $\overline\Gamma_n$ has the following form:

$$\dot{\overline u}_{ni} = \omega_i + \frac{K}{n}\sum_{j=1}^{n} e_{nij} \sin(\overline u_{nj} - \overline u_{ni}), \quad i \in [n], \tag{1.8}$$

where $e_{nij}$ are independent Bernoulli random variables:

$$\mathbb P(e_{nij} = 1) = W_{n,ij},$$

and

$$\mathbb P(e_{nij} = 0) = 1 - W_{n,ij}.$$

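Sampling a W-random graph per (1.7) amounts to one independent Bernoulli draw per pair $\{i, j\}$. A minimal sketch (function name and parameters are illustrative):

```python
import numpy as np

def w_random_graph(Wn, rng):
    """Sample a symmetric 0/1 adjacency matrix with
    P(e_{nij} = 1) = W_{n,ij}, independently over pairs {i, j}, i < j."""
    n = Wn.shape[0]
    draws = rng.random((n, n)) < Wn        # Bernoulli(W_{n,ij}) draws
    A = np.triu(draws, k=1)                # one draw per pair, no self-loops
    return (A | A.T).astype(int)           # symmetrize

rng = np.random.default_rng(1)
n = 500
A = w_random_graph(np.full((n, n), 0.3), rng)   # Erdos-Renyi case: W = p
density = A.sum() / (n * (n - 1))               # should be close to p = 0.3
```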
The following lemma shows that the deterministic model (1.6) approximates the KM on the random graph (1.8).

###### Lemma 1.1.

[3] Let $u_n$ and $\overline u_n$ denote solutions of the IVPs for (1.6) and (1.8) respectively. Suppose that the initial data for these problems coincide. Then

$$\lim_{n\to\infty} \sup_{t\in[0,T]} \|u_n(t) - \overline u_n(t)\|_{1,n} = 0 \quad \text{a.s.}, \tag{1.9}$$

where $u_n = (u_{n1}, u_{n2}, \dots, u_{nn})$, $\overline u_n = (\overline u_{n1}, \overline u_{n2}, \dots, \overline u_{nn})$, and

$$\|u_n\|_{1,n} = \sqrt{n^{-1}\sum_{i=1}^{n} u_{ni}^2} \tag{1.10}$$

is a discrete $L^2$-norm.

###### Example 1.2.

A few examples are in order.

1. Let $W \equiv p \in (0, 1]$. Then $\overline\Gamma_n$ is an Erdős-Rényi graph (Fig. 2a).

2. Let

$$W_{p,h}(x, y) = \begin{cases} 1 - p, & d_{\mathbb S}(2\pi x, 2\pi y) \le 2\pi h,\\ p, & \text{otherwise}, \end{cases} \tag{1.11}$$

where $p$ and $h$ are two parameters and

$$d_{\mathbb S}(\theta, \theta') = \min\{|\theta - \theta'|,\ 2\pi - |\theta - \theta'|\} \tag{1.12}$$

is the distance on $\mathbb S$. Then $\overline\Gamma_n$ is a W-small-world graph [15] (Fig. 2b).

3. For $p = 0$, the corresponding graph is a $k$-nearest-neighbor graph (Fig. 2c).

For more examples, we refer the interested reader to [3].
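The small-world graphon (1.11) and the circle distance (1.12) translate directly into code; the sketch below is a straightforward transcription (the function names are illustrative).

```python
import numpy as np

def d_S(theta, theta_p):
    """Circle distance (1.12): the shorter of the two arcs between angles."""
    diff = np.abs(theta - theta_p)
    return np.minimum(diff, 2 * np.pi - diff)

def W_small_world(x, y, p, h):
    """Small-world graphon (1.11): value 1-p inside the band
    d_S(2*pi*x, 2*pi*y) <= 2*pi*h, and p outside it."""
    near = d_S(2 * np.pi * x, 2 * np.pi * y) <= 2 * np.pi * h
    return np.where(near, 1 - p, p)

w_near = float(W_small_world(0.10, 0.15, p=0.1, h=0.1))  # |x - y| = 0.05 <= h
w_far = float(W_small_world(0.00, 0.50, p=0.1, h=0.1))   # antipodal points
```

Setting `p = 0` recovers the nearest-neighbor case in item 3.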

###### Remark 1.3.

For simplicity, we restrict the presentation to the KM on dense graphs. The construction of the KM on W-random graphs (1.8) extends readily to sparse graphs, such as scale-free graphs (see [8] for details).

Below, we will focus on the deterministic model (1.6). All results for this model can be extended to the KM on random graphs via Lemma 1.1. Furthermore, from now on we will assume that all intrinsic frequencies in (1.6) are the same, $\omega_i = \omega$, $i \in [n]$, and, thus, can be set to $0$ by switching to the rotating frame of coordinates. Extending the analysis in the main part of this paper to models with distributed frequencies is straightforward (see, e.g., [3]), but it complicates the presentation. We will comment on the adjustments in the analysis that are necessary to cover the case of distributed intrinsic frequencies in Section 4. Until then we consider the following system of coupled oscillators on $\Gamma_n$:

$$\dot u_{n,i} = n^{-1}\sum_{j=1}^{n} W_{n,ij} D(u_{n,j} - u_{n,i}), \tag{1.13}$$

$$u_{n,i}(0) = u^0_{n,i}, \quad i \in [n], \tag{1.14}$$

where $D$ is a Lipschitz continuous $2\pi$-periodic function.

Without loss of generality, we assume

$$\sup_{(x,y)\in I^2} |W(x, y)| \le 1, \quad \max_{u\in\mathbb S} |D(u)| \le 1, \tag{1.15}$$

and

$$|D(u) - D(v)| \le |u - v| \quad \forall u, v \in \mathbb S. \tag{1.16}$$

In addition, we assume that the graphon $W$ satisfies the following condition:

$$\lim_{\delta\to 0} \int_I |W(x + \delta, y) - W(x, y)|\, dy = 0 \quad \forall x \in I. \tag{1.17}$$
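For intuition, (1.13), (1.14) with the classical choice $D = \sin$ (which satisfies (1.15) and (1.16)) can be integrated numerically. The forward Euler scheme and all parameter values below are illustrative assumptions, not part of the paper's analysis.

```python
import numpy as np

def km_on_graph(Wn, u0, D, dt, steps):
    """Forward-Euler sketch of (1.13):
    du_i/dt = n^{-1} * sum_j W_{n,ij} * D(u_j - u_i)."""
    u = np.array(u0, dtype=float)
    n = u.size
    for _ in range(steps):
        u = u + dt * (Wn * D(u[None, :] - u[:, None])).sum(axis=1) / n
    return u

n = 50
rng = np.random.default_rng(2)
Wn = np.ones((n, n))      # all-to-all coupling: W identically 1
u = km_on_graph(Wn, rng.uniform(0, 2 * np.pi, n), np.sin, dt=0.05, steps=1000)
# with identical frequencies and attractive coupling the phases synchronize,
# so the order parameter approaches 1
r = np.abs(np.exp(1j * u).mean())
```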

Having defined the KMs on deterministic and random graphs (1.6) and (1.8) respectively, we now turn to the mean field limit:

$$\frac{\partial}{\partial t}\rho(t, u, x) + \frac{\partial}{\partial u}\left\{ V(t, u, x)\, \rho(t, u, x) \right\} = 0, \tag{1.18}$$

where

$$V(t, u, x) = \int_I \int_{\mathbb S} W(x, y) D(v - u)\, \rho(t, v, y)\, dv\, dy. \tag{1.19}$$

The initial condition

$$\rho(0, u, x) = \rho^0(u, x) \in L^1(G) \tag{1.20}$$

is a probability density function on $\mathbb S$ for every $x \in I$. It approximates the distribution of the initial conditions (1.14).

In the continuum limit as $n \to \infty$, the nodes of $\Gamma_n$ fill out $I$. Thus, heuristically, $\rho(t, \cdot, x)$ in (1.18) stands for the density of the probability distribution of the phase of the oscillator at $x \in I$ at time $t$. As we will see below, this probability distribution is indeed absolutely continuous for $t \in [0, T]$, provided that the initial conditions for the discrete problem (1.13), (1.14) converge weakly to the probability distribution with density (1.20). In fact, in [3] it is shown that in this case, the empirical measure on the Borel subsets $\mathcal B(G)$ of $G := \mathbb S \times I$,

$$\mu^n_t(A) = n^{-1}\sum_{i=1}^{n} \mathbf 1_A\big((u_{ni}(t), x_{ni})\big), \quad A \in \mathcal B(G), \tag{1.21}$$

converges weakly to the absolutely continuous measure

$$\mu_t(A) = \iint_A \rho(t, u, x)\, du\, dx, \quad A \in \mathcal B(G). \tag{1.22}$$
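The empirical measure (1.21) is straightforward to evaluate on product boxes $A \subset G$. A sketch for a box $[u_{\mathrm{lo}}, u_{\mathrm{hi}}) \times [x_{\mathrm{lo}}, x_{\mathrm{hi}})$ (names and the uniform test data are illustrative):

```python
import numpy as np

def empirical_measure(u, x, box):
    """Evaluate the empirical measure (1.21) of a box
    [u_lo, u_hi) x [x_lo, x_hi) in G = S x I: the fraction of
    the points (u_i, x_i) that fall inside."""
    u_lo, u_hi, x_lo, x_hi = box
    inside = (u >= u_lo) & (u < u_hi) & (x >= x_lo) & (x < x_hi)
    return inside.mean()

n = 1000
x = np.arange(1, n + 1) / n                # grid points x_{ni} = i/n
rng = np.random.default_rng(3)
u = rng.uniform(0, 2 * np.pi, n)           # phases at some time t
# for phases uniform on the circle, roughly half the mass lies in [0, pi)
mass = empirical_measure(u, x, (0.0, np.pi, 0.0, 1.0))
```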

The analysis in [3], which extends the analysis of the all-to-all coupled KM (1.1) by Lancellotti [10], relies on the Lipschitz continuity of $W$. This is the essential assumption of Neunzert's fixed point argument that lies at the heart of the method used in [10, 3]. This puts the KM on such common graphs as the small-world and $k$-nearest-neighbor ones out of the scope of applications of [3] (see Example 1.2). It is the goal of the present paper to fill this gap. Specifically, we extend Neunzert's method to the KM on convergent families of graphs with non-Lipschitz limits. Our results apply to a general model of interacting particles on a graph (cf. [6]). However, for concreteness and in view of the diverse applications of the KM, in this paper, we present our method in the context of the KM of coupled phase oscillators.

The organization of this paper is as follows. In the next section, we revise Neunzert's fixed point theory to adapt it to the KM on convergent graph sequences. This includes a careful choice of the underlying metric space in Subsection 2.1, setting up the fixed point equation in Subsection 2.2, proving existence and uniqueness of the solution of the fixed point equation in Subsection 2.3, and showing continuous dependence on the initial data in Subsection 2.4 and on the graphon in Subsection 2.5. In Section 3, we apply the fixed point theory to the KM on graphs. To this end, we first apply it to an auxiliary problem and then show that this problem approximates the original KM on graphs. We conclude with a brief discussion of our results in Section 4.

## 2 The fixed point equation

### 2.1 The metric space

Let $\mathcal M := \mathcal M(\mathbb S)$ denote the space of Borel probability measures on $\mathbb S$. The bounded Lipschitz distance on $\mathcal M$ is given by

$$d(\mu, \eta) = \sup_{f \in \mathcal L}\left| \int_{\mathbb S} f(v)\, (d\mu(v) - d\eta(v)) \right|, \tag{2.1}$$

where $\mathcal L$ stands for the class of Lipschitz continuous functions on $\mathbb S$ with Lipschitz constant at most $1$ (cf. [4]). $(\mathcal M, d)$ is a complete metric space.

Consider the set of measurable $\mathcal M$-valued functions

$$\overline{\mathcal M} := \{\overline\mu : I \to \mathcal M\}.$$

Equip $\overline{\mathcal M}$ with the metric

$$\overline d(\overline\mu, \overline\eta) = \int_I d(\mu^x, \eta^x)\, dx.$$
###### Lemma 2.1.

$(\overline{\mathcal M}, \overline d)$ is a complete metric space.

###### Proof.

Since $d$ is a metric, it is straightforward that $\overline d$ is a metric as well. In order to prove the completeness of $(\overline{\mathcal M}, \overline d)$, take a Cauchy sequence $\{\overline\mu_n\}$ in $\overline{\mathcal M}$. Then there is an increasing sequence of indices $\{n_k\}$ such that

$$\overline d(\overline\mu_{n_k}, \overline\mu_{n_{k+1}}) = \int_I d(\mu^x_{n_k}, \mu^x_{n_{k+1}})\, dx < \frac{1}{2^{k+1}}, \quad k = 1, 2, \dots.$$

By B. Levi's theorem, the series

$$\sum_{k=1}^{\infty} d(\mu^x_{n_k}, \mu^x_{n_{k+1}})$$

converges for a.e. $x \in I$ to some measurable function $f(x)$, and

$$\sum_{k=1}^{\infty} \int_I d(\mu^x_{n_k}, \mu^x_{n_{k+1}})\, dx = \int_I f(x)\, dx.$$

Since, for every $i, j$ with $i < j$,

$$d(\mu^x_{n_i}, \mu^x_{n_j}) \le \sum_{k=i}^{j-1} d(\mu^x_{n_k}, \mu^x_{n_{k+1}}),$$

the sequence $\{\mu^x_{n_k}\}$ is Cauchy for a.e. $x \in I$. Since the metric space $(\mathcal M, d)$ is complete, there exists the limit

$$\mu^x = \lim_{k\to\infty} \mu^x_{n_k}, \quad \text{a.e. } x \in I.$$

Extending the definition of $\mu^x$ in an arbitrary way to all of $I$, we obtain a function

$$\overline\mu := \{\mu^x\} : I \to \mathcal M,$$

which is measurable as an a.e. pointwise limit of measurable functions. Thus $\overline\mu \in \overline{\mathcal M}$. Next, for every $i, j$ with $i < j$, we have

$$\overline d(\overline\mu_{n_i}, \overline\mu_{n_j}) = \int_I d(\mu^x_{n_i}, \mu^x_{n_j})\, dx \le \int_I \sum_{k=i}^{j-1} d(\mu^x_{n_k}, \mu^x_{n_{k+1}})\, dx = \sum_{k=i}^{j-1} \int_I d(\mu^x_{n_k}, \mu^x_{n_{k+1}})\, dx \le \sum_{k=i}^{\infty} \int_I d(\mu^x_{n_k}, \mu^x_{n_{k+1}})\, dx < \frac{1}{2^i}.$$

We also have that, for all $i < j$, $d(\mu^x_{n_i}, \mu^x_{n_j}) \le f(x)$ for a.e. $x \in I$, and the function $f$ is Lebesgue integrable on $I$. By the Lebesgue Dominated Convergence Theorem, letting $j \to \infty$, we obtain that

$$\overline d(\overline\mu_{n_i}, \overline\mu) = \int_I d(\mu^x_{n_i}, \mu^x)\, dx = \int_I \lim_{j\to\infty} d(\mu^x_{n_i}, \mu^x_{n_j})\, dx = \lim_{j\to\infty} \int_I d(\mu^x_{n_i}, \mu^x_{n_j})\, dx \le \frac{1}{2^i} \to 0 \quad \text{as } i \to \infty.$$

Since the subsequence $\{\overline\mu_{n_k}\}$ of the Cauchy sequence $\{\overline\mu_n\}$ converges to $\overline\mu$ in $(\overline{\mathcal M}, \overline d)$, we conclude that the whole sequence converges to $\overline\mu$ as well. Thus $(\overline{\mathcal M}, \overline d)$ is a complete metric space.

Let $T > 0$ be arbitrary but fixed and denote $\mathcal T := [0, T]$. We define $\overline{\mathcal M}_{\mathcal T} := C(\mathcal T, \overline{\mathcal M})$, the space of continuous $\overline{\mathcal M}$-valued functions on $\mathcal T$.

For any $\alpha > 0$ the following is a metric on $\overline{\mathcal M}_{\mathcal T}$:

$$d_\alpha(\overline\mu_\cdot, \overline\nu_\cdot) = \sup_{t\in\mathcal T} e^{-\alpha t}\, \overline d(\overline\mu_t, \overline\nu_t) = \sup_{t\in\mathcal T} e^{-\alpha t} \int_I d(\mu^x_t, \nu^x_t)\, dx. \tag{2.2}$$

### 2.2 The equation of characteristics

Recall that $\mathcal T = [0, T]$, where $T > 0$ is arbitrary but fixed. For a given $\overline\mu_\cdot \in \overline{\mathcal M}_{\mathcal T}$, consider the following equation of characteristics:

$$\frac{d}{dt} u = V[W, \overline\mu_\cdot](u, x, t), \tag{2.3}$$

where

$$V[W, \overline\mu_\cdot](u, x, t) = \int_I W(x, y)\left\{ \int_{\mathbb S} D(v - u)\, d\mu^y_t(v) \right\} dy. \tag{2.4}$$
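When each $\mu^y_t$ is an empirical measure carried by the discrete phases, the inner integral in (2.4) becomes a point evaluation and the $y$-integral a finite average, so $V$ reduces to the right-hand side of (1.13). A sketch with $D = \sin$ (names and test data are illustrative):

```python
import numpy as np

def V_empirical(u, x, W, phases, ys):
    """Evaluate (2.4) when mu^y_t is the empirical measure of the discrete
    phases: the y-integral becomes an average over the grid points ys,
    and the inner integral a point evaluation of D = sin."""
    return np.mean(W(x, ys) * np.sin(phases - u))

n = 200
ys = np.arange(1, n + 1) / n              # grid points x_{ni} = i/n
phases = np.zeros(n)                      # all oscillators at phase 0
W = lambda x, y: np.ones_like(y)          # constant graphon W = 1
v = V_empirical(np.pi / 2, 0.3, W, phases, ys)
# all mass at phase 0 and u = pi/2 give V = sin(0 - pi/2) = -1
```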
###### Lemma 2.2.

For every $\overline\mu_\cdot \in \overline{\mathcal M}_{\mathcal T}$, $V[W, \overline\mu_\cdot](u, x, t)$ is Lipschitz continuous in $u$ and continuous in $x$ and $t$.

###### Proof.

The proof follows from the estimates below. First, using (1.15) and (1.16), we have

$$|V[W,\overline\mu_\cdot](u, x, t) - V[W,\overline\mu_\cdot](v, x, t)| = \left|\int_I W(x, y)\int_{\mathbb S}\big(D(w - u) - D(w - v)\big)\, d\mu^y_t(w)\, dy\right| \le \int_I |W(x, y)|\int_{\mathbb S} |D(w - u) - D(w - v)|\, d\mu^y_t(w)\, dy \le |u - v|, \quad \forall u, v \in \mathbb S,\ x \in I,\ t \in \mathcal T.$$

Using the bound on $D$ (cf. (1.15)), we obtain

$$|V[W,\overline\mu_\cdot](u, x, t) - V[W,\overline\mu_\cdot](u, z, t)| = \left|\int_I \big(W(x, y) - W(z, y)\big)\int_{\mathbb S} D(v - u)\, d\mu^y_t(v)\, dy\right| \le \int_I |W(x, y) - W(z, y)|\int_{\mathbb S} |D(v - u)|\, d\mu^y_t(v)\, dy \le \int_I |W(x, y) - W(z, y)|\, dy \quad \forall u \in \mathbb S,\ x, z \in I,\ t \in \mathcal T. \tag{2.5}$$

The continuity of $V$ in $x$ follows from (2.5) and (1.17).

Finally, since for each fixed $u$ the function $v \mapsto D(v - u)$ belongs to $\mathcal L$ by (1.16),

$$|V[W,\overline\mu_\cdot](u, x, t) - V[W,\overline\mu_\cdot](u, x, s)| = \left|\int_I W(x, y)\int_{\mathbb S} D(v - u)\,\big(d\mu^y_t(v) - d\mu^y_s(v)\big)\, dy\right| \le \int_I |W(x, y)|\left|\int_{\mathbb S} D(v - u)\,\big(d\mu^y_t(v) - d\mu^y_s(v)\big)\right| dy \le \int_I |W(x, y)|\, d(\mu^y_t, \mu^y_s)\, dy \le \overline d(\overline\mu_t, \overline\mu_s).$$

Similarly to the derivation of the last inequality, we prove the following lemma.

###### Lemma 2.3.
$$|V[W,\overline\mu_\cdot](u, x, t) - V[W,\overline\nu_\cdot](u, x, t)| \le \overline d(\overline\mu_t, \overline\nu_t) \quad \forall \overline\mu_\cdot, \overline\nu_\cdot \in \overline{\mathcal M}_{\mathcal T}. \tag{2.6}$$

Consider the initial value problem (IVP) for (2.3) subject to the initial condition $u(s) = u_s$ at time $s \in \mathcal T$. By Lemma 2.2, for every $x \in I$ and $u_s \in \mathbb S$ there exists a unique solution of the IVP for (2.3). Since $V$ is uniformly Lipschitz in $u$, this solution can be extended to all of $\mathcal T$. Thus, the equation of characteristics (2.3) generates the flow on $\mathbb S$

$$T^x_{t,s}[W, \overline\mu_\cdot]\, u_s = u(t), \quad t, s \in \mathcal T,\ u_s \in \mathbb S. \tag{2.7}$$

For every $x \in I$, $T^x_{t,s}[W, \overline\mu_\cdot]$ is a two-parameter family of one-to-one transformations of $\mathbb S$ to itself depending continuously on $t$ and $s$:

$$T^x_{s,s}[W, \overline\mu_\cdot] = \mathrm{id}, \qquad \big(T^x_{t,s}[W, \overline\mu_\cdot]\big)^{-1} = T^x_{s,t}[W, \overline\mu_\cdot].$$

### 2.3 Existence of solution of the fixed point equation

In the remainder of this section, we will study the following fixed point equation. For a given $\overline\mu_0 \in \overline{\mathcal M}$, consider the pushforward measure

$$\overline\mu_t = \overline\mu_0 \circ T_{0,t}[W, \overline\mu_\cdot] \quad \forall t \in \mathcal T, \tag{2.8}$$

which is interpreted as

$$\mu^x_t = \mu^x_0 \circ T^x_{0,t}[W, \overline\mu_\cdot] \quad \text{almost everywhere (a.e.) } x \in I,\ \text{and } t \in \mathcal T. \tag{2.9}$$

First, we address existence and uniqueness of solution of (2.8).

###### Theorem 2.4.

For every $\overline\mu_0 \in \overline{\mathcal M}$, the fixed point equation (2.8) has a unique solution $\overline\mu_\cdot \in \overline{\mathcal M}_{\mathcal T}$.

For the proof of Theorem 2.4, we will need a variant of Gronwall's inequality, which we formulate below for convenience.

###### Lemma 2.5.

Let $\phi$ and $a$ be continuous functions on $[0, T]$ and

$$\phi(t) \le A\int_0^t \phi(s)\, ds + B\int_0^t a(s)\, ds + C, \quad t \in [0, T], \tag{2.10}$$

where $A$, $B$, and $C$ are nonnegative constants. Then

$$\phi(t) \le e^{At}\left(B\int_0^t a(s)\, e^{-As}\, ds + C\right). \tag{2.11}$$
###### Proof.

Denote the right-hand side of (2.10) by $\psi(t)$. Differentiating $\psi$ and using (2.10), we have

$$\psi'(s) = A\phi(s) + B a(s) \le A\psi(s) + B a(s). \tag{2.12}$$

Thus,

$$\frac{d}{ds}\left\{ e^{-As}\psi(s) \right\} \le B e^{-As} a(s)$$

and

$$\psi(t) \le e^{At}\left(\psi(0) + B\int_0^t e^{-As} a(s)\, ds\right) \le e^{At}\left(B\int_0^t e^{-As} a(s)\, ds + C\right). \tag{2.13}$$

Finally,

$$\phi(t) \le \psi(t) \le e^{At}\left(B\int_0^t e^{-As} a(s)\, ds + C\right).$$
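As a numerical sanity check (not part of the paper's argument), one can take $\phi$ satisfying (2.10) with equality, i.e. $\phi' = A\phi + Ba$, $\phi(0) = C$; then (2.11) holds with equality, which the sketch below reproduces up to discretization error. The constants and the choice of $a$ are arbitrary.

```python
import numpy as np

# Solve phi' = A*phi + B*a(t), phi(0) = C, so that (2.10) is an equality;
# the Gronwall bound (2.11) should then match phi up to O(dt^2) error.
A, B, C = 1.5, 2.0, 0.5
a = lambda s: np.cos(s) ** 2
ts = np.linspace(0.0, 1.0, 2001)
dt = ts[1] - ts[0]

phi = np.empty_like(ts)
phi[0] = C
for k in range(len(ts) - 1):
    # midpoint (RK2) step for phi' = A*phi + B*a(t)
    k1 = A * phi[k] + B * a(ts[k])
    k2 = A * (phi[k] + 0.5 * dt * k1) + B * a(ts[k] + 0.5 * dt)
    phi[k + 1] = phi[k] + dt * k2

# right-hand side of (2.11), with the s-integral done by the trapezoid rule
integrand = B * a(ts) * np.exp(-A * ts)
cumint = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
bound = np.exp(A * ts) * (cumint + C)
gap = np.max(np.abs(phi - bound))   # should be tiny
```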

###### Proof of Theorem 2.4.

Given $\overline\mu_0 \in \overline{\mathcal M}$, consider the map $A[W, \cdot] : \overline{\mathcal M}_{\mathcal T} \to \overline{\mathcal M}_{\mathcal T}$ defined by

$$A[W, \overline\mu_\cdot](t, x) = \mu^x_0 \circ T^x_{0,t}[W, \overline\mu_\cdot], \quad \text{a.e. } x \in I. \tag{2.14}$$

Below we show that $A[W, \cdot]$ is a contraction on $(\overline{\mathcal M}_{\mathcal T}, d_\alpha)$ with $\alpha > 2$. To this end,

$$\begin{aligned} \overline d\big(A[W,\overline\mu_\cdot](t,\cdot),\ A[W,\overline\eta_\cdot](t,\cdot)\big) &= \overline d\big(\overline\mu_0\circ T_{0,t}[W,\overline\mu_\cdot],\ \overline\mu_0\circ T_{0,t}[W,\overline\eta_\cdot]\big)\\ &= \int_I d\big(\mu^x_0\circ T^x_{0,t}[W,\overline\mu_\cdot],\ \mu^x_0\circ T^x_{0,t}[W,\overline\eta_\cdot]\big)\, dx\\ &= \int_I \sup_{f\in\mathcal L}\left|\int_{\mathbb S} f(v)\,\Big(d\big(\mu^x_0\circ T^x_{0,t}[W,\overline\mu_\cdot]\big)(v) - d\big(\mu^x_0\circ T^x_{0,t}[W,\overline\eta_\cdot]\big)(v)\Big)\right| dx\\ &= \int_I \sup_{f\in\mathcal L}\left|\int_{\mathbb S} f\big(T^x_{t,0}[W,\overline\mu_\cdot] v\big)\, d\mu^x_0(v) - \int_{\mathbb S} f\big(T^x_{t,0}[W,\overline\eta_\cdot] v\big)\, d\mu^x_0(v)\right| dx\\ &\le \int_I \int_{\mathbb S} \big| T^x_{t,0}[W,\overline\mu_\cdot] v - T^x_{t,0}[W,\overline\eta_\cdot] v \big|\, d\mu^x_0(v)\, dx =: \lambda(t). \end{aligned} \tag{2.15}$$

The change of variables formula used in (2.15) is explained in [12, §6.1]. Using (2.3) and (2.6), we obtain

$$\begin{aligned} \lambda(t) &= \int_I\int_{\mathbb S}\big| T^x_{t,0}[W,\overline\mu_\cdot] v - T^x_{t,0}[W,\overline\eta_\cdot] v \big|\, d\mu^x_0(v)\, dx\\ &\le \int_0^t\int_I\int_{\mathbb S}\big| V[W,\overline\mu_\cdot]\big(T^x_{s,0}[W,\overline\mu_\cdot] v, x, s\big) - V[W,\overline\eta_\cdot]\big(T^x_{s,0}[W,\overline\eta_\cdot] v, x, s\big)\big|\, d\mu^x_0(v)\, dx\, ds\\ &\le \int_0^t\int_I\int_{\mathbb S}\big| V[W,\overline\mu_\cdot]\big(T^x_{s,0}[W,\overline\mu_\cdot] v, x, s\big) - V[W,\overline\eta_\cdot]\big(T^x_{s,0}[W,\overline\mu_\cdot] v, x, s\big)\big|\, d\mu^x_0(v)\, dx\, ds\\ &\quad + \int_0^t\int_I\int_{\mathbb S}\big| V[W,\overline\eta_\cdot]\big(T^x_{s,0}[W,\overline\mu_\cdot] v, x, s\big) - V[W,\overline\eta_\cdot]\big(T^x_{s,0}[W,\overline\eta_\cdot] v, x, s\big)\big|\, d\mu^x_0(v)\, dx\, ds\\ &\le \int_0^t \overline d(\overline\mu_s, \overline\eta_s)\, ds + \int_0^t \lambda(s)\, ds. \end{aligned} \tag{2.16}$$

Using Gronwall’s inequality (cf. Lemma 2.5), from (2.16) we obtain

$$\lambda(t) \le e^t \int_0^t \overline d(\overline\mu_s, \overline\eta_s)\, e^{-s}\, ds. \tag{2.17}$$

Combining (2.15), (2.16), and (2.17), we have

$$\overline d\big(A[W,\overline\mu_\cdot](t,\cdot),\ A[W,\overline\eta_\cdot](t,\cdot)\big) \le e^t \int_0^t \overline d(\overline\mu_s, \overline\eta_s)\, e^{-s}\, ds \tag{2.18}$$

and

$$\begin{aligned} d_\alpha\big(A[W,\overline\mu_\cdot],\ A[W,\overline\eta_\cdot]\big) &= \sup_{t\in\mathcal T}\left\{ e^{-\alpha t}\, \overline d\big(A[W,\overline\mu_\cdot](t,\cdot),\ A[W,\overline\eta_\cdot](t,\cdot)\big)\right\} \le \sup_{t\in\mathcal T} e^{-(\alpha-1)t}\int_0^t \overline d(\overline\mu_s, \overline\eta_s)\, e^{-s}\, ds\\ &\le d_\alpha(\overline\mu_\cdot, \overline\eta_\cdot)\, \sup_{t\in\mathcal T} e^{-(\alpha-1)t}\int_0^t e^{(\alpha-1)s}\, ds \le (\alpha - 1)^{-1}\, d_\alpha(\overline\mu_\cdot, \overline\eta_\cdot). \end{aligned} \tag{2.19}$$

Since the factor $(\alpha - 1)^{-1}$ in (2.19) is less than one for $\alpha > 2$, we conclude the proof by applying the contraction mapping principle to establish a unique solution of (2.8).

### 2.4 Continuous dependence on initial data

###### Lemma 2.6.

Let $\overline\mu_\cdot$ and $\overline\eta_\cdot$ be two solutions of (2.8) corresponding to initial conditions $\overline\mu_0$ and $\overline\eta_0$, respectively. Then

$$\sup_{t\in\mathcal T} \overline d(\overline\mu_t, \overline\eta_t) \le e^{T}\, \overline d(\overline\mu_0, \overline\eta_0). \tag{2.20}$$
###### Proof.

For every $t \in \mathcal T$, by the triangle inequality, we have

$$\begin{aligned} \overline d(\overline\mu_t, \overline\eta_t) &= \overline d\big(\overline\mu_0\circ T_{0,t}[W,\overline\mu_\cdot],\ \overline\eta_0\circ T_{0,t}[W,\overline\eta_\cdot]\big)\\ &\le \overline d\big(\overline\mu_0\circ T_{0,t}[W,\overline\mu_\cdot],\ \overline\mu_0\circ T_{0,t}[W,\overline\eta_\cdot]\big) + \overline d\big(\overline\mu_0\circ T_{0,t}[W,\overline\eta_\cdot],\ \overline\eta_0\circ T_{0,t}[W,\overline\eta_\cdot]\big). \end{aligned} \tag{2.21}$$

Exactly in the same way as in (2.15), we estimate the first term on the right-hand side of (2.21) as follows:

$$\begin{aligned} \overline d\big(\overline\mu_0\circ T_{0,t}[W,\overline\mu_\cdot],\ \overline\mu_0\circ T_{0,t}[W,\overline\eta_\cdot]\big) &= \int_I \sup_{f\in\mathcal L}\left|\int_{\mathbb S}\Big(f\big(T^x_{t,0}[W,\overline\mu_\cdot] v\big) - f\big(T^x_{t,0}[W,\overline\eta_\cdot] v\big)\Big)\, d\mu^x_0(v)\right| dx\\ &\le \int_I\int_{\mathbb S}\big| T^x_{t,0}[W,\overline\mu_\cdot] v - T^x_{t,0}[W,\overline\eta_\cdot] v \big|\, d\mu^x_0(v)\, dx =: \lambda(t). \end{aligned} \tag{2.22}$$

Similarly, repeating the steps in (2.16), we obtain

$$\lambda(t) \le \int_0^t \overline d(\overline\mu_s, \overline\eta_s)\, ds + \int_0^t \lambda(s)\, ds. \tag{2.23}$$

Using Gronwall’s inequality, from (2.23) we obtain

$$\lambda(t) \le e^t \int_0^t \overline d(\overline\mu_s, \overline\eta_s)\, e^{-s}\, ds.$$