The mean field equation for the Kuramoto model on graph sequences with non-Lipschitz limit


The Kuramoto model (KM) of coupled phase oscillators on graphs provides the most influential framework for studying collective dynamics and synchronization. It exhibits a rich repertoire of dynamical regimes. Since the work of Strogatz and Mirollo [20], the mean field equation, derived in the limit as the number of oscillators in the KM goes to infinity, has been the key to understanding a number of interesting effects, including the onset of synchronization and chimera states. In this work, we study the mathematical basis of the mean field equation as an approximation of the discrete KM. Specifically, we extend Neunzert's method for the rigorous justification of the mean field equation (cf. [17]) to cover interacting dynamical systems on graphs. We then apply it to the KM on convergent graph sequences with a non-Lipschitz limit. This family of graphs includes many graphs that are of interest in applications, e.g., nearest-neighbor and small-world graphs. The approaches to justifying the mean field limit for the KM proposed previously in [10, 3] do not cover the non-Lipschitz case.

1 Introduction

The KM of coupled phase oscillators provides a useful framework for studying collective behavior in large ensembles of interacting dynamical systems. It is derived from a weakly coupled system of nonlinear oscillators, each described by an autonomous system of ordinary differential equations possessing a stable limit cycle [7]. Originally, Kuramoto considered all-to-all coupled systems, in which each oscillator interacts with all other oscillators in exactly the same way. In this case, the KM has the following form:

$$\dot{\theta}_i = \omega_i + \frac{K}{n} \sum_{j=1}^{n} \sin(\theta_j - \theta_i), \quad i \in [n] := \{1, 2, \dots, n\}. \tag{1.1}$$

Here, $\theta_i(t)$ stands for the phase of oscillator $i$ as a function of time, $\omega_i$ is its intrinsic frequency, and $K$ is the coupling strength, whose sign is a parameter defining the type of interactions: attractive for $K > 0$ and repulsive for $K < 0$.
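As a quick numerical companion to the all-to-all model (1.1), the following sketch integrates the system with a forward-Euler step. The function name, step size, and choice of frequency distribution are illustrative, not taken from the paper.

```python
import numpy as np

def kuramoto_all_to_all(n=100, K=2.0, T=10.0, dt=0.01, seed=0):
    """Forward-Euler integration of the all-to-all KM (1.1)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random initial phases
    omega = rng.normal(0.0, 1.0, n)            # intrinsic frequencies
    for _ in range(int(T / dt)):
        # entry (i, j) of the difference array is theta_j - theta_i,
        # so summing over axis 1 gives (K/n) * sum_j sin(theta_j - theta_i)
        coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    return np.mod(theta, 2.0 * np.pi)
```

A higher-order integrator would be preferable for quantitative work; Euler suffices to observe the qualitative transition to synchrony as $K$ grows.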

Despite its simple form, the KM (1.1) features a rich repertoire of interesting dynamical effects. For the purpose of this review, we mention the onset of synchronization in (1.1) with randomly distributed intrinsic frequencies (Fig. 1a, b) (cf. [19]) and chimera states, interesting spatio-temporal patterns combining coherent and incoherent behaviors [9, 1]. The mathematical analysis of these and many other dynamical regimes uses the mean field equation, derived in the limit when the number of oscillators goes to infinity [20]. The mean field equation is a partial differential equation for the probability density describing the distribution of the phases on the unit circle $\mathbb{S} := \mathbb{R}/2\pi\mathbb{Z}$. We discuss the mean field equation in more detail below.

Recently, there has been a growing interest in the dynamics of coupled dynamical systems on graphs [18]. In the KM on a graph, each oscillator is placed at a node of an undirected graph $\Gamma_n = \langle V(\Gamma_n), E(\Gamma_n) \rangle$. Here, $V(\Gamma_n) = [n]$ stands for the node set of $\Gamma_n$ and $E(\Gamma_n)$ denotes its edge set. The oscillator at node $i$ interacts only with the oscillators at the adjacent nodes:

$$\dot{\theta}_i = \omega_i + \frac{K}{n} \sum_{j:\, j \sim i} \sin(\theta_j - \theta_i), \quad i \in [n], \tag{1.2}$$

where $j \sim i$ is a shorthand for $\{i, j\} \in E(\Gamma_n)$.

Clearly, one cannot expect limiting behavior of solutions of (1.2) as $n \to \infty$, unless the graph sequence $\{\Gamma_n\}$ is convergent in the appropriate sense. In the present paper, we use the following construction of the convergent sequence $\{\Gamma_n\}$. Let $W$ be a symmetric measurable function on the unit square $I^2 := [0, 1]^2$. $W$ is called a graphon. It will be used to define the asymptotic behavior of $\{\Gamma_n\}$. Further, let

$$x_j = j/n, \quad j \in [n], \tag{1.3}$$

denote a uniform grid of points in $I$.

The weighted graph $\Gamma_n$ on $n$ nodes is defined as follows. The vertex set is $V(\Gamma_n) = [n]$ and the edge set is

$$E(\Gamma_n) = \big\{ \{i, j\} :\; W(x_i, x_j) \neq 0,\; i, j \in [n] \big\}.$$
Each edge $\{i, j\}$ is supplied with the weight $W(x_i, x_j)$.
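The deterministic construction above amounts to evaluating the graphon on the grid (1.3). A minimal sketch (the helper name and the assumption that $W$ is supplied as a vectorized function are ours):

```python
import numpy as np

def graphon_weight_matrix(W, n):
    """Evaluate a graphon W at the grid points x_j = j/n.

    Returns the n-by-n matrix whose (i, j) entry is W(x_i, x_j),
    i.e., the edge weights of the deterministic graph Gamma_n.
    """
    x = np.arange(1, n + 1) / n        # grid points x_j = j/n
    return W(x[:, None], x[None, :])   # broadcast to an (n, n) array
```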


Figure 1: The distribution of the phase oscillators in the KM (1.1) for values of $K$ below (a) and above (b) the critical value $K_c$. In the former plot, the distribution is approximately uniform, whereas the latter plot exhibits a pronounced cluster. The bold vectors depict the order parameter, whose length reflects the degree of coherence. The distribution of the oscillators shown in these plots can be effectively analyzed with the mean field equation (1.18). In particular, the critical value $K_c$ is determined from the mean field equation.
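The order parameter depicted in Fig. 1 is the standard Kuramoto quantity $r e^{i\psi} = n^{-1} \sum_{j=1}^{n} e^{i\theta_j}$; a minimal sketch of its computation (function name is illustrative):

```python
import numpy as np

def order_parameter(theta):
    """Complex order parameter r*exp(i*psi) = (1/n) * sum_j exp(i*theta_j).

    Returns (r, psi): r close to 1 means coherence, r close to 0 means
    the phases are spread out roughly uniformly on the circle.
    """
    z = np.exp(1j * np.asarray(theta)).mean()
    return np.abs(z), np.angle(z)
```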

The KM on $\Gamma_n$ has the following form:

$$\dot{\theta}_i = \omega_i + \frac{K}{n} \sum_{j=1}^{n} W(x_i, x_j) \sin(\theta_j - \theta_i), \quad i \in [n]. \tag{1.6}$$

For different $W$, (1.6) implements the KM on a variety of simple and weighted graphs. Moreover, it provides an effective approximation of the KM on random graphs. Indeed, let $G_n = G(n, W)$ be a random graph on $n$ nodes, whose edge set is defined as follows:

$$\mathbb{P}\big( \{i, j\} \in E(G_n) \big) = W(x_i, x_j), \quad i, j \in [n],$$

assuming the range of $W$ is $[0, 1]$. The decision for each pair $\{i, j\}$ is made independently from the decisions on other pairs. $G(n, W)$ is called a W-random graph [11].

The KM on the W-random graph $G(n, W)$ has the following form:

$$\dot{\theta}_i = \omega_i + \frac{K}{n} \sum_{j=1}^{n} e_{ij} \sin(\theta_j - \theta_i), \quad i \in [n], \tag{1.8}$$

where $e_{ij}$ are independent Bernoulli RVs:

$$\mathbb{P}(e_{ij} = 1) = W(x_i, x_j), \quad e_{ij} = e_{ji}, \quad 1 \le i \le j \le n.$$
The following lemma shows that the deterministic model (1.6) approximates the KM on the random graph (1.8).

Lemma 1.1.

[3] Let $\theta_n(t)$ and $\tilde{\theta}_n(t)$ denote solutions of the IVPs for (1.6) and (1.8), respectively. Suppose that the initial data for these problems coincide. Then

$$\lim_{n \to \infty}\, \max_{t \in [0, T]} \big\| \theta_n(t) - \tilde{\theta}_n(t) \big\|_{2,n} = 0 \quad \text{almost surely},$$

where $T > 0$ and

$$\| z \|_{2,n} = \left( n^{-1} \sum_{i=1}^{n} z_i^2 \right)^{1/2}, \quad z = (z_1, z_2, \dots, z_n),$$

is a discrete $\ell_2$-norm.
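The discrete norm used in Lemma 1.1 is a one-liner; a sketch assuming the normalized form $\|z\|_{2,n} = (n^{-1} \sum_i z_i^2)^{1/2}$:

```python
import numpy as np

def discrete_l2_norm(z):
    """Discrete l2 norm ||z||_{2,n} = sqrt(n^{-1} * sum_i z_i^2)."""
    z = np.asarray(z, dtype=float)
    return np.sqrt(np.mean(z ** 2))
```

The $n^{-1}$ normalization keeps the norm comparable across different system sizes $n$, which is what makes the limit in Lemma 1.1 meaningful.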

Example 1.2.

A few examples are in order.

  1. Let $W \equiv p$ for some constant $p \in (0, 1]$. Then $G(n, W)$ is an Erdős-Rényi graph (Fig. 2a).

  2. Let

     $$W(x, y) = \begin{cases} 1 - p, & d(x, y) \le r, \\ p, & \text{otherwise}, \end{cases}$$

    where $p \in [0, 1/2]$ and $r \in (0, 1/2)$ are two parameters and

     $$d(x, y) = \min\big\{ |x - y|,\; 1 - |x - y| \big\}$$

    is the distance on $I$ with its endpoints identified, i.e., on the unit circle. Then $G(n, W)$ is a W-small-world graph [15] (Fig. 2b).

  3. For $p = 0$, the graph $G(n, W)$ from the previous example is a $k$-nearest-neighbor graph, with $k$ determined by the coupling range $r$ (Fig. 2c).

For more examples, we refer the interested reader to [3].
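For concreteness, the graphons of Example 1.2 can be coded directly. The parameter names `p` and `r` and the helper names below are illustrative choices consistent with the example, not notation fixed by the paper:

```python
import numpy as np

def d_circ(x, y):
    """Distance on I = [0, 1] with the endpoints identified (unit circle)."""
    return np.minimum(np.abs(x - y), 1.0 - np.abs(x - y))

def W_erdos_renyi(p):
    """Constant graphon W = p: the Erdos-Renyi case."""
    return lambda x, y: np.full(np.broadcast(x, y).shape, p)

def W_small_world(p, r):
    """Small-world graphon: weight 1 - p within range r, p outside."""
    return lambda x, y: np.where(d_circ(x, y) <= r, 1.0 - p, p)

def W_nearest_neighbor(r):
    """p = 0 in the small-world graphon: a deterministic band (0/1-valued)."""
    return lambda x, y: np.where(d_circ(x, y) <= r, 1.0, 0.0)
```

Plotting any of these matrices evaluated on the grid $x_j = j/n$ reproduces the pixel pictures of Fig. 2.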


Figure 2: The pixel pictures representing adjacency matrices of the Erdős-Rényi (a), small-world (b), and nearest-neighbor (c) graphs.
Remark 1.3.

For simplicity, we restrict the presentation to the KM on dense graphs. The KM on W-random graphs (1.8) easily extends to sparse graphs like scale-free graphs (see [8] for details).

Below, we will focus on the deterministic model (1.6). All results for this model can be extended to the KM on random graphs via Lemma 1.1. Furthermore, from now on we will assume that all intrinsic frequencies in (1.6) are the same, $\omega_i = \omega$, $i \in [n]$, and thus can be set to zero by switching to the rotating frame of coordinates. Extending the analysis in the main part of this paper to models with distributed frequencies is straightforward (see, e.g., [3]), but it complicates the presentation. We will comment on the adjustments in the analysis that are necessary to cover the case of distributed intrinsic frequencies in Section 4. Until then, we consider the following system of coupled oscillators on $\Gamma_n$:

$$\dot{\theta}_i = \frac{K}{n} \sum_{j=1}^{n} W(x_i, x_j)\, D(\theta_j - \theta_i), \quad i \in [n], \tag{1.13}$$

subject to the initial condition

$$\theta_i(0) = \theta_i^0, \quad i \in [n], \tag{1.14}$$

where $D$ is a Lipschitz continuous $2\pi$-periodic function.

Without loss of generality, we assume

$$\sup_{u \in \mathbb{R}} |D(u)| \le 1 \tag{1.15}$$

and

$$|D(u) - D(v)| \le |u - v|, \quad u, v \in \mathbb{R}. \tag{1.16}$$

In addition, we assume that the graphon $W$ satisfies the following condition:

$$\lim_{h \to 0} \int_I \big| W(x + h, y) - W(x, y) \big|\, dy = 0 \quad \text{for every } x \in I, \tag{1.17}$$

i.e., the map $x \mapsto W(x, \cdot)$ is continuous as a map from $I$ to $L^1(I)$.
Having defined the KMs on deterministic and random graphs, (1.6) and (1.8) respectively, we will now turn to the mean field limit:

$$\partial_t \rho(t, u, x) + \partial_u \big\{ \rho(t, u, x)\, V(t, u, x) \big\} = 0, \tag{1.18}$$

where the velocity field is given by

$$V(t, u, x) = K \int_I \int_{\mathbb{S}} W(x, y)\, D(v - u)\, \rho(t, v, y)\, dv\, dy. \tag{1.19}$$

The initial condition

$$\rho(0, u, x) = \rho^0(u, x) \tag{1.20}$$

is a probability density function on $\mathbb{S}$ for every $x \in I$. It approximates the distribution of the initial conditions (1.14).

In the continuum limit as $n \to \infty$, the nodes $x_j$, $j \in [n]$, of $\Gamma_n$ fill out $I$. Thus, heuristically, $\rho(t, u, x)$ in (1.18) stands for the density of the probability distribution of the phase of the oscillator at $x \in I$ on $\Gamma_n$ at time $t$. As we will see below, this probability distribution is indeed absolutely continuous for $t \in [0, T]$, provided that the initial conditions for the discrete problem (1.13), (1.14) converge weakly to the probability distribution with density (1.20). In fact, in [3] it is shown that in this case, the empirical measure on the Borel subsets of $G := \mathbb{S} \times I$, $t \in [0, T]$,

$$\mu_t^n(A) = n^{-1} \sum_{i=1}^{n} \delta_{(\theta_i(t),\, x_i)}(A), \quad A \in \mathcal{B}(G),$$

converges weakly to the absolutely continuous measure

$$\mu_t(A) = \int_A \rho(t, u, x)\, du\, dx.$$
The analysis in [3], which extends the analysis of the all-to-all coupled KM (1.1) by Lancellotti [10], relies on the Lipschitz continuity of $W$. This is the essential assumption of Neunzert's fixed point argument, which lies at the heart of the method used in [10, 3]. This puts the KM on such common graphs as the small-world and $k$-nearest-neighbor ones out of the scope of applications of [3] (see Example 1.2). It is the goal of the present paper to fill this gap. Specifically, we extend Neunzert's method to the KM on convergent families of graphs with non-Lipschitz limits. Our results apply to a general model of interacting particles on a graph (cf. [6]). However, for concreteness and in view of the diverse applications of the KM, in this paper, we present our method in the context of the KM of coupled phase oscillators.

The organization of this paper is as follows. In the next section, we revise Neunzert's fixed point theory to adapt it to the KM on convergent graph sequences. This includes a careful choice of the underlying metric space in Subsection 2.1, setting up the fixed point equation in Subsection 2.2, proving the existence and uniqueness of the solution of the fixed point equation in Subsection 2.3, and showing continuous dependence on the initial data in Subsection 2.4 and on the graphon in Subsection 2.5. In Section 3, we apply the fixed point theory to the KM on graphs. To this end, we first apply it to an auxiliary problem and then show that this problem approximates the original KM on graphs. We conclude with a brief discussion of our results in Section 4.

2 The fixed point equation

2.1 The metric space

Let $\mathcal{P}(\mathbb{S})$ denote the space of Borel probability measures on $\mathbb{S}$. The bounded Lipschitz distance on $\mathcal{P}(\mathbb{S})$ is given by

$$d(\mu, \nu) = \sup_{f \in \mathcal{L}} \left| \int_{\mathbb{S}} f\, d\mu - \int_{\mathbb{S}} f\, d\nu \right|, \quad \mu, \nu \in \mathcal{P}(\mathbb{S}),$$

where $\mathcal{L}$ stands for the class of Lipschitz continuous functions on $\mathbb{S}$ bounded by $1$ and with Lipschitz constant at most $1$ (cf. [4]). $(\mathcal{P}(\mathbb{S}), d)$ is a complete metric space.
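The supremum over $\mathcal{L}$ in the bounded Lipschitz distance is generally not computable in closed form. The sketch below produces a crude lower bound for empirical measures on the circle by maximizing over the finite test family $f_k(u) = \cos(u - \varphi_k)$, each member of which is bounded by $1$ with Lipschitz constant $1$; this is an illustration, not part of the paper's argument, and the function names are ours.

```python
import numpy as np

def bl_lower_bound(samples_mu, samples_nu, n_test=64):
    """Crude lower bound on the bounded Lipschitz distance between two
    empirical measures on the circle, using test functions
    f_k(u) = cos(u - phi_k), which satisfy |f_k| <= 1 and Lip(f_k) <= 1.
    The true distance is a supremum over ALL such f, hence >= this value."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_test, endpoint=False)
    fmu = np.cos(np.subtract.outer(samples_mu, phis)).mean(axis=0)
    fnu = np.cos(np.subtract.outer(samples_nu, phis)).mean(axis=0)
    return np.abs(fmu - fnu).max()
```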

Consider the set $\mathcal{X}$ of measurable $\mathcal{P}(\mathbb{S})$-valued functions on $I$. Equip $\mathcal{X}$ with the metric

$$\mathsf{d}(\mu, \nu) = \int_I d\big(\mu(x), \nu(x)\big)\, dx, \quad \mu, \nu \in \mathcal{X}.$$
Lemma 2.1.

$(\mathcal{X}, \mathsf{d})$ is a complete metric space.


Since $d$ is a metric, it is straightforward that $\mathsf{d}$ is a metric as well. In order to prove the completeness of $(\mathcal{X}, \mathsf{d})$, take a Cauchy sequence $(\mu_k)$ in $\mathcal{X}$. Then there is an increasing sequence of indices $(k_m)$ such that

$$\mathsf{d}(\mu_{k_m}, \mu_{k_{m+1}}) \le 2^{-m}, \quad m \in \mathbb{N}.$$

By B. Levi's theorem, the series

$$\sum_{m=1}^{\infty} d\big( \mu_{k_m}(x), \mu_{k_{m+1}}(x) \big)$$

converges for a.e. $x \in I$ to some measurable function $S(x)$, and

$$\int_I S(x)\, dx \le \sum_{m=1}^{\infty} 2^{-m} = 1.$$

Since, for every $m' > m \ge 1$,

$$d\big( \mu_{k_m}(x), \mu_{k_{m'}}(x) \big) \le \sum_{l=m}^{m'-1} d\big( \mu_{k_l}(x), \mu_{k_{l+1}}(x) \big),$$

the sequence $\big( \mu_{k_m}(x) \big)$ is Cauchy for a.e. $x \in I$. Since the metric space $(\mathcal{P}(\mathbb{S}), d)$ is complete, there exists the limit

$$\tilde{\mu}(x) := \lim_{m \to \infty} \mu_{k_m}(x) \quad \text{for a.e. } x \in I.$$

Extending the definition of $\tilde{\mu}$ in an arbitrary way to all of $I$, we obtain a function

$$\tilde{\mu}: I \to \mathcal{P}(\mathbb{S}),$$

which is measurable as an a.e. pointwise limit of measurable functions. Thus $\tilde{\mu} \in \mathcal{X}$. Next, for every $m' > m \ge 1$, we have

$$d\big( \mu_{k_m}(x), \mu_{k_{m'}}(x) \big) \le \sum_{l=m}^{\infty} d\big( \mu_{k_l}(x), \mu_{k_{l+1}}(x) \big) \le S(x).$$

We also have that, for all $m$, $d\big( \mu_{k_m}(x), \mu_{k_{m'}}(x) \big) \to d\big( \mu_{k_m}(x), \tilde{\mu}(x) \big)$ as $m' \to \infty$ for a.e. $x \in I$, and the function $S$ is Lebesgue integrable on $I$. By the Lebesgue Dominated Convergence Theorem, letting $m' \to \infty$, we obtain that

$$\mathsf{d}(\mu_{k_m}, \tilde{\mu}) = \int_I d\big( \mu_{k_m}(x), \tilde{\mu}(x) \big)\, dx \le 2^{-m+1}.$$

Since the subsequence $(\mu_{k_m})$ of the Cauchy sequence $(\mu_k)$ converges to $\tilde{\mu}$ in $\mathcal{X}$, we conclude that the whole sequence $(\mu_k)$ converges to $\tilde{\mu}$ as well. Thus $(\mathcal{X}, \mathsf{d})$ is a complete metric space. ∎

Let $T > 0$ be arbitrary but fixed and denote $J := [0, T]$. We define $\mathcal{C} := C(J; \mathcal{X})$, the space of continuous $\mathcal{X}$-valued functions on $J$.

For any $\alpha > 0$, the following is a metric on $\mathcal{C}$:

$$\mathsf{d}_\alpha(\mu, \nu) = \max_{t \in J} e^{-\alpha t}\, \mathsf{d}\big( \mu(t), \nu(t) \big).$$
2.2 The equation of characteristics

Recall that $J = [0, T]$, where $T > 0$ is arbitrary but fixed. For a given $\mu \in \mathcal{C}$, consider the following equation of characteristics:

$$\frac{d}{dt}\, \varphi(t) = V[\mu]\big( t, \varphi(t), x \big), \tag{2.3}$$

where

$$V[\mu](t, u, x) = K \int_I \int_{\mathbb{S}} W(x, y)\, D(v - u)\, d\big( \mu(t)(y) \big)(v)\, dy. \tag{2.4}$$

Lemma 2.2.

For every $\mu \in C([0, T]; \mathcal{X})$, $V[\mu](t, u, x)$ is Lipschitz continuous in $u$ and continuous in $t$ and $x$.


The proof follows from the following estimates. First, using (1.15) and (1.16), we have

$$\big| V[\mu](t, u, x) - V[\mu](t, v, x) \big| \le K \int_I \int_{\mathbb{S}} W(x, y)\, \big| D(w - u) - D(w - v) \big|\, d\big( \mu(t)(y) \big)(w)\, dy \le K |u - v|.$$

Using the bound on $D$ (cf. (1.15)), we obtain

$$\big| V[\mu](t, u, x) - V[\mu](t, u, x') \big| \le K \int_I \big| W(x, y) - W(x', y) \big|\, dy. \tag{2.5}$$

The continuity of $V[\mu]$ in $x$ follows from (2.5) and (1.17). The continuity in $t$ follows from the continuity of $t \mapsto \mu(t)$ by a similar estimate.


Similarly to the derivation of the last inequality, we prove the following lemma.

Lemma 2.3.

For every $\mu, \tilde{\mu} \in \mathcal{C}$ and $(t, u, x) \in J \times \mathbb{S} \times I$,

$$\big| V[\mu](t, u, x) - V[\tilde{\mu}](t, u, x) \big| \le K\, \mathsf{d}\big( \mu(t), \tilde{\mu}(t) \big). \tag{2.6}$$
Consider the initial value problem (IVP) for (2.3) subject to the initial condition $\varphi(s) = u$ at time $s \in J$. By Lemma 2.2, for every $\mu \in \mathcal{C}$ and $(u, x) \in \mathbb{S} \times I$, there exists a unique local solution of the IVP for (2.3). Since $V[\mu]$ is uniformly Lipschitz in $u$, the solution can be extended to all of $J$. Thus, the equation of characteristics (2.3) generates the flow

$$P_{t,s}^x[\mu]: \mathbb{S} \to \mathbb{S}, \quad P_{t,s}^x[\mu](u) := \varphi(t; s, u, x), \quad s, t \in J. \tag{2.7}$$

For every $\mu \in \mathcal{C}$ and $s, t \in J$, $P_{t,s}^x[\mu]$ is a two-parameter family of one-to-one transformations of $\mathbb{S}$ to itself depending continuously on $x$.
2.3 Existence of solution of the fixed point equation

In the remainder of this section, we will study the following fixed point equation. For a given $\nu \in \mathcal{X}$, consider the pushforward measure

$$\mu(t) = P_{t,0}[\mu]_{\#}\, \nu, \quad t \in J, \tag{2.8}$$

which is interpreted as

$$\mu(t)(x)(A) = \nu(x)\Big( \big( P_{t,0}^x[\mu] \big)^{-1}(A) \Big) \quad \text{for every } A \in \mathcal{B}(\mathbb{S}) \text{ and a.e. } x \in I.$$
First, we address the existence and uniqueness of the solution of (2.8).

Theorem 2.4.

For every $\nu \in \mathcal{X}$, the fixed point equation (2.8) has a unique solution $\mu \in C([0, T]; \mathcal{X})$.

For the proof of Theorem 2.4, we will need a variant of Gronwall's inequality, which we formulate below for convenience.

Lemma 2.5.

Let $\phi$ and $\psi$ be nonnegative continuous functions on $[0, T]$ satisfying

$$\phi(t) \le C + \int_0^t \psi(s)\, \phi(s)\, ds, \quad t \in [0, T], \tag{2.10}$$

where $C \ge 0$. Then

$$\phi(t) \le C \exp\left\{ \int_0^t \psi(s)\, ds \right\}, \quad t \in [0, T]. \tag{2.11}$$
Denote the right-hand side of (2.10) by $R(t)$. Differentiating $R$ and using (2.10), we have

$$R'(t) = \psi(t)\, \phi(t) \le \psi(t)\, R(t), \quad R(0) = C.$$

Integrating this differential inequality yields

$$R(t) \le C \exp\left\{ \int_0^t \psi(s)\, ds \right\},$$

which together with $\phi(t) \le R(t)$ implies (2.11). ∎
Proof of Theorem 2.4.

Given $\nu \in \mathcal{X}$, consider the map $\mathcal{T}$ defined by

$$\mathcal{T}(\mu)(t) = P_{t,0}[\mu]_{\#}\, \nu, \quad t \in J. \tag{2.14}$$

Below we show that $\mathcal{T}$ is a contraction on $(\mathcal{C}, \mathsf{d}_\alpha)$ with $\alpha > K e^{KT}$. To this end,

$$\mathsf{d}\big( \mathcal{T}(\mu)(t), \mathcal{T}(\tilde{\mu})(t) \big) \le \int_I \sup_{u \in \mathbb{S}} \big| P_{t,0}^x[\mu](u) - P_{t,0}^x[\tilde{\mu}](u) \big|\, dx. \tag{2.15}$$

The change of variables formula used in (2.15) is explained in [12, §6.1]. Using (2.3) and (2.6), we obtain

$$\big| P_{t,0}^x[\mu](u) - P_{t,0}^x[\tilde{\mu}](u) \big| \le \int_0^t \Big( K\, \big| P_{s,0}^x[\mu](u) - P_{s,0}^x[\tilde{\mu}](u) \big| + K\, \mathsf{d}\big( \mu(s), \tilde{\mu}(s) \big) \Big)\, ds. \tag{2.16}$$

Using Gronwall's inequality (cf. Lemma 2.5), from (2.16) we obtain

$$\big| P_{t,0}^x[\mu](u) - P_{t,0}^x[\tilde{\mu}](u) \big| \le K e^{Kt} \int_0^t \mathsf{d}\big( \mu(s), \tilde{\mu}(s) \big)\, ds. \tag{2.17}$$

Combining (2.15), (2.16), and (2.17), we have

$$\mathsf{d}\big( \mathcal{T}(\mu)(t), \mathcal{T}(\tilde{\mu})(t) \big) \le K e^{KT} \int_0^t e^{\alpha s}\, e^{-\alpha s}\, \mathsf{d}\big( \mu(s), \tilde{\mu}(s) \big)\, ds \le \frac{K e^{KT}}{\alpha}\, e^{\alpha t}\, \mathsf{d}_\alpha(\mu, \tilde{\mu}),$$

and, therefore,

$$\mathsf{d}_\alpha\big( \mathcal{T}(\mu), \mathcal{T}(\tilde{\mu}) \big) \le \frac{K e^{KT}}{\alpha}\, \mathsf{d}_\alpha(\mu, \tilde{\mu}).$$

We conclude the proof by using the contraction mapping principle to establish the existence of a unique solution of (2.8). ∎

2.4 Continuous dependence on initial data

Lemma 2.6.

Let $\mu$ and $\tilde{\mu}$ be two solutions of (2.8) corresponding to the initial conditions $\nu$ and $\tilde{\nu}$, respectively. Then

$$\max_{t \in J} \mathsf{d}\big( \mu(t), \tilde{\mu}(t) \big) \le C(K, T)\, \mathsf{d}(\nu, \tilde{\nu})$$

for some constant $C(K, T) > 0$.
For every $t \in J$, by the triangle inequality, we have

$$\mathsf{d}\big( \mu(t), \tilde{\mu}(t) \big) \le \mathsf{d}\big( P_{t,0}[\mu]_{\#}\nu,\; P_{t,0}[\mu]_{\#}\tilde{\nu} \big) + \mathsf{d}\big( P_{t,0}[\mu]_{\#}\tilde{\nu},\; P_{t,0}[\tilde{\mu}]_{\#}\tilde{\nu} \big). \tag{2.21}$$

Exactly in the same way as in (2.15), we estimate the first term on the right-hand side of (2.21) as follows:

$$\mathsf{d}\big( P_{t,0}[\mu]_{\#}\nu,\; P_{t,0}[\mu]_{\#}\tilde{\nu} \big) \le e^{Kt}\, \mathsf{d}(\nu, \tilde{\nu}). \tag{2.22}$$
Similarly, repeating the steps in (2.16), we obtain

$$\mathsf{d}\big( P_{t,0}[\mu]_{\#}\tilde{\nu},\; P_{t,0}[\tilde{\mu}]_{\#}\tilde{\nu} \big) \le K e^{Kt} \int_0^t \mathsf{d}\big( \mu(s), \tilde{\mu}(s) \big)\, ds. \tag{2.23}$$

Using Gronwall's inequality, from (2.23) we obtain