
# The geometry of spontaneous spiking in neuronal networks

Georgi S. Medvedev (Department of Mathematics, Drexel University, 3141 Chestnut Street, Philadelphia, PA 19104, medvedev@drexel.edu) and Svitlana Zhuravytska (Bank of America, 1200 N. King St., Wilmington, DE 19801, svitlana.zhuravytska@bankofamerica.com)
###### Abstract

The mathematical theory of pattern formation in electrically coupled networks of excitable neurons forced by small noise is presented in this work. Using the Freidlin-Wentzell large deviation theory for randomly perturbed dynamical systems and the elements of the algebraic graph theory, we identify and analyze the main regimes in the network dynamics in terms of the key control parameters: excitability, coupling strength, and network topology. The analysis reveals the geometry of spontaneous dynamics in electrically coupled networks. Specifically, we show that the location of the minima of a certain continuous function on the surface of the unit cube encodes the most likely activity patterns generated by the network. By studying how the minima of this function evolve under the variation of the coupling strength, we describe the principal transformations in the network dynamics. The minimization problem is also used for the quantitative description of the main dynamical regimes and transitions between them. In particular, for the weak and strong coupling regimes, we present asymptotic formulae for the network activity rate as a function of the coupling strength and the degree of the network. The variational analysis is complemented by the stability analysis of the synchronous state in the strong coupling regime. The stability estimates reveal the contribution of the network connectivity and the properties of the cycle subspace associated with the graph of the network to its synchronization properties. This work is motivated by the experimental and modeling studies of the ensemble of neurons in the Locus Coeruleus, a nucleus in the brainstem involved in the regulation of cognitive performance and behavior.

## 1 Introduction

Direct electrical coupling through gap-junctions is a common way of communication between neurons, as well as between cells of the heart, pancreas, and other physiological systems [10]. Electrical synapses are important for synchronization of the network activity, wave propagation, and pattern formation in neuronal networks. A prominent example of a gap-junctionally coupled network, whose dynamics is thought to be important for cognitive processing, is a group of neurons in the Locus Coeruleus (LC), a nucleus in the brainstem [1, 4, 37]. Electrophysiological studies of the animals performing a visual discrimination test show that the rate and the pattern of activity of the LC network correlate with the cognitive performance [38]. Specifically, the periods of the high spontaneous activity correspond to the periods of poor performance, whereas the periods of low synchronized activity coincide with good performance. Based on the physiological properties of the LC network, it was proposed that the transitions between the periods of high and low network activity are due to the variations in the strength of coupling between the LC neurons [38]. This hypothesis motivates the following dynamical problem: to study how the dynamics of electrically coupled networks depends on the coupling strength. This question is the focus of the present work.

The dynamics of an electrically coupled network depends on the properties of the attractors of the local dynamical systems and the interactions between them. Following [38], we assume that the individual neurons in the LC network are spontaneously active. Specifically, we model them with excitable dynamical systems forced by small noise. We show that depending on the strength of electrical coupling, there are three main regimes of the network dynamics: uncorrelated spontaneous firing (weak coupling), formation of clusters and waves (intermediate coupling), and synchrony (strong coupling). The qualitative features of these regimes are independent from the details of the models of the individual neurons and network topology. Using the center manifold reduction [8, 18] and the Freidlin-Wentzell large deviation theory [14], we derive a variational problem, which provides a useful geometric interpretation for various patterns of spontaneous activity. Specifically, we show that the location of the minima of a certain continuous function on the surface of the unit cube encodes the most likely activity patterns generated by the network. By studying the evolution of the minima of this function under the variation of the control parameter (coupling strength), we identify the principal transformations in the network dynamics. The minimization problem is also used for the quantitative description of the main dynamical regimes and transitions between them. In particular, for the weak and strong coupling regimes, we present asymptotic formulae for the activity rate as a function of the coupling strength and the degree of the network. The variational analysis is complemented by the stability analysis of the synchronous state in the strong coupling regime. In analyzing various aspects of the network dynamics, we pay special attention to the role of the structural properties of the network in shaping its dynamics. 
We show that in weakly coupled networks, only very rough structural properties of the underlying graph matter, whereas in the strong coupling regime, the finer features, such as the algebraic connectivity and the properties of the cycle subspace associated with the graph of the network, become important. In summary, this paper presents a comprehensive analysis of electrically coupled networks of excitable cells in the presence of noise. It complements the existing studies of related deterministic networks of electrically coupled oscillators (see, e.g., [11, 19, 26, 33] and references therein).

The outline of the paper is as follows. In Section 2, we formulate the biophysical model of the LC network. Section 3 presents numerical experiments elucidating the principal features of the network dynamics. In Section 4, we reformulate the problem in terms of the bifurcation properties of the local dynamical systems and the properties of the linear coupling operator. We then introduce the variational problem, whose analysis explains the main dynamical regimes of the coupled system. In Section 5, we analyze the stability of the synchronous dynamics in the strong coupling regime, using fast-slow decomposition. The results of this work are summarized in Section 6.

## 2 The model

### 2.1 The single cell model

According to the dynamical mechanism underlying action potential generation, conductance-based models of neurons are divided into Type I and Type II classes [35]. The former assumes that the model is near the saddle-node bifurcation, while the latter is based on the Andronov-Hopf bifurcation. Electrophysiological recordings of the LC neurons exhibit features that are consistent with the Type I excitability. The existing biophysical models of LC neurons use Type I action potential generating mechanism [2, 5]. In accord with these experimental and modeling studies, we use a generic Type I conductance-based model to simulate the dynamics of the individual LC neuron

$$C\dot{v} = -I_{\mathrm{ion}}(v,n) + \sigma\dot{w}, \tag{2.1}$$

$$\dot{n} = \frac{n_\infty(v) - n}{\tau(v)}. \tag{2.2}$$

Here, the dynamical variables $v$ and $n$ are the membrane potential and the activation of the potassium current, respectively, and $C$ stands for the membrane capacitance. The ionic currents $I_{\mathrm{ion}}(v,n)$ are modeled using the Hodgkin-Huxley formalism (see Appendix for the definitions of the functions and parameter values used in (2.1) and (2.2)). A small Gaussian white noise $\sigma\dot{w}$ is added to the right-hand side of (2.1) to simulate random synaptic input and other possible fluctuations affecting the system's dynamics. Without noise ($\sigma=0$), the system is in the excitable regime. For $\sigma>0$, it exhibits spontaneous spiking. The frequency of the spontaneous firing depends on the proximity of the deterministic system to the saddle-node bifurcation and on the noise intensity. A typical trajectory of (2.1) and (2.2) stays in a small neighborhood of the stable equilibrium most of the time (Fig. 1a). Occasionally, it leaves the vicinity of the fixed point to make a large excursion in the phase plane and then returns to the neighborhood of the steady state (Fig. 1a). These dynamics generate a train of random spikes in the voltage time series (Fig. 1b).
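The qualitative picture just described can be reproduced with a minimal caricature of Type I excitability. The sketch below uses a quadratic integrate-and-fire model rather than the conductance-based equations (2.1) and (2.2), whose ionic functions are given in the Appendix; all parameter values (the applied current `I`, the threshold, and the reset value) are illustrative choices, not taken from this paper.

```python
import numpy as np

def simulate_qif(v0, I=-0.1, sigma=0.0, v_th=10.0, v_reset=-10.0,
                 dt=1e-3, T=50.0, seed=0):
    """Euler-Maruyama integration of a quadratic integrate-and-fire
    caricature of Type I excitability: dv = (v**2 + I) dt + sigma dW.
    For I < 0 the rest state is v = -sqrt(-I); a spike is a crossing
    of v_th followed by a reset to v_reset."""
    rng = np.random.default_rng(seed)
    v, spikes = v0, 0
    for _ in range(int(T / dt)):
        v += (v * v + I) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes += 1
            v = v_reset
    return v, spikes

v_rest = -np.sqrt(0.1)                              # stable rest state
v_end, n_spk = simulate_qif(v0=v_rest, sigma=0.0)   # excitable: no spikes
v_end2, n_spk2 = simulate_qif(v0=1.0, sigma=0.0)    # suprathreshold kick
```

Without noise, the cell started at rest never fires, while a suprathreshold initial condition produces exactly one spike before relaxing back to rest; with $\sigma>0$, threshold crossings driven by noise generate a random spike train as in Fig. 1b.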

In neuroscience, the (average) firing rate provides a convenient measure of activity of neural cells and neuronal populations. It is important to know how the firing rate depends on the parameters of the model. In this paper, we study the factors determining the rate of firing in an electrically coupled network of neurons. However, before setting out to study the network dynamics, it is instructive to discuss the behavior of the single neuron model first. To this end, we use the center-manifold reduction to approximate (2.1) and (2.2) by a one-dimensional system:

$$\dot{z} = -U'(z) + \tilde\sigma\dot{w}_t, \qquad U(z) = \mu z - \frac{1}{3}z^3 + \frac{2}{3}\mu^{3/2}, \tag{2.3}$$

where $z$ is the rescaled projection of $(v,n)$ onto a slow manifold, $\mu$ is the distance to the saddle-node bifurcation, and $\tilde\sigma$ is the noise intensity after rescaling. We postpone the details of the center-manifold reduction until we analyze a more general network model in §4.1.

The time between two successive spikes in the voltage time series corresponds to the first time the trajectory of (2.3), starting near the stable equilibrium $z_0=-\sqrt{\mu}$, overcomes the potential barrier at $z=\sqrt{\mu}$. The large deviation estimates (cf. [14]) yield the logarithmic asymptotics of the first crossing time

$$\lim_{\tilde\sigma\to 0}\tilde\sigma^2\ln \mathbb{E}_{z_0}\tau = 2U(\sqrt{\mu}) = \frac{4\mu^{3/2}}{3} \;\Rightarrow\; \mathbb{E}_{z_0}\tau \asymp \exp\left\{\frac{4\mu^{3/2}}{3\tilde\sigma^2}\right\}, \tag{2.4}$$

where $\mathbb{E}_{z_0}$ stands for the expected value with respect to the probability measure generated by the random process (2.3) with initial condition $z_0$. Throughout this paper, we use $\asymp$ to denote logarithmic asymptotics. It is also known that the first exit time is distributed exponentially, as shown in Fig. 2 (cf. [12]).
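As a consistency check of (2.3) and (2.4), the short computation below (plain NumPy, with an arbitrary value of $\mu$) verifies that the normalizing constant makes $U$ vanish at the stable state $z=-\sqrt{\mu}$ and that the barrier height relative to the rest state is $\frac{4}{3}\mu^{3/2}$, the exponent appearing in (2.4).

```python
import numpy as np

mu = 0.25  # distance to the saddle-node bifurcation (arbitrary test value)

def U(z):
    # Potential of (2.3): U(z) = mu*z - z**3/3 + (2/3)*mu**(3/2)
    return mu * z - z**3 / 3.0 + (2.0 / 3.0) * mu**1.5

# Critical points solve U'(z) = mu - z**2 = 0, i.e. z = -sqrt(mu) (stable
# minimum) and z = +sqrt(mu) (the barrier the trajectory must overcome).
z_stable, z_barrier = -np.sqrt(mu), np.sqrt(mu)
barrier_height = U(z_barrier) - U(z_stable)
```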

Equation (2.4) implies that the statistics of spontaneous spiking of a single cell is determined by the distance of the neuronal model (2.1) and (2.2) to the saddle-node bifurcation and the intensity of noise. Below we show that, in addition to these two parameters, the strength and topology of coupling are important factors determining the firing rate of the coupled population.

### 2.2 The electrically coupled network

The network model includes $n$ cells, whose intrinsic dynamics is defined by (2.1) and (2.2), coupled by gap-junctions. The gap-junctional current that Cell $i$ receives from the other cells in the network is given by

$$I^{(i)}_c = g\sum_{j=1}^{n} a_{ij}\left(v^{(j)} - v^{(i)}\right), \tag{2.5}$$

where $g>0$ is the gap-junction conductance and

$$a_{ij} = \begin{cases} 1, & \text{Cell } i \text{ and Cell } j \text{ are connected},\\ 0, & \text{otherwise}, \end{cases} \qquad a_{ii}=0,\; (i,j)\in[n]^2.$$

The adjacency matrix $A=(a_{ij})$ defines the network connectivity. By adding the coupling current (2.5) to the right-hand side of the voltage equation (2.1) and combining the equations for all neurons in the network, we arrive at the following model

$$C\dot{v}^{(i)} = -I_{\mathrm{ion}}(v^{(i)}, n^{(i)}) + g\sum_{j=1}^{n} a_{ij}\left(v^{(j)} - v^{(i)}\right) + \sigma\dot{w}^{(i)}, \tag{2.6}$$

$$\dot{n}^{(i)} = \frac{n_\infty(v^{(i)}) - n^{(i)}}{\tau(v^{(i)})}, \tag{2.7}$$

where $w^{(i)}$, $i\in[n]$, are independent copies of the standard Brownian motion.

The network topology is an important parameter of the model (2.6) and (2.7). The following terminology and constructions from the algebraic graph theory [6] will be useful for studying the role of the network structure in shaping its dynamics. Let $G=(V(G), E(G))$ denote the graph of interactions between the cells in the network. Here, $V(G)$ and $E(G)$ denote the sets of vertices (i.e., cells) and edges (i.e., the pairs of connected cells), respectively. Throughout this paper, we assume that $G$ is a connected graph. For each edge $e\in E(G)$, we declare one of the vertices to be the positive end (head) of $e$, and the other to be the negative end (tail). Thus, we assign an orientation to each edge from its tail to its head. The coboundary matrix of $G$ is defined as follows (cf. [6])

$$H = (h_{ij})\in\mathbb{R}^{m\times n}, \qquad h_{ij} = \begin{cases} 1, & v_j \text{ is a positive end of } e_i,\\ -1, & v_j \text{ is a negative end of } e_i,\\ 0, & \text{otherwise}. \end{cases} \tag{2.8}$$

Let $\tilde G$ be a spanning tree of $G$, i.e., a connected subgraph of $G$ such that $V(\tilde G)=V(G)$, and there are no cycles in $\tilde G$ [6]. Without loss of generality, we assume that

$$E(\tilde G) = \{e_1, e_2, \dots, e_{n-1}\}. \tag{2.9}$$

Denote the coboundary matrix of $\tilde G$ by $\tilde H$.

Matrix

$$L = H^T H \tag{2.10}$$

is called the graph Laplacian of $G$. The Laplacian is independent of the choice of orientation of the edges that was used in the definition of $H$ [6]. Alternatively, the Laplacian can be defined as

$$L = D - A, \tag{2.11}$$

where $D=\mathrm{diag}(\deg v_1, \deg v_2, \dots, \deg v_n)$ is the degree matrix and $A$ is the adjacency matrix of $G$.
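The equivalence of the two definitions, (2.10) and (2.11), can be checked on a small example. The graph and its edge orientation below are arbitrary illustrations; as noted above, $H^T H$ does not depend on the chosen orientation.

```python
import numpy as np

# An arbitrary oriented connected graph on 4 vertices: edges as (tail, head).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)

H = np.zeros((m, n))                  # coboundary matrix, cf. (2.8)
for i, (tail, head) in enumerate(edges):
    H[i, head], H[i, tail] = 1.0, -1.0

A = np.zeros((n, n))                  # adjacency matrix
for tail, head in edges:
    A[tail, head] = A[head, tail] = 1.0
D = np.diag(A.sum(axis=1))            # degree matrix

L_coboundary = H.T @ H                # definition (2.10)
L_degree = D - A                      # definition (2.11)
```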

Let

$$\lambda_1(L) \le \lambda_2(L) \le \dots \le \lambda_n(L)$$

denote the eigenvalues of $L$ arranged in increasing order counting multiplicity. The spectrum of the graph Laplacian captures many structural properties of the network (cf. [6, 7, 9]). In particular, the first eigenvalue of $L$, $\lambda_1(L)=0$, is simple if and only if the graph $G$ is connected [16]. The second eigenvalue $\lambda_2(L)$ is called the algebraic connectivity of $G$, because it yields a lower bound for the edge and the vertex connectivity of $G$ [16]. The algebraic connectivity is important for a variety of combinatorial, probabilistic, and dynamical aspects of the network analysis. In particular, it is used in the studies of the graph expansion [22], random walks [7], and synchronization of dynamical networks [17, 29].

Next, we introduce several examples of the network connectivity, including nearest-neighbor arrays of varying degree and a pair of equal-degree symmetric and random graphs. These examples will be used to illustrate the role of the network topology in pattern formation.

###### Example 2.1.

The nearest-neighbor coupling scheme is an example of local connectivity (Fig. 3a). For simplicity, we consider a one-dimensional array. For higher-dimensional lattices, the nearest-neighbor coupling is defined similarly. In this configuration, each cell in the interior of the array is coupled to its two nearest neighbors. This leads to the following expression for the coupling current:

$$I^{(j)}_c = g\left(v^{(j+1)} - v^{(j)}\right) + g\left(v^{(j-1)} - v^{(j)}\right), \qquad j=2,3,\dots,n-1.$$

The coupling currents for the cells on the boundary are given by

$$I^{(1)}_c = g\left(v^{(2)} - v^{(1)}\right) \quad\text{and}\quad I^{(n)}_c = g\left(v^{(n-1)} - v^{(n)}\right).$$

The corresponding graph Laplacian is

$$L = \begin{pmatrix} 1 & -1 & 0 & \dots & 0 & 0\\ -1 & 2 & -1 & \dots & 0 & 0\\ \dots & \dots & \dots & \dots & \dots & \dots\\ 0 & 0 & 0 & \dots & -1 & 1 \end{pmatrix}. \tag{2.12}$$
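The spectrum of (2.12) is explicitly computable, which will be convenient when discussing the algebraic connectivity below. The check here uses the standard closed form $\lambda_k = 2 - 2\cos(\pi k/n)$, $k=0,\dots,n-1$, for the path graph, a known fact rather than a result of this paper.

```python
import numpy as np

def path_laplacian(n):
    """Graph Laplacian (2.12) of the 1D nearest-neighbor array."""
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return L

n = 20
lam = np.linalg.eigvalsh(path_laplacian(n))            # ascending eigenvalues
lam_exact = 2.0 - 2.0 * np.cos(np.pi * np.arange(n) / n)
```

Note that the algebraic connectivity $\lambda_2 = 2 - 2\cos(\pi/n)$ is small and shrinks as $n$ grows, a point that matters in the discussion of synchronization in §3.2.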
###### Example 2.2.

The $k$-nearest-neighbor coupling scheme is a natural generalization of the previous example. Suppose each cell is coupled to $k$ of its nearest neighbors on each side whenever they exist, or to as many of them as possible otherwise:

$$I^{(j)}_c = \sum_{i=1}^{\min\{k,\,n-j\}} g\left(v^{(j+i)} - v^{(j)}\right) + \sum_{i=1}^{\min\{k,\,j-1\}} g\left(v^{(j-i)} - v^{(j)}\right), \qquad j\in[n], \tag{2.13}$$

where we use the customary convention that a sum is zero if its lower limit exceeds its upper limit. The coupling matrix can be easily derived from (2.13).
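A direct way to obtain the coupling matrix from (2.13) is to assemble the adjacency matrix of the $k$-nearest-neighbor array and apply (2.11); the sketch below (with arbitrary sizes) also confirms the expected interior degree $2k$ and the truncated degrees near the boundary.

```python
import numpy as np

def knn_laplacian(n, k):
    """Laplacian of the 1D array in which each cell couples to up to k
    neighbors on each side (as many as exist near the boundary)."""
    A = np.zeros((n, n))
    for j in range(n):
        for i in range(1, k + 1):
            if j + i < n:               # symmetric edge j <-> j+i
                A[j, j + i] = A[j + i, j] = 1.0
    return np.diag(A.sum(axis=1)) - A   # L = D - A, cf. (2.11)

L = knn_laplacian(10, 3)
```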

###### Example 2.3.

The all-to-all coupling features global connectivity (Fig. 3c):

$$I^{(j)}_c = g\sum_{i=1}^{n}\left(v^{(i)} - v^{(j)}\right), \qquad j=1,2,\dots,n. \tag{2.14}$$

The Laplacian in this case has the following form

$$L = \begin{pmatrix} n-1 & -1 & -1 & \dots & -1\\ -1 & n-1 & -1 & \dots & -1\\ \dots & \dots & \dots & \dots & \dots\\ -1 & -1 & -1 & \dots & n-1 \end{pmatrix}. \tag{2.15}$$
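For the all-to-all graph the spectrum of (2.15) is $\{0, n, \dots, n\}$, so its algebraic connectivity $\lambda_2 = n$ is as large as possible. A quick check (the size is arbitrary):

```python
import numpy as np

n = 8
L = n * np.eye(n) - np.ones((n, n))   # Laplacian (2.15) of the complete graph
lam = np.linalg.eigvalsh(L)           # eigenvalues in ascending order
```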

The graphs in the previous examples have different degrees: ranging from $2$ in Example 2.1 to $n-1$ in Example 2.3. In addition to the degree of the graph, the pattern of connectivity itself is important for the network dynamics. This motivates our next example.

###### Example 2.4.

Consider a pair of graphs of equal degree shown schematically in Fig. 4. The graph in Fig. 4a has symmetric connections. The edges of the graph in Fig. 4b were selected randomly. Both graphs have the same number of nodes and equal degrees.

Graphs with random connections like the one in the last example represent expanders, a class of graphs used in many important applications in mathematics, computer science and other branches of science and technology (cf. [22]). In Section 5 we show that dynamical networks on expanders have very good synchronization properties (see also [29]).

###### Example 2.5.

Let $\{G_n\}$ be a family of graphs on $n$ vertices, with the following property:

$$\lambda_2(G_n) \ge \alpha > 0, \qquad n\in\mathbb{N}. \tag{2.16}$$

Such graphs are called (spectral) expanders [22, 36]. There are known explicit constructions of expanders, including the celebrated Ramanujan graphs [28, 27]. In addition, families of random graphs have good expansion properties. In particular, it is known that

$$\mathrm{Prob}\left\{\lambda_2(G_n) \ge d - 2\sqrt{d-1} - \epsilon\right\} = 1 - o_n(1) \qquad \forall\epsilon>0, \tag{2.17}$$

where $\{G_n\}$ stands for the family of random graphs of degree $d$ on $n$ vertices [15].

## 3 Numerical experiments

The four parameters controlling the dynamics of the biophysical model (2.6) and (2.7) are the excitability, the noise intensity, the coupling strength, and the network topology. Assuming that the system is at a fixed distance from the saddle-node bifurcation, we study the dynamics of the coupled system for sufficiently small noise intensity $\sigma$. Therefore, the two remaining parameters are the coupling strength $g$ and the network topology. We focus on the impact of the coupling strength on the spontaneous dynamics first. At the end of this section, we discuss the role of the network topology. The numerical experiments of this section show that activity patterns generated by the network are effectively controlled by the variations of the coupling strength.

### 3.1 Three phases of spontaneous activity

To measure the activity of the network for different values of the control parameters, we will use the average firing rate, i.e., the number of spikes generated by the network per neuron per unit time. Fig. 5a shows that the activity rate varies significantly with the coupling strength. The three intervals of monotonicity of the activity rate plot reflect three main stages in the network dynamics en route to complete synchrony: weakly correlated spontaneous spiking, formation of clusters and waves, and synchronization. We discuss these regimes in more detail below.

Weakly correlated spontaneous spiking. For sufficiently small $g$, the activity retains the features of spontaneous spiking in the uncoupled population. Fig. 6b shows no significant correlations between the activity of distinct cells in the weakly coupled network. The distributions of the interspike intervals are exponential in both cases (see Fig. 6c,d). There is an important change, however: the rate of firing goes down as $g$ increases while remaining small. This is clearly seen from the graphs in Fig. 5. The decreasing firing rate for very weak coupling can also be noted from the interspike interval distributions in Fig. 6c,d: the density in Fig. 6d has a heavier tail. Thus, weak electrical coupling has a pronounced inhibitory (shunting) effect on the network dynamics: it drains the current from a neuron developing a depolarizing potential and redistributes it among the cells connected to it. This effect is stronger for networks with a greater number of connections. The three plots shown in Fig. 5a correspond to the nearest-neighbor, the $k$-nearest-neighbor, and the all-to-all coupling schemes. Note that the slope at zero is steeper for networks with greater degree.

Coherent structures. For increasing values of $g$, the system develops clusters, short waves, and robust waves (see Fig. 7). The appearance of these spatio-temporal patterns starts in the middle of the first decreasing portion of the firing rate plot in Fig. 5a and continues through the next (increasing) interval of monotonicity. While the patterns in Fig. 7 feature a progressively increasing role of coherence in the system's dynamics, the dynamical mechanisms underlying cluster formation and wave propagation are distinct. Factors A and B below identify two dynamical principles underlying pattern formation in this regime.

Factor A: At the moment when one neuron fires due to a large deviation from the rest state, the neurons connected to it are more likely to be close to the threshold and, therefore, are more likely to fire within a short interval of time.

Factor B: When a neuron fires, it supplies the neurons connected to it with depolarizing current. If the coupling is sufficiently strong, the gap-junctional current triggers action potentials in these cells and the activity propagates through the network.

Factor A follows from the variational interpretation of the spontaneous dynamics in weakly coupled networks, which we develop in Section 4. It is responsible for the formation of clusters and short waves, like those shown in Fig. 7a. To show numerically that Factor A (vs. Factor B) is responsible for the formation of clusters, we modified the model (2.6) and (2.7) in the following way. Once a neuron in the network has crossed the threshold, we turn off the current that it sends to the other neurons in the network until it gets back close to the stable fixed point. We will refer to this model as the modified model (2.6) and (2.7). Numerical results for the modified model in Fig. 8a,b show that clusters are formed as the result of the subthreshold dynamics, i.e., are due to Factor A. Factor B becomes dominant for stronger coupling. It results in robust waves with constant speed of propagation. The mechanism of the wave propagation is essentially deterministic and is well known from the studies of waves in excitable systems (cf. [24]). However, in the presence of noise, the excitation and termination of waves become random (see Fig. 7b,c).

Synchrony. The third interval of monotonicity in the graph of the firing rate vs. the coupling strength is decreasing (see Fig. 5a). It features synchronization, the final dynamical state of the network. In this regime, once one cell crosses the firing threshold, the entire network fires in unison. The distinctive feature of this regime is a rapid decrease of the firing rate for increasing $g$ (see Fig. 5a). The slowdown of firing in the strong coupling regime was studied in [32] (see also [29, 31, 34]). When the coupling is strong, the effect of noise on the network dynamics is diminished by the dissipativity of the coupling operator. The reduced effect of noise results in the decrease of the firing rate. In §5.4, we present analytical estimates characterizing denoising by electrical coupling for the present model.

### 3.2 The role of the network topology

All connected networks of excitable elements (regardless of the connectivity pattern) undergo the three dynamical regimes, which we identified above for weak, intermediate, and strong coupling. The topology becomes important for the quantitative description of the activity patterns. In particular, the topology affects the boundaries between different phases. We first discuss the role of topology in the onset of synchronization. The transition to synchrony corresponds to the beginning of the third phase and can be approximately identified with the location of the point of maximum on the firing rate plot (see Fig. 5a,b). The comparison of the plots for the two nearest-neighbor coupling schemes shows that the onset of synchrony takes place at a smaller value of $g$ for the network with more connections. This illustrates a general trend: networks with a greater number of connections tend to have better synchronization properties. However, the degree is not the only structural property of the graph that affects synchronization. The connectivity pattern is important as well. Fig. 9 shows that a randomly connected network synchronizes faster than its symmetric counterpart of the same degree (cf. Example 2.4). The analysis in §4.4 shows that the point of transition to synchrony can be estimated using the algebraic connectivity of the graph $G$. Specifically, the network is synchronized if $\gamma\lambda_2(G)$ is sufficiently large, where $\gamma$ stands for the coupling strength in the rescaled nondimensional model. The algebraic connectivity is easy to compute numerically. For many graphs with symmetries, including those in Examples 2.1-2.3, the algebraic connectivity is known analytically. On the other hand, there are effective asymptotic estimates of the algebraic connectivity available for certain classes of graphs that are important in applications, such as random graphs [15] and expanders [22]. The algebraic connectivities of the graphs in Examples 2.1-2.2 tend to zero as $n\to\infty$.
Therefore, for such networks one needs to increase the strength of coupling significantly to maintain synchrony as the network grows in size. This situation is typical for symmetric or almost symmetric graphs. In contrast, it is known that for the random graph from Example 2.4 the algebraic connectivity is bounded away from zero (with high probability) as $n\to\infty$ [15, 22]. Therefore, one can guarantee synchronization in dynamical networks on such graphs using finite coupling strength even when the size of the network grows without bound. This counterintuitive property is intrinsic to networks on expanders, sparse well-connected graphs [22, 36]. For a more detailed discussion of the role of network topology in synchronization, we refer the interested reader to [29].
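The contrast described above can be seen numerically: $\lambda_2$ of the one-dimensional array (Example 2.1) decays to zero as $n$ grows, while $\lambda_2$ of the complete graph (Example 2.3) grows like $n$. (The sizes below are arbitrary illustrations.)

```python
import numpy as np

def lam2(L):
    """Algebraic connectivity: the second smallest Laplacian eigenvalue."""
    return np.linalg.eigvalsh(L)[1]

def path_laplacian(n):
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return L

def complete_laplacian(n):
    return n * np.eye(n) - np.ones((n, n))

sizes = (10, 20, 40)
l2_path = [lam2(path_laplacian(n)) for n in sizes]      # decays to zero
l2_full = [lam2(complete_laplacian(n)) for n in sizes]  # equals n
```

A synchronization criterion of the form "$\gamma\lambda_2(G)$ sufficiently large" then forces $\gamma$ to grow with $n$ for the array, but not for graphs whose $\lambda_2$ stays bounded away from zero, such as expanders.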

The discussion in the previous paragraph suggests that connectivity is important in the strong coupling regime. It is interesting that, to a large extent, the dynamics in the weak coupling regime remains unaffected by the connectivity. For instance, the firing rate plots for the random and symmetric networks of equal degree (Example 2.4) shown in Fig. 5b coincide over an interval of $g$ near zero. Furthermore, the plots for the same pair of networks based on the modified model (2.6) and (2.7) are almost identical, regardless of the disparate connectivity patterns underlying these networks. The variational analysis in §4.3 shows that, in the weak coupling regime, to leading order the firing rate of the network depends only on the number of connections between cells. The role of the connectivity pattern in shaping the network dynamics increases in the strong coupling regime.

## 4 The variational analysis of spontaneous dynamics

In this section, we analyze the dynamical regimes of the coupled system (2.6) and (2.7) under the variation of the coupling strength. In §4.1, we derive an approximate model using the center manifold reduction. In §4.2, we relate the activity patterns of the coupled system to the minima of a certain continuous function on the surface of an $n$-dimensional cube. The analysis of the minimization problem for weak, strong, and intermediate coupling is used to characterize the dynamics of the coupled system in these regimes.

### 4.1 The center manifold reduction

In preparation for the analysis of the coupled system (2.6) and (2.7), we approximate it by a simpler system using the center manifold reduction [8, 18]. To this end, we first review the bifurcation structure of the model. Denote the equations governing the deterministic dynamics of a single neuron by

$$\dot{x} = f(x,\mu), \tag{4.1}$$

where $x\in\mathbb{R}^d$, $f$ is a smooth function, and $\mu$ is a small parameter, which controls the distance of (4.1) from the saddle-node bifurcation.

###### Assumption 4.1.

Suppose that at $\mu=0$, the unperturbed problem (4.1) has a nonhyperbolic equilibrium at the origin such that $Df(0,0)$ has a single zero eigenvalue and the rest of the spectrum lies to the left of the imaginary axis. Suppose further that at $\mu=0$ there is a homoclinic orbit entering the origin along the center manifold.

Then, under appropriate nondegeneracy and transversality conditions on the local saddle-node bifurcation at $\mu=0$, for $\mu$ near zero the homoclinic orbit is transformed into either a unique asymptotically stable periodic orbit or a closed invariant curve having two equilibria: a node and a saddle [8] (Fig. 10). Without loss of generality, we assume that the latter case is realized for small positive $\mu$, and the periodic orbit exists for negative $\mu$. Let $\mu>0$ be a sufficiently small fixed number, i.e., (4.1) is in the excitable regime (Fig. 10). For simplicity, we assume that the stable node near the origin is the only attractor of (4.1).

We are now in a position to formulate our assumptions on the coupled system. Consider $n$ local systems (4.1) that are placed at the nodes of the connected graph $G$ and coupled electrically:

$$\dot{X} = F(X,\mu) - g(L\otimes J)X + \sigma(I_n\otimes P)\dot{W}, \tag{4.2}$$

where $X=(x^{(1)}, x^{(2)}, \dots, x^{(n)})$ combines the states of the local systems, $I_n$ is the $n\times n$ identity matrix, $F(X,\mu)=(f(x^{(1)},\mu), \dots, f(x^{(n)},\mu))$, and $L$ is the Laplacian of $G$. Matrix $J$ defines the linear combination of the local variables engaged in coupling. In the neuronal network model above, only the voltage variable enters the coupling. Parameters $g$ and $\sigma$ control the coupling strength and the noise intensity, respectively. $\dot{W}$ is a Gaussian white noise process of appropriate dimension. The local systems are taken to be identical for simplicity. The analysis can be extended to cover nonhomogeneous networks.

We next turn to the center manifold reduction of (4.2). Consider the deterministic system (4.2)$_0$ (the zero subscript refers to $\sigma=0$) for $\mu=g=0$. By our assumptions on the local system (4.1), $Df(0,0)$ has a nontrivial kernel. Denote

$$e\in\ker Df(0,0)\setminus\{0\} \quad\text{and}\quad p\in\ker\left(Df(0,0)\right)^T \quad\text{such that}\quad p^T e = 1. \tag{4.3}$$

By the center manifold theorem, there is a neighborhood of the origin in the phase space of (4.2)$_0$, and positive constants $\mu_0$ and $g_0$, such that for $|\mu|\le\mu_0$ and $0\le g\le g_0$, there exists in this neighborhood an attracting locally invariant $n$-dimensional slow manifold. The trajectories that remain in the neighborhood for sufficiently long time can be approximated by those lying in the slow manifold. Thus, the dynamics of (4.2)$_0$ can be reduced to the slow manifold, whose dimension is $d$ times smaller than that of the phase space of (4.2)$_0$. The center manifold reduction is standard. Its justification relies on the Lyapunov-Schmidt method and Taylor expansions (cf. [8]). Formally, the reduced system is obtained by projecting (4.2)$_0$ onto the center subspace of (4.2)$_0$ for $\mu=g=0$ (see [25]):

$$\dot{y} = a_1 y^2 - a_2\mu - a_3 g L y + O(|y|^3, \mu^2, g^2), \tag{4.4}$$

where $y\in\mathbb{R}^n$ and $y^2$ is understood componentwise, provided that the following nondegeneracy conditions hold

$$a_1 = \frac{1}{2}\left.\frac{\partial^2}{\partial u^2} p^T f(ue, 0)\right|_{u=0} \ne 0, \tag{4.5}$$

$$a_2 = -\left.\frac{\partial}{\partial\mu} p^T f(0,\mu)\right|_{\mu=0} \ne 0, \tag{4.6}$$

$$a_3 = p^T J e \ne 0. \tag{4.7}$$

Conditions (4.5) and (4.6) are the nondegeneracy and transversality conditions of the saddle-node bifurcation in the local system (4.1). Condition (4.7) guarantees that the projection of the coupling onto the center subspace is nontrivial. All conditions are open. Without loss of generality, we assume that the coefficients $a_1$, $a_2$, and $a_3$ are positive.

Next, we include the random perturbation in the reduced model. Note that near the saddle-node bifurcation ($0<\mu\ll 1$), the vector field of (4.2)$_0$ is much stronger in the directions transverse to the slow manifold than in the tangential directions. The results of the geometric theory of randomly perturbed fast-slow systems imply that the trajectories of (4.2) with small positive $\sigma$ that start close to the node of (4.2)$_0$ remain in a small neighborhood of the slow manifold on finite intervals of time with overwhelming probability (see [3] for specific estimates). To obtain the leading order approximation of the stochastic system (4.2) near the slow manifold, we project the random perturbation onto the center subspace of (4.2)$_0$ for $\mu=g=0$ and add the resultant term to the reduced equation (4.4):

$$\dot{y} = a_1 y^2 - a_2\mu - a_3 g L y + \sigma B\dot{W} + \dots, \qquad B = I_n\otimes\left(p^T P\right)\in\mathbb{R}^{n\times nd}. \tag{4.8}$$

We replace $B\dot{W}$ by the identically distributed $|p^T P|\dot{w}$, where $\dot{w}$ is a white noise process in $\mathbb{R}^n$ and $|p^T P|$ stands for the Euclidean norm of $p^T P$. After rescaling the resultant equation and ignoring the higher order terms, we arrive at the following reduced model

$$\dot{z} = z^2 - \mathbf{1}_n - \gamma L z + \sigma\dot{w}, \tag{4.9}$$

where $w$ stands for a standard Brownian motion in $\mathbb{R}^n$ and $\gamma$ is the rescaled coupling strength. Here, with a slight abuse of notation, we continue to use $\sigma$ to denote the small noise parameter in the rescaled system. In the remainder of this paper, we analyze the reduced model (4.9).
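A minimal Euler-Maruyama integrator for the reduced model (4.9) is sketched below; the step size, horizon, and test configuration are illustrative choices. With $\sigma=0$, the rest state $z=-\mathbf{1}_n$ is asymptotically stable (the local drift vanishes at $-1$ and $L$ annihilates constant vectors), which gives a deterministic sanity check.

```python
import numpy as np

def simulate_reduced(z0, L, gamma, sigma, dt=1e-2, T=20.0, seed=0):
    """Euler-Maruyama integration of the reduced model (4.9):
    dz = (z**2 - 1 - gamma*L@z) dt + sigma dW, with z**2 componentwise."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    for _ in range(int(T / dt)):
        z = z + (z**2 - 1.0 - gamma * (L @ z)) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(z.size)
    return z

# Noise-free check on the nearest-neighbor array (2.12): trajectories
# started near the rest state relax to z = -1 componentwise.
n = 5
Lpath = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Lpath[0, 0] = Lpath[-1, -1] = 1.0
z_final = simulate_reduced(np.linspace(-1.4, -0.6, n), Lpath,
                           gamma=1.0, sigma=0.0)
```

With $\sigma>0$, the same integrator produces the noise-driven exits from the basin of the rest state that the variational analysis below characterizes.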

### 4.2 The exit problem

In this subsection, the problem of identifying most likely dynamical patterns generated by (4.2) is reduced to a minimization problem for a smooth function on the surface of the unit cube.

Consider the initial value problem for (4.9)

$$\dot{z} = f(z) - \gamma L z + \sigma\dot{w}, \qquad L = H^T H, \qquad z(0) = z_0\in D\subset\mathbb{R}^n, \tag{4.10}$$

where

$$f(z) = \left(f(z_1), f(z_2), \dots, f(z_n)\right), \qquad f(\xi) = \xi^2 - 1, \tag{4.11}$$

and

$$D = \left\{z = (z_1, z_2, \dots, z_n):\; -2-b < z_i < 1,\; i\in[n]\right\}, \tag{4.12}$$

where the auxiliary parameter $b>0$ will be specified later. Let

$$\partial^+ D = \left\{z = (z_1, z_2, \dots, z_n):\; z\in\bar{D} \;\&\; \left(\exists i\in[n]:\; z_i = 1\right)\right\} \tag{4.13}$$

denote a subset of the boundary of $D$, $\partial D$. If $z\in\partial^+ D$, then at least one of the neurons in the network is at the firing threshold. It will be shown below that the trajectories of (4.10) exit from $D$ through $\partial^+ D$ with probability approaching $1$ as $\sigma\to 0$, provided $b$ is sufficiently large. (The positive parameter $b$ in the definition of $D$ (cf. (4.12)) is used to exclude the possibility of exit from $D$ through $\partial D\setminus\partial^+ D$.) Therefore, the statistics of the first exit time

$$\tau = \inf\left\{t>0:\; z(t)\in\partial D\right\} \tag{4.14}$$

and the distribution of the location of the exit points characterize the statistics of the interspike intervals and the most probable firing patterns of the network, respectively. The Freidlin-Wentzell theory of large deviations [14] yields the asymptotics of $\tau$ and $z(\tau)$ for small $\sigma$.

To apply the large deviation estimates to the problem at hand, we rewrite (4.10) as a randomly perturbed gradient system

\dot z = -\frac{\partial}{\partial z} U_\gamma(z) + \sigma\dot w, (4.15)

where

U_\gamma(z) = \frac{\gamma}{2}\langle Hz, Hz\rangle + \Phi(z), \qquad \Phi(z) = \sum_{i=1}^n F(z_i), \qquad F(\xi) = \frac{2}{3} + \xi - \frac{1}{3}\xi^3, (4.16)

where $\langle\cdot,\cdot\rangle$ stands for the inner product in the Euclidean space of the appropriate dimension. The additive constant $2/3$ in the definition of $F$ is used to normalize to zero the value of the potential at the local minimum $z = -\mathbf{1}_n$.
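The gradient structure (4.15)-(4.16) is easy to verify numerically: the negative gradient of $U_\gamma$ must reproduce the drift $f(z) - \gamma Lz$ of (4.10). Below is a finite-difference sketch; the graph, test point, and value of $\gamma$ are arbitrary illustrative choices.

```python
import numpy as np

# Laplacian of the path graph on 3 vertices (illustrative choice)
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])

def F(xi):                      # F(xi) = 2/3 + xi - xi^3/3, so F'(xi) = 1 - xi^2 = -f(xi)
    return 2.0/3.0 + xi - xi**3/3.0

def U(z, gamma):                # U_gamma(z) = (gamma/2)<Hz, Hz> + sum_i F(z_i); <Hz,Hz> = z^T L z
    return 0.5 * gamma * z @ L @ z + F(z).sum()

def grad_U(z, gamma, h=1e-6):   # central finite-difference gradient
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z); e[i] = h
        g[i] = (U(z + e, gamma) - U(z - e, gamma)) / (2 * h)
    return g

z = np.array([0.3, -0.7, 0.1]); gamma = 0.4
drift = z**2 - 1.0 - gamma * (L @ z)          # right-hand side of (4.10)
print(np.allclose(-grad_U(z, gamma), drift, atol=1e-6))
```

The check also confirms the normalization: $F(-1) = 0$, so $U_\gamma(-\mathbf{1}_n) = 0$.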

The following theorem summarizes the implications of the large deviation theory for (4.15).

###### Theorem 4.2.

Let $\bar z^{(i)}$, $i = 1,2,\dots,k$, denote the points of minima of $U_\gamma$ on $\partial D$:

U_\gamma(\bar z^{(i)}) = \bar U := \min_{z\in\partial D} U_\gamma(z), \quad i = 1,2,\dots,k,

and $\bar Z = \{\bar z^{(1)}, \bar z^{(2)},\dots, \bar z^{(k)}\}$. Then for any $\delta>0$ and $h>0$,

A) \lim_{\sigma\to 0} P_{z_0}\{\rho(z(\tau), \bar Z) < \delta\} = 1, (4.17)
B) \lim_{\sigma\to 0} \sigma^2\ln E_{z_0}\tau = \bar U, (4.18)
C) \lim_{\sigma\to 0} P_{z_0}\{\exp\{\sigma^{-2}(\bar U - h)\} < \tau < \exp\{\sigma^{-2}(\bar U + h)\}\} = 1, (4.19)

where $\rho$ stands for the distance in $\mathbb{R}^n$.

The statements A)-C) can be shown by adapting the proofs of Theorems 2.1, 3.1, and 4.1 of Chapter 4 of [14] to the case of the action functional with multiple minima.

Theorem 4.2 reduces the exit problem for (4.10) to the minimization problem

U_\gamma(z)\to\min, \quad z\in\partial D. (4.20)

In the remainder of this section, we study (4.20) for the weak, strong, and intermediate coupling strength.

### 4.3 The weak coupling regime

In this subsection, we study the minima of $U_\gamma$ on $\partial D$ for small $\gamma>0$. First, we locate the points of minima of $U_0$ (cf. Lemma 4.3). Then, using the Implicit Function Theorem, we continue them for small $\gamma>0$ (cf. Theorem 4.4).

###### Lemma 4.3.

Let $b>0$ in the definition of $D$ (4.12) be fixed. The minimum of $U_0$ on $\partial D$ is achieved at the points

\xi^i = (\xi^i_1, \xi^i_2,\dots, \xi^i_n), \qquad \xi^i_j = \begin{cases} 1, & j = i,\\ -1, & j\neq i,\end{cases} \quad j\in[n]. (4.21)

The minimal value of $U_0$ on $\partial D$ is

\bar U := \min_{z\in\partial D} U_0(z) = \frac{4}{3}. (4.22)

Proof. Denote

\partial^+_i D := \partial D\cap\{ z = (z_1, z_2,\dots, z_n)\in\mathbb{R}^n : z_i = 1 \},
\partial^-_i D := \partial D\cap\{ z = (z_1, z_2,\dots, z_n)\in\mathbb{R}^n : z_i = -2 - b \},

and $\partial^- D = \bigcup_{i\in[n]}\partial^-_i D$.

Consider the restriction of $U_0$ on $\partial^+_1 D$:

\tilde U_0(y) := U_0((1, y)) = \frac{4}{3} + \sum_{i=1}^{n-1} F(y_i), (4.23)

where $y = (y_1, y_2,\dots, y_{n-1})$. The gradient of $\tilde U_0$ is equal to

\frac{\partial}{\partial y}\tilde U_0(y) = -\tilde f(y), \qquad \tilde f(y) := (f(y_1), f(y_2),\dots, f(y_{n-1}))^T.

The definition of $f$ (4.11) implies that $\tilde U_0$ restricted to $\partial^+_1 D$ has a unique critical point at $y = -\mathbf{1}_{n-1}$ and $\tilde U_0(-\mathbf{1}_{n-1}) = 4/3$. On the other hand, on the boundary of $\partial^+_1 D$, $\partial\partial^+_1 D$, the minimum of $U_0$ satisfies

\min_{z\in\partial\partial^+_1 D} U_0(z) > \frac{4}{3},

for any $b>0$ in (4.12).

Likewise, as follows from the definitions of $U_0$ (4.16) and $D$ (4.12), $U_0(z) > 4/3$ for $z\in\partial^- D$, for any choice of $b>0$ in (4.12). Thus, $\xi^1$ minimizes $U_0$ over $\partial^+_1 D$. The lemma is proved by repeating the above argument for the remaining faces $\partial^+_i D$, $i = 2,3,\dots,n$.
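Lemma 4.3 admits a quick numerical sanity check: $U_0(\xi^1) = 4/3$, and random samples from the face $z_1 = 1$ never fall below this value. The dimension $n$, the parameter $b$, and the sample size below are arbitrary illustrative choices.

```python
import numpy as np

def F(xi):                          # F(xi) = 2/3 + xi - xi^3/3, cf. (4.16)
    return 2.0/3.0 + xi - xi**3/3.0

def U0(z):                          # U_0(z) = Phi(z) = sum_i F(z_i)
    return F(np.asarray(z)).sum()

n, b = 4, 1.0
xi1 = np.array([1.0] + [-1.0] * (n - 1))      # the minimizer xi^1 from (4.21)
print(abs(U0(xi1) - 4.0/3.0) < 1e-9)          # minimal value 4/3, cf. (4.22)

# Random points on the face z_1 = 1: U_0 stays at or above 4/3,
# since F >= 0 on [-2-b, 1] with F(-1) = 0 and F(1) = 4/3.
rng = np.random.default_rng(1)
samples = rng.uniform(-2.0 - b, 1.0, size=(1000, n)); samples[:, 0] = 1.0
vals = F(samples).sum(axis=1)
print(np.all(vals >= 4.0/3.0 - 1e-9))
```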

###### Theorem 4.4.

Suppose $b>0$ in (4.12) is sufficiently large. There exists $\gamma_0>0$ such that for $\gamma\in(0,\gamma_0)$, on each face $\partial^+_i D$, $i\in[n]$, $U_\gamma$ achieves its minimum at

 z=ϕi(γ),

where $\phi_i$ is a smooth function such that

\phi_i(0) = -\mathbf{1}_{n-1}, \qquad \frac{d}{d\gamma}\phi_i(\gamma)\Big|_{\gamma=0} = -l_i, (4.24)

and $l_i$ is the $i$th column of the graph Laplacian $L$ after deleting the $i$th entry. The equations in (4.24) are written using the following local coordinates for $\partial^+_i D$:

(y_1, y_2,\dots, y_{n-1})\mapsto (y_1,\dots, y_{i-1}, 1, y_i,\dots, y_{n-1})\in\partial^+_i D\subset\mathbb{R}^n.

Moreover, the minimal value of $U_\gamma$ on $\partial^+_i D$ is given by

\tilde U_\gamma(\phi_i(\gamma)) = \frac{4}{3} + \gamma\,\mathrm{deg}(v_i) + O(\gamma^2). (4.25)

Consequently,

u_\gamma := \min_{z\in\partial D} U_\gamma = \frac{4}{3} + \gamma\min_{k\in[n]}\mathrm{deg}(v_k) + O(\gamma^2). (4.26)

Proof. Let $\tilde U_\gamma$ denote the restriction of $U_\gamma$ on $\partial^+_1 D$:

\tilde U_\gamma(y) = \frac{\gamma}{2}\langle Hz(y), Hz(y)\rangle + \sum_{i=1}^{n-1} F(y_i) + \frac{4}{3}, (4.27)

where $z(y) = (1, y)$.

Next, we compute the gradient of $\tilde U_\gamma$:

\frac{\partial}{\partial y}\tilde U_\gamma(y) = \frac{\gamma}{2}\frac{\partial}{\partial y}\langle Hz(y), Hz(y)\rangle - \tilde f(y), (4.28)

where $\tilde f(y) = (f(y_1), f(y_2),\dots, f(y_{n-1}))^T$. Further,

\frac{\partial}{\partial y}\langle Hz(y), Hz(y)\rangle = 2(L_1 y + l_1), (4.29)

where $L_1$ stands for the matrix obtained from $L$ by deleting the first row and the first column. By plugging (4.29) in (4.28), we have

\frac{\partial}{\partial y}\tilde U_\gamma(y) = \gamma(L_1 y + l_1) - \tilde f(y). (4.30)

The equation for the critical points has the following form

R(\gamma, y) := \gamma(L_1 y + l_1) - \tilde f(y) = 0. (4.31)

Note that

R(0, -\mathbf{1}_{n-1}) = 0, \qquad \frac{\partial}{\partial\gamma}R(\gamma, y)\Big|_{\gamma=0,\ y=-\mathbf{1}_{n-1}} = -L_1\mathbf{1}_{n-1} + l_1 = 2l_1\neq 0. (4.32)

Above we used the relation

-L_1\mathbf{1}_{n-1} = l_1, (4.33)

which follows from the fact that the rows of $L$ sum to zero.

By the Implicit Function Theorem, for small $\gamma\ge 0$, the unique solution of (4.31) is given by the smooth function $y = \phi_1(\gamma)$ that satisfies

\phi_1(0) = -\mathbf{1}_{n-1}, \qquad \frac{d}{d\gamma}\phi_1(\gamma)\Big|_{\gamma=0} = -\Big[\frac{\partial}{\partial y}R(\gamma, y)\Big|_{\gamma=0,\ y=-\mathbf{1}_{n-1}}\Big]^{-1}\Big[\frac{\partial}{\partial\gamma}R(\gamma, y)\Big|_{\gamma=0,\ y=-\mathbf{1}_{n-1}}\Big]. (4.34)

Taking into account

\frac{\partial}{\partial\gamma}R(\gamma, y)\Big|_{\gamma=0,\ y=-\mathbf{1}_{n-1}} = 2l_1 \quad\text{and}\quad \frac{\partial}{\partial y}R(\gamma, y)\Big|_{\gamma=0,\ y=-\mathbf{1}_{n-1}} = 2I_{n-1},

from (4.34) we have

\frac{d}{d\gamma}\phi_1(\gamma)\Big|_{\gamma=0} = -l_1. (4.35)

This shows (4.24). To show (4.25), we use the Taylor expansion of $\tilde U_\gamma(\phi_1(\gamma))$:

\tilde U_\gamma(\phi_1(\gamma)) = \tilde U_0(-\mathbf{1}_{n-1}) + \gamma\Big[\frac{\partial}{\partial\gamma}\tilde U_\gamma(y) + \Big\langle\frac{\partial}{\partial y}\tilde U_\gamma(y), \frac{d}{d\gamma}\phi_1(\gamma)\Big\rangle\Big]_{\gamma=0,\ y=-\mathbf{1}_{n-1}} + O(\gamma^2) = \frac{4}{3} + \frac{\gamma}{2}\big\langle H(1, -\mathbf{1}_{n-1}), H(1, -\mathbf{1}_{n-1})\big\rangle + O(\gamma^2) = \frac{4}{3} + \gamma\,\mathrm{deg}(v_1) + O(\gamma^2). (4.36)

By choosing $b$ in (4.12) large enough, one can ensure that for $\gamma\in[0,\gamma_0]$, $\tilde U_\gamma(\phi_1(\gamma))$ remains smaller than the values of $U_\gamma$ on the boundary $\partial\partial^+_1 D$. To complete the proof, one only needs to apply the same argument to all other faces $\partial^+_i D$, $i = 2,3,\dots,n$, and note that on $\partial^- D$ the values of $U_\gamma$ can be made arbitrarily large by choosing $b$ sufficiently large in (4.12).
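The conclusion (4.35) of the proof can be corroborated numerically: solving the critical-point equation (4.31) by Newton's method for a small $\gamma$ gives $y\approx -\mathbf{1}_{n-1} - \gamma l_1$, and the identity (4.33) holds as well. Below is a sketch for the path graph on three vertices, an illustrative choice not made in the paper.

```python
import numpy as np

# Laplacian of the path graph on 3 vertices; v_1 corresponds to index 0
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
L1 = L[1:, 1:]                      # L with the first row and column deleted
l1 = L[1:, 0]                       # first column of L without its first entry

def R(y, gamma):                    # critical-point equation (4.31)
    return gamma * (L1 @ y + l1) - (y**2 - 1.0)

def jac(y, gamma):                  # Jacobian of R with respect to y
    return gamma * L1 - np.diag(2.0 * y)

def solve(gamma, iters=20):
    y = -np.ones(L1.shape[0])       # start from phi_1(0) = -1_{n-1}
    for _ in range(iters):          # Newton iteration
        y = y - np.linalg.solve(jac(y, gamma), R(y, gamma))
    return y

print(np.allclose(-L1 @ np.ones(2), l1))            # identity (4.33)
gamma = 1e-3
print(np.allclose(solve(gamma), -1.0 - gamma * l1, atol=1e-5))   # expansion (4.35)
```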

###### Remark 4.5.

The second equation in (4.24) shows that the minima of the potential function lying on the faces corresponding to connected cells move toward the common boundaries of these faces as $\gamma$ increases.

### 4.4 The strong coupling regime

For small $\gamma$, the minima of $U_\gamma$ are located near the minima of the potential function $\Phi$ (cf. (4.16)). In this subsection, we show that for larger $\gamma$, the minima of $U_\gamma$ are strongly influenced by the quadratic term $\frac{\gamma}{2}\langle Hz, Hz\rangle$, which corresponds to the coupling operator in the differential equation model (4.10). To study the minimization problem for $\gamma\gg 1$, we rewrite $U_\gamma$ as follows:

U_\gamma(z) = \gamma\Big\{\frac{1}{2}\langle Hz, Hz\rangle + \frac{1}{\gamma}\Phi(z)\Big\} =: \gamma U^{\frac{1}{\gamma}}(z). (4.37)

Thus, the problem of minimizing $U_\gamma$ for $\gamma\gg 1$ becomes the minimization problem for $U^\lambda$:

U^\lambda(z) := \frac{1}{2}\langle Hz, Hz\rangle + \lambda\Phi(z)\to\min, \quad z\in\partial^+ D, \quad |\lambda|\ll 1. (4.38)
###### Lemma 4.6.

$U^0$ attains the global minimum on $\partial^+ D$ at $z = \mathbf{1}_n$:

u^0 := U^0(\mathbf{1}_n) = 0. (4.39)

Proof. $U^0$ is nonnegative; moreover,

\langle Hz, Hz\rangle = 0 \quad\text{if and only if}\quad z\in\ker H = \mathrm{span}\{\mathbf{1}_n\}.

Finally, $\mathrm{span}\{\mathbf{1}_n\}\cap\partial^+ D = \{\mathbf{1}_n\}$.
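The kernel property used in the proof, that $\langle Hz, Hz\rangle = z^T L z$ vanishes exactly on $\mathrm{span}\{\mathbf{1}_n\}$, can be confirmed numerically for a connected graph; the path graph below is an illustrative choice.

```python
import numpy as np

L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # connected path graph

# z^T L z vanishes on span{1_n} ...
print(np.isclose(np.ones(3) @ L @ np.ones(3), 0.0))

# ... and is strictly positive off it: L is positive semidefinite of rank n-1.
rng = np.random.default_rng(2)
z = rng.standard_normal(3)
z -= z.mean()                     # a nonzero vector orthogonal to 1_n
print(z @ L @ z > 0.0)
```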

###### Theorem 4.7.

Let $\lambda_1(L_i)$ denote the smallest eigenvalue of the matrix $L_i$, obtained from $L$ by deleting the $i$th row and the $i$th column. Then for $0<\lambda\le\frac{1}{2}\min_{i\in[n]}\lambda_1(L_i)$, $U^\lambda$ achieves its minimal value on $\partial^+ D$ at $\mathbf{1}_n$:

u^\lambda := U^\lambda(\mathbf{1}_n) = \frac{4\lambda n}{3}, (4.40)

provided $b$ in the definition of $D$ (cf. (4.12)) is sufficiently large.

###### Remark 4.8.

By the interlacing eigenvalues theorem (cf. Theorem 4.3.8, [21]), $\lambda_1(L_i)\le\lambda_2(L)$, $i\in[n]$. (Note that $L_i$ is not a graph Laplacian.) Furthermore, $\lambda_1(L_i)>0$, because the graph is connected (cf. Theorem 6.3, [6]). With these observations, Theorem 4.7 yields an estimate for the onset of synchrony in terms of the eigenvalues of $L$:

\gamma\ge 2\Big(\min_{i\in[n]}\lambda_1(L_i)\Big)^{-1}\ge 2\big(\lambda_2(L)\big)^{-1}. (4.41)

Note that (4.41) yields smaller lower bounds for the onset of synchrony for graphs with larger algebraic connectivity. In particular, for the families of expanders (cf. Example 2.5), it provides bounds on the coupling strength guaranteeing synchronization that are uniform in the network size $n$.
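The eigenvalue facts quoted in Remark 4.8 are straightforward to check numerically: for a connected graph, each $\lambda_1(L_i)$ is positive and, by interlacing, does not exceed $\lambda_2(L)$. Below is a sketch for two illustrative graphs of our choosing.

```python
import numpy as np

def check(L, tol=1e-10):
    """Verify 0 < lambda_1(L_i) <= lambda_2(L) for every principal submatrix L_i;
    return the algebraic connectivity lambda_2(L)."""
    ev = np.sort(np.linalg.eigvalsh(L))
    lam2 = ev[1]
    for i in range(L.shape[0]):
        Li = np.delete(np.delete(L, i, axis=0), i, axis=1)
        lam1_i = np.sort(np.linalg.eigvalsh(Li))[0]
        assert tol < lam1_i <= lam2 + tol
    return lam2

# Path graph on 4 vertices and the complete graph K4 (illustrative examples)
path4 = np.diag([1., 2., 2., 1.])
for i in range(3):
    path4[i, i + 1] = path4[i + 1, i] = -1.0
K4 = 4.0 * np.eye(4) - np.ones((4, 4))
print(check(path4), check(K4))    # K4 has the larger algebraic connectivity
```

Consistent with the remark, the better-connected graph $K_4$ admits the smaller synchronization threshold in (4.41).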

For the proof of Theorem 4.7, we need the following auxiliary lemma.

###### Lemma 4.9.

For every $z\in\partial^+ D$ there exists $i\in[n]$ such that

U^\lambda(z) = \frac{1}{2} y^T L_i y + \lambda\Big\{\frac{4}{3} + \sum_{j=1}^{n-1} F(1 - y_j)\Big\}, (4.42)

where $y = (y_1, y_2,\dots, y_{n-1})^T$ is defined by

y_j = \begin{cases} 1 - z_j, & j\in[i-1],\\ 1 - z_{j+1}, & j\in[n-1]\setminus[i-1].\end{cases} (4.43)
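Identity (4.42) can be checked numerically: for a point on the face $z_i = 1$, the direct evaluation of $U^\lambda$ agrees with the expression in the $y$-coordinates (4.43). Below is a sketch with $i = 1$ and the path graph, both illustrative choices.

```python
import numpy as np

L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # path graph Laplacian

def F(xi):
    return 2.0/3.0 + xi - xi**3/3.0

def U_lam(z, lam):                   # U^lambda(z) = (1/2) z^T L z + lambda * Phi(z)
    return 0.5 * z @ L @ z + lam * F(z).sum()

lam = 0.2
rng = np.random.default_rng(3)
z = rng.uniform(-2.0, 1.0, size=3); z[0] = 1.0      # a point on the face z_1 = 1
y = 1.0 - z[1:]                                      # change of variables (4.43), i = 1
L1 = L[1:, 1:]                                       # L_1: delete first row and column
rhs = 0.5 * y @ L1 @ y + lam * (4.0/3.0 + F(1.0 - y).sum())
print(np.isclose(U_lam(z, lam), rhs))                # the two sides of (4.42) agree
```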

Proof. Since $z\in\partial^+ D$, $z_i = 1$ for some $i\in[n]$. Without loss of generality, let $i = 1$. Then

z = \begin{pmatrix} 1\\ \mathbf{1}_{n-1} - y\end{pmatrix} =: z(y), \qquad y = (y_1, y_2,\dots, y_{n-1})^T, \qquad 0\le y_j\le 3 + b, \quad j\in[n-1],

and

U^\lambda(z) = \tilde U^\lambda(y) := \frac{1}{2}Q(y) + \frac{4\lambda}{3} + \lambda\sum_{j=1}^{n-1} F(1 - y_j),

where the quadratic function $Q(y) := \langle Hz(y), Hz(y)\rangle$. Differentiating $Q$ yields:

\frac{\partial}{\partial y}Q(y) = -2\Big(