Abstract

Self-organization of microcircuits in networks of spiking neurons with plastic synapses

Gabriel Koch Ocker, Ashok Litwin-Kumar, Brent Doiron

1: Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, United States of America

2: Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, United States of America

3: Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, United States of America

4: Center for Theoretical Neuroscience, Columbia University, New York, NY, United States of America

E-mail: bdoiron@pitt.edu

Abstract

The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory that describes the evolution of network structure by combining fast spiking covariance with a fast-slow theory for synaptic weight dynamics. Through a finite-size expansion of network dynamics, we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on the frequency of weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP, these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

Author Summary

The connectivity of mammalian brains exhibits structure at a wide variety of spatial scales, from the broad (which brain areas connect to which) to the extremely fine (where synapses from different inputs lie on the morphology of individual neurons). Recent experimental work in the neocortex has highlighted structure at the level of microcircuits: different patterns of connectivity between small groups of neurons are either more or less abundant than would be expected by chance. A central question in systems neuroscience is how this structure emerges. Attempts to answer this question are confounded by the known mutual interaction of network structure and spiking activity. Indeed, synaptic connections influence spiking statistics, while individual synapses are highly plastic and become stronger or weaker depending on the activity of the pre- and postsynaptic neurons. We present a self-consistent theory for how activity-dependent synaptic plasticity leads to the emergence of neuronal microcircuits and use it to show how the form of the plasticity rule can govern the promotion or suppression of different connectivity patterns. Our work provides a foundation for understanding how cortical circuits, and not just individual synapses, are malleable in response to inputs both external and internal to a network.

Introduction

The wiring of neuronal networks exhibits structure across a broad range of spatial scales [1]. For example, patterns of connectivity among small groups of cortical neurons are over- or under-represented compared to random networks [2, 3, 4, 5]. The prevalence of these motifs is related to a neuron’s stimulus preferences and activity levels [6, 7]. Motivated in part by these observations, there is a growing body of theoretical work that discusses how wiring structure dictates the coordinated spiking activity of cortical neurons in recurrent networks [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18].

While neural architecture undoubtedly plays a strong role in determining neuronal activity, the reverse is also true. Individual synapses can both potentiate (strengthen) and depress (weaken), and whether they do so depends on the relative timing of action potentials from the connected neurons [19, 20]. Such spike timing-dependent plasticity (STDP) has featured prominently in both experimental and theoretical studies of neural circuits [21, 22, 23]. Of particular interest, STDP provides a mechanism for Hebbian plasticity: synaptic potentiation occurs when a presynaptic neuron reliably drives spike responses from a postsynaptic neuron, while failure to recruit spiking results in synaptic depression [24]. Hebbian plasticity provides a potential link between circuit structure and function by providing a mechanism for the formation of heavily wired assemblies of neurons, where assembly membership is associated with coordinated, elevated firing rates during a specific computation [25]. Evidence supporting this idea, originally proposed by Hebb [26], has been found in both hippocampus [27] and sensory cortex [28].

Despite the promise of STDP to provide insight into the functional wiring of large neural circuits, many studies of STDP have concentrated on the plasticity of synaptic connections between just a single pair of pre- and postsynaptic neurons, often focusing on the distribution of individual synaptic weights [24, 29, 30, 31, 32]. Other studies have shown that multiple temporally correlated inputs to a neuron will cooperate to potentiate, while uncorrelated inputs may depress [33, 24, 34, 35]. In this case STDP can generate feedforward circuits [36], which while important for the propagation of neural activity [37], are unlike the recurrent structure of the neocortex. Understanding the two-way interaction between plastic recurrent network structure and spiking activity is thus a central challenge for theories of synaptic plasticity.

Due to this challenge, many studies have resorted to large scale numerical simulations of cortical networks with plastic synapses [38, 39, 40, 41]. While intuition for the development of circuit structure can be gained using this approach, without a governing theoretical framework it is often difficult to extract generalized principles. Alternatively, mathematical analyses have been restricted to either small networks [42, 40], or have required the assumption that neurons fire as Poisson processes [43, 44, 45, 46]. These latter works assumed shared inputs from outside the network to be the only source of correlated spiking activity, neglecting covariance due to recurrent coupling. Thus, there is a need for a coherent mathematical framework that captures how STDP drives self-organization of circuit structure in recurrent cortical networks.

To this end, we construct a self-consistent theory for the coevolution of spiking statistics and synaptic weights in networks with STDP. This theory makes use of a previously developed linear response framework for calculating joint spiking statistics [47, 48, 15] and a separation of timescales between spiking covariance and synaptic plasticity [33]. Most previous studies of plasticity in recurrent networks have focused on how they can be trained to represent an external stimulus. We focus on how spiking covariance generated by coupling within the network interacts with plasticity to shape network structure. We then use this high-dimensional theory to derive a low-dimensional, closed system for STDP of synaptic connectivity motifs in recurrent networks. This reveals instabilities in the motif dynamics such that when potentiation and depression are approximately balanced, the dynamics are partitioned into regimes in which different motifs are promoted or suppressed depending on the initial network structure. It also highlights the circumstances in which spike time covariations, in contrast to firing rates, drive STDP. In total, we provide a consistent and general framework in which to study STDP in large recurrent networks.

Results

Our study is separated into two main sections. The first presents a self-consistent theory for STDP in recurrent networks of model spiking neurons. Using this theory, we accurately predict the co-evolution of spiking covariance on fast timescales with synaptic weights on slow timescales of a large network. The second part leverages our theory to develop a low-dimensional dynamical system for the development of two-synapse motifs in the network structure. We analyze this system and determine how the balance between synaptic potentiation and depression drives the emergence of microcircuits in recurrent networks.

Spike train covariance determines synaptic plasticity

We begin by reviewing a well studied phenomenological model of spike timing-dependent plasticity (STDP) [49], acting within a simple circuit of two reciprocally coupled neurons. Consider a pair of pre- and postsynaptic spike times with time lag . The evolution of the synaptic weight connecting presynaptic neuron to postsynaptic neuron obeys , with the STDP rule (Fig. 1A) being Hebbian:

(1)

Here if while if , imposing bounds on the weights to prevent the magnitude of excitatory synapses from becoming negative or potentiating without bound (i.e. ). The coefficients scale the amplitude of weight changes induced by individual pre-post spike pairs and determine how synchronous pre- and postsynaptic spikes must be to drive plasticity.

The spike train from neuron is the point process , with being its spike time. Following [33] we relate the joint statistics of and to the evolution of synaptic weights. We first assume that individual pre-post spike pairs induce small changes in synaptic weights (). This makes synaptic weights evolve slowly, on a much longer timescale than the millisecond scale of pairwise spiking covariance due to network interactions. The separation of timescales between synaptic plasticity and spiking activity provides an approximation to the evolution of the synaptic weights (Methods: learning dynamics):

(2)

Here is the time-averaged firing rate of neuron , and is the cross-covariance function of neuron and ’s spike trains. The separation of timescales allows us to calculate the equilibrium spiking statistics , taking to be constant on the timescale of . The term in Eq. (2) captures the firing rate dependence of STDP, while models the sensitivity of STDP to spike timing. Finally, is the adjacency matrix of the network – a binary matrix with denoting the presence of a synapse. Multiplying by ensures that synapses that do not exist cannot potentiate into existence. Eq. (2) requires only the first and second order joint spiking statistics. To facilitate calculations, many previous studies have used Poisson neuron models with a specified and to generate . In contrast, we will use a white noise driven exponential integrate-and-fire model [50] for the generation of spike times (Methods: Network model). While this complicates the calculation of the spike train statistics, it provides a more biophysically realistic model of neural dynamics [51, 52] that better captures the timescales and neuronal nonlinearities that shape and . In total, the above theory determines synaptic evolution from the integrated combination of an STDP rule and the spike train cross-covariance function . Thus, any mechanism affecting two neurons’ spiking covariance is expected to shape network structure through STDP.

As a simple illustration of how spiking correlations can drive STDP, we examined the synaptic weight dynamics, and , in a reciprocally coupled pair of neurons, both in the presence and absence of common inputs. Specifically, the fluctuating input to neuron was the sum of a private and common term, , with the fraction of shared input to the neurons. In the absence of common input (; Fig. 1B), the two synapses behaved as expected with Hebbian STDP: one synapse potentiated and the other depressed (Fig. 1C). In contrast, the presence of common input () was a source of synchrony in the two neurons’ spike trains, inducing a central peak in the spike train cross-covariance function (Fig. 1B vs 1D). In this case, both synapses potentiated (Fig. 1E) because the common input increased synchronous spiking. Since the potentiation side of the learning rule was sharper than the depression side (Fig. 1A), this enhanced the degree of overlap between and the potentiation component of . This overcame the effects of depression in the initially weaker synapse and promoted strong, bidirectional connectivity in the two-neuron circuit.

This example highlights how the temporal shape of the spike train cross-covariance function can interact with the shape of the learning rule, , to direct spike timing-dependent plasticity. However, this case only considered the effect of correlated inputs from outside of the modeled circuit (Fig. 1). Our primary goal is to predict how spiking covariance due to internal network interactions combines with STDP to drive self-organized network structure. In order to do this, we first require a theory for predicting the spiking covariance between all neuron pairs given some static, recurrent connectivity. Once this theory has been developed, we will use it to study the case of plastic connectivity.

Network architecture determines spiking covariance in static networks

In this section we review approximation methods [47, 48, 15] that estimate the pairwise spike train cross-covariances using a static weight matrix (see Methods: Spiking statistics for a full description). The exposition is simplified if we consider the Fourier transform of a spike train, , where is frequency. Assuming weak synaptic connections , we approximate the spike response from neuron as:

(3)

The function is the linear response [53] of the postsynaptic neuron, measuring how strongly modulations in synaptic currents at frequency are transferred into modulations of instantaneous firing rate about a background state . The function is a synaptic filter. In brief, Eq. (3) is a linear ansatz for how a neuron integrates and transforms a realization of synaptic input into a spike train.

Following [47, 48, 15] we use this linear approximation to estimate the Fourier transform of , written as ; here denotes complex conjugation. This yields the following matrix equation:

(4)

where is an interaction matrix defined by . The matrix is the baseline covariance, with elements , and is the identity matrix. Using Eq. (4) we recover the matrix of spike train cross-covariance functions by inverse Fourier transformation. Thus, Eq. (4) provides an estimate of the statistics of pairwise spiking activity in the full network, taking into account the network structure.

As a demonstration of the theory, we examined the spiking covariances of three neurons from a 1,000-neuron network (Fig. 2A, colored neurons). The synaptic weight matrix was static and had an adjacency matrix that was randomly generated with Erdös-Rényi statistics (connection probability of 0.15). The neurons received no correlated input from outside the network, making a diagonal matrix, and thus recurrent network interactions were the only source of spiking covariance. Neuron pairs that connected reciprocally with equal synaptic weights had temporally symmetric spike train cross-covariance functions (Fig. 2C), while uni-directional connections gave rise to temporally asymmetric cross-covariances (Fig. 2D). When neurons were not directly connected, their covariance was weaker than that of directly connected neurons but was still nonzero (Fig. 2E). The theoretical estimate provided by Eq. (4) was in good agreement with estimates from direct simulations of the network (Fig. 2C,D,E red vs. gray curves).

Self-consistent theory for network structure and spiking covariance with plastic synapses

In general, it is challenging to develop theoretical techniques for stochastic systems with several variables and nonlinear coupling [53], such as in Eq. (2). Fortunately, in our model the timescale of spiking covariance in the recurrent network with static synapses is on the order of milliseconds (Fig. 2C,D,E), while the timescale of plasticity is minutes (Fig. 1C,E). This separation of timescales provides an opportunity for a self-consistent theory for the coevolution of and . That is, so long as in Eq. (1) are sufficiently small, we can approximate as static over the timescales of and insert Eq. (4) into Eq. (2). The resulting system yields a solution that captures the long timescale dynamics of the plastic network structure (Methods: self-consistent theory for network plasticity).

As a first illustration of our theory, we focus on the evolution of three synaptic weights in a 1,000-neuron network (Fig. 3A, colored arrows). The combination of Eqs. (2) and (4) predicted the dynamics of , whether the weight increased with time (Fig. 3B left, red curve), decreased with time (Fig. 3C left, red curve), or remained approximately constant (Fig. 3D left, red curve). In all three cases, the theory matched well the average evolution of the synaptic weight estimated from direct simulations of the spiking network (Fig. 3B,C,D left, thick black curves). Snapshots of the network at three time points (axis arrows in Fig. 3B,C,D, left), showed that coevolved with the spiking covariance (Fig. 3B,C,D right). We remark that for any realization of background input , the synaptic weights deviated from their average value with increasing spread (Fig. 3B,C,D left, thin black curves). This is expected since is an average over realizations of , and thus provides only a prediction for the drift of , while the stochastic nature of spike times leads to diffusion of around this drift [33].

In sum, the fast-slow decomposition of spiking covariance and synaptic plasticity provides a coherent theoretical framework to investigate the formation of network structure through STDP. Also, our treatment is complementary to past studies on STDP [35, 44, 54] that focused on the development of architecture through external input. In our model, there is no spiking covariance in the background state (i.e. for ). Hence, we are specifically considering self-organization of network structure through internally generated spiking covariance.

While our theory gives an accurate description of plasticity in the network, it is nevertheless high-dimensional. Keeping track of every individual synaptic weight and spike train cross-covariance function involves variables. For large networks, this becomes computationally challenging. More importantly, this high-dimensional theory does not provide insights into the plasticity of the connectivity patterns or motifs that are observed in cortical networks [4, 3]. Motifs involving two or more neurons represent correlations in the network’s weight matrix, which cannot be described by a straightforward application of mean-field techniques. In the next sections, we develop a principled approximation of the high-dimensional theory to a closed low-dimensional mean-field theory for how the mean weight and the strength of two-synapse motifs evolve due to STDP.

Dynamics of mean synaptic weight

We begin by considering the simple case of a network with independent weights (an Erdös-Rényi network). In this case, the only dynamical variable characterizing network structure is the mean synaptic weight, :

(5)

The mean synaptic weight, , in addition to the connection probability , completely characterizes the connectivity of a weighted Erdös-Rényi network. In order to calculate the dynamics of , we insert the fast-slow STDP theory of Eq. (2) into Eq. (5):

(6)

where the spiking covariances are calculated using linear response theory (Eq. (4)). This equation depends on the network structure in two ways. First, it depends on the full adjacency matrix . Second, the spike train cross-covariances depend on the full weight matrix: . This dependence of a first–order connectivity statistic on the network structure poses a challenge for the development of a closed theory.

The main steps in our approach here are two approximations. First, the matrix of spike train cross-covariances obtained from our linear ansatz (Eq. (4)) can be expanded in a power series around the background cross-covariances (see Eq. (27)). Powers of the interaction matrix in this series correspond to different lengths of paths through the network [13, 15]. We truncate the spiking covariances at length one paths to obtain:

(7)

where denotes convolution. This truncation separates the sources of covariance between the spiking of neurons and into direct forward () and backward () connections, and common ( and ) inputs. Nevertheless, after truncating , the mean synaptic weight still depends on higher-order connectivity motifs (Eq. (32)). Fortunately, for an Erdös-Rényi network, these higher-order motifs are negligible.

The second approximation is to ignore the bounds on the synaptic weight in Eq. (1). While this results in a theory that only captures the transient dynamics of , it greatly simplifies the derivation of the low-dimensional dynamics of motifs, because dynamics along the boundary surface are not considered.

With these two approximations, the mean synaptic weight obeys:

(8)

The first term on the right hand side of Eq. (8) is scaled by , modeling the interaction between STDP rules that lead to net potentiation () or depression (), and the mean firing rate, , across the network. This captures STDP due to chance spiking coincidence. The remaining terms capture how synaptic weights interact with the temporal structure of spiking covariance. Because of the expansion in Eq. (7), these dependencies decompose into three terms, each scaled by the integral of the product of the STDP rule and a component of the spike train cross-covariance . Specifically, covariance due to forward connections is represented by (Eq. (36); Fig. 4A), covariance due to backward connections is represented by (Eq. (37); Fig. 4B), and finally covariance due to common connections is represented by (Eq. (38); Fig. 4C).

For an Erdös-Rényi network, each sum in Eq. (8) can be simplified. Let be the connection probability of the network. Since our theory for spiking covariances required weak synapses, we also explicitly scaled the weights, motifs, and amplitude of synaptic changes by . This ensured that as the connection probability was varied, synaptic weights scaled to keep the total input to a neuron constant (neglecting plasticity). The first and second terms of Eq. (8) correspond to the definitions of and . Since different elements of and are independent, the third term reduces to due to the central limit theorem. The last term can be similarly evaluated and the dynamics of to first order in reduce to:

(9)

We next study this mean-field theory in two regimes, before examining the plasticity of non-Erdös-Rényi networks that exhibit motifs.

Unbalanced STDP of the mean synaptic weight

Eq. (9) contains one term proportional to product of firing rates and the integral of the STDP rule, , and additional terms proportional to the small parameter . When the learning rule, , is dominated by either depression or potentiation (so that ) the whole network uniformly depresses (Fig. 5A,C) or potentiates (Fig. 5B,D) due to chance spike coincidences (the firing rate term dominates in Eq. (2)). These dynamics are straightforward at the level of individual synapses and this intuition carries over straightforwardly to the mean synaptic weight. When the STDP rule is dominated by potentiation or depression, the terms in Eq. (9) are negligible; the average plasticity is solely determined by the firing rates, with spiking covariance playing no role. In this case, the leading-order dynamics of are:

(10)

so that the mean synaptic weight either potentiates to its upper bound or depresses to , depending on whether the integral of the STDP rule, , is positive or negative. For both depression and potentiation dominated STDP our simple theory in Eq. (10) quantitatively matches estimated from simulations of the entire network (Fig. 5C,D, black vs. red curves).

Balanced STDP of the mean synaptic weight

On the other hand, if there is a balance between potentiation and depression in the STDP rule , then spiking covariance affects the average plasticity. In order to make explicit the balance between potentiation and depression, we write (with for STDP with the balance tilted in favor of potentiation and for balance tilted in favor of depression). The leading-order dynamics of are then, for Erdös-Rényi networks,

(11)

This quadratic equation admits up to two fixed points for . We begin by examining the dynamics of for the case perfectly balanced potentiation and depression () and a realistic shape of the STDP curve, and then consider the case of . We find that potentiates for all cases except depression-dominated, balanced STDP, which can exhibit novel multistable dynamics.

Experimentally measured STDP rules in cortex often show and [55, 56], making potentiation windows sharper and higher-amplitude that depression windows. In this case, the STDP-weighted covariance from forward connections, , is greater in magnitude than those from backward connections, (Fig. 4), and hence . Furthermore, since the covariance from common input decays symmetrically around (Fig. 4C), we have that . Consequently, when , all terms in Eq. (11) are positive and potentiates to .

For potentation-dominated balanced STDP, , again all terms in Eq. (11) are positive and potentiates to (Fig. 6A). However, with depression-dominated balanced STDP ( in Eq. (11)), has two fixed points, at:

(12)

Since and because of our assumptions on and , the term inside the square root is positive, and one fixed point is positive and the other negative. The positive fixed point is unstable and, if within , provides a separatrix between potentation and depression of (Fig. 6B). This separatrix arises from the competition between potentiation (due to forward connections and common input) and depression (due to backward connections and firing rates). Examination of Eq. (12) shows competing effects of increasing : it moves both fixed points closer to 0 (the terms outside and inside the square root), but also pushes the fixed points away from 0 due to firing rate- and common-input terms. The latter effect dominates for the positive fixed point. So the mean synaptic weight of sparsely connected networks have a propensity to potentiate, while more densely connected networks are more likely to depress (Fig. 6B).

In total, we see that a slight propensity for depression can impose bistability on the mean synaptic weight. In this case, a network with an initially strong mean synaptic weight can overcome depression and strengthen synaptic wiring, while a network with the same STDP rule and connection probability but with an initially weak mean synaptic weight will exhibit depression. In the next section we will show that similar separatrices exist in non-Erdös-Rényi networks with slightly depression-dominated STDP and govern motifs in such networks as well.

Motif dynamics

We now consider networks that have structure at the level of motifs, so that different patterns of connectivity may be over- or under-represented compared to Erdös-Rényi networks. We begin by defining the two-synapse motif variables:

(13)

The variables , and , respectively, measure the strength of divergent, convergent, and chain motifs. For each variable, we subtract the expected value of the sum in a network with independent weights, , so that the s measure above- or below-chance levels of structure in the network. Since these variables depend on the strength of both synapses making up the motif, we will refer to them as motif strengths. Motif strengths are also related to neurons’ (weighted) in- and out-degrees (the total strength of incoming or outgoing synapses for each neuron). and are proportional to the variance of neurons’ in- and out-degrees. , on the other hand, is proportional to the covariance of neurons’ in- and out-degrees. This can be seen by taking the definitions of these motifs, Eq. (13), and first summing over the indices . This puts the sum in , for example, in the form of a sum over neurons’ squared out-degrees.)

We now wish to examine the joint dynamics of the mean synaptic weight and these motif strengths. In order to calculate the dynamics of, for example, , we insert the fast-slow STDP theory of Eq. (2) into the definition of , as earlier. Similarly to Eq. (9), the dynamics of motifs , , and then depend on the network structure. This dependence of first- and second-order connectivity statistics on the network structure poses a challenge for the development of a closed theory for the dynamics of motifs. The main steps in developing such a theory are the two approximations we used to develop Eq. (9), as well as one more.

As above, our first approximation is to truncate the spike-train covariances at length one paths through the network. This removes the dependency of the dynamics on longer paths through the network. Nevertheless, after truncating , the first- () and second-order (, , ) motifs still depend on higher-order motifs (Eq. (8)). This is because of coupling between lower and higher-order moments of the connectivity matrix (see Eqs. (32)-(34)) and presents a significant complication.

In order to close the dynamics at one- and two-synapse motifs, our new approximation follows [16], and we rewrite higher-order motifs as combinations of individual synapses and two-synapse motifs (see Eqs. (39)-(40)). For the mean synaptic weight, for example, one third-order motif appears due to the common input term of the spike-train covariances (Eq. (8)). We break up this three-synapse motif into all possible combinations of two-synapse motifs and individual connections, estimating its strength as:

(14)

This corresponds to assuming that there are no third- or higher-order correlations in the weight matrix beyond those due to second-order correlations; three- and more-synapse motifs are represented only as much as would be expected given the two-synapse motif strengths. This allows us to close the motif structure at two-synapse motifs. However, two new motifs appear in Eq. (14), and . The subscript denotes that these motifs are mixed between the weight and adjacency matrices, measuring the strength of individual connections, conditioned on their being part of a particular motif. corresponds to the strength of connections conditioned on being part of a convergent motif and to the strength of connections conditioned on the postsynaptic neuron making another synapse in a chain (Eq. (26)). As in previous sections, the final approximation is to ignore the bounds on the synaptic weight in Eq. (1), so that our theory only captures the transient dynamics of .

These approximations allow us (see Eqs. (29), (32), and (41)) to rewrite the dynamics of the mean synaptic weight as:

(15)

The parameters , , and are defined as above. Note that we recover Eq. (9) when all ’s vanish (i.e an Erdös-Rényi network). When the network contains additional motifs (), the dynamics of contain new terms. In Eq. (15), the influence of forward connections through is again proportional to the mean synaptic weight . In contrast, the influence of backward connections must interact with the new variable , which measures the mean strength of connections conditioned on their being part of a reciprocal loop (i.e the strength of a backwards connection, conditioned on the existence of the forward one). As described above (Eq. (14)), the covariance from common input involves , the divergent motif, , as well as terms conditioned on weights being part of a convergent motif, , or on the postsynaptic neuron making another synapse in a chain, . The definitions for the mixed motifs, the s, are given in Eqs. (26). In total, the dynamics of mean synaptic weight cannot be written as a single closed equation, but also requires knowledge of how the second order motifs evolve.

Fortunately, using a similar approach, dynamical equations can be derived for each of the two-synapse motifs , , and (Eqs. (42)-(44)). However, to close the system we require dynamics for five mixed motifs, , , , , and (Eqs. (45)-(49)). In total, this yields an autonomous 9-dimensional system of nonlinear differential equations describing the population averaged plasticity of second-order network structure. We have derived these equations in the absence of common external inputs to the neurons; the theory can easily be extended to this case by including external covariance in Eq. (7) (replacing with , where is the covariance matrix of the inputs).

When the network structure, is approximately Erdös-Rényi, the motif frequencies are . If we further assume initial conditions for the motif strengths and the mixed motifs to be consistent with Erdös-Rényi statistics ( for all motifs), then we also have and for each motif, Eqs. (42)-(49). In this case we can neglect, to leading order, the dynamics of motifs entirely. Thus, the set of Erdös-Rényi networks forms an invariant set under the dynamics of the motif strengths; we examined the behavior on this invariant set in the above section (Fig. 5 and 6).

We refer to the mean field theory of Eqs. (41)-(49) as the motif dynamics for a recurrent network with STDP. This theory accurately predicts the transient dynamics of the first- and two-synapse motifs of the full stochastic spiking network (Fig. 7, compare red versus thin black curves), owing to significant drift compared to diffusion in the weight dynamics and these network-averaged motif strengths. The derivation and successful application of this reduced theory to a large spiking network is a central result of our study.

Our theory captures several nontrivial aspects of the evolution of network structure. First, while the STDP rule is in the depression-dominated regime ( for the simulations in Fig. 7), the mean synaptic weight nevertheless grows (Fig. 7A). Second, both divergent and convergent connections, and , grow above what is expected for a random (Erdös-Rényi) graph (Fig. 7B,C); however, at the expense of chain connections which decay (Fig. 7G). In the subsequent sections, we leverage the simplicity of our reduced theory to gain insight into how the STDP rule interacts with recurrent architecture to drive motif dynamics.

Unbalanced STDP of two-synapse motifs

When the STDP rule is dominated by potentiation or depression so that , then the terms in Eqs. (42)-(49) are negligible. In this case each motif’s plasticity is solely determined by the firing rates, with spiking covariance plays no role. Here the motif dynamics are simply:

(16)

for (and taking in the second equation). The dynamics of are the same here as for the Erdös-Rényi case above; we include it for completeness. Dropping order terms gives the simple solutions:

(17)

for (Methods: Unbalanced STDP). As stated previously, with , individual synapses uniformly potentiate or depress (Fig. 5). This is reflected in the linear decay or growth (for depression- or potentiation-dominated , respectively) of with and quadratic amplification of baseline motif frequencies for the two-synapse motif strengths.

Balanced STDP of two-synapse motifs

Now, we turn our attention to how internally generated spiking covariance interacts with balanced STDP to control motifs (examining the dynamics of Eqs. (41)-(49)). As before, we consider STDP rules with sharper windows for potentiation than depression ( and ). Each two-synapse motif can have a nullcline surface in the nine-dimensional motif space. These nullclines define a separatrix for the promotion or suppression of the corresponding motif, analogous to the case on the Erdös-Rényi invariant set (Fig. 7). We illustrate this by examining the dynamics in the plane. For STDP rules with a balance tilted towards depression (), the nullclines provided thresholds for the promotion or suppression of divergent or convergent motifs (Fig. 8A, blue lines). The flow in this slice of the motif space predicted the motif dynamics well (Fig. 8A, compare individual realizations of the full spiking network – thin black lines – to the flow defined by the vector field of the reduced motif system).

For STDP rules with the balance tilted towards potentiation (), on the other hand, the nullclines were at negative motif strengths (Fig. 8B). Can the motif strengths achieve negative values? As stated previously, and are proportional to the variances of neurons’ in and out degrees, respectively. So, like the mean synaptic weight, , and these motifs always potentiated for STDP rules (Fig. 8B).

Chain motifs, in contrast, correspond to the covariance of neurons’ weighted in- and out-degrees and so can achieve negative values. Indeed, the strength of chains can depress below zero even while the mean synaptic weight and other motifs potentiate (Fig. 5A,G). Examining how and coevolve allowed us to see how in- and out-hubs developed in the network. With the STDP rule, increased along with and (Figs. 8B, 9C,D). So, individual neurons tended to become both in- and out-hubs. With the STDP rule, however, could decrease while and increased (Fig. 5). In this case, neurons tended to become in- or out-hubs, but not both.

Many studies have examined how STDP affects either feedforward or recurrent structure in neuronal networks, commonly showing that STDP promotes feedforward structure at the expense of recurrent loops [36, 57, 58]. This is consistent with the intuition gained from isolated pairs of neurons, where STDP can induce competition between reciprocal synapses and eliminate disynaptic loops (Supp. Fig. 1; [24]). Our theory provides a new way to examine how STDP regulates feedforward vs recurrent motifs by examining the dynamics of . This variable includes both recurrent loops () and feedforward chains (). In order to understand the contribution of each of these to overall potentiation or depression of chains, we split the motif strength for chains into contributions from recurrent loops and feedforward chains, rewriting as:

(18)

Similar to the case of other two-synapse motifs, the leading order dynamics of the recurrent motif are:

(19)

We obtain the dynamics of the feedforward motif by subtracting from (Eq. (54)). In Eq. (18) we subtract from because is the dominant contributor to . This restricts to being non-negative. The new auxiliary variable is proportional to the conditional second moment of weights that are part of loops (Eq. (51)), and evolves according to Eq. (53). The replacement of by these variables expands the motif space to 11 dimensions.

We investigated the joint dynamics of feedforward chains and recurrent loops similarly to the other motifs, examining the plane. The and nullclines divided this plane into regions where each motif potentiated or depressed. The shape of the STDP rule and the initial values of the other motif strengths determined the location of these nullclines. For the STDP rule, the nullcline was just below (Fig. 9A, blue horizontal line). Since , this forced to potentiate. The feedforward motif, in contrast, could potentiate or depress above chance levels. In our spiking simulations, the initial conditions put in the region of depression, so that feedforward structure depressed even while all other motifs were growing (Fig. 9A, right panels).

These dynamics were the opposite of what would be expected from examining isolated pairs of neurons. With both the and balanced STDP rules, isolated pairs of neurons showed splitting of synaptic weights to eliminate the recurrent loop (Supp. Fig. 1). Thus, with the STDP rule, the intuition gained from pairs of neurons did not predict the plasticity of feedforward and recurrent motifs.

The locations of the and nullclines were sensitive to the values of the other motif variables. Since the mean synaptic weight and and exhibited bistability under the STDP rule, we examined the slice through motif space when the other motifs were potentiating (Fig. 9B, right panels) or depressing (Fig. 9C, right panels). In both cases, the nullcline was above so that the recurrent motif could either potentiate or depress, depending on its initial strength (Fig. 9B,C blue horizontal lines). Similarly, the feedforward motif could either potentiate or depress.

In spiking simulations with STDP where and the other motifs potentiated (Fig. 9B, right), the initial conditions put in the region of phase space where they both depressed (Fig. 9B, left). In spiking simulations with STDP where and other motifs depressed (Fig. 9C, right), the initial conditions put in the region where potentiated and depressed. This region corresponded to what would be expected from examining pairs of neurons (Supp. Fig. 1): the loss of disynaptic loops and promotion of feedforward structure. So with the STDP rule, the region of phase space where the pair-based intuition was accurate at the network level was accessible. In most of the motif space, however, interactions between triplets of neurons played a strong role so that the theory developed here was necessary to predict the STDP of motif structure.

Discussion

We have developed a theory for spike timing-dependent plasticity in weakly coupled recurrent networks of exponential integrate-and-fire neurons. We used this framework to derive a low-dimensional dynamical system capturing the plasticity of two-synapse motifs. The resulting system naturally classifies STDP rules into two categories: 1) rules with an imbalance between potentiation and depression whose dynamics are dominated by the firing rates of neurons in the network, and 2) rules with balanced potentiation and depression in which different sources of spiking covariance interact with the STDP rule to determine network structure. In the latter case, any mechanism controlling spiking covariance in the network may affect how the network structure evolves. Thus, spike initiation dynamics [59, 60, 61, 62], spike-frequency adaptation [63, 64], synaptic inhibition [65, 66, 67] and passive membrane properties [68] could all, in addition to controlling firing rates, drive motif dynamics.

STDP in recurrent networks

A recent suite of studies derived a theory for how STDP shapes the full structure of networks of neurons whose spike trains are Poisson processes [43, 35, 44, 45, 54, 46]. The initial approach is similar to ours with a linear approximation to estimate spiking covariance (see Eq. (3)-(4)). However, these studies mostly focused on the effects of external input, considering how feedforward inputs entrain structure in recurrent synapses [35, 44, 54]. The only source of spiking covariance was inherited from external sources (meaning has off-diagonal structure), and subsequently filtered by the network to produce spiking covariance. Two previous studies by the same authors also examined STDP in networks without external stimuli [43, 45]; however, these took a large system size limit () and neglected the contribution of spiking covariance to STDP, focusing on the firing rate dependence due to an unbalanced learning rule.

In contrast, we consider the case where the intrinsic variability of neurons’ spike trains is the only source of spiking covariance, necessitating a finite sized network (). There is little difference between our results and those of past studies [43, 45] when the learning rule is unbalanced. However, if there is a balance between potentiation and depression, our theory shows how internally generated spiking covariances play a strong role in STDP. Furthermore, our use of integrate-and-fire models allows our theory to predict the evolution of network structure without fixing the statistics of individual or joint spiking activity.

Stability of learned network structures

Early studies of long-term plasticity, which gave rise to the phenomenological plasticity model we used, focused on the relative timing of action potentials. More recent experiments have shown that neurons’ firing rates and the postsynaptic membrane voltage and spike patterns all affect the shape of measured STDP curves [69, 70, 56, 71, 72]. More complicated models of long-term plasticity, based on spike-triplet- or voltage-dependent STDP [73, 74] or on calcium thresholds for the induction of depression and potentiation [75, 76, 77], can replicate many of these complexities. The observation that firing rates undergo large fluctuations over slow timescales [78, 79, 80, 81, 82] suggests that in vivo STDP may transition between unbalanced potentiation- and depression-dominated regimes. While long-term plasticity can be strongly affected by pre- and postsynaptic firing rates, connectivity motifs and spiking covariance could determine the direction of plasticity during transitions between potentiation- and depression-dominated regimes. While our paper provides an initial framework to study how STDP shapes structure in recurrent networks, a more realistic learning rule than that used here (Eq. (1)) will be needed to address these issues.

The additive, Hebbian STDP model we used here gives rise to splitting of synaptic weights: individual weights potentiate to some upper bound, or depress to a lower bound. This produces a bimodal distribution of synaptic weights, while experimentally observed weight distributions tend to be unimodal and long-tailed [4, 3, 83, 84]. Modifications of this model, such as introducing axonal or dendritic delays or weight-dependence of plasticity, can yield weight distributions more closely resembling those observed in neural tissue [32, 85, 31, 30, 86]. Depending on the modification made (delays vs weight-dependence), either the same or similar theories for motif plasticity can be derived using the methods presented in our study. Strong weight dependence, however, forces every weight to the same value so that the baseline motif frequencies completely determine the structure of the weight matrix (Supplemental Information: Multiplicative STDP). The dynamics of motifs under more realistic models of synaptic plasticity remain to be studied.

A major feature of STDP is that it can potentiate temporally correlated inputs [33]. Since synchronous inputs are effective at driving postsynaptic spiking, this can give rise to pathological activity in recurrent networks [39]. Synaptic depression driven by postsynaptic spikes, independent of presynaptic activity, can stabilize postsynaptic firing rates during STDP [29, 35]. Such additional rate-dependent terms of the plasticity rule can also stabilize the full weight matrix [45] and thus give rise to stable motif configurations. Recent work has focused on the necessity of homeostatic mechanisms, including synaptic scaling [87] or inhibitory plasticity, in stabilizing both the activity and structure of neural networks [88, 89, 90, 36, 91, 41]. Since balanced STDP can give rise to bistability of mean synaptic weights in a network (Fig. 7B), it could also provide a mechanism for assembly formation (selected weights potentiate, while other weights depress). Mechanisms of metaplasticity [92], operating on a similar timescale to STDP, could give rise to such a balance. This suggests a novel role for metaplasticity in controlling not only single-neuron excitability but also the self-organization of microcircuits in recurrent networks.

Plasticity of motifs

Early studies on STDP focused on isolated pairs of reciprocally connected neurons, showing that the type of STDP we study tends to induce competition between reciprocal synapses (Fig. 1B,C; [24]). Since then, many simulation studies have investigated how STDP affects the structure and activity of recurrent networks [38, 93, 94, 95, 41], commonly examining the emergence of highly connected clusters. Reduced theories exposing how STDP shapes network-level structure have, however, been difficult to obtain. Most have examined the average synaptic weight in a network [96, 97], focusing on the relationship between network-averaged firing rates and mean synaptic weights () but neglecting spiking covariance. Mean-field theories are accurate for fully homogenous networks; however, if all neurons have the same weighted in- and out-degrees, there is no plasticity of two-synapse motifs (Supplemental Information: Motif plasticity in homogenous networks). So plasticity of higher-order network structure depends on inhomogeneities in neurons’ inputs and outputs.

The few reduced theories examining STDP of higher-order network structure have focused on the question of how STDP controls feedforward chains versus recurrent loops. One study compared the mean strengths of feedforward versus recurrent inputs in a network receiving synchronous stimulation [58], but did so for a neuron that made no feedback connections to the network – effectively only taking into account the first term of Eq. (7). Another study examined the strength of loops in a network of linear excitatory neurons, showing that STDP tends to reduce the total number of loops (of all lengths) in a network [57]. Our theory is restricted to two-synapse loops; while we have shown that these can potentiate (as in Fig. 9C), [57] predicts that longer loops would meanwhile be weakened. Whether this is the case with balanced STDP driven by more realistic neuron models remains to be seen.

There is a growing body of evidence that cortical networks exhibit fine-scale structure [2, 3, 4, 5]. Experimental studies have shown that such microcircuits depend on sensory experience [98, 99]. Our work provides an important advance towards explicitly linking the plasticity rules that control individual synapses and the emergent microcircuits of cortical networks. We have shown that synaptic plasticity based only on temporally precise spike-train covariance can give rise to a diversity and, under certain conditions, multistability of motif configurations. Motifs can have a strong influence on pairwise and population-level activity [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], suggesting that precise spike timing may play a role in how networks reorganize patterns of connectivity in order to learn computations.

Methods

Neuron and network model

We model a network of neurons. The membrane dynamics of individual neurons obey the exponential integrate-and-fire (EIF) model [50], one of a class of models well-known to capture the spike initiation dynamics and statistics of cortical neurons [51, 52]. Specifically, the membrane voltage of neuron evolves according to:

(20)

The first term on the right-hand side is the leak current, with conductance and reversal potential . The next term describes a phenomenological action potential with an initiation threshold and steepness : when the voltage reaches , it diverges; this divergence marks an action potential. For numerical simulations, action potentials are thresholded at , reset to a reset potential and held there for an absolute refractory period .

Input from external sources not included in the model network is contained in . We model this as a Gaussian white noise process: . The mean of the the external input current is . The parameter controls the strength of the noise and scales the noise amplitude to be independent of the passive membrane time constant. With this scaling, the infinitesimal variance of the passive membrane voltage is .

The last term of Eq. (20) models synaptic interactions in the network. The matrix contains the amplitudes of each synapse’s postsynaptic currents. It is a weighted version of the binary adjacency matrix , where indicates the presence (absence) of a synapse from neuron onto neuron . If a synapse is present then denotes its strength. Due to synaptic plasticity, is dynamic; it changes in time as individual synapses potentiate or depress. The spike train from neuron is the point process , where denotes the spike time from neuron . The matrix defines the shape of the postsynaptic currents. In this study, we use exponential synapses: , where is the Heaviside step function. Our theory is not exclusive to the EIF model or to the simple synaptic kernels we used; similar methods can be used with any integrate-and-fire model and arbitrary synaptic kernels. Model parameters are contained in Table 1 (unless specified otherwise in the text).

Unless otherwise stated we take the adjacency matrix to have Erdös-Rényi statistics with connection probability .

Learning dynamics

We now derive Eq. (2), summarizing a key result of [33]. Changes in a synaptic weight are governed by the learning rule , Eq. (1). We begin by considering the total change in synaptic weight during an interval of length ms:

(21)

where multiplying by the corresponding element of the adjacency matrix ensures that nonexistent synapses do not potentiate into existence. Consider the trial-averaged rate of change:

(22)

where and denotes the trial average. We first note that this contains the definition of the trial-averaged spike train cross-covariance:

(23)

where is the time-averaged firing rate of neuron and subtracting off the product of the rates corrects for chance spike coincidences. Inserting this definition into Eq. (22) yields:

(24)

We then take the amplitude of individual changes in the synaptic weights to be small: , where define the temporal shape of the STDP rule (see Eq. (1)). In this case, changes in the weights occur on a slower timescale than the width of the learning rule. Taking allows us to extend the limits of integration in Eq. (24) to , which gives Eq. (2). Note that in the results we have dropped the angle brackets for convenience. This can also be justified by the fact that the plasticity is self-averaging, since depends on the integrated changes over the period .

Spiking statistics

In order to calculate , we need to know the firing rates and spike train cross-covariance (Eq. (2)). We take the weights to be constant on the fast timescale of , so that the firing rates and spike train cross-covariances are stationary on that timescale. We solve for the baseline firing rates in the network via the self-consistency relationship

for . This gives the equilibrium state of each neuron’s activity. In order to calculate the spike train cross-covariances, we must consider temporal fluctuations around the baseline firing rates.

With sufficiently weak synapses compared to the background input, we can linearize each neuron’s activity around the baseline state. Rather than linearizing each neuron’s firing rate around , we follow [47, 48, 15] and linearize each neuron’s spike train around a realization of background activity, the uncoupled spike train (Eq. (3)). The perturbation around the background activity is given by each neuron’s linear response function, , which measures the amplitude of firing rate fluctuations in response to perturbations of each neuron’s input around the baseline . We calculate using standard methods based on Fokker-Planck theory for the distribution of a neuron’s membrane potential [100, 101].

This yields Eq. (3), approximating a realization of each neuron’s spike train as a mixed point and continuous process. Spike trains are defined, however, as pure point processes. Fortunately, Eq. (2) shows that we do not need a prediction of individual spike train realizations, but rather of the trial-averaged spiking statistics. We can solve Eq. (3) for the spike trains in the frequency domain as:

where as in the Results, is an interaction matrix defined by and denotes the element-wise product. Averaging this expression over realizations of the background spike trains yields a linear equation for the instantaneous firing rates. Averaging the spike trains against each other yields the full cross-covariance matrix, Eq. (4). It depends on the coupling strengths , the synaptic filters and neurons’ linear response functions , and the covariance of the baseline spike trains, .

We can calculate the baseline covariance in the frequency domain, , by first noting that it is a diagonal matrix containing each neuron’s spike train power spectrum. We calculate these using the renewal relationship between the spike train power spectrum and the first passage time density [102]; the first passage time density for nonlinear integrate and fire models can be calculated using similar methods as for the linear response functions [101].

Self-consistent theory for network plasticity

We solve the system Eqs. (2),(4) for the evolution of each synaptic weight with the Euler method with a time step of seconds. A package of code for solving the self-consistent theory and running the spiking simulations, in MATLAB and C, is available at http://sites.google.com/site/gabrielkochocker/code. Additional code is available on request.

Derivation of motif dynamics

The baseline structure of the network is defined by the adjacency matrix . The frequencies of different motifs are:

(25)

Each of the parameters refers to a different two-synapse motif. In divergent motifs (), one neuron projects to two others, and . In convergent motifs (), two neurons and project to a third, . In chain motifs (), neuron projects to neuron , which projects to neuron . Finally, in recurrent motifs () two neurons connect reciprocally. In each of these equations, we subtract off to correct for the baseline frequencies expected in Erdös-Rényi random networks. So, these parameters measure above-chance levels of motifs in the adjacency matrix .

We extend this motif definition to a weighted version, given by Eqs. (13). Since our linear response theory for synaptic plasticity requires weak synapses, here we explicitly scale by the mean in-degree :

(26)

Here we have defined the two-synapse motifs, as well as five auxiliary variables, . These mixed motifs, defined by products of the weight and adjacency matrices, measure the strength of synapses conditioned on their being part of a motif. The motifs , on the other hand, measure the total strength of the motifs. While the variables are not of direct interest, we will see that they are required in order to close the system of equations. In comparison to the motif frequencies , which measure motif frequencies in comparison to an independently connected network, the motif strengths are defined relative to an independently weighted network.

We also scale the amplitude of individual synaptic changes, , by . We now go through the derivation of , and as examples; the other six variables follow the same steps. First, note that the spike train cross-covariance matrix of the network, Eq. (4), can be expanded in the Fourier domain around the baseline covariance :

(27)

where the interaction matrix is the element-wise product of the weight matrix and the matrix of filters, . Powers of represent lengths of paths through the network. Only taking into account up to length one paths yields (for ):

(28)

where we have inverse Fourier transformed for convenience in the following derivation and .

Differentiating each motif with respect to time, using the fast-slow STDP theory Eq. (2) and inserting the first-order truncation of the cross-covariance functions, Eq. (7), yields:

(29)