A hybrid model for large neural network description

The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species with markedly different densities in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system), obtained through a limit process in which the number of neurons confined in a bounded region of the brain is sent to infinity. Specifically, in the discrete model each cell of the low-density populations is individually described by a set of time-dependent variables, whereas in the continuum model the high-density populations are described as a whole by a small set of continuous variables depending on space and time. Communications among populations, which translate into interactions between the discrete and the continuous models, are the essence of the hybrid model we present here. This approach has been validated by reconstructing the ensemble activity of the granular layer network of the Cerebellum, with a significant reduction in computational cost. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, travelling waves, center-surround and time-windowing.

2010 Mathematics Subject Classification. Primary: 34C60, 35K57; Secondary: 92C42, 05C90

Keywords: Neural networks, Hybrid models, conductance-based models, continuum models, Cerebellum.

Contact: Anna Cattani: anna.cattani@iit.it; Sergio Solinas: smgsolinas@gmail.com; Claudio Canuto: claudio.canuto@polito.it.

1 Introduction

Interesting phenomena in the brain often involve complex networks with an extremely large number of neurons. The description at the microscopic level of the whole network, i.e., the modelling of each single neuron and synapse, would lead to numerical models of prohibitive computational cost, even on the most advanced computers. The difficulties of such a description may be alleviated to some extent by identifying a hierarchy among interacting populations of neurons, and by using models with different resolution and cost for simulating the behaviour of different populations. Cell density may be a criterion to identify families of neurons and to partition the network in a multi-level manner, where each level corresponds to one or more species with comparable density. In the simplest situation of a two-level organization, this option leads to describing each neuron of the low-density population(s) by means of an ODE system, and to characterizing the high-density population(s) by exploiting a PDE system that describes the family as a continuum. The hybrid model collects the ODE and the PDE systems, as well as the fundamental interactions among them. Several efforts, which have resulted in the formalisation of different models, have been made to understand and reproduce the activity of high-density populations by reducing the degrees of freedom from many, i.e., the state variables of each neuron, to few. Mean-field, neural mass and neural-field models are some of the results of various "passage to the continuum" approaches. A review concerning these models can be found in [8]-[13]. The major difference between neural-field models - such as the one we are going to present - and the others lies in the fact that the former account for the spatio-temporal evolution of the variables, rather than just their temporal evolution.
A seminal formalisation of a neural-field model is proposed in [3]-[35]-[36], in which the macroscopic state variable is the mean firing rate. A more general neural-field model, not necessarily involving only firing-rate variables, is presented in [32].

We obtain a continuum model for the action potential of a dense population of neurons by starting from a discrete model and letting the number of neurons tend to infinity while keeping them confined in a bounded region. We identify limit operators, acting on the continuous variables, describing specific interactions: in particular, electrical couplings ("gap junctions") are modelled in the limit by the Laplace differential operator, as rigorously justified in [9]; on the contrary, chemical synaptic couplings produce non-local integral operators, i.e., spatial convolutions with suitable kernels (see e.g. Sect. 9.2 in [16]). Once the expressions of both the discrete and the continuum model have been set, we describe in a fairly general form how the two models reciprocally interact, producing a hybrid model: besides the terms in the equations describing interactions between "homogeneous" (i.e., discrete-discrete, or continuous-continuous) variables, new terms are added to account for the "heterogeneous" interactions (i.e., between discrete and continuous, or continuous and discrete, variables).

To validate our new method in a complete workflow, we applied it to a realistic computational problem: the reconstruction of the Cerebellum granular layer network (GLN). This brain area shows a simple network structure, yet it is capable of generating complex activity patterns. This network layer is densely populated by granule cells (GrCs) and sparsely by Golgi cells (GoCs), providing an optimal application for our modelling approach. The proposed hybrid model was specialized to the description of the interactions between the GrC and GoC populations in the Cerebellum. Interesting dynamics such as local microcircuit synchronization, center-surround and time-windowing, as already described in a previous and more biologically detailed model [30], are reproduced by the proposed model. Moreover, our model shows the emergence of travelling waves of network activity elicited by a specific input configuration.

2 Materials and Methods

2.1 The hybrid model

In this section, in order to introduce the hybrid model, we first show how to model each individual neuron belonging to the same population; here, intra-population communications are taken into account. Secondly, since the number of neurons even in a small brain area is often huge, we perform a continuum limit of the discrete model that describes single neurons, obtaining a continuous model. Finally, we present the hybrid model in which the discrete and the continuous models interact with each other.

Let us start by analysing how to describe the dynamics of each individual neuron $j$ in the network, with $j = 1, \dots, N$, where $N$ is the number of neurons in a population. Precisely, we consider three variables: the voltage-like variable $v_j$, the recovery variable $w_j$, and the variable $r_j$ which describes the fraction of open channels in the synapses. In the most general fashion, each neuron is influenced by other neurons in the network by means of electrical and chemical synapses, and its dynamics is also driven by terms that describe the basic properties of neural excitability. All these ingredients are taken into account in the following general model:

$$
\begin{cases}
\dfrac{dv_j}{dt} = F(v_j, w_j) + I^{el}_j + I^{ch}_j, \\[4pt]
\dfrac{dw_j}{dt} = G(v_j, w_j), \\[4pt]
\dfrac{dr_j}{dt} = a_r \, H(v_j - \theta)(1 - r_j) - a_d \, r_j,
\end{cases}
\tag{2.1}
$$
where $I^{el}_j$ is the input current that accounts for electrical synapses, and $I^{ch}_j$ is that for chemical synapses. In particular,

$$
I^{el}_j = \sigma \sum_{k \in E_j} w_{jk} \, (v_k - v_j), \qquad
I^{ch}_j = g \sum_{k \in C_j} w_{jk} \, r_k \, (v^{rev}_k - v_j),
\tag{2.2}
$$
where $E_j$ and $C_j$, resp., collect the indexes of neurons connected to the $j$-th one by means of electrical and chemical synapses, resp., $w_{jk}$ are positive weights describing the directed connection strength from $k$ to $j$, $\sigma$ is the diffusion coefficient, $g$ is the synaptic efficacy, and $v^{rev}_k$ is the reversal potential of the presynaptic neuron, whose sign determines the synapse nature, either excitatory or inhibitory. In [14] and [16], a detailed classification of synaptic reversal potentials, linked to distinct neurotransmitter/receptor pairs, is specified. Furthermore, among the wide variety of models which describe the basic properties of neural excitability, we select the FitzHugh-Nagumo model [17], phenomenologically extracted from the biophysically-based Hodgkin-Huxley model. Thus,

$$
F(v, w) = v \, (v - a)(1 - v) - w, \qquad
G(v, w) = \varepsilon \, (v - d \, w).
\tag{2.3}
$$
Here, $a$, $d$ and $\varepsilon$ are parameters chosen so that $v$ is a fast variable and $w$ is a slow one ($\varepsilon \ll 1$). Finally, in the third equation in (2.1), $a_r$ and $a_d$ are positive parameters describing the forward and backward rate constants for transmitter binding, $\theta$ is an a priori fixed threshold, and $H$ is the Heaviside function such that $H(x) = 1$ if $x \geq 0$ and $H(x) = 0$ otherwise. The model (2.1) obviously should be supplemented by suitable initial conditions for the variables $v_j$, $w_j$, $r_j$.
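To make the structure of (2.1)-(2.3) concrete, the following sketch evaluates the right-hand side of the discrete model for a whole population at once. It is only an illustration: the parameter values and the dense weight matrices `E` and `C` are hypothetical (a real network would use sparse connectivity), and the actual values used in our simulations are given in Section 3.

```python
import numpy as np

def discrete_rhs(v, w, r, E, C, sigma, g, v_rev,
                 a=0.25, d=1.0, eps=0.08, a_r=1.0, a_d=0.5, theta=0.3):
    """Right-hand side of the discrete model (2.1) for N neurons.

    E, C : (N, N) weight matrices of electrical and chemical synapses,
           with entry (j, k) = w_jk > 0 if presynaptic neuron k drives j.
    """
    # FitzHugh-Nagumo excitability terms (2.3)
    F = v * (v - a) * (1.0 - v) - w
    G = eps * (v - d * w)
    # electrical synapses (2.2): sigma * sum_k w_jk (v_k - v_j)
    I_el = sigma * (E @ v - E.sum(axis=1) * v)
    # chemical synapses (2.2): g * sum_k w_jk r_k (v_rev - v_j),
    # with a common reversal potential v_rev for simplicity
    I_ch = g * (C @ r) * (v_rev - v)
    # gating: r_j opens when the neuron's own voltage exceeds the threshold
    dr = a_r * np.heaviside(v - theta, 0.0) * (1.0 - r) - a_d * r
    return F + I_el + I_ch, G, dr
```

Note that the gating variable of neuron $j$ is driven by its own voltage, while the postsynaptic current uses the gating of the presynaptic neurons, exactly as in (2.1)-(2.2).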

In order to avoid prohibitive computational costs when the density of cells in a population is too high, we perform a "passage to the limit" as the number of neurons tends to infinity in (2.1). In this way, we capture the dynamics of a neuronal population as a whole by describing three continuous variables $\nu(x,t)$, $\omega(x,t)$ and $\rho(x,t)$ (having the same meaning as in (2.1)), where $x$ is the spatial variable. Specifically, in the limit case of $N \to \infty$ in a fixed and bounded spatial region $\Omega \subset \mathbb{R}^d$, with $d = 2$, the discrete model (2.1) leads to the following integro-differential system of equations (in which the $(x,t)$-dependence of each variable is ignored for simplifying notation):

$$
\begin{cases}
\dfrac{\partial \nu}{\partial t} = F(\nu, \omega) + \sigma \, \Delta \nu + g \, (\nu^{rev} - \nu) \displaystyle\int_{A(x)} \rho(y, t) \, dy, \\[4pt]
\dfrac{\partial \omega}{\partial t} = G(\nu, \omega), \\[4pt]
\dfrac{\partial \rho}{\partial t} = a_r \, H(\nu - \theta)(1 - \rho) - a_d \, \rho,
\end{cases}
\tag{2.4}
$$
supplemented by boundary conditions for $\nu$ and initial conditions for $\nu$, $\omega$, $\rho$. Here, $\sigma$ is the diffusion coefficient, $g$ is the synaptic efficacy, and $A(x)$ denotes a region centered in $x$. The whole electrical synapse term, i.e. $\sigma \, \Delta \nu$, is the result of two equivalent methods that lead to a non-trivial continuum limit, as shown in [9]. On the other hand, the integral form of the chemical synapse term, i.e. $g \, (\nu^{rev} - \nu) \int_{A(x)} \rho(y,t) \, dy$, is due to the fact that the set $C_j$ in (2.1) does not shrink to a point as $N \to \infty$, as explained in [16] and [11]. Furthermore, we refer to [11] for a discussion of the mathematical well-posedness of this model. From now on, in order to distinguish between the discrete and the continuum systems, variables in the continuum configuration will be indicated by Greek letters.
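On a uniform grid, the non-local chemical term in (2.4) is simply a spatial convolution of $\rho$ with the indicator of the region $A(x)$. A minimal one-dimensional sketch (the kernel radius, grid spacing and midpoint quadrature are illustrative assumptions, not the choices of our simulator):

```python
import numpy as np

def chemical_current(rho, nu, g, nu_rev, radius, h):
    """Continuum chemical term  g * (nu_rev - nu(x)) * integral of rho over A(x)
    on a uniform 1-D grid with spacing h, where A(x) is the interval of
    half-width `radius` centred at x (indicator kernel)."""
    n_k = int(round(radius / h))
    kernel = np.full(2 * n_k + 1, h)      # quadrature weights of the indicator
    integral = np.convolve(rho, kernel, mode="same")
    return g * (nu_rev - nu) * integral
```

For a constant $\rho \equiv 1$, the integral at an interior node is simply the measure of $A(x)$, here $(2 n_k + 1) h$.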

As already mentioned in the introduction, by comparing the cell densities we may diversify the description of the populations in the network. Specifically, this comparison determines whether a population is described by a set of discrete systems or by a continuous model. However, the key point is that neurons are linked to each other in a very intricate fashion depending on the brain area. It follows that signal transmission among populations, in addition to intra-population connectivity, is an important feature to be taken into account to explore the emergent network dynamics. The essence of the hybrid model lies in the coupling terms among different populations.

By considering for simplicity two populations only, on the one hand the set of cells in the low-density population is described by an ODE system:

$$
\begin{cases}
\dfrac{dv_j}{dt} = F(v_j, w_j) + I^{hom}_j + I^{het}_j + I^{ext}_j, \\[4pt]
\dfrac{dw_j}{dt} = G(v_j, w_j), \\[4pt]
\dfrac{dr_j}{dt} = a_r \, H(v_j - \theta)(1 - r_j) - a_d \, r_j,
\end{cases}
\tag{2.5}
$$

where

$$
I^{hom}_j = \sigma \sum_{k \in E_j} w_{jk} \, (v_k - v_j) + g \sum_{k \in C_j} w_{jk} \, r_k \, (v^{rev}_k - v_j)
\tag{2.6}
$$

takes into account inputs from other cells belonging to the same low-density population, whereas

$$
I^{het}_j = g_{het} \, (\nu^{rev} - v_j) \int_{A_j} \rho(y, t) \, dy
\tag{2.7}
$$

describes the signal transmission coming from the continuous population. Here, $x_j$ indicates the spatial position of the neuron labelled by $j$ from the discrete family, whereas $A_j = A(x_j)$ is the region occupied by the neurons from the continuous family whose synapses influence neuron $j$. The term $I^{ext}_j$ represents an external current coming from sources different from the two species considered here. On the other hand, the high-density population is characterized by a PDE system:


$$
\begin{cases}
\dfrac{\partial \nu}{\partial t} = F(\nu, \omega) + I^{hom}(x, t) + I^{het}(x, t) + I^{ext}(x, t), \\[4pt]
\dfrac{\partial \omega}{\partial t} = G(\nu, \omega), \\[4pt]
\dfrac{\partial \rho}{\partial t} = a_r \, H(\nu - \theta)(1 - \rho) - a_d \, \rho,
\end{cases}
\tag{2.8}
$$

where, similarly to (2.6),


$$
I^{hom}(x, t) = \sigma \, \Delta \nu + g \, (\nu^{rev} - \nu) \int_{A(x)} \rho(y, t) \, dy
\tag{2.9}
$$

concerns interactions within the continuum population, while


$$
I^{het}(x, t) = g_{het} \sum_{k \in D(x)} r_k(t) \, \big( v^{rev}_k - \nu(x, t) \big)
\tag{2.10}
$$

describes the interactions between species, where $D(x)$ collects the indexes of the discrete neurons whose synapses influence the continuum at the point $x$, and $I^{ext}(x, t)$ is an external current. We call hybrid the model constituted by systems (2.5)-(2.7) and (2.8)-(2.10).
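The heterogeneous terms (2.7) and (2.10) are the only place where the two models exchange information; once the continuum is discretized on a grid, both reduce to simple sums. The sketch below is schematic: the masks encoding the regions $A_j$, the index sets $D(x)$, and all parameter values are hypothetical.

```python
import numpy as np

def het_currents(v_d, r_d, rho_c, nu_c, A_masks, D_lists,
                 g_het, nu_rev, v_rev, cell_area):
    """Heterogeneous couplings of the hybrid model on a discretized continuum.

    A_masks[j] : boolean mask of grid nodes lying in the region A_j that
                 synapses onto discrete cell j            (term (2.7)).
    D_lists[m] : indices of discrete cells synapsing onto grid node m,
                 i.e. the set D(x_m)                      (term (2.10)).
    """
    # continuum -> discrete (2.7): quadrature of rho over A_j
    I_d = np.array([g_het * (nu_rev - v_d[j]) * rho_c[A_masks[j]].sum() * cell_area
                    for j in range(len(v_d))])
    # discrete -> continuum (2.10): sum of presynaptic gatings over D(x_m),
    # with a common reversal potential v_rev
    I_c = np.array([g_het * (v_rev - nu_c[m]) * r_d[D_lists[m]].sum()
                    for m in range(len(nu_c))])
    return I_d, I_c
```

In the application of Sect. 2.2, the first current excites the Golgi cells and the second one carries the Golgi inhibition back to the granule-cell continuum.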

2.2 Application to the Cerebellum granular layer network

The formalization of the hybrid model developed above is suitable for describing a variety of networks in the brain characterized by a large difference in their population densities. Among others, the olfactory bulb, the striatum, and the granular layers of the dorsal cochlear nucleus and of the Cerebellum cortex are suitable to be efficiently represented with our new method. Among these examples, the Cerebellum cortex is the most extensively studied and modelled network.

The network structure can be abstracted following previous modelling works [29, 30], while keeping it sufficiently adherent to the biological reality so as to show the versatility of the method in reproducing neural network dynamics observed in brain tissues. Specifically, the limit used to push to the continuum the representation of the high-density neuronal population could raise issues about the reproducibility of network dynamics generated by a network composed of many small, independent units, yielding unwanted diffusion of the activity across the network. Such diffusion cannot yet be excluded in the nature of real brain tissues, but it was excluded in the biologically realistic simulations [30]. Moreover, the practical test should highlight the cooperation of the discrete units with the continuum model.

Interest in the Cerebellum dates back to the morphological studies carried out by Ramón y Cajal and Camillo Golgi, the electroencephalography studies carried out by Adrian [1], and the motor impairments manifest in World War I and II patients with cerebellar lesions studied by Holmes [19]. Only later on did the fine Cerebellum structure inspire theories linking the network structure to a function, starting with Braitenberg [7], Marr [25], Albus [2] and Ito [20], a research work yet to be accomplished. Its peculiar structure comprises series of highly regular, repeating units, each of which contains the same basic microcircuit. The similarity of the repeating units, from the architectural and physiological perspectives, implies that different regions perform similar computational operations on different inputs. These inputs originate from different parts of the brain, spinal cord, and sensory system projecting directly into the Cerebellum. In turn, the Cerebellum projects to all motor systems. Although the regularity of the Cerebellum facilitates its description, it remains a network able to generate complex dynamics whose potentialities and functionalities are not yet fully understood.

Few cellular populations in the Cerebellum cortex compose this geometrically regular network; they are localised in three well distinct layers called molecular, Purkinje, and granular. The latter is densely populated by GrCs and sparsely by GoCs. The key point supporting the application of our new modelling method is that the number of GoCs significantly differs from that of GrCs: GoCs are very few compared to GrCs [21, 30, 6]. Thus, by virtue of this strong density difference, the exploitation of combined discrete and continuum models becomes interesting. In particular, the variables $(v_j, w_j, r_j)$ describe each GoC through (2.5), while $(\nu, \omega, \rho)$ portray the GrC species as a whole by means of (2.8). We focused our test study on reproducing the transformation imposed on the input signals by the Cerebellum granular layer network (GLN). The ultimate output of the GLN provides excitatory input, through the GrC axons in the molecular layer, to the Purkinje cells, which constitute the only output pathway of the Cerebellum cortex. The GLN is composed of two main network pathways, a feedforward path and a loop or feedback path, where both granule cells (GrCs) and Golgi cells (GoCs) receive external excitatory inputs from the Mossy fibers (MFs) originating from the precerebellar nuclei neurons. MFs excite both cell populations, duplicating their input into two pathways. Along the feedback path, MFs excite GrCs; these excite GoCs through the ascending axon and the parallel fibers (PFs), and GoCs, in turn, inhibit GrCs. In compact form: MF-GrCs-PFs-GoCs-GrCs. The second, or feedforward, path is constituted by the excitatory input from MFs to GoCs, which terminates by inhibiting GrCs. This pathway is MF-GoCs-GrCs.

Figure 2.1: Connection topology between GrCs and GoCs from a postsynaptic neuron perspective: GrCs linked to the $j$-th GoC (left) and GoCs which are connected to the GrC continuum at the point $x$ (right).

Inspired by assumptions in [28] and for modelling purposes, we consider the two populations as belonging to two-dimensional parallel layers, as described in Fig. 2.1. The bottom one is constituted by the GrC continuum and the upper one collects the GoCs. A third layer, above them, collects the PFs. In reality, GoC somata and GrCs are located in the granular layer just mentioned, while the site where GoC dendrites receive input from the GrC axons (PFs) is in the molecular layer.

Let us now define our model topology and connectivity in the GLN, taking into account the fine structure of the biological network. The model was built to reproduce a GLN fraction with size 1500 µm along the sagittal axis, 500 µm along the transverse axis, and 100 µm in thickness. However, in our representation the thickness of this flat volume is disregarded. In our model all spatial units are normalized to the network edge length. GoCs and GrCs are assumed to belong to two rectangular domains of equal size, one on top of the other (Fig. 2.1). The projection of MFs inside the GLN shows an abundant parasagittal branching. Each MF innervates multiple Cerebellum lobules. Within the lobule, local branching gives origin to small clusters of about 8 MF terminals in a rectangular area of 200 µm along the transverse axis and 150 µm along the sagittal axis [31, 30] (data from the rat Cerebellum). About 50 GrCs project their dendrites (maximum length 40 µm, mean length 13.6 µm) onto a MF terminal. Therefore, the activation of a single MF should give rise to small spots of activated GrCs with response intensity degrading from centre to periphery. In our model, the GrC population is represented as a continuous sheet split into vertices by tessellation, allowing the calculation of numerical solutions. In this configuration, we assume that MF terminals provide excitatory input to a subset of the vertices. A diffusive term in the PDE spreads the input to the neighbouring vertices, the intensity fading to zero at a distance equal to 40 µm. GoCs receive excitatory input from MF terminals over a wider area, as GoC dendrites are longer than GrC dendrites and span a larger GLN volume [15].

Each GoC arborized axon reaches the granular layer throughout a parallelepiped volume [5] elongated along the sagittal direction, whose projection on the two-dimensional granular layer is a rectangle 650 µm long and 180 µm wide. A GoC sparsely inhibits the GrCs lying inside this rectangle. GrC axons, i.e., PFs, ascend to the molecular layer, bifurcate, and run parallel to each other in either direction along the transverse axis, crossing the GoC apical dendrites. Each PF synapses onto many GoC dendrites along its path. The GoC apical dendrites branch out in all directions, sampling PF input from a cylinder in the molecular layer, represented in the original model by a circle of radius 50 µm [15]. Therefore, in our model a GoC provides inhibitory input to all the GrCs located within a rectangle elongated along the sagittal axis, 650 µm long and 180 µm wide. Each GrC influences all GoCs in a rectangle elongated along the transverse axis, covering the entire GLN extension, and narrow along the sagittal axis, covering a thin stripe of the GLN on either side of the PF (see Fig. 2.1). Notably, GoCs receive chemical excitatory synapses from GrCs. Furthermore, GoCs are linked to each other by gap junctions connecting their apical dendrites [33]. This electrical coupling is represented in our model by a diffusion term between the vertices of the discrete model, i.e., in a first approximation a GoC is coupled only with its nearest neighbours.

As already mentioned above, the Golgi cell system can be described by the model (2.5); the general expression of the functions $I^{hom}_j$ and $I^{het}_j$, given in (2.6) and (2.7), takes here the following specific form:

$$
I^{hom}_j = \sigma \sum_{k \in E_j} (v_k - v_j), \qquad
I^{het}_j = g \int_{A_j} \rho(y, t) \, \big( \nu^{rev}(y) - v_j \big) \, dy,
\tag{2.11}
$$

where $E_j$ collects the nearest-neighbour GoCs coupled to the $j$-th one by gap junctions.
Moreover, $I^{ext}_j$ is the excitatory input due to the MFs. Let us recall that, in (2.11), the reversal potential may depend upon the presynaptic neurons and, thus, it must be included in the integral term. However, since only GrCs influence GoCs by means of excitatory chemical synapses, we suppose $\nu^{rev}$ to be constant and we bring it out of the integral, obtaining

$$
I^{het}_j = g \, (\nu^{rev} - v_j) \int_{A_j} \rho(y, t) \, dy.
$$

The set $A_j$ determines the area containing those GrCs which synapse onto the $j$-th Golgi cell. Taking into account that GrCs excite GoCs through the PFs, as specified above, we consider $A_j$ as a thin rectangle whose horizontal symmetry axis is determined by the projection of the $j$-th cell (see Fig. 2.1, left). The rectangle area is chosen by fixing a reasonably small parasagittal extension.

Furthermore, concerning the coupling term between the two populations, it is well known that GrCs receive inhibitory chemical synapses from GoCs. Thus, the GrC continuum is described by the model (2.8), where the functions $I^{hom}$ and $I^{het}$, introduced in (2.9) and (2.10), take the following specific form:

$$
I^{hom}(x, t) = \sigma \, \Delta \nu(x, t), \qquad
I^{het}(x, t) = g_{het} \, \big( v^{rev} - \nu(x, t) \big) \sum_{k \in D(x)} r_k(t).
\tag{2.12}
$$
As above, the reversal potential $v^{rev}$ of the presynaptic GoCs is supposed to be constant and then it is not involved in the summation. In order to consider inputs from Mossy fibers, we set $I^{ext}$ to the MF input current. The discrete set $D(x)$ collects the indexes of the GoCs which influence the GrC continuum at the point $x$, thus describing the connection topology. According to [5], a GoC axon reaches a rectangular region in the granular layer, centered on its soma; therefore, a possible choice is:

$$
D(x) = \{ \, j : x \in R_j \, \},
\tag{2.13}
$$

where $R_j$ denotes such a rectangle, centered on the projection of the $j$-th GoC on the GrC plane and elongated along the sagittal direction (see Fig. 2.1, right). Since cells are described by the FitzHugh-Nagumo model, it is important to recall that the threshold $\theta$ is not involved in the single-neuron dynamics but concerns presynaptic neurons at the synapse level. Indeed, when the presynaptic neuron exceeds the threshold, neurotransmitter release starts and influences the postsynaptic one.
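The set $D(x)$ of (2.13) amounts to a rectangle membership test. A minimal sketch (the axis convention, with the first coordinate sagittal and the second transverse, and the positions are illustrative assumptions):

```python
import numpy as np

def connectivity_set(x, goc_positions, length, width):
    """Indices D(x) of the GoCs whose axonal rectangle R_j (elongated along
    the sagittal axis, centred on the soma projection) contains the point x."""
    dx = np.abs(goc_positions[:, 0] - x[0])   # sagittal offsets
    dy = np.abs(goc_positions[:, 1] - x[1])   # transverse offsets
    return np.where((dx <= length / 2) & (dy <= width / 2))[0]
```

With the sizes of Sect. 2.2 (a rectangle 650 µm by 180 µm), one would pass the corresponding lengths in normalised units.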

We close this section with a few words about the numerical treatment of our model. Concerning the GoCs - which form a discrete set - they are placed at the vertices of a quasi-uniform triangulation of the upper domain; we use the triangular mesh generator BBTR, described in [4], with the mesh refinement parameter chosen to yield the desired number of vertices (RefiningOptions parameter set to 0.01; Fig. 3.1). On the other hand, the GrCs - which form a continuum in our model - are described by a set of partial differential equations that need to be discretized in space. To this end, we resort to a classical second-order centered finite difference method (see, e.g., [27]). In particular, we consider 24000 nodes in the domain, lying on a regular grid, to represent the 300,000 GrCs. Therefore, using this grid size, each vertex represents 12 or 13 GrCs. However, the results of the simulations turn out to be nearly independent of the GrC grid refinement, as will be documented at the end of Section 3.2. At last, time integration of the resulting coupled system of ordinary differential equations is accomplished by the MATLAB routine ODE45. We remark that the spatial discretization might be accomplished by finite elements instead of finite differences, thus allowing for the easy use of unstructured grids that adapt themselves to the formation of localized patterns; this will be the object of future work.
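Since our simulator is written in MATLAB, we do not reproduce it here; the following Python sketch only illustrates the two numerical ingredients just named, namely a second-order centred five-point Laplacian (with zero-flux boundaries, an assumption for this illustration) and an explicit Runge-Kutta integrator in the spirit of ODE45 (SciPy's RK45):

```python
import numpy as np
from scipy.integrate import solve_ivp

def laplacian_2d(u, h):
    """Second-order centred five-point Laplacian on a uniform grid with
    spacing h; zero-flux (Neumann) boundaries via edge replication."""
    p = np.pad(u, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u) / h**2

def diffusion_rhs(t, y, n, h, sigma):
    """Pure diffusion of the continuum voltage: d(nu)/dt = sigma * Lap(nu)."""
    return sigma * laplacian_2d(y.reshape(n, n), h).ravel()

# usage sketch:
#   sol = solve_ivp(diffusion_rhs, (0.0, 0.5), nu0.ravel(),
#                   args=(n, h, sigma), method="RK45")
```

In the full simulator, the right-hand side would of course also collect the reaction terms (2.8) and the coupling currents (2.12).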

3 Results and Discussion

3.1 Oscillatory activity in the granular layer

Numerical simulations have been performed with the aim of validating the capability of the hybrid network model, composed of (2.5)+(2.11) and (2.8)+(2.12), to reproduce the GLN activity simulated in a biologically realistic model [30]. The network size is equivalent to a box with 500 µm edge length along both the transverse and sagittal axes and 100 µm thickness, containing the cubic volume (500 µm edge length) of brain tissue simulated in [30].

Figure 3.1: Domain decompositions obtained by exploiting the triangular mesh generator BBTR in [4]. The RefiningOptions parameter is set to 0.01, leading to a sparse mesh.

Inspired by the orders of magnitude of the parameters in [28, 30], we set:


for the Golgi cell discrete model, and


for the Granular cell continuous one. In particular, the MF input current is applied to a subset of GrC nodes randomly chosen with uniform distribution. It is well known that MFs provide input to GoCs as well as to GrCs; since in the real GLN GoCs also receive excitatory input from MFs, we assume that a fraction of the GoCs receives the same external current. The GoC current is applied for the whole simulation, while the MF current to the GrCs is maintained active from a later onset time. Both the GrCs and the GoCs which receive the external current are randomly chosen with uniform distribution. The thresholds for GoCs and GrCs are both set to the same value. In the figures, the GoC potentials are described with vertical bars, while the GrC dynamics is shown with a continuous surface.

A portrait of the GoC-GrC dynamics has been obtained by exploiting (2.5)+(2.11) and (2.8)+(2.12), assuming the topology described by (2.13). The excitatory input delivered by the MFs to the GrCs drives their activity above threshold and induces an increase in the GoC potentials. The subsequent inhibition elicited in the GrCs by the GoC inhibitory feedback loop (MF-GrCs-PFs-GoCs-GrCs) suppresses the GrC activity, and the cycle restarts. The same local microcircuit synchronization arises in the biologically realistic model of reference [30], and it is a characteristic dynamics observed in the GLN in vivo [34] and in models [22]. This dynamics is replicated with a specific period; some significant snapshots are shown in Fig. 3.2. Moreover, at a later time of the simulation, the synchronous dynamics spontaneously converts into an interesting regime where excitatory waves travel in the whole domain, involving both GoCs and GrCs, see Fig. 3.2.

Figure 3.2: Ensemble dynamics in the hybrid model. After an initial period of initialization, a synchronous phenomenon within each population arises and the network activity shows oscillations at a well-defined frequency. After a few cycles a travelling wave phenomenon arises. The oscillatory frequency is unaffected by the spontaneous emergence of the waves. The waves of network activity propagate along the sagittal axis. GrCs are represented with the coloured continuous graph; GoCs are described with bars showing potentials multiplied by a constant factor for graphical reasons.

3.2 Center-surround and time-windowing

Over recent years, several studies on the GoC-GrC network have focused on the analysis of the integration of excitatory and inhibitory input by GrCs [23, 24, 18, 26, 30]. To further validate our modelling reconstruction, we focused on reproducing the spatial and temporal interaction of excitation and inhibition in the GLN, following the work presented in [30]. According to [18, 30], the input delivered by a small bundle of MFs in the GLN elicits the activation of a cluster of GrCs, a spot of limited width at a given fraction of the peak amplitude [23]. The spot is limited in size and in time by the properties of the feed-forward and feed-back inhibitory loops, due to the GoC integration properties and the arrangement of their axons. This phenomenon, termed center-surround and time-windowing in [12], is the result of the mismatch between the small area excited by the MFs and the wider area inhibited by the GoCs activated, directly and indirectly (through the GrCs), by the same MFs, in combination with the inherent delay of the inhibitory loops.

This section is devoted to presenting noticeable center-surround and time-windowing phenomena reproduced by models (2.5) and (2.8). Furthermore, significant comparisons with the results in [30] are presented; the aim of such comparisons is to show that our hybrid model reproduces the same dynamics shown in the reference articles in the field. Now we set:


for the Golgi cell discrete model, and


for the Granular cell continuous one.

The activation of a spot in the network centre was achieved in the original model by activating the MF terminals located within a sphere of radius equal to 20 µm in the network centre. Considering that the average length of the GrC dendrites was set to 14 µm, this yields an overall excited area with a radius of about 34 µm. In the simulations we ran to reproduce the impulse response of the GLN, we mimicked this activation by providing excitatory input to the GrC vertices within a circle with radius equal to 34 µm (in our normalised units) located in the network centre. The simulation reproduced an activated spot in the network centre of the same size shown in [30] (data not shown). In a second simulation, we increased the radius of the activated area to 50 µm in order to achieve a spot of the width reported in [23], as shown in Figure 3.3.

Figure 3.3: Snapshots describing the center-surround phenomenon. No excitatory inputs from the MFs reach the GoCs; on the contrary, the GoCs are excited by the GrCs through the PFs. In turn, each active GoC inhibits the GrCs lying on a thin rectangle. The stimulus was switched on and off at fixed times.

Let us assume that the connection topology is again described by (2.13), and let us consider MFs exciting the GrCs in a circle located in the centre of the domain. Figure 3.4 shows the GLN response to a pulse stimulus when the inhibitory connections were left active or blocked, together with the difference between the two responses. The center-surround organisation of the inhibitory projections shapes the GLN response in space, as is evident from the enlargement of the active spot when those connections were blocked.

Figure 3.4: The GLN was activated by a 10 ms pulse delivered by the MFs to the GrCs. The GLN activation at time 7 ms from the initiation of the stimulus shows the maximal activation yielded by the excitatory input (E peak; upper left panel). After 6 ms the GLN activation fades off due to the emergence of the inhibitory feedback (upper right panel). After the block of the inhibitory synapses, the peak increases in amplitude and extension. The amount of inhibition is calculated as the change in GLN amplitude due to the block of inhibition at 13 ms from the stimulus initiation (lower left panel). The center-surround is represented, as in [30], as the difference between the peak and the inhibition (lower right panel).

In order to compare our results with those shown in Fig. 5 of [30], let us reproduce in Fig. 3.4 the same computational steps used to evaluate the effect of inhibition on the GLN activation. After the onset of the MF input, the GLN initiates its response with a short delay, reaching its maximal activation, indicated as E peak in Fig. 3.4. After 6 ms the GLN activation fades off due to the emergence of the inhibitory feedback, and we choose this time to measure the peak. Blocking the inhibitory synapses, the peak increases in amplitude and extension. As in [30], the amount of inhibition is calculated as the change in GLN activity amplitude due to the block of inhibition. The center-surround is represented, as in [30], as the difference between the peak and the inhibition.
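The bookkeeping of this comparison is elementary and can be stated as a formula: if $A_{ctrl}$ and $A_{blk}$ denote the network activation at the measurement time with inhibition active and blocked, the inhibition map is $A_{blk} - A_{ctrl}$ and the center-surround map is the excitatory peak minus that inhibition. A sketch on a hypothetical one-dimensional activity profile:

```python
import numpy as np

def center_surround_map(e_peak, act_control, act_blocked):
    """Center-surround map following the comparison with [30]:
    inhibition = activity gained when inhibitory synapses are blocked,
    map        = excitatory peak minus that inhibition."""
    inhibition = act_blocked - act_control
    return e_peak - inhibition
```

On a profile where the inhibition is stronger in the surround than at the centre, the map is positive at the centre and negative in the surround, which is exactly the center-surround signature.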

Let us recall that our model, constituted by (2.5) and (2.8), has been designed under strong simplifying assumptions that do not allow us to take into account the wide variety of phenomena occurring in the single cell and in the whole network. Furthermore, the GrC layer has been described as a continuum. Nonetheless, the remarkable result is that our model is able to reproduce the benchmark dynamics, at least in the significant time range in which the center-surround phenomenon arises. Concurrently, the delayed activation of the GoCs allows the response of the GrCs to the stimuli to survive until the GoC inhibition arises. This configures a time window in which the GrCs are allowed to transfer their activity to the subsequent network layers. The intervention of the GoC inhibition closes this window, resetting the GrC activity and making the cells ready to reliably transmit a new stimulus.

Finally, we conclude the present section by stressing that the simulations provided in this paper turn out to be nearly independent of the grid refinement of the GrC continuous population. Indeed, focusing on the framework that describes the center-surround phenomenon, we exhibit a comparison among the solutions produced by the model with an increasing number of nodes in the space discretization of the GrC population. In Fig. 3.5, the evolution in time of the membrane potential of two cells in the domain is shown for different values of the spatial resolution. Convergence is clearly documented, thereby providing a sound background for the use of our numerical simulator.

Figure 3.5: Grid convergence for different structured grid resolutions of the continuous model for GrCs: 64, 256, 1024, 4096 nodes. The left panel shows the membrane potential as a function of time for the node located at the centre of stimulation, i.e., subject to the excitatory input. The right panel shows the same plot for a node outside the stimulated area, receiving only the inhibitory input indirectly elicited by the feedback inhibitory loop.

3.3 Computational comparison

The computational performance of our new modelling method was assessed by running an equivalent representation of a portion of the GLN in both simulators: NEURON [10] and our hybrid-model simulator. The reference simulation is the one reproducing the center-surround effect in [30]. On the one hand, the full model simulation of 250 ms of network activity in [30] (run using the code available at [29]) required about 428 s on an Apple MacBook Pro (Intel Core 2 Duo) for a network of 4001 GrCs and 27 GoCs. On the other hand, for an equivalent 250 ms of activity, our network of GrC vertices and GoCs completed roughly 6.7 times faster than the NEURON simulation. We must also recall that the output of our simulator is immediately available for visualization in MATLAB, whereas the output generated by the NEURON simulator requires additional minutes of post-processing before it can be visualized. This analysis quantitatively confirms the reduced computational cost of employing our simplified model instead of a detailed one, without losing information about such fundamental spatio-temporal activity as the center-surround and the time-windowing. Further savings in simulation time can be expected from improvements to our code; the most significant will consist in translating our routines into a compiled rather than interpreted language, i.e., C rather than MATLAB, and in restructuring the code to take advantage of the multithreading and parallelization features of the C programming language.
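The reported figures can be checked with a one-line computation. Note that the 64 s hybrid run time below is a hypothetical value chosen to be consistent with the reported ~6.7x speedup; only the 428 s NEURON timing comes from [30].

```python
def speedup(t_reference_s, t_hybrid_s):
    """Wall-time ratio between the reference (NEURON) and hybrid simulations."""
    return t_reference_s / t_hybrid_s

# 428 s is the NEURON timing reported in [30]; 64 s is a hypothetical
# hybrid-simulator timing consistent with the reported ~6.7x speedup.
ratio = speedup(428.0, 64.0)
```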

4 Conclusions

With the aim of efficiently describing the dynamics of neuronal populations having a strong density difference in specific brain areas, the present work collects new results alongside those presented in [9]. We started by stating the discrete conductance-based model (2.1), which describes the variation in time of the single-cell membrane potential due to both electrical and chemical synapses. Afterwards, the continuum model was derived: by letting the number of neurons tend to infinity, we arrived at the complete model (2.4). The two models, discrete and continuous, were then coupled to describe populations whose densities differ significantly in specific areas of the brain, allowing us to formalize the hybrid model. Specifically, each cell of the low-density population was modelled by the discrete model, whereas the whole high-density population was described by the continuum model. Communications among populations, which translate into interactions between the discrete and the continuous models, are the essence of the hybrid model we presented. Such an approach, which may lead to a significant computational cost reduction, was applied to the Golgi-Granular network in the Cerebellum. Interesting dynamics such as synchronization, travelling waves, center-surround and time-windowing were reproduced by the hybrid model. The latter two dynamics were compared with recent results in the literature devoted to this specific network, confirming the capability of our approach to reproduce significant dynamics.

Proceeding along the path traced here, some improvements should be taken into account in forthcoming work. A major objective is to assess how much the network behaviours reproduced here depend on the specific properties of the FitzHugh-Nagumo single-cell description. Moreover, one should evaluate whether a different single-cell model is able to reproduce other significant behaviours, such as resonant dynamics. Finally, in order to make the model closer to reality, future work should include plasticity of the communication strength among neurons.
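Since the single-cell description is of FitzHugh-Nagumo type [17], the dependence of the network behaviours on its properties could be probed starting from a minimal implementation such as the sketch below; the parameter values (a, b, eps, I) are illustrative textbook ones, not those of our simulator.

```python
# Minimal FitzHugh-Nagumo cell [17]; parameters are illustrative textbook
# values, not those used in the hybrid simulator.
a, b, eps, I = 0.7, 0.8, 0.08, 0.5

def fhn_step(v, w, dt=0.01):
    dv = v - v**3 / 3.0 - w + I      # fast membrane-potential variable
    dw = eps * (v + a - b * w)       # slow recovery variable
    return v + dt * dv, w + dt * dw

v, w = -1.0, 1.0
trace = []
for _ in range(20000):               # 200 time units of explicit Euler
    v, w = fhn_step(v, w)
    trace.append(v)
```

With these parameters the cell sits past the excitability threshold and fires repetitively; sweeping a, b, eps and I (or swapping `fhn_step` for a different single-cell model) would be the natural starting point for the robustness study mentioned above.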


Acknowledgements

Part of this work was done by the first author during the Ph.D. programme at the Polytechnic Institute of Turin.

We would like to thank Thierry Nieus and Diego Fasoli for enlightening discussions on various aspects of the present work.

The authors declare that there is no conflict of interest regarding the publication of this paper.

References
  1. E. D. Adrian. Discharge frequencies in the cerebral and cerebellar cortex. Proc Phys Soc, 83:32–33, 1935.
  2. James S. Albus. A theory of cerebellar function. Mathematical Biosciences, 10(1-2):25–61, 1971.
  3. S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27(2):77–87, 1977.
  4. A.M.A. Barbera and S. Berrone. BBTR: an unstructured triangular mesh generator. Quaderni del Dipartimento di Matematica, Politecnico di Torino, 2008.
  5. N. H. Barmack and V. Yakhnitsa. Distribution of granule cells projecting to focal Purkinje cells in mouse uvula-nodulus. Neuroscience, 156(1):216–221, 2008.
  6. G. Billings, E. Piasini, A. Lorincz, Z. Nusser, and R. A. Silver. Network structure within the cerebellar input layer enables lossless sparse encoding. Neuron, pages 1–15, August 2014.
  7. V. Braitenberg and R. P. Atwood. Morphological observations on the cerebellar cortex. Journal of Comparative Neurology, 109(1):1–33, 1958.
  8. P. C. Bressloff. Spatiotemporal dynamics of continuum neural fields. J. Phys. A: Math. Theor., 45:033001, 2012.
  9. C. Canuto and A. Cattani. The derivation of continuum limits of neuronal networks with gap-junction couplings. Network and Heterogeneous media, 9(1):111–133, March 2014.
  10. N. T. Carnevale and M. L. Hines. The NEURON Book, volume 30. Cambridge University Press, 2006.
  11. A. Cattani. “Multispecies” models to describe large neuronal networks. PhD thesis, Politecnico di Torino, Italy, DOI:10.13140/RG.2.1.1819.0882, March 2014.
  12. E. D’Angelo and C. I. De Zeeuw. Timing and plasticity in the cerebellum: focus on the granular layer. Trends Neurosci, 32(1):30–40, 2009.
  13. G. Deco, V. K. Jirsa, P. A. Robinson, M. Breakspear, and K. J. Friston. The dynamic brain: From spiking neurons to neural masses and cortical fields. PLoS Computational Biology, 4(8):1–35, 2008.
  14. A. Destexhe, Z. F. Mainen, and T. J. Sejnowski. Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. Journal of Computational Neuroscience, 1(3):195–230, 1994.
  15. S. Dieudonné. Submillisecond kinetics and low efficacy of parallel fibre-Golgi cell synaptic currents in the rat cerebellum. J Physiol, 510 (Pt 3):845–866, 1998.
  16. G. B. Ermentrout and D. H. Terman. Mathematical Foundations of Neuroscience. Springer, 1st edition, July 2010.
  17. R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1(6):445–466, 1961.
  18. D. Gandolfi, P. Pozzi, M. Tognolina, G. Chirico, J. Mapelli, and E. D’Angelo. The spatiotemporal organization of cerebellar network activity resolved by two-photon imaging of multiple single neurons. Frontiers in cellular neuroscience, 8(April):92, 2014.
  19. G. Holmes. The symptoms of acute cerebellar injuries due to gunshot injuries. Brain, 40, 1917.
  20. M. Ito. The cerebellum and neural control. Raven Pr, 1984.
  21. L. Korbo, B. B. Andersen, O. Ladefoged, and A. Møller. Total numbers of various cell types in rat cerebellar cortex estimated using an unbiased stereological method. Brain research, 609(1-2):262–268, 1993.
  22. R. Maex and E. De Schutter. Synchronization of Golgi and granule cell firing in a detailed network model of the cerebellar granule cell layer. Journal of neurophysiology, 80(5):2521–2537, 1998.
  23. J. Mapelli, D. Gandolfi, and E. D’Angelo. Combinatorial responses controlled by synaptic inhibition in the cerebellum granular layer. Journal of neurophysiology, 103(1):250–261, 2010.
  24. J. Mapelli, D. Gandolfi, and E. D’Angelo. High-pass filtering and dynamic gain regulation enhance vertical bursts transmission along the Mossy Fiber pathway of cerebellum. Frontiers in cellular neuroscience, 4(May):14, 2010.
  25. David Marr. A theory of cerebellar cortex. The Journal of Physiology, 202(2):437–470, June 1969.
  26. T. R. Nieus, L. Mapelli, and E. D’Angelo. Regulation of output spike patterns by phasic inhibition in cerebellar granule cells. Front Cell Neurosci, 8(August):1–18, 2014.
  27. A. Quarteroni, R. Sacco, and F. Saleri. Numerical Mathematics. Texts in applied mathematics. Springer, 2000.
  28. F. Simoes de Souza and E. De Schutter. Robustness effect of gap junctions between Golgi cells on cerebellar cortex oscillations. Neural Systems and Circuits, 1(7):1–19, 2011.
  29. S. Solinas, T. Nieus, and E. D’Angelo. A realistic large-scale model of the cerebellum granular layer predicts circuit spatio-temporal filtering properties. SENSOPAC, http://www.sensopac.org/fileadmin/user/gl_net.zip, 2010.
  30. S. Solinas, T. Nieus, and E. D’Angelo. A realistic large-scale model of the cerebellum granular layer predicts circuit spatio-temporal filtering properties. Front Cell Neurosci, 4(12):1–17, 2010.
  31. F. Sultan and D. Heck. Detection of sequences in the cerebellar cortex: Numerical estimate of the possible number of tidal-wave inducing sequences represented. Journal of Physiology Paris, 97(4-6):591–600, 2003.
  32. J. Touboul. Propagation of chaos in neural fields. The Annals of Applied Probability, 24(3):1298–1328, 2014.
  33. K. Vervaeke, A. Lorincz, P. Gleeson, M. Farinella, Z. Nusser, and R. A. Silver. Rapid desynchronization of an electrically coupled interneuron network with sparse excitatory synaptic input. Neuron, 67(3):435–51, August 2010.
  34. B. P. Vos, R. Maex, A. Volny-Luraghi, and E. De Schutter. Parallel fibers synchronize spontaneous activity in cerebellar Golgi cells. J Neurosci, 19(11):RC6, 1999.
  35. H.R. Wilson and J.D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J, 12:1–24, 1972.
  36. H.R. Wilson and J.D. Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13:55–80, 1973.