Building a Dynamical Network Model from Neural Spiking Data: Application of Poisson Likelihood

Abstract

Research has shown that the information transmitted by biological neurons is encoded in the timing of successive action potentials or in their firing rate. Moreover, in-vivo operation of the neuron makes measurement difficult, so continuous data collection is restricted. For these reasons, the classical mean-square estimation techniques frequently used in neural network training are very difficult to apply. In such situations, point processes and the associated likelihood methods can be beneficial. In this study, we present how certain methods can be applied to use the stimulus-response data obtained from a neural process in the mathematical modeling of a neuron. The study is theoretical in nature and is supported by simulations. In addition, it is compared to a similar study performed on the same network model.

Keywords:
Neural Spiking, Continuous Time Recurrent Neural Networks, Poisson Processes, Likelihood Methods


1 Introduction

Modeling of neurons goes back to the mid-1900s. The well-known Hodgkin-Huxley (HH) model was derived to explain the quantitative behavior of the electrochemical activities in the squid giant axon Hodgkin and Huxley (1952). After this revolutionary development numerous studies followed, such as the Morris-Lecar Morris and Lecar (1981), FitzHugh-Nagumo FitzHugh (1961) and Hindmarsh-Rose Hindmarsh and Rose (1982, 1984) models. The models developed after HH either add extra details (such as calcium channel dynamics) or simplify the overall model. Models targeting simplification do this either by explaining one major function and discarding the other features, or by lumping all channels into a single variable. For example, Morris and Lecar (1981) describe the activation mechanism with a single recovery variable (though many biophysical parameters and variables remain). On the other hand, models such as FitzHugh (1961); Hindmarsh and Rose (1982, 1984) explain behavioral details such as repetitive firing, bursting, etc. Thus they involve no physical parameters, or only a very few of them. Depending on the research or application, the existence of physical parameters in a neuron model may or may not be necessary.

In fact, one major criterion in building or selecting a neuron model is the answer to the question: how will one collect the data? In the development of models such as HH, the data are collected through a voltage-clamp experiment Pehlivan (2009), a standard method in electrophysiology. However, it requires the neuron to be isolated (an in-vitro experiment). In an in-vivo application, placing an electrode on the neuron's membrane is risky, as it will alter the operation of the cell (the propagation of the action potential might be delayed, etc.), which would be disastrous. Measurement without touching the neuron in question is possible, but one will then not be able to measure the level of the action potential. However, one can still capture the individual peaks of successive action potentials thanks to the local current flows through the membrane. This yields an array of time values defining the locations of the peaks of the successive action potentials. This array is often called a neural spike train, as each action potential is considered a sudden voltage jump. There is no continuous data collection here: we only have timing information, not potential levels. At first glance one may think that this data has no meaning, but the reality seems different. Studies such as Shadlen and Newsome (1994) showed that this temporal data carries the actual neural code rather than the action potential waveform itself. In addition, it is also understood that spike trains are not deterministic: the spikes appear at different locations even when the same stimulus is applied. It has been discovered that the stochastic distribution of the spike timings obeys an inhomogeneous Poisson process whose event rate is the neural firing rate.

Being aware of this, one might propose a generic model that can be trained using point process likelihood methods Myung (2003). These are asymptotically efficient methods, in the sense that the mean square error of the estimate approaches the Cramer-Rao lower bound as the number of data samples increases. Knowing that neural spiking obeys a Poisson process, one can construct a likelihood function from the Poisson probability mass function and use the spike count for its evaluation.

A similar study was performed by DiMattina and Zhang (2011) and DiMattina and Zhang (2013). There, a static feedforward neural network is fitted to discrete neural spiking data. The parameter estimation procedure is based on the maximum a-posteriori (MAP) estimation technique Murphy (2012), which is known as an extension of the maximum likelihood (ML) method. The model targets a stimulus-response relationship and includes very few physical parameters. Though a static neural network model is relatively easy to process, it lacks features such as time dependence, which may make it inadequate to express the behavior of a realistic neuron or neural network. In fact, there is a dynamical version of this network, called the continuous time recurrent neural network (CTRNN) Beer (1995). It includes some of the nonlinear features present in realistic neural networks and also has the advantage of universal approximation capability. In this work, we will use a CTRNN type model.

We can summarize what is to be done in this research as follows:

  1. We will present an approach for fitting a model to discrete neural spiking data.

  2. The study targets testing the efficiency of the algorithms to be used in the estimation of the parameters of the neuron.

  3. The neural spike data will be generated from the firing rate through Poisson process simulation.

  4. The parameters of the model will be estimated using maximum likelihood estimation, with the probability mass function of the inhomogeneous Poisson process used as the likelihood function.

  5. The optimization of the likelihood function is performed through MATLAB's fmincon function.

  6. A comparison of the findings with a study using a different likelihood function Doruk and Zhang (2017) will also be briefly presented.

2 Theoretical Methods

2.1 Model of a Neuron

As with several other physical processes, neurons exhibit highly nonlinear behavior. One can thus express the stimulus-response relationship of a neuron or a neural network as a generic nonlinear system:

\dot{x} = f(x, u), \qquad y = g(x) \qquad (1)

In the above equation, u is the stimulus and y is the response. The state variable x represents the time dependence of the neurons; it may be a membrane potential, a firing rate or a dimensionless variable. (1) can represent a single neuron or the average response of a group of neurons. For a successful representation, it is recommended that at least two state variables exist in the model. In Section 1 we stated that we will model the neuron by a continuous time recurrent neural network. A CTRNN with N neurons can in general be written as:

\tau_i \dot{x}_i = -x_i + \sum_{j=1}^{N} w_{ij}\, g_j(x_j) + b_i\, u(t) \qquad (2)

Here x_i and u are the state and stimulus variables, as in (1). w_ij is a weight parameter representing the synaptic connection between neurons i and j. The term b_i is a weight parameter between the stimulus and neuron i. The function g_j is a soft saturating function relating the firing rate response of the neuron to its dynamical variable x_j. Mathematically it is:

g_i(x_i) = \dfrac{\Gamma_i}{1 + \exp\left( -a_i (x_i - h_i) \right)} \qquad (3)

The above is also called a sigmoid function. Here Γ_i is the maximum firing rate that the neuron can produce, and h_i and a_i are a soft threshold and a slope parameter for the same neuron, respectively. τ_i is a time constant and the major physical parameter here. In this work, we will study a two-neuron CTRNN model with one excitatory and one inhibitory neuron. More accurately, the collective behavior of a group of excitatory and inhibitory neurons is lumped into two units. Mathematically this is:

\tau_E \dot{x}_E = -x_E + w_{EE}\, g_E(x_E) - w_{EI}\, g_I(x_I) + b_E\, u(t)
\tau_I \dot{x}_I = -x_I + w_{IE}\, g_E(x_E) - w_{II}\, g_I(x_I) + b_I\, u(t) \qquad (4)

In the above, the subscripts E and I stand for the excitatory and inhibitory units, τ_E and τ_I are the time constants of the excitatory and inhibitory units respectively, w_EE is the self-excitation weight of the excitatory neuron, w_II is the self-inhibition weight of the inhibitory neuron, w_EI is the synaptic coefficient representing the synapse that inhibits the excitatory neuron, and w_IE is the synaptic coefficient representing the synapse that excites the inhibitory neuron. All these parameters are positive in value; their excitatory or inhibitory character is determined by their signs in (4) (positive for excitatory and negative for inhibitory). b_E and b_I are the coefficients of interaction between the stimulus and the neurons. The functions g_E and g_I are obtained by replacing the subscript i in (3) with E and I. In order to rewrite (4) in the form of (1), we transfer the time constants to the right-hand side as reciprocals:

\dot{x}_E = \beta_E \left[ -x_E + w_{EE}\, g_E(x_E) - w_{EI}\, g_I(x_I) + b_E\, u(t) \right]
\dot{x}_I = \beta_I \left[ -x_I + w_{IE}\, g_E(x_E) - w_{II}\, g_I(x_I) + b_I\, u(t) \right] \qquad (5)

where β_E = 1/τ_E and β_I = 1/τ_I. The stimulus entering the model is u(t). The response is the firing rate of the excitatory neuron, denoted by r(t). Of course, this is not a directly measurable variable, but it reveals itself in the spiking response of the excitatory neuron. Its relationship to the excitatory dynamical variable x_E is:

r(t) = g_E(x_E(t)) \qquad (6)

In the above equations, x_E and x_I are dynamical variables representing the excitatory and inhibitory neurons. They do not have to represent any particular physical quantity or process.
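To make the model concrete, the fragment below shows one way to simulate (3), (5) and (6) in MATLAB with ode45. It is only a minimal sketch under stated assumptions: the packing of the parameter vector theta, the placeholder parameter values and the single-component test stimulus are illustrative choices, not the values used in this study (the sigmoid constants are those of Table 2).

    % Sketch: simulate the two-neuron CTRNN of (5)-(6).
    Ge = @(x) 100 ./ (1 + exp(-0.04*(x - 70)));   % g_E of (3), Table 2 values
    Gi = @(x)  50 ./ (1 + exp(-0.04*(x - 35)));   % g_I of (3), Table 2 values
    u  = @(t) 50*cos(2*pi*3.333*t);               % toy single-component stimulus

    % theta = [betaE betaI wEE wEI wIE wII bE bI] (assumed ordering, cf. (10))
    rhs = @(t,x,th) [th(1)*(-x(1) + th(3)*Ge(x(1)) - th(4)*Gi(x(2)) + th(7)*u(t));
                     th(2)*(-x(2) + th(5)*Ge(x(1)) - th(6)*Gi(x(2)) + th(8)*u(t))];

    theta = ones(1,8);                            % placeholder values only
    [t, x] = ode45(@(t,x) rhs(t,x,theta), [0 3], [0; 0]);
    r = Ge(x(:,1));                               % firing rate response of (6)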

2.2 Neural Spiking, Poisson Random Processes and Likelihood

Neural spiking is the phenomenon that occurs due to successive action potentials in a neural transmission Rieke (1999). One can see an example in Figure 1. In this figure, the instant at which an action potential fires is recorded as '1', and '0' is recorded while the neuron is at rest. The result is like serial data recorded as a binary stream (like an RS-232 serial transmission).

Figure 1: Action potentials and neural spiking: peak points of individual action potentials are taken as ones and the remaining regions correspond to zeros. This requires precise sampling, as a spiking event can occur in a time bin of 1 ms or less.

The transmitted information may be coded in the firing rate of the spikes Kandel et al (2000); Adrian (1926), in their count Forrest (2014a, b), or in their timing Dayan et al (2003); Butts et al (2007); Singh and Levy (2017). The mechanism may differ from neuron to neuron. We also noted in Section 1 that spike trains such as the one shown in Figure 1 are not deterministic and have been found to obey an inhomogeneous Poisson process Shadlen and Newsome (1994).

In order to discuss the statistics of an inhomogeneous Poisson process, one needs to write its probability mass function:

P\left[ N(t_1, t_2) = k \right] = \dfrac{\Lambda^k}{k!}\, e^{-\Lambda} \qquad (7)

The above expression provides the probability that k events occur in the interval (t_1, t_2). This probability depends on a critical parameter called the event rate, λ(t). In a homogeneous Poisson process this parameter is constant; in the inhomogeneous version it is time varying and equivalent to the neural firing rate r(t). Thus we need to define the mean event count Λ in terms of the firing rate:

\Lambda = \int_{t_1}^{t_2} \lambda(t)\, dt = \int_{t_1}^{t_2} r(t)\, dt \qquad (8)

In the neural spiking phenomenon, the number of spikes is equivalent to the event count parameter k. So one can determine the expected number of spikes by simulating the process defined by (7). As the firing rate depends on the state variable x_E, one can also write the following:

r(t, \theta) = g_E\left( x_E(t, \theta) \right) \qquad (9)

In the above, the firing rate is redefined as a function of the model parameters, collected in the vector θ. That is:

\theta = \left[ \beta_E, \beta_I, w_{EE}, w_{EI}, w_{IE}, w_{II}, b_E, b_I \right] \qquad (10)

So we can say that the probability of k spikes occurring in the observation interval [0, T] is given by:

P\left[ N = k \mid \theta \right] = \dfrac{\Lambda(\theta)^k}{k!}\, e^{-\Lambda(\theta)}, \qquad \Lambda(\theta) = \int_0^T r(t, \theta)\, dt \qquad (11)

noting that the firing rate is generated by (5) and (6). The above equation is also the likelihood function for a single trial. Statistically, it is not enough to proceed with a single set of data, and we often need multiple trials. In addition, the stimulus should be different in each trial (so u(t) becomes u_n(t)). So, if we have R different stimuli and the corresponding R different responses, we can write the following:

L(\theta) = \prod_{n=1}^{R} \dfrac{\Lambda_n(\theta)^{k_n}}{k_n!}\, e^{-\Lambda_n(\theta)} \qquad (12)

In the above we have R different stimuli u_n(t) and thus R different responses through the states x_E. If r_n(t, θ) represents the firing rate of trial n, then:

\Lambda_n(\theta) = \int_0^T r_n(t, \theta)\, dt \qquad (13)

(12) gives the joint likelihood of R independent neural spike data collection trials. In the optimization we generally prefer its logarithmic version:

\ln L(\theta) = \sum_{n=1}^{R} \left[ k_n \ln \Lambda_n(\theta) - \Lambda_n(\theta) - \ln(k_n!) \right] \qquad (14)

The term ln(k_n!) may be neglected since it does not depend on θ. One can then write the estimate of the parameter vector θ in (10) as shown below:

\hat{\theta} = \arg\max_{\theta} \ln L(\theta) \qquad (15)

The above optimization problem can be solved by the MATLAB® fmincon function.
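As an illustration, the negative of (14) can be coded and handed to fmincon as sketched below. This is a hedged sketch, not the exact code of this study: simulate_rate is an assumed helper that integrates (5) for the n-th stimulus and returns the firing rate r_n(t, θ) sampled on the time grid tg, k holds the spike counts k_n, and theta0, lb and ub are placeholders for the initial guess and parameter bounds.

    % Negative joint log-likelihood of (14); fmincon minimizes, hence the sign.
    function L = negloglik(theta, k, stimuli, tg)
        L = 0;
        for n = 1:numel(k)
            r = simulate_rate(theta, stimuli{n}, tg); % assumed helper: r_n(t, theta)
            Lambda = trapz(tg, r);                    % Lambda_n of (13)
            L = L - (k(n)*log(Lambda) - Lambda);      % ln(k_n!) omitted (constant in theta)
        end
    end

    % Estimation as in (15), with the interior-point algorithm of Table 3:
    opts = optimoptions('fmincon', 'Algorithm', 'interior-point');
    theta_hat = fmincon(@(th) negloglik(th, k, stimuli, tg), theta0, ...
                        [], [], [], [], lb, ub, [], opts);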

2.3 Simulation of Poisson Processes and Spiking Data

In order to test the methodology of the last section, one has to generate a spike train. The best way to achieve this is to obtain a set of timing events by simulating an inhomogeneous Poisson process (using (7)) as a function of the time-dependent firing rate r(t). This is all the more convenient when the neuron model has very few or no physical parameters. There are a few different methods to simulate an inhomogeneous Poisson process with computational tools such as MATLAB. One feasible method, when one has discrete time bins, is the local Bernoulli approximation Eden (2008). Here one assumes that each spike forms within a very narrow time bin, such as 1 ms. Based on this, one can state the following:

  1. Assume that the firing rate of the neuron is given by r(t).

  2. Suppose that Δt is an interval so small that only one spike can appear in it.

  3. Then the probability of a spike appearing in the interval (t, t + Δt) is approximately r(t)Δt.

  4. The probability that no spike occurs in the same interval is 1 − r(t)Δt.

From the above properties one can develop the following algorithm to generate a neural spike train (a vectorized MATLAB sketch follows the list):

  1. Divide the simulation interval [0, T] into discrete time bins of width Δt. One will then have K = T/Δt equally spaced time bins. Note that Δt is so small that only one spike can fit into a bin.

  2. Suppose that the current time instant is denoted by t_k. In order to test whether a spike occurs at t_k, first generate a uniformly distributed random variable x_k in the range (0, 1). In MATLAB this can be done by x_k = unifrnd(0,1).

  3. If x_k < r(t_k)Δt, fire a spike at t_k.

  4. If x_k ≥ r(t_k)Δt, nothing happens.

  5. The steps up to this point should be repeated for each time bin in [0, T]. One thus has K operations to process, but this is fairly easy thanks to the vectorized computation capabilities of MATLAB.
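A vectorized MATLAB sketch of this procedure is given below; the sinusoidal rate is only a placeholder for the firing rate produced by (5) and (6).

    % Local Bernoulli approximation (Eden, 2008): one test per 1 ms bin.
    dt = 1e-3;                          % bin width, as in Table 3
    tg = 0:dt:3;                        % time grid over [0, T], T = 3 s
    r  = 50 + 30*sin(2*pi*2*tg);        % placeholder rate; use g_E(x_E(t)) in practice
    x  = unifrnd(0, 1, size(tg));       % one uniform draw per bin
    spikes      = x < r*dt;             % fire where x_k < r(t_k)*dt
    spike_times = tg(spikes);           % the generated spike train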

3 Example Application

3.1 Definition of the Problem

In this section, we will present an example to demonstrate the theoretical material of Section 2.2. We will attempt to estimate the parameters of (5) from the collected spike timing information (spike train). The parameters to be estimated are those in (10). We can summarize the goals and procedures as follows (a sketch of the evaluation loop follows the list):

  1. The model in (5) will be simulated using the nominal values of the parameters in Table 1. This provides the true firing rate information that is to be encoded in the spike trains.

  2. The method of Section 2.3 will be used to simulate an inhomogeneous Poisson process to obtain the expected number and timings of the spikes. The event rate is the firing rate obtained in the previous step.

  3. The simulation will be repeated R times to obtain statistically adequate information. Each trial will involve a different stimulus. Information on the numerical values of R is given in Section 3.3. The generation of the stimulus is described in Section 3.2.

  4. After completion of the data collection, one can use the joint likelihood function defined in (14). The optimization problem is solved by the MATLAB fmincon function.

  5. The evaluation will be repeated a few times to see whether the algorithm works efficiently. In particular, the convexity of the problem can only be assessed this way.

  6. The results will be presented as tables revealing the mean values of the estimated parameters together with percent and mean square errors.
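The evaluation loop can be sketched in MATLAB as follows. This is only an outline: estimate_once is an assumed wrapper around one full data-collection and fmincon run of (15), and theta_true stands for the nominal values of Table 1.

    % Repeat the whole estimation cycle and summarize the results.
    Nrep = 25;
    est  = zeros(Nrep, 8);               % 8 parameters, cf. (10)
    for m = 1:Nrep
        est(m,:) = estimate_once();      % assumed wrapper: simulate data + fit
    end
    theta_mean = mean(est, 1);                                % mean estimates
    mse        = mean((est - theta_true).^2, 1);              % per-parameter MSE
    pct_err    = 100*abs(theta_mean - theta_true)./abs(theta_true); % percent error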

Parameter Unit Nominal Value
-
-
-
-
-
-
Table 1: Nominal values of the parameters in (5)

In the current problem it is assumed that g_E and g_I are known; their parameters are given in Table 2.

Parameter Value
Γ_E 100
a_E 0.04
h_E 70
Γ_I 50
a_I 0.04
h_I 35
Table 2: The values of the parameters of the functions g_E and g_I in (3). They are assumed to be known and will not be estimated.

3.2 Formation of the Stimulus

The stimulus is applied as an input to the model. It may represent any physical exciter such as temperature, pressure, or light. It can also be thought of as the firing of a pre-synaptic neuron entering the examined neuron or network as a stimulating input. Mathematically, one has to define the input as a time-dependent variable u(t). Since we are discussing sensory neurons, we may describe the stimulus by a Fourier series:

u(t) = \sum_{m=1}^{M} A \cos\left( 2\pi m f_0 t + \phi_m \right) \qquad (16)

In the above, A is the amplitude, f_0 is the base frequency of the stimulus components, and φ_m is the phase of the m-th component. For multiple trials one can modify the above so that a stimulus is defined for each trial:

u_n(t) = \sum_{m=1}^{M} A \cos\left( 2\pi m f_0 t + \phi_{nm} \right), \qquad n = 1, \ldots, R \qquad (17)

In the above, n = 1, …, R, where R is the number of trials as discussed in Section 3.1. The stimulus is made different in each trial by assigning the phases randomly as uniformly distributed numbers between −π and π. This can be done in MATLAB by phi = unifrnd(-pi,pi). A sketch of the trial-by-trial generation follows.
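The sketch below assumes one of the Table 3 configurations; the values of A, f_0, M and R are interchangeable with the other cases listed there.

    % One random-phase stimulus per trial, as in (17).
    A = 50; f0 = 3.333; M = 5; R = 100;  % one of the Table 3 configurations
    t = 0:1e-3:3;
    U = zeros(R, numel(t));              % row n holds u_n(t)
    for n = 1:R
        phi = unifrnd(-pi, pi, 1, M);    % fresh phases for each trial
        for m = 1:M
            U(n,:) = U(n,:) + A*cos(2*pi*m*f0*t + phi(m));
        end
    end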

Finally, one can observe the behavior of the network for a stimulus of the form (16) with randomly assigned phases. This is presented in Figure 2.

(a) Stimulus
(b) Excitatory Neuron Variable
(c) Inhibitory Neuron Variable
(d) Firing Rate of the Excitatory Neuron
(e) Excitatory Neuron Spike Train
Figure 2: The response of the neural network in (5), with the nominal parameters given in Table 1, to a stimulus of the form (16) with randomly assigned phases.

3.3 Example Scenario

In Table 3 one can see the scenario associated with the example application. For comparison, some of the parameters, such as the amplitude and the sample size, are also tried at different values.

Parameter Notation Value
Duration of Simulation T 3 s
Number of Repeats R 25, 50, 100, 200, 400
Size of Stimulus M 5, 10, 20, 30, 40, 50
Optimization Algorithm N/A Interior-point (MATLAB fmincon)
Number of Parameters size(θ) 8
Stimulus Amplitude A 25, 50, 100
Base Frequency of Stimulus f_0 3.333 Hz
Bin Size Δt 1 ms
Table 3: The details of the scenario of the example application. Some parameters take multiple values, indicating the different cases examined for comparison.

4 Presentation of the Results and Concluding Remarks

4.1 General Evaluation

In this section, one can see the results obtained by solving the maximum likelihood estimation problem for the example presented in Section 3 using the likelihood function defined in (14). The parameters to be estimated are those in (10). The results are presented in two forms: the mean estimation results are given in tabular form (Table 4), whereas the variations of the mean square errors of the estimates against the stimulus component size M and the sample size R are presented separately as graphs (Figures 3 and 4).

For most of the parameters (one excepted), increasing the sample size R leads to a decrease in the mean square error. One can note this in Figure 4 and also, roughly, in Table 4. The stimulus component size M does not seem to produce a definable pattern; it appears that large values of M are unnecessary.

Case R A M Mean estimates of the eight parameters in (10)
1 25 25 5 55.533727 30.213046 0.921087 1.032323 1.732064 2.028027 0.848702 0.32529
2 25 50 5 56.608858 20.548860 1.029538 0.883940 1.349837 2.575930 0.804324 0.49538
3 25 100 5 57.188796 23.585112 1.070947 0.655189 1.517467 3.075277 0.662610 0.31510
4 50 25 5 50.313627 32.285748 0.973964 1.100521 1.699105 1.984025 0.781615 0.33855
5 50 100 5 56.597545 22.221102 1.097007 0.757587 1.361170 2.611427 0.772177 0.46856
6 100 25 5 55.119586 26.456978 0.806735 1.081219 1.609574 1.897204 0.694758 0.37113
7 100 100 5 57.238965 21.655775 0.913813 0.699147 1.370589 2.137143 0.877339 0.41435
8 200 100 5 51.998178 22.484430 0.951887 0.748606 1.260704 1.962527 0.790993 0.47072
9 400 100 5 50.205221 23.943961 0.964450 0.666517 1.298419 2.092121 0.797156 0.54510
10 100 100 10 53.484995 20.802459 1.008136 0.687495 1.268520 2.430492 0.645363 0.382041
11 100 100 20 48.692376 22.146853 1.153791 0.843028 1.206127 2.463636 0.586859 0.400202
12 100 100 30 60.642974 26.017975 0.918924 0.559221 1.602176 2.628880 0.772155 0.429427
13 100 100 40 53.158671 22.334121 1.168865 0.728257 1.416619 3.087837 0.758917 0.380532
14 100 100 50 55.625371 28.878888 1.100684 0.583886 1.556403 2.970991 0.859251 0.449933
Table 4: The results of applying the theory of Section 2.2 to the example problem of Section 3. It is understood that the closeness of the mean values to the nominal parameters in Table 1 depends mainly on R. Concerning the influence of the stimulus component size M, one cannot say that there is a definite pattern. This is aligned with the general properties of likelihood estimation.
Figure 3: The variation of the mean square error (MSE) of each parameter against the stimulus component size M.
Figure 4: The variation of the mean square error (MSE) of each parameter against the sample (repeat) size R.

4.2 Comparison with Another Likelihood Function

In this section we compare the results obtained in this research with those of a similar study Doruk and Zhang (2017) that uses a different likelihood function, one depending on the temporal locations of the individual spikes (denoted as a set by {t_nj}) instead of just their count:

L(\theta) = \prod_{n=1}^{R} \left[ \prod_{j=1}^{k_n} r_n(t_{nj}, \theta) \right] \exp\left( -\int_0^T r_n(t, \theta)\, dt \right) \qquad (18)

In the compared study, the model and its parameters are the same as those of (5), and the stimulus is also given by (17). Applying the same scenario as in Table 3 reveals that the approach of Doruk and Zhang (2017) yields much smaller estimation mean square errors for the same sample size R. Moreover, the estimation errors obtained there at small values of R are already smaller than those obtained at large values of R in this study. The principal reason behind this result should be the inclusion of the temporal locations of the individual spikes from the collected spike trains in the likelihood function (18). Of course, this advantage demands more computational effort and memory than the approach of this research; thus, depending on the resources, one can prefer either the current work or Doruk and Zhang (2017). However, the covariance of the estimation error seems much higher in the application of this paper.
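For reference, the per-trial log form of (18) can be evaluated as in the hedged one-liner below; r is the simulated rate on the time grid tg, ts holds the recorded spike times of that trial, and interp1 simply reads the rate off the grid at those times.

    % Log form of (18) for a single trial: the spike times enter through
    % log r(t), and the integral term is the same Lambda_n as in (13).
    logL = sum(log(interp1(tg, r, ts))) - trapz(tg, r);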

Footnotes

  1. email: resat.doruk@atilim.edu.tr

References

  1. Adrian ED (1926) The impulses produced by sensory nerve endings. The Journal of physiology 61(1):49–72
  2. Beer RD (1995) On the dynamics of small continuous-time recurrent neural networks. Adaptive Behavior 3(4):469–509
  3. Butts DA, Weng C, Jin J, Yeh CI, Lesica NA, Alonso JM, Stanley GB (2007) Temporal precision in the neural code and the timescales of natural vision. Nature 449(7158):92–95
  4. Dayan P, Abbott L, et al (2003) Theoretical neuroscience: computational and mathematical modeling of neural systems. Journal of Cognitive Neuroscience 15(1):154–155
  5. DiMattina C, Zhang K (2011) Active data collection for efficient estimation and comparison of nonlinear neural models. Neural computation 23(9):2242–2288
  6. DiMattina C, Zhang K (2013) Adaptive stimulus optimization for sensory systems neuroscience. Frontiers in neural circuits 7
  7. Doruk RO, Zhang K (2017) Fitting of dynamic recurrent neural network models to sensory stimulus-response data. arXiv preprint arXiv:1709.09541
  8. Eden U (2008) Point process models for neural spike trains. Neural Signal Processing: Quantitative Analysis of Neural Activity Washington, DC: Society for Neuroscience
  9. FitzHugh R (1961) Impulses and physiological states in theoretical models of nerve membrane. Biophysical journal 1(6):445–466
  10. Forrest MD (2014a) Intracellular calcium dynamics permit a purkinje neuron model to perform toggle and gain computations upon its inputs. Frontiers in computational neuroscience 8
  11. Forrest MD (2014b) The sodium-potassium pump is an information processing element in brain computation. Frontiers in physiology 5
  12. Hindmarsh J, Rose R (1982) A model of the nerve impulse using two first-order differential equations. Nature 296(5853):162–164
  13. Hindmarsh JL, Rose R (1984) A model of neuronal bursting using three coupled first order differential equations. Proceedings of the royal society of London B: biological sciences 221(1222):87–102
  14. Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of physiology 117(4):500–544
  15. Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ, et al (2000) Principles of neural science, vol 4. McGraw-Hill, New York
  16. Morris C, Lecar H (1981) Voltage oscillations in the barnacle giant muscle fiber. Biophysical journal 35(1):193–213
  17. Murphy KP (2012) Machine learning: a probabilistic perspective. MIT press
  18. Myung IJ (2003) Tutorial on maximum likelihood estimation. Journal of Mathematical Psychology 47(1):90–100
  19. Pehlivan F (2009) Biyofizik. Hacettepe Taş Kitapçılık Limited Şti.
  20. Rieke F (1999) Spikes: exploring the neural code. MIT Press, Cambridge, MA
  21. Shadlen MN, Newsome WT (1994) Noise, neural codes and cortical organization. Current opinion in neurobiology 4(4):569–579
  22. Singh C, Levy WB (2017) A consensus layer v pyramidal neuron can sustain interpulse-interval coding. PloS one 12(7):e0180839