Bifurcation Spiking Neural Network
Abstract
Spiking neural networks (SNNs) have attracted much attention due to their great potential for modeling time-dependent signals. The firing rate of spiking neurons is governed by the control rate, which is fixed manually in advance; thus, whether the firing rate is adequate for modeling an actual time series is largely a matter of luck. Although an adaptive control rate is desirable, achieving one is a nontrivial task because the control rate and the connection weights learned during training are usually entangled. In this paper, we show that the firing rate is related to the eigenvalue of the spike generation function. Inspired by this insight, by enabling the spike generation function to have adaptable eigenvalues rather than parametric control rates, we develop the Bifurcation Spiking Neural Network (BSNN), which has an adaptive firing rate and is insensitive to the setting of control rates. Experiments validate the effectiveness of BSNN on a broad range of tasks, showing that BSNN achieves superior performance to existing SNNs and is robust to the setting of control rates.
Keywords: Spiking Neural Network; Firing Rate; Control Rate; Eigenvalues; Bifurcation
1 Introduction
Spiking neural networks (SNNs) take into account the timing of spike firing rather than simply relying on accumulated signal strength as conventional neural networks do, and thus offer the possibility of modeling time-dependent data series Gerstner and Kistler (2002); VanRullen et al. (2005). The firing rate of spiking neurons is therefore arguably the most important measure for characterizing the ability of SNNs to model time series Barrett et al. (2013); Chou et al. (2019). The firing rate is dominated by many factors, such as the neural input, the connection weights, and the control rates of the spike generation function, and it is sensitive to the setting of these factors, especially the control rate. In previous works, the control rate is usually pregiven and fixed, which makes the learning performance of SNNs dependent on a careful tuning of this hyperparameter.
However, achieving an adaptive firing rate with respect to the control rate is a tricky task, since the control rate and the connection weights are entangled during the training process, so the approaches for learning hyperparameters in conventional neural networks cannot be directly applied. An alternative is to sample the control rates from a predefined distribution and find the optimal ones by alternating optimization. Nevertheless, whether this method succeeds usually depends on an apposite distribution setting, and it incurs larger computation and storage costs.
In this paper, we propose the Bifurcation Spiking Neural Network (BSNN) for achieving adaptive firing rates. We first show that the firing rate of spiking neurons is related to the eigenvalues of spike generation functions. Then, by exploiting bifurcation theory, we convert the issue of parameterizing the control rates into a new problem of learning apposite eigenvalues. BSNN thus not only tackles the challenge that control rates interact with connection weights, leading to a robust setting of control rates, but also works with considerably less computation and storage than the alternating optimization approaches. Experiments conducted on a delayed-memory XOR task and 3 benchmark datasets demonstrate the effectiveness of BSNN, showing that its performance not only surpasses existing SNNs but is also robust to the setting of control rates.
The rest of this paper is organized as follows. We first review preliminary knowledge about spiking neural models in Section 2, and then reveal the close-knit relation between firing rates and the eigenvalues of spike generation functions in Section 3. In Section 4, we formally introduce BSNN and present a concrete approach for implementing it with a multilayer architecture. Experiments are reported in Section 5. Finally, we conclude in Section 6.
2 Spiking Neural Model
The leaky integrate-and-fire (LIF) neuron is probably one of the simplest spike generation functions, but it is still very popular due to the ease with which it can be analyzed and simulated Hunsberger and Eliasmith (2015). Here, we review a general form of the LIF equation, which, with $n$-dimensional input signals and a rest voltage $u_{\text{rest}}$, reads as follows:
$$\tau \frac{\mathrm{d}u_i(t)}{\mathrm{d}t} = \beta_i\, u_i(t) + \sum_{j=1}^{n} w_{ij}\, X_j(t), \tag{1}$$
where $\tau$ is the membrane time constant, $u_i(t)$ represents the membrane potential of the $i$-th neuron at time $t$, $w_{ij}$ is the corresponding connection weight, and $\beta_i$ denotes the control rate of the $i$-th neuron, which is usually preset to a fixed value in existing SNNs.
In particular, the eigenvalue $\lambda_i$ of the $i$-th LIF neuron equals the quotient of $\beta_i$ and $\tau$, obtained by solving the following algebraic formulation: $\tau \lambda - \beta_i = 0$, i.e., $\lambda_i = \beta_i / \tau$.
Supplemental material on the eigenvalues and the algebraic formulation of an ODE dynamic system is provided in Appendix A.
Based on the Spike Response Model (SRM) scheme Gerstner (1995), the LIF equation has a general solution with $u_i(\hat{t}_i) = u_{\text{rest}}$ as follows:
$$u_i(t) = u_{\text{rest}}\, e^{\beta_i (t - \hat{t}_i)/\tau} + \frac{1}{\tau} \sum_{j=1}^{n} w_{ij} \int_{\hat{t}_i}^{t} e^{\beta_i (t - s)/\tau} X_j(s)\, \mathrm{d}s, \tag{2}$$
where $\hat{t}_i$ denotes the last firing time of the $i$-th neuron.
The LIF model above describes a resistor-capacitor circuit in which the spiking neuron is activated only when the membrane potential reaches a certain threshold $u_{\text{th}}$ (the firing threshold). After firing, the membrane potential is instantaneously reset to a lower value $u_{\text{rest}}$ (the rest voltage). Formally, we can employ a spike excitation function to formulate this procedure: $s_i(t) = H\big(u_i(t) - u_{\text{th}}\big)$, where $H$ is the Heaviside step function.
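To make the leak-integrate-threshold-reset cycle concrete, the following is a minimal sketch of a single LIF neuron simulated by forward-Euler integration. The constants (tau = 10, beta = -1, threshold 1.0) are illustrative choices, not values prescribed here:

```python
import numpy as np

def simulate_lif(inputs, w, tau=10.0, beta=-1.0, u_th=1.0, u_rest=0.0, dt=1.0):
    """Forward-Euler simulation of one LIF neuron:
    tau * du/dt = beta * u + w . x(t).
    A spike is emitted and u is reset to u_rest whenever u >= u_th."""
    u = u_rest
    spikes = []
    for x in inputs:                     # x: input vector at one time step
        u = u + dt * (beta * u + np.dot(w, x)) / tau
        if u >= u_th:                    # threshold crossing: spike and reset
            spikes.append(1)
            u = u_rest
        else:
            spikes.append(0)
    return np.array(spikes)
```

With strong constant drive the neuron fires periodically; with drive too weak to push the fixed point above threshold it never fires at all, which previews the inactivated regime discussed for the LIF model.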
Neural Encoding
In SNNs, input signals are usually pre-converted into a spiking version, that is, encoded by a Poisson distribution or recorded by a Dynamic Vision Sensor (DVS) Quiroga et al. (2005); Anumula et al. (2018). Formally speaking, a period of input received by an input channel can be regarded as a sample from a Poisson process with an underlying rate parameter.
By exploiting the spike generation function, the input spike train is integrated into the presynaptic membrane potential. Ideally, without the "leaky" term (i.e., with a zero control rate), the integrated voltage dynamics in the presynapse would obey a standard Poisson process. For the popular setting of a negative control rate, it is obvious that the energy of the voltage trains is less than that of the standard Poisson process. Furthermore, this "leaky" Poisson process can be regarded as an integration of some underlying distribution; in other words, leaky integration is equivalent to integrating a "leaky" distribution. For a more general case, it is not hard to infer that a spike generation function with a given control rate converts a regularized Poisson distribution into a reproducing distribution with equal expectation and standard deviation, and different control rates lead to different temporal reproducing representations. Figure 1 gives a vivid illustration of this procedure.
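A minimal sketch of this encoding-and-integration pipeline, using Bernoulli sampling as a discrete-time stand-in for the Poisson process (all constants are illustrative):

```python
import numpy as np

def poisson_encode(intensity, n_steps, seed=0):
    """Rate-code an intensity in [0, 1]: at each time step a spike occurs
    with probability `intensity` (discrete-time Poisson/Bernoulli train)."""
    rng = np.random.default_rng(seed)
    return (rng.random(n_steps) < intensity).astype(int)

def leaky_integrate(spikes, beta=-1.0, tau=10.0, dt=1.0):
    """Integrate a spike train with a leak; beta < 0 damps the accumulated
    energy, so the trace stays far below the plain cumulative spike count."""
    u, trace = 0.0, []
    for s in spikes:
        u = u + dt * (beta * u / tau + s)
        trace.append(u)
    return np.array(trace)
```

The leaky trace saturates near a level set by the input rate and the leak, rather than growing like the raw spike count, which is the "leaky Poisson process" behavior described above.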
3 Firing Rate of Spiking Neurons
The firing rate of spiking neurons is a significant indicator of the representation ability of SNNs. Here, we investigate the role of firing rates in SNNs through a simple reproducing representation experiment, and then show the importance of eigenvalues for the firing rate.
For simplicity, we take a feedforward SNN with one input channel, one hidden layer of spiking neurons, and one output spiking neuron as an example. For a period of input, the spike trains transmitted between layers abide by the following procedure:
where the two sets of symbols indicate the eigenvalues of the hidden spiking neurons and of the output neuron, respectively, together with the corresponding connection weights. Merging the two formulas above and approximating away the spike excitation function (which never causes any energy wastage), the total spike count fired from the output neuron during the period becomes:
So we can calculate the firing rate accordingly. Suppose the inputs are sampled from a Poisson process; then the firing rate obeys a parametric stochastic process with learnable connection weights:
(3) 
Next, we verify whether the firing rate described in Equation 3 is adaptive, that is, whether the stochastic process induced by Equation 3, given sufficiently many neurons, can approximate any Poisson process. Let the target be a Poisson process; the aforementioned approximation issue is then equivalent to solving the following implicit equations:
(4) 
So the issue of achieving adaptive firing rates is converted into the problem of solving these implicit equations. Equation 4 comprises two parts: a quasi-linear equation (the first one) with respect to the eigenvalues, and a compound equation (the second one) related to the connection weights. Due to the exponential operation, these two equations do not conflict with each other. In neural networks, it is not difficult to solve the compound equation to obtain a group of apposite connection weights. So next, we discuss the solution of the quasi-linear one.
If we preset all eigenvalues to a uniform constant, as existing SNNs usually do, the firing rate of spiking neurons is non-adaptive, leading to a limited representation ability. For example, with all eigenvalues fixed to the same constant, the stochastic process
depends only on the learnable connection weights and cannot even reproduce the raw input process. Furthermore, if we merely force the eigenvalues of the spike generation functions to be equal, the representation ability of SNNs is still limited according to the rewritten quasi-linear equation:
where "2" denotes the number of spiking layers. So the eigenvalues of spike generation functions need to be diverse.
For a more complex case, consider an SNN with multiple input channels and a layer of hidden spiking neurons, where each input channel receives a sequence of spikes sampled from its own Poisson process; then the parametric energy process in Equation 3 becomes:
(5) 
Equation 5 suggests that the parametric eigenvalues lead to a basis function in the SNN, so the roles of the eigenvalues and the learnable connection weights are clearly distinguished. This means the parametric eigenvalues play an important and irreplaceable role in SNNs and cannot be subsumed by the learnable connection weights.
In summary, the eigenvalues of the LIF function indeed have a great and sensitive influence on achieving an adaptive firing rate in SNNs, and their role cannot be replaced by connection weights. Conversely, both presetting all eigenvalues to a fixed constant and employing unified eigenvalues across the network impede the performance of SNNs, leaving it to luck whether the firing rate is adequate for modeling an actual time series.
3.1 Approaches for Parameterizing Control Rates
An intuitive idea for achieving an adaptive firing rate in SNNs is to parameterize the control rates, since the control rate of the LIF model is equivalent to its eigenvalue. However, training SNNs with parametric control rates rather than the original fixed constants is a brand-new challenge, to which the experience of training hyperparameters in neural networks can hardly be transferred. The difficulty is twofold. (1) Existing SNNs are almost all trained based on the SRM scheme. This leads to the membrane potential in Equation 2 being dominated by the product of connection weights and control rates, which is very hard to optimize by simply applying gradient-based methods. (2) The roles of control rates and connection weights are distinct during training: the control rate is convolved with the received spikes aggregated by the connection weights, so the spike errors caused by control rates spread temporally, while connection weights only transmit errors between layers. In sum, training an SNN with parametric control rates is a very tricky challenge for conventional approaches.
An alternative approach to alleviate this issue is to employ alternating optimization for estimating the control-rate hyperparameters. The key idea is to regard the control rates as a group of hyperparameters generated from a prior distribution, so that solving for each set of learnable variables (connection weights or control rates) reduces to well-known methods. In general, this optimization procedure is as follows:
1. Sample control rates: sample a group of control rates from a pregiven distribution, such as a uniform distribution; spikes then spread according to the network dynamics with the sampled control rates fixed.
2. Update connection weights: how to update the connection weights with the control rates fixed depends on the choice of error-propagation technique. Here, we employ a seminal work, SLAYER Shrestha and Orchard (2018), as the basic model.
3. Update control rates: we solve for the control rates that best fit the supervised signals. Fast algorithms, such as alternating coordinate descent, can then be applied directly to find a collection of apposite control rates.
Obviously, approaches based on alternating optimization place larger demands on computation and storage, and usually converge slowly in neural networks.
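The three steps above can be sketched as a generic sample-then-train loop; `train_step` and `evaluate` below are hypothetical stand-ins for a full SNN trainer and validator such as SLAYER:

```python
import numpy as np

def alternating_optimization(train_step, evaluate, n_rounds=5, n_samples=8,
                             low=0.1, high=1.0, rng=None):
    """Sketch of the alternating scheme: sample candidate control rates
    from a uniform prior, train weights with each candidate held fixed,
    then keep the best-scoring candidate.  `train_step(beta)` fits the
    weights, `evaluate(beta, weights)` scores the resulting model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_beta, best_score = None, -np.inf
    for _ in range(n_rounds):
        for beta in rng.uniform(low, high, n_samples):
            weights = train_step(beta)        # inner loop: weights, beta fixed
            score = evaluate(beta, weights)   # outer loop: score this beta
            if score > best_score:
                best_beta, best_score = beta, score
    return best_beta, best_score
```

Note that every candidate control rate requires its own full training run, which is exactly why this family of methods is expensive in computation and storage.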
4 Bifurcation Spiking Neural Networks
In this section, we introduce BSNN for achieving adaptive firing rates in SNNs. The core idea of BSNN is to separate the eigenvalues of the spike generation functions from the control rates, in contrast to the LIF mechanism, in which the eigenvalue of the LIF model equals its control rate. (1) By enabling the spike generation function to have adaptable eigenvalues, BSNN can possess an adaptive firing rate. (2) Since the new parameters for achieving adaptable eigenvalues are independent of the connection weights, the issue caused by entangled control rates and connection weights disappears by construction. Experiments further demonstrate that the performance of BSNN is insensitive to the setting of control rates.
4.1 Basic Bifurcation Neurons
Bifurcation theory is the mathematical study of qualitative changes in dynamical systems: a bifurcation occurs when a small smooth change of parameter values (often the bifurcation hyperparameters passing through a critical point) causes a sudden topological change in behavior Onuki (2002); Kuznetsov (2013). Our central mechanism is to exploit the mutual promotion between the equations of the dynamical system to achieve diverse eigenvalues.
Inspired by this recognition, we propose the bifurcation neuron model:
$$\tau \frac{\mathrm{d}u_i(t)}{\mathrm{d}t} = \beta\, u_i(t) + \sigma_i\big(\mathbf{u}(t)\big) + \sum_{j} w_{ij}\, X_j(t), \tag{6}$$
where $\beta$ is the control rate and the coefficients of $\sigma_i$ are the bifurcation hyperparameters. The term $\sigma_i(\mathbf{u})$ portrays the mutual promotion between neurons; for simplicity, we denote the mutual promotion of the $i$-th neuron as $\sigma_i(\mathbf{u}) = \sum_{k \neq i} a_{ik}\, u_k + o(\|\mathbf{u}\|)$, where $o(\|\mathbf{u}\|)$ denotes the high-order term. Equation 6 can then be rewritten as:
$$\tau \frac{\mathrm{d}u_i(t)}{\mathrm{d}t} = \beta\, u_i(t) + \sum_{k \neq i} a_{ik}\, u_k(t) + \sum_{j} w_{ij}\, X_j(t) + o(\|\mathbf{u}\|).$$
As we can see, the basic building block of BSNN is a system of equations over a cluster of spiking neurons. Regarding this cluster as a spiking layer and reusing Equation 6 layer by layer, we can establish a feedforward multilayer architecture, as shown in Figure 2.
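As a sketch of how such a coupled layer can be simulated, the snippet below assumes the mutual promotion enters as a linear coupling matrix A (a simplifying assumption, dropping the high-order term); the layer's effective eigenvalues are then those of (beta*I + A)/tau rather than beta/tau alone:

```python
import numpy as np

def bifurcation_layer_step(u, x, W, A, beta=-1.0, tau=10.0,
                           u_th=1.0, u_rest=0.0, dt=1.0):
    """One Euler step of a coupled spiking layer: besides its own leak
    beta * u_i and the input W @ x, each neuron receives a linear
    coupling A @ u from its neighbours, so the layer's effective
    eigenvalues are those of (beta * I + A) / tau, not beta / tau."""
    u = u + dt * (beta * u + A @ u + W @ x) / tau
    spikes = (u >= u_th).astype(float)
    u = np.where(spikes > 0, u_rest, u)   # reset the neurons that fired
    return u, spikes
```

For a symmetric coupling of 0.5 between two neurons with beta = -1, the effective spectrum already splits into two distinct values (-0.5 and -1.5), so the two neurons no longer share one fixed leak rate.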
4.2 Adaptive Firing Rate
To ensure that BSNN has an adaptive firing rate, we need to verify whether the stochastic process produced by BSNN is able to approximate any Poisson process. Similar to the analysis of Equation 4, we have an equivalent solution:
(7) 
where the two sets of symbols denote the eigenvalues related to the hidden spiking neurons and to the output neuron, respectively. It is worth noting that the output neuron has no mutual-promotion term, since there is only one output neuron (without adjacent neurons). Obviously, with flexible eigenvalues, Equation 7 has nontrivial solutions with extra degrees of freedom. We can thus declare that BSNN has nontrivial solutions for achieving the adaptive firing rate.
Theorem 1
If the bifurcation hyperparameters are all greater than 0, Equation 6 admits a bounded number of bifurcation solutions.
Proof.
The logic flow of Theorem 1 can be roughly proved by the following steps. First, find the characteristic roots of the proposed BSNN model. According to Equation 6, its algebraic representation has a system matrix that decomposes into a diagonal part $\beta I$ (contributed by the control rate) and a coupling part $A = (a_{ik})$ (contributed by the bifurcation hyperparameters). Suppose the eigenvalues of the coupling matrix $A$ are $\mu_1, \dots, \mu_m$. Since $\beta I$ commutes with $A$, each eigenvalue of the full system can be represented as the sum of $\beta$ and one of the $\mu_k$.
Next, we elucidate the bifurcation solutions with respect to the eigenvalues. For simplicity, we take the 2-neuron model as an example, whose system matrix is
$$\begin{pmatrix} \beta & a_{12} \\ a_{21} & \beta \end{pmatrix}.$$
Let $\Delta = a_{12} a_{21}$; then, when $\Delta > 0$, the system has two real eigenvalues:
$$\lambda_{+} = \beta + \sqrt{\Delta} \quad \text{and} \quad \lambda_{-} = \beta - \sqrt{\Delta}.$$
Obviously, $\lambda_{-}$ must be less than zero, but this is not necessary for $\lambda_{+}$. Let $\Delta^{*} = \beta^{2}$ be the critical threshold; then the bifurcation solutions of Equation 6 are dominated by the pair of bifurcation eigenvalues $(\lambda_{+}, \lambda_{-})$.
Merging $\lambda_{+}$ and $\lambda_{-}$ into Equation 7 yields the corresponding solution of Equation 7. So as long as the product of $a_{12}$ and $a_{21}$ is greater than 0, there exists at least one nontrivial solution of Equation 7 in BSNN. Hence the existence of bifurcation solutions is equivalent to the existence of nontrivial solutions of Equation 7; one pair of bifurcation solutions induces a group of apposite eigenvalues for achieving adaptive firing rates. In particular, when $\Delta < \Delta^{*}$, both neurons work in a "leaky" mode, and weaker signals hinder neuron excitation. When $\Delta > \Delta^{*}$, a new bifurcation phenomenon occurs: one neuron still works in a "leaky" mode, filtering weaker signals, while the other becomes frequently active. Generally, for the case of $m$ neurons, Equation 6 possesses a bounded number of bifurcation solutions. ∎
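The 2-neuron case can be checked numerically; the helper below is illustrative, using the system matrix [[beta, a12], [a21, beta]] from the proof sketch:

```python
import numpy as np

def bifurcation_eigenvalues(beta, a12, a21):
    """Eigenvalues of the 2-neuron system matrix [[beta, a12], [a21, beta]].
    When a12 * a21 > 0 they split into the real pair beta +/- sqrt(a12*a21);
    the '+' branch can cross zero and turn one neuron persistently active,
    while for a12 * a21 < 0 the pair is complex and no real split occurs."""
    return np.linalg.eigvals(np.array([[beta, a12], [a21, beta]]))
```

With beta = -1 and weak coupling (0.5, 0.5) both eigenvalues stay negative (both neurons leaky); with strong coupling (2, 2) one eigenvalue becomes positive, reproducing the bifurcation into one leaky and one active neuron.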
On the basis of Theorem 1, the eigenvalues of Equation 6 are dominated by the bifurcation hyperparameters. So we can convert the issue of achieving adaptive firing rates into the problem of calculating these bifurcation hyperparameters. The learning procedure is implemented in the next subsection.
4.3 Implementation
Consider a feedforward BSNN with presynaptic input channels and a layer of spiking neurons, and approximate the mutual promotion from the $k$-th neuron to the $i$-th neuron as being caused by the last spike of neuron $k$, fired at time $\hat{t}_k$ ($k \neq i$). Then, for the $i$-th neuron, we have
$$\tau \frac{\mathrm{d}u_i(t)}{\mathrm{d}t} = \beta\, u_i(t) + \sum_{k \neq i} a_{ik}\, u_k(\hat{t}_k) + \sum_{j} w_{ij}\, X_j(t). \tag{8}$$
In Equation 8, the bifurcation hyperparameters are independent of the connection weights, thus avoiding the problem of parameter entanglement.
Akin to the Spike Response Model (SRM) Gerstner (1995), Equation 8 has a closed-form solution:
(9) 
where
(10) 
By employing the spike excitation function, the bifurcation spiking neurons can generate spikes for the next layer.
Error Backpropagation in BSNN
BSNN with supervised signals can also be optimized via error backpropagation. First, we denote the input spike train to a neuron in the following general form Huh and Sejnowski (2018):
$$X_j(t) = \sum_{f} \delta\big(t - t_j^{(f)}\big),$$
where $t_j^{(f)}$ is the spike time of the $f$-th input and $\delta$ is the corresponding Dirac-delta function.
Then we sum up the loss with respect to the target supervised signal over the time interval:
(11) 
So, at each time step, we have
(12) 
As shown in Figure 2, the first term of Equation 12 represents the error backpropagation through the excitatory neurons, while the third term backpropagates the basic bifurcation neuron error. Plugging Equation 9 and Equation 11 into Equation 12, the gradient term can be calculated as:
where
However, the derivative of the spike excitation function is always a problem for training SNNs with supervised signals. Recently, many seminal approaches have emerged for addressing this problem; in this paper, we directly employ the result of Shrestha and Orchard (2018).
Therefore, we obtain the backpropagation pipeline related to the connection weights:
Similar to the error-backpropagation process with respect to the connection weights, the correction formula with respect to the bifurcation hyperparameters is given by:
In general, we can also add a learning rate to help convergence, just like most deep artificial neural networks.
Here, BSNN is implemented by an extended BP algorithm. Compared with existing SNNs, BSNN only needs to calculate one more set of gradients, namely those with respect to the bifurcation hyperparameters, during feedback. Recording these quantities causes no additional storage, because the membrane potential values of each spiking neuron are intrinsically needed during the gradient calculation, as shown in Equation 12. So both the computation and the storage of BSNN are considerably less than those of the alternating optimization approaches.
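To illustrate the claim that only one extra set of gradients is needed, here is a toy rate-based surrogate (not the spike-level rule derived above): we take the steady state of the linearized coupled layer and differentiate a squared error with respect to both the connection weights W and the coupling (bifurcation) parameters A. All names and constants are illustrative:

```python
import numpy as np

def bsnn_surrogate_grads(W, A, x, target, beta=-1.0):
    """Gradients of a toy rate-based surrogate of a coupled layer.

    The steady state of tau * du/dt = (beta*I + A) u + W x is
    u* = -(beta*I + A)^{-1} W x; we differentiate the squared error
    ||u* - target||^2 with respect to both W and A.  The gradient
    with respect to A is the single extra set of gradients."""
    M = beta * np.eye(len(A)) + A
    Minv = np.linalg.inv(M)
    u = -Minv @ W @ x                     # steady-state response
    e = u - target                        # error signal
    gW = -Minv.T @ np.outer(2 * e, x)     # d loss / d W
    gA = -Minv.T @ np.outer(2 * e, u)     # d loss / d A (the extra set)
    return u, gW, gA
```

Both gradients reuse the same matrix inverse and the same membrane response u, mirroring the observation that no extra storage beyond the membrane potentials is required.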
5 Experiments
In this section, we conduct experiments on several tasks to evaluate the functional performance of BSNN. The experiments are designed to answer the following questions:
1. Is the performance of BSNN comparable with state-of-the-art SNNs?
2. Does the performance of BSNN surpass that of alternating optimization, especially in terms of accuracy and efficiency?
3. Is the performance of BSNN robust to the control rate, and under which conditions?
5.1 Delayedmemory XOR Task
We first consider a delayed-memory XOR task, which performs the XOR operation on input history stored over an extended duration Abbott et al. (2016). Specifically, the network receives two binary pulse signals, + or -, through an input channel and a go-cue channel. When the network receives two input pulses between two go-cue pulses, it should output the XOR of the two inputs; in other words, it outputs a positive signal if the input pulses have equal signs (+ + or - -), and a negative signal if they have opposite signs (+ - or - +). If there is only one input pulse between two go-cue pulses, the network should generate a null output.
Based on the above, we simulated a delayed-memory XOR dataset, which consists of 2400 input signals with 300 pulses, 2400 go-cue signals with 200 pulses, and the corresponding output signals. We train the networks on the first 2160 signals and predict the output signals of the last 240.
Figure 3 displays the performance of a traditional SNN with a fixed control rate (i.e., a static-LIF SNN) and of BSNN on the delayed-memory XOR task. BSNN consistently produces the correct outputs, whereas the static-LIF SNN frequently makes mistakes because it cannot distinguish the roles of the different channel signals. These comparative results confirm that BSNN can perform nonlinear computations over an extended time.
5.2 Benchmark Tasks
We also test the performance of BSNN on 3 benchmark datasets. Limited by space, we only provide the core information on the datasets, preprocessing, and contenders here; a detailed introduction to this benchmark experiment is offered in the Appendix.
Datasets: (1) the MNIST handwritten digit dataset, (2) the Neuromorphic-MNIST (NMNIST) dataset, and (3) the Fashion-MNIST dataset.
Preprocessing: The preprocessing steps are the same as those in Pillow et al. (2005); Anumula et al. (2018): each static image of (1) MNIST and (3) Fashion-MNIST is converted into a spike train using Poisson encoding, while each example in NMNIST is encoded by a Dynamic Audio / Vision Sensor (DAS / DVS).
Contenders: We employ 2 types of contenders to compete with the proposed BSNN: (1) several state-of-the-art SNNs with the SRM structure, and (2) the alternating optimization algorithms described in Section 3. In this work, all SNN models are without any convolution term, and the alternating optimization algorithms pre-sample a group of control rates from two pregiven uniform distributions. For these image classification tasks, the output layer contains 10 spiking neurons, corresponding to the classification labels, and the output label of an SNN is the one with the greatest spike count.
| Datasets | Contenders | Accuracy (%) | Setting | Control Rate | Epochs |
|---|---|---|---|---|---|
| MNIST | Deep SNN O'Connor and Welling (2016) | 97.80 | 28x28-300-300-10 | - | 50 |
| | Deep SNN-BP Lee et al. (2016) | 98.71 | 28x28-800-10 | - | 200 |
| | SNN-EP | 97.63 | 28x28-500-10 | - | 25 |
| | HM2-BP Jin et al. (2018) | 98.84 ± 0.02 | 28x28-800-10 | - | 100 |
| | SLAYER Shrestha and Orchard (2018) | 98.39 ± 0.04 | 28x28-500-500-10 | - | 50 |
| | SLAYER (AO-1) | 98.53 ± 0.03 | 28x28-500-500-10 | - | - |
| | SLAYER (AO-2) | 98.59 ± 0.01 | 28x28-500-500-10 | - | - |
| | BSNN (this work) | 99.02 ± 0.04 | 28x28-500-500-10 | 0.21 | 50 |
| NMNIST | SKIM Cohen et al. (2016) | 92.87 | 2x28x28-10000-10 | - | - |
| | Deep SNN-BP | 98.78 | 2x28x28-800-10 | - | 200 |
| | HM2-BP | 98.84 ± 0.02 | 2x28x28-800-10 | - | 60 |
| | SLAYER | 98.89 ± 0.06 | 2x28x28-500-500-10 | - | 50 |
| | SLAYER (AO-1) | 99.01 ± 0.01 | 2x28x28-500-500-10 | - | - |
| | SLAYER (AO-2) | 99.07 ± 0.02 | 2x28x28-500-500-10 | - | - |
| | BSNN (this work) | 99.24 ± 0.12 | 2x28x28-500-500-10 | 0.49 | 50 |
| Fashion-MNIST | HM2-BP | 88.99 | 28x28-400-400-10 | - | 15 |
| | SLAYER | 88.61 ± 0.17 | 28x28-500-500-10 | - | 50 |
| | SLAYER (AO-1) | 90.53 ± 0.04 | 28x28-500-500-10 | - | - |
| | SLAYER (AO-2) | 90.61 ± 0.02 | 28x28-500-500-10 | - | - |
| | ST-RSBP Zhang and Li (2019) | 90.00 ± 0.13 | 28x28-400-R400-10 | - | 30 |
| | BSNN (this work) | 91.22 ± 0.06 | 28x28-500-500-10 | 0.32 | 50 |

Notes:
- 300-300 denotes two hidden layers with 300 spiking neurons each, while 800 denotes one hidden layer with 800 spiking neurons.
- SNN-EP O'Connor et al. (2019) is an implementation for training SNNs with equilibrium propagation.
- SLAYER (AO-1) and SLAYER (AO-2) indicate the alternating optimization algorithms with parametric control rates sampled from two pregiven uniform distributions.
- R400 represents a recurrent layer of 400 spiking neurons.
The experimental results are shown in Table 1, which lists the comparative performance (accuracy) and configurations (setting and epochs) of the contenders and BSNN on the 3 datasets. As we can see, BSNN performs best against the competing approaches, achieving very strong testing accuracy (more than 99% on MNIST, around 99.24% on NMNIST, and more than 91% on Fashion-MNIST), a laudable result for SNNs. In addition, the approaches based on alternating optimization steadily surpass the existing SNNs without learnable control rates, which demonstrates that parameterizing the eigenvalues of spike generation functions is significant and effective for SNNs.
Figure 3 illustrates the spike raster plots of plain SLAYER, the alternating-optimization SLAYER, and BSNN on the 4881st MNIST testing sample with label 0, showing the firing rates and neuron excitation snapshots of these three approaches in detail. We first convert this image into a spike train using Poisson encoding, and then assign the classification label according to the greatest spike count in the output layer. The spike raster plots of the spiking neurons (in Layer 1, Layer 2, and the output layer) of the three approaches are shown successively in the nine right-hand subplots. In plain SLAYER, the firing rates of the spiking neurons within a layer are almost equal, so output spikes are generated evenly, causing the sample to be incorrectly classified as label 8. In contrast, both the alternating-optimization SLAYER and BSNN adaptively generate spikes: the firing rates of the spiking neurons show significant differences, the output neurons corresponding to wrong labels are suppressed, while the neuron corresponding to the correct label is "encouraged" to fire and eventually wins by a large margin.
We also demonstrate the robustness of BSNN to the control rate. This experiment is conducted on the MNIST dataset, with the architecture of BSNN set to 28x28-500-500-10. For each control-rate value, we ran BSNN 5 times, recorded the largest accuracy of each round within 50 epochs, and averaged the 5 records as the testing performance. The results are plotted in Figure 5. Obviously, BSNN performs better than the alternating optimization algorithms across a broad range of control-rate settings.
Based on the aforementioned experiments and analysis, we can declare that BSNN achieves superior performance to both the existing SNNs and the improved contenders, i.e., the approaches based on alternating optimization. Additionally, the performance of BSNN is no longer sensitive to the setting of control rates.
6 Conclusion and Discussions
In this paper, we set out to achieve an adaptive firing rate in SNNs. We addressed this issue via the eigenvalues of spike generation functions and revealed the close-knit relation between the two. Further, by employing bifurcation theory to enable adaptable eigenvalues, we proposed the Bifurcation Spiking Neural Network (BSNN). Compared with the alternating optimization approaches, BSNN not only tackles the challenge that control rates interact with connection weights during training, leading to a robust setting of control rates, but also works with considerably less computation and storage. Finally, we demonstrated our model on a delayed-memory XOR task and 3 benchmark datasets; the experiments verify the effectiveness of BSNN.
We provided a series of theoretical discussions about the firing rates of spiking neurons and the bifurcation properties of BSNN, including but not limited to the relation between firing rates and the eigenvalues of spike generation functions, the algebraic structure of spike generation functions, and how to calculate the gradients for training BSNN. These results may promote the development of SNN-related theories. Besides, we note that our work does not aim at realizing a biological learning phenomenon, but attempts to explore some new thoughts on SNNs. In this regard, Equation 8, which employs the last spikes of adjacent neurons to approximate the mutual promotion, only provides one feasible paradigm for implementing dynamic bifurcation neurons. We are interested in scaling up our work.
Appendix A Eigenvalues and Algebraic Equations
For a system of first-order linear differential equations
$$\frac{\mathrm{d}\mathbf{u}(t)}{\mathrm{d}t} = M\, \mathbf{u}(t),$$
we have its algebraic formulation:
$$\det(\lambda I - M) = 0.$$
These algebraic equations are only related to the observation variables $\mathbf{u}$. So the spectrum (the eigenvalues) of the matrix (operator) $M$ governs the evolution mechanism of this dynamical system.
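A small numerical check of this statement (the matrix values are illustrative): the eigenpairs of M both satisfy M v = lambda v and predict the decay rate of a simulated trajectory.

```python
import numpy as np

# System matrix of the linear ODE du/dt = M u (illustrative values).
M = np.array([[-2.0, 1.0],
              [0.0, -3.0]])

lams, V = np.linalg.eig(M)          # the spectrum and eigenvectors of M

# Starting on an eigenvector v, the exact solution is u(t) = exp(lam*t) v;
# a fine-grained Euler integration reproduces exactly that decay.
lam, v = lams[0], V[:, 0]
u, dt = v.copy(), 1e-4
for _ in range(10_000):             # integrate up to t = 1
    u = u + dt * (M @ u)
```

Each eigenvalue thus governs one mode exp(lambda*t) of the system, which is why the spectrum alone fixes how every trajectory grows or decays.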
Appendix B Bifurcation Structure in LIF Equation
We have provided extensive analysis in the main text to illustrate the importance of the control rates to the performance of SNNs (that is, to the firing rates). In fact, the control rate is intrinsically a bifurcation hyperparameter of the LIF model; in this appendix, we interpret this claim. We start from the simplest form of the LIF model, which, with input $I(t)$ and a rest voltage, is generally formulated as follows:
$$\tau \frac{\mathrm{d}u(t)}{\mathrm{d}t} = \beta\, u(t) + R\, I(t), \tag{13}$$
where $u(t)$ represents the membrane potential at time $t$, $\tau$ is the membrane time constant, $\beta$ is the control rate, usually preset to a fixed value, and $R$ is the membrane resistance. This equation describes a resistor-capacitor circuit in which the spiking neuron is activated only when the membrane potential reaches a certain threshold $u_{\text{th}}$ (the firing threshold). After firing, the membrane potential is instantaneously reset to a lower value $u_{\text{rest}}$ (the rest voltage).
In particular, as a mathematical ODE model, the LIF equation has a fixed eigenvalue $\lambda = \beta / \tau$, obtained by solving its algebraic formulation $\tau \lambda - \beta = 0$. This means the control rate of the LIF model determines its eigenvalue. Correspondingly, the LIF equation has a general solution as follows:
$$u(t) = u_{\text{rest}}\, e^{\beta (t - \hat{t})/\tau} + \frac{R}{\tau} \int_{\hat{t}}^{t} e^{\beta (t - s)/\tau} I(s)\, \mathrm{d}s,$$
where $\hat{t}$ denotes the last firing time, that is, the most recent time at which $u$ reached $u_{\text{th}}$ and was reset.
Consider the general case of a constant input $I$, and assume $u_{\text{rest}} = 0$ and $\beta < 0$. The solution of Equation 13 can then be written as:
$$u(t) = \frac{R I}{-\beta} \Big( 1 - e^{\beta (t - \hat{t})/\tau} \Big).$$
Setting $u(\hat{t} + T) = u_{\text{th}}$ for the next firing time, the firing period $T$ is derived below:
$$T = \frac{\tau}{\beta} \ln \Big( 1 + \frac{\beta\, u_{\text{th}}}{R I} \Big).$$
So the firing rate of the LIF neuron becomes:
$$\nu(I, \beta) = \frac{1}{T} = \frac{\beta}{\tau \ln \big( 1 + \beta\, u_{\text{th}} / (R I) \big)}, \tag{14}$$
with the condition $R I + \beta\, u_{\text{th}} > 0$.
From the shape of this two-variable function, two core conclusions about $\nu$ are evident: (1) $\nu$ is an increasing function with respect to $I$ and to $\beta$, respectively; and (2) the firing rate of a spiking neuron is sensitive to the control rate. In detail, the condition $R I + \beta u_{\text{th}} > 0$ carves an inactivated area out of the domain, which is plotted in Figure 6. This means that for a pregiven negative $\beta$, signals weaker than $-\beta u_{\text{th}} / R$ cannot activate the LIF neuron, as in the simulated case shown in Figure 6. Additionally, the inactivated area is dominated by the sign of $\beta$: when $\beta$ is greater than 0, the neuron can always be activated, no matter what the input is, whereas once $\beta$ becomes negative, there is inevitably a regime in which the neuron cannot be activated, even though real input signals evolve over time.
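Assuming the standard constant-input LIF rate expression nu = beta / (tau * ln(1 + beta*u_th/(R*I))) on the activated region R*I + beta*u_th > 0 (and rate 0 elsewhere), the two conclusions above, monotonicity and the inactivated region, can be checked numerically:

```python
import numpy as np

def lif_firing_rate(I, beta, tau=10.0, R=1.0, u_th=1.0):
    """Constant-input LIF firing rate
    nu = beta / (tau * ln(1 + beta * u_th / (R * I))),
    defined only on the activated region R*I + beta*u_th > 0;
    elsewhere the neuron never reaches threshold and the rate is 0."""
    if beta == 0:
        return R * I / (tau * u_th)       # limiting case beta -> 0
    arg = 1.0 + beta * u_th / (R * I)
    if arg <= 0:                          # inactivated region
        return 0.0
    return beta / (tau * np.log(arg))
```

For a negative control rate, inputs below |beta|*u_th/R return a rate of exactly 0 (the inactivated area), while above that boundary the rate grows with both the input strength and the control rate.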
In summary, the control rate of the LIF neuron indeed has a great and sensitive influence on the firing rate of spiking neurons; both the magnitude and the sign of the eigenvalue affect the frequency of spike generation. When the control rate passes through the critical point, the LIF model undergoes a topological change.
Footnotes
 http://yann.lecun.com/exdb/mnist/
 https://www.garrickorchard.com/datasets/nmnist
 https://www.kaggle.com/zalandoresearch/fashionmnist
References
 Building functional networks of spiking model neurons. Nature Neuroscience 19 (3), pp. 350. Cited by: §5.1.
 Feature representations for neuromorphic audio spike streams. Frontiers in Neuroscience 12, pp. 23. Cited by: §2, §5.2.
 Firing rate predictions in optimal balanced networks. In Advances in Neural Information Processing Systems 26 (NIPS), pp. 1538–1546. Cited by: §1.
 On the algorithmic power of spiking neural networks. In Proceedings of the 10th Innovations in Theoretical Computer Science (ITCS), Vol. 26, pp. 1–20. Cited by: §1.
 Skimming digits: neuromorphic classification of spikeencoded images. Frontiers in Neuroscience 10, pp. 184. Cited by: Table 1.
 Spiking neuron models: single neurons, populations, plasticity. Cambridge University Press. Cited by: §1.
 Time structure of the activity in neural network models. Physical Review E 51 (1), pp. 738. Cited by: §2, §4.3.
 Gradient descent for spiking neural networks. In Advances in Neural Information Processing Systems 31 (NIPS), pp. 1440–1450. Cited by: §4.3.
 Spiking deep networks with LIF neurons. arXiv:1510.08829. Cited by: §2.
 Hybrid macro/micro level backpropagation for training deep spiking neural networks. In Advances in Neural Information Processing Systems 31 (NIPS), pp. 7005–7015. Cited by: Table 1.
 Elements of applied bifurcation theory. Springer. Cited by: §4.1.
 Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience 10, pp. 508. Cited by: Table 1.
 Deep spiking networks. arXiv:1602.08323. Cited by: Table 1.
 Training a spiking neural network with equilibrium propagation. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1516–1523. Cited by: item SNN-EP.
 Phase transition dynamics. Cambridge University Press. Cited by: §4.1.
 Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. Journal of Neuroscience 25 (47), pp. 11003–11013. Cited by: §5.2.
 Invariant visual representation by single neurons in the human brain. Nature 435 (7045), pp. 1102. Cited by: §2.
 SLAYER: spike layer error reassignment in time. In Advances in Neural Information Processing Systems 31 (NIPS), pp. 1419–1428. Cited by: item Update connection weights:, §4.3, Table 1.
 Spike times make sense. Trends in Neurosciences 28 (1), pp. 1–4. Cited by: §1.
 Spiketrain level backpropagation for training deep recurrent spiking neural networks. In Advances in Neural Information Processing Systems 32 (NeurIPS), pp. 7800–7811. Cited by: Table 1.