SpikeGrad: An ANN-equivalent Computation Model for Implementing Backpropagation with Spikes
Event-based neuromorphic systems promise to reduce the energy consumption of deep learning tasks by replacing expensive floating-point operations on dense matrices with low-power, sparse and asynchronous operations on spike events. While these systems can be trained increasingly well using approximations of the backpropagation algorithm, such implementations usually require high-precision errors for training and are therefore incompatible with the typical communication infrastructure of neuromorphic circuits. In this work, we analyze how the gradient can be discretized into spike events when training a spiking neural network. We show that a special implementation of the integrate-and-fire neuron allows us to describe the accumulated activations and errors of the spiking neural network in terms of an equivalent artificial neural network, which lets us greatly speed up training compared to an explicit simulation of all spike events. This way we are able to demonstrate that even for deep networks, the gradients can be discretized sufficiently well with spikes if the gradient is properly rescaled. This form of spike-based backpropagation enables us to achieve accuracies on the MNIST and CIFAR10 datasets that are equivalent to or better than those of comparable state-of-the-art spiking neural networks trained with full-precision gradients. The algorithm, which we call SpikeGrad, is based on accumulation and comparison operations and can naturally exploit sparsity in the gradient computation, which makes it an interesting choice for spiking neuromorphic systems with on-chip learning capacities.
Johannes C. Thiele, CEA, LIST, 91191 Gif-sur-Yvette CEDEX, France; Olivier Bichler, CEA, LIST, 91191 Gif-sur-Yvette CEDEX, France; Antoine Dupret, CEA, LIST, 91191 Gif-sur-Yvette CEDEX, France
Preprint. Under review.
Spiking neural networks (SNNs) are a new generation of artificial neural network models, which try to harness potentially useful properties of biological neurons for energy-efficient neuromorphic systems. In traditional artificial neural networks (ANNs), processing is based on operations on dense, real-valued tensors. In contrast, SNNs communicate with asynchronous spike events, which potentially allows them to efficiently process information with high temporal and spatial sparsity if implemented in custom event-based hardware.
Previous work on optimizing SNNs with backpropagation
Recent years have seen a large number of approaches devoted to the optimization of spiking neural networks with the backpropagation algorithm, either by converting ANNs to SNNs or by simulating spikes explicitly in the forward pass and optimizing these dynamics with full-precision gradients. These methods usually do not communicate gradients as spike signals (recent surveys give a more detailed review of training algorithms for SNNs). It would, however, be desirable to enable on-chip learning in neuromorphic chips using the power of the backpropagation algorithm, while maintaining the advantages of spike-based processing in the backpropagation phase as well. Recent work has discussed how forward processing in an SNN can be mapped to an ANN. Our work extends this analysis to the backward pass, to yield a fully spike-based implementation of the backpropagation algorithm.
Previous work on approximating backpropagation with spikes
In previous works, a spike-based version of the backpropagation algorithm is implemented using direct feedback to neurons, i.e., spike propagation through fixed weights to each layer of the network. While good performance on the MNIST dataset is achieved, these works do not demonstrate the capacity of their algorithms on large ANNs and more realistic benchmarks. The exact backpropagation algorithm, which backpropagates through symmetric weights, might however be required to reach good performance on large-scale problems. Other work uses an approximation of the backpropagation algorithm, where the error is propagated via spike events, to train a network for relational inference. However, no mathematical analysis of the approximation capacities of the algorithm is provided and no scalability to large-scale classification problems is demonstrated.
Our contributions are twofold: First, we demonstrate how backpropagation can be seamlessly integrated into the spiking neural network framework by using a second accumulation compartment for error propagation, which discretizes the error into spikes. This way we obtain a system that is able to perform learning and inference based on accumulations and comparisons alone. As for the forward pass, this allows us to exploit the dynamic precision and sparsity provided by the discretization of all operations into asynchronous spike events. Secondly, we show that the system obtained in this way can be mapped to an ANN with equivalent accumulated responses in all layers. This allows us to simulate training of large-scale SNNs efficiently on graphics processing units (GPUs) using their equivalent ANN. We demonstrate classification accuracies equivalent or superior to existing implementations of SNNs trained with full-precision gradients, and comparable to the accuracy of standard ANNs. To the best of our knowledge, our work provides the first analysis of how the sparsity of the gradient during backpropagation can be exploited within a large-scale SNN processing structure, and the first report of competitive classification performance on a large-scale spiking network where training and inference are fully implemented with spikes.
2 The SpikeGrad algorithm
We begin with the description of SpikeGrad, the spike-based backpropagation algorithm. For each training example/mini-batch, integration is performed from t = 0 to T_ff for the forward pass and from T_ff to T_bp for the backward pass. Since no explicit time is used in the algorithm, dt represents symbolically the (very short) time between the arrival of an incoming spike and the response of the neuron, and is only used here to describe causality.
Integrate-and-fire neuron model
Our architecture consists of multiple layers (labeled by an index l) of integrate-and-fire (IF) neurons with integration variable V_i^l and threshold Θ_ff:

V_i^l(t+dt) = V_i^l(t) + Σ_j w_ij^l s_j^{l-1}(t) + b_i^l − Θ_ff s_i^l(t+dt).   (1)

The variable w_ij^l is the weight and b_i^l a bias value. The spike activation function s_i^l is a function which triggers a signed spike event depending on the internal variables of the neuron. It will be shown later that the specific choice of the activation function is fundamental for the mapping to an equivalent ANN. After a neuron has fired, its integration variable is decremented or incremented by the threshold value Θ_ff, which is represented by the last term on the r.h.s. of (1).
As a representation of the neuron activity, we use a trace x̃_i^l which accumulates spike information over a single example:

x̃_i^l(t+dt) = x̃_i^l(t) + η s_i^l(t).   (2)

By weighting the activity with the learning rate η, we avoid performing a multiplication when weighting the input with the learning rate for the weight update (8).
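The neuron update (1) and the trace accumulation (2) can be sketched together as one event-driven step. This is an illustrative NumPy sketch with our own variable names, assuming the fully symmetric firing rule; it is not the paper's reference implementation:

```python
import numpy as np

def if_step(V, x, in_spikes, w, theta, eta):
    """One event-driven update of a layer of signed integrate-and-fire
    neurons with an activity trace.

    V         : integration variables, one per neuron
    x         : activity traces, pre-scaled by the learning rate eta
    in_spikes : signed spike events (+1/-1/0) from the previous layer
    w         : weight matrix of shape (post, pre)
    """
    V = V + w @ in_spikes                       # accumulate weighted input spikes
    s = np.where(V >= theta, 1, np.where(V <= -theta, -1, 0))  # signed spikes
    V = V - theta * s                           # reset by +/- theta after firing
    x = x + eta * s                             # trace: accumulation replaces a
                                                # multiplication at update time
    return V, x, s
```

Note that the only operations per event are additions and comparisons; the learning rate enters through the trace increment, as described above.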
Implementation of implicit ReLU and surrogate activation function derivative
It is possible to define an implicit activation function based on how the neuron variables affect the spike activation function s_i^l. In our implementation, we use the following fully symmetric function to represent linear activation functions (used for instance in pooling layers):

s_i^l = +1 if V_i^l >= Θ_ff,  −1 if V_i^l <= −Θ_ff,  0 otherwise.   (3)

The following function, where z_i^l denotes the accumulated signed spike output of the neuron, corresponds to the rectified linear unit (ReLU) activation function:

s_i^l = +1 if V_i^l >= Θ_ff,  −1 if V_i^l <= −Θ_ff and z_i^l > 0,  0 otherwise.   (4)

The pseudo-derivative of the activation function is denoted symbolically by σ'. We use σ' = 1 for the linear case. For the ReLU, we use a surrogate of the form:

σ'(V_i^l, z_i^l) = 1 if Θ_ff z_i^l + V_i^l > 0,  0 otherwise.   (5)

These choices will be motivated in the following sections. Note that the derivatives depend only on the final states of the neurons at time T_ff.
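A minimal sketch of the three functions follows. The exact surrogate condition (a threshold on the accumulated pre-activation Θ·z + V) is our reading of the equivalence argument given later and should be treated as an assumption:

```python
def spike_linear(V, theta):
    # fully symmetric spike activation: signed spike on threshold crossing
    if V >= theta:
        return 1
    if V <= -theta:
        return -1
    return 0

def spike_relu(V, z, theta):
    # ReLU-like variant: negative spikes are only emitted while the
    # accumulated output z is still positive, so z can never become negative
    if V >= theta:
        return 1
    if V <= -theta and z > 0:
        return -1
    return 0

def surrogate_relu_deriv(V, z, theta):
    # pseudo-derivative evaluated on the final neuron state (assumed form)
    return 1.0 if theta * z + V > 0 else 0.0
```

With z = 0 the ReLU variant suppresses negative spikes, mirroring the hard zero of a ReLU, while the surrogate derivative gates the error in exactly those neurons whose accumulated response behaves linearly.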
Discretization of gradient into spikes
For gradient backpropagation, we introduce a second compartment U_i^l with threshold Θ_bp in each neuron, which integrates error signals from higher layers. The process discretizes errors in the same fashion as the forward pass discretizes an input signal into a sequence of signed spike signals:

U_i^l(t+dt) = U_i^l(t) + Σ_j w_ji^{l+1} δ_j^{l+1}(t) − Θ_bp χ_i^l(t+dt).   (6)

To this end, we introduce a ternary error spike activation function χ_i^l which is defined in analogy to (3), using the error integration variable U_i^l and the backpropagation threshold Θ_bp. The error is then obtained by gating this ternarized variable with one of the surrogate activation function derivatives of the previous section (linear or ReLU):

δ_i^l = σ'(V_i^l, z_i^l) χ_i^l.   (7)

This ternary spike signal is backpropagated through the weights to the lower layers and also applied in the update rule of the weight increment accumulator Δw_ij^l:

Δw_ij^l(t+dt) = Δw_ij^l(t) − δ_i^l(t) x̃_j^{l-1}(t),   (8)
which is triggered every time an error spike signal (7) is backpropagated. The weight updates are accumulated during error propagation and are applied after propagation is finished to update each weight simultaneously. In this way, the backpropagation of errors and the weight update will, exactly as forward propagation, only involve additions and comparisons of floating point numbers.
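Putting the error discretization (7) and the accumulator update (8) together, one backward event for a layer might look as follows. This is a hedged sketch: shapes, names and the vectorized form are our assumptions:

```python
import numpy as np

def backward_event(U, dW, err_spikes, w_up, x_pre, theta_bp, gate):
    """Process error spikes arriving from the layer above.

    U          : error integration compartment, one per neuron
    dW         : weight increment accumulator (applied after propagation)
    err_spikes : signed error spikes from the layer above
    w_up       : weights of the layer above, used transposed for backprop
    x_pre      : presynaptic traces (already contain the learning rate)
    gate       : surrogate derivative per neuron (0 or 1)
    """
    U = U + w_up.T @ err_spikes                                 # integrate errors
    chi = np.where(U >= theta_bp, 1, np.where(U <= -theta_bp, -1, 0))
    U = U - theta_bp * chi                                      # reset by +/- theta_bp
    delta = gate * chi                                          # gated ternary error
    dW = dW - np.outer(delta, x_pre)                            # accumulate updates
    return U, dW, delta
```

As in the forward pass, every operation is an accumulation or a comparison; the multiplication-free weight update works because the trace already carries the learning rate.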
Loss function and error scale
We use the cross-entropy loss function in the final layer, applied to the softmax of the total integrated signal (no spikes are triggered in the top layer during inference). This requires more complex operations than accumulations, but the cost is negligible if the number of classes is small. To make sure that sufficient error spikes are triggered in the top layer, and that error spikes arrive even in the lowest layer of the network, we apply a scaling factor β to the error values before transferring them to the error integration compartments. This scaling factor also implicitly sets the precision of the gradient, since a higher number of spikes means that a larger range of values can be represented. To counteract the relative increase of the gradient scale, the learning rates have to be rescaled by a factor 1/β.
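The rescaling can be sketched in two lines (the symbol beta and the function name are illustrative):

```python
def scale_top_error(error, eta, beta):
    """Scale the top-layer error by beta so that enough error spikes are
    triggered even in deep layers, and divide the learning rate by beta
    so the effective update magnitude is unchanged (illustrative sketch)."""
    return beta * error, eta / beta
```

The two factors cancel in the weight update, so beta only trades off the number of error spikes (and hence gradient precision) against computational cost.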
As pointed out in previous work, it is crucial to maintain the full precision of the input image to obtain good performance on complex standard benchmarks with SNNs. One possibility is to encode the input in a large number of spikes. Another possibility, which has been shown to require a much lower number of spikes in the network, is to multiply the input values directly with the weights of the first layer (just like in a standard ANN). The drawback is that the first layer then requires multiplication operations. The additional cost of this procedure may however be negligible if all other layers can profit from spike-based computation. This problem does not exist for stimuli which are natively encoded in spikes.
3 Formulation of the equivalent ANN
The simulation of the temporal dynamics of spikes requires a large number of time steps or events if activations are large. It would therefore be extremely beneficial if we were able to map the SNN to an equivalent ANN that can be trained much faster on standard hardware. In this section, we demonstrate that it is possible to find such an ANN using the forward and backward propagation dynamics described in the previous section.
Spike discretization error
We start our analysis with equation (1). We reorder the terms and sum over the increments every time the integration variable is changed, either by a spike that arrives at time t_in via connection j, or by a spike that is triggered at time t_out. With the initial conditions V_i^l(0) = 0 and z_i^l(0) = 0, we obtain the final value V_i^l(T_ff):

V_i^l(T_ff) = Σ_j w_ij^l Σ_{t_in} s_j^{l-1}(t_in) − Θ_ff Σ_{t_out} s_i^l(t_out).   (9)

By defining the total transmitted output of a neuron as z_i^l = Σ_{t_out} s_i^l(t_out), we obtain:

z_i^l = (1/Θ_ff) ( Σ_j w_ij^l z_j^{l-1} − V_i^l(T_ff) ).   (10)

The same reasoning can be applied to the backpropagation of the gradient. We define the summed response over error spike times as χ̃_i^l = Σ_{t_out} χ_i^l(t_out) to obtain:

χ̃_i^l = (1/Θ_bp) ( Σ_j w_ji^{l+1} δ̃_j^{l+1} − U_i^l(T_bp) ).   (11)

In both equations (10) and (11), the terms Σ_j w_ij^l z_j^{l-1} and Σ_j w_ji^{l+1} δ̃_j^{l+1} are equivalent to the output of an ANN with signed integer inputs z_j^{l-1} and δ̃_j^{l+1}. The scaling factors 1/Θ_ff and 1/Θ_bp can be interpreted as a linear activation function in the case of the forward pass, and a gradient rescaling in the case of the backward pass. If gradients shall not be explicitly rescaled, backpropagation requires Θ_bp = 1. The values of the residual integrations V_i^l(T_ff) and U_i^l(T_bp) therefore represent the spike discretization error (SDE) between the ANN outputs and the accumulated SNN outputs z_i^l and χ̃_i^l. Since we know that |V_i^l(T_ff)| < Θ_ff and |U_i^l(T_bp)| < Θ_bp, this gives bounds of less than one spike on the SDE in both directions.
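The equivalence expressed in (10) can be checked numerically: feeding an integer input as a stream of signed unit events through one IF layer and comparing the accumulated spike counts with the corresponding ANN value yields a residual bounded by one. This is a self-contained sketch with arbitrary sizes, weights and seed:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0
w = rng.normal(scale=0.2, size=(4, 8))          # one layer, illustrative sizes
x = rng.integers(-5, 6, size=8)                 # integer input "spike counts"

# present the input as a stream of signed unit events (order does not
# change the accumulated totals)
events = [(j, int(np.sign(c))) for j, c in enumerate(x) for _ in range(abs(c))]

V = np.zeros(4)                                 # integration variables
z = np.zeros(4, dtype=int)                      # accumulated output spikes
for j, s in events:
    V += w[:, j] * s
    out = np.where(V >= theta, 1, np.where(V <= -theta, -1, 0))
    while out.any():                            # fire until sub-threshold
        V -= theta * out
        z += out
        out = np.where(V >= theta, 1, np.where(V <= -theta, -1, 0))

ann = (w @ x) / theta                           # equivalent ANN output, eq. (10)
resid = ann - z                                 # spike discretization error
assert np.allclose(resid, V / theta)            # residual integration exactly
assert np.max(np.abs(resid)) < 1.0              # bounded by one spike
```

The residual is exactly the leftover integration divided by the threshold, which is what (10) predicts.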
So far we can only represent linear functions. We now consider an implementation where the ANN applies a ReLU activation function instead. The SDE in this case is:

SDE_i^l = ReLU( (1/Θ_ff) Σ_j w_ij^l z_j^{l-1} ) − z_i^l.
We can calculate this error by considering that (4) forces the neuron into one of two regimes (note that (4) implies z_i^l >= 0): In one case, the accumulated input Σ_j w_ij^l z_j^{l-1} is non-positive (this includes the case z_i^l = 0). This implies that the ReLU output of the equivalent ANN is zero, and therefore the SDE is bounded by the residual integration (or even vanishes if no spike was triggered at all). In the other case, the accumulated input is positive, where (4) is equivalent to (3).
This equivalence motivates the choice of (5) as a surrogate derivative for the SNN: the condition Θ_ff z_i^l + V_i^l > 0 can be seen to be equivalent to a positive accumulated input, which defines the derivative of a ReLU. Finally, for the total weight increment Δw_ij^l, it can be seen from (2) and (8) that:

Δw_ij^l(T_bp) = −η δ̃_i^l z_j^{l-1},
which is exactly the weight update formula of an ANN defined on the accumulated variables. We have therefore demonstrated that the SNN can be represented by an ANN by recursively replacing all accumulated SNN outputs z_i^l and χ̃_i^l by their ANN equivalents and applying the corresponding activation function directly on these variables. The error caused by this substitution compared to using the accumulated variables of an SNN is described by the SDE. This ANN can now be used for training of the SNN on GPUs. The SpikeGrad algorithm formulated on the variables V_i^l, U_i^l, x̃_i^l and Δw_ij^l represents the algorithm that would be implemented on an event-based spiking neural network hardware platform. We will now demonstrate how the SDE can be further reduced to obtain an ANN and SNN that are exactly equivalent.
For a large number of spikes, the SDE may be negligible compared to the activation of the ANN. However, in a framework whose objective is to minimize the number of spikes emitted by each neuron, this error can have a potentially large impact.
One way to reduce the error between the ANN and the SNN output is to constrain the ANN to integer activation values during training, for instance by rounding the ANN outputs:

z_i^l = round( (1/Θ_ff) Σ_j w_ij^l z_j^{l-1} ).   (15)

The function round(·) here rounds to the next integer value, with boundary cases rounded away from zero. This behavior can be implemented in the SNN by a modified spike activation function which is applied after the full stimulus has been propagated. To obtain the exact same response as the ANN, we have to take into account the current value of V_i^l and modify the threshold values:

s_i^l = +1 if V_i^l >= Θ_ff/2,  −1 if V_i^l <= −Θ_ff/2,  0 otherwise.   (16)
Because this spike activation function is applied only to the residual values, we call it the residual spike activation function. The function is applied to a layer after all spikes have been propagated with the standard spike activation function (3) or (4). We start with the lowest layer and propagate all residual spikes to the higher layers, which use the standard activation function. We then proceed with setting the next layer to residual mode and propagate the residual spikes. This is continued until we arrive at the last layer of the network.
By considering all possible rounding scenarios, it can be seen that (16) indeed implies:

z_i^l = round( (1/Θ_ff) Σ_j w_ij^l z_j^{l-1} ).   (17)
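The rounding behavior and the residual firing rule (16) can be sketched and cross-checked as follows, assuming, as in the text, that the residual integration satisfies |V| < Θ before the residual phase (so at most one residual spike is needed):

```python
import math

def round_away(v):
    # round to nearest integer, halves rounded away from zero
    return math.floor(v + 0.5) if v >= 0 else math.ceil(v - 0.5)

def residual_spike(V, theta):
    """Residual spike activation with halved thresholds. For |V| < theta,
    firing at most one residual spike makes the accumulated output equal
    to the rounded ANN value (illustrative sketch)."""
    if V >= theta / 2:
        return 1, V - theta
    if V <= -theta / 2:
        return -1, V + theta
    return 0, V
```

For every residual V in (−Θ, Θ), the emitted spike count equals round_away(V/Θ), which is exactly the integer part missing from the rounded ANN output.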
The same principle can be applied to obtain integer-rounded error propagation:

χ̃_i^l = round( (1/Θ_bp) Σ_j w_ji^{l+1} δ̃_j^{l+1} ).   (18)
We have to apply the following modified spike activation function in the SNN after the full error has been propagated by the standard error spike activation function:

χ_i^l = +1 if U_i^l >= Θ_bp/2,  −1 if U_i^l <= −Θ_bp/2,  0 otherwise.   (19)
We have therefore shown that the SNN will after each propagation phase have exactly the same accumulated responses as the corresponding ANN. The same principle can be applied to obtain other forms of rounding (e.g. floor and ceil), if (16) and (19) are modified accordingly.
Computational complexity estimation
Note that we have only demonstrated the equivalence of the accumulated neuron responses. However, for each of the response values, there is a large number of possible combinations of positive and negative spike events that lead to the same response. The computational complexity of the event-based algorithm therefore depends on the total number of these events. The best possible case is when the accumulated response value z_i^l is represented by exactly |z_i^l| spikes. In the worst case, a large number of additional redundant spikes is emitted which sum up to z_i^l. The maximal number of spikes in each layer is bounded by the largest possible integration value that can be obtained. This depends on the maximal weight value w_max, the number of connections N^l and the number of spike events each connection receives, which is given by the maximal value of the previous layer (or the input in the first layer):

n_max^l = (w_max N^l n_max^{l-1}) / Θ_ff.

The same reasoning applies to backpropagation. Our experiments show that for input encodings where the input is provided in a continuous fashion, and weight values which are much smaller than the threshold value, the deviation from the best-case scenario is rather small. This is because in this case the sub-threshold integration allows the fluctuations in the signal to be averaged out. This way the firing rate stays rather close to its long-term average and few redundant spikes are emitted. For the total number of spikes in the full network on the CIFAR10 test set, we empirically obtain a value only slightly above this best case.
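The worst-case bound described above can be written as a one-line recurrence over layers. The names (w_max, fan-in, per-layer maximum) are our notation for the quantities in the text:

```python
import math

def max_spikes(n_prev_max, fan_in, w_max, theta):
    """Worst-case number of signed spikes a neuron can emit in one
    propagation phase: every incoming spike changes the integration by at
    most w_max, so the number of threshold crossings is bounded by the
    total injected magnitude divided by theta (coarse upper bound)."""
    return math.ceil(fan_in * n_prev_max * w_max / theta)
```

For example, with a fan-in of 100, at most 10 spikes per input and w_max = Θ/2, a neuron can emit at most 500 spikes per phase.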
4 Experiments
For all experiments, the means, errors and maximal values are calculated over 20 simulation runs.
Tables 1 and 2 compare our results with state-of-the-art results for SNNs on the MNIST and CIFAR10 datasets. In both cases our results are competitive with those of other SNNs trained with high-precision gradients. Compared to results using the same topology, our algorithm performs at least equivalently.
The final classification performance of the network as a function of the error scaling term β in the final layer can be seen in figure 1. Previous work on low-bitwidth gradients found that gradients usually require a higher precision than both weights and activations. Our results also indicate that a certain minimum number of error spikes is necessary to achieve convergence. This strongly depends on the depth of the network and on whether enough spikes are triggered to provide sufficient gradient signal in the bottom layers. For the CIFAR10 network, convergence becomes unstable below a certain value of β. If the number of operations is large enough for convergence, the required precision of the gradient does not seem to be extremely high: on the MNIST task, the difference in test performance between a gradient rescaled by a factor of 50 and one rescaled by a factor of 100 is insignificant, and in the CIFAR10 task the same holds for rescaling by 400 or 500. The results obtained with full-precision gradients in tables 1 and 2 likewise show the same performance within the error range.
Table 1: Classification results of SNNs on MNIST.

| Architecture | Method | Rec. Rate (max [mean ± std]) |
| Wu et al.* | Direct training, float gradient | |
| Rueckauer et al. | CNN converted to SNN | |
| Jin et al.* | Direct Macro/Micro BP | |
| This work* | Direct float gradient | |
| This work* | Direct spike gradient | |
Table 2: Classification results of SNNs on CIFAR10.

| Architecture | Method | Rec. Rate (max [mean ± std]) |
| Rueckauer et al. | CNN converted to SNN (with BatchNorm) | |
| Sengupta et al. | VGG-16 converted to SNN | |
| Wu et al.* | Float gradient (no NeuNorm) | |
| This work* | Direct float gradient | |
| This work* | Direct spike gradient | |
Sparsity in backpropagated gradient
To evaluate the potential efficiency of the spike coding scheme relative to an ANN, we use the metric of relative synaptic operations. A synaptic operation corresponds to a multiply-accumulate (MAC) in the case of an ANN, and a simple accumulation (ACC) in the case of an SNN. This metric allows us to compare networks based on their fundamental operations; its advantage is that it does not depend on the exact implementation of the operations (for instance, the number of bits used to represent each number). Since an ACC is generally cheaper and easier to implement than a MAC, an SNN is more efficient in terms of its operations than the corresponding ANN if the number of ACCs is smaller than the number of MACs.
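As a toy illustration of this counting for a single fully connected layer (the counting granularity is our simplification):

```python
def relative_ops(n_pre, n_post, spikes_pre):
    """Relative synaptic operations of an SNN layer versus its ANN
    counterpart. ANN: each of the n_pre inputs is multiplied into each of
    the n_post outputs once -> n_pre * n_post MACs. SNN: each presynaptic
    spike triggers one accumulation per outgoing synapse ->
    spikes_pre * n_post ACCs. A value below 1 means the event-based
    network performs fewer fundamental operations."""
    macs = n_pre * n_post
    accs = spikes_pre * n_post
    return accs / macs
```

For instance, if only 30 spikes arrive at a layer with 100 inputs, the SNN performs 0.3 relative synaptic operations, regardless of the output size.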
In figure 1 it can be seen that the number of operations (i.e. the number of spikes) decreases with increasing inference precision of the network. This is a result of the decreasing error in the classification layer, which leads to the emission of fewer error spikes. Numbers were obtained with the integer activation of the equivalent ANN to keep simulation times tractable. As explained previously, the actual number of events and synaptic operations in an SNN may therefore deviate slightly from these numbers. Figure 2 shows how the number of operations during the backpropagation phase is distributed over the layers of the network (the float-precision input layer and average pooling layers were omitted). While propagating deeper into the network, the relative number of operations decreases and the error becomes increasingly sparse. This tendency is consistent throughout the training process across epochs.
5 Discussion and conclusion
Using spike-based propagation of the error gradient, we demonstrated that the paradigm of event-based information propagation can be translated to the backpropagation algorithm. We have not only shown that competitive inference performance can be achieved, but also that gradient propagation seems particularly suitable to leverage spike-based processing by exploiting high signal sparsity. For both forward and backward propagation, SpikeGrad requires a similar communication infrastructure between neurons, which simplifies a possible spiking hardware implementation. One restriction of our algorithm is the need for negative spikes, which could be problematic depending on the particular hardware implementation.
In particular, the topology used for CIFAR10 classification is rather large for the given task. We decided to use the same topologies as the state of the art to allow for better comparison. In an ANN implementation, it is generally undesirable to use a network with a large number of parameters, since this increases the need for memory and computation. The relatively large number of parameters may, to a certain extent, explain the very low number of relative synaptic operations we observed during backpropagation. In an SNN, a large number of parameters is however less problematic from a computational perspective, since only the neurons which are activated by input spikes trigger computations. A large portion of the network therefore remains inactive. It would still be interesting to investigate the signal sparsity and performance of SpikeGrad in ANN topologies that were explicitly designed for minimal computation and memory requirements.
-  P. Baldi and P. Sadowski. A theory of local learning, the learning channel, and the optimality of backpropagation. Neural Networks, 38:51–74, 2016.
-  S. Bartunov, A. Santoro, B. A. Richards, G. E. Hinton, and T. P. Lillicrap. Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures. In Advances in Neural Information Processing Systems (NIPS) 2018, 2018.
-  G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass. Long short-term memory and learning-to-learn in networks of spiking neurons. In Advances in Neural Information Processing Systems (NIPS) 2018, 2018.
-  J. Binas, G. Indiveri, and M. Pfeiffer. Deep counter networks for asynchronous event-based processing. arXiv:1611.00710v1, NIPS 2016 workshop "Computing with Spikes", Barcelona, Spain, 2016.
-  P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer. Fast-Classifying, High-Accuracy Spiking Deep Networks Through Weight and Threshold Balancing. In International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 2015.
-  S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, and D. S. Modha. Convolutional networks for fast, energy-efficient neuromorphic computing. Proceedings of the National Academy of Sciences, 113(41):11441–11446, 2016.
-  Y. Jin, P. Li, and W. Zhang. Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks. In Advances in Neural Information Processing Systems (NIPS) 2018, pages 7005–7015, 2018.
-  J. H. Lee, T. Delbruck, and M. Pfeiffer. Training Deep Spiking Neural Networks Using Backpropagation. Frontiers in Neuroscience, 10:508, 2016.
-  W. Maass. Networks of Spiking Neurons: The Third Generation of Neural Network Models. Neural Networks, 10(9):1659–1671, 1997.
-  P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar, and D. S. Modha. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673, 2014.
-  E. Neftci, C. Augustine, S. Paul, and G. Detorakis. Event-Driven Random Backpropagation: Enabling Neuromorphic Deep Learning Machines. Frontiers in Neuroscience, 11:324, 2017.
-  P. O’Connor and M. Welling. Deep Spiking Networks. arXiv:1602.08323v2, NIPS 2016 workshop "Computing with Spikes", Barcelona, Spain, 2016.
-  M. Pfeiffer and T. Pfeil. Deep Learning With Spiking Neurons: Opportunities and Challenges. Frontiers in Neuroscience, 12:774, 2018.
-  N. Qiao, H. Mostafa, F. Corradi, M. Osswald, F. Stefanini, D. Sumislawska, and G. Indiveri. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Frontiers in Neuroscience, 2015.
-  B. Rueckauer, I.-A. Lungu, Y. Hu, M. Pfeiffer, and S.-C. Liu. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification. Frontiers in Neuroscience, 11:682, 2017.
-  A. Samadi, T. P. Lillicrap, and D. B. Tweed. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights. Neural Computation, 29:578–602, 2017.
-  A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy. Going Deeper in Spiking Neural Networks: VGG and Residual Architectures. Frontiers in Neuroscience, 13:95, 2019.
-  W. Severa, C. M. Vineyard, R. Dellana, S. J. Verzi, and J. B. Aimone. Training deep neural networks for binary communication with the Whetstone method. Nature Machine Intelligence, 1:86–94, 2019.
-  S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv:1606.06160v3, 2018.
-  A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida. Deep learning in spiking neural networks. Neural Networks, 111:47 – 63, 2019.
-  J. C. Thiele, O. Bichler, A. Dupret, S. Solinas, and G. Indiveri. A Spiking Network for Inference of Relations Trained with Neuromorphic Backpropagation. arXiv:1903.04341, 2019.
-  J. Wu, Y. Chua, M. Zhang, Q. Yang, G. Li, and H. Li. Deep Spiking Neural Network with Spike Count based Learning Rule. arXiv:1902.05705v1, 2019.
-  Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi. Direct Training for Spiking Neural Networks: Faster, Larger, Better. arXiv:1809.05793v1, 2018.
-  Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks. Frontiers in Neuroscience, 12:331, 2018.
-  S. Yin, S. K. Venkataramanaiah, G. K. Chen, R. Krishnamurthy, Y. Cao, C. Chakrabarti, and J.-s. Seo. Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations. arXiv:1709.06206v1, 2017.
-  F. Zenke and S. Ganguli. SuperSpike: Supervised learning in multilayer spiking neural networks. Neural Computation, 30:1514–1541, 2018.