Training Dynamic Exponential Family Models with Causal and Lateral Dependencies for Generalized Neuromorphic Computing

Abstract

Neuromorphic hardware platforms, such as Intel’s Loihi chip, support the implementation of Spiking Neural Networks (SNNs) as an energy-efficient alternative to Artificial Neural Networks (ANNs). SNNs are networks of neurons with internal analogue dynamics that communicate by means of binary time series. In this work, a probabilistic model is introduced for a generalized set-up in which the synaptic time series can take values in an arbitrary alphabet and are characterized by both causal and instantaneous statistical dependencies. The model, which can be considered as an extension of exponential family harmoniums to time series, is introduced by means of a hybrid directed-undirected graphical representation. Furthermore, distributed learning rules are derived for maximum likelihood and Bayesian criteria under the assumption of fully observed time series in the training set.

I Introduction

The current dominant computing framework for supervised learning applications is given by feed-forward multi-layer Artificial Neural Networks (ANNs). These systems process real numbers through a cascade of non-linearities applied successively over multiple layers. It is well-understood that training and running ANNs for inference generally require a significant amount of resources in terms of space and time (see, e.g., [1]). Neuromorphic computing, currently backed by recent major projects by IBM, Qualcomm, and Intel, offers a fundamental paradigm shift that takes the trend towards distributed computing initiated by ANNs to its natural extreme by borrowing insights from computational neuroscience. A neuromorphic chip consists of a network of spiking neurons with internal temporal analogue dynamics and digital spike-based synaptic communications. Current hardware implementations confirm drastic power reductions by orders of magnitude with respect to ANNs [2, 3].

Models typically used to train Spiking Neural Networks (SNNs) are deterministic, and learning rules borrow tools and ideas from the design of ANNs. Examples include the standard leaky integrate-and-fire model and its variants, with associated learning rules that approximate backpropagation [4, 5, 6]. Probabilistic models for SNNs are more conventionally adopted in computational neuroscience and offer a variety of potential advantages, including flexibility and availability of principled learning criteria [7, 8]. Nevertheless, they pose technical challenges in the design of training and inference algorithms that have only partially been addressed [9, 10, 11, 12].

In this work, we study the problem of training a probabilistic model for correlated time series. The framework generalizes probabilistic models for SNNs [13, 12] by allowing for arbitrary alphabets – not restricted to binary – and by enabling individual time series – or neuron signals – not only to affect each other causally over time but also to have instantaneous correlations. It is noted that both general discrete alphabets and lateral dependencies are in principle implementable on existing digital neuromorphic chips [1]. We derive distributed learning rules for Maximum Likelihood (ML) and for Bayesian criteria based on synaptic sampling [14, 15, 16, 17, 18], and detail the corresponding communication requirements. Applications of the model to supervised learning via classification are also discussed.

The proposed model can be considered an extension of exponential family harmoniums [19] from vector-valued signals to time series. We refer to the model as a dynamic exponential family, which is specified by a hybrid directed and undirected graphical representation (see Fig. 1). We specifically focus here on the case of fully observed time series, which is of independent interest and also serves as a building block for the more complex set-up with latent variables, to be considered in follow-up work.

Fig. 1: Hybrid directed and undirected graphical representation of a dynamic exponential family model: (a) causal graph $\mathcal{G}_c$ representing directed causal relationships; (b) lateral graph $\mathcal{G}_l$ encoding instantaneous correlations; and (c) time-expanded graph assuming a memory of $\tau$ for the causal connections.

The rest of this paper is organized as follows. In Sec. II, we describe the dynamic exponential family model. Under this model, we then derive distributed learning rules for ML and Bayesian criteria in Sec. III. Finally, Sec. IV presents numerical results for a multi-task supervised learning problem tackled via a two-layer SNN.

II Dynamic Exponential Family Model

In this section, we describe the proposed probabilistic model as an extension of exponential family harmoniums [19] to time series.

Hybrid Directed-Undirected Representation. We consider a probabilistic model for dependent time series that captures both causal and lateral, or instantaneous, correlations. To describe causal and lateral dependencies, we introduce a directed graph $\mathcal{G}_c = (\mathcal{V}, \mathcal{E}_c)$ and an undirected graph $\mathcal{G}_l = (\mathcal{V}, \mathcal{E}_l)$, respectively, where $\mathcal{V}$ is the set of time sequences of interest. Having in mind the application to SNNs discussed above, we will also refer to the vertices $i \in \mathcal{V}$ as neurons or units. The edge set $\mathcal{E}_c$ represents the causal connections between time series and corresponding neurons, and hence we refer to $\mathcal{G}_c$ as the causal graph. Note that the presence of self-loops, i.e., edges of the form $(i,i)$ connecting a unit to itself, or of more general loops involving multiple nodes, is allowed in the causal graph, and it indicates a recurrent behavior for the time series. In contrast, the edge set $\mathcal{E}_l$ encodes instantaneous correlations or lateral connections, and hence the graph $\mathcal{G}_l$ is referred to as the lateral graph. Examples of the two graphs are shown in Fig. 1(a) and Fig. 1(b).

In the causal graph $\mathcal{G}_c$, we denote by $\mathcal{P}_i = \{j : (j,i) \in \mathcal{E}_c\}$ the subset of units that have a causal connection to unit $i$. Note that the set $\mathcal{P}_i$ also includes unit $i$ if there is a self-loop for unit $i$, and that a causal connection from unit $j$ to unit $i$ does not in general imply the inclusion of the edge $(i,j)$ in $\mathcal{E}_c$. For the lateral graph $\mathcal{G}_l$, we denote by $\mathcal{L}_i = \{j : \{i,j\} \in \mathcal{E}_l\}$ the subset of units that have a lateral connection with unit $i$; edges $\{i,j\}$ and $\{j,i\}$ are equivalent in graph $\mathcal{G}_l$. Moreover, we denote by $\mathcal{C}_i$ the subset of units that are reachable from unit $i$ in the lateral graph, in the sense that there exists a path in $\mathcal{G}_l$ between unit $i$ and any unit $j \in \mathcal{C}_i$, where a path is a sequence of edges in $\mathcal{E}_l$ in which consecutive edges share a vertex. We generally have the inclusion $\mathcal{L}_i \subseteq \mathcal{C}_i$; furthermore, for any $j \in \mathcal{C}_i$, we have the set equality $\mathcal{C}_j \cup \{j\} = \mathcal{C}_i \cup \{i\}$, since units $i$ and $j$ are reachable from each other. Fig. 1 illustrates the subsets $\mathcal{P}_i$, $\mathcal{L}_i$, and $\mathcal{C}_i$ for an example network.

Probabilistic model. Each unit $i \in \mathcal{V}$ is associated with a random sequence $x_{i,t}$ for $t = 1, 2, \ldots$, taking values in either a continuous or discrete space $\mathcal{X}_i$. Given the causal and lateral connections defined by graphs $\mathcal{G}_c$ and $\mathcal{G}_l$, the joint distribution of the random process $x_t = (x_{i,t})_{i \in \mathcal{V}}$ for $t = 1, \ldots, T$ factorizes according to the chain rule

$p(x_{1:T} \mid \theta) = \prod_{t=1}^{T} p\big(x_t \mid x_{1:t-1}, \theta\big), \qquad (1)$

where we denote as $x_{t_1:t_2}$ the overall random process from time $t_1$ to time $t_2$, and as $\theta$ the set of model parameters. In (1), we have implicitly conditioned on the initial values $x_{t \leq 0}$, which are assumed to be fixed. Furthermore, the conditional probability $p(x_t \mid x_{1:t-1}, \theta)$ follows a distribution in the exponential family with $D_i$-dimensional sufficient statistics $s_{i,t} = s_i(x_{i,t})$ for every unit $i$ and a quadratic energy function. In the practically relevant case for neuromorphic computing where the alphabets are discrete and finite, i.e., $\mathcal{X}_i = \{0, 1, \ldots, D_i\}$ for some integer $D_i$, the sufficient statistics are defined as the one-hot representation $s_{i,t} = [\mathbb{1}(x_{i,t} = 1), \ldots, \mathbb{1}(x_{i,t} = D_i)]^\top$, where $\mathbb{1}(\cdot)$ is the indicator function of the event at its argument, i.e., $\mathbb{1}(x_{i,t} = d)$ evaluates to 1 if $x_{i,t} = d$ and to 0 otherwise. Note that we have $s_i(0) = 0$, the all-zero vector. Furthermore, SNNs are characterized by $\mathcal{X}_i = \{0, 1\}$, i.e., by a binary alphabet, and we have $s_{i,t} = x_{i,t}$.
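As a concrete illustration of these sufficient statistics, the following minimal Python sketch (the function name is ours, not the paper's) maps a symbol from a finite alphabet $\{0, 1, \ldots, D\}$ to its one-hot representation, with $s(0)$ equal to the all-zero vector and the binary SNN case recovered for $D = 1$.

```python
import numpy as np

def one_hot_stats(x, D):
    """One-hot sufficient statistics s(x) for x in {0, 1, ..., D}.

    s(0) is the all-zero vector, so the statistics have dimension D
    rather than D + 1; for the binary SNN case (D = 1), s(x) = x.
    """
    s = np.zeros(D)
    if x > 0:
        s[x - 1] = 1.0
    return s

# Binary (SNN) alphabet: s(0) = [0.], s(1) = [1.]
assert one_hot_stats(1, 1)[0] == 1.0
```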

Writing $s_{i,t} = s_i(x_{i,t})$ for simplicity of notation, we have

$p(x_t \mid x_{t-\tau:t-1}, \theta) \propto \exp\Big( \sum_{i \in \mathcal{V}} \theta_i^\top s_{i,t} + \sum_{i \in \mathcal{V}} \sum_{j \in \mathcal{P}_i} \sum_{\delta=1}^{\tau} s_{j,t-\delta}^\top W_{j,i,\delta}\, s_{i,t} + \sum_{\{i,j\} \in \mathcal{E}_l} s_{i,t}^\top \Theta_{i,j}\, s_{j,t} \Big) \qquad (2)$

for some memory parameter $\tau \geq 1$. The model (2) captures causal and lateral connections through the second and third terms in the exponent, respectively. The model is parameterized by the set $\theta = \{\theta_{\mathcal{V}}, W, \Theta\}$, whose components are described next.

First, the vector $\theta_i \in \mathbb{R}^{D_i}$ contains the unit-wise natural parameters for unit $i$, and these vectors are collected as $\theta_{\mathcal{V}} = \{\theta_i\}_{i \in \mathcal{V}}$. Second, for every pair of units $(j,i)$, the $D_j \times D_i$ matrix $W_{j,i,\delta}$ describes the causal influence of unit $j$ on unit $i$ after a time lag of $\delta$ time instants. In particular, its $(d', d)$-th element indicates the causal strength from the $d'$-th sufficient statistic of unit $j$ to the $d$-th sufficient statistic of unit $i$. We use the notations $W_{j,i} = \{W_{j,i,\delta}\}_{\delta=1}^{\tau}$ and $W = \{W_{j,i}\}$. The structure of the causal graph defines the positions of the zeros in $W$: $W_{j,i,\delta}$ is generally non-zero if $j \in \mathcal{P}_i$ and is an all-zero matrix otherwise. Note that, in general, we have $W_{j,i,\delta} \neq W_{i,j,\delta}^\top$ for $i \neq j$. Third, for every pair of units $\{i,j\} \in \mathcal{E}_l$, the $D_i \times D_j$ matrix $\Theta_{i,j}$ describes the instantaneous dependence between units $i$ and $j$. In particular, its $(d, d')$-th element indicates the correlation strength between the $d$-th sufficient statistic of unit $i$ and the $d'$-th sufficient statistic of unit $j$. The structure of the lateral graph defines the positions of the zeros in $\Theta = \{\Theta_{i,j}\}$: $\Theta_{i,j}$ is generally non-zero if $j \in \mathcal{L}_i$ and is an all-zero matrix otherwise. Note also that we have $\Theta_{i,j} = \Theta_{j,i}^\top$ for all $\{i,j\} \in \mathcal{E}_l$.

Following a common approach in computational neuroscience [20], we adopt a parameterization of the causal parameters as weighted sums of fixed basis functions with learnable weights. To elaborate, for every $j \in \mathcal{P}_i$ and lag $\delta = 1, \ldots, \tau$, we write

$W_{j,i,\delta} = \sum_{k=1}^{K} b_k(\delta)\, W^k_{j,i}, \qquad (3)$

where we have defined $K$ basis functions $\{b_k(\delta)\}_{k=1}^{K}$, which vary over the time lag $\delta = 1, \ldots, \tau$, and learnable weight matrices $W^k_{j,i}$, each of dimension $D_j \times D_i$, which are collected as $W^k = \{W^k_{j,i}\}$. We note that the set of basis functions, along with the weight matrices, describes the spatio-temporal receptive field of the neurons [20]. Examples of basis functions include raised cosines with different synaptic delays [20, 21, 22], as illustrated in Fig. 2. As a result, the conditional joint distribution in (2) can be written as

$p(x_t \mid x_{t-\tau:t-1}, \theta) \propto \exp\Big( \sum_{i \in \mathcal{V}} \theta_i^\top s_{i,t} + \sum_{i \in \mathcal{V}} \sum_{j \in \mathcal{P}_i} \sum_{k=1}^{K} \Big( \sum_{\delta=1}^{\tau} b_k(\delta)\, s_{j,t-\delta} \Big)^\top W^k_{j,i}\, s_{i,t} + \sum_{\{i,j\} \in \mathcal{E}_l} s_{i,t}^\top \Theta_{i,j}\, s_{j,t} \Big), \qquad (4)$

whose model parameters are defined as $\theta = \{\theta_{\mathcal{V}}, \{W^k\}_{k=1}^{K}, \Theta\}$. When the number $K$ of basis functions is smaller than the memory $\tau$, the number of parameters is reduced with respect to model (2).

Fig. 2: The raised cosine basis functions $b_k(\delta)$, where the index $k$ is indicated in the figure [20].
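For intuition, the sketch below generates a family of raised cosine basis functions in the spirit of [20]. The log-time spacing and the specific constants are illustrative assumptions on our part; the paper only specifies that raised cosines with different synaptic delays are used.

```python
import numpy as np

def raised_cosine_basis(num_basis, tau):
    """Raised cosine basis functions b_k(delta), delta = 1..tau.

    A common construction in the spirit of [20]: log-time raised
    cosines with evenly spaced centers; the exact constants here are
    illustrative choices, not the ones used in the paper.
    """
    delta = np.arange(1, tau + 1, dtype=float)
    log_t = np.log(delta)
    centers = np.linspace(log_t.min(), log_t.max(), num_basis)
    width = (centers[1] - centers[0]) if num_basis > 1 else 1.0
    basis = np.zeros((num_basis, tau))
    for k, c in enumerate(centers):
        # Each basis is a half-period cosine bump around its center.
        arg = np.clip((log_t - c) * np.pi / (2 * width), -np.pi, np.pi)
        basis[k] = 0.5 * (1 + np.cos(arg))
    return basis  # shape (K, tau)
```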

In (4), a key role is played by the $k$-th filtered trace of unit $j$, which we define as the convolution

$\overrightarrow{s}^{\,k}_{j,t} = \sum_{\delta=1}^{\tau} b_k(\delta)\, s_{j,t-\delta} = (b_k \ast s_j)_t. \qquad (5)$

This signal is obtained by filtering the sequence $s_{j,t}$ through a filter with impulse response equal to the basis function $b_k(\cdot)$. With the help of this definition, we can re-express the conditional joint distribution in (4) as

$p(x_t \mid x_{t-\tau:t-1}, \theta) = p\big(x_t \,\big|\, \{\overrightarrow{s}^{\,k}_{j,t}\}_{j \in \mathcal{V},\, k=1,\ldots,K}, \theta\big) \qquad (6a)$
$\qquad \propto \exp\Big( \sum_{i \in \mathcal{V}} \theta_i^\top s_{i,t} + \sum_{i \in \mathcal{V}} \sum_{j \in \mathcal{P}_i} \sum_{k=1}^{K} \big(\overrightarrow{s}^{\,k}_{j,t}\big)^\top W^k_{j,i}\, s_{i,t} + \sum_{\{i,j\} \in \mathcal{E}_l} s_{i,t}^\top \Theta_{i,j}\, s_{j,t} \Big). \qquad (6b)$

In (6a), we have emphasized that the distribution of $x_t$ depends on the history $x_{t-\tau:t-1}$ only through the filtered traces $\overrightarrow{s}^{\,k}_{j,t}$ with $j \in \mathcal{V}$ and $k = 1, \ldots, K$.
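The filtered traces (5) amount to a causal finite-impulse-response filter applied to the sufficient statistics. A minimal sketch for the binary case, reusing the hypothetical raised_cosine_basis helper above:

```python
import numpy as np

def filtered_traces(s, basis):
    """Filtered traces: trace[k, t] = sum_{d=1}^{tau} b_k(d) * s[t - d].

    s     : array of shape (T,) with the (scalar) sufficient statistics
            of one unit over time, i.e., the binary SNN case.
    basis : array of shape (K, tau), e.g., from raised_cosine_basis.
    """
    T = s.shape[0]
    K, tau = basis.shape
    traces = np.zeros((K, T))
    for t in range(T):
        # Only lags that reach back into the observed window contribute;
        # earlier values are the fixed initial conditions, taken as zero.
        for d in range(1, min(t, tau) + 1):
            traces[:, t] += basis[:, d - 1] * s[t - d]
    return traces
```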

Furthermore, the set of filtered traces from all units $j \in \mathcal{P}_i$ determines the generalized membrane potential of unit $i$ at time $t$

$u_{i,t} = \theta_i + \sum_{j \in \mathcal{P}_i} \sum_{k=1}^{K} \big(W^k_{j,i}\big)^\top \overrightarrow{s}^{\,k}_{j,t}. \qquad (7)$

Then, we can rewrite the conditional joint distribution (4) as

$p(x_t \mid x_{t-\tau:t-1}, \theta) = p(x_t \mid u_t, \theta) \qquad (8a)$
$\qquad \propto \exp\Big( \sum_{i \in \mathcal{V}} u_{i,t}^\top s_{i,t} + \sum_{\{i,j\} \in \mathcal{E}_l} s_{i,t}^\top \Theta_{i,j}\, s_{j,t} \Big), \qquad (8b)$
where $u_t = (u_{i,t})_{i \in \mathcal{V}}$.

In (8a), we have emphasized that the distribution of $x_t$ depends on the history only through the membrane potentials $u_t$.
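Continuing the running sketch, the generalized membrane potential (7) of a binary unit can be assembled from the traces of its causal parents as follows; the dictionary-based interface is our own illustrative choice, not the paper's.

```python
import numpy as np

def membrane_potential(theta_i, w_i, traces, parents, T):
    """Generalized membrane potential u_{i,t} of a binary unit i for
    t = 0..T-1, as in (7); in the binary case every per-unit quantity
    is a scalar, so the K weights per synapse form a vector.

    theta_i : scalar natural parameter (bias) of unit i.
    w_i     : dict mapping parent j -> weight vector w^k_{j,i}, shape (K,).
    traces  : dict mapping parent j -> filtered traces, shape (K, T).
    parents : the set P_i of causal parents of unit i.
    """
    u = np.full(T, float(theta_i))
    for j in parents:
        u += w_i[j] @ traces[j]  # adds sum_k w^k_{j,i} * trace^k_{j,t}
    return u
```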

III Learning

In this section, we tackle the problem of learning the model parameters $\theta$ for fixed basis functions under the assumption of fully observed time series. As we will see in Sec. IV, this set-up is, for instance, relevant to the training of two-layer SNNs, and generalizations thereof, in supervised learning problems.

III-A Maximum Likelihood Learning via Gradient Descent

We first aim at deriving gradient-based rules for ML training. To this end, we start by writing the log-likelihood of a given single time series $x_{1:T}$ as

$\log p(x_{1:T} \mid \theta) = \sum_{t=1}^{T} \Big( \sum_{i \in \mathcal{V}} u_{i,t}^\top s_{i,t} + \sum_{\{i,j\} \in \mathcal{E}_l} s_{i,t}^\top \Theta_{i,j}\, s_{j,t} - A_t(\theta) \Big), \qquad (9)$
where $A_t(\theta)$ denotes the log-partition function that normalizes (8b).

The gradients of the log-likelihood function with respect to the components of $\theta$ are obtained as

$\nabla_{\theta_i} \log p(x_{1:T} \mid \theta) = \sum_{t=1}^{T} \big( s_{i,t} - \mathbb{E}[s_{i,t}] \big), \qquad (10a)$
$\nabla_{W^k_{j,i}} \log p(x_{1:T} \mid \theta) = \sum_{t=1}^{T} \overrightarrow{s}^{\,k}_{j,t} \big( s_{i,t} - \mathbb{E}[s_{i,t}] \big)^\top, \qquad (10b)$
$\nabla_{\Theta_{i,j}} \log p(x_{1:T} \mid \theta) = \sum_{t=1}^{T} \big( s_{i,t} s_{j,t}^\top - \mathbb{E}[s_{i,t} s_{j,t}^\top] \big), \qquad (10c)$
where $\mathbb{E}[\cdot]$ denotes the expectation under the model distribution (8) at time $t$.

The gradients (10) have the structure, typical for exponential family models, of a difference between the empirical average of the relevant sufficient statistics given the actual realization of the time series and the corresponding ensemble average under the model distribution (8). In detail, each gradient in (10) has two components: (i) the positive component points in a direction of the natural parameter space that maximizes the unnormalized log-distribution, or energy, i.e., the logarithm of (8b), for the given data $x_{1:T}$, thus maximizing the fitness of the model to the observed data; and (ii) the negative component points in a direction that minimizes the partition function, i.e., the corresponding normalization term, hence minimizing the fitness of the model to the unobserved configurations (see, e.g., [23]).
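To make this positive/negative decomposition concrete, the sketch below evaluates the gradients (10a)-(10b) for a single binary unit without lateral connections, the case in which the model expectation is available in closed form as a sigmoid of the membrane potential; all names are ours.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ml_gradients_no_lateral(x_i, u_i, traces, parents):
    """Gradients (10a)-(10b) for one binary unit i with no lateral
    connections, so that E[s_{i,t}] = sigmoid(u_{i,t}) in closed form.
    Each gradient is (empirical - expected) sufficient statistics,
    optionally weighted by a filtered trace.
    """
    err = x_i - sigmoid(u_i)              # shape (T,): s_{i,t} - E[s_{i,t}]
    grad_theta = err.sum()                # gradient w.r.t. theta_i
    grad_w = {j: traces[j] @ err for j in parents}  # each of shape (K,)
    return grad_theta, grad_w
```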

Input: Causal graph $\mathcal{G}_c$, lateral graph $\mathcal{G}_l$, training set $\mathcal{D}$ of time series, and learning rate $\eta$
Output: Learned model parameters $\theta$
1   initialize parameters $\theta$
2 repeat
3       draw an example $x_{1:T}$ from the training set $\mathcal{D}$
4       for each unit $i \in \mathcal{V}$ do compute the gradients in (10) based on $x_{1:T}$ and $\theta$ (see Fig. 3);
5       update model parameters $\theta \leftarrow \theta + \eta \nabla_\theta \log p(x_{1:T} \mid \theta)$
6 until convergence of $\theta$;
Algorithm 1 Maximum Likelihood Learning via Gradient Descent

A stochastic gradient-based optimizer iteratively updates the parameters $\theta$ based on the rule summarized in Algorithm 1, where the learning rate $\eta$ is assumed to be fixed here for simplicity of notation. With this rule, each unit $i$ updates its own local parameters, namely $\theta_i$, the weights $W^k_{j,i}$ for $j \in \mathcal{P}_i$, and the lateral matrices $\Theta_{i,j}$ for $j \in \mathcal{L}_i$. We note that the intersections of the sets of local parameters are non-empty in the presence of lateral connections among units, and the lateral parameters can be updated in a consistent manner at all the units connected by edges in $\mathcal{E}_l$. Details of the distributed computation of the gradients are provided next (see also Fig. 3).
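A toy end-to-end run of Algorithm 1, restricted for readability to one binary output unit with a single causal parent and no lateral connections, could then look as follows; it reuses the helper sketches above, and random data stands in for a real training set.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, tau, eta = 100, 3, 10, 0.05
basis = raised_cosine_basis(K, tau)
dataset = [(rng.integers(0, 2, T), rng.integers(0, 2, T))]  # (x_j, x_i) pairs

theta_i, w_ji = 0.0, np.zeros(K)
for epoch in range(200):
    x_j, x_i = dataset[rng.integers(len(dataset))]    # draw an example
    tr = filtered_traces(x_j.astype(float), basis)    # traces of parent j
    u = membrane_potential(theta_i, {"j": w_ji}, {"j": tr}, ["j"], T)
    g_theta, g_w = ml_gradients_no_lateral(x_i, u, {"j": tr}, ["j"])
    theta_i += eta * g_theta                          # ascend log-likelihood
    w_ji += eta * g_w["j"]
```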

Distributed computation of the gradients. We now discuss the information required to compute the gradients (10) at each unit $i$ in order to enable a distributed implementation of Algorithm 1. We define as local to neuron $i$ all quantities that concern units that either have a directed edge to neuron $i$ in $\mathcal{G}_c$ or share an undirected edge with it in $\mathcal{G}_l$. The positive components of all the gradients (10) require the local sufficient statistics $s_{i,t}$ and the local filtered traces $\overrightarrow{s}^{\,k}_{j,t}$ for all parents $j \in \mathcal{P}_i$. Furthermore, they also require the current local sufficient statistics $s_{j,t}$ for all units $j \in \mathcal{L}_i$ connected by a lateral link. In contrast, the negative components entail the computation of the averages of the local sufficient statistics $s_{i,t}$ and of the products $s_{i,t} s_{j,t}^\top$ for $j \in \mathcal{L}_i$ over their marginal distributions, which is further discussed next.

The marginal distribution of the local sufficient statistics $s_{i,t}$ is obtained from

$p(x_{i,t} \mid u_t, \theta) = \sum_{x_{\mathcal{C}_i,t}} p\big(x_{i,t}, x_{\mathcal{C}_i,t} \,\big|\, u_{\mathcal{C}_i \cup \{i\},t}, \theta\big), \qquad (11)$

where we recall that $\mathcal{C}_i$ is the subset of units that are reachable from unit $i$ via lateral connections. As indicated by the notation in (11), the joint distribution on the right-hand side can be obtained from (8b) by including in the sums at the exponent only the terms that depend on $x_{i,t}$ and $x_{\mathcal{C}_i,t}$. The calculation of (11) hence requires knowledge of the membrane potentials $u_{j,t}$ of the units $j \in \mathcal{C}_i \cup \{i\}$, and it depends on the local parameters and lateral parameters of all units in this set. The marginal distribution of the products $s_{i,t} s_{j,t}^\top$ for $j \in \mathcal{L}_i$ is similarly computed as

$p(x_{i,t}, x_{j,t} \mid u_t, \theta) = \sum_{x_{\mathcal{C}_i \setminus \{j\},t}} p\big(x_{i,t}, x_{\mathcal{C}_i,t} \,\big|\, u_{\mathcal{C}_i \cup \{i\},t}, \theta\big). \qquad (12)$
Fig. 3: Illustration of the information required to compute the gradients (10) for unit $i$: (a) directed graph $\mathcal{G}_c$; (b) undirected graph $\mathcal{G}_l$; (c) local and (d) non-local information necessary for learning.

In summary, as illustrated in Fig. 3, the computation of the gradients (10) at unit $i$ requires knowledge of the local sufficient statistics $s_{i,t}$ and $s_{j,t}$ for all $j \in \mathcal{L}_i$, of the local filtered traces $\overrightarrow{s}^{\,k}_{j,t}$ for all $j \in \mathcal{P}_i$ and $k = 1, \ldots, K$, as well as of the local membrane potential $u_{i,t}$. It also requires the acquisition of the membrane potentials $u_{j,t}$ and of the current lateral parameters of all units $j$ in the set $\mathcal{C}_i$. This calls for non-local signaling whenever there are units in $\mathcal{C}_i$ that are not in the set $\mathcal{P}_i \cup \mathcal{L}_i$.

The complexity of computing the second terms in (10) grows exponentially with the size of the set $\mathcal{C}_i$. In particular, if there are no lateral connections, i.e., if $\mathcal{E}_l$ is empty, the marginalizations (11)-(12) are not required, since the set $\mathcal{C}_i$ of every unit is empty, and the second terms can be obtained in closed form. When the sets $\mathcal{C}_i$ are too large to enable the computation of the sums in (11)-(12), Gibbs sampling, or approximations such as Contrastive Divergence, can be used to estimate the required expectations [24].
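For small lateral components, the sums in (11)-(12) can be evaluated by brute force. The sketch below computes the exact marginals $\mathbb{E}[s_{i,t}]$ for one connected component of binary units at a given time step; for larger components, one would replace the enumeration with Gibbs sampling as noted above.

```python
import itertools
import numpy as np

def lateral_marginals(u_t, Theta):
    """Exact marginals E[s_{i,t}] for one lateral connected component
    of binary units, by enumeration as in (11); feasible only when the
    component is small (roughly n <= 20).

    u_t   : array (n,) of membrane potentials of the n units at time t.
    Theta : symmetric (n, n) array of lateral parameters, zero diagonal;
            the factor 0.5 below compensates for counting each edge
            twice in the symmetric quadratic form.
    """
    n = len(u_t)
    probs = np.zeros(n)
    Z = 0.0
    for cfg in itertools.product([0, 1], repeat=n):
        s = np.asarray(cfg, dtype=float)
        p = np.exp(u_t @ s + 0.5 * s @ Theta @ s)  # unnormalized (8b)
        Z += p
        probs += p * s
    return probs / Z   # probs[i] = E[s_{i,t}]
```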

III-B Bayesian Learning

As a potentially more powerful alternative to ML learning, Bayesian methods aim at computing the posterior distribution $p(\theta \mid \mathcal{D})$ of the parameter vector under the assumption of a given prior $p(\theta)$. Bayesian learning can capture parameter uncertainty and is able to naturally avoid overfitting. While exact Bayesian learning is prohibitively complex, a gradient-based method has recently been proposed that combines Robbins-Monro type stochastic approximation methods [25] with Langevin dynamics [14]. The method, called stochastic gradient Langevin dynamics (SGLD) [15, 16, 17, 18], updates the parameters based on the rule summarized in Algorithm 2, where $\eta$ is the learning rate and $\xi$ denotes i.i.d. Gaussian noise with zero mean and variance equal to the learning rate. The update rule in Algorithm 2 has the property that the distribution of the parameter vector at equilibrium matches the Bayesian posterior $p(\theta \mid \mathcal{D})$ as $\eta \rightarrow 0$.

Input: Causal graph $\mathcal{G}_c$, lateral graph $\mathcal{G}_l$, training set $\mathcal{D}$ of time series, prior $p(\theta)$ of the parameters, and learning rate $\eta$
Output: Learned model parameters $\theta$
1   initialize parameters $\theta$
2 repeat
3       draw an example $x_{1:T}$ from the training set $\mathcal{D}$
4       for each unit $i \in \mathcal{V}$ do compute the gradients in (10) based on $x_{1:T}$ and $\theta$ (as in Algorithm 1);
5       update model parameters $\theta \leftarrow \theta + \frac{\eta}{2} \big( \nabla_\theta \log p(x_{1:T} \mid \theta) + \nabla_\theta \log p(\theta) \big) + \xi$, with $\xi \sim \mathcal{N}(0, \eta I)$
6 until convergence of $\theta$;
Algorithm 2 Bayesian Learning
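A single parameter update of Algorithm 2 can be written compactly as below; we follow the Welling-Teh convention [15] of scaling the gradient by $\eta/2$ and injecting noise of variance $\eta$, which is one of several equivalent scalings in the literature.

```python
import numpy as np

def sgld_step(param, grad_loglik, grad_logprior, eta, rng):
    """One stochastic gradient Langevin dynamics update (Algorithm 2):
    a half-step of gradient ascent on log-likelihood plus log-prior,
    plus Gaussian noise whose variance equals the learning rate eta.
    """
    noise = rng.normal(0.0, np.sqrt(eta), size=np.shape(param))
    return param + 0.5 * eta * (grad_loglik + grad_logprior) + noise
```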

IV Results and Discussion

In this section, we consider an application of the methods developed in this paper to a multi-task supervised learning problem based on a two-layer SNN. The problem consists of the two tasks of classifying a handwritten digit and of detecting a possible rotation of the image of the digit (see Fig. 4). As illustrated in Fig. 4, the set of neurons consists of two layers: the spike trains associated with the neurons in the first layer are determined by the input image, while the spike trains corresponding to the second layer are determined by the correct labels for the digit and orientation of the image. Specifically, as seen in Fig. 4, we have one input neuron for every pixel of the image. Furthermore, the output layer contains two subsets of neurons: one subset with one neuron for every possible digit index, and another subset with one neuron for each possible orientation, i.e., vertical (unrotated) or rotated. The two-layer SNN can be represented as a dynamic exponential family in which there are causal connections only from input neurons to output neurons, with self-loops at the output neurons, as well as lateral connections between output neurons in the same subset in order to capture the correlations of the labels within the same task.

Fig. 4: Hybrid directed and undirected graphical architecture of a two-layer SNN used for supervised learning: each input neuron corresponds to a pixel of the input image; each output neuron in the first subset encodes a digit index; and each output neuron in the second subset encodes an orientation index. The directed graph has causal connections from input neurons to output neurons and self-loops at the output neurons; and the undirected graph has lateral connections only between output neurons in the same subset.

The training dataset is generated by selecting images of two digit classes from the USPS handwritten digit dataset [26]. All images are included as they are, that is, unrotated, and they are also included upon a rotation by a randomly selected angle. The test set is similarly obtained by using examples from the USPS dataset. As a result, we have one input neuron per pixel of the input image. Each gray-scale pixel is converted to an input spike train by generating an i.i.d. Bernoulli vector of samples with spike probability proportional to the pixel intensity and clipped to fixed minimum and maximum values. The digit labels are encoded by the first subset of two output neurons, so that the desired output of the neuron corresponding to the correct label is one spike emission every three non-spike samples, while the other neuron is silent. The same is done for the orientation labels, using the second subset of two output neurons, where one label indicates the original unrotated image and the other indicates the rotated image. In order to measure the classification accuracy, we assume standard rate decoding, which selects the output neuron, and hence the corresponding label, with the largest number of output spikes. Finally, in order to investigate the impact of lateral connections in our model, we also consider a two-layer SNN in which lateral connections are not allowed.
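The rate encoding and decoding just described admit a direct implementation; in the sketch below, the clipping bounds are placeholders, since the values used in the experiments are not recoverable from the text, and pixel intensities are assumed normalized to [0, 1].

```python
import numpy as np

def encode_image(pixels, T, p_min=0.05, p_max=0.95, rng=None):
    """Rate encoding: one i.i.d. Bernoulli spike train per gray-scale
    pixel, with spike probability proportional to intensity (assumed
    in [0, 1]) and clipped to [p_min, p_max] (illustrative bounds).
    Returns an array of shape (num_pixels, T) of 0/1 spikes.
    """
    rng = rng or np.random.default_rng()
    p = np.clip(np.asarray(pixels, dtype=float), p_min, p_max)
    return (rng.random((len(p), T)) < p[:, None]).astype(np.int8)

def rate_decode(output_spikes):
    """Rate decoding: pick the output neuron with the most spikes.
    output_spikes has shape (num_output_neurons, T)."""
    return int(np.argmax(output_spikes.sum(axis=1)))
```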

Fig. 5: ML learning: test accuracy for the two tasks of digit and orientation classification versus the duration $T$ of the time series.

First, we train both models with the ML learning rule in Algorithm 1 for a fixed number of epochs with a constant learning rate, based on multiple trials with different random seeds. The causal model parameters are randomly initialized from a uniform distribution, as are the lateral parameters. We adopt in (3) the raised cosine basis functions introduced in [20, 9]. Fig. 5 shows the classification accuracy on the test set versus the length $T$ of the time series. From the figure, the task of classifying the orientation is seen to be easier than digit identification. Furthermore, we observe better accuracies in both tasks when we train the model with lateral connections between output neurons. This implies that Algorithm 1 can efficiently learn how to make use of the instantaneous correlations among output neurons participating in the same task.

Fig. 6: Bayesian learning with different priors: (a)-(b) with a uniform prior and (c)-(d) with a bimodal prior; figures (a) and (c) show the log-likelihood on the training and test sets as a function of the learning epochs, where the shaded area indicates three standard deviations around the mean over multiple trials; and figures (b) and (d) plot the evolution of randomly selected causal parameters throughout the epochs (left) and their histogram at the end of the last learning epoch (right).

We then consider the Bayesian learning rule in Algorithm 2 and study the impact of the prior for the model parameters on overfitting. To this end, we select a small number of samples of each digit and orientation class for training and for testing. After training the model for a number of epochs with a constant learning rate, we evaluate the learned model by measuring the log-likelihood of the desired output spikes for the correct labels given the input images on both the training and test sets. For the experiment in Fig. 6(a)-(b), a uniform prior over the causal parameters is assumed, while for the results in Fig. 6(c)-(d) we choose as prior a mixture of two Gaussians with distinct means and a common standard deviation, following the approach in [18]. Fig. 6 shows that the uniform prior can lead to overfitting, as also suggested by the larger variance of the sampled weights plotted in Fig. 6(b). In contrast, the bimodal prior yields good generalization performance owing to its capacity to act as a regularizer by keeping a fraction of the weights close to zero, as seen in Fig. 6(d).
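For completeness, the gradient of the log-density of such a bimodal Gaussian mixture prior, needed by the SGLD update of Algorithm 2, can be computed as below; the means, mixture weights, and standard deviation are placeholder values, not those of the experiment.

```python
import numpy as np

def grad_log_bimodal_prior(w, mu=(0.0, 1.0), sigma=0.25, pi=(0.5, 0.5)):
    """Gradient of log p(w) for a two-component Gaussian mixture prior,
    p(w) = pi_1 N(w; mu_1, sigma^2) + pi_2 N(w; mu_2, sigma^2).
    The gradient is a responsibility-weighted pull toward each mean;
    the Gaussian normalizing constant cancels in the responsibilities.
    """
    w = np.asarray(w, dtype=float)
    r = [p * np.exp(-0.5 * ((w - m) / sigma) ** 2) for p, m in zip(pi, mu)]
    total = r[0] + r[1]
    return (r[0] * (mu[0] - w) + r[1] * (mu[1] - w)) / (sigma ** 2 * total)
```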

References

  1. M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain et al., “Loihi: A neuromorphic manycore processor with on-chip learning,” IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.
  2. A. Diamond, T. Nowotny, and M. Schmuker, “Comparing neuromorphic solutions in action: implementing a bio-inspired solution to a benchmark classification task on three parallel-computing platforms,” Frontiers in Neuroscience, vol. 9, p. 491, 2016.
  3. C. D. Schuman, T. E. Potok, R. M. Patton, J. D. Birdwell, M. E. Dean, G. S. Rose, and J. S. Plank, “A survey of neuromorphic computing and neural networks in hardware,” arXiv preprint arXiv:1705.06963, 2017.
  4. J. H. Lee, T. Delbruck, and M. Pfeiffer, “Training deep spiking neural networks using backpropagation,” Frontiers in Neuroscience, vol. 10, p. 508, 2016.
  5. Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi, “Spatio-temporal backpropagation for training high-performance spiking neural networks,” Frontiers in Neuroscience, vol. 12, 2018.
  6. S. M. Bohte, J. N. Kok, and J. A. La Poutré, “Spikeprop: backpropagation for networks of spiking neurons.” in ESANN, 2000, pp. 419–424.
  7. J. W. Pillow, L. Paninski, V. J. Uzzell, E. P. Simoncelli, and E. Chichilnisky, “Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model,” Journal of Neuroscience, vol. 25, no. 47, pp. 11003–11013, 2005.
  8. D. Koller, N. Friedman, and F. Bach, Probabilistic graphical models: principles and techniques.   MIT press, 2009.
  9. A. Bagheri, O. Simeone, and B. Rajendran, “Training probabilistic spiking neural networks with first-to-spike decoding,” IEEE ICASSP, 2018.
  10. B. Gardner and A. Grüning, “Supervised learning in spiking neural networks for precise temporal encoding,” PLoS ONE, vol. 11, no. 8, p. e0161335, 2016.
  11. J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner, “Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning,” Neural Computation, vol. 18, no. 6, pp. 1318–1348, 2006.
  12. D. J. Rezende and W. Gerstner, “Stochastic variational learning in recurrent spiking networks,” Frontiers in Computational Neuroscience, vol. 8, p. 38, 2014.
  13. G. E. Hinton and A. D. Brown, “Spiking boltzmann machines,” in Advances in Neural Information Processing Systems, 2000, pp. 122–128.
  14. R. M. Neal et al., “MCMC using Hamiltonian dynamics,” Handbook of Markov Chain Monte Carlo, vol. 2, no. 11, p. 2, 2011.
  15. M. Welling and Y. W. Teh, “Bayesian learning via stochastic gradient Langevin dynamics,” in International Conference on Machine Learning, 2011, pp. 681–688.
  16. S. Patterson and Y. W. Teh, “Stochastic gradient Riemannian Langevin dynamics on the probability simplex,” in Advances in Neural Information Processing Systems, 2013, pp. 3102–3110.
  17. I. Sato and H. Nakagawa, “Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process,” in International Conference on Machine Learning, 2014, pp. 982–990.
  18. D. Kappel, S. Habenschuss, R. Legenstein, and W. Maass, “Network plasticity as Bayesian inference,” PLoS Computational Biology, vol. 11, no. 11, p. e1004485, 2015.
  19. M. Welling, M. Rosen-Zvi, and G. E. Hinton, “Exponential family harmoniums with an application to information retrieval,” in Advances in Neural Information Processing Systems, 2005, pp. 1481–1488.
  20. J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, “Spatio-temporal correlations and visual signalling in a complete neuronal population,” Nature, vol. 454, no. 7207, p. 995, 2008.
  21. P. Baldi and A. F. Atiya, “How delays affect neural dynamics and learning,” IEEE Transactions on Neural Networks, vol. 5, no. 4, pp. 612–621, 1994.
  22. A. Taherkhani, A. Belatreche, Y. Li, and L. P. Maguire, “Dl-resume: a delay learning-based remote supervised method for spiking neurons,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 12, pp. 3137–3149, 2015.
  23. O. Simeone et al., “A brief introduction to machine learning for engineers,” Foundations and Trends® in Signal Processing, vol. 12, no. 3-4, pp. 200–431, 2018.
  24. G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Computation, vol. 14, no. 8, pp. 1771–1800, 2002.
  25. H. Robbins and S. Monro, “A stochastic approximation method,” The Annals of Mathematical Statistics, vol. 22, no. 3, pp. 400–407, 1951.
  26. J. J. Hull, “A database for handwritten text recognition research,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 5, pp. 550–554, 1994.