Brain-Inspired Hardware for Artificial Intelligence: Accelerated Learning in a Physical-Model Spiking Neural Network

Timo Wunderlich, Akos F. Kungl, Eric Müller, Johannes Schemmel and Mihai A. Petrovici
Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
Abstract

Future developments in artificial intelligence will profit from the existence of novel, non-traditional substrates for brain-inspired computing. Neuromorphic computers aim to provide such a substrate that reproduces the brain’s capabilities in terms of adaptive, low-power information processing. We present results from a prototype chip of the BrainScaleS-2 mixed-signal neuromorphic system that adopts a physical-model approach with a 1000-fold acceleration of spiking neural network dynamics relative to biological real time. Using the embedded plasticity processor, we both simulate the Pong arcade video game and implement a local plasticity rule that enables reinforcement learning, allowing the on-chip neural network to learn to play the game. The experiment demonstrates key aspects of the employed approach, such as accelerated and flexible learning, high energy efficiency and resilience to noise.

\addbibresource{references.bib}


  • This article has been published as part of the Lecture Notes in Computer Science book series (LNCS, volume 11727). The final authenticated version is available online at https://doi.org/10.1007/978-3-030-30487-4_10.
    Please cite it as: Wunderlich et al., 2019. Brain-Inspired Hardware for Artificial Intelligence: Accelerated Learning in a Physical-Model Spiking Neural Network. In: Tetko I., Kůrková V., Karpov P., Theis F. (eds) Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation. ICANN 2019. Lecture Notes in Computer Science, vol 11727. Springer, Cham. doi: 10.1007/978-3-030-30487-4_10.

1 Introduction

Many breakthrough advances in artificial intelligence incorporate methods and algorithms that are inspired by the brain. For instance, the artificial neural networks employed in deep learning are inspired by the architecture of biological neural networks \citep{Hassabis2017Neuroscience-InspiredIntelligence}. Very often, however, these brain-inspired algorithms are run on classical von Neumann devices that instantiate a computational architecture remarkably different from that of the brain. It is therefore a widely held view that the future of artificial intelligence will depend critically on the deployment of novel computational substrates \citep{Dean2018ARevolution}. Neuromorphic computers represent an attempt to move beyond brain-inspired software by building hardware that structurally and functionally mimics the brain \citep{Schuman2017AHardware}.

In this work, we use a prototype of the BrainScaleS-2 (BSS2) neuromorphic system \citep{Friedmann2017DemonstratingSystem}. The employed physical-model approach enables the accelerated (1000-fold with respect to biology) and energy-efficient emulation of spiking neural networks (SNNs). Beyond SNN emulation, the system contains an embedded plasticity processing unit (PPU) that provides facilities for the flexible implementation of learning rules. Our prototype chip (see Fig. 1 A) contains 32 physical-model neurons with 32 synapses each, totalling 1024 synapses. The neurons are an electronic circuit implementation of the leaky integrate-and-fire (LIF) neuron model. We use this prototype to demonstrate key advantages of the employed approach, such as the 1000-fold speed-up of neuronal dynamics, on-chip learning, high energy efficiency and robustness to noise in a closed-loop reinforcement learning experiment.
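For reference, the LIF dynamics that each analog neuron circuit emulates can be summarized by the following minimal Python sketch, which integrates the membrane equation with explicit Euler steps; all parameter values are hypothetical and chosen for illustration only, since the chip realizes these dynamics directly in its circuits rather than numerically.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau_m=10e-3, v_rest=-65e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    # Explicit Euler integration of the LIF membrane equation; the chip
    # instead realizes these dynamics in continuous-time analog circuits.
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt / tau_m * (v_rest - v + r_m * i_in)
        if v >= v_thresh:              # threshold crossing triggers a spike
            spike_times.append(step * dt)
            v = v_reset                # membrane potential is reset
    return spike_times

# Example: a constant suprathreshold current of 2 nA for 100 ms.
print(simulate_lif(np.full(1000, 2e-9)))
```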

2 Experiment

Figure 1: A: Prototype chip and evaluation board. B: Schematic of the experimental setup. The chip runs the experiment fully autonomously, with the PPU simulating the virtual environment (Pong) and calculating reward-based weight updates that are applied to the on-chip analog neural network. The neural network receives the discretized ball position as input and outputs the target paddle position. B taken from [Wunderlich2019DemonstratingStudy].

The experiment represents the first demonstration of on-chip closed-loop learning in an accelerated physical-model neuromorphic system \citep{Wunderlich2019DemonstratingStudy}. It takes place on the chip fully autonomously, with external communication only required for initial configuration (see Fig. 1 B). We use the embedded plasticity processor both to simulate a simplified version of the Pong video game (the opponent is a solid wall) and to implement a reward-modulated spike-timing-dependent plasticity (R-STDP) learning rule \citep{Fremaux2015NeuromodulatedRules} of the form $\Delta w_{ij} \propto (R - \bar{R}) \, e_{ij}$, where $R$ is the reward, $\bar{R}$ is a running average of the reward and $e_{ij}$ is an STDP-like eligibility trace. Each synapse locally records the STDP-like eligibility trace and stores it as an analog value (a voltage), to be digitized and used by the plasticity processor \citep{Friedmann2017DemonstratingSystem}.
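To make the update rule concrete, the following Python sketch applies it to the full 32×32 synapse matrix; the learning rate, the decay factor of the running reward average and the random stand-in for the eligibility traces are illustrative assumptions, not values or mechanisms taken from the chip.

```python
import numpy as np

eta = 0.05    # learning rate (hypothetical value)
gamma = 0.9   # decay of the running reward average (hypothetical value)

def r_stdp_update(w, eligibility, reward, reward_avg):
    # Apply Delta w_ij = eta * (R - R_bar) * e_ij to all synapses at once.
    w = w + eta * (reward - reward_avg) * eligibility
    reward_avg = gamma * reward_avg + (1.0 - gamma) * reward
    return w, reward_avg

w = np.zeros((32, 32))        # 32 neurons with 32 synapses each
e = np.random.rand(32, 32)    # stand-in for the digitized eligibility traces
w, r_bar = r_stdp_update(w, e, reward=0.8, reward_avg=0.5)
```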

The two-layer neural network receives the discretized ball position along one axis as input and encodes the target paddle position in its neuronal firing rates; the paddle moves towards this target with constant velocity. The network receives a reward depending on its aiming accuracy (i.e., how close it aims the paddle to the center of the ball), with $R = 1$ for perfect aiming, $R = 0$ for not aiming under the ball and graded steps in between. By correlating reward and synaptic activity via the given learning rule, the SNN on the chip learns to trace the ball with high fidelity.
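A minimal Python sketch of this state-action-reward loop is given below, assuming a winner-take-all readout of the output firing rates and a linear grading of the reward; the catch window (max_offset) and all names are hypothetical stand-ins for the on-chip implementation.

```python
import numpy as np

def compute_reward(target_unit, ball_unit, max_offset=3):
    # Graded reward: 1 for perfect aiming, 0 when the paddle would not be
    # under the ball; the linear grading and max_offset are assumptions.
    distance = abs(target_unit - ball_unit)
    return max(0.0, 1.0 - distance / max_offset)

def closed_loop_step(firing_rates, ball_unit):
    # The output unit with the highest firing rate dictates the paddle target.
    target_unit = int(np.argmax(firing_rates))
    return target_unit, compute_reward(target_unit, ball_unit)

rates = np.random.rand(32)    # stand-in for measured output firing rates
print(closed_loop_step(rates, ball_unit=12))
```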

3 Results

Figure 2: Screenshot of experiment interface for live demonstration. Left: Playing field and color-coded neuronal firing rates. Middle: Color-coded synaptic weight matrix. Right: Performance in playing Pong.

A screen recording of a live demonstration of the experiment (see Fig. 2) is available at https://www.youtube.com/watch?v=LW0Y5SSIQU4. The recording allows the viewer to follow the game dynamics, neuronal firing rates, synaptic weight dynamics and learning progress. The learning progress is quantified as the fraction of ball positions for which the paddle is able to catch the ball. The learned weight matrix is diagonally dominant, reflecting a correct mapping of states (ball positions) to actions (target paddle positions) in the reinforcement learning paradigm.
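As a rough illustration of how a diagonally dominant weight matrix translates into catching performance, the sketch below derives a paddle target from the strongest synapse in each input row and computes the fraction of caught balls; the argmax readout and the catch radius are simplifying assumptions, not the metric's exact on-chip definition.

```python
import numpy as np

def catch_fraction(weight_matrix, catch_radius=1):
    # Strongest synapse per input row as a proxy for the paddle target.
    targets = weight_matrix.argmax(axis=1)
    balls = np.arange(weight_matrix.shape[0])
    return float(np.mean(np.abs(targets - balls) <= catch_radius))

print(catch_fraction(np.eye(32)))  # a perfectly diagonal matrix yields 1.0
```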

Importantly, neuronal firing rates vary from trial to trial due to noise in the analog chip components. Used appropriately, this can become an asset rather than a nuisance: in our reinforcement learning scenario, such variability endows the neural network with the ability to explore the action space and thereby with a necessary prerequisite for trial-and-error learning. We also found that neuronal parameter variability due to fixed-pattern noise, which is inevitable in analog neuromorphic hardware, is implicitly compensated by the chosen learning paradigm, leading to a correlation of learned weights and neuronal properties \citep{Wunderlich2019DemonstratingStudy}.

Figure 3: Comparison of experiment durations (w/o: without; w/: with). Adapted from [Wunderlich2019DemonstratingStudy].

The accelerated nature of our substrate represents a key advantage. We found that a software simulation (NEST v2.14.0) on an Intel processor (i7-4771), even when considering only the numerical state propagation, is at least an order of magnitude slower than our neuromorphic emulation (see Fig. 3 and [Wunderlich2019DemonstratingStudy]). Beyond this, the emulation on our prototype consumes at least 1000 times less energy per experiment iteration than the software simulation. This demonstrates the considerable benefit of using the BSS2 platform for emulating spiking networks and hints at its decisive advantages when scaling the emulated networks to larger sizes.

4 Conclusions

These experiments demonstrate, for the first time, functional on-chip closed-loop learning on an accelerated physical-model neuromorphic system. The employed approach has the potential both to enable researchers to investigate learning processes with a 1000-fold speed-up and to provide novel, energy-efficient and fast solutions for brain-inspired edge computing. While digital neuromorphic solutions and supercomputers generally achieve at most real-time simulation speed for large-scale neural networks \citep{Jordan2018ExtremelyComputers,vanAlbada2018PerformanceModel,Mikaitis2018NeuromodulatedSystem}, the speed-up of BrainScaleS-2 is independent of network size and will become a critical asset in future work on the full-scale BrainScaleS-2 system.

\printbibliography
