ANC Seminar - Thomas Nowotny
Tuesday, 22nd November 2022
Training spiking neural networks with gradient descent on exact gradients - strengths and weaknesses
In a recent paper (Event-based backpropagation can compute exact gradients for spiking neural networks. Scientific Reports, 11(1):12829, 2021), Wunderlich and Pehle introduced the EventProp algorithm, which enables learning by gradient descent on exact gradients in spiking neural networks. In this talk I will discuss extensions of EventProp to a wider class of loss functions and an implementation in the GPU-enhanced Neuronal Networks (GeNN) framework (https://github.com/genn-team/genn). The GPU acceleration allows us to test EventProp extensively on a number of increasingly challenging learning benchmarks. We find that EventProp performs well on some tasks, but on others learning is slow or fails entirely. We have analysed these failures in detail and found that they stem from the nature of the exact gradient of the employed loss functions. In particular, the exact gradient carries no information about loss changes caused by weight changes that add or remove spikes; it only reflects changes in spike times. Depending on the details of the task and the loss function, descending the exact gradient with EventProp can therefore delete important spikes, inadvertently increasing the loss and decreasing classification accuracy. This can lead to a complete failure to learn. Problems of a similar flavour are well known and are usually addressed by regularisation. I will demonstrate, however, that in some situations these remedies do not work, and the loss function itself needs to be changed.
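The core issue above can be illustrated with a toy simulation (my own minimal sketch, not the EventProp or GeNN implementation; all parameter values are illustrative): a leaky integrate-and-fire neuron driven by a single input spike through an alpha-shaped synaptic current. The spike time varies smoothly with the input weight, but the spike count jumps discontinuously from zero to one at a critical weight, and an exact spike-time gradient carries no information about that jump.

```python
import math

def first_spike_time(w, tau=5.0, theta=1.0, dt=0.01, t_max=50.0):
    """Euler-integrate tau * dV/dt = -V + I(t) for a leaky
    integrate-and-fire neuron, where I(t) is an alpha-shaped synaptic
    current of amplitude w triggered by a single input spike at t = 0.
    Return the first threshold-crossing time, or None if no spike occurs.
    (Illustrative parameters; not from the paper.)"""
    v, t = 0.0, 0.0
    while t < t_max:
        i_syn = w * (t / tau) * math.exp(1.0 - t / tau)  # alpha kernel, peak w at t = tau
        v += dt * (-v + i_syn) / tau                     # explicit Euler step
        t += dt
        if v >= theta:
            return t   # spike: time shifts smoothly as w changes
    return None        # no spike: below the critical weight

# Sweep the weight: below a critical value there is no spike at all,
# so the loss cannot "see" the missing spike through a spike-time gradient;
# above it, the spike time moves smoothly earlier as w grows.
for w in (1.0, 2.0, 2.5, 3.0):
    print(w, first_spike_time(w))
```

For subthreshold weights the function returns None, and no infinitesimal change in the spike-time loss signals how close the neuron was to firing; this is the missing gradient information the abstract refers to.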
Thomas Nowotny is a Professor of Informatics and the head of the AI research group at the University of Sussex, Brighton, UK. He was trained in theoretical physics at Georg-August-Universität Göttingen and Universität Leipzig before refocusing on computational neuroscience and bio-inspired AI during a postdoc at the University of California, San Diego. His main interests are spiking neural networks (SNNs) in computational neuroscience and machine learning, the efficient simulation of SNNs, the use of computational methods in electrophysiology, insect-inspired navigation, and the neuroscience of olfaction.
Event type: Seminar
Date: Tuesday, 22nd November 2022
Speaker(s): Thomas Nowotny
Chair/Host: Barbara Webb