
Repository code for the IJCNN 2021 paper "Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks"


loiccordone/sparse-spiking-neural-networks


Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks

This work is supported by the French technological research agency (ANRT) through a CIFRE thesis in collaboration between Renault and Université Côte d'Azur.

This repository contains the code for the paper Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks, accepted at IJCNN 2021, which presents a method to train sparse SNNs on event data using surrogate gradient learning.

This work is inspired by S2Net. The code is shared for research purposes only; for application development using SNNs, we recommend the SpikingJelly framework.

Our main contributions are:

  1. We propose an improvement over the supervised backpropagation-based learning algorithm for spiking neurons presented in [1] and [2], using strided sparse convolutions. The training implementation is also modified to operate timestep-wise rather than layer-wise. This greatly reduces training time while producing sparser networks at inference, with higher accuracy.

  2. We investigate a full event-based approach composed of sparse spiking convolutions, respecting the data temporality across the network and the constraints of an implementation on specialized neuromorphic and low power hardware.

  3. We evaluate our approach on the neuromorphic DVS128 Gesture dataset, achieving competitive results while using a much smaller and sparser network than other spiking neural networks.
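To illustrate the core mechanism behind contribution 1, here is a minimal, hedged sketch of surrogate gradient learning with a leaky integrate-and-fire (LIF) neuron processed timestep by timestep. This is not the paper's implementation: the threshold, leak factor, fast-sigmoid surrogate, and soft reset are illustrative choices, and PyTorch is assumed as the framework.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        # Fire a binary spike when the membrane potential crosses the threshold (1.0 here)
        return (v >= 1.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Replace the zero-almost-everywhere Heaviside derivative with a
        # fast-sigmoid surrogate centered on the threshold (slope 10 is arbitrary)
        sg = 1.0 / (1.0 + 10.0 * (v - 1.0).abs()) ** 2
        return grad_out * sg

def lif_step(x, v, beta=0.9):
    """One timestep of a LIF neuron: leaky integration, spike, soft reset."""
    v = beta * v + x          # leak previous potential, integrate input
    s = SpikeFn.apply(v)      # binary spikes, differentiable via the surrogate
    v = v - s                 # soft reset: subtract the threshold on spike
    return s, v
```

Because the surrogate is only used in the backward pass, the forward pass stays fully binary, which is what makes timestep-wise training compatible with sparse, event-driven inference.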


[1] E. Neftci, H. Mostafa, and F. Zenke, “Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks,” IEEE Signal Processing Magazine, 2019.
[2] T. Pellegrini, R. Zimmer, and T. Masquelier, "Low-activity supervised convolutional spiking neural networks applied to speech commands recognition," IEEE Spoken Language Technology Workshop, 2021.

Citation

If you find this work useful, feel free to cite our IJCNN paper:

L. Cordone, B. Miramond and S. Ferrante, "Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks", International Joint Conference on Neural Networks, 2021.

@InProceedings{Cordone_2021_IJCNN,
    author    = {Cordone, Loic and Miramond, Benoît and Ferrante, Sonia},
    title     = {Learning from Event Cameras with Sparse Spiking Convolutional Neural Networks},
    booktitle = {Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN)},
    month     = {July},
    year      = {2021},
    pages     = {}
}
