Code for the paper: Simple Transformer with Single Leaky Neuron for Event Vision, accepted at EVGEN: Event-based Vision in the Era of Generative AI - Transforming Perception and Visual Innovation Workshop, WACV 2025.
- Install spikingjelly from its GitHub repository only (not from PyPI), e.g. `pip install git+https://github.com/fangwei123456/spikingjelly.git`.
- Install the remaining packages listed in `requirements.txt`.
- Download the datasets manually from the spikingjelly datasets repository:
- DVS Gesture
- N-MNIST
- CIFAR10-DVS
- Place the downloaded datasets in the `event_vision/datasets/` folder. The code will automatically preprocess them into frame-based versions during execution (see the sketch below).
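
A minimal sketch of how spikingjelly performs this frame conversion, shown here for DVS Gesture only; the root path, frame count, and split strategy actually used by `custom_train.py` are assumptions:

```python
# Minimal sketch, not taken from this repository: how spikingjelly turns raw event
# recordings into fixed-length frame tensors. Root path and frames_number are assumptions.
from spikingjelly.datasets.dvs128_gesture import DVS128Gesture

root = './event_vision/datasets/dvs_gesture'  # assumed layout; raw files typically go under <root>/download
train_set = DVS128Gesture(root, train=True, data_type='frame',
                          frames_number=16, split_by='number')
frames, label = train_set[0]
print(frames.shape, label)  # e.g. (16, 2, 128, 128) and an integer class label
```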
- Download pretrained ResNet-18 and ResNet-50 weights from PyTorch/torchvision (see the sketch below).
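
One possible way to fetch the ImageNet-pretrained weights is through torchvision (requires torchvision >= 0.13); whether the code loads them from torchvision's cache or from local `.pth` files is an assumption, and the filenames below are hypothetical:

```python
# Sketch: download ImageNet-pretrained ResNet-18/50 weights via torchvision.
# The local filenames are hypothetical; adjust to whatever custom_train.py expects.
import torch
from torchvision.models import resnet18, resnet50, ResNet18_Weights, ResNet50_Weights

r18 = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
r50 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
torch.save(r18.state_dict(), 'resnet18_imagenet.pth')
torch.save(r50.state_dict(), 'resnet50_imagenet.pth')
```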
- Update the dataset paths in the `custom_train.py` file to match your setup (a hypothetical example of such an edit is shown after this list).
- To start training, run the following command:
```
python custom_train.py
```
- Logs and training progress will be stored in the `logs/` folder.
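
As referenced above, the dataset paths are set inside `custom_train.py`. The variable names below are hypothetical and only illustrate the kind of edit that is needed:

```python
# Hypothetical illustration only -- the actual variable names in custom_train.py may differ.
DVS_GESTURE_ROOT = './event_vision/datasets/dvs_gesture'
N_MNIST_ROOT     = './event_vision/datasets/n_mnist'
CIFAR10_DVS_ROOT = './event_vision/datasets/cifar10_dvs'
```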
If you find this code useful in your research, please consider citing our paper:
```
@InProceedings{Kumar_2025_WACV,
    author    = {Kumar, Himanshu and Konkar, Aniket},
    title     = {Simple Transformer with Single Leaky Neuron for Event Vision},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {February},
    year      = {2025},
    pages     = {928-934}
}
```
We have used the Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron code, and syops-counter for the energy calculations.
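
For reference, a minimal sketch of the parametric leaky integrate-and-fire (PLIF) neuron as exposed by spikingjelly; this is not the paper's architecture, and the module path assumes a recent spikingjelly release (older releases expose it under `spikingjelly.clock_driven`). syops-counter is used separately to count synaptic operations for the energy estimate.

```python
# Minimal sketch (not the paper's architecture): a parametric LIF neuron from
# spikingjelly processing a toy sequence of per-frame features.
import torch
from spikingjelly.activation_based import neuron, functional

plif = neuron.ParametricLIFNode(init_tau=2.0)  # membrane time constant is learnable

x = torch.randn(16, 4, 128)                    # (time steps, batch, features), toy input
spikes = torch.stack([plif(x[t]) for t in range(x.shape[0])])
functional.reset_net(plif)                     # clear membrane state between samples
print(spikes.shape)                            # torch.Size([16, 4, 128])
```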