
**FAIR SSLIME is deprecated. Please see [VISSL](https://github.com/facebookresearch/vissl), a ground-up rewrite in PyTorch with more functionality.**

# FAIR SSLIME

## Introduction

We present the Self-Supervised Learning Integrated Multi-modal Environment (SSLIME), a PyTorch-based toolkit that aims to accelerate the full research cycle in self-supervised learning, from designing a new self-supervised task to evaluating the learned representations. The toolkit treats multiple data modalities (images, videos, audio, text) as first-class citizens. It provides reference implementations of several self-supervised pretext tasks along with an extensive benchmark suite for evaluating self-supervised representations. The toolkit is designed to be easily reusable and extensible and to enable reproducible research. It also aims to support efficient distributed training across multiple nodes to facilitate research on Facebook-scale data.

## Framework Components

## Framework Features

Currently, the toolkit supports the Rotation pretext task [1] and evaluation of features from different layers. Support for the Jigsaw, Colorization, and DeepCluster pretext tasks will be added in the coming months.
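
As a quick illustration of what the rotation pretext task does, here is a minimal, self-contained PyTorch sketch of the idea from [1]: each image is rotated by 0, 90, 180, or 270 degrees, and a network is trained to predict which rotation was applied. This is not SSLIME's API; the backbone, shapes, and training step below are illustrative assumptions (see GETTING_STARTED.md for how to run the actual task).

```python
# Minimal sketch of the rotation pretext idea [1]. NOT SSLIME code:
# the backbone, shapes, and training step are illustrative assumptions.
import torch
import torch.nn as nn

def rotate_batch(images):
    """Rotate each image by 0/90/180/270 degrees; the label is the rotation index."""
    rotated, labels = [], []
    for k in range(4):  # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))  # rotate in H, W dims
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# Hypothetical tiny backbone plus a 4-way rotation classifier head.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 4)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(head.parameters()), lr=0.1
)

images = torch.randn(8, 3, 32, 32)  # stand-in for a real image batch
inputs, targets = rotate_batch(images)
loss = criterion(head(backbone(inputs)), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

After pretraining on this task, the backbone's intermediate features can be evaluated on downstream benchmarks, which is the kind of layer-wise evaluation the toolkit supports.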

## Installation

Please find installation instructions in [INSTALL.md](INSTALL.md).

## Getting Started

After installation, please see [GETTING_STARTED.md](GETTING_STARTED.md) to learn how to run the various self-supervised learning tasks.

## License

SSLIME is CC-NC 4.0 International licensed, as found in the [LICENSE](LICENSE) file.

## Citation

If you use SSLIME in your research, please use the following BibTeX entry.

```bibtex
@article{sslime2019,
  title={SSLIME: A Toolkit for Multi-Modal Self-Supervised Training and Benchmarking},
  author={Bhooshan, Suvrat and Misra, Ishan and Fergus, Rob and Goyal, Priya},
  year={2019}
}
```

## References

1. Gidaris, Spyros, Praveer Singh, and Nikos Komodakis. "Unsupervised representation learning by predicting image rotations." arXiv preprint arXiv:1803.07728 (2018).