ZIB-IOL/compression-aware-SFW
Compression-aware Training of Neural Networks using Frank-Wolfe

Authors: Max Zimmer, Christoph Spiegel, Sebastian Pokutta

This repository contains the code to reproduce the experiments from the paper "Compression-aware Training of Neural Networks using Frank-Wolfe" (arXiv:2205.11921). The code is based on PyTorch 1.9 and uses Weights & Biases as the experiment-tracking platform.

Structure and Usage

Experiments are started from the following file:

  • main.py: Starts experiments using the dictionary format of Weights & Biases.
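As an orientation, a Weights & Biases run is typically driven by a flat config dictionary passed to `wandb.init`. The sketch below is illustrative only: the actual keys and defaults are defined in main.py, and every key name shown here (`dataset`, `model`, `optimizer`, `epochs`) is an assumption, not taken from the repository.

```python
# Hypothetical config dictionary in the Weights & Biases style.
# Key names are illustrative assumptions; main.py defines the real ones.
config = {
    "dataset": "CIFAR10",    # assumed key: dataset to train on
    "model": "ResNet18",     # assumed key: architecture from models/
    "optimizer": "SFW",      # assumed key: one of the optimizers in optimizers/
    "epochs": 100,           # assumed key: training length
}

# A run would then register this dictionary with W&B, roughly:
#   import wandb
#   wandb.init(project="compression-aware-SFW", config=config)
```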

The rest of the project is structured as follows:

  • strategies: Contains all used sparsification methods.
  • runners: Contains classes to control the training and collection of metrics.
  • metrics: Contains all metrics as well as FLOP computation methods.
  • models: Contains all model architectures used.
  • optimizers: Contains reimplementations of SFW (Stochastic Frank-Wolfe), SGD, and Proximal SGD.
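For intuition on the SFW reimplementation: a Frank-Wolfe update replaces the unconstrained gradient step of SGD with a move toward the output of a Linear Minimization Oracle (LMO) over the feasible region, so the iterates stay inside the constraint set by construction. Below is a minimal pure-Python sketch of one such step over an L∞-norm ball; it is a didactic example under that assumed constraint set, not the repository's implementation, which lives in optimizers/ and operates on PyTorch tensors.

```python
def linf_lmo(grad, tau):
    """LMO for the L-infinity ball of radius tau:
    v = argmin_{||v||_inf <= tau} <grad, v> = -tau * sign(grad).
    (For grad components equal to 0, any vertex minimizes; we pick +tau.)"""
    return [-tau if g > 0 else tau for g in grad]

def sfw_step(w, grad, tau, lr):
    """One Stochastic Frank-Wolfe update: a convex combination of the
    current iterate and the LMO vertex, w <- w + lr * (v - w)."""
    v = linf_lmo(grad, tau)
    return [wi + lr * (vi - wi) for wi, vi in zip(w, v)]
```

Because the update is a convex combination (for lr in [0, 1]) of two points inside the ball, no projection step is needed, which is the practical appeal of Frank-Wolfe methods for constrained training.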

Citation

If you find the paper or the implementation useful for your own research, please consider citing:

@Article{zimmer2022,
  author        = {Max Zimmer and Christoph Spiegel and Sebastian Pokutta},
  title         = {Compression-aware Training of Neural Networks using Frank-Wolfe},
  year          = {2022},
  archiveprefix = {arXiv},
  eprint        = {2205.11921},
  primaryclass  = {cs.LG},
}