
Releases: mapillary/inplace_abn

Distributed group handling

16 Jul 15:14

Added a couple of functions to manage distributed groups with InplaceABNSync:

  • active_group: create a distributed group where each worker can decide whether or not to participate.
  • set_active_group: scan a model, passing a distributed group to all layers that implement a set_group() method.

These are intended to simplify handling of asymmetric computational graphs in DistributedDataParallel when using InplaceABNSync. A typical usage is as follows:

import torch.nn as nn
from inplace_abn import InplaceABNSync, active_group, set_active_group

class DynamicModel(nn.Module):
    def __init__(self):
        super(DynamicModel, self).__init__()
        self.conv1 = nn.Conv2d(4, 4, 1)
        self.bn1 = InplaceABNSync(4)
        self.conv2 = nn.Conv2d(4, 4, 1)
        self.bn2 = InplaceABNSync(4)
    
    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        
        # Call some data-dependent function telling us whether the second part of the network
        # should be traversed or not
        active = self.get_active(x)
        
        # Create process group containing only the active workers, pass it to bn2
        set_active_group(self.bn2, active_group(active))
        
        # Run the second part of the network only if active is True
        if active:
            x = self.conv2(x)
            x = self.bn2(x)
        
        return x
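
For completeness, here is a minimal sketch of how such a model might be driven with DistributedDataParallel. The process-group initialization and the find_unused_parameters flag are assumptions about the surrounding training setup rather than part of this release; find_unused_parameters=True is the standard DistributedDataParallel option for graphs where some parameters (here conv2 / bn2 on inactive workers) receive no gradient:

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# One process per GPU; environment-based initialization is only an example
# (e.g. when launching with torch.distributed.launch).
dist.init_process_group(backend="nccl", init_method="env://")
device = torch.device("cuda", torch.cuda.current_device())

model = DynamicModel().to(device)
model = DistributedDataParallel(
    model,
    device_ids=[device.index],
    find_unused_parameters=True,  # tolerate the branch skipped on inactive workers
)

x = torch.randn(2, 4, 8, 8, device=device)  # arbitrary example input
out = model(x)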

Packaging improvements and Vistas script bugfixes

08 Jul 12:58
v1.0.2

Version 1.0.2: improved packaging and README, script fixes

Mixed precision support

05 Jul 10:11

This update adds back support for mixed precision training. The following combinations of inputs / parameters are now supported:

  • float32 input, float32 weight and bias
  • float64 input, float64 weight and bias
  • float16 input, float16 weight and bias
  • float16 input, float32 weight and bias

Note: in the float16 cases all internal operations are still performed with float32 math, and float16 is not supported when operating in CPU mode.
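
As an illustration, a minimal sketch of the float16 input / float32 parameter case (layer name and constructor follow the examples in these notes; shapes are arbitrary, and a CUDA device is required since float16 is not supported in CPU mode):

import torch
from inplace_abn import InplaceABN

abn = InplaceABN(64).cuda()                            # parameters stay in float32 (the default)
x = torch.randn(8, 64, 32, 32, device="cuda").half()  # float16 activations
y = abn(x)                                             # statistics are computed with float32 math internally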

Split iABN library from training scripts, improved native code and synchronized BN

04 Jul 13:44

This release marks some major changes in inplace_abn:

  • Complete rewrite of the CUDA code following the most recent native BN implementation from Pytorch
  • Improved synchronized BN implementation, correctly handling different per-GPU batch sizes and Pytorch distributed groups
  • The iABN layers are now packaged in an installable Python library to simplify use in other projects (see the short install / import sketch after this list)
  • The ImageNet / Vistas scripts are still available in the scripts folder
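
As a rough sketch of the new workflow, the layers can now be installed and imported independently of the training scripts. The PyPI package name shown below is an assumption; see README.md for the authoritative installation instructions:

# pip install inplace-abn   (assumed package name; see README.md)
import torch.nn as nn
from inplace_abn import InplaceABN, InplaceABNSync

# iABN as a drop-in block in any model, no training scripts required
block = nn.Sequential(
    nn.Conv2d(16, 16, 3, padding=1, bias=False),
    InplaceABN(16),
)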

Added ResNet

14 Feb 13:18
42b0952

We added the possibility of training ResNet with inplace ABN layers.

In addition, we released ResNet34 and ResNet50 models pre-trained on ImageNet.

Pytorch v1.0-compatible release

08 Jan 13:43
59a6fd6

This is a code refactoring to enable compatibility with Pytorch v1.0.

Additional changes:

  • Moved from multi-threaded training to distributed training with multiple processes
  • We provide an adapted implementation of synchronized inplace ABN
  • Our inplace ABN layer is compatible with fp16 tensors.

Pytorch v0.4.1-compatible release

07 Jan 13:37

This is a partial code refactoring to enable compatibility with Pytorch v0.4.1. In particular:

  • Fixed compatibility with pytorch>=0.4.1 following the change to AT_ASSERT
  • Fixed GPU allocation of tensors created in CUDA code

Additional changes:

  • Added segmentation models and scripts to run inference on Vistas
  • Updated license

Pytorch v0.4-compatible release

18 Jul 09:50

This is a partial code refactoring to enable compatibility with Pytorch v0.4. In particular:

  • Native functions have been rewritten to use the new ATen-based extension interface introduced in v0.4. As a side effect, the native code doesn't need to be pre-compiled anymore. Instead, we are now using Pytorch's newly introduced run-time library loading mechanism.
  • The python code has been modified to account for the fact that autograd.Variable does not exist anymore.

Additional changes:

  • ABN modules have been slightly refactored, leading to a slight change in the structure of the overall models' state_dicts. As a consequence, pre-trained models need to be re-downloaded (updated links in README.md).

Pytorch v0.3-compatible release

17 Jul 13:35

NOTE: this is the last release that is compatible with Pytorch v0.3

After this release, the code will undergo a partial rewrite to adapt to the changes introduced in Pytorch v0.4 regarding Tensors / Variables and native functions. As a consequence, support for Pytorch v0.3 and earlier will be dropped.