Enable AutoAugment and modernize DALI pipeline for ConvNets #1343

Open · wants to merge 1 commit into master
Commits on Aug 29, 2023

  1. Enable AutoAugment and modernize DALI pipeline for ConvNets

Update the DALI implementation to use the modern "fn" API
    instead of the old class-based approach.
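
    For context, a minimal sketch of a training pipeline written with the
    "fn" API (function and parameter names here are illustrative, not
    necessarily the ones used in this PR):

    ```python
    # Minimal DALI "fn"-API training pipeline sketch; data_dir and crop
    # are illustrative parameters, not the exact ones from this PR.
    from nvidia.dali import pipeline_def, fn, types

    @pipeline_def
    def training_pipe(data_dir, crop):
        jpegs, labels = fn.readers.file(file_root=data_dir,
                                        random_shuffle=True, name="Reader")
        # "mixed" device: decoding starts on the CPU, output lands in GPU memory
        images = fn.decoders.image_random_crop(jpegs, device="mixed",
                                               output_type=types.RGB)
        images = fn.resize(images, resize_x=crop, resize_y=crop)
        images = fn.crop_mirror_normalize(
            images,
            dtype=types.FLOAT,
            output_layout="NCHW",
            mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
            std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
            mirror=fn.random.coin_flip())
        return images, labels.gpu()
    ```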
    
Add a code path that uses AutoAugment in the DALI training pipeline.
    It can easily be extended to use other Automatic Augmentations.
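
    A sketch of how AutoAugment can be applied in such a pipeline, assuming
    a DALI version that ships the nvidia.dali.auto_aug module; automatic
    augmentations rely on DALI's conditional execution, hence
    enable_conditionals=True:

    ```python
    # Sketch: AutoAugment inside a DALI pipeline (assumes a recent DALI
    # with nvidia.dali.auto_aug; names here are illustrative).
    from nvidia.dali import pipeline_def, fn, types
    from nvidia.dali.auto_aug import auto_augment

    @pipeline_def(enable_conditionals=True)  # required by automatic augmentations
    def aa_training_pipe(data_dir, crop):
        jpegs, labels = fn.readers.file(file_root=data_dir,
                                        random_shuffle=True, name="Reader")
        images = fn.decoders.image(jpegs, device="mixed", output_type=types.RGB)
        images = fn.resize(images, resize_x=crop, resize_y=crop)
        # ImageNet AutoAugment policy; shape lets translation operations
        # scale their magnitudes to the image size
        images = auto_augment.auto_augment_image_net(images, shape=[crop, crop])
        images = fn.crop_mirror_normalize(
            images, dtype=types.FLOAT, output_layout="NHWC",
            mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
            std=[0.229 * 255, 0.224 * 255, 0.225 * 255])
        return images, labels.gpu()
    ```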
    
The integration of the DALI pipeline with PyTorch additionally skips
    the transposition step when exposing NHWC data.
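
    A sketch of the PyTorch side, reusing the pipeline sketched above:
    because the pipeline already emits NHWC (channels-last) data, the
    iterator can hand tensors straight to a channels-last model without
    transposing (iterator arguments are illustrative):

    ```python
    # Sketch: exposing the DALI pipeline to PyTorch without a transpose.
    # Builds on the aa_training_pipe sketch above; values are illustrative.
    from nvidia.dali.plugin.pytorch import DALIGenericIterator, LastBatchPolicy

    pipe = aa_training_pipe(batch_size=128, num_threads=4, device_id=0,
                            data_dir="/data/train", crop=224)
    pipe.build()
    loader = DALIGenericIterator(pipe, ["data", "label"],
                                 reader_name="Reader",
                                 last_batch_policy=LastBatchPolicy.DROP)
    for batch in loader:
        # NHWC float tensors on the GPU, ready for a channels-last model;
        # no transposition needed on the PyTorch side
        images, labels = batch[0]["data"], batch[0]["label"]
    ```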
    
Extract the DALI implementation into a separate file.
    Update the readme and some configuration files for EfficientNet:
    * dali-gpu is now the default data backend, instead of PyTorch
    * DALI supports AutoAugment (plus a mention of other Automatic Augmentations)
    
    Fix a typo in the readme files:
    --data-backends -> --data-backend
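
    With the corrected flag, training with the DALI GPU backend is launched
    along these lines (an illustrative invocation; the remaining options are
    elided and follow the ConvNets readme):

    ```
    python ./main.py --data-backend dali-gpu ... /path/to/imagenet
    ```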
    
This PR is a backport of the changes made to this example when it was
    introduced into the DALI codebase:
    https://github.com/NVIDIA/DALI/tree/main/docs/examples/use_cases/pytorch/efficientnet
    
The changes were tested only with the smallest EfficientNet variant.
    
Using the DALI GPU pipeline for training can remove the CPU bottleneck
    on both DGX-1V and DGX-A100 when running with AMP, as covered in the
    blog post:
    https://developer.nvidia.com/blog/why-automatic-augmentation-matters/
    
    Signed-off-by: Krzysztof Lecki <[email protected]>
    klecki committed Aug 29, 2023
    Commit d6c8f05