Please visit this page for performance information.
This repository is a collection of models that have been ported to run on the Intel Gaudi AI accelerator. They are intended as examples and are reasonably optimized for performance while remaining easy to read.
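
As a rough illustration of what "ported to run on Gaudi" means in practice, the sketch below shows the basic PyTorch-on-HPU pattern the reference models build on: import the Habana PyTorch bridge, place modules and tensors on the `hpu` device, and call `mark_step()` when running in lazy mode. This is a minimal sketch assuming an environment with the Intel Gaudi software stack installed; the toy linear model and shapes are illustrative and not taken from any model in this repository.

```python
# Minimal sketch of the usual Gaudi porting pattern (illustrative toy model,
# not one of the reference models; assumes habana_frameworks is installed).
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

device = torch.device("hpu")

model = torch.nn.Linear(16, 4).to(device)            # move the model to the HPU
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(8, 16, device=device)           # toy batch on the HPU
targets = torch.randint(0, 4, (8,), device=device)

loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
htcore.mark_step()   # in lazy mode, flush the accumulated graph
optimizer.step()
htcore.mark_step()
print(f"loss: {loss.item():.4f}")
```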

## Computer Vision

Models | Framework | Validated on Gaudi | Validated on Gaudi 2 |
---|---|---|---|
ResNet50, ResNeXt101 | PyTorch | Training | Training, Inference |
ResNet152 | PyTorch | Training | - |
MobileNetV2 | PyTorch | Training | - |
UNet 2D, UNet 3D | PyTorch Lightning | Training, Inference | Training, Inference |
SSD | PyTorch | Training | Training |
GoogLeNet | PyTorch | Training | - |
Vision Transformer | PyTorch | Training | - |
DINO | PyTorch | Training | - |
YOLOX | PyTorch | Training | - |

## Natural Language Processing

Models | Framework | Validated on Gaudi | Validated on Gaudi 2 |
---|---|---|---|
BERT Pretraining and Finetuning | PyTorch | Training, Inference | Training, Inference |
DeepSpeed BERT-1.5B, BERT-5B | PyTorch | Training | - |
BART | PyTorch | Training | - |

## Audio

Models | Framework | Validated on Gaudi | Validated on Gaudi 2 |
---|---|---|---|
Wav2Vec2ForCTC | PyTorch | Inference | Inference |

## Generative Models

Models | Framework | Validated on Gaudi | Validated on Gaudi 2 |
---|---|---|---|
Stable Diffusion | PyTorch Lightning | Training | Training |
Stable Diffusion FineTuning | PyTorch | Training | Training |

## MLPerf™ Training

Models | Framework | Validated on Gaudi | Validated on Gaudi 2 |
---|---|---|---|
GPT3 | PyTorch | - | Training |
Llama 70B LoRA | PyTorch | - | Training |

## MLPerf™ Inference

Models | Framework | Validated on Gaudi | Validated on Gaudi 2 |
---|---|---|---|
Llama 70B | PyTorch | - | Inference |
Stable Diffusion XL | PyTorch | - | Inference |
MLPerf™ is a trademark and service mark of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use is strictly prohibited.

## Reporting Issues

We welcome you to use the GitHub issue tracker to report bugs or suggest features.

When filing an issue, please check existing open or recently closed issues to make sure it hasn't already been reported. Please include as much information as you can; details like these are incredibly useful:
- A reproducible test case or series of steps
- The version of our code being used
- Any modifications you've made relevant to the bug
- Anything unusual about your environment or deployment

## Notes

- All supported models are available in the Optimum Habana project (https://github.com/huggingface/optimum-habana/) and as model cards at https://huggingface.co/Habana; see the sketch after this list for a typical fine-tuning pattern.
- Megatron-DeepSpeed was moved to a new GitHub repository HabanaAI/Megatron-DeepSpeed.
- This model was moved to a new GitHub repository HabanaAI/DeepSpeedExample.
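
For the models hosted in Optimum Habana, training follows the familiar `transformers` Trainer API with Gaudi-specific arguments. The sketch below is a minimal, hedged example of fine-tuning a Hugging Face checkpoint on Gaudi with `GaudiTrainer`; the model name, Gaudi configuration, dataset, and hyperparameters are illustrative assumptions rather than recommendations from this repository.

```python
# Hedged sketch of fine-tuning on Gaudi with optimum-habana. The checkpoint,
# Gaudi config, dataset, and hyperparameters below are illustrative choices.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small slice of a public dataset, tokenized for the toy fine-tuning run.
dataset = load_dataset("glue", "sst2", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="./sst2-gaudi",
    use_habana=True,                               # run on HPU
    use_lazy_mode=True,                            # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # Gaudi config from the Habana hub org
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```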