Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
## General
## Integrations
- Azure Machine Learning
- Azure IoT Edge
- Azure Media Services
- Azure SQL Edge and Managed Instance
- Windows Machine Learning
- ML.NET
## Inference only
- CPU: Basic
- CPU: ResNet50
- ONNX-Ecosystem Docker image
- ONNX Runtime Server: SSD Single Shot MultiBox Detector
- NUPHAR EP samples
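Whatever the sample, the core inference flow is the same: load the model into an `InferenceSession` and call `run`. A minimal Python sketch (the model path, execution provider, and input shape below are assumptions, sized here for a ResNet50-style model):

```python
# Minimal ONNX Runtime inference sketch.
# "model.onnx" and the 1x3x224x224 input shape are placeholders;
# substitute your own model and its expected input.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# None means "return all model outputs".
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```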
## Inference with model conversion
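These samples start from a model trained in another framework and convert it to ONNX before running it with ONNX Runtime. A minimal sketch using skl2onnx for a scikit-learn model (the model choice, feature count, and file name are assumptions for illustration):

```python
# Convert a scikit-learn model to ONNX, then verify it with ONNX Runtime.
import numpy as np
import onnxruntime as ort
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

# Declare the input signature: a float tensor with 4 features per row.
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])
with open("iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Run the converted model; the first output holds the predicted labels.
session = ort.InferenceSession("iris.onnx", providers=["CPUExecutionProvider"])
preds = session.run(None, {"input": X[:5].astype(np.float32)})
print(preds[0])
```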
## Other
- C: SqueezeNet
- C++: model-explorer - single and batch processing
- C++: SqueezeNet
- C++: MNIST
## Inference and deploy through AzureML
For additional information on training in AzureML, see the AzureML Training Notebooks. A minimal scoring-script sketch follows the list below.
- Inferencing on CPU using ONNX Model Zoo models
- Inferencing on CPU with PyTorch model training
- Inferencing on CPU with model conversion for an existing (CoreML) model
- Inferencing on GPU with TensorRT Execution Provider (AKS)
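These notebooks share the same deployment pattern: register the ONNX model, then deploy a scoring script that loads it in `init()` and serves requests in `run()`. A minimal `score.py` sketch (the model file name and JSON payload layout are assumptions):

```python
# score.py sketch for an AzureML web service wrapping an ONNX model.
# AZUREML_MODEL_DIR is set by AzureML inside the deployed container;
# "model.onnx" and the {"data": [...]} payload shape are placeholders.
import json
import os

import numpy as np
import onnxruntime as ort

session = None

def init():
    # Called once when the service starts: load the registered model.
    global session
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."), "model.onnx")
    session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

def run(raw_data):
    # Called per request: parse the JSON payload and return predictions.
    data = np.array(json.loads(raw_data)["data"], dtype=np.float32)
    result = session.run(None, {session.get_inputs()[0].name: data})
    return {"result": result[0].tolist()}
```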
## Inference and deploy with Azure IoT Edge
## Deploy ONNX model in Azure SQL Edge
## Examples of inferencing with ONNX Runtime through Windows Machine Learning