Use `docker pull` with any of the images and tags below to pull an image and try it for yourself. Note that the build-from-source (CPU), CUDA, and TensorRT images include additional dependencies like Miniconda for compatibility with AzureML image deployment.

Example: run

```
docker pull mcr.microsoft.com/azureml/onnxruntime:latest-cuda
```

to pull the latest released docker image with ONNX Runtime GPU, CUDA, and cuDNN support.
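To sanity-check a pulled image, you can invoke Python inside it. A minimal sketch, assuming the image's Python environment has the onnxruntime package on its path:

```
# Print the onnxruntime version from inside the container
docker run --rm mcr.microsoft.com/azureml/onnxruntime:latest python -c "import onnxruntime; print(onnxruntime.__version__)"
```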
| Build Flavor | Base Image | ONNX Runtime Docker Image tags | Latest |
|---|---|---|---|
| Source (CPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0, :v0.5.0 | :latest |
| CUDA (GPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-cuda10.0-cudnn7, :v0.5.0-cuda10.1-cudnn7 | :latest-cuda |
| TensorRT (x86) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-tensorrt19.03, :v0.5.0-tensorrt19.06 | :latest-tensorrt |
| OpenVINO (VAD-M) | mcr.microsoft.com/azureml/onnxruntime | TBA | TBA |
| OpenVINO (MYRIAD) | mcr.microsoft.com/azureml/onnxruntime | TBA | TBA |
| Server | mcr.microsoft.com/onnxruntime/server | :v0.4.0, :v0.5.0 | :latest |
- Build the docker image from Dockerfile.source in this repository.

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker build -t onnxruntime-source -f Dockerfile.source .
  ```

- Run the Docker image

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker run -it onnxruntime-source
  ```
- Build the docker image from Dockerfile.cuda in this repository.

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker build -t onnxruntime-cuda -f Dockerfile.cuda .
  ```

- Run the Docker image

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker run -it onnxruntime-cuda
  ```
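The plain `docker run` above starts the container, but whether the GPU is visible inside it depends on your host configuration. A minimal sketch, assuming Docker 19.03+ with the NVIDIA Container Toolkit installed on the host:

```
# Expose all host GPUs to the container
docker run -it --gpus all onnxruntime-cuda
```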
- Build the docker image from Dockerfile.ngraph in this repository.

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker build -t onnxruntime-ngraph -f Dockerfile.ngraph .
  ```

- Run the Docker image

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker run -it onnxruntime-ngraph
  ```
- Build the docker image from Dockerfile.tensorrt in this repository.

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
  ```

- Run the Docker image

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker run -it onnxruntime-trt
  ```
- Build the onnxruntime image for any of the supported accelerators as below.

  Retrieve your docker image in one of the following ways.

  - To build the docker image, download the OpenVINO online installer version 2019 R1.1 from here, copy the OpenVINO tar file into the same directory, and build the image. The online installer is only 16MB, and the components needed for the accelerators are listed in the dockerfile. Providing the build argument DEVICE enables onnxruntime for that particular device. You can also provide the arguments ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH to test a particular repo and branch; the default values are http://github.com/microsoft/onnxruntime and master, respectively. (A filled-in example follows the device table below.)

    ```
    docker build -t onnxruntime --build-arg DEVICE=$DEVICE .
    ```

  - Pull the official image from DockerHub.
- DEVICE: Specifies the hardware target for building the OpenVINO Execution Provider. Below are the options for different Intel target devices.

  | Device Option | Target Device |
  |---|---|
  | CPU_FP32 | Intel CPUs |
  | GPU_FP32 | Intel Integrated Graphics |
  | GPU_FP16 | Intel Integrated Graphics |
  | MYRIAD_FP16 | Intel Movidius™ USB sticks |
  | VAD-M_FP16 | Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs |
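For example, to build for a Movidius USB stick while spelling out the default repo and branch explicitly (the device choice here is just an illustration):

```
docker build -t onnxruntime \
  --build-arg DEVICE=MYRIAD_FP16 \
  --build-arg ONNXRUNTIME_REPO=http://github.com/microsoft/onnxruntime \
  --build-arg ONNXRUNTIME_BRANCH=master .
```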
- Retrieve your docker image in one of the following ways.

  - Build the docker image from the Dockerfile in this repository.

    ```
    docker build -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 --network host .
    ```

  - Pull the official image from DockerHub.

    ```
    # Will be available with the next release
    ```

- Run the docker image

  ```
  docker run -it onnxruntime-cpu
  ```
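To confirm that the OpenVINO execution provider is present in the image you built, you can list the available providers. A minimal sketch, assuming the image has Python with the onnxruntime package on its path:

```
# The list should include an OpenVINO entry alongside CPUExecutionProvider
docker run --rm onnxruntime-cpu python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```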
- Retrieve your docker image in one of the following ways.

  - Build the docker image from the Dockerfile in this repository.

    ```
    docker build -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 --network host .
    ```

  - Pull the official image from DockerHub.

    ```
    # Will be available with the next release
    ```

- Run the docker image

  ```
  docker run -it --device /dev/dri:/dev/dri onnxruntime-gpu:latest
  ```
- Retrieve your docker image in one of the following ways.

  - Build the docker image from the Dockerfile in this repository.

    ```
    docker build -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 --network host .
    ```

  - Pull the official image from DockerHub.

    ```
    # Will be available with the next release
    ```

- Install the Myriad rules drivers on the host machine according to the reference here.

- Run the docker image, mounting the device drivers.

  ```
  docker run -it --network host --privileged -v /dev:/dev onnxruntime-myriad:latest
  ```
- Retrieve your docker image in one of the following ways.

  - Build the docker image from the Dockerfile in this repository.

    ```
    docker build -t onnxruntime-vadr --build-arg DEVICE=VAD-M_FP16 --network host .
    ```

  - Pull the official image from DockerHub.

    ```
    # Will be available with the next release
    ```

- Install the HDDL drivers on the host machine according to the reference here.

- Run the docker image, mounting the device drivers.

  ```
  docker run -it --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadr:latest
  ```
- Build the docker image from Dockerfile.server in this repository.

  ```
  docker build -t {docker_image_name} -f Dockerfile.server .
  ```

- Run the ONNX Runtime Server with the image created in the previous step.

  ```
  docker run -v {localModelAbsoluteFolder}:{dockerModelAbsoluteFolder} -p {your_local_port}:8001 {imageName} --model_path {dockerModelAbsolutePath}
  ```

- Send HTTP requests to the container running ONNX Runtime Server.

  Send HTTP requests to the docker container through the bound local port. Here is the full usage document. A filled-in end-to-end sketch follows below.

  ```
  curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:{your_local_port}/v1/models/mymodel/versions/3:predict
  ```
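To make the placeholders concrete, here is a hypothetical end-to-end sketch. The folder paths, port, image name, input/output names, and tensor values are illustrative assumptions, and the JSON body follows the server's protobuf-to-JSON mapping, where `rawData` carries the base64-encoded little-endian tensor bytes:

```
# Hypothetical values: models live in /home/me/models on the host, the image was
# tagged onnxruntime-server, and local port 9001 is forwarded to the server's 8001
docker run -v /home/me/models:/models -p 9001:8001 onnxruntime-server --model_path /models/mymodel.onnx

# Assumed request body: one float32 input named "x" with shape [1, 3] holding
# the values [1.0, 2.0, 3.0]; dataType 1 is FLOAT in the ONNX TensorProto enum
cat > request.json <<'EOF'
{
  "inputs": {
    "x": {
      "dims": ["1", "3"],
      "dataType": 1,
      "rawData": "AACAPwAAAEAAAEBA"
    }
  },
  "outputFilter": ["y"]
}
EOF

curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:9001/v1/models/mymodel/versions/3:predict
```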
- Build the docker image from Dockerfile.nuphar in this repository.

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker build -t onnxruntime-nuphar -f Dockerfile.nuphar .
  ```

- Run the Docker image

  ```
  # If you have a Linux machine, preface this command with "sudo"
  docker run -it onnxruntime-nuphar
  ```