
Getting started: using the new features of MIGraphX 0.2

mvermeulen edited this page Feb 21, 2019 · 32 revisions

New Features in MIGraphX 0.2

MIGraphX 0.2 supports the following new features:

  • New Python API
  • Additional ONNX features and fixes that now enable a large set of Imagenet models
  • Support for RNN operators
  • Support for multi-stream execution

This page provides examples of how to use these new features.

Python API

MIGraphX functionality can now be called from Python as well as C++. This support is illustrated with an example below of a "webcam classifier". The classifier uses OpenCV Python modules to capture images from a webcam, reformats each image to NCHW format, and then uses MIGraphX to evaluate the images using an Imagenet-based neural network. The result is a stream of classifications of what is seen in the webcam.

The first release of the Python API targets Python 2.7.

Prerequisites

Prior to running this example, one needs to install OpenCV. On Ubuntu this can be done by installing the following package:

prompt% apt install python-opencv

The PYTHONPATH variable should be set by the package installation scripts. However, if necessary it can be set using

export PYTHONPATH=/opt/rocm/migraphx/py:$PYTHONPATH

Python code example

Our Python code example starts with setup code for the webcam. In this particular example we capture a small image (240x320) that we will later crop to Imagenet size (CHW: 3 channels, 224 height, 224 width) and represent as float32 values instead of uint8.

import numpy as np
import cv2
# video settings
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH,320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT,240)

An additional piece of setup is to read in Imagenet labels from a file that stores them in JSON format

import json
# get labels
with open('imagenet_class_index.json') as json_data:
   class_idx = json.load(json_data)
   idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
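As a self-contained illustration of this lookup-table construction, the sketch below substitutes a hypothetical two-entry JSON string for the full 1000-class imagenet_class_index.json file:

```python
import json

# miniature stand-in for imagenet_class_index.json (hypothetical entries)
sample = '{"0": ["n01440764", "tench"], "1": ["n01443537", "goldfish"]}'
class_idx = json.loads(sample)

# index the human-readable label (second element) by numeric class id
idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
print(idx2label)  # ['tench', 'goldfish']
```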

With OpenCV and labels set up, we now initialize the MIGraphX interface. The first step is to read a model, including weights, from an ONNX file

import migraphx
model = migraphx.parse_onnx("resnet50.onnx")

The next step is to "compile" the model. The compilation step runs optimization passes on the model and also loads constant parameter weights to the GPU memory.

model.compile(migraphx.get_target("gpu"))

While the compilation step has loaded model weights, we also need to allocate GPU memory for the input, output and scratch parameters found in the model. This is accomplished with the following code

# allocate space on the GPU for model parameters
params = {}
for key,value in model.get_parameter_shapes().items():
   params[key] = migraphx.allocate_gpu(value)

With these steps complete, we now get to the primary loop that will capture images from a webcam, manipulate them to Imagenet format and call MIGraphX to evaluate the model.

while (True):
   # capture frame by frame
   ret,frame = cap.read()

   if ret: # check - some webcams need warmup operations

The following steps process the captured frame to an image for the Resnet50 model

      cropped = frame[16:240,8:232]    # crop 240x320 frame to 224x224
      trans = cropped.transpose(2,0,1) # convert HWC to CHW
      image = np.ascontiguousarray(    # contiguous buffer to feed to migraphx
         np.expand_dims(               # change CHW to NCHW
            trans.astype('float32')/256.0,0))  # convert uint8 to float32
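The shape changes above can be checked without a webcam. The sketch below uses a synthetic random frame in place of a real capture (only NumPy is assumed) and mirrors the crop, transpose and expand steps:

```python
import numpy as np

# synthetic 240x320 BGR frame standing in for a webcam capture
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

cropped = frame[16:240, 8:232]           # 224x224x3, HWC
trans = cropped.transpose(2, 0, 1)       # 3x224x224, CHW
image = np.ascontiguousarray(            # contiguous NCHW buffer
    np.expand_dims(trans.astype('float32') / 256.0, 0))

print(image.shape)   # (1, 3, 224, 224)
print(image.dtype)   # float32
```

The final array has the 1x3x224x224 NCHW layout and float32 element type that the Resnet50 model expects.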

The following creates a window to display webcam frames in uint8 format before conversion

      cv2.imshow('frame',cropped)

The following code copies the converted frame to the GPU.

      params['0'] = migraphx.to_gpu(migraphx.argument(image))

The following code runs the model, returns the result and puts it in a numpy array

      result = np.array(migraphx.from_gpu(model.run(params)), copy=False)

The result for the Resnet50 model is an array of 1000 elements containing class scores. We find the index of the highest score, look up the label name and print it as output

      idx = np.argmax(result[0])
      print idx2label[idx]
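This lookup can be illustrated on its own with a hypothetical score vector. The sketch below uses a truncated three-entry label list and a made-up result array in place of the real 1000-class model output:

```python
import numpy as np

# truncated, hypothetical label list and model output (batch of one)
idx2label = ['tench', 'goldfish', 'great white shark']
result = np.array([[0.1, 0.7, 0.2]], dtype=np.float32)

idx = np.argmax(result[0])    # index of the highest score
print(idx2label[idx])         # goldfish
```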

The last part of the code looks for a 'q' key to be pressed to exit the program

   if cv2.waitKey(1) & 0xFF == ord('q'):
      break

Outside the loop we release the OpenCV resources and exit the program

# when all is done, release the capture
cap.release()
cv2.destroyAllWindows()

Overall, this program provides an end-to-end example of using the MIGraphX Python API, including parsing ONNX files, compiling models, loading parameters to GPU memory and running programs.

Imagenet Model support

insert example here that references Cadene and shows how to use different models in the classifier

RNN Operator support

insert example with basic RNN operator