[#58247] Review examples/yolact/README.md and examples/mask_rcnn/README.md

Signed-off-by: Wojtek Rajtar <[email protected]>
wrajtar authored and glatosinski committed Apr 30, 2024
1 parent cab39fb commit f0a7901
Showing 2 changed files with 44 additions and 44 deletions.
60 changes: 30 additions & 30 deletions examples/mask_rcnn/README.md
# Instance segmentation inference testing with MaskRCNN

This demo runs an instance segmentation algorithm on frames from the COCO dataset.
The demo consists of four parts:

* `CVNodeManager` - manages the testing scenario and data flow between the data provider and the tested MaskRCNN node.
* `CVNodeManagerGUI` - visualizes input data and results of inference testing.
* `Kenning` - provides images to the MaskRCNN node and collects inference results.
* `MaskRCNN` - runs inference on input images and returns results.

## Necessary dependencies

This demo requires:
* A CUDA-enabled NVIDIA GPU for inference acceleration
* [repo tool](https://gerrit.googlesource.com/git-repo/+/refs/heads/main/README.md) to clone all necessary repositories
* [Docker](https://www.docker.com/) to use a prepared environment
* [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia-container-toolkit) to provide access to the GPU in the Docker container

All the necessary build, runtime and development dependencies are provided in the [Dockerfile](./Dockerfile).
The image contains:
The image contains:
* CUDA and cuDNN libraries for GPU acceleration
* Additional development tools

To build the Docker image containing all necessary dependencies, run:

```bash
sudo ./build-docker.sh
```

For more details regarding the base image, refer to the [ROS2 GuiNode](https://github

## Preparing the environment

First off, create a workspace directory to store downloaded repositories:

```bash
mkdir cvnode && cd cvnode
```

Download all dependencies using the `repo` tool:

```bash
repo init -u https://github.com/antmicro/ros2-vision-node-base.git -m examples/mask_rcnn/manifest.xml -b main
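# After initializing the manifest, fetch all repositories
# (standard repo workflow):
repo sync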
```

This script starts the image with:

* `-v $(pwd):/data` - mounts the current (`cvnode`) directory as `/data` in the container's context
* `-v /tmp/.X11-unix/:/tmp/.X11-unix/` - passes the X11 socket directory to the container's context (to allow running GUI applications)
* `-e DISPLAY=$DISPLAY`, `-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR` - adds X11-related environment variables
* `--gpus='all,"capabilities=compute,utility,graphics,display"'` - adds GPUs to the container's context for compute and display purposes
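
Taken together, these flags correspond to a manual `docker run` invocation roughly like the sketch below (the image name is a placeholder for whatever tag `build-docker.sh` produced):

```bash
# Rough equivalent of the run script, based on the flags listed above;
# <image-name> is a placeholder, not the actual tag.
docker run -it \
    -v $(pwd):/data \
    -v /tmp/.X11-unix/:/tmp/.X11-unix/ \
    -e DISPLAY=$DISPLAY \
    -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
    --gpus='all,"capabilities=compute,utility,graphics,display"' \
    <image-name>
```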

Then, in the Docker container, install graphics libraries for NVIDIA that match your host's drivers.
To check the NVIDIA driver version, run:

```bash
nvidia-smi
```

Then check the `Driver Version` field.

For example, for 530.41.03, install the following in the container:
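
The exact package names depend on the container's distribution and driver series; a minimal sketch for an Ubuntu-based image, assuming NVIDIA's `libnvidia-gl` packaging, is:

```bash
# Hypothetical example - match the package version to the host driver
# series (530 here); exact package names vary across distributions.
apt-get update && apt-get install -y libnvidia-gl-530
```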

The export script takes the following arguments:

* `--image` - path to the image to run inference on
* `--output` - path to the directory where the exported model will be stored
* `--method` - method for model export. Should be one of: `onnx`, `torchscript`
* `--num-classes` - optional argument indicating the number of classes to use in the model architecture
* `--weights` - optional argument indicating path to the file storing weights.
  By default, fetches COCO pre-trained model weights from the model zoo.

For example, to export the model to `TorchScript` and locate it in the `config` directory, run:

```bash
curl http://images.cocodataset.org/val2017/000000000632.jpg --output image.jpg
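# Hypothetical invocation of the export script (the script name is assumed,
# not taken from this README); the arguments are those listed above.
python3 export_model.py --image image.jpg --output ./config --method torchscript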
```

Later, the model can be loaded with the `mask_rcnn_torchscript_launch.py` launch file.

## Building the MaskRCNN demo

First, source the ROS2 environment:

```bash
source /opt/ros/setup.sh
```

Then, build the GUI node and the Camera node:

```bash
colcon build --base-path=src/ --packages-select \
--cmake-args ' -DBUILD_GUI=ON' ' -DBUILD_MASK_RCNN=ON ' ' -DBUILD_MASK_RCNN_TORCHSCRIPT=ON' ' -DBUILD_TORCHVISION=ON'
```

Here, the `--cmake-args` are:

* `-DBUILD_GUI=ON` - builds the GUI for CVNodeManager
* `-DBUILD_MASK_RCNN=ON` and `-DBUILD_MASK_RCNN_TORCHSCRIPT=ON` - build the MaskRCNN demos
* `-DBUILD_TORCHVISION=ON` - builds the TorchVision library needed for MaskRCNN

Source the build targets with:

```bash
source install/setup.sh
```

## Running the demo

This example provides two launch scripts for running the demo:

* `mask_rcnn_detectron_launch.py` - runs the MaskRCNN node with Python Detectron2 backend
* `mask_rcnn_torchscript_launch.py` - runs the MaskRCNN node with C++ TorchScript backend

You can run a sample launch with the Python backend as follows:

```bash
ros2 launch cvnode_base mask_rcnn_detectron_launch.py \
log_level:=INFO
```

For a C++ backend, run:

```bash
ros2 launch cvnode_base mask_rcnn_torchscript_launch.py \
log_level:=INFO
```

Here, the parameters are:

* `model_path` - path to a TorchScript model
* `class_names_path` - path to a CSV file with class names
* `inference_configuration` - path to a JSON file with Kenning's inference configuration
* `publish_visualizations` - whether to publish visualizations for the GUI
* `preserve_output` - whether to preserve the output of the last inference if timeout is reached
* `scenario` - scenario for running the demo, one of:
  * `real_world_last` - tries to process the last received frame within the timeout
  * `real_world_first` - tries to process the first received frame
  * `synthetic` - ignores the timeout and processes frames as fast as possible
* `inference_timeout_ms` - timeout for inference in milliseconds. Used only by `real_world` scenarios
* `measurements` - path to the file where inference measurements will be stored
* `report_path` - path to the file where the rendered report will be stored
* `log_level` - log level for running the demo

The produced reports can later be found in the `/data/build/reports` directory.
28 changes: 14 additions & 14 deletions examples/yolact/README.md
# Instance segmentation inference testing with YOLACT

This demo runs the YOLACT instance segmentation model on sequences from the [LindenthalCameraTraps](https://lila.science/datasets/lindenthal-camera-traps/) dataset.
The demo consists of four parts:

* `CVNodeManager` - manages the testing scenario and data flow between the data provider and the tested CVNode.
* `CVNodeManagerGUI` - visualizes input data and results of inference testing.
* `Kenning` - provides sequences from the LindenthalCameraTraps dataset and collects inference results.
* `CVNode` - runs inference on input images and returns results.

## Dependencies


## Building the demo

First, load the `setup.sh` script for ROS 2 tools, e.g.:

```bash
source /opt/ros/setup.sh
```

Then, build the GUI node and the YOLACT CVNodes:

```bash
colcon build --base-path=src/ --packages-select \
--cmake-args ' -DBUILD_GUI=ON' ' -DBUILD_YOLACT=ON'
```

Here, the `--cmake-args` are:

* `-DBUILD_GUI=ON` - builds the GUI for CVNodeManager
* `-DBUILD_YOLACT=ON` - builds the YOLACT CVNodes
Source the build targets with:

```bash
source install/setup.sh
```

## Running the demo

This example provides a single launch script for running the demo:

* `yolact_launch.py` - starts the provided executable as CVNode along with other nodes.

Run a sample launch with the TFLite backend using:

```bash
ros2 launch cvnode_base yolact_launch.py \
log_level:=INFO
```

Here, the parameters are:

* `backend` - backend to use, one of:
  * `tflite` - TFLite backend
  * `tvm` - TVM backend
  * `onnxruntime` - ONNXRuntime backend
* `model_path` - path to the model file.
  Make sure the IO specification is placed alongside the model file, with the same name and a `.json` extension.
* `scenario` - scenario for running the demo, one of:
  * `real_world_last` - tries to process the last received frame within the timeout
  * `real_world_first` - tries to process the first received frame
  * `synthetic` - ignores the timeout and processes frames as fast as possible
* `measurements` - path to the file where inference measurements will be stored
* `report_path` - path to the file where the rendered report will be stored
* `log_level` - log level for running the demo
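
For reference, a fully-specified invocation using the parameters above (including the `backend` parameter name as listed) could look like this sketch, with all paths as hypothetical placeholders:

```bash
# All paths below are example placeholders - adjust to your setup.
ros2 launch cvnode_base yolact_launch.py \
    backend:=tflite \
    model_path:=/data/models/yolact.tflite \
    scenario:=synthetic \
    measurements:=/data/build/measurements.json \
    report_path:=/data/build/reports/report.md \
    log_level:=INFO
```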

The produced reports can later be found in the `/data/build/reports` directory.

This demo supports TFLite, TVM and ONNX backends.
For more information on how to export a model for these backends, see the [Kenning documentation](https://antmicro.github.io/kenning/json-scenarios.html).
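
As a rough sketch, model compilation in Kenning is driven by a JSON scenario file (the filename below is hypothetical; see the documentation linked above for the scenario format):

```bash
# Hypothetical scenario file - the scenario contents define the model,
# optimizers and runtime for the chosen backend.
kenning optimize --json-cfg yolact-tflite-scenario.json
```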
