updating docs for object_detection for v1 (#284)
Signed-off-by: greg pereira <[email protected]>
Gregory-Pereira authored Apr 17, 2024
1 parent dc885a6 commit 53af15b
Showing 2 changed files with 63 additions and 29 deletions.
4 changes: 2 additions & 2 deletions recipes/audio/audio_to_text/README.md

@@ -15,7 +15,7 @@

# Build the Application

The rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the application above is built, run, and what purpose it serves in the overall application. All the recipes use a central [Makefile](../../common/Makefile.common) that includes variables populated with default values to simplify getting started. Please review the [Makefile docs](../../common/README.md) to learn about further customizing your application.

* [Download a model](#download-a-model)
* [Build the Model Service](#build-the-model-service)

@@ -88,7 +88,7 @@

Once the streamlit application is up and running, you should be able to access it at `http://localhost:8501`.
From here, you can upload audio files from your local machine and translate the audio files as shown below.

By using this recipe and getting this starting point established,
users should now have an easier time customizing and building their own AI enabled applications.

#### Input audio files

88 changes: 61 additions & 27 deletions recipes/computer_vision/object_detection/README.md
@@ -1,58 +1,92 @@
# Object Detection

This recipe helps developers start building their own custom AI enabled object detection applications. It consists of two main components: the Model Service and the AI Application.

There are a few options today for local Model Serving, but this recipe will use our FastAPI [`object_detection_python`](../../../model_servers/object_detection_python/src/object_detection_server.py) model server. There is a Containerfile provided that can be used to build this Model Service within the repo, [`model_servers/object_detection_python/base/Containerfile`](/model_servers/object_detection_python/base/Containerfile).
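
If you want to build that image directly rather than through the make targets described below, a podman invocation along these lines should work. The image tag and build context here are illustrative assumptions, so treat the Containerfile and Makefile as the source of truth.

```bash
# from the root of the containers/ai-lab-recipes repo
# the tag and context below are illustrative; the make targets manage their own
podman build -t object_detection_python \
    -f model_servers/object_detection_python/base/Containerfile \
    model_servers/object_detection_python
```
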
The AI Application will connect to the Model Service via an API. The recipe relies on [Streamlit](https://streamlit.io/) for the UI layer. You can find an example of the object detection application below.

![](/assets/object_detection.png)
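
For orientation, a client can also call the Model Service API directly. The sketch below targets the `/detection` route used by `MODEL_ENDPOINT` later in this document and assumes the service is published on `localhost:8000`; the JSON body is an assumption, so check [`object_detection_server.py`](../../../model_servers/object_detection_python/src/object_detection_server.py) for the exact request schema.

```bash
# Hypothetical request shape -- confirm the real schema in object_detection_server.py.
# Base64-encode a local image and POST it to the detection endpoint.
IMAGE_B64=$(base64 -w 0 sample.jpg)   # on macOS: base64 -i sample.jpg
curl -X POST http://localhost:8000/detection \
    -H "Content-Type: application/json" \
    -d "{\"image\": \"${IMAGE_B64}\"}"
```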

## Try the Object Detection Application:

The [Podman Desktop](https://podman-desktop.io) [AI Lab Extension](https://github.com/containers/podman-desktop-extension-ai-lab) includes this recipe among others. To try it out, open `Recipes Catalog` -> `Object Detection` and follow the instructions to start the application.

# Build the Application

The rest of this document will explain how to build and run the application from the terminal, and will go into greater detail on how each container in the application above is built, run, and what purpose it serves in the overall application. All the Model Server elements of the recipe use a central Model Server [Makefile](../../../model_servers/common/Makefile.common) that includes variables populated with default values to simplify getting started. We do not yet have a Makefile for the Application elements of the recipe; one is coming soon, and it will leverage the recipes' common [Makefile](../../common/Makefile.common) to provide variable configuration and reasonable defaults for this recipe's application.

* [Download a model](#download-a-model)
* [Build the Model Service](#build-the-model-service)
* [Deploy the Model Service](#deploy-the-model-service)
* [Build the AI Application](#build-the-ai-application)
* [Deploy the AI Application](#deploy-the-ai-application)
* [Interact with the AI Application](#interact-with-the-ai-application)

## Download a model

If you are just getting started, we recommend using [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101).
This model performs well and has an Apache-2.0 license.
It's simple to download a copy of the model from [huggingface.co](https://huggingface.co).

You can use the `download-model-facebook-detr-resnet-101` make target in the `model_servers/object_detection_python` directory to download and move the model into the models directory for you:

```bash
# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
make download-model-facebook-detr-resnet-101
```
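
To confirm the download succeeded, you can list the shared models directory. The path below is an assumption about where the make target places the files; confirm the actual location in the Makefile.

```bash
# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
# the location is an assumption -- confirm the download path in the Makefile
ls ../../models
```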

## Build the Model Service

You can build the Model Service from the [object_detection_python model-service directory](../../../model_servers/object_detection_python).

```bash
# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
make build
```

Check out the [Makefile](../../../model_servers/object_detection_python/Makefile) for more details on the different build options.

## Deploy the Model Service

The local Model Service relies on a volume mount to the localhost to access the model files. It also employs environment variables to dictate the model used and where it is served. You can start your local Model Service using the following `make` command from the [`model_servers/object_detection_python`](../../../model_servers/object_detection_python) directory, which comes with reasonable defaults:

```bash
# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
make run
```
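
For reference, `make run` amounts to a `podman run` along the lines of the sketch below. The image tag and mount source are assumptions, so treat the Makefile as the source of truth for the actual values.

```bash
# Rough, assumed equivalent of `make run` -- image tag and paths are illustrative.
# The host models directory is mounted into the container, and MODEL_PATH tells
# the server which model to load from that mount.
podman run -it --rm -p 8000:8000 \
    -v "$(realpath ../../models)":/models \
    -e MODEL_PATH=/models/facebook/detr-resnet-101 \
    object_detection_python
```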

As stated above, the model service uses [`facebook/detr-resnet-101`](https://huggingface.co/facebook/detr-resnet-101) by default. However, you can use other compatible models: simply pass the new `MODEL_NAME` and `MODEL_PATH` to the make command. Make sure the model is downloaded and exists in the [models directory](../../../models/):

```bash
# from path model_servers/object_detection_python from repo containers/ai-lab-recipes
make MODEL_NAME=facebook/detr-resnet-50 MODEL_PATH=/models/facebook/detr-resnet-50 run
```

## Build the AI Application

Now that the Model Service is running, we want to build and deploy our AI Application. Use the provided Containerfile to build the AI Application
image from the [`object_detection/`](./) recipe directory.

```bash
# from path recipes/computer_vision/object_detection from repo containers/ai-lab-recipes
podman build -t object_detection_client .
```

## Deploy the AI Application

Make sure the Model Service is up and running before starting this container image.
When starting the AI Application container image, we need to direct it to the correct `MODEL_ENDPOINT`.
This could be any appropriately hosted Model Service (running locally or in the cloud) using a compatible API.
The following Podman command can be used to run your AI Application:

```bash
podman run -p 8501:8501 -e MODEL_ENDPOINT=http://10.88.0.1:8000/detection object_detection_client
```
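
`10.88.0.1` is the default gateway address of Podman's bridge network, which lets the containerized client reach a Model Service published on the host; any other reachable URL for a compatible Model Service works as well. Before launching the client, you can sanity-check that the service is listening. Since the model server is FastAPI-based, its interactive docs are typically served at `/docs` (assuming the server does not disable them):

```bash
# Confirm the Model Service responds before starting the client application.
curl -i http://localhost:8000/docs
```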

## Interact with the AI Application

Once the client is up and running, you should be able to access it at `http://localhost:8501`. From here you can upload images from your local machine and detect objects in the image as shown below.

By using this recipe and getting this starting point established,
users should now have an easier time customizing and building their own AI enabled applications.
