DOCS-2688: Make switch to modal from models dropdown #3239

Merged
Binary file modified assets/tutorials/data-management/mlmodel-service-conf.png
16 changes: 7 additions & 9 deletions docs/services/ml/deploy/tflite_cpu.md
@@ -41,20 +41,18 @@ You can choose to configure your service with an existing model on the machine o

 1. To configure your service and deploy a model onto your machine, select **Deploy model on machine** for the **Deployment** field in the resulting ML model service configuration pane.

-2. Click on **Select models** to open a dropdown with all of the ML models available to you privately, as well as all of the ML models available in [the registry](https://app.viam.com), which are shared by users.
-   Models that your organization has trained that are not uploaded to the registry will appear first in the dropdown.
-   You can select from any of these models to deploy on your robot.
-   Only TensorFlow Lite models are shown.
+2. Click **Select model**.
+   In the modal that appears, search for models from your organization or the [Registry](/registry/).

-   {{<imgproc src="/services/deploy-model-menu.png" resize="700x" alt="Models dropdown menu with models from the registry.">}}
+   {{<imgproc src="/tutorials/data-management/mlmodel-modal.png" alt="The ML model service configuration modal." resize="500x" >}}

-   {{% alert title="Tip" color="tip" %}}
-   To see more details about a model, open its page in [the registry](https://app.viam.com).
-   {{% /alert %}}
+   You can select a model to see more details about it, and then select the model to deploy it to your machine.
+
+   {{<imgproc src="/tutorials/data-management/mlmodel-service-conf.png" alt="The ML model service configuration modal with a model suggested." resize="450x" >}}

 3. Also, optionally select the **Number of threads**.

-   {{<imgproc src="/services/deploy-model.png" resize="700x" alt="Create a machine learning models service with a model to be deployed">}}
 4. Click **Save** at the top right of the window to save your changes.

 {{% /tab %}}
 {{% tab name="Path to Existing Model On Robot" %}}
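For context on what the steps above produce: deploying a model through this pane writes an ML model service entry into the machine's JSON config. A minimal sketch of what such an entry might look like, assuming the common `model_path`/`label_path`/`num_threads` attribute names and a hypothetical package reference (none of these specifics appear in this diff):

```json
{
  "name": "my-mlmodel-service",
  "type": "mlmodel",
  "model": "tflite_cpu",
  "attributes": {
    "model_path": "${packages.my-model}/my-model.tflite",
    "label_path": "${packages.my-model}/labels.txt",
    "num_threads": 1
  }
}
```

The **Number of threads** field in the UI corresponds to the `num_threads` attribute here.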
3 changes: 2 additions & 1 deletion docs/tutorials/projects/filtered-camera.md
@@ -244,7 +244,8 @@ Add the ML model service to your machine to be able to deploy and update ML mode
 1. Navigate to your machine's **CONFIGURE** tab in the [Viam app](https://app.viam.com).
 1. Click the **+** (Create) button next to your main part in the left-hand menu and select **Service**, then select the built-in `TFLite CPU` model.
 1. Use the suggested name for your service or give it a name, like `my-mlmodel-service`, then click **Create**.
-1. On the panel that appears, select the **Deploy model on machine** toggle, then select your model from the **Select model** dropdown.
+1. On the panel that appears, select the **Deploy model on machine** toggle, then select **Select model**.
+   Select your model from the modal that appears.
 If you don't see your model name appear here, ensure that your model appears under the [**Models** subtab](https://app.viam.com/data/models) of the **DATA** tab in the Viam app.
 If you trained your own model, ensure that the model has finished training and appears under the **Models** section of that page, and not the **Training** section.
 1. Click **Save** in the top right corner of the page to save your changes.
2 changes: 1 addition & 1 deletion docs/tutorials/projects/integrating-viam-with-openai.md
@@ -255,7 +255,7 @@ To configure an ML model service:
 Your robot will register this as a machine learning model and make it available for use.

 Select **Deploy model on machine** for the **Deployment** field.
-Then select the `viam-labs:EfficientDet-COCO` model from the **Models** dropdown.
+Click **Select model**, then select the `viam-labs:EfficientDet-COCO` model from the modal that appears.

 Now, create a vision service to visualize your ML model:
2 changes: 1 addition & 1 deletion docs/tutorials/projects/verification-system.md
@@ -93,7 +93,7 @@ The model can detect a variety of things which you can see in <file>[labels.txt]
 3. Select type `ML model`, then select model `TFLite CPU`.
 4. Enter `persondetect` as the name for your ML model service, then click **Create**.
 5. Select **Deploy model on machine** for the **Deployment** field.
-6. Then select the `viam-labs:EfficientDet-COCO` model from the **Select models** dropdown.
+6. Click **Select model**, then select the **EfficientDet-COCO** model by **viam-labs** from the **Registry** tab of the modal that appears.

 Finally, configure an `mlmodel` detector vision service to use your new `"persondetect"` ML model:
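The `mlmodel` detector vision service the hunk above leads into is also configured as JSON on the machine. A minimal sketch of what that config might look like, assuming the `mlmodel_name` attribute links the detector to the ML model service by name (this specific snippet is illustrative and not part of this diff):

```json
{
  "name": "person-detector",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "persondetect"
  }
}
```

Here `"persondetect"` matches the ML model service name entered in step 4 above, which is how the vision service finds the deployed model.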
5 changes: 3 additions & 2 deletions docs/tutorials/services/data-mlmodel-tutorial.md
@@ -230,9 +230,10 @@ To deploy a model to your machine:
 1. Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
 1. Select the `ML model` type, then select the `TFLite CPU` model.
 1. Enter a name or use the suggested name, like `my-mlmodel-service`, for your service and click **Create**.
-1. In the resulting ML Model service configuration pane, select **Deploy model on machine**, then select the model you just trained from the **Select model** dropdown menu.
+1. In the resulting ML Model service configuration pane, select **Deploy model on machine**, then click **Select model**.
+   In the modal that appears, search for and select a model from your organization or the [Registry](/registry/).

-   {{< imgproc src="/tutorials/data-management/mlmodel-service-conf.png" alt="The ML model service configuration pane showing the required settings to deploy the my-classifier-model." resize="600x" >}}
+   {{<imgproc src="/tutorials/data-management/mlmodel-service-conf.png" alt="The ML model service configuration pane showing the required settings to deploy the my-classifier-model." resize="400x">}}

 1. Click **Save** at the top right of the window to save your changes.