diff --git a/assets/services/data/time-series.png b/assets/services/data/time-series.png new file mode 100644 index 0000000000..506189e99a Binary files /dev/null and b/assets/services/data/time-series.png differ diff --git a/docs/data-ai/_index.md b/docs/data-ai/_index.md index e54164b6d3..0f1b7d7b2c 100644 --- a/docs/data-ai/_index.md +++ b/docs/data-ai/_index.md @@ -8,3 +8,13 @@ no_list: true open_on_desktop: true overview: true --- + + + +Machine learning (ML) provides your machines with the ability to adjust their behavior based on models that recognize patterns or make predictions. + +Common use cases include: + +- Object detection, which enables machines to detect people, animals, plants, or other objects with bounding boxes, and to perform actions when they are detected. +- Object classification, which enables machines to separate people, animals, plants, or other objects into predefined categories based on their characteristics, and to perform different actions based on the classes of objects. +- Speech recognition, natural language processing, and speech synthesis, which enable machines to verbally communicate with us. diff --git a/docs/data-ai/ai/act.md b/docs/data-ai/ai/act.md index 4adb7aea52..86a1376eeb 100644 --- a/docs/data-ai/ai/act.md +++ b/docs/data-ai/ai/act.md @@ -4,6 +4,148 @@ title: "Act based on inferences" weight: 70 layout: "docs" type: "docs" -no_list: true -description: "TODO" +description: "Use the vision service API to act based on inferences." --- + +You can use the [vision service API](/dev/reference/apis/services/vision/) to get information about your machine's inferences and program behavior based on that. + +## Program a line following robot + +For example, you can [program a line following robot](/tutorials/services/color-detection-scuttle/) that uses a vision service to follow a colored object. 
+ +You can use the following code to detect and follow the location of a colored object: + +{{% expand "click to view code" %}} + +```python {class="line-numbers linkable-line-numbers"} +async def connect(): + opts = RobotClient.Options.with_api_key( + # Replace "" (including brackets) with your machine's API key + api_key='', + # Replace "" (including brackets) with your machine's + # API key ID + api_key_id='' + ) + return await RobotClient.at_address("ADDRESS FROM THE VIAM APP", opts) + + +# Get largest detection box and see if it's center is in the left, center, or +# right third +def leftOrRight(detections, midpoint): + largest_area = 0 + largest = {"x_max": 0, "x_min": 0, "y_max": 0, "y_min": 0} + if not detections: + print("nothing detected :(") + return -1 + for d in detections: + a = (d.x_max - d.x_min) * (d.y_max-d.y_min) + if a > largest_area: + a = largest_area + largest = d + centerX = largest.x_min + largest.x_max/2 + if centerX < midpoint-midpoint/6: + return 0 # on the left + if centerX > midpoint+midpoint/6: + return 2 # on the right + else: + return 1 # basically centered + + +async def main(): + spinNum = 10 # when turning, spin the motor this much + straightNum = 300 # when going straight, spin motor this much + numCycles = 200 # run the loop X times + vel = 500 # go this fast when moving motor + + # Connect to robot client and set up components + machine = await connect() + base = Base.from_robot(machine, "my_base") + camera_name = "" + camera = Camera.from_robot(machine, camera_name) + frame = await camera.get_image(mime_type="image/jpeg") + + # Convert to PIL Image + pil_frame = viam_to_pil_image(frame) + + # Grab the vision service for the detector + my_detector = VisionClient.from_robot(machine, "my_color_detector") + + # Main loop. Detect the ball, determine if it's on the left or right, and + # head that way. Repeat this for numCycles + for i in range(numCycles): + detections = await my_detector.get_detections_from_camera(camera_name) + + answer = leftOrRight(detections, pil_frame.size[0]/2) + if answer == 0: + print("left") + await base.spin(spinNum, vel) # CCW is positive + await base.move_straight(straightNum, vel) + if answer == 1: + print("center") + await base.move_straight(straightNum, vel) + if answer == 2: + print("right") + await base.spin(-spinNum, vel) + # If nothing is detected, nothing moves + + await robot.close() + +if __name__ == "__main__": + print("Starting up... ") + asyncio.run(main()) + print("Done.") +``` + +{{% /expand%}} + +If you configured the color detector to detect red in the Viam app, your rover should detect and navigate towards any red objects that come into view of its camera. +Use something like a red sports ball or book cover as a target to follow to test your rover: + +
+{{
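The code above is not self-contained: it assumes the usual Viam Python SDK imports. A minimal sketch of the imports the snippet relies on (exact module paths may differ slightly between SDK versions):

```python {class="line-numbers linkable-line-numbers"}
import asyncio

from viam.robot.client import RobotClient
from viam.components.base import Base
from viam.components.camera import Camera
from viam.services.vision import VisionClient
from viam.media.utils.pil import viam_to_pil_image
```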
+ +## Act in industrial applications + +You can also act based on inferences in an industrial context. +For example, you can program a robot arm to halt operations when workers enter dangerous zones, preventing potential accidents. + +The code for this would look like: + +```python {class="line-numbers linkable-line-numbers"} +detections = await detector.get_detections_from_camera(camera_name) +for d in detections: + if d.confidence > 0.6 and d.class_name == "PERSON": + arm.stop() +``` + +You can also use inferences of computer vision for quality assurance purposes. +For example, you can program a robot arm doing automated harvesting to use vision to identify ripe produce and pick crops selectively. + +The code for this would look like: + +```python {class="line-numbers linkable-line-numbers"} +classifications = await detector.get_classifications_from_camera( + camera_name, + 4) +for c in classifications: + if d.confidence > 0.6 and d.class_name == "RIPE": + arm.pick() +``` + +To get inferences programmatically, you will want to use the vision service API: + +{{< cards >}} +{{% card link="/dev/reference/apis/services/vision/" customTitle="Vision service API" noimage="True" %}} +{{< /cards >}} + +To implement industrial solutions in code, you can also explore the following component APIs: + +{{< cards >}} +{{< card link="/dev/reference/apis/components/arm/" customTitle="Arm API" noimage="True" >}} +{{< card link="/dev/reference/apis/components/base/" customTitle="Base API" noimage="True" >}} +{{< card link="/dev/reference/apis/components/camera/" customTitle="Camera API" noimage="True" >}} +{{< card link="/dev/reference/apis/components/gripper/" customTitle="Gripper API" noimage="True" >}} +{{< card link="/dev/reference/apis/components/motor/" customTitle="Motor API" noimage="True" >}} +{{< card link="/dev/reference/apis/components/sensor/" customTitle="Sensor API" noimage="True" >}} +{{< /cards >}} diff --git a/docs/data-ai/ai/advanced/conditional-sync.md b/docs/data-ai/ai/advanced/conditional-sync.md deleted file mode 100644 index 9d8fae7746..0000000000 --- a/docs/data-ai/ai/advanced/conditional-sync.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -linkTitle: "Upload external data" -title: "Upload external data for training" -weight: 20 -layout: "docs" -type: "docs" -no_list: true -description: "TODO" ---- diff --git a/docs/data-ai/ai/advanced/upload-external-data.md b/docs/data-ai/ai/advanced/upload-external-data.md new file mode 100644 index 0000000000..39e2d75951 --- /dev/null +++ b/docs/data-ai/ai/advanced/upload-external-data.md @@ -0,0 +1,297 @@ +--- +linkTitle: "Upload external data" +title: "Upload external data for training" +images: ["/services/icons/data-folder.svg"] +weight: 20 +layout: "docs" +type: "docs" +languages: ["python"] +viamresources: ["data_manager"] +aliases: + - /data/upload/ + - /services/data/upload/ + - /how-tos/upload-data/ +date: "2024-12-04" +description: "Upload data to the Viam app from your local computer or mobile device using the data client API, Viam CLI, or Viam mobile app." +--- + +If you configured the [data management service](/services/data/), Viam automatically uploads data from the configured directory to the cloud, at the interval you specified. +However, if you want to upload a batch of data once from somewhere else, either from a different directory on your machine or from your personal computer or mobile device, you have several options using the Viam app, the data client API, or the Viam mobile app. 
+ +## Sync a batch of data from another directory + +Typically, you configure the data service to sync data from your machine at regular intervals indefinitely. +However, if you already have a cache of data you'd like to use with Viam, you can temporarily modify your configuration to sync a batch of data and then revert your config changes after the data is uploaded. + +### Prerequisites + +{{% expand "A running machine connected to the Viam app. Click to see instructions." %}} + +{{% snippet "setup-both.md" %}} + +{{% /expand%}} + +{{< expand "Enable data capture and sync on your machine." >}} + +Add the [data management service](/services/data/): + +On your machine's **CONFIGURE** tab, click the **+** icon next to your machine part in the left-hand menu and select **Service**. + +Select the `data management / RDK` service and click **Create**. +You can leave the default data sync interval of `0.1` minutes to sync every 6 seconds. + +{{< /expand >}} + +### Instructions + +{{% alert title="Note" color="note" %}} + +This method of uploading data will delete the data from your machine once it is uploaded to the cloud. + +If you do not want the data deleted from your machine, copy the data to a new folder and sync that folder instead so that your local copy remains. + +{{% /alert %}} + +{{< table >}} +{{% tablestep %}} +{{}} +**1. Organize your data** + +Put the data you want to sync in a directory on your machine. +All of the data in the folder will be synced, so be sure that you want to upload all of the contents of the folder. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Configure sync from the additional folder** + +In the **Additional paths**, enter the full path to the directory where the data you want to upload is stored, for example, `/Users/Artoo/my_cat_photos`. + +Toggle **Syncing** to on (green) if it isn't already on. + +{{}} + +Click **Save** in the top right corner of the page. + +{{% /tablestep %}} +{{% tablestep %}} +**3. Confirm that your data uploaded** + +Navigate to your [**DATA** page in the Viam app](https://app.viam.com/data/view) and confirm that your data appears there. +If you don't see your files yet, wait a few moments and refresh the page. + +{{% /tablestep %}} +{{% tablestep %}} +**4. Remove the folder path** + +Once the data has uploaded, navigate back to your data service config. +You can now delete the additional path you added. +You can also turn off **Syncing** unless you have other directories you'd like to continue to sync from. +{{% /tablestep %}} +{{< /table >}} + +## Upload data with Python + +You can use the Python data client API [`file_upload_from_path`](/appendix/apis/data-client/#fileuploadfrompath) method to upload one or more files from your computer to the Viam Cloud. + +{{% alert title="Note" color="note" %}} + +Unlike data sync, using the `file_upload_from_path` API method uploads all the data even if that data already exists in the cloud. +In other words, it duplicates data if you run it multiple times. + +Also unlike data sync, this method _does not_ delete data from your device. + +{{% /alert %}} + +### Prerequisites + +{{< expand "Install the Viam Python SDK" >}} + +Install the [Viam Python SDK](https://python.viam.dev/) by running the following command on the computer from which you want to upload data: + +```sh {class="command-line" data-prompt="$"} +pip install viam-sdk +``` + +{{< /expand >}} + +### Instructions + +{{< table >}} +{{% tablestep link="/appendix/apis/data-client/#establish-a-connection" %}} +**1. 
Get API key** + +Go to your organization's setting page and create an API key for your individual {{< glossary_tooltip term_id="part" text="machine part" >}}, {{< glossary_tooltip term_id="part" text="machine" >}}, {{< glossary_tooltip term_id="location" text="location" >}}, or {{< glossary_tooltip term_id="organization" text="organization" >}}. + +{{% /tablestep %}} +{{% tablestep link="/appendix/apis/data-client/" %}} +**2. Add a `file_upload_from_path` API call** + +Create a Python script and use the `file_upload_from_path` method to upload your data, depending on whether you are uploading one or multiple files: + +{{< tabs >}} +{{< tab name="Upload a single file" >}} + +To upload just one file, make a call to [`file_upload_from_path`](/appendix/apis/data-client/#fileuploadfrompath). + +{{< expand "Click this to see example code" >}} + +```python {class="line-numbers linkable-line-numbers"} +import asyncio + +from viam.rpc.dial import DialOptions, Credentials +from viam.app.viam_client import ViamClient + + +async def connect() -> ViamClient: + dial_options = DialOptions( + credentials=Credentials( + type="api-key", + # Replace "" (including brackets) with your machine's API key + payload='', + ), + # Replace "" (including brackets) with your machine's + # API key ID + auth_entity='' + ) + return await ViamClient.create_from_dial_options(dial_options) + + +async def main(): + # Make a ViamClient + viam_client = await connect() + # Instantiate a DataClient to run data client API methods on + data_client = viam_client.data_client + await data_client.file_upload_from_path( + # The ID of the machine part the file should be associated with + part_id="abcdefg-1234-abcd-5678-987654321xyzabc", + # Any tags you want to apply to this file + tags=["cat", "animals", "brown"], + # Path to the file + filepath="/Users/Artoo/my_cat_photos/brown-cat-on-a-couch.png" + ) + + viam_client.close() + +if __name__ == "__main__": + asyncio.run(main()) +``` + +{{< /expand >}} + +{{% /tab %}} +{{< tab name="Upload all files in a directory" >}} + +To upload all the files in a directory, you can use the [`file_upload_from_path`](/appendix/apis/data-client/#fileuploadfrompath) method inside a `for` loop. + +{{< expand "Click this to see example code" >}} + +```python {class="line-numbers linkable-line-numbers"} +import asyncio +import os + +from viam.rpc.dial import DialOptions, Credentials +from viam.app.viam_client import ViamClient + + +async def connect() -> ViamClient: + dial_options = DialOptions( + credentials=Credentials( + type="api-key", + # Replace "" (including brackets) with your machine's API key + payload='', + ), + # Replace "" (including brackets) with your machine's + # API key ID + auth_entity='' + ) + return await ViamClient.create_from_dial_options(dial_options) + + +async def main(): + # Make a ViamClient + viam_client = await connect() + # Instantiate a DataClient to run data client API methods on + data_client = viam_client.data_client + # Specify directory from which to upload data + my_data_directory = "/Users/Artoo/my_cat_photos" + + for file_name in os.listdir(my_data_directory): + await data_client.file_upload_from_path( + part_id="abcdefg-1234-abcd-5678-987654321xyzabc", + tags=["cat", "animals", "brown"], + filepath=os.path.join(my_data_directory, file_name) + ) + + viam_client.close() + +if __name__ == "__main__": + asyncio.run(main()) +``` + +{{< /expand >}} + +{{% /tab %}} +{{< /tabs >}} + +{{% /tablestep %}} +{{% tablestep %}} +{{}} +**3. 
Run your code** + +Save and run your code once. +Running your code more than once will duplicate the data. +View your uploaded data in your [**DATA** page in the Viam app](https://app.viam.com/data/view). + +{{% /tablestep %}} +{{< /table >}} + +## Upload images with the Viam mobile app + +Upload images as machine data straight from your phone, skipping the normal data capture and cloud synchronization process, through the [Viam mobile app](/fleet/control/#control-interface-in-the-viam-mobile-app). +This is useful if you want to capture images for training machine learning models on the go. + +### Prerequisites + +{{< expand "Download the Viam mobile app and sign into your Viam account" >}} + +Install the mobile app from the [App Store](https://apps.apple.com/vn/app/viam-robotics/id6451424162) or [Google Play](https://play.google.com/store/apps/details?id=com.viam.viammobile&hl=en&gl=US). + + + apple store icon + + + + google play store icon + + +{{< /expand >}} + +### Instructions + +{{< table >}} +{{% tablestep link="/services/data/" %}} +**1. Navigate to your machine** + +In the Viam mobile app, select an organization by clicking on the menu icon in the top left corner and tapping an organization. + +Tap the **Locations** tab and select a location, then select the machine you want your data to be associated with. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Upload images** + +Tap the menu button marked "**...**" in the upper right corner. +Tap **Upload Images**. + +Select each image you want to upload, then tap **Add**. + +The uploaded images metadata will contain the machine part you selected. +However, the uploaded images will not be associated with a component or method. + +{{% /tablestep %}} +{{< /table >}} + +## Next steps + +Now that you have a batch of data uploaded, you can [train an ML model](/data-ai/ai/train-tflite/) on it. +Or, if you want to collect and upload data _not_ in a batch, see [Create a dataset](/data-ai/ai/create-dataset/). diff --git a/docs/data-ai/ai/alert.md b/docs/data-ai/ai/alert.md index 8f26dda2eb..adcec026df 100644 --- a/docs/data-ai/ai/alert.md +++ b/docs/data-ai/ai/alert.md @@ -4,6 +4,121 @@ title: "Alert on inferences" weight: 60 layout: "docs" type: "docs" -no_list: true -description: "TODO" +description: "Use triggers to send email notifications when inferences are made." --- + +At this point, you should have already set up and tested [computer vision functionality](/data-ai/ai/run-inference/). +On this page, you'll learn how to use triggers to send alerts in the form of email notifications when certain detections or classifications are made. + +You will build a system that can monitor camera feeds and detect situations that require review. +In other words, this system performs anomaly detection. +Whenever the system detects an anomaly, it will send an email notification. + +First, you'll set up data capture and sync to record images with the anomaly and upload them to the cloud. +Next, you'll configure a trigger to send email notifications when the anomaly is detected. + +## Configured a filtered camera + +Your physical camera is working and your vision service is set up. +Now you will pull them together to filter out only images where an inference is made with the [`filtered-camera`](https://app.viam.com/module/erh/filtered-camera) {{< glossary_tooltip term_id="module" text="module" >}}. 
+This camera module takes the vision service and applies it to your webcam feed, filtering the output so that later, when you configure data management, you can save only the images that contain people without hard hats rather than all images the camera captures. + +Configure the camera module with classification or object labels according to the labels your ML model provides that you want to alert on. +Follow the instructions in the [`filtered-camera` module readme](https://github.com/erh/filtered_camera). +For example, if using the YOLOv8 model (named `yolo`) for hardhat detection, you would configure the module like the following: + +{{% expand "Instructions for configuring the filtered-camera module to detect people without a hardhat" %}} + +1. Navigate to your machine's **CONFIGURE** tab. + +2. Click the **+** (Create) button next to your main part in the left-hand menu and select **Component**. + Start typing `filtered-camera` and select **camera / filtered-camera** from the results. + Click **Add module**. + +3. Name your filtering camera something like `objectfilter-cam` and click **Create**. + +4. Paste the following into the attributes field: + + ```json {class="line-numbers linkable-line-numbers"} + { + "camera": "my_webcam", + "vision": "yolo", + "window_seconds": 3, + "objects": { + "NO-Hardhat": 0.5 + } + } + ``` + + If you named your detector something other than "yolo," edit the `vision_services` value accordingly. + You can also edit the confidence threshold. + If you change it to `0.6` for example, the `filtered-camera` camera will only return labeled bounding boxes when the vision model indicates at least 60% confidence that the object is a hard hat or a person without a hard hat. + +5. Click **Save** in the top right corner of the screen to save your changes. + +{{% /expand%}} + +## Configure data capture and sync + +Viam's built-in [data management service](/services/data/) allows you to, among other things, capture images and sync them to the cloud. + +Configure data capture on the `filtered-camera` camera to capture images of detections or classifications: + +1. First, you need to add the data management service to your machine to make it available to capture data on your camera. + + Navigate to your machine's **CONFIGURE** tab. + + Click the **+** (Create) button next to your main part in the left-hand menu and select **Service**. + Type "data" and click **data management / RDK**. + Name your data management service `data-manager` and click **Create**. + + Leave all the default data service attributes as they are and click **Save** in the top right corner of the screen to save your changes. + +2. Now you're ready to enable data capture on your detector camera. + Locate the `objectfilter-cam` panel. + +3. Click **Add method**. + Click the **Type** dropdown and select **ReadImage**. + Set the capture frequency to `0.2` images per second (equivalent to one image every 5 seconds). + You can always change the frequency to suit your use case. + Set the **MIME type** to `image/jpeg`. + +## Set up alerts + +[Triggers](/configure/triggers/) allow you to send webhook requests or email notifications when certain events happen. + +You can use the **Data has been synced to the cloud** trigger to send email alerts whenever an image with an anomaly detection is synced to the cloud from your object filter camera. + +### Configure a trigger on your machine + +Now it's time to configure a trigger so that you get an email when a person is not wearing a hard hat. 
+ +Go to the **CONFIGURE** tab of your machine on the [Viam app](https://app.viam.com). +Click the **+** (Create) button in the left side menu and select **Trigger**. + +Name the trigger and click **Create**. + +Select trigger **Type** as **Data has been synced to the cloud** and **Data Types** as **Binary (image)**. + +{{}} + +To configure notifications, add an email address. +Also configure the time between notifications. + +Click **Save** in the top right corner of the screen to save your changes. + +## Test the whole system + +You've built all the pieces of the system and connected them together. +Now it's time to test the whole thing. + +Make sure `viam-server` is running on your machine. +Run your camera in front of what you're detecting and wait for an anomaly to appear. +Wait a couple of minutes for the email to arrive in your inbox. +Congratulations, you've successfully built your anomaly detection monitor! + +## Troubleshooting + +### Test the vision service + +To see the detections or classifications occurring in real time and verify if their confidence level reaches the threshold you have set, you can navigate to the vision service card and expand the **TEST** panel. diff --git a/docs/data-ai/ai/create-dataset.md b/docs/data-ai/ai/create-dataset.md index 060ff51f24..796896b27e 100644 --- a/docs/data-ai/ai/create-dataset.md +++ b/docs/data-ai/ai/create-dataset.md @@ -4,6 +4,385 @@ title: "Create a dataset" weight: 10 layout: "docs" type: "docs" -no_list: true -description: "TODO" +description: "Create a dataset to train a machine learning model." --- + +To ensure a machine learning model you create performs well, you need to train it on a variety of images that cover the range of things your machine should be able to recognize. + +This page will walk you through capturing data with the data management service, labeling these images for machine learning, and creating a dataset with them. + +{{% expand "Just testing and want a dataset to get started with? Click here." %}} + +We have two datasets you can use for testing, one with shapes and the other with a wooden figure: + +{{}} + +{{< imgproc src="/tutorials/filtered-camera-module/viam-figure-dataset.png" style="width:400px" alt="The datasets subtab of the data tab in the Viam app, showing a custom 'viam-figure' dataset of 25 images, most containing the wooden Viam figure" class="imgzoom fill aligncenter" resize="1400x" >}} + +1. [Download the shapes dataset](https://storage.googleapis.com/docs-blog/dataset-shapes.zip) or [download the wooden figure dataset](https://storage.googleapis.com/docs-blog/dataset-figure.zip). +1. Unzip the download. +1. Open a terminal and go to the dataset folder. +1. Create a python script in the dataset's folder with the following contents: + + ```python {class="line-numbers linkable-line-numbers"} + # Assumption: The dataset was exported using the `viam dataset export` command. + # This script is being run from the `destination` directory. 
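    # Assumed layout of the export directory (matches the paths used below):
    #   ./data/      the exported image files
    #   ./metadata/  one JSON file per image containing capture metadata,
    #                tags, and any bounding box annotations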
+ + import asyncio + import os + import json + import argparse + + from viam.rpc.dial import DialOptions, Credentials + from viam.app.viam_client import ViamClient + from viam.proto.app.data import BinaryID + + async def connect(args) -> ViamClient: + dial_options = DialOptions( + credentials=Credentials( + type="api-key", + payload=args.api_key, + ), + auth_entity=args.api_key_id + ) + return await ViamClient.create_from_dial_options(dial_options) + + + async def main(): + parser = argparse.ArgumentParser( + description='Upload images, metadata, and tags to a new dataset') + parser.add_argument('-org-id', dest='org_id', action='store', + required=True, help='Org Id') + parser.add_argument('-api-key', dest='api_key', action='store', + required=True, help='API KEY with org admin access') + parser.add_argument('-api-key-id', dest='api_key_id', action='store', + required=True, help='API KEY ID with org admin access') + parser.add_argument('-machine-part-id', dest='machine_part_id', + action='store', required=True, + help='Machine part id for image metadata') + parser.add_argument('-location-id', dest='location_id', action='store', + required=True, help='Location id for image metadata') + parser.add_argument('-dataset-name', dest='dataset_name', action='store', + required=True, + help='Name of the data to create and upload to') + args = parser.parse_args() + + + # Make a ViamClient + viam_client = await connect(args) + # Instantiate a DataClient to run data client API methods on + data_client = viam_client.data_client + + # Create dataset + try: + dataset_id = await data_client.create_dataset( + name=args.dataset_name, + organization_id=args.org_id + ) + print("Created dataset: " + dataset_id) + except Exception: + print("Error. Check that the dataset name does not already exist.") + print("See: https://app.viam.com/data/datasets") + return 1 + + file_ids = [] + + for file_name in os.listdir("metadata/"): + with open("metadata/" + file_name) as f: + data = json.load(f) + tags = None + if "tags" in data["captureMetadata"].keys(): + tags = data["captureMetadata"]["tags"] + + annotations = None + if "annotations" in data.keys(): + annotations = data["annotations"] + + image_file = data["fileName"] + + print("Uploading: " + image_file) + + id = await data_client.file_upload_from_path( + part_id=args.machine_part_id, + tags=tags, + filepath=os.path.join("data/", image_file) + ) + print("FileID: " + id) + + binary_id = BinaryID( + file_id=id, + organization_id=args.org_id, + location_id=args.location_id + ) + + if annotations: + bboxes = annotations["bboxes"] + for box in bboxes: + await data_client.add_bounding_box_to_image_by_id( + binary_id=binary_id, + label=box["label"], + x_min_normalized=box["xMinNormalized"], + y_min_normalized=box["yMinNormalized"], + x_max_normalized=box["xMaxNormalized"], + y_max_normalized=box["yMaxNormalized"] + ) + + file_ids.append(binary_id) + + await data_client.add_binary_data_to_dataset_by_ids( + binary_ids=file_ids, + dataset_id=dataset_id + ) + print("Added files to dataset.") + print("https://app.viam.com/data/datasets?id=" + dataset_id) + + viam_client.close() + + if __name__ == '__main__': + asyncio.run(main()) + ``` + +1. Run the script to upload the images and their metadata into a dataset in Viam app providing the following input: + + ```sh {class="command-line" data-prompt="$" } + python upload_data.py -org-id -api-key \ + -api-key-id -machine-part-id \ + -location-id -dataset-name + ``` + +1. 
Continue to [Train a tflite machine learning model](/data-ai/ai/train-tflite/). + +{{% /expand%}} + +{{< gif webm_src="/how-tos/capture-images.webm" mp4_src="/how-tos/capture-images.mp4" alt="Configuring data management for a camera in the viam app" max-width="600px" class="aligncenter" >}} + +{{< table >}} +{{% tablestep link="/services/data/" %}} +**1. Enable the data management service** + +In the configuration pane for your configured camera component, find the **Data capture** section. +Click **Add method**. + +When the **Create a data management service** prompt appears, click it to add the service to your machine. +You can leave the default data manager settings. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Capture data** + +With the data management service configured on your machine, configure how the camera component captures data: + +In the **Data capture** panel of your camera's configuration, select `ReadImage` from the method selector. + +Set your desired capture frequency. +For example, set it to `0.05` to capture an image every 20 seconds. + +Set the MIME type to your desired image format, for example `image/jpeg`. + +{{% /tablestep %}} +{{% tablestep %}} +**3. Save to start capturing** + +Save the config. + +With cloud sync enabled, your machine automatically uploads captured data to the Viam app after a short delay. + +{{% /tablestep %}} +{{% tablestep %}} +**4. View data in the Viam app** + +Click on the **...** menu of the camera component and click on **View captured data**. +This takes you to the data tab. + +![View captured data option in the component menu](/get-started/quickstarts/collect-data/cam-capt-data.png) + +If you do not see images from your camera, try waiting a minute and refreshing the page to allow time for the images to be captured and then synced to the app at the interval you configured. + +If no data appears after the sync interval, check the **LOGS** tab for errors. + +{{% /tablestep %}} +{{% tablestep %}} +**5. Capture a variety of data** + +Your camera now saves images at the configured time interval. +When training machine learning models, it is important to supply a variety of images. +The dataset you create should represent the possible range of visual input. +This may include capturing images of different angles, different configurations of objects and different lighting conditions. +The more varied the provided dataset, the more accurate the resulting model becomes. + +Capture at least 10 images of anything you want your machine to recognize. + +{{< expand "For more tips and tricks on improving model accuracy, click here." >}} + +- **More data means better models:** Incorporate as much data as you practically can to improve your model’s overall performance. +- **Include counterexamples:** Include images with and without the object you’re looking to classify. + This helps the model distinguish the target object from the background and reduces the chances of false positives by teaching it what the object is not. +- **Avoid class imbalance:** Don’t train excessively on one specific type or class, make sure each category has a roughly equal number of images. + For instance, if you're training a dog detector, include images of various dog breeds to avoid bias towards one breed. + An imbalanced dataset can lead the model to favor one class over others, reducing its overall accuracy. +- **Match your training images to your intended use case:** Use images that reflect the quality and conditions of your production environment. 
+ For example, if you plan to use a low-quality camera in production, train with low-quality images. + Similarly, if your model will run all day, capture images in both daylight and nighttime conditions. +- **Vary your angles and distances:** Include image examples from every angle and distance that the model will see in normal use. +- **Ensure labelling accuracy:** Make sure the labels or bounding box annotations you give are accurate. + +{{< /expand >}} + +{{% /tablestep %}} +{{% tablestep %}} +**6. Label your images** + +Once you have enough images, you can disable data capture to [avoid incurring fees](https://www.viam.com/product/pricing) for capturing large amounts of training data. + +Then use the interface on the [**DATA** tab](https://app.viam.com/data/view) to label your images. + +Most use cases fall into one of two categories: + +- Detecting certain objects and their location within an image. + For example, you may wish to know where and how many `pizzas` there are in an image. + In this case, add a label for each object you would like to detect. + +{{< expand "For instructions to add labels, click here." >}} +To add a label, click on an image and select the **Bounding box** mode in the menu that opens. +Choose an existing label or create a new label. +Click on the image where you would like to add the bounding box and drag to where the bounding box should end. + +{{}} + +To expand the image, click on the expand side menu arrow in the corner of the image: + +{{}} + +Repeat this with all images. + +You can add one or more bounding boxes for objects in each image. +{{< /expand >}} + +- Classifying an image as a whole. + In other words, determining a descriptive state about an image. + For example, you may wish to know whether an image of a food display is `full`, `empty`, or `average` or whether the quality of manufacturing output is `good` or `bad`. + In this case, add tags to describe your images. + +{{< expand "For instructions to add tags, click here." >}} +To tag an image, click on an image and select the **Image tags** mode in the menu that opens. +Add one or more tags to your image. + +{{}} + +If you want to expand the image, click on the expand side menu arrow in the corner of the image. + +Repeat this with all images. +{{< /expand >}} + +{{% /tablestep %}} +{{% tablestep link="/fleet/dataset/" %}} +**7. Organize data into a dataset** + +To train a model, your images must be in a dataset. + +Use the interface on the **DATA** tab to add your labeled images to a dataset. + +Also add any unlabelled images to your dataset. +Unlabelled images must not comprise more than 20% of your dataset. +If you have 25 images in your dataset, at least 20 of those must be labelled. + +{{}} + +{{< expand "Want to add images to a dataset programmatically? Click here." 
>}} + +You can also add all images with a certain label to a dataset using the [`viam dataset data add` command](/cli/#dataset) or the [Data Client API](/appendix/apis/data-client/#addtagstobinarydatabyfilter): + +{{< tabs >}} +{{% tab name="CLI" %}} + +```sh {class="command-line" data-prompt="$"} +viam dataset create --org-id= --name= +viam dataset data add filter --dataset-id= --tags=red_star,blue_square +``` + +{{% /tab %}} +{{< tab name="Data Client API" >}} + +You can run this script to add all images from your machine to a dataset: + +```python {class="line-numbers linkable-line-numbers" data-line="14,18,30" } +import asyncio + +from viam.rpc.dial import DialOptions, Credentials +from viam.app.viam_client import ViamClient +from viam.utils import create_filter +from viam.proto.app.data import BinaryID + + +async def connect() -> ViamClient: + dial_options = DialOptions( + credentials=Credentials( + type="api-key", + # Replace "" (including brackets) with your machine's API key + payload='', + ), + # Replace "" (including brackets) with your machine's + # API key ID + auth_entity='' + ) + return await ViamClient.create_from_dial_options(dial_options) + + +async def main(): + # Make a ViamClient + viam_client = await connect() + # Instantiate a DataClient to run data client API methods on + data_client = viam_client.data_client + + # Replace "" (including brackets) with your machine's part id + my_filter = create_filter(part_id="") + + print("Getting data for part...") + binary_metadata, _, _ = await data_client.binary_data_by_filter( + my_filter, + include_binary_data=False + ) + my_binary_ids = [] + + for obj in binary_metadata: + my_binary_ids.append( + BinaryID( + file_id=obj.metadata.id, + organization_id=obj.metadata.capture_metadata.organization_id, + location_id=obj.metadata.capture_metadata.location_id + ) + ) + print("Creating dataset...") + # Create dataset + try: + dataset_id = await data_client.create_dataset( + name="MyDataset", + organization_id=ORG_ID + ) + print("Created dataset: " + dataset_id) + except Exception: + print("Error. Check that the dataset name does not already exist.") + print("See: https://app.viam.com/data/datasets") + return 1 + + print("Adding data to dataset...") + await data_client.add_binary_data_to_dataset_by_ids( + binary_ids=my_binary_ids, + dataset_id=dataset_id + ) + print("Added files to dataset.") + print("See dataset: https://app.viam.com/data/datasets?id=" + dataset_id) + + viam_client.close() + +if __name__ == '__main__': + asyncio.run(main()) +``` + +{{% /tab %}} +{{< /tabs >}} + +{{% /expand%}} + +{{% /tablestep %}} +{{< /table >}} diff --git a/docs/data-ai/ai/deploy.md b/docs/data-ai/ai/deploy.md index d8cc3a5953..919e610b3d 100644 --- a/docs/data-ai/ai/deploy.md +++ b/docs/data-ai/ai/deploy.md @@ -4,6 +4,64 @@ title: "Deploy a model" weight: 40 layout: "docs" type: "docs" -no_list: true -description: "TODO" +modulescript: true +description: "Deploy an ML model to your machine." --- + +The Machine Learning (ML) model service allows you to deploy [machine learning models](/registry/ml-models/) to your machine. +The service works with models trained inside and outside the Viam app: + +- You can [train](/how-tos/train-deploy-ml/) models on data from your machines. +- You can upload externally trained models on the [**MODELS** tab](https://app.viam.com/data/models) in the **DATA** section of the Viam app. +- You can use [ML models](https://app.viam.com/registry?type=ML+Model) from the [Viam Registry](https://app.viam.com/registry). 
+- You can use a [model](/registry/ml-models/) trained outside the Viam platform whose files are on your machine. + +## Deploy your ML model + +Navigate to the **CONFIGURE** tab of one of your machine in the [Viam app](https://app.viam.com). +Add an ML model service that supports the ML model you trained or the one you want to use from the registry. + +{{}} + +### Model framework support + +Viam currently supports the following frameworks: + + +| Model Framework | ML Model Service | Hardware Support | Description | +| --------------- | --------------- | ------------------- | ----------- | +| [TensorFlow Lite](https://www.tensorflow.org/lite) | [`tflite_cpu`](https://github.com/viam-modules/mlmodel-tflite) | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the [model requirements.](https://github.com/viam-modules/mlmodel-tflite) | +| [ONNX](https://onnx.ai/) | [`onnx-cpu`](https://github.com/viam-labs/onnx-cpu), [`triton`](https://github.com/viamrobotics/viam-mlmodelservice-triton) | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. | +| [TensorFlow](https://www.tensorflow.org/) | [`tensorflow-cpu`](https://github.com/viam-modules/tensorflow-cpu), [`triton`](https://github.com/viamrobotics/viam-mlmodelservice-triton) | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework that is made for more production-ready systems. | +| [PyTorch](https://pytorch.org/) | [`torch-cpu`](https://github.com/viam-modules/torch), [`triton`](https://github.com/viamrobotics/viam-mlmodelservice-triton) | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework that was built primarily for research. Because of this, it is much faster to do iterative development with (model doesn’t have to be predefined) but it is not as “production ready” as TensorFlow. It is the most common framework for OSS models because it is the go-to framework for ML researchers. | + +{{< alert title="Note" color="note" >}} +For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/) for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU. +{{< /alert >}} + +For example,use the `ML model / TFLite CPU` service for TFlite ML models. +If you used the built-in training, this is the ML model service you need to use. +If you used a custom training script, you may need a different ML model service. + +To deploy a model, click **Select model** and select the model from your organization or the registry. +Save your config. + +### Machine learning models from registry + +You can search the machine learning models that are available to deploy on this service from the registry here: + +{{}} + +## Next steps + +On its own the ML model service only runs the model. +After deploying your model, you need to configure an additional service to use the deployed model. +For example, you can configure an [`mlmodel` vision service](/services/vision/) to visualize the inferences your model makes. +Follow our docs to [run inference](/data-ai/ai/run-inference/) to add an `mlmodel` vision service and see inferences. + +For other use cases, consider [creating custom functionality with a module](/how-tos/create-module/). 
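Before wiring a deployed model into the vision service, it can be useful to check the tensor shapes the model expects and returns. The following is a minimal sketch using the Python SDK; the service name `mlmodel-1` and the connected `machine` client are assumptions you should adapt to your own configuration:

```python {class="line-numbers linkable-line-numbers"}
from viam.services.mlmodel import MLModelClient

# Get a handle on the deployed ML model service
my_model = MLModelClient.from_robot(machine, "mlmodel-1")

# Inspect the input and output tensors the model expects and produces
metadata = await my_model.metadata()
print(metadata.input_info)
print(metadata.output_info)
```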
+ +{{< alert title="Add support for other models" color="tip" >}} +ML models must be designed in particular shapes to work with the `mlmodel` [classification](/services/vision/mlmodel/) or [detection](/services/vision/mlmodel/) model of Viam's [vision service](/services/vision/). +See [ML Model Design](/registry/advanced/mlmodel-design/) to design a modular ML model service with models that work with vision. +{{< /alert >}} diff --git a/docs/data-ai/ai/run-inference.md b/docs/data-ai/ai/run-inference.md index 9587691af1..b42a9c117e 100644 --- a/docs/data-ai/ai/run-inference.md +++ b/docs/data-ai/ai/run-inference.md @@ -4,6 +4,100 @@ title: "Run inference on a model" weight: 50 layout: "docs" type: "docs" -no_list: true -description: "TODO" +modulescript: true +description: "Run inference on a model with a vision service or an SDK." --- + +After deploying an ml model, you need to configure an additional service to use the inferences the deployed model makes. +You can run inference on an ML model with a vision service or use an SDK to further process inferences. + +## Use a vision service + +Vision services work to provide computer vision. +They use an ML model and apply it to the stream of images from your camera. + +{{}} + +{{< readfile "/static/include/create-your-own-mr.md" >}} + +Note that many of these services have built in ML models, and thus do not need to be run alongside an ML model service. + +One vision service you can use to run inference on a camera stream if you have an ML model service configured is the `mlmodel` service. + +### Configure an mlmodel vision service + +Add the `vision / ML model` service to your machine. +Then, from the **Select model** dropdown, select the name of the ML model service you configured when [deploying](/data-ai/ai/deploy/) your model (for example, `mlmodel-1`). + +**Save** your changes. + +### Test your changes + +You can test a deployed vision service by clicking on the **Test** area of its configuration panel or from the [**CONTROL** tab](/fleet/control/). + +The camera stream shows when the vision service identifies something. +Try pointing the camera at a scene similar to your training data. + +{{< imgproc src="/tutorials/data-management/blue-star.png" alt="Detected blue star" resize="x200" >}} +{{< imgproc src="/tutorials/filtered-camera-module/viam-figure-preview.png" alt="Detection of a viam figure with a confidence score of 0.97" resize="x200" >}} + +{{% expand "Want to limit the number of shown classifications or detections? Click here." %}} + +If you are seeing a lot of classifications or detections, you can set a minimum confidence threshold. + +Start by setting the value to 0.8. +This reduces your output by filtering out anything below a threshold of 80% confidence. +You can adjust this attribute as necessary. + +Click the **Save** button in the top right corner of the page to save your configuration, then close and reopen the **Test** panel of the vision service configuration panel. +Now if you reopen the panel, you will only see classifications or detections with a confidence value higher than the `default_minimum_confidence` attribute. + +{{< /expand>}} + +For more detailed information, including optional attribute configuration, see the [`mlmodel` docs](/services/vision/mlmodel/). + +## Use an SDK + +You can also run inference using a Viam SDK. +You can use the [`Infer`](/dev/reference/apis/services/ml/#infer) +method of the ML Model API to make inferences. 
+ +For example: + +{{< tabs >}} +{{% tab name="Python" %}} + +```python {class="line-numbers linkable-line-numbers"} +import numpy as np + +my_mlmodel = MLModelClient.from_robot(robot=machine, name="my_mlmodel_service") + +image_data = np.zeros((1, 384, 384, 3), dtype=np.uint8) + +# Create the input tensors dictionary +input_tensors = { + "image": image_data +} + +output_tensors = await my_mlmodel.infer(input_tensors) +``` + +{{% /tab %}} +{{% tab name="Go" %}} + +```go {class="line-numbers linkable-line-numbers"} +input_tensors := ml.Tensors{"0": tensor.New(tensor.WithShape(1, 2, 3), tensor.WithBacking([]int{1, 2, 3, 4, 5, 6}))} + +output_tensors, err := myMLModel.Infer(context.Background(), input_tensors) +``` + +{{% /tab %}} +{{< /tabs >}} + +After adding a vision service, you can use a vision service API method with a classifier or a detector to get inferences programmatically. +For more information, see the ML Model and Vision APIs: + +{{< cards >}} +{{< card link="/dev/reference/apis/services/ml/" customTitle="ML Model API" noimage="True" >}} +{{% card link="/dev/reference/apis/services/vision/" customTitle="Vision service API" noimage="True" %}} +{{< /cards >}} diff --git a/docs/data-ai/ai/train-tflite.md b/docs/data-ai/ai/train-tflite.md index 0149e8e904..67dd62ef11 100644 --- a/docs/data-ai/ai/train-tflite.md +++ b/docs/data-ai/ai/train-tflite.md @@ -2,8 +2,185 @@ linkTitle: "Train TFlite model" title: "Train a TFlite model" weight: 20 -layout: "docs" type: "docs" -no_list: true -description: "TODO" +tags: ["vision", "data", "services"] +images: ["/services/ml/train.svg"] +description: "Use your image data to train a model, so your machines can make inferences about their environments." +aliases: + - /use-cases/deploy-ml/ + - /manage/ml/train-model/ + - /ml/train-model/ + - /services/ml/train-model/ + - /tutorials/data-management-tutorial/ + - /tutorials/data-management/ + - /data-management/data-management-tutorial/ + - /tutorials/services/data-management-tutorial/ + - /tutorials/services/data-mlmodel-tutorial/ + - /tutorials/projects/filtered-camera/ + - /how-tos/deploy-ml/ + - /how-tos/train-deploy-ml/ +languages: [] +viamresources: ["data_manager", "mlmodel", "vision"] +platformarea: ["ml"] +date: "2024-12-03" --- + +Many machines have cameras through which they can monitor their environment. +With machine leaning, you can train models on patterns within that visual data. +You can collect data from the camera stream and label any patterns within the images. + +If a camera is pointed at a food display, for example, you can label the image of the display with `full` or `empty`, or label items such as individual `pizza_slice`s. + +Using a model trained on such images, machines can make inferences about their environments. +Your machines can then automatically trigger alerts or perform other actions. +If a food display is empty, the machine could, for example, alert a supervisor to restock the display. + +Common use cases for this are **quality assurance** and **health and safety** applications. + +Follow this guide to use your image data to train an ML model, so that your machine can make inferences about its environment. + +## Prerequisites + +{{% expand "A running machine connected to the Viam app. Click to see instructions." %}} + +{{% snippet "setup.md" %}} + +{{% /expand%}} + +{{% expand "A dataset with labels. Click to see instructions." %}} + +Follow the guide to [create a dataset](/data-ai/ai/create-dataset/) if you haven't already. 
+ +{{% /expand%}} + +{{% expand "A configured camera. Click to see instructions." %}} + +First, connect the camera to your machine's computer if it's not already connected (like with an inbuilt laptop webcam). + +Then, navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com). +Click the **+** icon next to your machine part in the left-hand menu and select **Component**. +The `webcam` model supports most USB cameras and inbuilt laptop webcams. +You can find additional camera models in the [camera configuration](/components/camera/#configuration) documentation. + +Complete the camera configuration and use the **TEST** panel in the configuration card to test that the camera is working. + +{{% /expand%}} + +{{% expand "No computer or webcam?" %}} + +No problem. +You don't need to buy or own any hardware to complete this guide. + +Use [Try Viam](https://app.viam.com/try) to borrow a rover free of cost online. +The rover already has `viam-server` installed and is configured with some components, including a webcam. + +Once you have borrowed a rover, go to its **CONTROL** tab where you can view camera streams and also drive the rover. +You should have a front-facing camera and an overhead view of your rover. +Now you know what the rover can perceive. + +To change what the front-facing camera is pointed at, find the **cam** camera panel on the **CONTROL** tab and click **Toggle picture-in-picture** so you can continue to view the camera stream. +Then, find the **viam_base** panel and drive the rover around. + +Now that you have seen that the cameras on your Try Viam rover work, begin by [Creating a dataset and labeling data](/data-ai/ai/create-dataset/). +You can drive the rover around as you capture data to get a variety of images from different angles. + +{{< alert title="Tip" color="tip" >}} +Be aware that if you are running out of time during your rental, you can [extend your rover rental](/appendix/try-viam/reserve-a-rover/#extend-your-reservation) as long as there are no other reservations. +{{< /alert >}} + +{{% /expand%}} + +## Train a machine learning (ML) model + +Now that you have a dataset with your labeled images, you are ready to train a machine learning model. + +{{< table >}} +{{% tablestep %}} +**1. Train an ML model** + +In the Viam app, navigate to your list of [**DATASETS**](https://app.viam.com/data/datasets) and select the one you want to train on. + +Click **Train model** and follow the prompts. + +You can train a TFLite model using **Built-in training**. + +Click **Next steps**. + +{{}} + +{{% /tablestep %}} +{{% tablestep %}} +**2. Fill in the details for your ML model** + +Enter a name for your new model. + +Select a **Task Type**: + +- **Single Label Classification**: The resulting model predicts one of the selected labels or `UNKNOWN` per image. + Select this if you only have one label on each image. Ensure that the dataset you are training on also contains unlabeled images. +- **Multi Label Classification**: The resulting model predicts one or more of the selected labels per image. +- **Object Detection**: The resulting model predicts either no detected objects or any number of object labels alongside their locations per image. + +Select the labels you want to train your model on from the **Labels** section. Unselected labels will be ignored, and will not be part of the resulting model. + +Click **Train model**. 
+ +{{< imgproc src="/tutorials/data-management/train-model.png" alt="The data tab showing the train a model pane" style="width:500px" resize="1200x" class="imgzoom fill aligncenter" >}} + +{{% /tablestep %}} +{{% tablestep %}} +**3. Wait for your model to train** + +The model now starts training and you can follow its process on the [**TRAINING** tab](https://app.viam.com/training). + +Once the model has finished training, it becomes visible on the [**MODELS** tab](https://app.viam.com/data/models). + +You will receive an email when your model finishes training. + +{{% /tablestep %}} +{{% tablestep %}} +**4. Debug your training job** + +From the [**TRAINING** tab](https://app.viam.com/training), click on your training job's ID to see its logs. + +{{< alert title="Note" color="note" >}} + +Your training script may output logs at the error level but still succeed. + +{{< /alert >}} + +You can also view your training jobs' logs with the [`viam train logs`](/cli/#train) command. + +{{% /tablestep %}} +{{< /table >}} + +## Test your ML model + +{{}} + +Once your model has finished training, you can test it. + +Ideally, you want your ML model to be able to work with a high level of confidence. +As you test it, if you notice faulty predictions or confidence scores, you will need to adjust your dataset and retrain your model. + +If you trained a classification model, you can test it with the following instructions. +If you trained a detection model, move on to [deploy an ML model](/data-ai/ai/deploy/). + +1. Navigate to the [**DATA** tab](https://app.viam.com/data/view) and click on the **Images** subtab. +1. Click on an image to open the side menu, and select the **Actions** tab. +1. In the **Run model** section, select your model and specify a confidence threshold. +1. Click **Run model** + +If the results exceed the confidence threshold, the **Run model** section shows a label and the responding confidence threshold. + +## Next steps + +Now your machine can make inferences about its environment. The next step is to [act](/data-ai/ai/act/) or [alert](/data-ai/ai/alert/) based on these inferences. + +See the following tutorials for examples of using machine learning models to make your machine do things based on its inferences about its environment: + +{{< cards >}} +{{% card link="/tutorials/projects/helmet/" %}} +{{% card link="/tutorials/services/color-detection-scuttle/" %}} +{{% card link="/tutorials/projects/pet-treat-dispenser/" customTitle="Smart Pet Feeder" %}} +{{< /cards >}} diff --git a/docs/data-ai/ai/train.md b/docs/data-ai/ai/train.md index 4d8c06f95b..2bd5a34d2f 100644 --- a/docs/data-ai/ai/train.md +++ b/docs/data-ai/ai/train.md @@ -1,9 +1,601 @@ --- linkTitle: "Train other models" title: "Train other models" +tags: ["data management", "ml", "model training"] weight: 30 layout: "docs" type: "docs" -no_list: true -description: "TODO" +aliases: + - /services/ml/upload-training-script/ + - /how-tos/create-custom-training-scripts/ +languages: ["python"] +viamresources: ["mlmodel", "data_manager"] +platformarea: ["ml"] +description: "If you want to train models to custom specifications, write a custom training script and upload it to the Viam Registry." +date: "2024-12-04" --- + +You can create custom Python training scripts that train ML models to your specifications using PyTorch, Tensorflow, TFLite, ONNX, or any other Machine Learning framework. 
+Once you upload a training script to the [Viam Registry](https://app.viam.com/registry?type=Training+Script), you can use it to build ML models in the Viam Cloud based on your datasets. + +## Prerequisites + +{{% expand "A dataset with data you can train an ML model on. Click to see instructions." %}} + +For image data, you can follow the instructions to [Create a dataset](/data-ai/ai/create-dataset/) to create a dataset and label data. + +For other data you can use the [Data Client API](/appendix/apis/data-client/) from within the training script to get data stored in the Viam Cloud. + +{{% /expand%}} + +{{% expand "The Viam CLI. Click to see instructions." %}} + +You must have the Viam CLI installed to upload training scripts to the registry. + +{{< readfile "/static/include/how-to/install-cli.md" >}} + +{{% /expand%}} + +## Create a training script + +{{< table >}} +{{% tablestep %}} +**1. Create files** + +Create the following folders and empty files: + +```treeview +my-training/ +├── model/ +| ├── training.py +| └── __init__.py +└── setup.py +``` + +{{% /tablestep %}} +{{% tablestep %}} +**2. Add `setup.py` code** + +Add the following code to `setup.py` and add additional required packages on line 11: + +```python {class="line-numbers linkable-line-numbers" data-line="11"} +from setuptools import find_packages, setup + +setup( + name="my-training", + version="0.1", + packages=find_packages(), + include_package_data=True, + install_requires=[ + "google-cloud-aiplatform", + "google-cloud-storage", + # TODO: Add additional required packages + ], +) +``` + +{{% /tablestep %}} +{{% tablestep %}} +**3. Create `__init__.py`** + +If you haven't already, create a folder called model and create an empty file inside it called \_\_init\_\_.py. + +{{% /tablestep %}} +{{< tablestep >}} + +

**4. Add `training.py` code**

+ +

Copy this template into `training.py`:

+ +{{% expand "Click to see the template" %}} + +```python {class="line-numbers linkable-line-numbers" data-line="126,170" } +import argparse +import json +import os +import typing as ty + +single_label = "MODEL_TYPE_SINGLE_LABEL_CLASSIFICATION" +multi_label = "MODEL_TYPE_MULTI_LABEL_CLASSIFICATION" +labels_filename = "labels.txt" +unknown_label = "UNKNOWN" + +API_KEY = os.environ['API_KEY'] +API_KEY_ID = os.environ['API_KEY_ID'] + + +# This parses the required args for the training script. +# The model_dir variable will contain the output directory where +# the ML model that this script creates should be stored. +# The data_json variable will contain the metadata for the dataset +# that you should use to train the model. +def parse_args(): + """Returns dataset file, model output directory, and num_epochs if present. + These must be parsed as command line arguments and then used as the model + input and output, respectively. The number of epochs can be used to + optionally override the default. + """ + parser = argparse.ArgumentParser() + parser.add_argument("--dataset_file", dest="data_json", type=str) + parser.add_argument("--model_output_directory", dest="model_dir", type=str) + parser.add_argument("--num_epochs", dest="num_epochs", type=int) + args = parser.parse_args() + return args.data_json, args.model_dir, args.num_epochs + + +# This is used for parsing the dataset file (produced and stored in Viam), +# parse it to get the label annotations +# Used for training classifiction models +def parse_filenames_and_labels_from_json( + filename: str, all_labels: ty.List[str], model_type: str +) -> ty.Tuple[ty.List[str], ty.List[str]]: + """Load and parse JSON file to return image filenames and corresponding + labels. The JSON file contains lines, where each line has the key + "image_path" and "classification_annotations". + Args: + filename: JSONLines file containing filenames and labels + all_labels: list of all N_LABELS + model_type: string single_label or multi_label + """ + image_filenames = [] + image_labels = [] + + with open(filename, "rb") as f: + for line in f: + json_line = json.loads(line) + image_filenames.append(json_line["image_path"]) + + annotations = json_line["classification_annotations"] + labels = [unknown_label] + for annotation in annotations: + if model_type == multi_label: + if annotation["annotation_label"] in all_labels: + labels.append(annotation["annotation_label"]) + # For single label model, we want at most one label. + # If multiple valid labels are present, we arbitrarily select + # the last one. + if model_type == single_label: + if annotation["annotation_label"] in all_labels: + labels = [annotation["annotation_label"]] + image_labels.append(labels) + return image_filenames, image_labels + + +# Parse the dataset file (produced and stored in Viam) to get +# bounding box annotations +# Used for training object detection models +def parse_filenames_and_bboxes_from_json( + filename: str, + all_labels: ty.List[str], +) -> ty.Tuple[ty.List[str], ty.List[str], ty.List[ty.List[float]]]: + """Load and parse JSON file to return image filenames + and corresponding labels with bboxes. 
+ Args: + filename: JSONLines file containing filenames and bboxes + all_labels: list of all N_LABELS + """ + image_filenames = [] + bbox_labels = [] + bbox_coords = [] + + with open(filename, "rb") as f: + for line in f: + json_line = json.loads(line) + image_filenames.append(json_line["image_path"]) + annotations = json_line["bounding_box_annotations"] + labels = [] + coords = [] + for annotation in annotations: + if annotation["annotation_label"] in all_labels: + labels.append(annotation["annotation_label"]) + # Store coordinates in rel_yxyx format so that + # we can use the keras_cv function + coords.append( + [ + annotation["y_min_normalized"], + annotation["x_min_normalized"], + annotation["y_max_normalized"], + annotation["x_max_normalized"], + ] + ) + bbox_labels.append(labels) + bbox_coords.append(coords) + return image_filenames, bbox_labels, bbox_coords + + +# Build the model +def build_and_compile_model( + labels: ty.List[str], model_type: str, input_shape: ty.Tuple[int, int, int] +) -> Model: + """Builds and compiles a model + Args: + labels: list of string lists, where each string list contains up to + N_LABEL labels associated with an image + model_type: string single_label or multi_label + input_shape: 3D shape of input + """ + + # TODO: Add logic to build and compile model + + return model + + +def save_labels(labels: ty.List[str], model_dir: str) -> None: + """Saves a label.txt of output labels to the specified model directory. + Args: + labels: list of string lists, where each string list contains up to + N_LABEL labels associated with an image + model_dir: output directory for model artifacts + """ + filename = os.path.join(model_dir, labels_filename) + with open(filename, "w") as f: + for label in labels[:-1]: + f.write(label + "\n") + f.write(labels[-1]) + + +def save_model( + model: Model, + model_dir: str, + model_name: str, +) -> None: + """Save model as a TFLite model. + Args: + model: trained model + model_dir: output directory for model artifacts + model_name: name of saved model + """ + file_type = "" + + # Save the model to the output directory. + filename = os.path.join(model_dir, f"{model_name}.{file_type}") + with open(filename, "wb") as f: + f.write(model) + + +if __name__ == "__main__": + DATA_JSON, MODEL_DIR = parse_args() + + IMG_SIZE = (256, 256) + + # Read dataset file. + # TODO: change labels to the desired model output. + LABELS = ["orange_triangle", "blue_star"] + + # The model type can be changed based on whether you want the model to + # output one label per image or multiple labels per image + model_type = multi_label + image_filenames, image_labels = parse_filenames_and_labels_from_json( + DATA_JSON, LABELS, model_type) + + # Build and compile model on data + model = build_and_compile_model() + + # Save labels.txt file + save_labels(LABELS + [unknown_label], MODEL_DIR) + # Convert the model to tflite + save_model( + model, MODEL_DIR, "classification_model", IMG_SIZE + (3,) + ) +``` + +{{% /expand %}} + +{{% /tablestep %}} +{{< tablestep >}} + +

**5. Understand template script parsing functionality**

+

When a training script runs, the Viam platform passes the dataset file for the training job and the designated model output directory to the script.

+

The template contains functionality to parse these command line inputs and to parse the annotations from the dataset file.

+ +{{% expand "Click for more information on parsing command line inputs." %}} + +The script you are creating must take the following command line inputs: + +- `dataset_file`: a file containing the data and metadata for the training job +- `model_output_directory`: the location where the produced model artifacts are saved to + +The `parse_args()` function in the template parses your arguments. + +You can add additional custom command line inputs by adding them to the `parse_args()` function. + +{{% /expand %}} + +{{% expand "Click for more information on parsing annotations from dataset file." %}} + +When you submit a training job to the Viam Cloud, Viam will pass a `dataset_file` to the training script when you train an ML model with it. +The file contains metadata from the dataset used for the training, including the file path for each data point and any annotations associated with the data. + +Dataset JSON files for image datasets with bounding box labels and classification labels are formatted as follows: + +```json {class="line-numbers linkable-line-numbers"} +{ + "image_path": "/path/to/data/data1.jpeg", + "bounding_box_annotations": [ + { + "annotation_label": "blue_star", + "x_min_normalized": 0.38175675675675674, + "x_max_normalized": 0.5101351351351351, + "y_min_normalized": 0.35585585585585583, + "y_max_normalized": 0.527027027027027 + } + ], + "classification_annotations": [ + { + "annotation_label": "blue_star" + } + ] +} +{ + "image_path": "/path/to/data/data2.jpeg", + "bounding_box_annotations": [ + { + "annotation_label": "blue_star", + "x_min_normalized": 0.2939189189189189, + "x_max_normalized": 0.4594594594594595, + "y_min_normalized": 0.25225225225225223, + "y_max_normalized": 0.5495495495495496 + } + ], + "classification_annotations": [ + { + "annotation_label": "blue_star" + } + ] +} + +{ + "image_path": "/path/to/data/data3.jpeg", + "bounding_box_annotations": [ + { + "annotation_label": "blue_star", + "x_min_normalized": 0.03557312252964427, + "x_max_normalized": 0.2015810276679842, + "y_min_normalized": 0.30526315789473685, + "y_max_normalized": 0.5368421052631579 + }, + { + "annotation_label": "blue_square", + "x_min_normalized": 0.039525691699604744, + "x_max_normalized": 0.2015810276679842, + "y_min_normalized": 0.2578947368421053, + "y_max_normalized": 0.5473684210526316 + } + ], + "classification_annotations": [ + { + "annotation_label": "blue_star" + }, + { + "annotation_label": "blue_square" + } + ] +} +``` + +In your training script, you must parse the dataset file for the classification or bounding box annotations from the dataset metadata. +Depending on if you are training a classification or detection model, the template script contains the `parse_filenames_and_labels_from_json()` and the `parse_filenames_and_bboxes_from_json()` function. + +{{% /expand%}} + +

If the script you are creating does not use an image dataset, you only need the model output directory; you can use the data client API from within the script to get the data you need instead of parsing the dataset file.
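+
+For example, the expandable section above notes that you can add custom command line inputs to `parse_args()`.
+The following is a minimal sketch of how you might accept the `--custom_arg` flag used in the local test command later on this page; the flag name, type, and default are placeholders, and you must also update the unpacking in the `main` block to match the extra return value:
+
+```python {class="line-numbers linkable-line-numbers"}
+import argparse
+
+
+def parse_args():
+    """Returns the dataset file, model output directory, num_epochs, and a
+    custom argument parsed from the command line."""
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_file", dest="data_json", type=str)
+    parser.add_argument("--model_output_directory", dest="model_dir", type=str)
+    parser.add_argument("--num_epochs", dest="num_epochs", type=int)
+    # Placeholder custom flag; matches --custom_arg in the local test command.
+    # Add one add_argument call per custom input your script needs.
+    parser.add_argument("--custom_arg", dest="custom_arg", type=int, default=3)
+    args = parser.parse_args()
+    return args.data_json, args.model_dir, args.num_epochs, args.custom_arg
+```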

+ +{{% /tablestep %}} +{{% tablestep %}} +**6. Add logic to produce the model artifact** + +You must fill in the `build_and_compile_model` function. +In this part of the script, you use the data from the dataset and the annotations from the dataset file to build a Machine Learning model. + +As an example, you can refer to the logic from model/training.py from this [example classification training script](https://github.com/viam-modules/classification-tflite) that trains a classification model using TensorFlow and Keras. + +{{% /tablestep %}} +{{% tablestep %}} +**7. Save the model artifact** + +The `save_model()` and the `save_labels()` functions in the template before the `main` logic save the model artifact your training job produces to the `model_output_directory` in the cloud. + +Once a training job is complete, Viam checks the output directory and creates a package with all of the contents of the directory, creating or updating a registry item for the ML model. + +You must fill in these functions. + +As an example, you can refer to the logic from model/training.py from this [example classification training script](https://github.com/viam-modules/classification-tflite) that trains a classification model using TensorFlow and Keras. + +{{% /tablestep %}} +{{% tablestep %}} +**8. Update the main method** + +Update the main to call the functions you have just created. + +{{% /tablestep %}} +{{% tablestep %}} +**9. Using Viam APIs in a training script** + +If you need to access any of the [Viam APIs](/appendix/apis/) within a custom training script, you can use the environment variables `API_KEY` and `API_KEY_ID` to establish a connection. +These environment variables will be available to training scripts. + +```python +async def connect() -> ViamClient: + """Returns a authenticated connection to the ViamClient for the requested + org associated with the submitted training job.""" + # The API key and key ID can be accessed programmatically, using the + # environment variable API_KEY and API_KEY_ID. The user does not need to + # supply the API keys, they are provided automatically when the training + # job is submitted. + dial_options = DialOptions.with_api_key( + os.environ.get("API_KEY"), os.environ.get("API_KEY_ID") + ) + return await ViamClient.create_from_dial_options(dial_options) +``` + +{{% /tablestep %}} +{{< /table >}} + +## Test your training script locally + +You can export one of your Viam datasets to test your training script locally. + +{{< table >}} +{{% tablestep %}} +**1. Export your dataset** + +You can get the dataset ID from the dataset page or using the [`viam dataset list`](/cli/#dataset) command: + +```sh {class="command-line" data-prompt="$"} +viam dataset export --destination= --dataset-id= --include-jsonl=true +``` + +The dataset will be formatted like the one Viam produces for the training. +Use the `parse_filenames_and_labels_from_json` and `parse_filenames_and_bboxes_from_json` functions to get the images and annotations from your dataset file. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Run your training script locally** + +Install any required dependencies and run your training script specifying the path to the dataset.jsonl file from your exported dataset: + +```sh {class="command-line" data-prompt="$"} +python3 -m model.training --dataset_file=/path/to/dataset.jsonl \ + --model_output_directory=. 
--custom_arg=3 +``` + +{{% /tablestep %}} +{{< /table >}} + +## Upload your training script + +To be able to use your training script in the Viam platform, you must upload it to the Viam Registry. + +{{< table >}} +{{% tablestep %}} +**1. Package the training script as a tar.gz source distribution** + +Before you can upload your training script to Viam, you have to compress your project folder into a tar.gz file: + +```sh {class="command-line" data-prompt="$"} +tar -czvf my-training.tar.gz my-training/ +``` + +{{% alert title="Tip" color="tip" %}} +You can refer to the directory structure of this [example classification training script](https://github.com/viam-modules/classification-tflite). +{{% /alert %}} + +{{% /tablestep %}} +{{% tablestep %}} +**2. Upload a training script** + +To upload your custom training script to the registry, use the `viam training-script upload` command. + +{{< tabs >}} +{{% tab name="Usage" %}} + +```sh {class="command-line" data-prompt="$"} +viam training-script upload --path= \ + --org-id= --script-name= +``` + +{{% /tab %}} +{{% tab name="Examples" %}} + +```sh {class="command-line" data-prompt="$"} +viam training-script upload --path=my-training.tar.gz \ + --org-id= --script-name=my-training-script + +viam training-script upload --path=my-training.tar.gz \ + --org-id= --script-name=my-training \ + --framework=tensorflow --type=single_label_classification \ + --description="Custom image classification model" \ + --visibility=private +``` + +{{% /tab %}} +{{< /tabs >}} + +You can also [specify the version, framework, type, visibility, and description](/cli/#training-script) when uploading a custom training script. + +To find your organization's ID, run the following command: + +```sh {class="command-line" data-prompt="$"} +viam organization list +``` + +After a successful upload, the CLI displays a confirmation message with a link to view your changes online. +You can view uploaded training scripts by navigating to the [registry's **Training Scripts** page](https://app.viam.com/registry?type=Training+Script). + +{{% /tablestep %}} +{{< /table >}} + +## Submit a training job + +After uploading the training script, you can run it by submitting a training job through the Viam app or using the Viam CLI or [ML Training client API](/appendix/apis/ml-training-client/#submittrainingjob). + +{{< table >}} +{{% tablestep %}} +**1. Create the training job** + +{{< tabs >}} +{{% tab name="Viam app" min-height="150px" %}} + +{{}} + +In the Viam app, navigate to your list of [**DATASETS**](https://app.viam.com/data/datasets) and select the one you want to train a model on. + +Click **Train model** and select **Train on a custom training script**, then follow the prompts. + +{{% /tab %}} +{{% tab name="CLI" %}} + +You can use [`viam train submit custom from-registry`](/cli/#positional-arguments-submit) to submit a training job. + +For example: + +```sh {class="command-line" data-prompt="$"} +viam train submit custom from-registry --dataset-id= \ + --org-id= --model-name=MyRegistryModel \ + --model-version=2 --version=1 \ + --script-name=mycompany:MyCustomTrainingScript + --args=custom_arg1=3,custom_arg2="'green_square blue_star'" +``` + +This command submits a training job to the previously uploaded `MyCustomTrainingScript` with another input dataset, which trains `MyRegistryModel` and publishes that to the registry. + +You can get the dataset id from the dataset page or using the [`viam dataset list`](/cli/#dataset) command. 
+ +{{% /tab %}} +{{< /tabs >}} + +{{% /tablestep %}} +{{% tablestep %}} +**2. Check on training job process** + +You can view your training job on the **DATA** page's [**TRAINING** tab](https://app.viam.com/training). + +Once the model has finished training, it becomes visible on the **DATA** page's [**MODELS** tab](https://app.viam.com/data/models). + +You will receive an email when your training job completes. + +You can also check your training jobs and their status from the CLI: + +```sh {class="command-line" data-prompt="$"} +viam train list --org-id= --job-status=unspecified +``` + +{{% /tablestep %}} +{{% tablestep %}} +**3. Debug your training job** + +From the **DATA** page's [**TRAINING** tab](https://app.viam.com/training), click on your training job's ID to see its logs. + +{{< alert title="Note" color="note" >}} + +Your training script may output logs at the error level but still succeed. + +{{< /alert >}} + +You can also view your training jobs' logs with the [`viam train logs`](/cli/#train) command. + +{{% /tablestep %}} +{{< /table >}} + +To use your new model with machines, you must [deploy it](/data-ai/ai/deploy/) with the appropriate ML model service. +Then you can use another service, such as the vision service, to [run inference](/data-ai/ai/run-inference/). diff --git a/docs/data-ai/data/advanced/alert-data.md b/docs/data-ai/data/advanced/alert-data.md new file mode 100644 index 0000000000..41de7c09a7 --- /dev/null +++ b/docs/data-ai/data/advanced/alert-data.md @@ -0,0 +1,430 @@ +--- +linkTitle: "Alert on data" +title: "Alert on data" +weight: 60 +layout: "docs" +type: "docs" +description: "Use triggers to send email notifications or webhook requests when data from the machine is synced." +--- + +You can use triggers to send email notifications or webhook requests when data from the machine is synced, even captured from a specific component with a specified condition. +For example, you can configure a trigger to send you a notification when your robot's sensor collects a new reading. + +Follow this guide to learn how to configure a trigger to send webhook requests or emails for the following events: + +- **Data has been synced to the cloud**: trigger when data from the machine is synced +- **Conditional data ingestion**: trigger any time data is captured from a specified component with a specified method and condition + +To configure a trigger: + +{{< tabs >}} +{{% tab name="Builder mode" %}} + +1. Go to the **CONFIGURE** tab of your machine on the [Viam app](https://app.viam.com). + Click the **+** (Create) button in the left side menu and select **Trigger**. + + {{}} + +2. Name the trigger and click **Create**. + +3. Select trigger **Type**. + Configure additional attributes: + +{{< tabs name="Types of Triggers" >}} +{{% tab name="Data synced to cloud" %}} + +Select the data types for which the trigger should send requests. +Whenever data of the specified data types is ingested, a `POST` request will be sent. + +{{% /tab %}} +{{% tab name="Conditional data ingestion" %}} + +Select the component you want to capture data from and the method you want to capture data from. +Then, add any conditions. + +These can include a key, a value, and a logical operator. +For example, a trigger configured to fire when data is captured from the motor `motor-1`'s `IsPowered` method when `is_on` is equal to `True`: + +{{}} + +For more information, see [Conditions](#conditions). 
+ +{{% alert title="Note" color="note" %}} +You must [configure data capture](/services/data/) for your component to use this trigger. +{{% /alert %}} + +{{% /tab %}} +{{< /tabs >}} + +4. Add **Webhooks** or **Emails**. + +{{< tabs name="Notifications types" >}} +{{% tab name="Webhooks" %}} + +Click **Add Webhook**. +Add the URL of your cloud function or lambda. +Configure the time between notifications. + +![The trigger configured with an example URL in the Viam app.](/build/configure/trigger-configured.png) + +{{% /tab %}} +{{% tab name="Emails" %}} + +Click **Add Email**. +Add the email you wish to be notified whenever this trigger is triggered. +Configure the time between notifications. + +![The trigger configured with an example email in the Viam app.](/build/configure/trigger-configured-email.png) + +{{% /tab %}} +{{< /tabs >}} +{{% /tab %}} +{{% tab name="JSON mode" %}} + +To configure your trigger by using **JSON** mode instead of **Builder** mode, paste one of the following JSON templates into your JSON config. +`"triggers"` is a top-level section, similar to `"components"` or `"services"`. + +{{< tabs >}} +{{% tab name="JSON Template: Data Synced" %}} + +```json {class="line-numbers linkable-line-numbers"} + "triggers": [ + { + "name": "", + "event": { + "type": "part_data_ingested", + "data_ingested": { + "data_types": ["binary", "tabular", "file"] + } + }, + "notifications": [ + { + "type": "webhook", + "value": "https://1abcde2ab3cd4efg5abcdefgh10zyxwv.lambda-url.us-east-1.on.aws", + "seconds_between_notifications": + } + ] + } + ] +``` + +{{% /tab %}} +{{% tab name="JSON Template: Conditional Data Ingestion" %}} + +```json {class="line-numbers linkable-line-numbers"} +"triggers": [ + { + "name": "", + "event": { + "type": "conditional_data_ingested", + "conditional": { + "data_capture_method": "::", + "condition": { + "evals": [ + { + "operator": "", + "value": + } + ] + } + } + }, + "notifications": [ + { + "type": "email", + "value": "", + "seconds_between_notifications": + } + ] + } +] + +``` + +{{% /tab %}} +{{% tab name="JSON Example" %}} + +```json {class="line-numbers linkable-line-numbers"} +{ + "components": [ + { + "name": "local", + "model": "pi", + "type": "board", + "namespace": "rdk", + "attributes": {}, + "depends_on": [] + }, + { + "name": "my_temp_sensor", + "model": "bme280", + "type": "sensor", + "namespace": "rdk", + "attributes": {}, + "depends_on": [], + "service_configs": [ + { + "type": "data_manager", + "attributes": { + "capture_methods": [ + { + "method": "Readings", + "additional_params": {}, + "capture_frequency_hz": 0.017 + } + ] + } + } + ] + } + ], + "triggers": [ + { + "name": "trigger-1", + "event": { + "type": "part_data_ingested", + "data_ingested": { + "data_types": ["binary", "tabular", "file"] + } + }, + "notifications": [ + { + "type": "webhook", + "value": "", + "seconds_between_notifications": 0 + } + ] + } + ] +} +``` + +{{% /tab %}} +{{< /tabs >}} + +{{% /tab %}} +{{< /tabs >}} + +The following attributes are available for triggers: + + +| Name | Type | Required? | Description | +| ---- | ---- | --------- | ----------- | +| `name` | string | **Required** | The name of the trigger | +| `event` | object | **Required** | The trigger event object:
  • `type`: The type of the event to trigger on. Options: `part_data_ingested`, `conditional_data_ingested`.
  • `data_types`: Required with `type` `part_data_ingested`. The data types that trigger the event. Options: `binary`, `tabular`, `file`, `unspecified`.
  • `conditional`: Required with `type` `conditional_data_ingested`. See [Conditions](#conditions) for more information.
| +| `notifications` | object | **Required** | The notifications object:
  • `type`: The type of the notification. Options: `webhook`, `email`
  • `value`: The URL to send the request to or the email address to notify.
  • `seconds_between_notifications`: The interval between notifications in seconds.
| + +#### Conditions + +The `conditional` object for the `conditional_data_ingested` trigger includes the following options: + + +| Name | Type | Required? | Description | +| ---- | ---- | --------- | ----------- | +| `data_capture_method` | string | **Required** | The method of data capture to trigger on.
Example: `sensor::Readings`. | +| `condition` | object | Optional | Any additional conditions for the method to fire the trigger. Leave out this object for the trigger to fire any time there is data synced.
Options:
  • `evals`:
    • `operator`: Logical operator for the condition.
    • `value`: An object, string, or integer that specifies the value of the method of the condition, along with the key or nested keys of the measurements in data capture.
| + +Options for `operator`: + +| Name | Description | +| ----- | ------------------------ | +| `lt` | Less than | +| `gt` | Greater than | +| `lte` | Less than or equal to | +| `gte` | Greater than or equal to | +| `eq` | Equals | +| `neq` | Does not equal | + +Examples: + +{{< tabs >}} +{{% tab name="1 level of nesting" %}} + +```json {class="line-numbers linkable-line-numbers"} +"condition": { + "evals": [ + { + "operator": "lt", + "value": { + "Line-Neutral AC RMS Voltage": 130 + } + } + ] +} +``` + +This eval would trigger for the following sensor reading: + +```json {class="line-numbers linkable-line-numbers"} +{ + "readings": { + "Line-Neutral AC RMS Voltage": 100 + } +} +``` + +{{% /tab %}} +{{% tab name="2 levels of nesting" %}} + +```json {class="line-numbers linkable-line-numbers"} +"condition": { + "evals": [ + { + "operator": "lt", + "value": { + "coordinate": { + "latitude": 50 + } + } + } + ] +} +``` + +This eval would trigger for the following sensor reading: + +```json {class="line-numbers linkable-line-numbers"} +{ + "readings": { + "coordinate": { + "latitude": 40 + } + } +} +``` + +{{% /tab %}} +{{< /tabs >}} + +5. If using a webhook, write your cloud function or lambda to process the request from `viam-server`. + You can use your cloud function or lambda to interact with any external API such as, for example, Twilio, PagerDuty, or Zapier. + The following example function prints the received headers: + + {{< tabs >}} + {{% tab name="Flask" %}} + +```python {class="line-numbers linkable-line-numbers" } +from flask import Flask, request + +app = Flask(__name__) + + +@app.route("/", methods=['GET', 'POST']) +def trigger(): + headers = request.headers + data = {} + if request.data: + data = request.json + payload = { + "Org-Id": headers.get('org-id', 'no value'), + "Organization-Name": headers.get('organization-name', '') or + data.get('org_name', 'no value'), + "Location-Id": headers.get('location-id', 'no value'), + "Location-Name": headers.get('location-name', '') or + data.get('location_name', 'no value'), + "Part-Id": headers.get('part-id', 'no value'), + "Part-Name": headers.get('part-name', 'no value'), + "Robot-Id": headers.get('robot-id', 'no value'), + "Machine-Name": headers.get('machine-name', '') or + data.get('machine_name', 'no value'), + "Component-Type": data.get('component_type', 'no value'), + "Component-Name": data.get('component_name', 'no value'), + "Method-Name": data.get('method_name', 'no value'), + "Min-Time-Received": data.get('min_time_received', 'no value'), + "Max-Time-Received": data.get('max_time_received', 'no value'), + "Data-Type": data.get('data_type', 'no value'), + "File-Id": data.get('file_id', 'no value'), + "Trigger-Condition": data.get("trigger_condition", 'no value'), + "Data": data.get('data', 'no value') + } + print(payload) + + return payload + + +if __name__ == '__main__': + app.run(host='0.0.0.0', port=8080) +``` + +{{% /tab %}} +{{% tab name="functions_framework" %}} + +```python {class="line-numbers linkable-line-numbers"} +import functions_framework +import requests +import time + + +@functions_framework.http +def hello_http(request): + headers = request.headers + data = {} + if request.data: + data = request.json + payload = { + "Org-Id": headers.get("org-id", "no value"), + "Organization-Name": headers.get("organization-name", "") + or data.get("org_name", "no value"), + "Location-Id": headers.get("location-id", "no value"), + "Location-Name": headers.get("location-name", "") + or data.get("location_name", "no value"), + 
"Part-Id": headers.get("part-id", "no value"), + "Part-Name": headers.get("part-name", "no value"), + "Robot-Id": headers.get("robot-id", "no value"), + "Machine-Name": headers.get("machine-name", "") + or data.get("machine_name", "no value"), + "Component-Type": data.get("component_type", "no value"), + "Component-Name": data.get("component_name", "no value"), + "Method-Name": data.get("method_name", "no value"), + "Min-Time-Received": data.get("min_time_received", "no value"), + "Max-Time-Received": data.get("max_time_received", "no value"), + "Data-Type": data.get("data_type", "no value"), + "File-Id": data.get('file_id', "no value"), + "Trigger-Condition": data.get("trigger_condition", "no value"), + "Data": data.get('data', "no value") + } + print(payload) + + return 'Received headers: {}'.format(payload) +``` + +{{% /tab %}} +{{< /tabs >}} + +## Returned headers + +When a trigger occurs, Viam sends a HTTP request to the URL you specified for the trigger: + + +| Trigger type | HTTP Method | +| ------------ | ----------- | +| `part_data_ingested` | POST | +| `conditional_data_ingested` | POST | + +The request includes the following headers: + + +| Header Key | Description | +| ---------- | ----------- | +| `Org-Id` | The ID of the organization that triggered the request. | +| `Location-Id` | The location of the machine that triggered the request. | +| `Part-Id` | The part of the machine that triggered the request. | +| `Robot-Id` | The ID of the machine that triggered the request. | + +The request body includes the following data: + + +| Data Key | Description | Trigger types | +| -------- | ----------- | ------------- | +| `component_name` | The name of the component for which data was ingested. | `part_data_ingested`, `conditional_data_ingested` | +| `component_type` | The type of component for which data was ingested. | `part_data_ingested`, `conditional_data_ingested` | +| `method_name` | The name of the method from which data was ingested. | `part_data_ingested`, `conditional_data_ingested` | +| `min_time_received` | Indicates the earliest time a piece of data was received. | `part_data_ingested` | +| `max_time_received` | Indicates the latest time a piece of data was received. | `part_data_ingested` | +| `method_name` | The name of the method that triggered the request. | `conditional_data_ingested` | +| `machine_name` | The name of the machine that triggered the request. | `part_data_ingested`, `conditional_data_ingested` | +| `location_name` | The location of the machine that triggered the request. | `part_data_ingested`, `conditional_data_ingested` | +| `org_name` | The name of the organization that triggered the request. | `part_data_ingested`, `conditional_data_ingested` | +| `file_id` | The id of the file that was ingested. | `part_data_ingested` | +| `trigger_condition` | The condition that triggered the request. | `conditional_data_ingested` | +| `data` | The ingested sensor data. Includes `metadata` with `received_at` and `requested_at` timestamps and `data` in the form `map[string]any`. 
| `part_data_ingested`, `conditional_data_ingested` (sensor data) | diff --git a/docs/data-ai/data/advanced/conditional-sync.md b/docs/data-ai/data/advanced/conditional-sync.md index 85cf099dc5..48fc585951 100644 --- a/docs/data-ai/data/advanced/conditional-sync.md +++ b/docs/data-ai/data/advanced/conditional-sync.md @@ -1,9 +1,307 @@ --- +title: "Conditional cloud sync" linkTitle: "Conditional sync" -title: "Conditional sync" -weight: 20 -layout: "docs" +description: "Trigger cloud sync to sync captured data when custom conditions are met." type: "docs" -no_list: true -description: "TODO" +weight: 20 +tags: ["data management", "cloud", "sync"] +images: ["/services/icons/data-cloud-sync.svg"] +icon: true +aliases: + - /data/trigger-sync/ + - /how-tos/trigger-sync/ + - /services/data/trigger-sync/ +languages: [] +viamresources: ["sensor", "data_manager"] +platformarea: ["data", "registry"] +date: "2024-12-04" --- + +You may want to sync data only when a certain logic condition is met, instead of at a regular time interval. +For example, if you rely on mobile data but have intermittent WiFi connection in certain locations or at certain times of the day, you may want to trigger sync to only occur when these conditions are met. +Or, you may want to trigger sync only when your machine detects an object of a certain color. +You can use the [trigger-sync-examples module](https://github.com/viam-labs/trigger-sync-examples-v2) if one of these examples is what you are looking for. + +If you need different logic, you can create a modular sensor that determines if the conditions for sync are met or not. +This page will show you the implementation of a sensor which only allows sync during a defined time interval. +You can use it as the basis of your own custom logic. + +{{% alert title="In this page" color="tip" %}} + +{{% toc %}} + +{{% /alert %}} + +## Prerequisites + +{{% expand "A running machine connected to the Viam app. Click to see instructions." %}} + +{{% snippet "setup-both.md" %}} + +{{% /expand%}} + +{{< expand "Enable data capture and sync on your machine." >}} + +Add the [data management service](/services/data/): + +On your machine's **CONFIGURE** tab, click the **+** icon next to your machine part in the left-hand menu and select **Service**. + +Select the `data management / RDK` service and click **Create**. +You can leave the default data sync interval of `0.1` minutes to sync every 6 seconds. +Also leave both **Capturing** and **Syncing** toggles in the "on" position. + +{{< /expand >}} + +{{% expand "Create a sensor module. Click to see instructions." %}} + +Start by [creating a sensor module](/how-tos/sensor-module/). +Your sensor should have access to the information you need to determine if your machine should sync or not. +Based on that data, make the sensor return true when the machine should sync and false when it should not. +For example, if your want your machine to return data only during a specific time interval, your sensor needs to be able to access the time as well as be configured with the time interval during which you would like to sync data. +It can then return true during the specified sync time interval and false otherwise. + +{{% /expand%}} + +## Return `should_sync` as a reading from a sensor + +If the builtin data manager is configured with a sync sensor, the data manager will check the sensor's `Readings` method for a response with a "should_sync" key. 
+ +The following example returns `"should_sync": true` if the current time is in a specified time window, and `"should_sync": false` otherwise. + +```go {class="line-numbers linkable-line-numbers" data-line="26,31,32,37"} +func (s *timeSyncer) Readings(context.Context, map[string]interface{}) (map[string]interface{}, error) { + currentTime := time.Now() + var hStart, mStart, sStart, hEnd, mEnd, sEnd int + n, err := fmt.Sscanf(s.start, "%d:%d:%d", &hStart, &mStart, &sStart) + + if err != nil || n != 3 { + s.logger.Error("Start time is not in the format HH:MM:SS.") + return nil, err + } + m, err := fmt.Sscanf(s.end, "%d:%d:%d", &hEnd, &mEnd, &sEnd) + if err != nil || m != 3 { + s.logger.Error("End time is not in the format HH:MM:SS.") + return nil, err + } + + zone, err := time.LoadLocation(s.zone) + if err != nil { + s.logger.Error("Time zone cannot be loaded: ", s.zone) + } + + startTime := time.Date(currentTime.Year(), currentTime.Month(), currentTime.Day(), + hStart, mStart, sStart, 0, zone) + endTime := time.Date(currentTime.Year(), currentTime.Month(), currentTime.Day(), + hEnd, mEnd, sEnd, 0, zone) + + readings := map[string]interface{}{"should_sync": false} + readings["time"] = currentTime.String() + // If it is between the start and end time, sync. + if currentTime.After(startTime) && currentTime.Before(endTime) { + s.logger.Debug("Syncing") + readings["should_sync"] = true + return readings, nil + } + + // Otherwise, do not sync. + s.logger.Debug("Not syncing. Current time not in sync window: " + currentTime.String()) + return readings, nil +} +``` + +{{< alert title="Note" color="note" >}} +You can return other readings alongside the `should_sync` value. +{{< /alert >}} + +If you wish to see more context, see the entire [implementation of the sensor on GitHub](https://github.com/viam-labs/sync-at-time/blob/main/timesyncsensor/timesyncsensor.go). + +For additional examples, see the `Readings` function of the [time-interval-trigger code](https://github.com/viam-labs/trigger-sync-examples-v2/blob/main/time-interval-trigger/selective_sync/selective_sync.go) and the [color-trigger code](https://github.com/viam-labs/trigger-sync-examples-v2/blob/main/color-trigger/selective_sync/selective_sync.go). + +## Add your sensor to determine when to sync + +Add your module to your machine and configure it. +In this example we will continue to use [`sync-at-time:timesyncsensor`](https://app.viam.com/module/naomi/sync-at-time). +You will need to follow the same steps with your module: + +{{< table >}} +{{% tablestep %}} +**1. Add the sensor to your machine** + +On your machine's **CONFIGURE** page, click the **+** button next to your machine part in the left menu. +Select **Component**, then search for and select the `sync-at-time:timesyncsensor` model provided by the [`sync-at-time` module](https://app.viam.com/module/naomi/sync-at-time). + +Click **Add module**, then enter a name or use the suggested name for your sensor and click **Create**. + +{{% /tablestep %}} + + + +{{% tablestep link="https://github.com/viam-labs/sync-at-time" %}} +**2. 
Configure your time frame** + +Go to the new component panel and copy and paste the following attribute template into your sensor’s attributes field: + +{{< tabs >}} +{{% tab name="Template" %}} + +```json +{ + "start": "HH:MM:SS", + "end": "HH:MM:SS", + "zone": "" +} +``` + +{{% /tab %}} +{{% tab name="Example" %}} + +```json +{ + "start": "18:29:00", + "end": "18:30:00", + "zone": "CET" +} +``` + +{{% /tab %}} +{{< /tabs >}} + +The following attributes are available for the `naomi:sync-at-time:timesyncsensor` sensor: + +
+ + +| Name | Type | Required? | Description | +| ------- | ------ | --------- | ----------- | +| `start` | string | **Required** | The start time for the time frame during which you want to sync. Example: `"14:10:00"`. | +| `end` | string | **Required** | The end of the sync time frame, for example: `"15:35:00"`. | +| `zone` | string | **Required** | The time zone for the `start` and `end` time, for example: `"CET"`. | + +{{< /tablestep >}} +{{< /table >}} + +
+
+ +In the next step you will configure the data manager to take the sensor into account when syncing. + +## Configure the data manager to sync based on sensor + +On your machine's **CONFIGURE** tab, switch to **JSON** mode and add a `selective_syncer_name` with the name for the sensor you configured and add the sensor to the `depends_on` field: + +{{< tabs >}} +{{% tab name="JSON Template" %}} + +```json {class="line-numbers linkable-line-numbers" data-line="9,14"} +{ + "name": "data_manager-1", + "type": "data_manager", + "namespace": "rdk", + "attributes": { + "additional_sync_paths": [], + "capture_dir": "", + "capture_disabled": false, + "selective_syncer_name": "", + "sync_disabled": false, + "sync_interval_mins": 0.1, + "tags": [] + }, + "depends_on": [""] +} +``` + +{{% /tab %}} +{{% tab name="JSON Example" %}} + +```json {class="line-numbers linkable-line-numbers" data-line="7,12"} +{ + "name": "datamanager", + "type": "data_manager", + "namespace": "rdk", + "attributes": { + "additional_sync_paths": [], + "selective_syncer_name": "timesensor", + "sync_interval_mins": 0.1, + "capture_dir": "", + "tags": [] + }, + "depends_on": ["timesensor"] +} +``` + +{{% /tab %}} +{{< /tabs >}} + +{{% expand "Click to view a full configuration example" %}} + +```json {class="line-numbers linkable-line-numbers" data-line="12-22,25-37,40-45"} +{ + "components": [ + { + "name": "camera-1", + "namespace": "rdk", + "type": "camera", + "model": "webcam", + "attributes": { + "video_path": "0x114000005a39331" + } + }, + { + "name": "timesensor", + "namespace": "rdk", + "type": "sensor", + "model": "naomi:sync-at-time:timesyncsensor", + "attributes": { + "start": "18:29:00", + "end": "18:30:00", + "zone": "CET" + } + } + ], + "services": [ + { + "name": "data_manager-1", + "namespace": "rdk", + "type": "data_manager", + "attributes": { + "capture_dir": "", + "tags": [], + "additional_sync_paths": [], + "selective_syncer_name": "timesensor", + "sync_interval_mins": 0.1 + }, + "depends_on": ["timesensor"] + } + ], + "modules": [ + { + "type": "registry", + "name": "naomi_sync-at-time", + "module_id": "naomi:sync-at-time", + "version": "2.0.0" + } + ] +} +``` + +{{% /expand%}} + +You have now configured sync to happen during a specific time slot. + +## Test your sync configuration + +To test your setup, [configure a webcam](/components/camera/webcam/) or another component and [enable data capture on the component](/services/data/#configuration). +Make sure to physically connect any hardware parts to the computer controlling your machine. +For a camera component, use the `ReadImage` method. +The data manager will now capture data. +Go to the [**CONTROL** tab](/fleet/control/). +You should see the sensor. +Click on `GetReadings`. + +{{}} + +If you are in the time frame for sync, the time sync sensor will return true. + +You can confirm that if data is currently syncing by going to the [**Data** tab](https://app.viam.com/data/view). +If you are not in the time frame for sync, adjust the configuration of your time sync sensor. +Then check again on the **CONTROL** and **Data** tab to confirm data is syncing. 
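+
+You can also check the reading programmatically with the Python SDK.
+The following is a minimal sketch; the machine address and API key values are placeholders, and `timesensor` is the sensor name from the example configuration above:
+
+```python {class="line-numbers linkable-line-numbers"}
+import asyncio
+
+from viam.components.sensor import Sensor
+from viam.robot.client import RobotClient
+
+
+async def main():
+    opts = RobotClient.Options.with_api_key(
+        # Replace with your machine's API key and API key ID
+        api_key="<API-KEY>",
+        api_key_id="<API-KEY-ID>",
+    )
+    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)
+
+    # "timesensor" is the sensor name used in the example configuration
+    time_sensor = Sensor.from_robot(machine, "timesensor")
+    readings = await time_sensor.get_readings()
+    print("should_sync:", readings.get("should_sync"))
+
+    await machine.close()
+
+
+if __name__ == "__main__":
+    asyncio.run(main())
+```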
diff --git a/docs/data-ai/data/advanced/filter-before-sync.md b/docs/data-ai/data/advanced/filter-before-sync.md index d283ffb4c1..424831eff3 100644 --- a/docs/data-ai/data/advanced/filter-before-sync.md +++ b/docs/data-ai/data/advanced/filter-before-sync.md @@ -4,6 +4,112 @@ title: "Filter data before sync" weight: 10 layout: "docs" type: "docs" -no_list: true -description: "TODO" +description: "Use filtering to collect and sync only certain images." --- + +You can use filtering to selectively capture images using a machine learning (ML) model, for example to only capture images with people or specific objects in them. + +Contributors have written several filtering {{< glossary_tooltip term_id="module" text="modules" >}} that you can use to filter image capture. +The following steps use the [`filtered_camera`](https://github.com/erh/filtered_camera) module: + +{{< table >}} +{{% tablestep link="/services/ml/"%}} +{{}} +**1. Add an ML model service to your machine** + +Add an ML model service on your machine that is compatible with the ML model you want to use, for example [TFLite CPU](https://github.com/viam-modules/mlmodel-tflite). + +{{% /tablestep %}} +{{% tablestep link="/services/vision/"%}} +{{}} +**2. Select a suitable ML model** + +Click **Select model** on the ML model service configuration panel, then select an [existing model](https://app.viam.com/registry?type=ML+Model) you want to use, or click **Upload a new model** to upload your own. +If you're not sure which model to use, you can use [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO) from the **Registry**, which can detect people and animals, among other things. + +{{% /tablestep %}} +{{% tablestep link="/services/vision/"%}} +{{}} +**3. Add a vision service to use with the ML model** + +You can think of the vision service as the bridge between the ML model service and the output from your camera. + +Add and configure the `vision / ML model` service on your machine. +From the **Select model** dropdown, select the name of your ML model service (for example, `mlmodel-1`). + +{{% /tablestep %}} +{{% tablestep %}} +{{}} +**4. Configure the filtered camera** + +The `filtered-camera` {{< glossary_tooltip term_id="modular-resource" text="modular component" >}} pulls the stream of images from the camera you configured earlier, and applies the vision service to it. + +Configure a `filtered-camera` component on your machine, following the [attribute guide in the README](https://github.com/erh/filtered_camera?tab=readme-ov-file#configure-your-filtered-camera). +Use the name of the camera you configured in the first part of this guide as the `"camera"` to pull images from, and select the name of the vision service you just configured as your `"vision"` service. +Then add all or some of the labels your ML model uses as classifications or detections in `"classifications"` or `"objects"`. + +For example, if you are using the `EfficientDet-COCO` model, you could use a configuration like the following to only capture images when a person is detected with more than 60% confidence in your camera stream. + +```json {class="line-numbers linkable-line-numbers"} +{ + "window_seconds": 0, + "objects": { + "Person": 0.8 + }, + "camera": "camera-1", + "vision": "vision-1" +} +``` + +Additionally, you can also add a buffer window with `window_seconds` which controls the duration of a buffer of images captured prior to a successful match. 
+If you were to set `window_seconds` to `3`, the camera would also capture and sync images from the 3 seconds before a person appeared in the camera stream. + +{{% /tablestep %}} +{{% tablestep %}} +{{}} +**5. Configure data capture and sync on the filtered camera** + +Configure data capture and sync on the filtered camera just as you did before for the physical camera. +The filtered camera will only capture image data that passes the filters you configured in the previous step. + +Turn off data capture on your original camera if you haven't already, so that you don't capture duplicate or unfiltered images. + +{{% /tablestep %}} +{{% tablestep %}} +**6. Save to start capturing** + +Save the config. +With cloud sync enabled, captured data is automatically uploaded to the Viam app after a short delay. + +{{% /tablestep %}} +{{% tablestep %}} + +{{}} +**7. View filtered data in the Viam app** + +Once you save your configuration, place something that is part of your trained ML model within view of your camera. + +Images that pass your filter will be captured and will sync at the specified sync interval, which may mean you have to wait and then refresh the page for data to appear. +Your images will begin to appear under the **DATA** tab. + +If no data appears after the sync interval, check the [**Logs**](/cloud/machines/#logs) and ensure that the condition for filtering is met. +You can test the vision service from the [**CONTROL** tab](/cloud/machines/#control) to see its classifications and detections live. + +{{% /tablestep %}} +{{% tablestep %}} +{{}} +**7. (Optional) Trigger sync with custom logic** + +By default, the captured data syncs at the regular interval you specified in the data capture config. +If you need to trigger sync in a different way, see [Conditional cloud sync](/how-tos/conditional-sync/) for a documented example of syncing data only at certain times of day. + +{{% /tablestep %}} +{{< /table >}} + +## Stop data capture on the filtered camera + +If this is a test project, make sure you stop data capture to avoid [incurring fees](https://www.viam.com/product/pricing) for capturing large amounts of test data. + +In the **Data capture** section of your filtered camera's configuration, toggle the switch to **Off**. + +Click the **Save** button in the top right corner of the page to save your config. diff --git a/docs/data-ai/data/alert-on-data.md b/docs/data-ai/data/alert-on-data.md deleted file mode 100644 index ca2cb46311..0000000000 --- a/docs/data-ai/data/alert-on-data.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -linkTitle: "Alert on data" -title: "Alert on data" -weight: 35 -layout: "docs" -type: "docs" -no_list: true -description: "TODO" ---- diff --git a/docs/data-ai/data/export.md b/docs/data-ai/data/export.md index 713fa40b32..0d0131bbc0 100644 --- a/docs/data-ai/data/export.md +++ b/docs/data-ai/data/export.md @@ -2,8 +2,79 @@ linkTitle: "Export data" title: "Export data" weight: 40 -layout: "docs" +description: "Download data from the Viam app using the data client API or the Viam CLI." type: "docs" -no_list: true -description: "TODO" +tags: ["data management", "cloud", "sync"] +icon: true +images: ["/services/icons/data-capture.svg"] +aliases: + - /manage/data/export/ + - /data/export/ + - /services/data/export/ +viamresources: ["sensor", "data_manager"] +platformarea: ["data", "cli"] +date: "2024-12-03" --- + +You can download machine data from cloud storage to your computer with the Viam CLI. 
+ +If you prefer to manage your data with code, see the [data client API documentation](/appendix/apis/data-client/). + +## Prerequisites + +{{< expand "Install the Viam CLI and authenticate." >}} +Install the Viam CLI using the option below that matches your system architecture: + +{{< readfile "/static/include/how-to/install-cli.md" >}} + +Then authenticate your CLI session with Viam using one of the following options: + +{{< readfile "/static/include/how-to/auth-cli.md" >}} + +{{< /expand >}} + +## Export data with the Viam CLI + +To export your data from the cloud using the Viam CLI: + +{{< table >}} +{{% tablestep %}} +**1. Filter the data you want to download** + +Navigate to the [**DATA** page in the Viam app](https://app.viam.com/data/view). + +Use the filters on the left side of the page to filter only the data you wish to export. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Copy the export command from the DATA page** + +In the upper right corner of the **DATA** page, click the **Export** button. + +Click **Copy export command**. +This copies the command, including your org ID and the filters you selected, to your clipboard. + +{{% /tablestep %}} +{{% tablestep link="/cli/#data" %}} +**3. Run the command** + +Run the copied command in a terminal: + +```sh {class="command-line" data-prompt="$"} +viam data export --org-ids= --data-type= --mime-types= --destination=. +``` + +This command uses the Viam CLI to download the data onto your computer based on the search criteria you select in the Viam app. + +By default, the command creates two new directories named `data` and `metadata` in the current directory and downloads the specified data into the `data` folder and metadata, like bounding box information and labels, in JSON format into the `metadata` folder. +If you want to store the data in a different location, change the specified folder with the [`--destination` flag](/cli/#named-arguments). + +Once the command has finished running and downloading the data, you can view and use the data locally. + +Since data is downloaded in parallel, the order is not guaranteed. +Sort your folder by filename in order to see them in chronological order. + +{{% /tablestep %}} +{{< /table >}}
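+
+For example, a filled-in command might look like the following; the organization ID is a placeholder, and the data type and MIME types are example filter values:
+
+```sh {class="command-line" data-prompt="$"}
+viam data export --org-ids=<ORG-ID> --data-type=binary \
+  --mime-types=image/jpeg,image/png --destination=.
+```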
+ +You can see more information about exporting data in the [Viam CLI documentation](/cli/#data). diff --git a/docs/data-ai/data/query.md b/docs/data-ai/data/query.md index 1919558fa3..744bdb4b1a 100644 --- a/docs/data-ai/data/query.md +++ b/docs/data-ai/data/query.md @@ -4,6 +4,279 @@ title: "Query data" weight: 20 layout: "docs" type: "docs" -no_list: true -description: "TODO" +aliases: + - /manage/data/query/ + - /data/query/ + - /use-cases/sensor-data-query/ + - /use-cases/sensor-data-query-with-third-party-tools/ +languages: [] +viamresources: ["sensor", "data_manager"] +platformarea: ["data", "core"] +date: "2024-12-03" +description: "Query sensor data that you have synced to the Viam app using the Viam app with SQL or MQL." --- + +You can use the data management service to [capture sensor data](/how-tos/collect-sensor-data/) from any machine and sync that data to the cloud. +Then, you can follow the steps on this page to query it using {{< glossary_tooltip term_id="sql" text="SQL" >}} or {{< glossary_tooltip term_id="mql" text="MQL" >}}. +For example, you can configure data capture for several sensors on one machine, or for several sensors across multiple machines, to report the ambient operating temperature. +You can then run queries against that data to search for outliers or edge cases, to analyze how the ambient temperature affects your machines' operation. + +- **SQL:** For querying captured data, Viam supports the [MongoDB Atlas SQL dialect](https://www.mongodb.com/docs/atlas/data-federation/query/sql/query-with-asql-statements/), which supports standard SQL query syntax in addition to Atlas-specific capabilities such as `FLATTEN` and `UNWIND`. + For more information, see the [MongoDB Atlas SQL language reference](https://www.mongodb.com/docs/atlas/data-federation/query/sql/language-reference/). + +- **MQL**: Viam also supports the [MongoDB Query language](https://www.mongodb.com/docs/manual/tutorial/query-documents/) for querying captured data from MQL-compatible clients such as `mongosh` or MongoDB Compass. + +## Query data in the Viam app + +### Prerequisites + +{{% expand "Captured sensor data. Click to see instructions." %}} + +Follow the guide to [capture sensor data](/how-tos/collect-sensor-data/). + +{{% /expand%}} + +### Query from the app + +Once your data has synced, you can query your data from within the Viam app using {{< glossary_tooltip term_id="sql" text="SQL" >}} or {{< glossary_tooltip term_id="mql" text="MQL" >}}. + +You must have the [owner role](/cloud/rbac/) in order to query data in the Viam app. + +{{< table >}} +{{% tablestep %}} +**1. Query with SQL or MQL** + +Navigate to the [**Query** page](https://app.viam.com/data/query). +Then, select either **SQL** or **MQL** from the **Query mode** dropdown menu on the right-hand side. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Run your query** + +This example query returns 5 readings from a component called `my-sensor`: + +{{< tabs >}} +{{% tab name="SQL" %}} + +```sql +SELECT * FROM readings +WHERE component_name = 'my-sensor' LIMIT 5 +``` + +{{% /tab %}} +{{% tab name="MQL" %}} + +```mql +[ + { "$match": { "component_name": "my-sensor" } }, + { "$limit": 5 } +] +``` + +{{% /tab %}} +{{< /tabs >}} +{{% /tablestep %}} +{{% tablestep %}} +**3. Review results** + +Click **Run query** when ready to perform your query and get matching results. +Query results are displayed as a [JSON array](https://json-schema.org/understanding-json-schema/reference/array) below your query. 
+ +{{% expand "See examples" %}} + +- The following shows a SQL query that filters by component name and specific column names, and its returned results: + + ```sh {class="command-line" data-prompt="$" data-output="3-80"} + SELECT time_received, data, tags FROM readings + WHERE component_name = 'PM_sensor' LIMIT 2 + [ + { + "time_received": "2024-07-30 00:04:02.144 +0000 UTC", + "data": { + "readings": { + "units": "μg/m³", + "pm_10": 7.6, + "pm_2.5": 5.7 + } + }, + "tags": [ + "air-quality" + ] + }, + { + "time_received": "2024-07-30 00:37:22.192 +0000 UTC", + "data": { + "readings": { + "pm_2.5": 9.3, + "units": "μg/m³", + "pm_10": 11.5 + } + }, + "tags": [ + "air-quality" + ] + } + ] + ``` + +- The following shows a SQL query that returns a count of records matching the search criteria: + + ```sh {class="command-line" data-prompt="$" data-output="3-80"} + SELECT count(*) FROM readings + WHERE component_name = 'PM_sensor' + [ + { + "_1": 111550 + } + ] + ``` + +For more information on MQL syntax, see the [MQL (MongoDB Query Language)](https://www.mongodb.com/docs/manual/tutorial/query-documents/) documentation. + +{{% /expand%}} + +{{% /tablestep %}} +{{< /table >}} + +## Query data using third-party tools + +### Prerequisites + +{{% expand "Captured sensor data. Click to see instructions." %}} + +Follow the guide to [capture sensor data](/how-tos/collect-sensor-data/). + +{{% /expand%}} + +{{% expand "The Viam CLI to set up data query. Click to see instructions." %}} + +You must have the Viam CLI installed to configure querying with third-party tools. + +{{< readfile "/static/include/how-to/install-cli.md" >}} + +{{% /expand%}} + +{{% expand "mongosh or another third-party tool for querying data. Click to see instructions." %}} + +[Download the `mongosh` shell](https://www.mongodb.com/try/download/shell) or another third-party tool that can connect to a MongoDB data source to follow along. +See the [`mongosh` documentation](https://www.mongodb.com/docs/mongodb-shell/) for more information. + +{{% /expand%}} + +### Configure data query + +If you want to query data from third party tools, you have to configure data query to obtain the credentials you need to connect to the third party service. + +{{< readfile "/static/include/how-to/query-data.md" >}} + +### Query data using third-party tools + +You can use third-party tools, such as the [`mongosh` shell](https://www.mongodb.com/docs/mongodb-shell/) or [MongoDB Compass](https://www.mongodb.com/docs/compass/current/), to query captured sensor data. + +{{< table >}} +{{% tablestep link="/how-tos/sensor-data-query-with-third-party-tools/#configure-data-query"%}} +**1. Connect to your Viam organization's data** + +Run the following command to connect to your Viam organization's MongoDB Atlas instance from `mongosh` using the connection URI you obtained during query configuration: + +```sh {class="command-line" data-prompt=">"} +mongosh "mongodb://db-user-abcd1e2f-a1b2-3c45-de6f-ab123456c123:YOUR-PASSWORD-HERE@data-federation-abcd1e2f-a1b2-3c45-de6f-ab123456c123-0z9yx.a.query.mongodb.net/?ssl=true&authSource=admin" +``` + +{{% /tablestep %}} +{{% tablestep %}} +**2. Query data from a compatible client** + +Once connected, you can run SQL or MQL statements to query captured data directly. + +The following query searches the `sensorData` database and `readings` collection, and gets sensor readings from an ultrasonic sensor on a specific `robot_id` where the recorded `distance` measurement is greater than `.2` meters. 
+
+{{< tabs >}}
+{{% tab name="MQL" %}}
+
+The following MQL query counts the number of sensor readings where the `distance` value is above `0.2`, using the [MongoDB query language](https://www.mongodb.com/docs/manual/tutorial/query-documents/):
+
+```mongodb {class="command-line" data-prompt=">" data-output="10"}
+use sensorData
+db.readings.aggregate(
+    [
+        { $match: {
+            'robot_id': 'abcdef12-abcd-abcd-abcd-abcdef123456',
+            'component_name': 'my-ultrasonic-sensor',
+            'data.readings.distance': { $gt: .2 } } },
+        { $count: 'numStanding' }
+    ] )
+[ { numStanding: 215 } ]
+```
+
+{{% /tab %}}
+{{% tab name="SQL" %}}
+
+The following query uses the MongoDB [`$sql` aggregation pipeline stage](https://www.mongodb.com/docs/atlas/data-federation/supported-unsupported/pipeline/sql/):
+
+```mongodb {class="command-line" data-prompt=">" data-output="11"}
+use sensorData
+db.aggregate(
+[
+    { $sql: {
+        statement: "select count(*) as numStanding from readings \
+            where robot_id = 'abcdef12-abcd-abcd-abcd-abcdef123456' and \
+            component_name = 'my-ultrasonic-sensor' and (CAST (data.readings.distance AS DOUBLE)) > 0.2",
+        format: "jdbc"
+    }}
+] )
+[ { '': { numStanding: 215 } } ]
+```
+
+{{< alert title="Tip" color="tip" >}}
+If you use a data field that is named the same as a [reserved SQL keyword](https://en.wikipedia.org/wiki/List_of_SQL_reserved_words), such as `value` or `position`, you must escape that field name in your query using backticks ( \` ).
+For example, to query against a field named `value` which is a subfield of the `data` field in the `readings` collection, you would use:
+
+```mongodb {class="command-line" data-prompt=">"}
+select data.`value` from readings
+```
+
+See the [MongoDB Atlas Documentation](https://www.mongodb.com/docs/atlas/data-federation/query/sql/language-reference/#compatability-and-limitations) for more information.
+
+{{< /alert >}}
+
+{{% /tab %}}
+{{< /tabs >}}
+
+{{< expand "Need to query by date? Click here." >}}
+
+##### Query by date
+
+When using MQL to query your data by date or time range, you can optimize query performance by avoiding the MongoDB `$toDate` expression, using the [BSON `date` type](https://www.mongodb.com/docs/manual/reference/bson-types/#date) instead.
+
+For example, use the following query to search by a date range in the `mongosh` shell, using the JavaScript `Date()` constructor to specify an explicit start timestamp and the current time as the end timestamp:
+
+```mongodb {class="command-line" data-prompt=">"}
+// Switch to sensorData database:
+use sensorData
+
+// Set desired start and end times:
+const startTime = new Date('2024-02-10T19:45:07.000Z')
+const endTime = new Date()
+
+// Run query using $match:
+db.readings.aggregate(
+    [
+        { $match: {
+            time_received: {
+                $gte: startTime,
+                $lte: endTime }
+        } }
+    ] )
+```
+
+{{< /expand>}}
+
+{{% /tablestep %}}
+{{< /table >}}
+
+For information on connecting to your Atlas instance from other MQL clients, see the MongoDB Atlas [Connect to your Cluster Tutorial](https://www.mongodb.com/docs/atlas/tutorial/connect-to-your-cluster/).
+
+On top of querying sensor data with third-party tools, you can also [query it with the Python SDK](/data-ai/reference/data-client/) or [visualize it](/data-ai/data/visualize/).
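+
+If you prefer to script these queries instead of running them interactively, you can use any MongoDB-compatible driver.
+The following minimal sketch runs the same `numStanding` aggregation from Python using the community `pymongo` driver (install with `pip install pymongo`); this is an illustrative assumption, not a Viam-specific API.
+It reuses the placeholder connection URI, `robot_id`, and password from the examples above, which you would replace with your own values:
+
+```python {class="line-numbers linkable-line-numbers"}
+from pymongo import MongoClient
+
+# Connection URI obtained during data query configuration (placeholder values):
+uri = (
+    "mongodb://db-user-abcd1e2f-a1b2-3c45-de6f-ab123456c123:YOUR-PASSWORD-HERE"
+    "@data-federation-abcd1e2f-a1b2-3c45-de6f-ab123456c123-0z9yx.a.query.mongodb.net"
+    "/?ssl=true&authSource=admin"
+)
+
+client = MongoClient(uri)
+db = client["sensorData"]
+
+# Same aggregation as the mongosh example above: count ultrasonic sensor
+# readings with a distance greater than 0.2 meters.
+pipeline = [
+    {"$match": {
+        "robot_id": "abcdef12-abcd-abcd-abcd-abcdef123456",
+        "component_name": "my-ultrasonic-sensor",
+        "data.readings.distance": {"$gt": 0.2},
+    }},
+    {"$count": "numStanding"},
+]
+
+for doc in db["readings"].aggregate(pipeline):
+    print(doc)  # for example: {'numStanding': 215}
+```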
diff --git a/docs/data-ai/data/visualize.md b/docs/data-ai/data/visualize.md
index 50e52c5b54..9ced06a9de 100644
--- a/docs/data-ai/data/visualize.md
+++ b/docs/data-ai/data/visualize.md
@@ -4,6 +4,285 @@ title: "Visualize data"
 weight: 20
 layout: "docs"
 type: "docs"
-no_list: true
-description: "TODO"
+images: ["/services/icons/data-visualization.svg"]
+icon: true
+aliases:
+  - /data/visualize/
+  - /use-cases/sensor-data-visualize/
+  - /how-tos/sensor-data-visualize/
+viamresources: ["sensor", "data_manager"]
+platformarea: ["data", "fleet"]
+date: "2024-12-04"
+description: "Use teleop or Grafana to visualize sensor data from the Viam app."
 ---
+
+Once you have used the data management service to [capture data](/data-ai/data/get-started/capture-sync/), you can visualize your data with either the Viam app's **TELEOP** page or a variety of third-party tools, including Grafana, Tableau, Google's Looker Studio, and more.
+
+## Teleop
+
+Visualize sensor data on a widget with the Viam app's **TELEOP** page.
+
+### Prerequisites
+
+{{% expand "A configured machine with sensor components" %}}
+
+Make sure your machine has at least one of the following components:
+
+- A sensor or movement sensor
+
+See [configure a machine](/how-tos/configure/) for more information.
+
+{{% /expand%}}
+
+### Configure a workspace with a sensor widget
+
+{{< table >}}
+{{% tablestep %}}
+**1. Create a workspace in the Viam app**
+
+Log in to the [Viam app](https://app.viam.com/).
+
+Navigate to the **FLEET** page's **TELEOP** tab.
+Create a workspace by clicking **+ Create workspace**.
+Give it a name.
+
+{{}}
+
+{{% /tablestep %}}
+{{% tablestep %}}
+**2. Add widgets**
+
+Click **Add widget** and select the appropriate widget for your machine: for example, a **GPS** widget for a movement sensor, or a **time series** or **stat** widget for a sensor.
+Repeat as many times as necessary.
+
+{{% /tablestep %}}
+{{% tablestep %}}
+**3. Select a machine**
+
+Next, connect your workspace to a machine.
+Select **Monitor** in the top right corner to leave editing mode.
+Click **Select machine** and select your configured machine.
+
+Your dashboard now shows the configured widgets with data from your machine.
+For example, a time series graph measuring noise over time:
+
+{{< imgproc src="/services/data/time-series.png" alt="Time series widget measuring noise over time." style="width:500px" resize="1200x" class="imgzoom fill" >}}
+
+{{% /tablestep %}}
+{{< /table >}}
+
+## Third-party tools
+
+Configure data query and use a third-party visualization tool like Grafana to visualize your sensor data.
+
+### Prerequisites
+
+{{% expand "Captured sensor data. Click to see instructions." %}}
+
+Follow the docs to [capture data](/data-ai/data/get-started/capture-sync/) from a sensor.
+
+{{% /expand%}}
+
+{{% expand "The Viam CLI to set up data query. Click to see instructions." %}}
+
+You must have the Viam CLI installed to configure querying with third-party tools.
+
+{{< readfile "/static/include/how-to/install-cli.md" >}}
+
+{{% /expand%}}
+
+### Configure data query
+
+To query data from third-party tools, you must configure data query to obtain the credentials you need to connect to the third-party service.
+
+{{< readfile "/static/include/how-to/query-data.md" >}}
+
+### Visualize data with third-party tools
+
+When you sync captured data to Viam, that data is stored in the Viam organization's MongoDB Atlas Data Federation instance.
+You can use third-party visualization tools, such as Grafana, to visualize your data. +Your chosen third-party visualization tool must be able to connect to a [MongoDB Atlas Data Federation](https://www.mongodb.com/docs/atlas/data-federation/query/sql/connect/) instance as its data store. + +{{}} + +Select a tab below to learn how to configure your visualization tool for use with Viam: + +{{< tabs >}} +{{< tab name="Grafana" >}} + +{{< table >}} +{{% tablestep %}} +**1. Choose Grafana instance** + +[Install](https://grafana.com/docs/grafana/latest/setup-grafana/installation/) or set up Grafana. +You can use either a local instance of Grafana Enterprise or Grafana Cloud, and can use the free trial version of Grafana Cloud if desired. + +{{% /tablestep %}} +{{% tablestep %}} +**2. Install connector to MongoDB data source** + +Navigate to your Grafana web UI. +Go to **Connections > Add new connection** and add the [Grafana MongoDB data source](https://grafana.com/grafana/plugins/grafana-mongodb-datasource/) plugin to your Grafana instance. + +{{}} + +Install the datasource plugin. + +{{% /tablestep %}} +{{% tablestep %}} +**3. Configure a data connection** + +Navigate to the Grafana MongoDB data source that you just installed. +Select **Add new data source**. + +Enter the following information in the configuration UI for the plugin: + +- **Connection string**: Enter the following connection string, and replace `` with your database hostname as configured with the `viam data database configure` command, and replace `` with the desired database name to query. + For most use cases with Viam, this database name will be `sensorData`: + + ```sh {class="command-line" data-prompt="$"} + mongodb:///?directConnection=true&authSource=admin&tls=true + ``` + +- **User**: Enter the following username, substituting your organization ID as determined earlier, for ``: + + ```sh {class="command-line" data-prompt="$"} + db-user- + ``` + +- **Password**: Enter the password you provided earlier. + + {{}} + +{{< /tablestep >}} +{{% tablestep %}} +**4. Use Grafana for dashboards** + +With your data connection established, you can then build dashboards that provide insight into your data. + +Grafana additionally supports the ability to directly [query and transform your data](https://grafana.com/docs/grafana/latest/panels-visualizations/query-transform-data/) within a dashboard to generate more granular visualizations of specific data. +You might use this functionality to visualize only a single day's metrics, limit the visualization to a select machine or component, or to isolate an outlier in your reported data, for example. + +You can query your captured data within a Grafana dashboard using either {{< glossary_tooltip term_id="sql" text="SQL" >}} or {{< glossary_tooltip term_id="mql" text="MQL" >}}. + +For example, try the following query to obtain readings from a sensor, replacing `sensor-1` with the name of your component: + +```mql +sensorData.readings.aggregate([ + {$match: { + component_name: "sensor-1", + time_received: {$gte: ISODate(${__from})} + }}, + {$limit: 1000} + ] + ) +``` + +See the [guide on querying sensor data](/how-tos/sensor-data-query-with-third-party-tools/) for more information. + + + +{{% /tablestep %}} +{{< /table >}} + +{{% /tab %}} +{{< tab name="Other visualization tools" >}} + +{{< table >}} +{{% tablestep %}} +**1. 
Install connector to MongoDB data source**
+
+Some visualization clients are able to connect to the Viam MongoDB Atlas Data Federation instance natively, while others require that you install and configure an additional plugin or connector.
+For example, Tableau requires both the [Atlas SQL JDBC Driver](https://www.mongodb.com/try/download/jdbc-driver) as well as the [Tableau Connector](https://www.mongodb.com/try/download/tableau-connector) in order to successfully connect and access data.
+
+Check the documentation for your third-party visualization tool to be sure you have installed any additional software required to connect to a MongoDB Atlas Data Federation instance.
+
+{{% /tablestep %}}
+{{% tablestep %}}
+**2. Configure a data connection**
+
+Most third-party visualization tools require the _connection URI_ (also called the connection string) to that database server, and the _credentials_ to authenticate to that server in order to visualize your data.
+Some third-party tools instead require a _hostname_ and _database name_ of the database server.
+These take the following forms:
+
+{{< tabs >}}
+{{% tab name="Connection URI and credentials" %}}
+
+If your client supports a connection URI, use the following format and replace `YOUR-PASSWORD-HERE` with your database password as configured with the `viam data database configure` command:
+
+```sh {class="command-line" data-prompt="$"}
+mongodb://db-user-abcdef12-abcd-abcd-abcd-abcdef123456:YOUR-PASSWORD-HERE@data-federation-abcdef12-abcd-abcd-abcd-abcdef123456-e4irv.a.query.mongodb.net/?ssl=true&authSource=admin
+```
+
+You can also specify a desired database name in your connection URI, if desired.
+For example, to use the `sensorData` database, the default database name for uploaded sensor data, your connection string would resemble:
+
+```sh {class="command-line" data-prompt="$"}
+mongodb://db-user-abcdef12-abcd-abcd-abcd-abcdef123456:YOUR-PASSWORD-HERE@data-federation-abcdef12-abcd-abcd-abcd-abcdef123456-e4irv.a.query.mongodb.net/sensorData?ssl=true&authSource=admin
+```
+
+{{% /tab %}}
+{{% tab name="Hostname and database name" %}}
+
+If your client doesn't use a connection URI, you can supply the hostname and database name of the database server instead.
+Substitute the hostname returned from the `viam data database hostname` command for `<hostname>` and the desired database name to query for `<database-name>`:
+
+```sh {class="command-line" data-prompt="$"}
+mongodb://<hostname>/<database-name>?directConnection=true&authSource=admin&tls=true
+```
+
+For example, to use the `sensorData` database, the default name for uploaded data, your connection string would resemble:
+
+```sh {class="command-line" data-prompt="$"}
+mongodb://data-federation-abcdef12-abcd-abcd-abcd-abcdef123456-e4irv.a.query.mongodb.net/sensorData?directConnection=true&authSource=admin&tls=true
+```
+
+If you are using a connection URI, the hostname and database name are already included in the URI string.
+
+Your database user name takes the following form:
+
+```sh {class="command-line" data-prompt="$"}
+db-user-<YOUR-ORG-ID>
+```
+
+Substitute your organization ID for `<YOUR-ORG-ID>`.
+
+{{% /tab %}}
+{{< /tabs >}}
+
+{{% /tablestep %}}
+{{% tablestep %}}
+**3. Use visualization tools for dashboards**
+
+Some third-party visualization tools support the ability to directly query your data within their platform to generate more granular visualizations of specific data.
+You might use this functionality to visualize only a single day's metrics, limit the visualization to a select machine or component, or to isolate an outlier in your reported data, for example. + +While every third-party tool is different, you would generally query your data using either {{< glossary_tooltip term_id="sql" text="SQL" >}} or {{< glossary_tooltip term_id="mql" text="MQL" >}}. +See the [guide on querying sensor data](/how-tos/sensor-data-query-with-third-party-tools/) for more information. + + + +{{% /tablestep %}} +{{< /table >}} + +{{< /tab >}} +{{< /tabs >}} + +For more detailed instructions on using Grafana, including a full step-by-step configuration walkthrough, see [visualizing data with Grafana](/tutorials/services/visualize-data-grafana/). + +On top of visualizing sensor data with third-party tools, you can also [query it with the Python SDK](/appendix/apis/data-client/) or [query it with the Viam app](/data-ai/data/query/). + +To see full projects using visualization, check out these resources: + +{{< cards >}} +{{% card link="/tutorials/control/air-quality-fleet/" %}} +{{% manualcard link="https://www.viam.com/post/harnessing-the-power-of-tableau-to-visualize-sensor-data" img="services/data/tableau-preview.png" alt="Tableau dashboard" %}} + +### Visualize data with Tableau + +Turn a data dump into valuable insights that drive smarter decision-making and monitor sensor data in real-time. + +{{% /manualcard %}} +{{< /cards >}} diff --git a/docs/data-ai/get-started/capture-sync.md b/docs/data-ai/get-started/capture-sync.md index 4c0ed602f0..3e7cb583a7 100644 --- a/docs/data-ai/get-started/capture-sync.md +++ b/docs/data-ai/get-started/capture-sync.md @@ -1,9 +1,67 @@ --- linkTitle: "Capture edge data" title: "Capture and sync edge data" +tags: ["data management", "data", "services"] weight: 10 layout: "docs" type: "docs" -no_list: true -description: "TODO" +platformarea: ["data"] +description: "Capture data from a resource on your machine and sync the data to the cloud." +date: "2024-12-03" --- + +You can use data management service to capture and sync data from your machine to the cloud. +Once you have configured the data management service, you can specify the data you want to capture at a resource level. + +To configure data capture and cloud sync, you must have one of the following components and services configured on your machine: + +{{< readfile "/static/include/data/capture-supported.md" >}} + +## Configure the data management service + +To start, configure a data management service to capture and sync the resource data. + +From your machine's **CONFIGURE** tab in the [Viam app](https://app.viam.com), add the `data management` service. +On the panel that appears, configure data capture and sync attributes as applicable. +To both capture data and sync it to the cloud, keep both **Capturing** and **Syncing** switched on. + +Click the **Save** button in the top right corner of the page to save your config. + +{{< imgproc src="/tutorials/data-management/data-management-conf.png" alt="Data capture configuration card." resize="600x" >}} + +For more advanced attribute configuration information, see [Data management service configuration](/data-ai/reference/data/#data-management-service-configuration). + +## Configure data capture + +Scroll to the config card you wish to configure data capture and sync on. + +In the **Data capture** section: + +- Click the **Method** dropdown and select the method you want to capture. 
+- Set the frequency in hz, for example to `0.1` to capture an image every 10 seconds. + +For example, with a camera component capturing the `ReadImage` method every 3.03 seconds: + +{{< imgproc src="/tutorials/data-management/camera-data-capture.png" alt="Data capture configuration card." resize="600x" >}} + +Click the **Save** button in the top right corner of the page to save your config. + +For more advanced attribute configuration information, see [Resource data capture configuration](/data-ai/reference/data/#resource-data-capture-configuration). + +## Stop data capture + +If this is a test project, make sure you stop data capture to avoid charges for a large amount of unwanted data. + +In the **Data capture** section of your resource's configuration card, toggle the switch to **Off**. + +Click the **Save** button in the top right corner of the page to save your config. + +## View captured data + +To view all the captured data you have access to, go to the [**DATA** tab](https://app.viam.com/data/view) where you can filter by location, type of data, and more. + +You can also access data from a resource or machine part menu. + +## Next steps + +Now that you have captured data, you could [create a dataset](/data-ai/ai/create-dataset) and use this data to [train your own Machine Learning model](/data-ai/ai/train-tflite/) with the Viam platform. diff --git a/docs/data-ai/reference/data-client.md b/docs/data-ai/reference/data-client.md index f8e23e1057..ff74734765 100644 --- a/docs/data-ai/reference/data-client.md +++ b/docs/data-ai/reference/data-client.md @@ -4,5 +4,5 @@ linkTitle: "Data Client API" weight: 30 type: "docs" layout: "empty" -canonical: "/dev/apis/data-client/" +canonical: "/dev/reference/apis/data-client/" --- diff --git a/docs/data-ai/reference/data/_index.md b/docs/data-ai/reference/data/_index.md index c11448dd9e..c124e8ebfe 100644 --- a/docs/data-ai/reference/data/_index.md +++ b/docs/data-ai/reference/data/_index.md @@ -744,34 +744,7 @@ The following attributes are available for data capture configuration: The following components and services support data capture, for the following methods: -{{< tabs >}} -{{% tab name="viam-server" %}} - - -| Type | Method | -| ----------------------------------------------- | ------ | -| [Arm](/components/arm/) | `EndPosition`, `JointPositions` | -| [Board](/components/board/) | `Analogs`, `Gpios` | -| [Camera](/components/camera/) | `GetImages`, `ReadImage`, `NextPointCloud` | -| [Encoder](/components/encoder/) | `TicksCount` | -| [Gantry](/components/gantry/) | `Lengths`, `Position` | -| [Motor](/components/motor/) | `Position`, `IsPowered` | -| [Movement sensor](/components/movement-sensor/) | `AngularVelocity`, `CompassHeading`, `LinearAcceleration`, `LinearVelocity`, `Orientation`, `Position` | -| [Sensor](/components/sensor/) | `Readings` | -| [Servo](/components/servo/) | `Position` | -| [Vision service](/services/vision/) | `CaptureAllFromCamera` | - -{{% /tab %}} -{{% tab name="viam-micro-server" %}} - - -| Type | Method | -| ---- | ------ | -| [Movement Sensor](/components/movement-sensor/) | [`AngularVelocity`](/dev/reference/apis/components/movement-sensor/#getangularvelocity), [`LinearAcceleration`](/dev/reference/apis/components/movement-sensor/#getlinearacceleration), [`LinearVelocity`](/dev/reference/apis/components/movement-sensor/#getlinearvelocity) | -| [Sensor](/components/sensor/) | [`GetReadings`](/dev/reference/apis/components/sensor/#getreadings) | - -{{% /tab %}} -{{< /tabs >}} +{{< readfile 
"/static/include/data/capture-supported.md" >}} ## View captured data diff --git a/docs/data-ai/reference/ml-model-client.md b/docs/data-ai/reference/ml-model-client.md index bd68278b3b..208e700686 100644 --- a/docs/data-ai/reference/ml-model-client.md +++ b/docs/data-ai/reference/ml-model-client.md @@ -4,5 +4,5 @@ linkTitle: "ML Model API" weight: 30 type: "docs" layout: "empty" -canonical: "/dev/apis/data-client/" +canonical: "/dev/reference/apis/services/ml/" --- diff --git a/docs/data-ai/reference/ml-training-client.md b/docs/data-ai/reference/ml-training-client.md index 4cb9487449..ca72b6850c 100644 --- a/docs/data-ai/reference/ml-training-client.md +++ b/docs/data-ai/reference/ml-training-client.md @@ -4,5 +4,5 @@ linkTitle: "ML Training Client API" weight: 40 type: "docs" layout: "empty" -canonical: "/dev/apis/ml-training-client/" +canonical: "/dev/reference/apis/services/ml/" --- diff --git a/docs/data-ai/reference/ml.md b/docs/data-ai/reference/ml.md deleted file mode 100644 index 5da7d8985b..0000000000 --- a/docs/data-ai/reference/ml.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: "ML Model Service" -linkTitle: "ML Model Service" -weight: 11 -type: "docs" -tags: ["data management", "ml", "model training"] -aliases: - - /manage/data/deploy-model/ - - /services/ml/ - - /ml/deploy/ - - /services/ml/deploy/ - - /manage/ml/ - - /ml/ -description: "Deploy machine learning models to a machine and use the vision service to detect or classify images or to create point clouds of identified objects." -modulescript: true -hide_children: true -icon: true -no_list: true -images: ["/services/icons/ml.svg"] -date: "2024-09-03" -# updated: "" # When the content was last entirely checked -# SME: Aaron Casas ---- - -Machine learning (ML) provides your machines with the ability to adjust their behavior based on models that recognize patterns or make predictions. - -Common use cases include: - -- Object detection, which enables machines to detect people, animals, plants, or other objects with bounding boxes, and to perform actions when they are detected. -- Object classification, which enables machines to separate people, animals, plants, or other objects into predefined categories based on their characteristics, and to perform different actions based on the classes of objects. -- Speech recognition, natural language processing, and speech synthesis, which enable machines to verbally communicate with us. - -The Machine Learning (ML) model service allows you to deploy [machine learning models](/registry/ml-models/) to your machine. -The service works with models trained inside and outside the Viam app: - -- You can [train](/how-tos/train-deploy-ml/) models on data from your machines. -- You can upload externally trained models on the [**MODELS** tab](https://app.viam.com/data/models) in the **DATA** section of the Viam app. -- You can use [ML models](https://app.viam.com/registry?type=ML+Model) from the [Viam Registry](https://app.viam.com/registry). -- You can use a [model](/registry/ml-models/) trained outside the Viam platform whose files are on your machine. - -## Configuration - -You must deploy an ML model service to use machine learning models on your machines. -Once you have deployed the ML model service, you can select an [ML model](#machine-learning-models-from-registry). - -After deploying your model, you need to configure an additional service to use the deployed model. 
-For example, you can configure an [`mlmodel` vision service](/services/vision/) to visualize the predictions your model makes. -For other use cases, consider [creating custom functionality with a module](/how-tos/create-module/). - -{{}} - -{{< alert title="Add support for other models" color="tip" >}} -ML models must be designed in particular shapes to work with the `mlmodel` [classification](/services/vision/mlmodel/) or [detection](/services/vision/mlmodel/) model of Viam's [vision service](/services/vision/). -See [ML Model Design](/registry/advanced/mlmodel-design/) to design a modular ML model service with models that work with vision. -{{< /alert >}} - -{{< alert title="Note" color="note" >}} -For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/) for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU. -{{< /alert >}} - -## Machine learning models from registry - -You can search the machine learning models that are available to deploy on this service from the registry here: - -{{}} - -## API - -The [ML model service API](/dev/reference/apis/services/ml/) supports the following methods: - -{{< readfile "/static/include/services/apis/generated/mlmodel-table.md" >}} - -## Next steps - -The ML model service only runs your model on the machine. -To use the inferences from the model, you must use an additional service such as a [vision service](/services/vision/): - -{{< cards >}} -{{% manualcard link="/services/vision/mlmodel/" title="Create a visual detector or classifier" noimage="True" %}} - -Use your model deployed with the ML model service by adding a vision service that can provide detections or classifications depending on your ML model. - -{{% /manualcard %}} -{{% card link="/how-tos/train-deploy-ml/" noimage="True" %}} -{{% card link="/how-tos/detect-people/" customTitle="Detect people" noimage="true" %}} - -{{< /cards >}} diff --git a/docs/data-ai/reference/vision-client.md b/docs/data-ai/reference/vision-client.md index 469b4d6c13..f966b092ea 100644 --- a/docs/data-ai/reference/vision-client.md +++ b/docs/data-ai/reference/vision-client.md @@ -4,5 +4,5 @@ linkTitle: "Vision Service API" weight: 30 type: "docs" layout: "empty" -canonical: "/dev/apis/vision-client/" +canonical: "/dev/reference/apis/services/vision/" --- diff --git a/layouts/404.html b/layouts/404.html index 8c1fd3050e..e38cd1e380 100644 --- a/layouts/404.html +++ b/layouts/404.html @@ -10,7 +10,7 @@

But maybe these could be helpful?

{{ partial "card.html" (dict "link" "/platform/" "class" "" "customTitle" "" "customDescription" "Get an overview of the Viam platform and how you can configure, program, operate, manage, and collect data from your machines." "customCanonicalLink" "" ) }} {{ partial "card.html" (dict "link" "/how-tos/drive-rover/" "class" "" "customTitle" "" "customDescription" "" "customCanonicalLink" "" ) }} {{ partial "card.html" (dict "link" "/installation/viam-server-setup/" "class" "" "customTitle" "" "customDescription" "" "customCanonicalLink" "" ) }} - {{ partial "card.html" (dict "link" "/configure/" "class" "" "customTitle" "" "customDescription" "" "customCanonicalLink" "" ) }} + {{ partial "card.html" (dict "link" "/operate/" "class" "" "customTitle" "" "customDescription" "" "customCanonicalLink" "" ) }} {{ partial "card.html" (dict "link" "/sdks/" "class" "" "customTitle" "" "customDescription" "" "customCanonicalLink" "" ) }} diff --git a/static/include/data/capture-supported.md b/static/include/data/capture-supported.md new file mode 100644 index 0000000000..5eba3f1fa6 --- /dev/null +++ b/static/include/data/capture-supported.md @@ -0,0 +1,29 @@ + +{{< tabs >}} +{{% tab name="viam-server" %}} + + +| Type | Method | +| ----------------------------------------------- | ------ | +| [Arm](/components/arm/) | `EndPosition`, `JointPositions` | +| [Board](/components/board/) | `Analogs`, `Gpios` | +| [Camera](/components/camera/) | `GetImages`, `ReadImage`, `NextPointCloud` | +| [Encoder](/components/encoder/) | `TicksCount` | +| [Gantry](/components/gantry/) | `Lengths`, `Position` | +| [Motor](/components/motor/) | `Position`, `IsPowered` | +| [Movement sensor](/components/movement-sensor/) | `AngularVelocity`, `CompassHeading`, `LinearAcceleration`, `LinearVelocity`, `Orientation`, `Position` | +| [Sensor](/components/sensor/) | `Readings` | +| [Servo](/components/servo/) | `Position` | +| [Vision service](/services/vision/) | `CaptureAllFromCamera` | + +{{% /tab %}} +{{% tab name="viam-micro-server" %}} + + +| Type | Method | +| ---- | ------ | +| [Movement Sensor](/components/movement-sensor/) | [`AngularVelocity`](/appendix/apis/components/movement-sensor/#getangularvelocity), [`LinearAcceleration`](/appendix/apis/components/movement-sensor/#getlinearacceleration), [`LinearVelocity`](/appendix/apis/components/movement-sensor/#getlinearvelocity) | +| [Sensor](/components/sensor/) | [`GetReadings`](/appendix/apis/components/sensor/#getreadings) | + +{{% /tab %}} +{{< /tabs >}}